Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Dart in Google Summer of Code 2026

1 Share

We’re excited to announce that the Dart project will mentor contributors for the seventh time in Google Summer of Code 2026!

The Google Summer of Code (GSoC) program focuses on bringing student developers from around the world into open source software development. Google sponsors students to work with a mentoring open source organization on a 12-week (or longer) programming project during the summer. Over the past 21 years, more than 22,000 contributors have participated in Google Summer of Code.

Are you interested?

To get started now, read the list of project ideas to find a match for your skills and interests. Formal applications must be submitted before March 31st. We encourage prospective applicants to submit early drafts and ask for feedback.

The Dart team expects to have enough mentors to accept only a small number of applications, so we encourage you to review other mentoring organizations as well.

If you have questions specific to Dart and GSoC, ask them on our dedicated mailing list.

To learn more about Google Summer of Code, watch the following video or read the contributor guide for Google Summer of Code.

We look forward to hearing from you!


Dart in Google Summer of Code 2026 was originally published in Dart on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Elevating AI-assisted Android development and improving LLMs with Android Bench

1 Share

Posted by Matthew McCullough, VP of Product Management, Android Developer


We want to make it faster and easier for you to build high-quality Android apps, and one way we’re helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we’ve been measuring how LLMs perform Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development.


Our goal is to provide model creators with a benchmark for evaluating LLM capabilities in Android development. By establishing a clear, reliable baseline for what high-quality Android development looks like, we’re helping model creators identify gaps and accelerate improvements. That gives developers a wider range of capable models to choose from for AI assistance, and ultimately leads to higher-quality apps across the Android ecosystem.


Designed with real-world Android development tasks

We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few.


Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model’s ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day. 


We validated this methodology with several LLM makers, including JetBrains.


“Measuring AI’s impact on Android is a massive challenge, so it’s great to see a framework that’s this sound and realistic. While we’re active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now.”

- Kirill Smelov, Head of AI Integrations at JetBrains.


The first Android Bench results

For this initial release, we wanted to measure pure model performance rather than agentic or tool use. The models successfully completed 16-72% of the tasks. This wide range demonstrates that some LLMs already have a strong baseline of Android knowledge, while others have more room for improvement. Regardless of where the models are now, we anticipate continued improvement as we encourage LLM makers to enhance their models for Android development.


The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance for your Android projects by using API keys in the latest stable version of Android Studio.



Providing developers and LLM makers with transparency

We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub.


One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training on the dataset.


Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark—for example, growing the quantity and complexity of tasks.


We’re looking forward to how Android Bench can improve AI assistance long-term. Our vision is to close the gap between concept and quality code. We're building the foundation for a future where no matter what you imagine, you can build it on Android. 



What's New in Uno Platform Hot Design


20 apps in 20 days with Flutter and Antigravity


Why I stopped planning and started building

Dash having fun building Flutter apps.

Hi, I’m Kevin Lamenzo! I recently joined the Dart and Flutter teams (🎉), and this January I set out on a quest: build 20 apps in 20 days using Flutter and Antigravity. But why? First, the cost of curiosity has officially hit zero. You can go from idea to app in 10 minutes. Second, as a new member of the team I wanted to learn about the framework by building something. Last, and most importantly, I wanted to test the theory that in 2026, we all can be builders now.

I’m here to share what I learned from the trenches. If you take only one thing from this post, let it be this: stop reading, and go build something.

The spark

My first app was a health tracker.

During a recent check-up, my physician “strongly suggested” that I watch my blood pressure and alcohol intake. My first reaction? “Don’t tell me what to do!” My second reaction? “I’m gonna build my own app for that.” No subscription offers, no data harvesting, no gamified onboarding. Just a simple tool that solved my specific problem.

My first app — a personal health tracker.

Why Flutter?

I’m always on the go. I initially built a web app, which is nice, but I needed this tool to be in my pocket. Flutter makes the transition from web app to mobile app feel like magic. Antigravity barely needed to change the code.

After building the mobile version, I even opened a Google Play Developer account ($25), and released the app to myself as a tester. Now my creation is live on my own phone (although I still don’t have a logo or icon).

The health tracker app, with a default icon, on my phone.

Tearing through ideas

The success of building the health tracker was intoxicating. By the end of the first week, I had four more apps on my phone and I had launched an internal blog (I called it “App a Day”) to document my journey and share the messiness with my colleagues.

This felt like more than vibecoding — I was building. I was exploring my phone’s sensors, working with haptics, and even using APIs.

My internal blog landing page — App a Day

Hitting a wall trying to scale

Not everything was rosy. When I tried to take one of my quick idea apps and add to it every day of the week, I hit a wall.

Large-scale apps are possible, but you need a different mindset. You have to lean into the architecture. You likely have to ask the agent 100 follow-up questions. The more knowledge you bring to this step of the flow, the better your experience will be. This is your excuse to go learn “traditional” development.

However, the beauty of the “App a Day” mentality is that you don’t always have to scale. Small apps are fast to build, helpful, and you can wake up tomorrow and move to a completely new one.

This leads to my “AI thought leadership soapbox moment”:

Right now we’re all empowered to do amazing things alone, but the next frontier is collaboration. How we use our AI superpowers to work together in new ways is uncharted territory. So, if you go out and “vibe build” an app with your friends or a team, chart your journey and share it with the rest of us.
Dash making a point on his soapbox.

My recommended flow

I got some questions from colleagues and friends about how I was staying organized. Here’s my flow to building your own app:

  1. Get organized (Google Docs) - I haven’t gone fully AI yet. I need something static, something familiar, a place to park my ideas. I started every app, large or small, with a Google doc. I added the date and my quick notes. This gave me a place to return to as needed.
  2. Refine my ideas (Gemini) - This step is becoming a classic “AI hack”: don’t write your own prompts; write down your raw thoughts and ask an LLM to create a golden prompt for you.
  3. Build! (Antigravity) - Google’s new AI enabled IDE is a workhorse. By comparison, if you ask any of the popular LLMs (Gemini, ChatGPT, etc.) to “write me a book”, they can only output so much. When you ask Antigravity a complex task like this, first it makes a plan, then it works sequentially through each task in that plan. You guide it along the way. I put all of my golden prompts into Antigravity and guided it to build them into apps.
  4. Test, iterate, repeat (also Antigravity) - After Antigravity finishes your first build, it’s time to get hands on. Run the software. Try it out. Provide your feedback to Antigravity, and ask it to make the changes you need. Don’t know how to run a Flutter app? Just ask the Agent in Antigravity.
  5. (optional) Deploy (Google Play, Firebase) - I’ve already mentioned getting my apps on my phone. Another great avenue for sharing is Firebase. Not sure where to start? Ask Gemini to guide you. Firebase makes hosting your apps and adding more advanced services (like authentication, for example) much easier.

Breaking out of my internal blog

The energy I got from this challenge eventually helped me explode right out of it. Instead of an internal blog, I went and launched my own personal site. Here I’m working outside the artificial bounds of building one small thing a day. Instead, I’m working on things big and small, grabbing ideas as they come, and moving on without hesitation.

Come find me at ladevzo.com/scrappy-path if you want to see how I’m applying these lessons outside the corporate firewall.

A screenshot of ladevzo.com. Check it out!

Conclusion: find your own flow

With that said, let me conclude with this: you don’t need to build 20 apps in 20 days. Start small. Use Gemini for brainstorming. Use Antigravity to build a prototype. Before you get bogged down mastering the code, focus on transitioning from “idea” to “working reality”.

The tools are ready. Are you?

Appendix: list of apps

  1. Synonym slider (1 file, 207 LOC) — A simple app to provide the user with a list of synonyms for a given word. Demonstrates basic HTTP integration.
  2. Sensor app (1 file, 228 LOC) — Interacts with device hardware using the sensors plus package.
  3. Sino shift (1 file, 230 LOC) — A language experiment to translate English phrases and sentences into a Chinese-language-style ideograph format. (Chinese languages use far fewer tokens when interacting with LLMs by virtue of their syntax.)
  4. Vip badge (1 file, 273 LOC) — Implements authentication through Firebase and Google Sign-In.
  5. My apps widget (4 files, 305 LOC) — An Android homescreen “container” to hold all of my apps. Utilizes Android Intents to create a home screen widget.
  6. Haptic soundboard (2 files, 311 LOC) — A soundboard application featuring audio playback capabilities.
  7. Rusty-haiku (4 files, 403 LOC) — A simple haiku generator. Demonstrates API usage and state management with Riverpod.
  8. Thought spot (5 files, 481 LOC) — Place a marker on a map and capture (by voice input) a thought. Features speech-to-text, maps, geolocation, and local databases.
  9. Jolt (6 files, 494 LOC) — Focuses on local storage with shared preferences and custom UI fonts.
  10. Accept changes (2 files, 547 LOC) — An attempt to bounce Antigravity’s push notifications to my phone. Provides device feedback via notification and connects to a Realtime Database.
  11. Street slueth (9 files, 574 LOC) — A spin on GeoGuessr: a collaborative murder mystery game where you use Google Maps to find clues and solve a puzzle. A map-based application with user authentication and Google Street View integration.
  12. Learn witt (7 files, 588 LOC) — A learning app for understanding Wittgenstein. Focused on user interface design using custom fonts.
  13. Parcheesi game (5 files, 625 LOC) — A Parcheesi remake. This was an attempt at “one-shot prompting” with Antigravity. Manages complex logic and state using Riverpod and Equatable.
  14. Health tracker (6 files, 641 LOC) — Personal health tracking app for monitoring daily blood pressure and alcohol intake. Manages local state and utilizes the file system for storage.
  15. Magic octo (6 files, 692 LOC) — My take on the magic 8-ball.
  16. Meeting helper (5 files, 778 LOC) — Productivity tool using Firebase authentication and Cloud Firestore.
  17. Haircut log (6 files, 901 LOC) — A Google Photos integration that helps hair salons manage user haircut photos leveraging the Google Photos Library API.
  18. Wwks (6 files, 1121 LOC) — What would Kevin say? My personalized, AI-enabled, chat interface. Combines Google Generative AI with a full Firebase backend.
  19. Human speed (15 files, 1124 LOC) — A personal thinking tool. Allows you to self-manage context across LLM threads. Full-stack AI application structured with Riverpod, GoRouter, and Firebase.
  20. Math facts AI (11 files, 1520 LOC) — Educational tool leveraging generative AI to teach math facts.
  21. Workout buddy (11 files, 1582 LOC) — Workout tracker, featuring Riverpod, Cloud Functions, and Freezed code generation.
  22. Pulse (14 files, 1928 LOC) — Large-scale project integrating generative AI, robust state management, and code generation.
Dash getting fit on the treadmill.

20 apps in 20 days with Flutter and Antigravity was originally published in Flutter on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to Protect Sensitive Data by Running LLMs Locally with Ollama


Whenever engineers build AI-powered applications, the handling of sensitive data is a top concern. You don't want to send users' data to an external API that you don't control.

For me, this happened when I was building FinanceGPT, which is my personal open-source project that helps me with my finances. This application lets you upload your bank statements, tax forms like 1099s, and so on, and then you can ask questions in plain English like, "How much did I spend on groceries this month?" or "What was my effective tax rate last year?"

The problem is that answering these questions means sending all that sensitive transaction history, W-2s, and income data to OpenAI, Anthropic, or Google, which I was not comfortable with. Even after redacting PII from the documents, I still wasn't happy with the trade-off.

This is where Ollama comes in. Ollama lets you run large language models entirely on your own laptop. You don't need any API keys or cloud infrastructure and no data leaves your machine.

In this tutorial, I will walk you through what Ollama is, how to get started with it, and how to use it in a real Python application so that users of the application can choose to keep their data completely local.


Prerequisites

You will need the following at a minimum:

  • Python 3.10+

  • A machine with at least 8GB of RAM (16GB recommended for larger models)

  • Basic familiarity with Python and pip

What is Ollama?

Ollama is an open-source tool that makes running LLMs locally very easy. You can think of it as Docker, but for AI models. You can pull models with a single command, and Ollama handles everything else: downloading the weights, managing memory, and serving the model through a local REST API.

The local REST API is compatible with OpenAI's API format, which means any application that can talk to OpenAI can switch to Ollama with almost no code changes.

Installation

First, download the installer from ollama.com. Once it's installed, you can verify it is running:

ollama --version

The above command checks whether Ollama was installed correctly and prints the current version.

Pull and Run Your First Model

Ollama hosts a variety of models on ollama.com/library. To pull and immediately chat with one, just do:

ollama run llama3.2

This command downloads the model from the Ollama registry and starts an interactive chat session with it. Note: models are typically a few GB in size, depending on which one you download. Alternatively, if you only want to download a model without starting a chat:

ollama pull mistral

This downloads a model to your machine without starting a chat session which is useful when you want to set up models in advance.

You can run the following command to list the models you have installed:

ollama list

This shows all models you've downloaded locally along with their sizes.

I have used the following models and they have worked great for specific tasks:

Model            Size   Good For
llama3.2         ~2GB   Fast, general purpose
mistral          ~4GB   Strong instruction following
qwen2.5:7b       ~4GB   Multilingual, reasoning
deepseek-r1:7b   ~4GB   Complex reasoning tasks

How Ollama's API works

Once Ollama is running, it will be served on localhost:11434. You can call it directly using curl:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{ "role": "user", "content": "What is compound interest?" }],
  "stream": false
}'

This sends a chat message directly to Ollama's REST API from the command line, with streaming disabled so you get the full response at once. The endpoint above is Ollama's native chat API. The more useful endpoint is http://localhost:11434/v1, which is OpenAI-compatible. This compatibility is the key feature that makes Ollama easy to drop into existing apps built against OpenAI or other LLM providers.
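If you prefer not to pull in any SDK at all, the same call can be made with nothing but the Python standard library. This is a minimal sketch against the native /api/chat endpoint from the curl example, assuming Ollama is listening on its default port; the line that actually sends the request is commented out so the snippet works even without a server running.

```python
import json
import urllib.request

# Build the same payload as the curl example above.
payload = json.dumps({
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "What is compound interest?"}],
    "stream": False,  # return one complete response instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With Ollama running, uncomment to send the request and read the reply:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
print(req.full_url)
```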

How to Call Ollama from Python

How to Use the Ollama Python Library

Ollama has its own Python library that is pretty intuitive to use:

pip install ollama

from ollama import chat

response = chat(
    model='llama3.2',
    messages=[
        {'role': 'user', 'content': 'Explain what a Roth IRA is in simple terms.'}
    ]
)

print(response.message.content)

The above code uses Ollama's native Python SDK to send a message and print the model's reply. This is the most straightforward way to call Ollama from Python.

How to Use the OpenAI SDK with Ollama as the Backend

As mentioned earlier, Ollama has an endpoint that is OpenAI compatible, so you can also use the OpenAI Python SDK and just point it to your local server:

pip install openai

from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # Required by the SDK, but ignored by Ollama
)

response = client.chat.completions.create(
    model='llama3.2',
    messages=[
        {'role': 'user', 'content': 'Explain what a Roth IRA is in simple terms.'}
    ]
)

print(response.choices[0].message.content)

This uses the standard OpenAI Python SDK but redirects it to your local Ollama server. The api_key field is required by the SDK but ignored by Ollama. This pattern makes using Ollama seamless for existing applications. The code is nearly identical to what you would write for OpenAI.

How to Integrate Ollama into a LangChain App

Most production applications are built with an orchestration framework like LangChain, which has native Ollama support. This means swapping providers is just a one-line change.

Install the integration:

pip install langchain-ollama

How to Create a Chat Model

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2")

response = llm.invoke("What is the difference between a W-2 and a 1099?")
print(response.content)

This creates a LangChain-compatible chat model backed by a local Ollama model, a one-line swap from ChatOpenAI.

Compare this to the OpenAI version and you will see that the interface is almost identical:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

How to Build an LLM-Provider Agnostic App

The real power comes from abstracting away the LLM provider. Applications like Perplexity let users choose the LLM they want to use for their tasks. Here's a simple factory pattern that returns the right LLM based on the configuration:

from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langchain_anthropic import ChatAnthropic

def get_llm(provider: str, model: str):
    """
    Return the appropriate LangChain LLM based on the provider.
    
    Args:
        provider: One of "openai", "ollama", "anthropic"
        model: The model name (e.g. "gpt-4o", "llama3.2", "claude-3-5-sonnet")
    
    Returns:
        A LangChain chat model ready to use
    """
    if provider == "openai":
        return ChatOpenAI(model=model)
    elif provider == "ollama":
        return ChatOllama(model=model)
    elif provider == "anthropic":
        return ChatAnthropic(model=model)
    else:
        raise ValueError(f"Unknown provider: {provider}")

The above snippet shows a helper that returns the right LangChain model based on a provider string. The rest of your code, including your chains, agents, and tools, never needs to know which provider's LLM is running underneath; you pass llm around and it just works.
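To see why this matters, here is a framework-free sketch of the same idea. The class and function names below are invented for illustration (in the real app they would be the LangChain chat models above); the point is that the application code depends only on a shared invoke interface, so the provider becomes a config string.

```python
# Hypothetical stand-ins for the LangChain chat models, showing the shape
# of the pattern without any external dependencies.
class FakeCloudLLM:
    def invoke(self, prompt: str) -> str:
        return f"[cloud] {prompt}"

class FakeLocalLLM:
    def invoke(self, prompt: str) -> str:
        return f"[local] {prompt}"

def get_llm(provider: str):
    """Return a chat model based on a provider string from config."""
    providers = {"openai": FakeCloudLLM, "ollama": FakeLocalLLM}
    try:
        return providers[provider]()
    except KeyError:
        raise ValueError(f"Unknown provider: {provider}")

# The rest of the app is identical regardless of provider:
def summarize(llm, text: str) -> str:
    return llm.invoke(f"Summarize: {text}")

print(summarize(get_llm("ollama"), "my spending"))  # [local] Summarize: my spending
```

Switching from cloud to local is then a one-word change in configuration, not a code change.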

How to use Ollama with LangGraph

If you're using LangGraph to build agents (as I covered in my previous article on AI agents), plugging in Ollama is equally seamless:

from langgraph.prebuilt import create_react_agent
from langchain_ollama import ChatOllama
from langchain_core.tools import tool

@tool
def get_spending_summary(category: str) -> str:
    """Get total spending for a given category this month."""
    # In a real app, this would query your database
    return f"You spent $342.50 on {category} this month."

llm = ChatOllama(model="llama3.2")

agent = create_react_agent(
    model=llm,
    tools=[get_spending_summary]
)

response = agent.invoke({
    "messages": [{"role": "user", "content": "How much did I spend on groceries?"}]
})

print(response["messages"][-1].content)

This snippet builds a ReAct agent that uses a locally-running model to decide when to call tools while keeping all data on-device even during agentic workflows.

The agent will decide to call the get_spending_summary tool when needed and get the result using the locally running model instead of sending your data over the internet to OpenAI.

How FinanceGPT Uses This in Practice

FinanceGPT is built to support OpenAI, Anthropic, Google and Ollama as LLM providers. The user sets their preference on the UI or in a config file and the application instantiates the right model using a pattern very similar to the factory pattern above.

When the user chooses Ollama, here's what happens:

  1. Their bank statements and other sensitive documents are parsed locally

  2. Sensitive fields like SSNs are masked before any LLM call

  3. The masked data and the query go to the local Ollama server running on their own machine

  4. The response comes back locally and nothing ever leaves their network
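Step 2 above is easy to sketch. This is not FinanceGPT's actual masking code, just a minimal illustration of regex-based masking for one field type (SSNs) before any text reaches a model:

```python
import re

# Matches SSNs in the common 123-45-6789 format. A real masker would also
# cover account numbers, card numbers, and other PII formats.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssns(text: str) -> str:
    """Replace anything that looks like an SSN before an LLM call."""
    return SSN_PATTERN.sub("***-**-****", text)

statement = "Employee SSN: 123-45-6789, wages: $85,000"
print(mask_ssns(statement))  # Employee SSN: ***-**-****, wages: $85,000
```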

To run FinanceGPT locally with Ollama, the setup looks like this:

# 1. Pull a capable model
ollama pull llama3.2

# 2. Clone and configure FinanceGPT
git clone https://github.com/manojag115/FinanceGPT.git
cd FinanceGPT
cp .env.example .env

# 3. In .env, set your LLM provider to Ollama
# LLM_PROVIDER=ollama
# LLM_MODEL=llama3.2

# 4. Start the full stack
docker compose -f docker-compose.quickstart.yml up -d

With this setup, the entire application including the frontend, backend and LLM, runs on your own hardware.

Tradeoffs to be Aware Of

Ollama is a great local alternative to cloud LLMs, but it comes with its own trade-offs.

Response Quality

The models most people run through Ollama are small (roughly 7B parameters), so they will not match frontier models like GPT-4o on complex reasoning tasks. For simple Q&A and summarization, the results are comparable, but for multi-step reasoning or nuanced judgment calls, the gap is noticeable.

Speed

Inference speed depends on the hardware that is running the model. Without a GPU, the Ollama models can take several seconds to respond. On Apple Silicon (M1/M2/M3), the performance is surprisingly good even without a dedicated GPU.

Hardware Requirements

Small models (7B parameters) need around 8GB of RAM; larger models (13B+) need 16GB or more. If you are building your application for end users, you cannot guarantee their hardware meets these requirements.
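As a rough rule of thumb, weight memory scales with parameter count times bytes per parameter, plus overhead for the KV cache and runtime. This back-of-the-envelope helper (the figures are approximations, not Ollama's actual accounting) shows why a quantized 7B model fits in 8GB while 13B+ models want 16GB:

```python
def approx_weight_gb(params_billion: float, bits_per_param: float = 4.0) -> float:
    """Approximate memory for model weights alone, ignoring KV cache/runtime."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

# A 4-bit quantized 7B model: ~3.5 GB of weights, comfortable in 8 GB of RAM.
print(f"7B @ 4-bit:  ~{approx_weight_gb(7):.1f} GB")
# A 13B model at the same quantization: ~6.5 GB, wanting 16 GB for headroom.
print(f"13B @ 4-bit: ~{approx_weight_gb(13):.1f} GB")
```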

Tool Use and Function Calling

Not all local models support function calling reliably. If your agent depends heavily on tool use, test your chosen model carefully. Models like qwen2.5 and mistral generally handle this better than others.

The right mental model: use cloud models when you need maximum capability, and local models when privacy or cost constraints make cloud models impractical.

Conclusion

In this tutorial, you learned what Ollama is, how to install it and pull models, and three different ways to call it from Python: the native Ollama library, the OpenAI-compatible SDK, and LangChain. You also saw how to build a provider-agnostic factory pattern so your app can switch between cloud and local models with a single config change.

Ollama makes local LLMs genuinely practical for production apps. The OpenAI-compatible API means integration is nearly zero-friction, and LangChain's native support means you can build provider-agnostic apps from the start.

The finance domain is an obvious fit — but the same principle applies anywhere sensitive data is involved: healthcare, legal tech, HR, personal productivity. If your app processes data that users wouldn't want stored on someone else's server, giving them a local option isn't just a nice-to-have. It's a trust feature.

Check Out FinanceGPT

All the code examples here came from FinanceGPT. If you want to see these patterns in a complete app, poke around the repo. It's got document processing, portfolio tracking, tax optimization – all built with LangGraph.

If you find this helpful, give the project a star on GitHub – it helps other developers discover it.

Resources




Announcing Rust 1.94.0


The Rust team is happy to announce a new version of Rust, 1.94.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.94.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.94.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.94.0 stable

Array windows

Rust 1.94 adds array_windows, an iterating method for slices. It works just like windows but with a constant length, so the iterator items are &[T; N] rather than dynamically-sized &[T]. In many cases, the window length may even be inferred by how the iterator is used!

For example, part of one 2016 Advent of Code puzzle is looking for ABBA patterns: "two different characters followed by the reverse of that pair, such as xyyx or abba." If we assume only ASCII characters, that could be written by sweeping windows of the byte slice like this:

fn has_abba(s: &str) -> bool {
    s.as_bytes()
        .array_windows()
        .any(|[a1, b1, b2, a2]| (a1 != b1) && (a1 == a2) && (b1 == b2))
}

The destructuring argument pattern in that closure lets the compiler infer that we want windows of 4 here. If we had used the older .windows(4) iterator, then that argument would be a slice which we would have to index manually, hoping that runtime bounds-checking will be optimized away.

Cargo config inclusion

Cargo now supports the include key in configuration files (.cargo/config.toml), enabling better organization, sharing, and management of Cargo configurations across projects and environments. These include paths may also be marked optional if they might not be present in some circumstances, e.g. depending on local developer choices.

# array of paths
include = [
    "frodo.toml",
    "samwise.toml",
]

# inline tables for more control
include = [
    { path = "required.toml" },
    { path = "optional.toml", optional = true },
]

See the full include documentation for more details.

TOML 1.1 support in Cargo

Cargo now parses TOML v1.1 for manifests and configuration files. See the TOML release notes for detailed changes, including:

  • Inline tables across multiple lines and with trailing commas
  • \xHH and \e string escape characters
  • Optional seconds in times (sets to 0)

For example, a dependency like this:

serde = { version = "1.0", features = ["derive"] }

... can now be written like this:

serde = {
    version = "1.0",
    features = ["derive"],
}

Note that using these features in Cargo.toml will raise your development MSRV (minimum supported Rust version) to require this new Cargo parser, and third-party tools that read the manifest may also need to update their parsers. However, Cargo automatically rewrites manifests on publish to remain compatible with older parsers, so it is still possible to support an earlier MSRV for your crate's users.

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.94.0

Many people came together to create Rust 1.94.0. We couldn't have done it without all of you. Thanks!
