
A developer’s guide to Antigravity and Gemini 3


Google recently took the developer world by surprise with the release of Gemini 3, their newest flagship AI model, and Antigravity, their agent-first AI-powered IDE. The announcement sparked immediate curiosity and discussion across the internet, as many developers wanted to understand how Google’s new AI editor is different from the ones they’re already familiar with and how it might influence the way software is built and maintained.


One of the most interesting parts of the announcement is how tightly Antigravity is connected to Gemini 3. Google is treating these releases as parts of a unified platform, with the IDE serving as a showcase for what the new model can do. It also signals that Google wants to compete directly with the top AI coding tools on the market today, and if they execute well on this integration, there’s a real possibility they could overtake them.

In this article, we’ll explore both releases from a developer’s perspective. We’ll briefly review what’s new in Gemini 3 and, more importantly, examine how Antigravity reshapes the development workflow. You’ll learn how it works, how to use it, its major features and current drawbacks, and how it differs from other AI coding tools like Cursor and GitHub Copilot.

What’s new in Gemini 3

Gemini 3 is a major advancement for Google’s AI ecosystem and for the LLM landscape as a whole. It builds on Gemini 2.5, retains the one-million-token context window, and introduces several important improvements. Below are some of the most notable ones:

  • Core model upgrades: Gemini 3 delivers stronger logical reasoning, more reliable problem-solving, and better long-context understanding. It also handles multimodal inputs more smoothly, so it can work with text, code, images, audio, and video in a more consistent way than earlier versions.
  • Generative interface: One of the most talked-about additions in Gemini 3 is its generative interface capability. This feature allows the model to create interactive visual interfaces directly from plain language descriptions. A user might request an explanation of how DNA and RNA interact or a simple simulation of a supply-chain flow, and the model produces a rendered, interactive interface instead of a static response.
  • Nano Banana Pro improvements: Google’s image-generation system also received a major upgrade. It now produces more realistic images and handles embedded text with greater clarity, which has been one of the hardest challenges for image models to solve. It is also more consistent when generating a series of related images and is less prone to hallucinations.

These updates give a clear picture of what’s new in Gemini 3. Now, let’s look at how Google built on these capabilities in its new AI-powered editor, Antigravity.

What is Antigravity?

Antigravity is Google’s new agent-first AI IDE, created to demonstrate how Gemini 3 can use tools and reason through practical development tasks. Like most modern AI IDEs, Antigravity is built on a fork of the open-source Visual Studio Code project, with the environment redesigned to place AI agents at the center of the workflow rather than the text editor itself.

Some months back, Google acquired the Windsurf team, and this acquisition played an important role in Antigravity’s creation. Windsurf was known for its AI-augmented editor and for its early experimentation with agent-based development tools. After joining Google, the team brought much of its experience into the Antigravity project, and parts of its earlier work appear to have influenced the current design. The former Windsurf team now leads the engineering and product direction for Antigravity, guiding its shift toward a more agent-centered development experience.

Antigravity introduces a development environment that gives AI agents a central role in creating and managing projects. In addition to helping you generate code, it also helps you plan, execute, and monitor AI-driven actions across an entire project. Let’s look at some of its features below.

Customizable agentic workflow

Antigravity provides multiple agentic workflows that let you choose how much autonomy to give its AI coding agent. For example, you can grant complete autonomy, which gives the agent full control over actions like creating or deleting files or running terminal commands without asking for confirmation.

You can also opt for partial autonomy, where the agent handles code generation and routine commands but asks for your input before performing critical actions such as file deletion.

Dedicated agent manager

Antigravity includes an agent manager that allows you to run multiple agents simultaneously. For example, you can run Agent A to add dark mode to your website while Agent B adds a feature to an existing codebase or builds an entirely separate app in parallel.

The manager provides a clear overview of what each agent is working on and how far along they are in their assigned tasks. Additionally, agents can interact with your web browser to automatically preview and run your application, verify whether the output meets your requirements, and iterate accordingly.

Nano Banana integration

Another interesting feature in Antigravity is its integration with Google’s Nano Banana generative image model. It can generate images on demand and use them directly in your project. This capability isn’t available out of the box in other AI IDEs and typically requires workarounds, such as connecting the IDE to an MCP server.

Familiar AI-IDE features

Antigravity also includes the core features you’d expect from modern AI text editors, such as:

  • Tab auto-complete and chat in code: Accept AI code suggestions with a single press of the Tab key, and chat with the AI directly from your code.
  • MCP integration: The IDE supports MCP integration, which allows your editor to securely connect to your local tools and external services such as GitHub, Heroku, and Netlify.
  • Multi-model selection: In addition to Gemini 3, Antigravity supports other popular models, such as Claude Sonnet and OpenAI’s open-weight GPT-OSS model, with more models to be added soon.

Now that we’ve explored its features, let’s look at how you can get started using Antigravity.

How to use Antigravity

To use Antigravity, you’ll first need to download it from its official website. Once it’s downloaded and installed, you’ll also need to sign in with your Google account.

After signing in, you should immediately see the familiar VS Code interface. And if you’ve used other AI IDEs like Cursor, Windsurf, or GitHub Copilot, the rest of the interface should be easy to navigate as well.

To start a new project or update an existing one, use the conversation box in the right sidebar and describe what you want to build. You can also choose between two modes: Planning and Fast.

In Fast mode, the agent carries out the requested task immediately. In Planning mode, however, the agent first creates an implementation plan document, then completes the task based on that plan, and, when it’s done, generates an overview walkthrough document describing the changes it made.

Hands-on experience

To put Antigravity to the test, I gave it a vague prompt and asked it to “create a picture puzzle app” using Planning mode. It began by generating an implementation plan document for my review and then proceeded with the task. While the agent is working, you can add comments directly to the plan to guide its decisions, and it will incorporate those adjustments as it continues generating.

The Nano Banana integration also came into play here. Since a puzzle app needs an image to start with, Antigravity inferred this automatically and generated a default picture using Google’s Nano Banana model. This was not something I explicitly asked for; it recognized the requirement from the prompt and handled it on its own.

Once it finished building the app, Antigravity generated a walkthrough document summarizing what it had done, the features it implemented, and recommended next steps to improve or extend the project.

Overall, the workflow feels similar to other AI IDEs, such as Cursor and GitHub Copilot, but the built-in image generation really stands out as an integrated experience.

Notable drawback

Antigravity is still in its early stages, and you might notice certain latency issues or the software becoming unresponsive after heavy usage. However, these shortcomings are expected at this early stage, and the team will likely address them soon.

How Antigravity compares with Cursor and GitHub Copilot

The AI-IDE landscape is getting increasingly competitive, and most developers naturally compare new tools to Cursor and GitHub Copilot, the two most widely adopted options today. Here’s how Antigravity stacks up against them:

Category | Antigravity | Cursor | GitHub Copilot
--- | --- | --- | ---
Core Philosophy | Agent-first IDE designed around autonomous workflows | AI-enhanced editor; strong planning tools | Autocomplete-first assistant integrated into VS Code
Agent Support | Full and partial autonomy, multi-agent manager, browser interaction | Limited agents; no dedicated manager | Minimal agentic behavior
Model Support | Gemini 3 (default), Claude Sonnet, and GPT-OSS | Multi-model: OpenAI, Anthropic, Google | Multi-model: OpenAI, Anthropic, Google, Grok
Unique Features | Built-in Nano Banana image generation, artifact tracking, browser-driven validation | Mature UX, fast execution, workspace-level edits | Best-in-class autocomplete, deep GitHub integration
MCP Integration | Yes | Yes | Partial

Antigravity is still in its early stages, but with its agent-centric workflow and deep Gemini 3 integration, it enters the AI-IDE space as a serious competitor.

Wrapping up

Right now, Cursor is still the one everyone looks at when they think of the top AI IDE. It’s fast and reliable; however, its pricing model is a frequent point of frustration among users, and that continues to be a major drawback.

Antigravity enters this space as a credible alternative. It has most of the same core features as Cursor and is powered by Gemini 3, a state-of-the-art model. On top of that, Google has something Cursor doesn’t. They have the compute and the resources to make things better and way cheaper if they decide to. If they iron out the early-stage issues, Antigravity could easily give Cursor a real run for its money.

That said, the market is big enough for everyone. More options just mean better tools, faster improvements, and more ways to build software the way you want. And that’s a win for all of us.

Thanks for reading!

The post A developer’s guide to Antigravity and Gemini 3 appeared first on LogRocket Blog.


Stop using JavaScript to solve CSS problems


A knowledge gap pushes people toward over-engineering, and sooner or later, it shows up in performance.


Take content-visibility: auto. It does what react-window does with zero JavaScript and zero bundle weight. Same story with the modern viewport units (dvh, svh, lvh): they solve the mobile height mess we’ve been patching with window.innerHeight hacks for years.
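
For the viewport-unit fix, here’s a minimal sketch (the .app-shell class name is illustrative):

/* Old hack: JavaScript reads window.innerHeight on resize and writes it
   into an inline style or custom property. New approach: let the browser
   track the real viewport. */
.app-shell {
  height: 100vh;  /* fallback for older browsers */
  height: 100dvh; /* dynamic viewport height: follows mobile browser chrome */
}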

Both features cleared 90 percent global support in 2024. Both are ready for production today. Yet we keep defaulting to JavaScript because CSS evolved while we were all arguing about React Server Components.

This article closes that gap. We’ll look at benchmarks, show migration paths, and be honest about when JavaScript still wins. But before anything else, let’s state the obvious: if you’re reaching for useEffect and useState to fix a rendering issue, you’re probably barking up the wrong tree.

The React virtualization problem

React developers treat virtualization libraries like react-window and react-virtualized as the default fix for rendering lists. On paper, the logic is solid: if the user only sees 10 items at a time, why bother rendering all 1,000? Virtualization creates a small “window” of visible items and unmounts everything else as you scroll.

The issue isn’t that virtualization is wrong – it’s that we reach for it way too early and way too often. A product grid with 200 items? react-window. A blog feed with 50 posts? react-virtualized.

We’ve built a kind of cargo cult around list performance. Instead of checking whether the browser can handle the work natively, we jump straight into wrapping everything in useMemo and useCallback and call it “optimized.”

Here’s what a minimal react-virtualized setup actually looks like:

import { List } from 'react-virtualized';
import { memo, useCallback } from 'react';

const ProductCard = memo(({ product, style }) => {
  return (
    <div style={style} className="product-card">
      <img src={product.image} alt={product.name} />
      <h3>{product.name}</h3>
      <p>{product.price}</p>
      <p>{product.description}</p>
    </div>
  );
});

function ProductGrid({ products }) {
  // Memoize the row renderer to prevent unnecessary re-renders
  const rowRenderer = useCallback(
    ({ index, key, style }) => {
      const product = products[index];
      return <ProductCard key={key} product={product} style={style} />;
    },
    [products]
  );

  return (
    <List
      width={800}
      height={600}
      rowCount={products.length}
      rowHeight={300}
      rowRenderer={rowRenderer}
    />
  );
}

This works fine. It’s roughly 50 lines of code, adds about 15KB to your bundle, and requires you to set up item heights and container dimensions. Pretty standard stuff.

But React developers rarely stop there. We’ve all been trained to chase re-render optimizations, so we start wrapping everything in memoization and callbacks:

import { List } from 'react-virtualized';
import { memo, useCallback, useMemo } from 'react';

const ProductCard = memo(({ product, style }) => {
  return (
    <div style={style} className="product-card">
      <img src={product.image} alt={product.name} />
      <h3>{product.name}</h3>
      <p>{product.price}</p>
      <p>{product.description}</p>
    </div>
  );
});

function ProductGrid({ products }) {
  const rowCount = products.length;


  // Memoize the row renderer to prevent unnecessary re-renders
  const rowRenderer = useCallback(
    ({ index, key, style }) => {
      const product = products[index];
      return <ProductCard key={key} product={product} style={style} />;
    },
    [products]
  );

  // Memoize row height calculation
  const rowHeight = useMemo(() => 300, []);

  return (
    <List
      width={800}
      height={600}
      rowCount={rowCount}
      rowHeight={rowHeight}
      rowRenderer={rowRenderer}
    />
  );
}

Look at that useMemo(() => 300, []). We’re memoizing a constant. We wrapped the component in memo() to avoid re-renders that probably weren’t happening in the first place. We tossed in useCallback for a function react-virtualized already optimizes internally.

We’re doing all of this because we think we’re supposed to, not because we actually measured a problem. And while we were busy shaving off hypothetical re-renders, CSS quietly shipped a native solution.

It’s called content-visibility. It tells the browser to skip rendering off-screen content. Same idea as virtualization, except the browser handles it for you – no JavaScript, no scroll math, no item height configuration.

The question isn’t whether virtualization works. It does. The question is whether your list actually needs it. Most React apps deal with lists in the hundreds, not the tens of thousands. For those cases, content-visibility gets you about 90 percent of the benefit with a fraction of the complexity.

Here’s a quick overview of what content-visibility does. If you want the full deep dive, check out our guide.

What content-visibility actually does

The content-visibility property has three values: visible, hidden, and auto. Only auto matters for performance.

When you apply content-visibility: auto to an element, the browser skips layout, style, and paint work for that element until it’s close to the viewport. The key word is “close”: the browser starts rendering a bit before the element enters view, so scrolling stays smooth. As soon as it moves out of view again, the browser pauses all that work.

The browser already knows what’s visible. It already has viewport intersection APIs. It already handles scroll performance. content-visibility: auto just gives it permission to skip rendering work.

Applying content-visibility to the same product grid, we get this:

function ProductGrid({ products }) {
  return (
    <div className="product-grid">
      {products.map(product => (
        <div key={product.id} className="product-card">
          <img src={product.image} alt={product.name} />
          <h3>{product.name}</h3>
          <p>{product.price}</p>
          <p>{product.description}</p>
        </div>
      ))}
    </div>
  );
}

CSS:

.product-card {
  content-visibility: auto;
  contain-intrinsic-size: 300px;
}

Two lines. The contain-intrinsic-size property tells the browser how much space to reserve for off-screen content. Without it, the browser assumes those elements have zero height, which throws off the scrollbar. With it, scrolling stays consistent because the browser has a rough idea of the element’s size even when it’s not rendered.
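
If your card heights vary, a useful refinement is the auto keyword, which tells the browser to use your estimate only until an element has rendered once, then remember its actual size. A minimal sketch:

.product-card {
  content-visibility: auto;
  /* 300px is the estimate until the card first renders; after that,
     the browser remembers the real size, reducing scrollbar jumps */
  contain-intrinsic-size: auto 300px;
}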

And this isn’t the only place where CSS quietly took over jobs we used to handle in JavaScript. Another big one: container-based responsive design.

The container query problem

Responsive design taught us to write media queries based on viewport width. Works fine until you put a component in a sidebar. Your card component needs different layouts depending on its container width, not the screen width. A 300px card in a sidebar should look different from a 300px card in the main content area, even though the viewport is the same.

Developers reached straight for JavaScript. We used ResizeObserver to track container widths, toggled classes at different breakpoints, and forced layout updates on every resize. Any component that needed container-aware styling ended up with JavaScript measuring its width and pushing the right styles onto it.

function updateCardLayout() {
  const cards = document.querySelectorAll('.card');
  cards.forEach(card => {
    const width = card.offsetWidth;
    if (width < 300) {
      card.classList.add('card--small');
    } else if (width < 500) {
      card.classList.add('card--medium');  
    } else {
      card.classList.add('card--large');
    }
  });
}

const resizeObserver = new ResizeObserver(updateCardLayout);
document.querySelectorAll('.card').forEach(card => {
  resizeObserver.observe(card);
});

That’s 20+ lines of JavaScript to solve what should be a CSS problem. You’re measuring DOM elements, managing observers, adding event listeners, and maintaining class state. The browser already knows the container width. You’re asking for it in JavaScript instead of letting CSS handle it directly.

CSS container queries shipped in all major browsers in 2023. They let you write layout rules based on a parent container’s size, not the viewport.

.card-container {
  container-type: inline-size;
}

@container (min-width: 300px) {
  .card {
    display: grid;
    grid-template-columns: 1fr 2fr;
  }
}

@container (min-width: 500px) {
  .card {
    grid-template-columns: 1fr 1fr;
  }
}

Three declarations. The browser recalculates container queries natively, the same way it recalculates media queries, without running any of your JavaScript. Your card component responds to its container width automatically.

The container-type: inline-size property tells the browser this element is a container whose children might query its width. Then @container rules work like @media rules, except they check the container’s dimensions instead of the viewport’s.

Browser support is 90%+ as of 2025. Chrome 105+, Safari 16+, Firefox 110+. If you’re still writing ResizeObserver code to handle component-based responsive design, you’re solving yet another problem CSS already solved.
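
For the remaining older browsers, you can feature-detect directly in CSS and keep a simple single-column fallback. A hedged sketch, reusing the card styles from above:

/* Baseline layout for browsers without container query support */
.card {
  display: block;
}

/* Only browsers that understand container-type get the container-aware rules */
@supports (container-type: inline-size) {
  .card-container {
    container-type: inline-size;
  }

  @container (min-width: 300px) {
    .card {
      display: grid;
      grid-template-columns: 1fr 2fr;
    }
  }
}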

The scroll animation problem

Animations that fire when elements enter the viewport have always been a JavaScript job. You want something to fade in as the user scrolls, so you set up an IntersectionObserver, watch for visibility, add a class to trigger the CSS animation, and then unobserve the element to avoid leaks.

const observer = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      entry.target.classList.add('fade-in');
      observer.unobserve(entry.target);
    }
  });
});

document.querySelectorAll('.animate-on-scroll').forEach(el => {
  observer.observe(el);
});
CSS:

.fade-in {
  animation: fadeIn 0.5s ease-in forwards;
}

@keyframes fadeIn {
  from {
    opacity: 0;
    transform: translateY(20px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}

This works. It’s been the standard approach since IntersectionObserver reached broad browser support around 2019. Every parallax effect, fade-in card, and scroll-triggered animation uses this pattern.

The problem is you’re using JavaScript to tell CSS when to run an animation based on scroll position. The browser already tracks scroll position. It already knows when elements enter the viewport. You’re bridging two systems that should talk directly.

CSS scroll-driven animations let you tie animations directly to scroll progress like this:

@keyframes fade-in {
  from {
    opacity: 0;
    transform: translateY(20px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}

.animate-on-scroll {
  animation: fade-in linear both;
  animation-timeline: view();
  animation-range: entry 0% cover 30%;
}

The animation-timeline: view() property ties the animation progress to how much of the element is in view. The animation-range property controls when the animation starts and ends based on scroll position. The browser handles everything.

The animation runs on the compositor thread, not the main thread. IntersectionObserver callbacks run on the main thread. If your JavaScript is busy rendering React components or processing data, IntersectionObserver callbacks get delayed. Scroll-driven animations keep running smoothly because they’re not competing with JavaScript execution.

Browser support hit major milestones in 2024. Chrome 115+ (August 2023), Safari 18+ (September 2024). Firefox is implementing it behind a flag. You’re looking at 75%+ coverage now, which means you can use it with a progressive enhancement approach and have IntersectionObserver as a fallback for older browsers.
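
One way to wire up that fallback is to feature-detect in JavaScript so only older browsers pay the observer cost. A sketch, reusing the classes from the earlier example:

// Only set up the IntersectionObserver fallback in browsers that
// lack scroll-driven animation support
if (!CSS.supports('animation-timeline', 'view()')) {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.classList.add('fade-in');
        observer.unobserve(entry.target);
      }
    });
  });

  document.querySelectorAll('.animate-on-scroll').forEach((el) => {
    observer.observe(el);
  });
}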

The real win is performance. Scroll-driven animations are declarative. You tell the browser what animation to run and when to run it. The browser optimizes the execution. With IntersectionObserver, you’re imperatively managing state, adding classes, and hoping you wrote efficient callback code.

When to stick with JavaScript workarounds

CSS isn’t always the answer. There are unique cases where JavaScript is still the right tool, and pretending otherwise is dishonest.

Use JavaScript for virtualization when:

You have truly infinite lists with 1,000+ items. content-visibility still keeps every item in the DOM even if it doesn’t render it; at that scale, memory becomes a problem. react-virtualized only creates DOM nodes for visible items, keeping memory usage low.

Your list has variable or unknown heights that change after render. content-visibility needs contain-intrinsic-size to work properly. If your items grow and shrink dynamically based on user interaction or loaded content, calculating intrinsic sizes becomes complicated. Virtualization libraries handle this with measurement APIs.

You need precise item tracking and scroll position control. If you’re building a data table where users can jump to row 5,000, or you need to restore exact scroll positions across page loads, virtualization gives you APIs for that. Content-visibility doesn’t expose that level of control.

Use JavaScript for layout when:

You still need JavaScript when your logic depends on exact measurements. Container queries let CSS adapt based on size, but if your code needs to know whether a container is exactly 247px wide, you’re back to ResizeObserver or getBoundingClientRect().

JavaScript also wins when the layout itself is too dynamic for CSS. If you’re building a dashboard with draggable panels, resizable columns, and layout rules driven by state and math, that’s squarely in JavaScript territory.

Use JavaScript for animations when:

You need callbacks at specific animation points. Scroll-driven animations don’t fire events when they start or end. If your animation triggers data fetching or needs to update application state, IntersectionObserver or scroll event listeners are still necessary.
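
For example, an infinite-scroll feed still needs an observer to kick off a network request. A sketch (the sentinel class and fetchNextPage() are hypothetical app-level names):

// Fire application logic when a sentinel element scrolls into view;
// a scroll-driven CSS animation can't trigger callbacks like this
const sentinel = document.querySelector('.load-more-sentinel');

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      fetchNextPage(); // hypothetical: load the next page of data
    }
  });
});

observer.observe(sentinel);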

Conclusion

I’ll leave you with a simple decision framework for when to reach for CSS or JavaScript. Start by checking whether CSS can handle the problem outright. If it can, use CSS. If it can’t, see whether a progressive-enhancement approach works – modern CSS first, with a JavaScript fallback. If that covers your case, go with it. Only default to a JavaScript-first solution when CSS truly can’t do the job.

The point isn’t to avoid JavaScript. It’s to stop using JavaScript by reflex when CSS already gives you the answer. Most lists don’t have a thousand items. Most animations don’t need precise callbacks. Most components do perfectly well with container queries.

Figure out what your UI actually needs. Measure real performance. Then pick the simplest tool that solves the problem. More often than not, that’s CSS.

And if you’ve replaced a long-standing JavaScript workaround with a clean CSS solution, drop it in the comments.

The post Stop using JavaScript to solve CSS problems appeared first on LogRocket Blog.


Why Document Processing Libraries Require a Document Editor

A document processing library alone cannot guarantee reliable and predictable results. Users need a true WYSIWYG document editor to design and adjust templates so they appear exactly as they will after processing. By combining a visual editor with a high-performance backend engine, solutions become stable, scalable, and suitable for real-world automation across Windows, Linux, and cloud environments.


A Central Conflict in 'Readable' Code


I have written before about why I don’t like to label code as “readable” or “unreadable”, but sadly, this has not stopped the world from doing this. I have lodged complaints with the appropriate departments.

I believe that programmers generally want to spend less energy trying to understand code. Since humans are energy-conserving machines, this feels like a reasonable belief. Programmers, then, want code that costs less to understand so that they can change it safely and accurately. “Costs less” can be (somewhat) measured by time, energy, or even money. I’ll assume, for the rest of this article, that we share the goal of making code less expensive to understand. I might even call this the primary goal of “software design”.

In order to make code less expensive to understand, you have two key strategies: simplifying it or becoming more familiar with it.

By “simplifying” code, I mean reducing the number of things you need to know in order to understand the code. This tends to mean things such as moving irrelevant details out of the way and making relevant details easier to see. Becoming familiar with code usually only happens by spending time engaging with the code, such as by reading it or testing it or trying to change it.

It makes sense to try to simplify code, because this activity both reduces the amount of code you need to become familiar with and is itself an activity that causes programmers to work with code and become more familiar with it. The act of trying to simplify code helps programmers understand code sooner both by reducing the necessary investment and by also being part of making that investment. It seems like a win-win.

This is why we refactor, remove duplication, improve names, introduce abstractions, add tests… all the things that we have been talking about and debating over the decades.

What’s the Catch?!

Sadly, in order to simplify code, we risk making it less familiar. This happens because we:

  • introduce libraries that not everyone knows
  • introduce abstractions that not everyone has participated in choosing
  • need to name new design elements and we’re not excellent at naming

Not only that, but while we are in the relatively early stages of learning to do these things, we have to overdo them in order to form the judgment to do them correctly and appropriately and judiciously.

Simplifying code therefore risks making it less familiar. When we do this, we are betting that the savings from simplifying will be greater than the additional costs of losing familiarity. We’re also betting that it will become cheaper to invest in restoring our previous levels of familiarity.

When we simplify code in a way that makes the code less familiar, then I recommend focusing on improving names to make the code more familiar to your intended audience. In the process, according to The Simple Design Dynamo, you would iterate through removing duplication and improving names until you reached a new balance point where you judged the code as “good enough” to move on. When does this happen? Probably when the act of simplifying the code has given you a chance to become familiar with it again. Your subjective experience of becoming familiar with the new design informs your choices to remove duplication and improve names in a way that helps the code be more familiar to the other programmers not actively involved in your work.

In short, you’d start by focusing on simplifying the code, then you’d gradually transition to making the code (more likely to seem) more familiar before declaring victory and moving on.

So What?!

And here is the key point: we must be prepared to make the code less familiar for some time in order to simplify it. I believe the real power of the work lies here. If we are not prepared to let the code be less familiar for some time, then we will only ever simplify it when there is a clear, obvious way to simplify it. That can help, too, but I believe you would miss out on powerful opportunities to improve the design if you stuck with such a conservative, passive strategy.

Many of those programmers arguing heatedly in meetings about refactoring are objecting to changes (or refusing to merge PRs) only because the code has become less familiar to them. Their reaction seems perfectly reasonable to me, but I believe it is ultimately self-limiting. When you add to this situation the tendency to conflate complicated code and unfamiliar code as “unreadable”, those arguments become nearly impossible to resolve without resorting to dysfunctions such as deferring to the Highest Paid Person In the Room or the Loudest Person In the Room. You need another tool to avoid this fate:

If you want code to become less expensive to understand over time, then I argue that you must be willing to risk the code becoming unfamiliar; otherwise, you will miss your best opportunities to simplify it.

Do you struggle with technical leadership issues such as this one? Do you want to take advice like this, but feel emotional obstacles that you don’t quite understand or know yet how to deal with? I believe I can help. It’s not free, but software development professionals are getting the help they need as part of The jbrains Experience.


Get Instant Answers to Your Technical Questions with the Telerik AI Assistant


Progress Telerik and Kendo UI support just got even better! Here’s how to use the AI Assistant option to query for help, with an opportunity to send unresolved issues (and their context!) on for further support.

You’re building apps with Progress Telerik or Kendo UI and have a question for technical support. You can now get an instant technical support answer from the Telerik AI Assistant instead of waiting for support to review the case.

What Is the Telerik AI Assistant?

The Progress Telerik AI Assistant is an intelligent support tool that delivers instant technical answers to your Telerik and Kendo UI questions. It’s not a replacement for our support team, just a faster way to get help with many questions and implementation challenges.

It answers your technical questions in seconds using Retrieval-Augmented Generation (RAG) technology. When you ask a question, the Telerik AI Assistant searches through Telerik documentation, knowledge base and community forums to find the most relevant information. Then it generates a precise and structured answer grounded in verified sources.

How Is the Telerik AI Assistant Helping You?

The Telerik AI Assistant saves you time by providing an instant detailed response with references to official documentation and knowledge base articles. It is available 24 hours a day and 7 days a week. Yes, you get support during weekends and national holidays, too! Instead of waiting for a support response, you get:

  • Instant clarity on configuration questions, code patterns and implementation approaches
  • Accurate, contextual answers drawn from official documentation and verified knowledge
  • Source references so you can verify recommendations and explore topics deeper if needed
  • Faster problem resolution whether you’re troubleshooting an issue, exploring new features or learning best practices
  • Less downtime, fewer development delays and more confidence that you’re implementing things correctly the first time

Who Has Access to the Telerik AI Assistant?

The Telerik AI Assistant is available for most Telerik and Kendo UI products to everyone who has an active license or trial with access to customer support. Simply ask your question and get an instant answer if you’re using any of these products:

  • Telerik UI for Blazor
  • Telerik Reporting
  • KendoReact
  • Kendo UI for jQuery
  • Kendo UI for Angular
  • Kendo UI for Vue
  • Telerik UI for ASP.NET Core
  • Telerik UI for ASP.NET MVC
  • Telerik UI for ASP.NET AJAX
  • Telerik UI for WPF
  • Telerik UI for WinForms
  • Telerik UI for .NET MAUI

Why Use the AI Assistant in Addition to Contacting Support?

Using the Telerik AI Assistant is optional and completely up to you. It’s a choice that sits alongside our exceptional technical support, not a replacement for it.

Use the Telerik AI Assistant when you want instant answers for configuration questions, debugging issues, exploring features or learning patterns. If it fully resolves your issue, you’re done. Quick and easy!

If it doesn’t fully address the problem, you can send the entire conversation (including your original description and what the AI Assistant provided) directly to a support engineer just like a regular support ticket. They’ll have the complete context and can pick up where the assistant left off, often reaching a resolution faster than starting from scratch.

How to Get Support in Your Native Language?

Language should never stand in the way of successfully delivering your project. You can ask the Telerik AI Assistant questions in your native language, whether that is Deutsch, français, 日本語, български or another language. The AI Assistant will understand your question, translate it to English to search through the knowledge base, and provide the answer back in your original language.

This makes instant technical support truly accessible to developers around the world. While official support communication remains in English, the Telerik AI Assistant removes that language barrier, so you can get AI-assisted answers in the language you’re most comfortable with.

How to Use the Telerik AI Assistant?

To get started, visit the Telerik Support Center and go through the steps to get technical support. Describe your technical issue, configuration question or implementation challenge in natural language. You are more than welcome to share code snippets or error codes that illustrate the problem. Once you are done, click “Get AI Reply” and give it a minute.

Get Instant Support from the Telerik AI Assistant. Button options for: Submit Ticket or Get AI Reply

The Telerik AI Assistant provides a comprehensive answer with relevant code examples, configuration guidance, and references to the source documentation. The response is well formatted and easy to follow.

How to Follow Up with a Support Engineer?

You are always one click away from a technical support engineer. If the Telerik AI Assistant did not fully resolve your issue, you will often find that it asks follow-up questions to gather more details. Answer those questions and add more context before sending the entire conversation to a support engineer by clicking the Send to Support button. This is how the Telerik AI Assistant and the support engineers work together most effectively to help you.

Did the Telerik AI Assistant help you solve the issue? Buttons for: Issue Solved or Send to Support

Our support engineers will respond within their standard timeframe depending on your support plan, and they’ll have a complete picture of your issue and the guidance you’ve already received from the AI Assistant.

Ready to move faster? With the Telerik AI Assistant, instant support is available 24/7! You can solve technical problems right away and continue with your momentum or send it to our expert support team. Either way, you’ll spend less time stuck and more time coding.


The Telerik AI Assistant is yet one more extension of award-winning support for Progress Telerik and Kendo UI. Not yet a user? This support is also available in the 30-day trial license!

Try Now


.NET Conf 2025 Recap – Celebrating .NET 10, Visual Studio 2026, AI, Community, & More


At .NET Conf 2025 we celebrated the official launch of .NET 10 and Visual Studio 2026. The event provided a deeper dive into the world of .NET for developers worldwide and there were many announcements across the entire .NET ecosystem. Organized by Microsoft and the .NET community, the event was a huge success, providing .NET developers with 3 days of incredible, free .NET content, plus a special Student Zone on November 14th.

On-Demand Recordings

If you missed the event, feel free to catch up on the sessions via our on-demand playlists on YouTube.

The conference kicked off with Scott Hanselman and .NET and Visual Studio product team members welcoming attendees and celebrating the incredible momentum of .NET. With over 7 million .NET developers using the Visual Studio family of products monthly and more than 23,000 pull requests merged to .NET 10 alone, the community continues to thrive. Since open-sourcing, .NET has received over 290,000 pull requests from the community, with more than 68,000 people contributing code and filing issues. Since 2020, .NET has been one of the highest velocity open-source projects tracked by the Cloud Native Computing Foundation (CNCF), and C# is consistently ranked in the top five programming languages on GitHub.

Scott emphasized that the goal with .NET is to build “a complete development platform that developers love, that businesses trust” – a vision exemplified by how .NET powers many Microsoft products like Bing (running .NET 10 release candidates in production with noticeable P90 latency improvements), Xbox Gaming Copilot (using the entire .NET stack including Orleans and Aspire), and Copilot Studio (built as a Blazor WebAssembly app).

.NET Conf 2025 Keynote

The keynote also highlighted the incredible ecosystem of partners contributing to .NET’s success, including distro maintainers and security partners like Red Hat, Canonical, and IBM; software partners like Syncfusion and Uno Platform helping build .NET MAUI and its core components; hardware partners AMD and Intel contributing low-level hardware intrinsics for significant performance improvements in computationally intensive tasks like AI workloads; and Samsung working on instruction set architecture for RISC-V and ARM, preparing .NET for a bright future.

Companies all over the world trust .NET to power their businesses every day. Throughout the conference, we heard inspiring customer stories. Shoreless AI, built entirely in .NET, demonstrated how the platform enables developers to become AI architects. In collaboration with Texas A&M University’s Mays Business School, they powered the first-of-its-kind national AI pitch competition with over 100 applicants from 37 universities. FMG showcased the power of GitHub Copilot App Modernization by upgrading many of their applications to .NET 10 in just hours – a process that would typically take weeks.

A huge thank you to everyone who contributed to .NET!

This year, we delivered an action-packed program featuring the .NET team and community members across 3 days of live content:

  • Day 1: .NET 10 Launch (November 11) featured the official release of .NET 10, including the keynote and sessions led by the .NET team to introduce new features and enhancements including C# 14, performance improvements, ASP.NET Core updates, Blazor enhancements, .NET MAUI, Aspire, AI-powered development, and Visual Studio 2026.
  • Day 2: Azure, Cloud, and Deep Dives (November 12) kicked off with the Azure Keynote and provided deeper dives into cloud-native development, Azure Container Apps, Azure Kubernetes Service, Azure Functions, AI services, and advanced topics including testing, containers, and security.
  • Day 3: Community Day (November 13) featured speakers from around the world sharing real-world experiences, advanced techniques, and innovative projects built with .NET. The day included live sessions followed by a YouTube Premiere track featuring even more community content.

We also extended .NET Conf’s reach to developers in China through localized broadcasts on WeChat and Bilibili, featuring Chinese subtitles and region-specific scheduling. This effort engaged the vibrant Chinese .NET developer community and demonstrated our commitment to supporting .NET developers worldwide. 你好,中国的 .NET 开发者朋友们!.NET 爱你们! (Hello to our .NET developer friends in China! .NET loves you!)

.NET Conf 2025 Announcements

As part of the keynotes of .NET Conf 2025, we had exciting announcements from the teams at Microsoft:

.NET 10 is now available

The best version of .NET yet is now available! .NET 10 is released and ready to download today. This Long Term Support (LTS) release delivers a complete development platform that developers love and businesses trust, enabling you to build modern apps with a high-performance runtime, best-in-class programming languages, and powerful development tools. With updates across ASP.NET Core, .NET MAUI, WinForms, Blazor, C# 14, industry-leading performance improvements, enhanced security, and so much more, .NET 10 empowers you to meet the needs of tomorrow. As an LTS release, .NET 10 will be supported for three years until November 10, 2028.

Read the announcement | Download .NET 10

Visual Studio 2026 released

Visual Studio 2026 is now available, delivering best-in-class development tools as part of the complete .NET development platform! This release includes full .NET 10 support, a modern look and feel with FluentUI, improved hot reload and Razor editing experiences, enhanced diagnostics, streamlined upgrade processes, and GitHub Copilot as your AI pair programmer. Together with .NET 10, Visual Studio 2026 provides the tools developers love to build the modern applications that businesses trust.

Visual Studio 2026 release announcement

Read the announcement | Download Visual Studio 2026

Aspire 13 is here

Aspire is now more than just .NET. Featuring a code-first experience to help you build applications with modular and extensible integrations with popular frameworks and tools, Aspire 13 gives you the flexibility to build and deploy applications your way.

Read the announcement | Download Aspire

GitHub Copilot app modernization

You can now use GitHub Copilot for app modernization! Quickly upgrade your .NET applications to the latest versions of .NET with the power of AI. Get code suggestions, end-to-end assessments and remediation assistance to help you modernize your applications and migrate to Azure.

Learn more

GitHub Copilot testing for .NET

Use GitHub Copilot to improve your test coverage! Now in public preview, generate unit tests, cover more edge cases, accelerate your testing, and give yourself a productivity boost. Copilot can even suggest fixes when a test fails to help ensure your applications are working as expected.

Try the Learn module

Microsoft Agent Framework for .NET

The Microsoft Agent Framework for .NET is now available in public preview! As part of the complete .NET platform, the framework enables you to build intelligent AI-powered agents and assistants that can autonomously perform tasks, make decisions, and interact with users using natural language. The framework is designed to work seamlessly with the broader .NET ecosystem, including Microsoft.Extensions.AI and other .NET technologies.

.NET Conf 2025 Keynote - Agentic UI

Learn more

MCP C# SDK

The Model Context Protocol (MCP) C# SDK is now available in public preview! MCP enables AI applications to connect with external tools and data sources, extending the capabilities of AI agents. Microsoft products are already leveraging this technology – both Xbox Gaming Copilot and Copilot Studio (a Blazor WebAssembly app) use the MCP C# SDK in production. The SDK provides a set of tools and libraries to help you build intelligent applications with extensible AI capabilities.

Learn more

Visual Studio 2026 – Faster, Smarter, More Productive

Visual Studio 2026 shipped alongside .NET 10, bringing enhanced support for .NET development with improved performance, AI-powered development tools, and seamless compatibility with .NET 10 projects. The release includes full support for C# 14, improved debugging and profiling for .NET applications, and GitHub Copilot integration to accelerate your .NET development workflow.

Several sessions at .NET Conf highlighted Visual Studio 2026’s capabilities for .NET developers.

Read more in the Visual Studio 2026 announcement blog post.

Explore Slides & Demo Code

Access the PowerPoint slide decks, source code, and more from our amazing speakers on the official .NET Conf 2025 GitHub page. Plus, grab your 2025 Digital Swag!

Key Features and Improvements in .NET 10

Unparalleled Performance

.NET 10 is the fastest .NET yet with improvements across the runtime, workloads, and languages. Watch Performance Improvements in .NET 10 for a deep dive. Key improvements include:

  • JIT compiler enhancements: Better inlining, method devirtualization, and improved code generation
  • Hardware acceleration: AVX10.2 support for cutting-edge Intel silicon, Arm64 SVE for advanced vectorization
  • NativeAOT improvements: Smaller, faster ahead-of-time compiled apps
  • Runtime optimizations: Enhanced loop inversion and stack allocation strategies

Security & Cryptography

With quantum computing on the horizon, .NET 10 expands post-quantum cryptography support with Windows Cryptography API: Next Generation (CNG) support, ensuring your applications are prepared for the future of computing. Additional security enhancements throughout .NET 10 help protect your applications with hardened defaults and improved cryptographic capabilities.

Learn more in A Year in .NET Security (2024–2025) and Security-First .NET: How GitHub’s Tools Protect Your Open-Source Projects.

C# 14 & F# 10

C# 14 introduces powerful features including field-backed properties, extension properties and methods, first-class Span<T> conversions, and partial properties and constructors. F# 10 focuses on clarity, consistency, and performance with scoped warning suppression, access modifiers on auto property accessors, and parallel compilation. Learn more in What’s New in C# 14 and Smatterings of F#.

Cloud-Native Development with Aspire

Aspire 13 makes building observable, production-ready distributed apps straightforward with:

  • Modern development experience with CLI enhancements
  • Seamless build & deployment with built-in static file site support
  • Enterprise-ready infrastructure with flexible connection strings
  • Polyglot support for Python, JavaScript, and other languages

Watch Aspire: Cloud-Native Development Simplified, Deep Dive: Extending and Customizing Aspire, and Aspire Unplugged with David and Maddy for more details.

Artificial Intelligence

.NET 10 brings comprehensive AI capabilities:

  • Microsoft Agent Framework: Build intelligent multi-agent systems with sequential, concurrent, and handoff workflows
  • Microsoft.Extensions.AI: Unified abstractions for integrating AI services with any provider
  • Model Context Protocol (MCP): Extend AI agents with external tools and services
  • AG-UI Support: Build rich agent user interfaces with the AG-UI protocol

Explore these sessions: Building Intelligent Apps with .NET, Understanding Agentic Development, Model Context Protocol (MCP) for .NET Developers, Building Remote MCP Servers, Build smarter agents with Redis, .NET and Agent Framework, and AI Foundry for .NET Developers.

ASP.NET Core Enhancements

ASP.NET Core in .NET 10 includes:

  • Automatic Memory Pool Eviction: Reduces memory footprint in long-running applications
  • Enhanced Security: Passkey support in Identity, hardened defaults, improved certificate handling, and enhanced security headers ensure your apps are secure by default
  • Native AOT Enhancements: OpenAPI support in the webapiaot template
  • Blazor improvements: Component state persistence, circuit state persistence, and improved form validation

Learn more in What’s New in ASP.NET Core, Build better web apps with Blazor in .NET 10, and the .NET Conf 2025 release roundup on ASP.NET Community Standup.

.NET MAUI

.NET MAUI continues to evolve with:

  • Android 16 and iOS 26.0 bindings
  • Enhanced HybridWebView with initialization events and JavaScript exception handling
  • Improved XAML with global namespaces and new XAML source generator
  • MediaPicker multi-file selection and automatic EXIF handling

Watch What’s New in .NET MAUI for a complete overview, Ship Faster with .NET MAUI: Real-World Pitfalls and How to Nuke Them for practical tips, Migrating from Xamarin.Forms to .NET MAUI: The Hard Parts for migration guidance, and Uno Platform. Now with AI to see how Uno Platform extends .NET MAUI capabilities. Also check out What’s New in Windows Forms and Modern Windows Development with .NET for desktop development updates.

Day Two Deep Dives

Day Two provided advanced sessions on specialized topics.

Community Day Highlights

Day Three featured exceptional content from the .NET community.

Student Zone – November 14th

The Student Zone returned for .NET Conf 2025, providing a beginner-friendly virtual event featuring experts teaching students how to build amazing projects using C# and .NET. Topics covered AI, web development, mobile development, and game development, making it a perfect opportunity for students and newcomers to get started with .NET.

Local .NET Conf Events

The learning journey continues with community-run events. Join us in celebrating .NET around the globe! Find an event near you.

Join the Conversation

Share your thoughts and favorite moments from .NET Conf 2025 in the comments below or on social media using #dotNETConf2025. Let’s keep the conversation going!

🎥 Catch Up on Sessions: Watch all the sessions you missed or rewatch your favorites on on-demand playlists.

🚀 Get Started with .NET 10: Download the latest release of .NET 10 and explore the groundbreaking features it has to offer.

📚 Upskill on Aspire: Begin your journey with Aspire by watching the beginner video series and earning the Microsoft Learn credential.

🤖 Build AI Applications: Explore the new Microsoft Agent Framework and Microsoft.Extensions.AI to build intelligent applications.

Let’s continue building, innovating, and empowering developers with .NET!

The post .NET Conf 2025 Recap – Celebrating .NET 10, Visual Studio 2026, AI, Community, & More appeared first on .NET Blog.
