Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AWS creates a sandbox for its agent experiments


Amazon Web Services (AWS) is launching a dedicated GitHub organization for its most experimental agentic AI work.

On Monday, the company launched Strands Labs, where teams across Amazon will publish frontier projects that aren’t quite ready for inclusion in the production-ready version of the company’s Strands Agents SDK.

The initial release includes two projects: AI Functions, which generates code at runtime from natural-language specifications, and Strands Robots, which connects large language models to physical hardware via vision-language-action (VLA) models.

Why a separate org?

The Strands Agents SDK, which AWS first open-sourced in May 2025, has been downloaded 14 million times, according to Clare Liguori, a Senior Principal Engineer at AWS who leads work on both Strands and the Kiro AI coding assistant. The SDK now ships in Python and TypeScript, and AWS uses it internally for production workloads.

That traction is precisely why Liguori’s team felt it needed a boundary between the stable SDK releases and more experimental work. “We want to make sure that the SDK continues to focus on that production-ready solution,” Liguori tells The New Stack.

Strands Labs gives both Amazon’s internal teams and the wider Strands community a place to iterate on ideas where “the interfaces might change a lot,” she adds, without destabilizing the core SDK’s API surface.

So while all Strands Labs projects will ship with documentation, functional code, and basic tests, users should expect breaking changes.

Even before the launch of Strands Labs, AWS had already pushed some experiments into the Strands SDK, but, as Liguori acknowledges, those would likely have gone into Strands Labs first had it existed at the time. One example was an experiment in steering the agent.

AI Functions: Prompts instead of code

For developers, the most interesting of these two current experiments is likely AI Functions. This lets developers define what a Python function should do in natural language, along with preconditions and postconditions that act as guardrails. At runtime, a coding agent then generates the implementation.

Since the agents aren’t always perfect, the built-in deterministic guardrails should ensure that if the output isn’t correct, the agent self-corrects and tries again.

Liguori uses the example of a receipt parser. Receipts vary wildly in format, making deterministic code brittle. With AI Functions, a developer specifies that the function must return a vendor name, a total price, and line items, and the agent handles the edge cases.

What’s important here is that from the program’s perspective, this looks like any other function. “It’s not this separate thing that’s an agent,” Liguori says. “It’s a normal function” embedded in otherwise deterministic logic.


@ai_function(
code_execution_mode="local",
code_executor_additional_imports=["pandas.*"],
)
def fuzzy_merge_products(invoice: DataFrame) -> DataFrame:
"""
Find product names that denote different versions of the same product, normalize them
by removing version suffixes and unifying spelling variants, update the product names
with the normalized names, and return a DataFrame with the same structure
(same columns and rows).
"""


# Load a JSON (the agent has to inspect the JSON to understand how to map it to a DataFrame)
df = import_invoice('data/invoice.json')
print("Invoice total:", df['price'].sum())

# Load a SQLite database. (The agent will dynamically check the schema and generate
# the necessary queries to read it and convert it to the desired format.)
df = import_invoice('data/invoice.sqlite3')
# Merge revisions of the same product
df = fuzzy_merge_products(df)

Longer term, the team sees AI Functions as a path toward a feedback loop: run an agentic function millions of times in production, observe which code paths emerge, and eventually collapse the results back into deterministic code that no longer needs a model call.

One thing the Strands team has always stressed is its belief that the models will only get better, so the agent framework should get out of the way as much as possible. In many ways, AI Functions is a push in that direction, too, with its focus on models’ ability to write code as needed.

Robots reasoning in the cloud

Strands Robots tackles a very different problem. It pairs lightweight, low-latency VLA models running on local hardware with frontier LLMs in the cloud. Frontier models, after all, are too computationally intensive to run directly on a robotic arm. But for the robot to work efficiently, you need to bring latency down as much as possible.

AWS is partnering with Nvidia and Hugging Face on the project, and the team is also releasing a simulated environment so developers can iterate without a physical robot on their desk.

Liguori says the team has been running proof-of-concept work with AWS customers, and she noted that Amazon itself, obviously, runs a very large fleet of warehouse robots. But she also sees applications for in-vehicle AI, for example, and other edge scenarios that need both domain-specific local inference and the kind of long-horizon planning that modern LLMs can offer.

Agent frameworks everywhere

AWS is obviously not the only hyperscaler investing heavily in agent frameworks (and there are reasons for that). Google’s open-source Agent Development Kit, announced at Cloud Next 2025, targets multi-agent orchestration. Microsoft’s Agent Framework, one of the few agentic frameworks to support .NET, recently reached Release Candidate status. And startups like CrewAI, LangGraph, and others are already major players in this field, with personal agents also taking off thanks to the hype around OpenClaw.

The post AWS creates a sandbox for its agent experiments appeared first on The New Stack.

Read the whole story
alvinashcraft
25 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI


Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic says the "industrial-scale campaigns" involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal.

The three companies - DeepSeek, MiniMax, and Moonshot - are accused of "distilling" Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a "legitimate training method," it adds that it can "also be used for illicit purpose …

Read the full story at The Verge.


Billions of dollars later and still nobody knows what an Xbox is


The last few years of Xbox have been expensive. Under Phil Spencer's leadership, Microsoft has spent billions of dollars in an attempt to build an ambitious future for gaming that looks a lot like Netflix. And while its subscription service Game Pass started out as a good deal for gamers (although now not so much), that spending spree has led to catastrophic layoffs, studio closures, and confused and inconsistent messaging about what Xbox actually stands for. And with Spencer set to retire as new leadership takes charge, the future of Microsoft's gaming efforts looks increasingly unclear.

Spencer announced his retirement last week, after ov …

Read the full story at The Verge.


6 React Server Component performance pitfalls in Next.js


Most teams adopt React Server Components for one specific reason: they want their apps to feel faster through better initial loads and less client-side JavaScript. The App Router in Next.js makes this transition feel straightforward. You fetch data on the server, React streams the UI, and the browser displays the page shell almost immediately.

The reality in production often looks different. A page request comes in, the server starts fetching multiple pieces of data, and nothing goes to the browser until the slowest request finishes. The header does not appear, and the layout does not render. From the user’s point of view, the page simply hangs while the server is busy.

This performance gap usually comes from a small set of recurring implementation mistakes. Pages wait on data that does not need to block rendering, and layouts fetch global data far too early. These misplaced async boundaries can accidentally disable streaming entirely.

In this article, we’ll look at the most common React Server Component performance mistakes teams make in production, why they occur, and how you can refactor them without changing your application’s behavior.

Mistake #1: Blocking the shell with top-level awaits

React Server Components can stream only when the server is able to start rendering without waiting on slow work. When a page component waits on a slow request before returning any JSX, the server has nothing to send to the browser, and streaming cannot begin.

Consider a dashboard route where analytics data is fetched at the very top of the page component:

export default async function Home() {
  const analytics = await getAnalytics(); // slow request
  return (
    <div>
      <Header />
      <AnalyticsWidget data={analytics} />
    </div>
  );
}

Because this await happens before the component returns any UI, the server blocks the entire render. Even parts of the page that do not depend on this data, such as the header, are delayed by the slowest request. These issues cause:

  • The initial shell is delayed, leaving the browser with nothing to display
  • Streaming provides no benefit, since no content is eligible to flush
  • Users perceive the page as unresponsive, despite other data sources being fast

The reason this happens is that the server can only stream what it has already rendered. When slow async operations run before the first render step, they block streaming by definition.

What you should do instead

To avoid this mistake, move slow data fetching out of the critical render path and isolate it behind Suspense boundaries so the shell can render immediately.

import { Suspense } from "react";
export default function Home() {
  return (
    <div>
      <Header />
      <Suspense fallback={<AnalyticsSkeleton />}>
        <AnalyticsWidget />
      </Suspense>
    </div>
  );
}

With this fix, the shell renders immediately, allowing the user to see meaningful structure without waiting for slow data. Sections that rely on longer-running requests load progressively as their data resolves, and the page feels responsive even though the same amount of work is still being done on the server.

Blocking vs. streaming.

Mistake #2: Passing large data structures across the server-client boundary

Server Components make it incredibly easy to move logic to the server, but the border between server and client is still expensive to cross. Data doesn’t magically teleport. It has to be serialized, shipped, parsed, and hydrated.

Let’s look at a classic inventory page scenario where we fetch the full product catalog on the server. We are talking about hundreds of items, complete with internal IDs, warehouse metadata, and long descriptions.

It feels natural to just pass that data straight down:

export default async function ProductsPage() {
  const products = await getProducts();
  return <FilterableProductList products={products} />;
}

The catch? If FilterableProductList is a Client Component, React has to serialize everything in that products array to send it to the browser. Even if the UI only renders the product name and price, we are forcing the user to download the whole database row for every single item.

From the browser’s perspective, this is a heavy payload. The fetch might have been fast on the server, but the client experience takes a hit:

  • The HTML bloats — The serialized JSON is embedded directly in the document
  • The Main Thread locks up — The browser has to parse all that data before it can attach event listeners
  • Interaction lags — Users try to type in the search bar, but the input feels sticky because the CPU is busy unpacking data it doesn’t even need

This technical bottleneck creates a deceptive experience for your users. The page looks loaded. But for those first few hundred milliseconds, it is really just a painting you cannot interact with. You might try to click a button or scroll, but the browser is still too busy with that background work to respond.

What you should do instead

To avoid this mistake, narrow the boundary. Before data leaves the server, shape it into exactly what the client needs.

export default async function ProductsPage() {
  const raw = await getProducts();
  const products = raw.map((p) => ({
    id: p.id,
    name: p.name,
    price: p.price,
    stock: p.stock,
    description: p.description,
    // omit internalLogs and other unused fields
  }));
  return <FilterableProductList products={products} />;
}

If the client needs more detail later, fetch it on demand or handle it on the server in response to an interaction, rather than sending everything up front.

With this fix, you have less data crossing the boundary, hydration finishes sooner, and the UI becomes interactive earlier. Filtering and other client-side actions feel responsive, not because there’s less work, but because the work that never needed to happen on the client no longer does.

Mistake #3: Over-hydrating UI by overusing “use client”

In the App Router, "use client" is not just a switch for enabling interactivity. It is a boundary that cuts off server-only optimizations. Once a component is marked as a Client Component, everything it imports becomes part of the client JavaScript bundle and must be hydrated before the UI can fully respond.

A common mistake is placing this boundary too high in the tree to support a small interaction, such as a search input or a button, and unintentionally forcing large, mostly static sections of UI to hydrate on the client.

Still using the product list scenario, let’s say we want to add a “Restock” button to every row. The easiest move is to just wrap the whole list in "use client" so we can handle the click event.

// components/FilterableProductList.tsx
"use client";
import { Package, RefreshCw } from "lucide-react";
export function FilterableProductList({ products }) {
  // filtering and mutation logic
  return (
    <div>
      <input /* filter input */ />
      {products.map((product) => (
        <div key={product.id}>
          <Package />
          <h3>{product.name}</h3>
          <button>
            <RefreshCw />
          </button>
        </div>
      ))}
    </div>
  );
}

But by doing this, we are forcing the entire list to be a Client Component even though only the button needs interactivity. This means every static icon, layout div, and product title is being shipped as JavaScript.

From the user’s perspective, the page appears to load, but it takes longer to feel responsive, causing issues such as:

  • The client bundle grows because dependencies used for static rendering are included in JavaScript
  • Hydration work increases because React must initialize every row, not just the interactive parts
  • The main thread stays busy during load, delaying interactions like typing into the filter or clicking buttons

This pattern often starts as a simple shortcut. You might wrap a large component in “use client” because it is the fastest way to unblock development. However, that single choice eventually spreads client-side work across parts of the UI that never needed it.

The reason this spreads so easily is that client boundaries are inherited. When you place that directive at the top of a file, every component it imports becomes a Client Component by default. One misplaced line of code can accidentally turn a fast, server-rendered page into a heavy, fully hydrated client tree.

What you should do instead

Push the client boundary down to the leaves of the component tree. Isolate interactivity into small client components and keep the surrounding layout and rendering on the server.

import { Package } from "lucide-react";
import { RestockButton } from "@/components/RestockButton";
import { SearchInput } from "@/components/SearchInput";
export default async function ProductsPage({ searchParams }) {
  const products = await getFilteredProducts(searchParams.q);
  return (
    <div>
      <SearchInput />
      {products.map((product) => (
        <div key={product.id}>
          <Package />
          <h3>{product.name}</h3>
          <RestockButton id={product.id} />
        </div>
      ))}
    </div>
  );
}

Only the search input and the restock button are hydrated. Everything else renders on the server and ships as HTML.

With fewer components crossing the client boundary, the JavaScript bundle shrinks and hydration finishes sooner. Interactive elements respond immediately, and the page feels lighter, not because functionality changed, but because work that never needed to happen on the client no longer does.


Mistake #4: Blocking streaming by treating all data as critical

One of the biggest wins for React Server Components is streaming, which allows the server to send the page shell almost instantly. This means the header, navigation, and basic layout can appear on the screen while the slower sections of the page load in the background as their data becomes available.

That promise quietly breaks when everything is treated as critical.

Imagine you have a dashboard page where you need to display some fast static content (the title) and some slow dynamic data (the analytics cards). To do this, you fetch the analytics at the top level like so:

// app/page.tsx
import { getAnalytics } from "@/lib/data";
import { RecentOrders } from "@/components/RecentOrders";
import { BarChart3, Users, CreditCard, ArrowUpRight } from "lucide-react";
export default async function Home() {
  const analytics = await getAnalytics(); // 1.2s delay
  return (
    <div className="space-y-8">
      <div>
        <h1 className="text-3xl font-bold text-gray-900">Dashboard</h1>
        <p className="text-gray-500 mt-2">Overview of your store performance</p>
      </div>
      <div className="grid grid-cols-1 md:grid-cols-3 gap-6">
        {/* Widget 1 */}
        <div className="bg-white p-6 rounded-xl shadow-sm border border-gray-100">
           <div className="flex items-center justify-between mb-4">
              <div className="p-2 bg-blue-50 text-blue-600 rounded-lg">
                  <Users size={24} />
              </div>
           </div>
           <h3 className="text-gray-500 text-sm font-medium">Total Visitors</h3>
           {/* This number delayed the whole page */}
           <p className="text-2xl font-bold text-gray-900 mt-1">{analytics.dailyVisitors.toLocaleString()}</p>
        </div>


        {/* ... other widgets ... */}
      </div>
    </div>
  );
}

This means the server holds back everything until that 1.2-second analytics request finishes. Even the static ‘Dashboard’ heading stays hidden. The user stares at a white screen for over a second, and then suddenly the whole page pops into view all at once. Compare that to apps like Uber Eats, which show the navigation and layout skeletons immediately while the menus load in.

This leads to issues like:

  • The shell is delayed — The user doesn’t even see the header or sidebar
  • No visible progress — The server is working hard, but the user assumes it crashed
  • Streaming is useless — You can’t stream if you don’t render. By waiting for the last byte of data, you’ve effectively turned off streaming

This technical trap usually starts with a good intention. Most of us have a default instinct to await everything at the top level. We want the page to feel complete before it renders. We try to avoid layout shifts, but we accidentally trade those small shifts for a much worse problem: a long, empty wait for the user.

What you should do instead

To solve this, you need to stop waiting for slow data at the page level. Decide what is critical (Shell) and what is secondary (Slow Widgets).

Wrap the slow parts in <Suspense> like so:

// app/page.tsx
import { Suspense } from "react";
import { AnalyticsWidgets } from "@/components/AnalyticsWidgets";
import { DashboardShell } from "@/components/DashboardShell";
export default async function Home() {
  //No top-level await here!


  return (
    <div className="space-y-8">
      {/* This shell renders IMMEDIATELY */}
      <DashboardShell title="Dashboard" subtitle="Overview of your store performance" />
      <div className="grid grid-cols-1 md:grid-cols-3 gap-6">
        {/* The slow data is isolated inside this component */}
        {/* The user sees the shell instantly, and skeletons while data loads */}
        <Suspense fallback={<AnalyticsSkeleton />}>
           <AnalyticsWidgets />
        </Suspense>
      </div>
    </div>
  );
}

This tells React: “Render the rest of the page NOW. Put a placeholder here, and fill it in when the data is ready.”

With this change, the shell paints instantly. The fast data appears immediately. The slow widgets spin for a second and then pop in. Nothing about the backend got faster, but the perceived performance improved massively because the user wasn’t left waiting in the dark.

Blocking all data vs. streaming the shell and fast content first.

Mistake #5: Treating server components as static templates

It is easy to assume everything is working perfectly when your requests are fast and your database updates without errors. However, a subtle problem often hides in plain sight right after you think you have finished a feature. A user clicks a button, the request succeeds, and the server responds correctly. Yet, for some reason, the UI stays exactly the same. The data eventually updates only after a manual refresh, but by then, the user experience has already suffered.

Let’s say we have an inventory dashboard where users can update stock levels. You might write a standard Server Action to handle the database write like this:

// app/actions.ts
"use server";
import { delay } from "@/lib/utils";
export async function updateStockAction(productId: string) {
  await delay(500); // Simulate DB write
  console.log(`Updated stock for ${productId}`);
  return { success: true, message: "Stock updated (but UI won't reflect it!)" };
}

From the database’s perspective, the operation was a success. The stock is updated. But from the user’s perspective, the number on the screen didn’t budge. They click the button again and again. Eventually, they hit refresh and see the value jump.

This happens because the server-rendered data on the screen is now stale. When that information remains out of sync with the database, it creates a series of problems:

  • Users doubt the system — “Did it work? Why didn’t it change?”
  • Double submissions start to occur — Users click repeatedly because they see no feedback
  • Perceived sluggishness in the app — The app feels broken or laggy, even though the backend was instant

At this stage, the problem is not about how fast your code runs. Your user is essentially flying blind because the number on the screen no longer matches the truth in your database. This mismatch makes the whole app feel unpredictable.

What you should do instead

In Next.js, you should use the revalidatePath (or revalidateTag) function. This tells the Router Cache: “The data on this path is dirty. Throw away the old snapshot and fetch a fresh one.”

"use server";
import { revalidatePath } from "next/cache";
import { delay } from "@/lib/utils";
export async function updateStockAction(productId: string) {
  await delay(500);

  revalidatePath('/products');


  return { success: true, message: "Stock updated and UI refreshed" };
}

With that one line, the disconnect vanishes. The moment the server action completes, Next.js knows the cache is stale, fetches the fresh component payload, and updates the DOM. The interface stays perfectly in sync with the user’s actions, and the system stops feeling unpredictable.

Stale data trap vs. fresh data flow in React Server Components.

Mistake #6: Incorrect async boundaries in layouts

Layouts are meant to be stable. They wrap every page in your app and persist across navigations. That’s exactly why they make transitions feel fast — the frame stays put while the content changes.

But that stability becomes a trap if you put the wrong kind of async work inside them.

Layouts feel like the natural home for global data. You need the user’s avatar in the sidebar and their role displayed at the bottom. Since the layout wraps everything, it seems like the perfect place to fetch that data.

import type { Metadata } from "next";
import { Inter } from "next/font/google";
import "./globals.css";
import { getUser } from "@/lib/data";
import { LayoutDashboard, ShoppingBag, Settings, LogOut, User as UserIcon } from "lucide-react";
const inter = Inter({ subsets: ["latin"] });
export default async function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  const user = await getUser(); // this blocking call is the problem
  return (
    <html lang="en">
      <body className={`${inter.className} bg-gray-50 text-gray-900`}>
        <div className="flex min-h-screen">
          {/* Sidebar */}
          <aside className="w-64 bg-slate-900 text-white p-6 flex flex-col fixed h-full">
            <div className="mb-8">
              <h1 className="text-2xl font-bold tracking-tight text-white">Horizon</h1>
            </div>


            <nav className="flex-1 space-y-2">
               {/* ... static nav links ... */}
            </nav>
            <div className="mt-auto pt-6 border-t border-slate-700">
               {/* This section caused the blockage */}
               <div className="flex items-center gap-3 mb-4">
                  <div className="w-10 h-10 rounded-full bg-slate-700 flex items-center justify-center overflow-hidden">
                      {user.avatar ? <img src={user.avatar} alt={user.name} /> : <UserIcon />}
                  </div>
                  <div>
                      <p className="text-sm font-medium">{user.name}</p>
                      <p className="text-xs text-slate-400">{user.role}</p>
                  </div>
               </div>
            </div>
          </aside>
          {/* Main Content */}
          <main className="flex-1 ml-64 p-8">
            {children}
          </main>
        </div>
      </body>
    </html>
  );
}

The problem is how the boundary works. Since the layout is at the top of your app tree, it acts as a gatekeeper. Next.js can’t render the page content until the layout is done. This means every route in your app is slowed down just to show a small avatar icon.

This setup leads to a few frustrating experiences, such as:

  • The blank screen trap — On the first visit, the user does not even see the sidebar or the navigation. The whole app is stuck waiting for that getUser() request to finish
  • Broken streaming — You lose the primary benefit of the App Router. Even if your dashboard page is ready to go, the server cannot send a single byte because the layout above it is still busy
  • Heavier navigation — The app feels less responsive every time a user navigates. If you have any dynamic logic here, it blocks the transition and makes the navigation feel heavy

What you should do instead

You need to treat layouts as structural scaffolding, not data loaders. Keep the layout itself static and move the data dependency into a dedicated component (like <UserProfile />), then wrap it in <Suspense> like so:

// app/layout.tsx
import { Suspense } from "react";
import { UserProfile } from "@/components/UserProfile"; // Client or Server component
import { UserSkeleton } from "@/components/skeletons";
export default function RootLayout({ children }) {


  return (
    <html lang="en">
      <body className="bg-gray-50 text-gray-900">
        <div className="flex min-h-screen">
          <aside className="w-64 bg-slate-900 text-white p-6 flex flex-col fixed h-full">
            {/* Static Nav renders instantly */}
            <div className="mb-8">
              <h1 className="text-2xl font-bold text-white">Horizon</h1>
            </div>
            <nav>...</nav>
            <div className="mt-auto pt-6 border-t border-slate-700">
               <Suspense fallback={<UserSkeleton />}>
                  <UserProfile />
               </Suspense>
            </div>
          </aside>
          <main className="flex-1 ml-64 p-8">
            {children}
          </main>
        </div>
      </body>
    </html>
  );
}

Blocking layout data vs. isolated async boundaries in React Server Components.

Tooling and DX: Why teams miss these problems

Most of these issues slip into production because they are surprisingly hard to spot while you are coding. Your local dev environment often behaves differently from a production server. Streaming acts differently, hydration costs are hidden, and you almost never experience a true cold start. On top of that, Fast Refresh keeps your state alive. This makes transitions feel much smoother than a real user will ever experience.

We also focus too much on browser-side metrics. Bundle sizes and Lighthouse scores help, but they don’t tell the whole story. They don’t show when the server starts sending HTML or which async boundary is slowing things down. To the browser, everything just looks like ‘loading,’ and it can’t tell if the real bottleneck happened earlier on the server.

Poorly placed boundaries make things worse. Without clear Suspense or error boundaries, one slow section can quietly stall your whole render. The UI doesn’t crash, but it feels slow and heavy. This makes it hard to find the real problem during a quick manual test.

It all points to the same pattern. These are not obvious bugs, and they are almost never caused by slow APIs. Instead, the work is simply happening at the wrong boundary or in the wrong place. Until you start looking at performance from that perspective, these problems remain easy to miss and even harder to explain to your team.

Conclusion

RSCs are not slow by nature. They simply change where the work happens and how it affects the user experience. Most performance bottlenecks do not come from slow APIs or heavy math. Instead, they happen because work runs at the wrong boundary or starts too early in the render cycle.

The patterns we’ve discussed are consistent. The shell gets blocked, the client bundle grows too large, or the UI stays stale after an update. While these issues might seem small on their own, they define whether your application feels modern or sluggish.

Real performance is often less about absolute speed and more about visible progress. You make an app feel responsive when you show the layout structure early and defer the heavy lifting. Once you focus on these boundaries, the App Router stops being unpredictable. You get an interface that feels calm and fast. Best of all, you achieve that through small, targeted fixes instead of massive rewrites.

The post 6 React Server Component performance pitfalls in Next.js appeared first on LogRocket Blog.


I Used AI to Build a Full Golf Scoring Web App – Here’s What Happened.


I’m a golfer who wanted a better way to track my rounds. So I did what a growing number of people are doing in 2026: I sat down with an AI assistant and built the thing myself.

The result is GolfScorer, a full-featured Blazor Server application that lets me record rounds hole by hole, manage courses, and dig into my performance stats. It has user authentication, a SQL database, Azure cloud deployment, and a dark luxury UI theme inspired by Augusta National.

Why I Built It

There are golf tracking apps out there. Plenty of them. But they all come with compromises: monthly subscriptions, clunky interfaces, features I don’t need, or missing the specific stats I care about. I wanted something tailored to me: a clean dashboard that shows my putting percentages, green-in-regulation trends, a handicap index, and a hole difficulty ranking so I can see exactly where my game falls apart.

More than anything, I wanted to understand how the thing worked. Not just use someone else’s black box, but own every piece of it.

The Process

I started by describing what I wanted in plain English. A golf app. Hole-by-hole scoring. Courses with 18 holes of par data. Statistics that actually tell me something useful. From there, the AI and I went back and forth, shaping the data model, building out the UI, debugging migrations, and refining features.

The stack we landed on is .NET 10 with Blazor Server, Entity Framework Core, ASP.NET Core Identity for authentication, and SQL Server for persistence. If you’d asked me a year ago to pick a tech stack, I would have stared at you blankly. But through the process of building this, I started to understand why each piece exists and what it does.

Some of the features I’m proudest of came from iterating with the AI. The statistics dashboard, for example, breaks down my scoring into eagles, birdies, pars, bogeys, and worse. It shows per-hole analysis so I can see that I consistently blow up on hole 14 but quietly birdie hole 7 more often than I’d expect. That kind of insight is exactly why I wanted to build this.

The UI went through a major redesign too. We ended up with what I call the “Augusta Dark” theme: deep forest greens, gold accents, elegant serif headings using Playfair Display, and a monospaced font for scores so everything lines up cleanly. It looks and feels like a premium product, which still surprises me every time I open it. The theme came from the agent skill called frontend-design.

What I Learned

Building with AI isn’t pressing a button and getting a finished app. It’s a collaboration. I had to make decisions constantly: how should the data model work? Should rounds store denormalized totals for performance, or calculate everything on the fly? Do I want fairway tracking on par 3s? (No; there’s no fairway to hit.) These are domain decisions that the AI can’t make for you. You bring the knowledge of what matters, and the AI brings the ability to turn that into working code.
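To make the denormalization question concrete, here is a hedged sketch of the "calculate on the fly" option: derive round totals from per-hole entries instead of storing precomputed sums. The `HoleScore` shape and function names are illustrative, not the app's actual data model.

```typescript
// Illustrative model: one entry per hole played.
interface HoleScore {
  hole: number;
  par: number;
  strokes: number;
}

// Total strokes for a round, computed from the raw entries.
function totalStrokes(holes: HoleScore[]): number {
  return holes.reduce((sum, h) => sum + h.strokes, 0);
}

// Score relative to par: positive is over, negative is under.
function scoreToPar(holes: HoleScore[]): number {
  return holes.reduce((sum, h) => sum + (h.strokes - h.par), 0);
}

const round: HoleScore[] = [
  { hole: 1, par: 4, strokes: 5 },
  { hole: 2, par: 3, strokes: 3 },
  { hole: 3, par: 5, strokes: 4 },
];

console.log(totalStrokes(round)); // 12
console.log(scoreToPar(round));   // 0
```

Computing on the fly keeps a single source of truth (the hole entries), at the cost of re-summing on each read; storing denormalized totals trades that simplicity for faster dashboard queries.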

I also got the AI to write the entire deployment to Azure: I planned each feature, and it implemented the Bicep code, the PowerShell deployment script, and even the process for updating the app after each release.

The biggest lesson? You don’t need to be a developer to build software anymore, but you do need to be willing to think like one. You need patience, curiosity, and the willingness to read an error message and try to understand it before asking for help.

What’s Next

I’m using GolfScorer for every round now. I have plans to add trend charts, a round comparison feature, and maybe a mobile-friendly layout for entering scores on the course. The codebase has unit tests for the core services, so I can keep building with confidence that I’m not breaking things.

If you’ve been sitting on an idea for a tool that would make your life better, something specific to your hobby, your job, your weird niche interest, I’d encourage you to try building it with AI. You might be surprised by what you’re capable of.

Now if you’ll excuse me, I need to go work on that hole 14 problem.

If you’re interested in checking out a tiny little side hobby, here you go: https://golfscoreapp-app.azurewebsites.net

And yes, the full source code is available: https://github.com/gsuttie/golfscoreapp

The post I Used AI to Build a Full Golf Scoring Web App – Here’s What Happened. appeared first on Azure Greg.


Talking Agentic AI with Julien Brun on the Secret Sauce Podcast


Joined Julien Brun for a wide-ranging conversation about agentic AI, the current moment, and where we are heading.

  • From chatbots to agents: Agents differ from chatbots in one key way: they can take action, not just produce text.

  • 2026 as a breakout year: AI agents function as virtual employees you can scale indefinitely, making now an ideal time to start a business.

  • Zero-cost software economics: Writing code now costs near-zero, so cloning SaaS apps has little value. What matters is what only you uniquely know or can offer.

  • Agents for everyone: Tools like Claude Code and Claude Co-Work are general-purpose agents, not just for developers. Anyone can delegate tasks without writing code.

  • Agent skills as the new apps: Skills — small instruction packages that teach an agent your context — are the apps of the future. The shift is from building products to defining capabilities.

  • The end of software engineering: Developer roles will gradually disappear as AI surpasses human ability. Alignment is the defining challenge: losing control is inevitable; losing it badly is not.

  • Abundance and the painful transition: Material abundance is within reach but not guaranteed. The long-term outlook is optimistic; the transition period, with job losses and denial, is the real concern.


Julien: Hi Eleanor. I’m very happy to have you today. We’ll be talking about AI agents because you’re an expert in the field, and you will tell us everything about agents, about Claude Code, Claude Co-Work, and what we can build with all of this in 2026.

To introduce the topic: we’ve spent the last three years chatting with AI (ChatGPT, Gemini, etc.). How would you define the transition from a chatbot to an AI agent? What’s the fundamental shift in the way we interact with AI?

Eleanor: Thanks for inviting me; I’m always excited to talk about agents. The shift has been gradual: the same models improved and gained the ability to interact with the environment not just by emitting text, but by calling tools, taking action, and driving longer-running tasks. It became much more apparent last year, especially with coding agents.

The big distinction is: can it take action? Talking is nice, but being able to change things in the world is powerful. Once agents got that ability, it became a process of gradual improvement. Agents today are not radically different from early chatbots; they’re simply much more capable at long-running work, and they’re connected to the environment.

Julien: Exactly. Last year was the buzzword year for agents, but more than that, we started to see real impact on the workplace and how we collaborate. When did you realise this would have such a massive impact?

Eleanor: It’s funny: as a child (and later) I loved science fiction with computerised agents that could do things. It was always obvious to me that once we had something like that, it would change everything.

Until recently, I was sceptical we’d get there. For a long time, “agent” was used as a marketing buzzword without a practical basis. But models gradually improved to the point where, around the middle of last year, we could let them run, make tool calls, make changes, and come back.

The o3 model from OpenAI was a revelation. Then Claude 4 from Anthropic, and now we’re already another generation beyond that. These models can run for hours, make decisions, keep track of context. It became clear: this science-fiction thing is now real.

Julien: You’ve said 2026 would be a golden year for launching a business. Why does the convergence of AI agents make this an ideal moment for entrepreneurs?

Eleanor: Every year will be a golden year now, and 2027 will likely be even better because things move fast. But this year, for the first time, we can have something like a virtual artificial employee. We’ve had helpful software for decades, but now we can delegate work to an AI “employee” that actually does things.

If we can have one, we can have many. The constraints on what any one person can do have been lifted radically.

Julien: A common example is the junior developer: with Claude Code it’s like having a junior developer, an assistant, and you can stack agent over agent. The marginal cost to start a company is not zero, but it’s approaching it.

Do you think we’ve permanently lowered the cost of starting a company, or are there costs that remain?

Eleanor: It’s useful to look at it through economics. The fact you can ask an agent to write software and it’s basically free doesn’t mean the economics of starting a company are automatically favourable.

People are building clones of their favourite SaaS apps and getting excited. But if the cost is zero, it can’t be that valuable. Imagine it’s the 1920s and horses become extremely cheap. You decide to buy 20 horses and start a horse-and-carriage business. That’s a terrible plan: nobody wants horse-and-carriage; they want automobiles.

Many people are buying these proverbial horses. Yes, coding costs are low, but SaaS existed largely to amortise the high cost of development by selling to many customers. If the cost approaches zero, why would anyone buy your software? They can build it themselves.

There’s a radical change. It’s important to pay attention to what’s scarce versus abundant. Writing software is, increasingly, a solved problem. The cost is approaching zero.

Julien: We’ve come full circle: we used to buy SaaS so we didn’t have to build it. Now anyone can build almost anything. It’s easy to build, but that makes it harder to sell.

So starting a company is easier, but starting a meaningful company that stays profitable and relevant is the bigger question: how do you add value to the market?

Eleanor: Exactly. What’s special? What can only you do? If you and your neighbour and someone on the street can all do the same thing, it’s not valuable. But there is something only you understand about the world: an insight, a creative idea, a bit of craziness. That’s special.

Julien: That also enables niche products: things that only a few dozen people might use, which wouldn’t have made economic sense to build before.

In the past, a strategic move was raising capital to hire a team. Now you need agency instead of capital: you orchestrate agents. How do you see the role of the entrepreneur changing? From managing employees to managing agents.

Eleanor: The speed is a lot faster, which is great but also difficult. We’re used to taking time to think. Now, if you can make quick (and hopefully good) decisions, you can delegate to AI agents, contractors, and other mechanisms, and things start happening. Until you decide and delegate, nothing happens.

That can be stressful. Sometimes I feel it: if I’m relaxing, it feels like nothing is happening because I’m the only one who can make the decisions. You still need to rest, though.

Getting systematic about fast decisions, quick iteration, running experiments, then killing what doesn’t work and scaling what does, matters a lot.

Julien: Yes: kill it or scale it. The time required for decision-making is shrinking too.

Eleanor: Yes.

Julien: Claude Code has been a kind of revolution. It surprised me that it’s not only for developers; almost anybody can use it. In the Anthropic ecosystem, how does Claude Code benefit non-technical people?

Eleanor: We don’t need to over-index on Claude. There are many agents: Codex, Copilot, Gemini-based setups, GPT-5-based ones. They’re all variants of the same pattern.

They’ve been branded as developer tools because software was the first market where the impact was obvious. But the truth is: we now have general agents. They use a strong model and they are connected to an execution environment (a laptop, a VM, a container, or a more restricted environment). They can do stuff.

Great for coding, but also great for writing, generating and editing images and videos, even controlling physical devices if connected (I’ve seen people connect to 3D printers to print objects). You don’t need to write code; it writes the code. I’m a developer and I don’t write code anymore: I talk to the agent and describe what I need.

That’s why anyone can use it. And Claude Co-Work is essentially a rebrand aimed at making that clearer.

Julien: Claude Co-Work launched recently and changes the daily workflow of non-technical people and solo creators. How does it change our relationship with AI compared to a standard chatbot? Can we build an entire app if we don’t understand code?

Eleanor: It depends on what you define as an app. If it must have top-tier UI polish, run at massive scale, and serve millions, that’s still a complete engineering discipline.

But for many things you do every day: I’m teaching and lecturing, I need slides, handouts, booklets, and nice images. I’m not artistic, but an agent can produce amazing quality: beautiful images and layouts. I define the goal and what good looks like, provide context, and it creates it. I didn’t write a line of code. I would have had to hire someone; I can’t do it myself.

Julien: You’re a one-person company, and we’ll likely see more of those. Anthropic also launched agent skills recently. Can you tell us more? Are they the missing layer for personalised AI?

Eleanor: I’m very excited about skills. It’s most of what I do now: convincing people they’re important and helping them learn to think about them.

Agent skills are an open standard, not just an Anthropic thing. They’re the apps of the future. An agentic platform is a model plus an execution environment. It has strong general-purpose capabilities, but it doesn’t know the specific things you need done. Skills are how you teach the agent.

They’re a simple format: a folder with instructions (a Markdown file) and maybe resources like scripts or assets. The challenge isn’t the technique; it’s the mindset: don’t think of apps as windows with buttons. Think in terms of capabilities. What does your virtual employee need to know to do the work you need?

Imagine a shelf of folders behind me: skills for all kinds of work. The agent picks up the right folder when needed. It’s incredibly powerful. In workshops, there’s always a moment when it clicks and people realise they now have an agent that understands their business, their work, their world, and can continuously evolve.
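The folder format she describes can be sketched roughly like this: a skill is a directory containing a SKILL.md with a short frontmatter header and plain-language instructions, plus optional supporting assets. The skill name, description, and steps below are invented for illustration.

```markdown
---
name: quarterly-report
description: Assemble the quarterly report from the company's sales exports
---

When asked for a quarterly report:

1. Pull the latest sales export from the shared drive.
2. Summarise revenue by region and flag anomalies.
3. Fill in the report template stored alongside this file.
```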

Julien: You focus on the jobs to be done: outcome over the tech.

Eleanor: Yes: outcome and context. What does it need to know? What files or information will it need? That goes into the skill.

Julien: When you realise it actually works and has a positive impact, there’s a real wow moment. It feels like that science-fiction computer is finally here.

Eleanor: Yes.

Julien: A few years ago we talked a lot about no-code as a way for non-technical people to build prototypes and MVPs. Is it over for no-code? Has it morphed into agents?

Eleanor: No-code won. No one has to write code anymore (I’m exaggerating slightly; there are still specialised cases, but not for long).

The original no-code tools were “cheating” a bit: you were still coding, just by connecting icons. But they showed how much people want to build and create, and how much coding complexity got in the way. Anyone who wanted to build something and couldn’t because coding stood in the way is now liberated.

Julien: We’re heading into an era of hyper-automation. Sam Altman said we might see a one-person billion-dollar company. Is that possible, or marketing?

Eleanor: A one-person billion-dollar company is unlikely because if you have a billion dollars, you’ll probably hire someone (a barista, a coach, a massage therapist, anything). But the more grounded claim is true: you can do a lot with very few people.

Julien: If an agent can execute the “how”, what happens to the value of technical skills? Will there be fewer developers, or will their roles evolve?

Eleanor: There will be fewer and fewer, until there are none. These revolutions don’t happen overnight; they happen gradually.

People used to say front-end is easier now but serious programming still needs humans. That is increasingly not true. My background is distributed systems and complex server engineering. I don’t write code anymore. The AI is better than me, and I’m not foolish enough to intervene where I’d make it worse.

Some areas remain: the people building these models still do a lot of work, though they may be replaced too. And hardware and the physical world are still manual, but once you have robots, 3D printers, and other machinery, that too will be automated. It’s gradual, but very fast.

Julien: But don’t we lose control? With cars, they’re so complex now that if it breaks you can’t repair it yourself. If everything ends up automated, don’t we lose the plot?

Eleanor: We are losing control. The best we can hope for is to lose control in a slightly controlled way: to manage the transition so it isn’t catastrophic.

There are catastrophic scenarios, and there are good scenarios: a powerful, benevolent AI taking care of us like a loving parent. We need to make sure we move towards the good future rather than a messy transition that causes suffering. That is the focus.

Julien: Alignment is the issue of our time. Even if AI is benevolent, we still need challenges: sports, climbing mountains, improving ourselves. What’s your take?

Eleanor: I’m going to focus on drumming. I’m a very bad drummer; I started two years ago. I’m a musician and thought it would be fun, but it’s physical and I’m terrible at it. It’s embarrassing. I don’t have much time to practise because I work.

Maybe when I don’t have to work, I’ll practise all day. I’ll never be the best drummer in the world (or the best million), but I’ll find meaning and joy in it because it’s fun.

People don’t have trouble finding meaning. Look at children: before they’re told they need to make money and hit goals, everything is meaningful to them. I don’t think that’s going to change when we’re liberated from work.

Julien: I like this optimistic vision. If we manage alignment well, AI could run society, provide what we need, and make us more present for each other.

We’ve also heard talk about abundance and universal abundance. Some claim money might become irrelevant. Do you think money could become obsolete and abundance is in reach?

Eleanor: I think abundance is within reach if we don’t mess it up. I’m not sure money goes away. Money is a signalling mechanism. Even in abundance, we’d want ways to signal what we value: “I like your dancing; do more” or “I want another statue”. If not money, we’d invent something similar.

The hope is that scarcity goes away: people shouldn’t worry about food or a roof over their head in such a rich world. We need to ensure everyone benefits.

Julien: Abundance is about material needs, but some things remain scarce: performances, moments, experiences. Perhaps we refocus on those.

Eleanor: I hope so. But we should also notice: we already live in an age of abundance compared to human history, and yet there are hungry people and people without roofs in the rain. We’ve not done so well. It’s not deterministic that we’ll get a utopia; we need to create it.

Julien: There’s debate about meaning: many people define themselves partly by their work. If AI does everything better, could there be a meaning crisis? Could the transition be hard for some people?

Eleanor: The transition is what worries me. I’m optimistic long-term. Humans are good at finding meaning; it’s a core competency.

But if people lose jobs, have economic worries, or lose status, that can be painful. Telling them “don’t worry, in a few years there will be abundance” doesn’t help when they’re in pain now. We need to think seriously about making the transition less painful.

Julien: It depends how long the transition takes and how we ease it. Many people might deny what’s coming, or fear it. We need conversation and debate about how to handle the future.

Eleanor: There’s a lot of denialism and ignorance. In some ways it’s our responsibility to talk about it and help people think through solutions. If people bury their heads in the sand and wake up in a radically different world, that’s quite a shock.

Julien: The world has changed significantly since ChatGPT, only three years ago. It’s still the infancy of what’s coming. Many political and business leaders don’t see it, and we don’t talk about it enough.

Looking ahead to 2030: what will have changed? Is AGI in sight?

Eleanor: Predictions are hard. AGI is confusing: I think we have AGI in a way, and it just keeps improving. The threshold is unclear.

What I have now with agents and strong models is artificial, general, and highly capable, and it’s only getting better. Diffusion will increase, and likely exponentially. People sometimes say, “I saw a trick, you vibe-coded an app, but I don’t see it in society.” The mistake is not realising it’s diffusing quickly.

In a few years it will be everywhere and involved in everything people do. Robotics will improve: it won’t be just on screens; it’ll be in the world.

Prices will continue to fall. Today it’s still a luxury: you need to be relatively wealthy to afford subscriptions. But prices are decreasing quickly; in four years it could feel like a low-cost utility.

It won’t be radically different tech; it will be today’s tech distributed everywhere. Your watch has it, your toaster has it. Cars drive themselves. You see buildings built by autonomous robots. It sounds like science fiction, but we should expect something that looks like today’s science-fiction fantasies in three or four years.

Julien: Diffusion depends on affordability. Subscriptions are expensive for most people on Earth. But open-source models and 3D printing could make it a commodity. Even today you don’t need the latest model to do useful work.

If we can 3D print simple robots to do tasks, the value creation could be enormous for most people on Earth.

Eleanor: Yes. Inference is expensive today, but part of that is how companies sell it. Some open models are already near-equivalent at a fraction of the cost. Companies are building data centres and power; there’s catch-up. Once it’s built, prices will sink again.

It’ll be different when nobody has to think, “Should I spend my monthly allowance on this?” Instead you’ll run millions of experiments. Maybe your robot builds lots of things; if you don’t like them, it takes them apart and builds something else. It’s an exciting future, and it’s probably coming sooner than people imagine.

Julien: We need to get ready, spread the word, and help people benefit in the best way possible. Thanks a lot; it’s been a pleasure.


