Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

1M context is now generally available for Opus 4.6 and Sonnet 4.6


Here's what surprised me:

Standard pricing now applies across the full 1M window for both models, with no long-context premium.

OpenAI and Google both charge more once a prompt exceeds a certain threshold: 200,000 tokens for Gemini 3.1 Pro and 272,000 for GPT-5.4.
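To see what the absence of a long-context premium means in practice, here is a small worked comparison. The per-million-token rates below are purely illustrative assumptions, not any vendor's actual prices; only the tiered-pricing structure mirrors the thresholds described above.

```python
# Hypothetical pricing sketch: flat pricing vs. a long-context premium.
# The rates below are illustrative assumptions, NOT real vendor prices.

def flat_cost(tokens: int, rate_per_mtok: float) -> float:
    """Flat pricing: every input token costs the same rate."""
    return tokens / 1_000_000 * rate_per_mtok

def tiered_cost(tokens: int, base_rate: float,
                premium_rate: float, threshold: int) -> float:
    """Tiered pricing: tokens beyond the threshold cost a higher rate."""
    if tokens <= threshold:
        return tokens / 1_000_000 * base_rate
    base = threshold / 1_000_000 * base_rate
    premium = (tokens - threshold) / 1_000_000 * premium_rate
    return base + premium

# A 1M-token prompt at a flat $3/MTok, vs. $3/MTok base with a
# hypothetical $6/MTok premium past a 200,000-token threshold:
print(flat_cost(1_000_000, 3.0))                   # flat: $3.00
print(tiered_cost(1_000_000, 3.0, 6.0, 200_000))   # tiered: about $5.40
```

The same prompt costs nearly twice as much under the tiered scheme, which is why removing the long-context premium matters for full-window workloads.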

Tags: ai, generative-ai, llms, anthropic, claude, llm-pricing, long-context

Read the whole story
alvinashcraft
17 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Why physical AI is becoming manufacturing’s next advantage


For decades, manufacturers have pursued automation to drive efficiency, reduce costs, and stabilize operations. That approach delivered meaningful gains, but it is no longer enough.

Today’s manufacturing leaders face a different challenge: how to grow amid labor constraints, rising complexity, and increasing pressure to innovate faster without sacrificing safety, quality, or trust. The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world.

This is where physical AI—intelligence that can sense, reason, and act in the real world—marks a decisive shift. And it is why Microsoft and NVIDIA are working together to help manufacturers move from experimentation to production at industrial scale.

The industrial frontier: Intelligence and trust, not just automation

Most early AI adoption focused on narrow optimization: automating tasks, improving utilization, and cutting costs. While valuable, that phase often created new friction, including skills gaps, governance concerns, and uncertainty about long‑term impact. And while the use cases were plentiful, few were strategic.

The industrial frontier represents a different approach. Rather than asking how much work machines can replace, frontier manufacturers ask how AI can expand human capability, accelerate innovation, and unlock new forms of value while remaining trustworthy and controllable.

Across industries, companies that successfully move into this frontier phase share two non‑negotiables:

  • Intelligence: AI systems must understand how the business actually handles its data, workflows, and institutional knowledge.
  • Trust: As AI begins to act in high‑stakes environments, organizations must retain security, governance, and observability at every layer.

Without intelligence, AI becomes generic. Without trust, adoption stalls.

Why manufacturing is the proving ground for physical AI

Manufacturing is uniquely positioned at the center of this shift.

AI is no longer confined to planning or analytics. It is moving into physical execution: coordinating machines, adapting to real‑world variability, and working alongside people on the factory floor. Robotics, autonomous systems, and AI agents must now perceive, reason, and act in dynamic environments.

This transition exposes a critical gap. Traditional automation excels at repetition but struggles with adaptability. Human workers bring judgment and context but are constrained by scale. Physical AI closes that gap by enabling human‑led, AI‑operated systems, where people set intent and intelligent systems execute, learn, and improve over time. Humans remain essential to success at scale.

Microsoft and NVIDIA: Accelerating physical AI at scale

Physical AI cannot be delivered through point solutions. It requires enterprise-grade, agent-driven toolchains and workflows for development, deployment, and operations that connect simulation, data, AI models, robotics, and governance into a coherent system.

NVIDIA is building the AI infrastructure that makes physical AI possible, including accelerated computing, open models, simulation libraries, and robotics frameworks and blueprints that enable the ecosystem to build autonomous robotics systems that can perceive, reason, plan, and take action in the physical world. Microsoft complements this with a cloud and data platform designed to operate physical AI securely, at scale, and across the enterprise.

Together, Microsoft and NVIDIA are enabling manufacturers to move beyond pilots toward production‑ready physical AI systems that can be developed, tested, deployed, and continuously improved across heterogeneous environments spanning the product lifecycle, factory operations, and supply chain.

From intelligence to action: Human-agent teams in the factory

At the industrial frontier, AI is not a standalone system, but a digital teammate.

When AI agents are grounded in the proper operational data, embedded in human workflows, and governed end to end, they can assist with tasks such as:

  • Optimizing production lines in real time
  • Coordinating maintenance and quality decisions
  • Adapting operations to supply or demand disruptions
  • Accelerating engineering and product lifecycle decisions

For example, manufacturers are beginning to use simulation‑grounded AI agents to evaluate production changes virtually before deploying them on the factory floor, reducing risk while accelerating decision‑making.

Crucially, frontier manufacturers design these systems so humans remain in control. AI executes, monitors, and recommends, while people provide intent, oversight, and judgment. This balance allows organizations to move faster without losing confidence or control.

The role of trust in scaling physical AI

As physical AI systems scale, trust becomes the limiting factor.

Manufacturers must ensure that AI systems are secure, observable, and operating within policy, especially when they influence safety‑critical or mission‑critical processes. Governance cannot be an afterthought; it must be engineered into the platform itself.

This is why frontier manufacturers treat trust as a first‑class requirement, pairing innovation with visibility, compliance, and accountability. Only then can physical AI move from promising demonstrations to enterprise‑wide deployment.

Why this moment matters—and what’s next

The convergence of AI agents, robotics, simulation, and real‑time data marks an inflection point for manufacturing. What was once experimental is becoming operational. What was once siloed is becoming connected.

At NVIDIA GTC 2026, Microsoft and NVIDIA will demonstrate how this collaboration supports physical AI systems that manufacturers can deploy today and scale responsibly tomorrow. From simulation‑driven development to real‑world execution, the focus is on helping manufacturers cross the industrial frontier with confidence.

For manufacturing leaders, the question is no longer whether physical AI will reshape operations, but how quickly they can adopt it responsibly, at scale, and with trust built in from the start.

Discover more with Microsoft at NVIDIA GTC 2026.

This content was produced by Microsoft. It was not written by MIT Technology Review’s editorial staff.


Vibe Coding Enterprise Data Apps with Replit and Databricks

Summary: Manny (from Replit) and Cong & Denis (from Databricks) walked through our new Replit–Databricks connector, showing how we vibe coded enterprise data apps live. Databricks centralizes enterprise data and AI for tens of thousands of customers, so our Replit apps could tap governed tables, models, and warehouses without copying sensitive data around. We used Replit's Databricks connector to pull real sample datasets into an app, then let the agent vibe code a 3D weather globe whose data stayed safely inside Databricks. Replit Agent handled the scaffolding and UI while Databricks handled scale and governance, so Manny could ship a working data tool in minutes instead of weeks of traditional BI work. Databricks Genie acted as an in-app data copilot, answering natural-language questions with cited tables so builders saw exactly which data every insight came from.


What’s New in Agent Skills: Code Skills, Script Execution, and Approval for Python


Code-Defined Skills, Script Execution, and Approval for Agent Skills in Python

When we introduced Agent Skills for Microsoft Agent Framework, you could package domain expertise as file-based skill directories and have agents discover and load them on demand. Now, the Python SDK takes skills further — you can define skills entirely in code, let agents execute scripts bundled with skills, and gate script execution behind human approval. These additions give you more flexibility in how you author skills, more power in what agents can do with them, and more control over when agents are allowed to act.

Code-Defined Skills

Until now, every skill started as a directory on the filesystem with a SKILL.md file. That works well for static, shareable knowledge packages, but not every skill fits that mold. Sometimes skill content comes from a database. Sometimes you want skill definitions to live alongside the application code that uses them. And sometimes a resource needs to execute logic at read time rather than serve static text.

Code-defined skills address these scenarios. You create a Skill instance in Python with a name, description, and instruction content — no files required:

from textwrap import dedent
from agent_framework import Skill, SkillResource, SkillsProvider

code_style_skill = Skill(
    name="code-style",
    description="Coding style guidelines and conventions for the team",
    content=dedent("""\
        Use this skill when answering questions about coding style,
        conventions, or best practices for the team.
    """),
    resources=[
        SkillResource(
            name="style-guide",
            content=dedent("""\
                # Team Coding Style Guide
                - Use 4-space indentation (no tabs)
                - Maximum line length: 120 characters
                - Use type annotations on all public functions
            """),
        ),
    ],
)

skills_provider = SkillsProvider(skills=[code_style_skill])

The agent uses code-defined skills exactly like file-based ones — calling load_skill to retrieve instructions and read_skill_resource to fetch resources. From the agent’s perspective, there’s no difference.

Dynamic Resources

Static content is useful, but sometimes you need resources that return fresh data each time they’re read. The @skill.resource decorator registers a function as a resource. Both sync and async functions are supported:

import os
from typing import Any

from agent_framework import Skill

project_info_skill = Skill(
    name="project-info",
    description="Project status and configuration information",
    content="Use this skill for questions about the current project.",
)

@project_info_skill.resource
def environment() -> Any:
    """Get current environment configuration."""
    env = os.environ.get("APP_ENV", "development")
    region = os.environ.get("APP_REGION", "us-east-1")
    return f"Environment: {env}, Region: {region}"

@project_info_skill.resource(name="team-roster", description="Current team members")
def get_team_roster() -> Any:
    """Return the team roster."""
    return "Alice Chen (Tech Lead), Bob Smith (Backend Engineer)"

When the decorator is used without arguments (@skill.resource), the function name becomes the resource name and the docstring becomes the description. Use @skill.resource(name="...", description="...") to set them explicitly. The function is called each time the agent reads the resource, so it can pull up-to-date data from databases, APIs, or environment variables.
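The registration mechanics can be sketched in plain Python. The `MiniSkill` class below is a simplified stand-in for the SDK, not the real implementation; it only illustrates how a decorator that works both bare and with arguments can record functions in a registry and call them afresh on each read.

```python
class MiniSkill:
    """Simplified sketch of a skill with callable resources (not the real SDK)."""

    def __init__(self) -> None:
        self._resources: dict[str, dict] = {}

    def resource(self, fn=None, *, name=None, description=None):
        def register(func):
            # Fall back to the function name and docstring, as described above.
            self._resources[name or func.__name__] = {
                "description": description or (func.__doc__ or "").strip(),
                "func": func,
            }
            return func
        if fn is not None:
            return register(fn)   # bare usage: @skill.resource
        return register           # parameterized: @skill.resource(name=..., ...)

    def read(self, name: str):
        """Call the registered function on every read, so data is always fresh."""
        return self._resources[name]["func"]()

skill = MiniSkill()

@skill.resource
def environment():
    """Current environment."""
    return "Environment: development"

@skill.resource(name="team-roster", description="Current team members")
def get_team_roster():
    return "Alice Chen (Tech Lead)"

print(skill.read("environment"))   # Environment: development
print(skill.read("team-roster"))   # Alice Chen (Tech Lead)
```

Because `read` invokes the function rather than caching its result, a resource backed by a database or API call returns current data on every read.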

Code-Defined Scripts

Use the @skill.script decorator to register a function as an executable script on a skill. Code-defined scripts run in-process as direct function calls:

from agent_framework import Skill

unit_converter_skill = Skill(
    name="unit-converter",
    description="Convert between common units using a conversion factor",
    content="Use the convert script to perform unit conversions.",
)

@unit_converter_skill.script(name="convert", description="Convert a value: result = value × factor")
def convert_units(value: float, factor: float) -> str:
    """Convert a value using a multiplication factor."""
    import json
    result = round(value * factor, 4)
    return json.dumps({"value": value, "factor": factor, "result": result})

A JSON Schema is automatically created from the function’s typed parameters and presented to the agent, so it knows what arguments the script expects and provides them accordingly when calling run_skill_script.
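The schema-generation step can be illustrated with a small sketch. This is not the SDK's actual implementation, and the real output format may differ; it only shows how inspecting a function's signature yields the parameter names and types that map onto a JSON Schema.

```python
import inspect

# Map Python annotations to JSON Schema types (a minimal illustrative subset).
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def schema_for(fn) -> dict:
    """Build a minimal JSON Schema from a function's typed parameters."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)   # parameters without defaults are required
    return {"type": "object", "properties": props, "required": required}

def convert_units(value: float, factor: float) -> str:
    ...

print(schema_for(convert_units))
# {'type': 'object',
#  'properties': {'value': {'type': 'number'}, 'factor': {'type': 'number'}},
#  'required': ['value', 'factor']}
```

Presenting this schema alongside the script is what lets the agent supply well-typed arguments when it calls `run_skill_script`.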

Combining File-Based and Code-Defined Skills

You can mix both approaches in a single SkillsProvider. Pass skill_paths for file-based skills and skills for code-defined ones. If a code-defined skill shares a name with a file-based skill, the file-based version takes precedence:

from pathlib import Path
from agent_framework import Skill, SkillsProvider

my_skill = Skill(
    name="my-code-skill",
    description="A code-defined skill",
    content="Instructions for the skill.",
)

skills_provider = SkillsProvider(
    skill_paths=Path(__file__).parent / "skills",
    skills=[my_skill],
)

Script Execution

Skills can include executable scripts that the agent runs via the run_skill_script tool. How a script runs depends on how it was defined:

  • Code-defined scripts (registered via @skill.script) run in-process as direct function calls. No runner is needed.
  • File-based scripts (.py files discovered in skill directories) require a SkillScriptRunner — any callable matching (skill, script, args) -> Any — that you provide to control how the script is executed.

To enable execution of file-based scripts, pass a script_runner to SkillsProvider:

from pathlib import Path
from agent_framework import Skill, SkillScript, SkillsProvider

def my_runner(skill: Skill, script: SkillScript, args: dict | None = None) -> str:
    """Run a file-based script as a subprocess."""
    import subprocess, sys
    cmd = [sys.executable, str(Path(skill.path) / script.path)]
    if args:
        for key, value in args.items():
            if value is not None:
                cmd.extend([f"--{key}", str(value)])
    # Execute cmd in a subprocess and capture stdout
    # (add real sandboxing and resource limits for production use)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True, timeout=60)
    return result.stdout.strip()

skills_provider = SkillsProvider(
    skill_paths=Path(__file__).parent / "skills",
    script_runner=my_runner,
)

This runner is provided for demonstration purposes only. For production use, implement proper sandboxing, resource limits, input validation, and structured logging.

The runner receives the resolved Skill, SkillScript, and an optional args dictionary. You control the execution environment — how scripts are launched, what permissions they have, and how their output is captured.

Script Approval

When agents can execute scripts, you need a way to keep a human in the loop for sensitive operations. Setting require_script_approval=True on SkillsProvider gates all script execution behind human approval. Instead of executing immediately, the agent pauses and returns approval requests that your application handles:

from agent_framework import Agent, Skill, SkillsProvider

# Create provider with approval enabled
skills_provider = SkillsProvider(
    skills=[my_skill],
    require_script_approval=True,
)

# ... Create an agent with skills_provider as a context provider and start a session
result = await agent.run("Deploy version 2.5.0 to production", session=session)

# Handle approval requests
while result.user_input_requests:
    for request in result.user_input_requests:
        print(f"Script: {request.function_call.name}")
        print(f"Args: {request.function_call.arguments}")

        approval = request.to_function_approval_response(approved=True)
        result = await agent.run(approval, session=session)

When a script is rejected (approved=False), the agent is informed that the user declined and can respond accordingly — explaining the limitation or suggesting an alternative approach.

This pattern gives you the benefits of agent-driven script execution while maintaining the oversight that enterprise environments require.
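The pause-review-resume pattern above can be sketched generically. The `ApprovalGate` class below is a hypothetical illustration of the idea, not the Agent Framework's implementation: instead of executing a script immediately, it records a pending request that a human resolves with an approve or reject decision.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ApprovalGate:
    """Illustrative sketch of gating script execution behind human approval
    (not the Agent Framework implementation)."""
    pending: list = field(default_factory=list)

    def request(self, script: Callable, args: dict) -> dict:
        """Instead of executing, record a pending request for review."""
        req = {"script": script, "args": args}
        self.pending.append(req)
        return req

    def resolve(self, req: dict, approved: bool):
        """Run the script if approved; otherwise report the rejection."""
        self.pending.remove(req)
        if approved:
            return req["script"](**req["args"])
        return "User declined to run this script."

gate = ApprovalGate()

def deploy(version: str) -> str:
    # Hypothetical side-effecting script that should require sign-off.
    return f"Deployed {version}"

req = gate.request(deploy, {"version": "2.5.0"})
print(gate.resolve(req, approved=True))   # Deployed 2.5.0
```

A rejected request never reaches the script at all; the agent only sees the decline message, mirroring the `approved=False` behavior described above.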

Use Cases

Data Validation Pipelines

Package your organization’s data quality rules as a skill with validation scripts. When an analyst asks the agent to check a dataset, it loads the skill, runs the validation script against the data, and reports results — all following the same rules every time. With approval enabled, a data steward can review each validation before it executes.

DevOps Runbooks

Turn your team’s operational procedures into skills with executable scripts for common tasks like log retrieval, health checks, or configuration changes. The agent loads the right runbook based on the issue, and the approval mechanism ensures that no deployment or infrastructure change happens without human sign-off.

Dynamic Knowledge from Internal Systems

Use code-defined skills with dynamic resources to surface live information from internal APIs, databases, or configuration systems. An HR agent can pull current policy details from a CMS at read time rather than relying on a static copy that might be stale.

Security Considerations

Script execution introduces additional responsibility. Agent Skills should be treated like any third-party code you bring into your project:

  • Review before use — Read all skill content and scripts before deploying. Verify that a script’s actual behavior matches its stated intent.
  • Sandbox execution — Run file-based scripts in isolated environments. Limit filesystem, network, and system-level access to only what the skill requires.
  • Use approval for sensitive operations — Enable require_script_approval=True for any skill that can produce side effects in production systems.
  • Audit and log — Record which skills are loaded, which scripts are executed, and what arguments are passed to maintain an audit trail.

Get Started

Code-defined skills, script execution, and script approval are available now in the Python agent-framework package. These features give you more ways to author skills, more capability within skills, and the safety controls needed for production deployments.

To learn more and try it out:

We’re always interested in hearing from you. If you have feedback or questions, reach out to us on the GitHub discussion boards. And if you’ve been enjoying Agent Framework, give us a ⭐ on GitHub.

The post What’s New in Agent Skills: Code Skills, Script Execution, and Approval for Python appeared first on Microsoft Agent Framework.


GPT-5.4 Makes A Splash, AI’s Growth on Mobile, Data Centers Go Off-Grid, Apple’s Diffusion Research

The Batch AI News and Insights: Should there be a Stack Overflow for AI coding agents to share their learnings with each other?

I wrote an amazing React Book


In October 2025, I attended the React Alicante conference in Spain.

It was my first time attending a coding conference. Everything was so beautiful and well presented that I couldn't believe my eyes.

The content was so good that it inspired me and made me realize I didn't know React as well as I should.

So I was motivated to dig into the important parts of React I didn't know: the new features, performance techniques, and patterns.

That moment planted a question in my mind: what if I wrote a book covering these topics, one that would also force me to learn them in depth?

And so I did. I present you: Best Ways to Improve Your React Project.

Topics

  • When you should and should not use useEffect
  • Patterns, like the Composition pattern
  • Architecture and folder structure using the Fractal pattern
  • Handling modern forms with RHF and Zod for Validation
  • Introduction to React Compiler and why you shouldn't memoize manually anymore
  • Virtualization
  • Using AI in your React workflow
  • useTransition and useOptimistic for instant UIs ... and more.

Preview

A free 20-page preview is available in the books section of my website: Free Preview

Where can you find it?

EPUB and Paperback versions on Amazon: Amazon EPUB & Paperback versions

For the PDF version with a 40% discount, use code LAUNCH on Gumroad:
PDF version with 40% discount
