Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Inside the Future of AI‑Powered Robotics with Tim Chung | Cozy AI Kitchen

From: Microsoft Developer
Duration: 9:16
Views: 148

Welcome back to the Cozy AI Kitchen, where we stir up warm conversations about the coolest tech innovations. In this episode, host John Maeda welcomes Tim Chung, GM of Autonomy and Robotics at Microsoft, for an energizing, hands-on look at how AI is transforming robotics — and what it means for the future of human–robot collaboration.
From action models to physical intelligence, from Teams-connected robots to LED-powered robot dances, this episode brings you right into the next frontier: robots as teammates.
Whether you’re a developer, roboticist, maker, or simply “robotics curious,” this is your perfect deep-dive into how generative AI is opening entirely new possibilities in physical autonomy.

⏱️ CHAPTERS
00:00 – The robots are already here
00:16 – Meet Tim Chung, Microsoft’s robot whisperer
00:40 – Why the era of AI + robotics is different
01:06 – From language to action: action tokens explained
01:32 – Modeling randomness and real‑world unpredictability
02:01 – Introducing the Rainbow Robot
02:20 – Robots joining a Microsoft Teams meeting
03:13 – The Rainbow Robot demo: LEDs, zero‑position, and dance
05:30 – What’s next: the future of robots as teammates
06:20 – Advice for aspiring roboticists
07:00 – Closing thoughts and future possibilities

🎯 WHAT YOU’LL LEARN
- How generative AI is shaping physical intelligence and real‑world robotics
- What action tokens are and why roboticists rely on them
- How robots can join Microsoft Teams as collaborators
- Why randomness, probability, and generalization matter in robot control
- How robotic agents can shift from tools to teammates
- The emerging interplay between LLMs, vision models, and physical action models

👥 SPEAKERS
Tim Chung
GM, Autonomy and Robotics, Microsoft
https://www.linkedin.com/in/timothy-h-chung/

John Maeda
Host, Cozy AI Kitchen
VP of Design and Artificial Intelligence, Microsoft
https://www.linkedin.com/in/johnmaeda/

🔗 RESOURCES & LINKS
🚀 Try Azure for free
https://aka.ms/AzureFreeTrialYT
📚 Explore Microsoft Learn
https://learn.microsoft.com
📚 Learn more about AI tools for creators:
https://learn.microsoft.com/ai
📺 Watch all Cozy AI Kitchen episodes
https://aka.ms/CAIK-YTPlaylist

📌 HASHTAGS
#CozyAIKitchen #MicrosoftDeveloper #Robotics #AI #AutonomousSystems #AzureAI #GenerativeAI #RobotTeammates #ActionModels #PhysicalIntelligence #MicrosoftRobotics #AIInTheRealWorld #Developers #TechDemo

Read the whole story
alvinashcraft
6 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft's new CLI tool for Windows App Devs

From: Noraa on Tech
Duration: 0:18
Views: 280


Making AI Apps Enterprise-Ready with Microsoft Purview and Microsoft Foundry


Building AI apps is easy. Shipping them to production is not.

Microsoft Foundry lets developers bring powerful AI apps and agents to production in days. But managing safety, security, and compliance for each one quickly becomes the real bottleneck. Every enterprise AI project hits the same wall: security reviews, data classification, audit trails, DLP policies, retention requirements. Teams spend months building custom logging pipelines and governance systems that never quite keep up with the app itself.

There is a faster way.

Enable Purview & Ship Faster! 

Microsoft Foundry now includes native integration with Microsoft Purview. When you enable it, every AI interaction in your subscription flows into the same enterprise data governance infrastructure that already protects your Microsoft 365 and Azure data estate.

No SDK changes. No custom middleware. No separate audit system to maintain.

Here is what you get:

  • Visibility within 24 hours. Data Security Posture Management (DSPM) shows you total interactions, sensitive data detected in prompts and responses, user activity across AI apps, and insider risk scoring. This dashboard exists the moment you flip the toggle.
  • Automatic data classification. The same classification engine that scans your Microsoft 365 tenant now scans AI interactions. Credit card numbers, health information, SSNs, and your custom sensitive information types are all detected automatically.
  • Audit logs you do not have to build. Every AI interaction is logged in the Purview unified audit log. Timestamps, user identity, the AI app involved, files accessed, sensitivity labels applied. When legal needs six months of AI interactions for an investigation, the data is already there.
  • DLP policy enforcement. Configure policies that block prompts containing sensitive information before they reach the model. This uses the same DLP framework you already know.
  • eDiscovery, retention, and communication compliance. Search AI interactions alongside email and Teams messages. Set retention policies by selecting "Enterprise AI apps" as the location. Detect harmful or unauthorized content in prompts.

How to Enable

Prerequisite: You need the “Azure AI Account Owner” role assigned by your Subscription Owner.

  1. Open the Microsoft Foundry portal (make sure you are in the new portal)
  2. Select Operate from the top navigation
  3. Select Compliance in the left pane
  4. Select the Security posture tab
  5. Select the Azure Subscription
  6. Enable the toggle next to Microsoft Purview

Repeat these steps for each additional subscription you want covered.

By enabling this toggle, data exchanged within Foundry apps and agents starts flowing to Purview immediately. Purview reports populate within 24 hours.

What shows up in Purview?

Purview Data Security Admins:

Go to the Microsoft Purview portal, open DSPM, and follow the recommendation to set up “Secure interactions from enterprise AI apps”.

Navigate to DSPM > Discover > Apps and Agents to review and monitor the Foundry apps built in your organization.

Navigate to DSPM > Activity Explorer to review the activity on a given agent/application.

What About Cost?

Enabling the integration is free. Audit Standard is included for Foundry apps. You will only be charged for the data security policies you set up for governing Foundry data.

A Real-World Scenario: The Internal HR Assistant

Consider a healthcare company building an internal AI agent for HR questions.

The Old Way: The developer team spends six weeks building a custom logging solution to strip PII/PHI from prompts to meet HIPAA requirements. They have to manually demonstrate these logs to compliance before launch.
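
The sort of custom scrubbing "the old way" entails might look like this hypothetical sketch (the regex patterns and the MRN patient-ID format are illustrative only, nowhere near real HIPAA controls):

```typescript
// Hypothetical sketch of "the old way": hand-rolled PII/PHI scrubbing
// a team might bolt onto every prompt before logging. The patterns and
// the MRN format below are illustrative, not real compliance controls.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],          // US Social Security numbers
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"],        // rough credit-card shapes
  [/\bMRN[-: ]?\d{6,10}\b/gi, "[PATIENT-ID]"],  // a made-up patient-ID format
];

function scrub(prompt: string): string {
  // Apply each redaction in order; order matters, since the SSN pattern
  // is more specific than the credit-card pattern.
  return REDACTIONS.reduce(
    (text, [pattern, label]) => text.replace(pattern, label),
    prompt,
  );
}

console.log(scrub("My SSN is 123-45-6789")); // "My SSN is [SSN]"
```

Every pattern in a filter like this has to be written, tested, logged, and audited by the team itself; the Purview route inherits the classification engine instead.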

The Foundry Way: The team enables the Purview toggle.

  • Detection: Purview automatically flags if an employee pastes a patient ID into the chat.
  • Retention: The team selects "Enterprise AI Apps" in their retention policy, ensuring all chats are kept for the required legal period.
  • Outcome: The app ships on schedule because Compliance trusts the controls are inherited, not bolted on.

Takeaway

Microsoft Purview DSPM is a game changer for organizations looking to adopt AI responsibly. By integrating with Microsoft Foundry, it provides a comprehensive framework to discover, protect, and govern AI interactions, ensuring compliance, reducing risk, and enabling secure innovation.

We built this integration because teams kept spending months on compliance controls that already exist in Microsoft's stack.

The toggle is there. The capabilities are real. Your security team already trusts Purview. Your compliance team already knows the tools.

Enable it. Ship your agent. Let the infrastructure do what infrastructure does best: work in the background while you focus on what your application does.

Additional Resources

Documentation: Use Microsoft Purview to manage data security & compliance for Microsoft Foundry | Microsoft Learn


GCast 209: Running Tests in an Azure Playwright Workspace


GCast 209: Running Tests in an Azure Playwright Workspace

Learn how to run Playwright automated UI tests in the cloud using Azure Playwright Workspaces.


make.ts


make.ts

Up Enter Up Up Enter Up Up Up Enter

Sounds familiar? This is how I historically have been running benchmarks and other experiments requiring a repeated sequence of commands — type them manually once, then rely on shell history (and maybe some terminal splits) for reproduction. These past few years I’ve arrived at a much better workflow pattern — make.ts. I was forced to adopt it once I started working with multiprocess applications, where manually entering commands is borderline infeasible. In retrospect, I should have adopted the workflow years earlier.

The Pattern

Use a file for interactive scripting. Instead of entering a command directly into the terminal, write it to a file first, and then run the file. For me, I type stuff into make.ts and then run ./make.ts in my terminal (Ok, I need one Up Enter for that). I want to be clear here, I am not advocating writing “proper” scripts, just capturing your interactive, ad-hoc command to a persistent file.

There are many benefits relative to the Up Up Up workflow:

  • Real commands tend to get large, and it is so much nicer to use a real 2D text editor rather than the shell’s line editor.
  • If you need more than one command, you can write several commands, and still run them all with a single key (before make.ts, I was prone to constructing rather horrific && conjuncts for this reason).
  • With a sequence of command outlined, you nudge yourself towards incrementally improving them, making them idempotent, and otherwise investing into your own workflow for the next few minutes, without falling into the YAGNI pit from the outset.
  • At some point you might realize after, say, running a series of ad-hoc benchmarks interactively, that you’d rather write a proper script which executes a collection of benchmarks with varying parameters. With the file approach, you already have the meat of the script implemented, and you only need to wrap it in a couple of fors and ifs.
  • Finally, if you happen to work with multi-process projects, you’ll find it easier to manage concurrency declaratively, spawning a tree of processes from a single script, rather than switching between terminal splits.

Details

Use a consistent filename for the script. I use make.ts, and so there’s a make.ts in the root of most projects I work on. Correspondingly, I have a make.ts line in the project’s .git/info/exclude — the .gitignore file which is not shared. The fixed name reduces fixed costs — whenever I need complex interactivity, I don’t need to come up with a name for a new file; I open my pre-existing make.ts, wipe whatever was there, and start hacking. Similarly, I have ./make.ts in my shell history, so fish autosuggestions work for me. At one point, I had a VS Code task to run make.ts, though I now use a terminal editor.

Start the script with a hash bang (#!/usr/bin/env -S deno run --allow-all in my case) and chmod a+x make.ts, to make it easy to run.
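
Put together, a freshly wiped make.ts starts out tiny. A minimal sketch, assuming Deno and the dax library (introduced below; any subprocess helper works):

```typescript
#!/usr/bin/env -S deno run --allow-all
// Minimal make.ts starter: a sketch assuming Deno plus the dax library;
// substitute your own subprocess helper if you use another runtime.
import $ from "jsr:@david/dax";

// Capture the ad-hoc command here instead of typing it into the shell,
// then reproduce the whole experiment with a single ./make.ts.
await $`ls -la`;
```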

Write the script in a language that:

  • you are comfortable with,
  • doesn’t require huge setup,
  • makes it easy to spawn subprocesses,
  • has good support for concurrency.

For me, that is TypeScript. Modern JavaScript is sufficiently ergonomic, and structural, gradual typing is a sweet spot that gives you reasonable code completion, but still allows brute-forcing any problem by throwing enough stringly dicts at it.

JavaScript’s tagged template syntax is brilliant for scripting use-cases:

function $(literal, ...interpolated) {
  console.log({ literal, interpolated });
}

const dir = "hello, world";
$`ls ${dir}`;

prints

{
    literal: [ "ls ", "" ],
    interpolated: [ "hello, world" ]
}

What happens here is that $ gets a list of literal string fragments inside the backticks, and then, separately, a list of values to be interpolated in-between. It could concatenate everything to just a single string, but it doesn’t have to. This is precisely what is required for process spawning, where you want to pass an array of strings to the exec syscall.
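
To make that connection concrete, here is a sketch of how a tagged template can be turned directly into an argv array (the argv helper is hypothetical, not dax’s actual implementation): literal fragments get split on whitespace, while interpolated values stay whole, so they are never re-parsed by a shell.

```typescript
// Hypothetical sketch: turn a tagged template into an argv array.
// Literal fragments are split on whitespace; each interpolated value
// is kept as a single argument, so "hello, world" survives intact.
function argv(literal: readonly string[], ...interpolated: string[]): string[] {
  const out: string[] = [];
  literal.forEach((fragment, i) => {
    for (const word of fragment.split(/\s+/)) {
      if (word) out.push(word);
    }
    if (i < interpolated.length) out.push(interpolated[i]);
  });
  return out;
}

const dir = "hello, world";
console.log(argv`ls ${dir}`); // [ "ls", "hello, world" ]
```

An array like this can be handed to the exec-style spawn API directly, which is why nothing ever needs to be concatenated into one shell string.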

Specifically, I use the dax library with Deno, which is excellent as a single-binary batteries-included scripting environment (see <3 Deno). Bun has a dax-like library in the box and is a good alternative (though I personally stick with Deno because of deno fmt and deno lsp). You could also use the famous zx, though be mindful that it uses your shell as a middleman, something I consider to be sloppy (explanation).

While dax makes it convenient to spawn a single program, async/await is excellent for herding a slither of processes:

await Promise.all([
    $`sleep 5`,
    $`sleep 10`,
]);
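
The same herding works for anything awaitable. A process-free sketch, with timers standing in for the dax process handles, shows that the total wait is the longest task rather than the sum:

```typescript
// Process-free sketch of herding with Promise.all: timers stand in for
// process handles. Total wall time is max(50, 100)ms, not 150ms,
// because the tasks run concurrently and results keep their order.
const delay = (ms: number, label: string): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(label), ms));

async function herd(): Promise<string[]> {
  const start = Date.now();
  const results = await Promise.all([
    delay(50, "replica-0"),
    delay(100, "benchmark"),
  ]);
  console.log(`done in ~${Date.now() - start}ms`); // roughly 100ms
  return results;
}

herd().then((r) => console.log(r)); // [ "replica-0", "benchmark" ]
```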

Concrete Example

Here’s how I applied this pattern earlier today. I wanted to measure how a TigerBeetle cluster recovers from a crash of the primary. The manual way to do that would be to create a bunch of ssh sessions to several cloud machines, format datafiles, start replicas, and then create some load. I almost started to split my terminal up, but then figured I could do it the smart way.

The first step was cross-compiling the binary, uploading it to the cloud machines, and running the cluster (using my box from the other week):

await $`./zig/zig build -Drelease -Dtarget=x86_64-linux`;
await $`box sync 0-5 ./tigerbeetle`;
await $`box run 0-5
    ./tigerbeetle format --cluster=0 --replica-count=6 --replica=?? 0_??.tigerbeetle`;
await $`box run 0-5
    ./tigerbeetle start --addresses=?0-5? 0_??.tigerbeetle`;

Running the above a second time, I realized that I need to kill the old cluster first, so two new commands are “interactively” inserted:

await $`./zig/zig build -Drelease -Dtarget=x86_64-linux`;
await $`box sync 0-5 ./tigerbeetle`;

await $`box run 0-5 rm 0_??.tigerbeetle`.noThrow();
await $`box run 0-5 pkill tigerbeetle`.noThrow();

await $`box run 0-5
    ./tigerbeetle format --cluster=0 --replica-count=6 --replica=?? 0_??.tigerbeetle`;
await $`box run 0-5
    ./tigerbeetle start --addresses=?0-5? 0_??.tigerbeetle`;

At this point, my investment in writing this file and not just entering the commands one-by-one already paid off!

The next step is to run the benchmark load in parallel with the cluster:

await Promise.all([
    $`box run 0-5 ./tigerbeetle start     --addresses=?0-5? 0_??.tigerbeetle`,
    $`box run 6   ./tigerbeetle benchmark --addresses=?0-5?`,
])

I don’t need two terminals for two processes, and I get to copy-paste-edit the mostly same command.

For the next step, I actually want to kill one of the replicas, and I also want to capture live logs, to see in real-time how the cluster reacts. This is where the 0-5 multiplexing syntax of box falls short, but, given that this is JavaScript, I can just write a for loop:

const replicas = range(6).map((it) =>
    $`box run ${it}
        ./tigerbeetle start --addresses=?0-5? 0_??.tigerbeetle
        &> logs/${it}.log`
        .noThrow()
        .spawn()
);

await Promise.all([
    $`box run 6 ./tigerbeetle benchmark --addresses=?0-5?`,
    (async () => {
        await $.sleep("20s");
        console.log("REDRUM");
        await $`box run 1 pkill tigerbeetle`;
    })(),
]);

replicas.forEach((it) => it.kill());
await Promise.all(replicas);

At this point, I do need two terminals. One runs ./make.ts and shows the log from the benchmark itself; the other runs tail -f logs/2.log to watch the next replica become primary.

I have definitely crossed the line where writing a script makes sense, but the neat thing is the gradual evolution up to this point. There isn’t a discontinuity where I need to spend 15 minutes trying to shape various ad-hoc commands from five terminals into a single coherent script; it was in the file to begin with.

And then the script is easy to evolve. Once you realize that it’s a good idea to also run the same benchmark against a different, baseline version of TigerBeetle, you replace ./tigerbeetle with ./${tigerbeetle} and wrap everything into

async function benchmark(tigerbeetle: string) {
    // ...
}

const tigerbeetle = Deno.args[0];
await benchmark(tigerbeetle);

and invoke it from the shell:

$ ./make.ts tigerbeetle-baseline
$ ./make.ts tigerbeetle

A bit more hacking, and you end up with a repeatable benchmark schedule for a matrix of parameters:

for (const attempt of [0, 1])
for (const tigerbeetle of ["baseline", "tigerbeetle"])
for (const mode of ["normal", "viewchange"]) {
    const results = $.path(
        `./results/${tigerbeetle}-${mode}-${attempt}`,
    );
    await benchmark(tigerbeetle, mode, results);
}

That’s the gist of it. Don’t let the shell history be your source of truth; capture commands into the file first!


GitHub Copilot app modernization for C++ is now in Public Preview


With the launch of Visual Studio 2026, we announced a Private Preview of GitHub Copilot app modernization for C++, which reduces the cost of adopting the latest version of the MSVC Build Tools. Feedback from our many Private Preview participants drove improvements that benefit all users: we added support for CMake projects, reduced hallucinations, removed several critical failures, improved Copilot’s behavior when it encounters an internal compiler error, and reinforced Copilot’s understanding of when project files need to be modified for the upgrade.

Here’s what one of our Private Preview participants said about their experience:

“Having Copilot guide the upgrade flow and surface suggested changes in context has made that process smoother than doing it entirely by hand or by another agent.” – Private Preview participant

We are happy to announce that this feature is now available to all C++ users as a Public Preview in Visual Studio 2026 Insiders.

To get started, check out the documentation on Microsoft Learn.

What to expect

After launching GitHub Copilot app modernization, Copilot will examine your project to see if there are any steps to take to update your project settings to move to the newer MSVC version. If so, it’ll assist you in making those changes.

Assessment

After the settings have been updated, Copilot will do an initial build to assess whether any issues block your upgrade, such as stricter conformance, warnings whose warning level has changed, or non-standard extensions that have been deprecated or removed. After the assessment is complete, Copilot checks with you to confirm accuracy and gives you a chance to provide further instructions, such as ignoring specific issues or entire categories.

assessment.md file open in Visual Studio showing the assessment that GitHub Copilot generated after upgrading MSVC

Planning

After you and Copilot agree on the assessment, it will move into the planning stage, where Copilot will propose solutions to all the issues that need to be addressed. Again, it will produce a detailed description of these solutions and its reasoning for applying them, and it will check with you for any additional information. If you don’t like the proposed solution, you can direct it down another path.

plan.md file open in Visual Studio showing the plan that GitHub Copilot generated for addressing build issues

Execution

Once the plan is set, Copilot will break the plan down into concrete tasks for execution. You can direct it to approach the implementation in ways that fit your organization’s processes, such as by keeping similar changes in the same commit or using a particular style guideline when editing the code. Copilot will execute the tasks and initiate another build to check that all issues are resolved. If they aren’t, it will iterate until it has resolved the issues for you.

Execution summary provided by GitHub Copilot after addressing build issues

You are in control

At every step of the way, you can shape Copilot’s behavior, guiding it towards solutions that fit your own expectations, saving you time researching, diagnosing issues, designing solutions, and implementing those solutions. It can take a multi-person, multi-week task of upgrading your build tools and turn it into something you do on the same day as the release of the new tools.

Talk to us!

We are excited for you to try out this feature. Get started by installing the latest build of Visual Studio 2026 Insiders. Let us know how well this feature is working for you and how we can make it even better. If you have any questions or general comments about the feature, feel free to leave a comment on this blog post. If you want to suggest an improvement, you can use the Help > Send Feedback menu directly in Visual Studio to post on Developer Community.

The post GitHub Copilot app modernization for C++ is now in Public Preview appeared first on C++ Team Blog.
