Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Sperm Whales' Communication Closely Parallels Human Language, Study Finds

1 Share
An anonymous reader quotes a report from the Guardian: We may appear to have little in common with sperm whales – enormous, ocean-dwelling animals that last shared a common ancestor with humans more than 90 million years ago. But the whales' vocalized communications are remarkably similar to our own, researchers have discovered. Not only do sperm whales have a form of "alphabet" and form vowels within their vocalizations, but the structure of these vowels behaves in the same way as human speech, the new study has found. Sperm whales communicate in a series of short clicks called codas. Analysis of these clicks shows that the whales can differentiate vowels through the short or elongated clicks or through rising or falling tones, using patterns similar to languages such as Mandarin, Latin and Slovenian. The structure of the whales' communication has "close parallels in the phonetics and phonology of human languages, suggesting independent evolution," the paper, published in the Proceedings B journal, states. Sperm whale coda vocalizations are "highly complex and represent one of the closest parallels to human phonology of any analyzed animal communication system," it added. [...] The new study shows that "sperm whale communication isn't just about patterns of clicks -- it involves multiple interacting layers of structure," said Mauricio Cantor, a behavioral ecologist at the Marine Mammal Institute who was not involved in the research. "With this study, we're starting to see that these signals are organized in ways we didn't fully appreciate before." The latest discovery around sperm whale speech has inched forward the possibility of someday fully understanding the creatures and even communicating with them. Project CETI has set a goal of being able to comprehend 20 different vocalized expressions, relating to actions such as diving and sleeping, within the next five years.
A future where we're able to fully understand what the whales are saying and be able to have a conversation with them is "totally within our grasp," said David Gruber, founder and president of Project CETI. "We've already got a lot further than I thought we could. But it will take time, and funding. At the moment we are like a two-year-old, just saying a few words. In a few years' time, maybe we will be more like a five-year-old."

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
21 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Intel's New Core Series 3 Is Its Answer To the MacBook Neo

1 Share
Intel has launched a new budget-focused Core Series 3 processor line for lower-cost laptops -- "Intel's response to budget CPUs that are appearing in laptops like the Apple MacBook Neo," writes PCWorld's Mark Hachman. From the report: Intel unexpectedly launched the Core Series 3, based on its excellent "Panther Lake" (Core Ultra Series 3) architecture and 18A manufacturing, for devices for home consumers and small businesses on Thursday. Intel announced that a number of partners will launch laptops based upon the chip, including Acer, Asus, HP, Lenovo, and others. Although those laptops will be available beginning today, a number of them will begin shipping later this year, the partners said. All of it -- from the specifications down to the messaging -- feels squarely aimed at trimming the fat and delivering to users just what they'll want. Intel's new Core Series 3 family includes just two "Cougar Cove" performance cores and four low-power efficiency "Darkmont" cores, with two Xe graphics cores on top of it. Intel isn't really worrying about AI, with an NPU capable of just 17 TOPS, though the company claims the CPU, NPU, and GPU combined reach 40 TOPS of performance. Yes, laptops will use pricey DDR5 memory, but at the lower end: just DDR5-6400 speeds. Support for three external displays will be included, though, maximizing multiple screens for maximum productivity. Intel used the term "all day battery life" without elaboration. [...] Intel Core Series 3 delivers up to 47 percent better single-thread performance, up to 41 percent better multi-thread performance, and up to 2.8x better GPU AI performance, Intel said. Compared against Intel's older Core 7 150U, Intel is saying that the new chip will outperform it by 2.1 times in content creation and deliver 2.7 times the AI performance. [...] We still don't know what Intel will charge for the chip, nor do we know what you'll be able to buy a Core Series 3 laptop for.



No country left behind with sovereign AI

1 Share
Ryan welcomes Stephen Watt, distinguished engineer and VP of Red Hat’s Office of the CTO, to chat about digital sovereignty and sovereign AI.

🏆 Agents League Winner Spotlight – Reasoning Agents Track

1 Share

Agents League was designed to showcase what agentic AI can look like when developers move beyond single‑prompt interactions and start building systems that plan, reason, verify, and collaborate.

Across three competitive tracks—Creative Apps, Reasoning Agents, and Enterprise Agents—participants had two weeks to design and ship real AI agents using production‑ready Microsoft and GitHub tools, supported by live coding battles, community AMAs, and async builds on GitHub.

Today, we’re excited to spotlight the winning project for the Reasoning Agents track, built on Microsoft Foundry: CertPrep Multi‑Agent System — Personalised Microsoft Exam Preparation by Athiq Ahmed.

 

The Reasoning Agents Challenge Scenario

The goal of the Reasoning Agents track challenge was to design a multi‑agent system capable of effectively assisting students in preparing for Microsoft certification exams. Participants were asked to build an agentic workflow that could understand certification syllabi, generate personalized study plans, assess learner readiness, and continuously adapt based on performance and feedback. The suggested reference architecture modeled a realistic learning journey: starting from free‑form student input, a sequence of specialized reasoning agents collaboratively curated Microsoft Learn resources, produced structured study plans with timelines and milestones, and maintained learner engagement through reminders. Once preparation was complete, the system shifted into an assessment phase to evaluate readiness and either recommend the appropriate Microsoft certification exam or loop back into targeted remediation—emphasizing reasoning, decision‑making, and human‑in‑the‑loop validation at every step.

All details are available here: agentsleague/starter-kits/2-reasoning-agents at main · microsoft/agentsleague.

 

The Winning Project: CertPrep Multi‑Agent System

The CertPrep Multi‑Agent System is an AI solution for personalized Microsoft certification exam preparation, supporting nine certification exam families.

At a high level, the system turns free‑form learner input into a structured certification plan, measurable progress signals, and actionable recommendations—demonstrating exactly the kind of reasoned orchestration this track was designed to surface.

 

 

Inside the Multi‑Agent Architecture

At its core, the system is designed as a multi‑agent pipeline that combines sequential reasoning, parallel execution, and human‑in‑the‑loop gates, with traceability and responsible AI guardrails.

The solution is composed of specialized reasoning agents, each focused on a specific stage of the learning journey:

  • LearnerProfilingAgent – Converts free‑text background information into a structured learner profile using Microsoft Foundry SDK (with deterministic fallbacks).
  • StudyPlanAgent – Generates a week‑by‑week study plan using a constrained allocation algorithm to respect the learner’s available time.
  • LearningPathCuratorAgent – Maps exam domains to curated Microsoft Learn resources with trusted URLs and estimated effort.
  • ProgressAgent – Computes a weighted readiness score based on domain coverage, time utilization, and practice performance.
  • AssessmentAgent – Generates and evaluates domain‑proportional mock exams.
  • CertificationRecommendationAgent – Issues a clear “GO / CONDITIONAL GO / NOT YET” decision with remediation steps and next‑cert suggestions.

Throughout the pipeline, a 17‑rule Guardrails Pipeline enforces validation checks at every agent boundary, and two explicit human‑in‑the‑loop gates ensure that decisions are made only when sufficient learner confirmation or data is present.
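The staged-pipeline idea above, typed contracts between agents, a guardrail check at every boundary, and an explicit human-in-the-loop gate, can be sketched in a few lines. Everything here is a simplified illustration, not CertPrep's actual code: the agent names, the single guardrail rule, and the 30-hour syllabus are all assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch only: each agent consumes/produces a typed record,
# a guardrail runs at the agent boundary, and a human-in-the-loop gate
# blocks progress until the learner confirms.

@dataclass
class LearnerProfile:
    name: str
    hours_per_week: int
    target_exam: str

@dataclass
class StudyPlan:
    target_exam: str
    weeks: int

def guardrail(profile: LearnerProfile) -> None:
    # One illustrative rule; the project describes a 17-rule pipeline.
    if profile.hours_per_week <= 0:
        raise ValueError("learner must have available study time")

def profiling_agent(free_text: str) -> LearnerProfile:
    # Stand-in for the LLM-backed profiling step (deterministic fallback).
    return LearnerProfile(name="learner", hours_per_week=6, target_exam="AZ-900")

def study_plan_agent(profile: LearnerProfile) -> StudyPlan:
    # Allocate an assumed 30-hour syllabus across the weekly time budget.
    weeks = max(1, round(30 / profile.hours_per_week))
    return StudyPlan(target_exam=profile.target_exam, weeks=weeks)

def run_pipeline(free_text: str, human_approved: bool) -> StudyPlan:
    profile = profiling_agent(free_text)
    guardrail(profile)                 # validation at the agent boundary
    if not human_approved:             # human-in-the-loop gate
        raise RuntimeError("awaiting learner confirmation")
    return study_plan_agent(profile)

plan = run_pipeline("I know some Azure basics", human_approved=True)
print(plan.weeks)  # 30 hours at 6 h/week -> 5
```

The typed boundaries are what make each stage independently testable and debuggable, which is the property the track rewarded.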

CertPrep leverages Microsoft Foundry Agent Service and related tooling to run this reasoning pipeline reliably and observably:

  • Managed agents via Foundry SDK
  • Structured JSON outputs using GPT‑4o (JSON mode) with conservative temperature settings
  • Guardrails enforced through Azure Content Safety
  • Parallel agent fan‑out using concurrent execution
  • Typed contracts with Pydantic for every agent boundary
  • AI-assisted development with GitHub Copilot, used throughout for code generation, refactoring, and test scaffolding

Notably, the full pipeline is designed to run in under one second in mock mode, enabling reliable demos without live credentials.
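The parallel fan-out and sub-second mock mode mentioned above can be sketched generically: independent agents (here, mock functions with simulated latency) run concurrently, so the stage finishes in roughly one call's latency rather than the sum. The agent name and domains are illustrative, not the project's API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def curator_agent(domain: str) -> str:
    time.sleep(0.05)  # stands in for an LLM or Learn-catalog call
    return f"resources for {domain}"

domains = ["Identity", "Networking", "Storage"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # Fan out one curator call per exam domain, gather all results.
    results = list(pool.map(curator_agent, domains))
elapsed = time.perf_counter() - start

print(results)
# Concurrent execution takes ~0.05 s, not 0.15 s sequential, which is
# how a fully mocked pipeline can stay under a second end to end.
```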

 

User Experience: From Onboarding to Exam Readiness

Beyond its backend architecture, CertPrep places strong emphasis on clarity, transparency, and user trust through a well‑structured front‑end experience. The application is built with Streamlit and organized as a 7‑tab interactive interface, guiding learners step‑by‑step through their preparation journey.

From a user’s perspective, the flow looks like this:

  1. Profile & Goals Input
    Learners start by describing their background, experience level, and certification goals in natural language. The system immediately reflects how this input is interpreted by displaying the structured learner profile produced by the profiling agent.
  2. Learning Path & Study Plan Visualization
    Once generated, the study plan is presented using visual aids such as Gantt‑style timelines and domain breakdowns, making it easy to understand weekly milestones, expected effort, and progress over time.
  3. Progress Tracking & Readiness Scoring
    As learners move forward, the UI surfaces an exam‑weighted readiness score, combining domain coverage, study plan adherence, and assessment performance—helping users understand why the system considers them ready (or not yet).
  4. Assessments and Feedback
    Practice assessments are generated dynamically, and results are reported alongside actionable feedback rather than just raw scores.
  5. Transparent Recommendations
    Final recommendations are presented clearly, supported by reasoning traces and visual summaries, reinforcing trust and explainability in the agent’s decision‑making.
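The exam-weighted readiness score surfaced in step 3 combines several signals into one number, which then drives the GO / CONDITIONAL GO / NOT YET decision described above. A minimal sketch follows; the weights and thresholds are assumptions chosen for illustration, not the project's actual values.

```python
def readiness_score(coverage: float, adherence: float, assessment: float) -> float:
    """Weighted readiness in [0, 1]; the weights below are illustrative."""
    weights = {"coverage": 0.4, "adherence": 0.2, "assessment": 0.4}
    score = (weights["coverage"] * coverage
             + weights["adherence"] * adherence
             + weights["assessment"] * assessment)
    return round(score, 2)

def recommend(score: float) -> str:
    # Cutoffs mirror the GO / CONDITIONAL GO / NOT YET decision,
    # with hypothetical thresholds.
    if score >= 0.8:
        return "GO"
    if score >= 0.6:
        return "CONDITIONAL GO"
    return "NOT YET"

s = readiness_score(coverage=0.9, adherence=0.7, assessment=0.85)
print(s, recommend(s))  # 0.84 GO
```

Keeping the score a pure function of its inputs is what lets the UI explain *why* a learner is (or isn't) ready.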

The UI also includes an Admin Dashboard and demo‑friendly modes, enabling judges, reviewers, or instructors to inspect reasoning traces, switch between live and mock execution, and demonstrate the system reliably without external dependencies.

 

Why This Project Stood Out

This project embodies the spirit of the Reasoning Agents track in several ways:

  • Clear separation of reasoning roles, instead of prompt‑heavy monoliths
  • Deterministic fallbacks and guardrails, critical for educational and decision‑support systems
  • Observable, debuggable workflows, aligned with Foundry’s production goals
  • Explainable outputs, surfaced directly in the UX

It demonstrates how agentic patterns translate cleanly into maintainable architectures when supported by the right platform abstractions.

 

Try It Yourself

Explore the project, architecture, and demo here:


⚡Foundry Toolkit for VS Code: A Deep Dive on GA

1 Share

As we shared in the announcement, Microsoft Foundry Toolkit for Visual Studio Code is now generally available. In this deep dive, we walk through everything that’s in the GA release — from the rebrand and extension consolidation, to model experimentation, agent development, evaluations, and on-device AI for scientists and engineers pushing the boundaries of edge hardware.

Whether you’re exploring your first model, shipping a production agent, or squeezing performance from edge hardware, Foundry Toolkit meets you where you are.

🧪The Fastest Way to Start Experimenting with AI Models

You’ve heard about a new model and want to try it right now — not after spinning up infrastructure or writing boilerplate API code. That’s exactly what Microsoft Foundry Toolkit is built to deliver.

With a Model Catalog spanning 100+ models — cloud-hosted from GitHub, Microsoft Foundry, OpenAI, Anthropic, and Google, plus local models via ONNX, Foundry Local, or Ollama — you go from curiosity to testing in minutes.

The Model Playground is where experimentation lives: compare two models side by side, attach files for multimodal testing, enable web search, adjust system prompts, and watch streaming responses come in.

When something works, View Code generates ready-to-use snippets in Python, JavaScript, C#, or Java — the exact API call you just tested, translated into your language of choice and ready to paste.

🤖Building AI Agents: From Prototype to Production

Foundry Toolkit supports the full agent development journey with two distinct paths and a clean bridge between them.

Path A: The Prototyper: No Code Required

Agent Builder is a low-code interface that lets you take an idea, define instructions, attach tools, and start a conversation — all without writing a line of code. It’s the fastest way to validate whether an agent concept actually works. You can:

  • Write and refine instructions with the built-in Prompt Optimizer, which analyzes your instructions and suggests improvements
  • Connect tools from the Tool Catalog — browse tools from the Foundry public catalog or local MCP servers, configure them with a few clicks, and wire them into your agent
  • Configure MCP tool approval — decide whether tool calls need your sign-off or can run automatically
  • Switch between agents instantly with the quick switcher, and manage multiple agent drafts without losing work (auto-save has you covered)
  • Save to Foundry with a single click and manage your agents from there.

The result is a working, testable agent in minutes — perfect for validating use cases or prototyping features before investing in a full codebase.

Path B: The Professional Team: Code-First, Production-Ready

For teams building complex systems — multi-agent workflows, domain-specific orchestration, production deployments — code gives you control. Foundry Toolkit scaffolds production-ready code structures for Microsoft Agent Framework, LangGraph, and other popular orchestration frameworks. You’re not starting from scratch; you’re starting from a solid foundation.

Once your agent is running, Agent Inspector turns debugging from guesswork into real engineering:

  • Hit F5 to launch your agent with full VS Code debugger support — breakpoints, variable inspection, step-through execution
  • Watch real-time streaming responses, tool calls, and workflow graphs visualize as your agent runs
  • Double-click any node in the workflow visualization to jump straight to the source code behind it
  • Local tracing captures the full execution span tree across tool calls and delegation chains — no external infrastructure needed

When you’re ready to ship, one-click deployment packages your agent and deploys it to a production-grade runtime on Microsoft Foundry Agent Service as a hosted agent. The Hosted Agent Playground lets you test it directly from the VS Code sidebar, keeping the feedback loop tight.

The Bridge: From Prototype to Code, Seamlessly

These paths aren’t silos — they’re stages. When your Agent Builder prototype is ready to grow, export it directly to code with a single click. The generated project includes the agent’s instructions, tool configurations, and scaffolding — giving your engineering team a real starting point rather than a rewrite.

GitHub Copilot with the Microsoft Foundry Skill keeps momentum going once you’re in code. The skill knows the Agent Framework patterns, evaluation APIs, and Foundry deployment model. Ask it to generate an agent, write an evaluation, or scaffold a multi-agent workflow, and it produces code that works with the rest of the toolkit.

🎯Evaluations: Quality Built Into the Workflow

At every stage — prototype or production — integrated evaluations let you measure agent quality without switching tools. Define evaluations using familiar pytest syntax, run them from VS Code Test Explorer alongside your unit tests, and analyze results in a tabular view with Data Wrangler integration. When you need scale, submit the same definitions to run in Microsoft Foundry. Evaluations become versioned, repeatable, and CI-friendly — not one-off scripts you hope to remember.
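Because evaluations use ordinary pytest syntax, they look like regular tests. A hedged sketch, in which the agent call and the groundedness metric are stand-ins rather than the Foundry evaluation API:

```python
# eval_agent.py — discoverable by pytest (and VS Code Test Explorer)
# as a normal test module; no special runner required.

def agent_answer(question: str) -> str:
    # Stand-in for a call to your deployed agent.
    return "Paris is the capital of France."

def groundedness(answer: str, expected_fact: str) -> float:
    # Toy metric: 1.0 if the expected fact appears in the answer.
    return 1.0 if expected_fact.lower() in answer.lower() else 0.0

def test_capital_question():
    answer = agent_answer("What is the capital of France?")
    assert groundedness(answer, "Paris") >= 1.0

test_capital_question()  # pytest would collect and run this automatically
```

Because the evaluation is just a test function, it versions with your code and runs in CI like any other test, which is the "versioned, repeatable, and CI-friendly" property described above.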

💻Unlock AI's Full Capabilities on Edge Devices

AI running on your device, at your pace, without data leaving your machine.

 Cloud-hosted AI is convenient — but it's not always the right fit. Local models offer: 

  • Privacy and Compliance: Your data stays on your machine. No round-trips to a server. 
  • Cost control: Run as many inferences as you want — no per-token billing. 
  • Offline capability: Works anywhere, even without internet access. 
  • Hardware leverage: Modern Windows devices are built for local AI.

That's why we're bringing a complete end-to-end workflow for discovering, running, converting, profiling, and fine-tuning AI models directly on Windows. Whether you're a developer exploring what models can do, an engineer optimizing models for production, or a researcher training domain-specific model adapters, Foundry Toolkit gives you the tools to work with local AI without compromise. 

Model Playground: Try Any Local Model, Instantly 

As we mentioned at the beginning of this article, the Model Playground is your starting point — not only for cloud models but also for local models. It includes Microsoft's full model catalog, including the Phi open model family and Phi Silica — Microsoft's local language model optimized for Windows. As you go deeper, the Playground also supports any LLM you've converted locally through the Conversion workflow — add it to My Resources and try it immediately in the same chat experience.

Model Conversion: From Hugging Face to Hardware-Ready on Windows 

Getting a model from a research checkpoint to something that runs efficiently on your specific hardware is non-trivial. Foundry Toolkit's conversion pipeline handles the full transformation for a growing selection of popular Hugging Face models: Hugging Face → Conversion → Quantization → Evaluation → ONNX
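As a concept sketch of the Quantization stage in that pipeline: weights are mapped from float to low-precision integers plus a scale factor, shrinking storage and speeding up inference at a small accuracy cost. This is a minimal symmetric int8 example for intuition only, not the toolkit's actual algorithm.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor int8 quantization: w ≈ q * scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03]
q, scale = quantize_int8(w)
restored = dequantize(q, scale)
print(q)  # [50, -127, 3]
# restored is close to w, at a quarter of float32 storage;
# the evaluation stage then checks that this accuracy loss is acceptable.
```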

The result: a model optimized for Windows ML — Microsoft's unified runtime for local AI on Windows.   

All supported hardware targets are aligned with Windows ML's execution provider ecosystem: 

  • MIGraphX (AMD) 
  • NvTensorRtRtx (NVIDIA) 
  • OpenVINO (Intel) 
  • QNN (Qualcomm) 
  • VitisAI (AMD) 

 

Why Windows ML matters for you: Windows ML lets your app automatically acquire and use hardware-specific EPs at runtime — no device-specific code required. Your converted model runs across the full range of supported Windows hardware.

Once your model has been converted successfully, Foundry Toolkit gives you everything you need to validate, share, and ship: 

  • Benchmark results: Every conversion run is automatically tracked in the History Board — giving you an easy way to validate accuracy, latency, and throughput across model variants before you ship. 
  • Sample code with Windows ML: Get ready-to-use code showing how to load and inference your converted model with the Windows ML runtime — no boilerplate hunting, just copy and go. 
  • Quick Playground via GitHub Copilot: Ask GitHub Copilot to generate a playground web demo for your converted model. Instantly get an interactive experience to validate behavior before integrating into your application. 
  • Package as MSIX: Package your converted model into an MSIX installer. Share it with teammates or incorporate into your application. 

Profiling: See Exactly What Your Model Is Doing 

Converting a local model is one thing. Understanding how it uses your hardware is another. Foundry Toolkit’s profiling tools give you real-time visibility into CPU, GPU, NPU, and memory consumption — with per-second granularity and a 10-minute rolling window. 

Three profiling modes cover different workflows: 

  • Attach at startup — profile a model from the moment it loads 
  • Connect to a running process — attach to an already-running inference session 
  • Profile an ONNX model directly — the Toolkit feeds data to the model and runs performance measurement directly, no application or process needed

For example, when you run a local model in the Playground, you get detailed visibility into what's happening under the hood during inference — far beyond basic resource usage. Windows ML Event Breakdown surfaces how execution time is spent: a single model execution is broken down into phases — such as session initialization versus active inference — so you know whether slowness is a one-time startup cost or a per-request bottleneck. 

When you profile any ONNX model directly, operator-level tracing shows exactly which graph nodes and operators are dispatched to the NPU, CPU, or GPU, and how long each one takes. This makes it straightforward to identify which parts of your model are underutilizing available hardware — and where quantization, graph optimization, or EP changes will have the most impact. 
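The startup-cost-versus-per-request-cost distinction the Event Breakdown surfaces can be measured in any runtime with simple phase timing. A generic sketch (not the toolkit's API; the mock session here is an assumption):

```python
import time

def profile_phases(init_fn, infer_fn, requests: int) -> dict:
    """Time one-off initialization separately from per-request inference."""
    t0 = time.perf_counter()
    session = init_fn()                  # one-time startup cost
    init_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(requests):
        infer_fn(session)                # steady-state, per-request cost
    per_request_s = (time.perf_counter() - t0) / requests

    return {"init_s": init_s, "per_request_s": per_request_s}

# Mock session: slow to build, fast to run — the common case where
# "slowness" is a startup cost, not a per-request bottleneck.
timings = profile_phases(
    init_fn=lambda: time.sleep(0.1) or {},
    infer_fn=lambda s: time.sleep(0.001),
    requests=10,
)
print(timings)
```

Separating the two phases tells you whether to optimize session warm-up (e.g. keep the session alive) or the inference path itself (quantization, EP changes).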

Fine-Tuning: Make Phi Silica Yours 

Generic models are capable; domain-specific models are precise. Foundry Toolkit's fine-tuning workflow uses LoRA (Low-Rank Adaptation) to let you train adapters for Phi Silica using your own data — no ML infrastructure required.
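The LoRA idea can be made concrete: the base weights W stay frozen, and training learns only a low-rank update, so the effective weights are W_eff = W + (α/r)·B·A. A toy numeric sketch with hypothetical 2×2 shapes (real adapters use small rank r against large matrices, which is why they are cheap to train and ship):

```python
# Pure-Python LoRA update: W_eff = W + (alpha / r) * (B @ A).
# Values and shapes are illustrative only.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

W = [[1.0, 0.0],
     [0.0, 1.0]]          # frozen base weights (d x d)
B = [[1.0],
     [0.0]]               # d x r, with rank r = 1
A = [[0.0, 2.0]]          # r x d
alpha, r = 2.0, 1

delta = matmul(B, A)      # rank-1 update, d x d
W_eff = [[W[i][j] + (alpha / r) * delta[i][j]
          for j in range(2)] for i in range(2)]
print(W_eff)  # [[1.0, 4.0], [0.0, 1.0]]
```

Only B and A (the adapter) need to be downloaded and merged at runtime, which is what makes the train-in-the-cloud, run-at-the-edge loop practical.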

Bring your data, customize your LoRA parameters, and submit a job to the cloud. Foundry Toolkit spins up Azure Container Apps to train your adapter under your own subscription. To validate fine-tuning quality, the workflow tracks training and evaluation loss curves for your LoRA adapter, and cloud inference is available to validate the adapter’s behavior, helping you confirm learning progress and output quality before shipping.

Once satisfied, download the adapter and incorporate it into your app for use at runtime. 

This is the full loop: train in the cloud → run at the edge. Domain adaptation for local AI, without standing up your own training infrastructure. 

🚀One Toolkit for Every Stage

Foundry Toolkit for VS Code GA supports every stage of serious AI development:

  • Explore 100+ models without commitment
  • Prototype agents in minutes with no code
  • Build production agents with real debugging, popular frameworks, and coding agent assistance
  • Deploy to Microsoft Foundry with one click and test without leaving VS Code
  • Measure quality with evaluations that fit your existing test workflows
  • Optimize models for specific hardware and use cases

All of it, inside VS Code. All of it, now generally available. Install Foundry Toolkit from the VS Code Marketplace →

Get Started with Hands-on Labs and Samples:

We'd love to hear what you build. Share feedback and file issues on GitHub, and join the broader conversation in the Microsoft Foundry Community.


Context Is Everything: Getting the Most from GitHub Copilot with Joydip Kanjilal

1 Share

Strategic Technology Consultation Services

This episode of The Modern .NET Show is supported, in part, by RJJ Software's Strategic Technology Consultation Services. If you're an SME (Small to Medium Enterprise) leader wondering why your technology investments aren't delivering, or you're facing critical decisions about AI, modernization, or team productivity, let's talk.

Show Notes

"Artificial intelligence is nothing new. It enables machines to simulate human cognitive functions such as reasoning, learning, problem solving and all using algorithms and vast data sets to recognise patterns. And then it makes predictions and performs, you know, language processing, image recognition, and all those stuff."— Joydip Kanjilal

Hey everyone, and welcome back to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. I'm your host Jamie Taylor, bringing you conversations with the brightest minds in the .NET ecosystem.

Today, we're joined by Joydip Kanjilal to talk about GitHub Copilot, agentic workflows for developers, and the benefits (and drawbacks) of having an AI agent help you write code.

Note that I didn't say, "write all the code for you," because an AI agent is simply helping you to be more productive.

"You want to you know, convert, I mean uh migrate a legacy application to a modern-day enterprise application, there will be a lot of redundant code that you will otherwise have to write. So that all that code can be automatically generated by Copilot, provided you have provided the right context."— Joydip Kanjilal

Along the way, we talked about the importance of the context that you give to an AI agent, security best practices (spoiler: you wouldn't give a new junior the keys to the castle on day one; do the same with your AI agents), and the most important things to remember when using AI agents.

Before we jump in, a quick reminder: if The Modern .NET Show has become part of your learning journey, please consider supporting us through Patreon or Buy Me A Coffee. Every contribution helps us continue bringing you these in-depth conversations with industry experts. You'll find all the links in the show notes.

Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.

Full Show Notes

The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-8/context-is-everything-getting-the-most-from-github-copilot-with-joydip-kanjilal/

Useful Links:

Supporting the show:

Getting in Touch:

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts, this will help the show's audience grow. Or you can just share the show with a friend.

And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch.

You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show.

Editing and post-production services for this episode were provided by MB Podcast Services.





Download audio: https://traffic.libsyn.com/clean/secure/thedotnetcorepodcast/817-JoydipKanjilal.mp3?dest-id=767916