
MVPs Global Student Innovation: Sprint to Imagine Cup 2026


Introduction

Microsoft MVPs played a pivotal role in igniting student creativity through Sprint to Imagine Cup 2026 engagements. These community-driven sessions brought Agentic AI, Azure AI, and Copilot Studio directly to universities and developer communities across Asia, Africa, Europe, and Latin America. In many regions with limited access to advanced AI technologies, MVPs bridged the gap through mentorship, hands-on learning, and inspiring demonstrations. What began as local sprints evolved into a global movement democratizing innovation and empowering thousands of students to build their first AI-powered solutions.

Excited students who embraced the major challenge of their AI journey during one of the Sprint to Imagine Cup events in India.

Story

This year’s Sprint to Imagine Cup journey reached diverse countries and communities, including India, Nepal, Pakistan, South Korea, South Africa, Denmark, Spain, and Peru, along with participants from around the world who joined virtually. Every location brought forward inspiring stories of resilience, curiosity, and transformation.

In India, MVP Augustine Correa led a 1,000 km tour from Mumbai to Mangaluru. Remote colleges without air conditioning, long travel distances, and high heat did not stop students from attending. Live coding errors became teachable moments as Augustine used AI Agents to collaborate with students, debug code, and accelerate project velocity. Many students left with working prototypes and their first GitHub pull requests.

Students learning about Agentic AI and Copilot Studio at a Sprint to Imagine Cup session in Bangalore

During the Mumbai session at the Microsoft office, student Ajinkya Furange reflected:

“Thrilled to share that I successfully took on the first big challenge of my AI journey… This hands‑on workshop boosted my confidence to build impactful AI-driven solutions.”

Another participant, Mitansh Jadhav, added:

“One of the most eye‑opening concepts was seeing the AI Agent’s decision-making loop in action… We were challenged to solve five labs using Copilot, perfectly simulating real-world problem solving.”

In Bangalore and Chennai, MVP Mohamed Azarudeen hosted two Sprint sessions with 250 and 120 participants. Students refined ideas, clarified Imagine Cup pathways, and built early-stage AI projects. Participants frequently shared how the sprint turned “I have an idea” into “I know how to move forward.”

MVP Gulnaz hosted ten Sprint to Imagine Cup events across Pakistan

Across Pakistan and Nepal, MVPs delivered AI workshops on Azure AI, Foundry, Copilot Studio, and Responsible AI—often serving as students’ first exposure to advanced AI technologies. MVP Gulnaz Mushtaq in Pakistan hosted ten Sprint events across major university hubs including Peshawar, Lahore, Islamabad, Karachi, and Rawalpindi. Nepal’s innovation culture continued as MVP Pradeep Kandel led the Kathmandu Ideathon, engaging 150–200 students from 70 universities. The event strengthened idea development, mentorship pairing, and preparation for Imagine Cup 2026.

In Korea, MVPs Heo Soek, Inhee Lee, and Jaeseok Lee led a successful Sprint at the Microsoft office in Seoul, where students explored AI startup concepts. A student participant from Korea shared:

“In this fast-changing AI era, I was unsure about my direction… but this event helped me understand what kind of talent I should become and find clarity.”

A team of female students from a regional Korean university added:

“We will prepare for Imagine Cup together—thank you for giving us this opportunity.”

Korean university students exploring AI startup concepts at the Microsoft office in Seoul

A Korean attendee added:

“Even though the workshop lasted more than six hours, it was never boring—well‑timed hands‑on labs and activities kept it both fun and meaningful.”

In Europe, MVP Thomas Martinsen (Denmark) and MVP Roberto Corella (Spain) expanded the movement with sessions on Copilot extensibility and AI for Business Central. Latin American MVPs Jorge Castaneda, Meerali Naseet and Juan Rafael delivered cybersecurity and Spanish-language AI workshops supporting students across Peru and Costa Rica.

Impact Insights

Global impact from Sprint to Imagine Cup 2026 has been broad and profound. A total of 70 events worldwide reached an estimated 4,200–5,000 students. Of those, an estimated 3,300–4,000 learners engaged directly with Microsoft AI tools such as Azure AI Services, Copilot Studio, and Foundry Agents.

Across all regions, 65% of participants attended in-person while 35% joined through online or hybrid formats, including Spanish-language virtual events in Latin America. Social media amplified momentum as students shared prototypes, learnings, and excitement on LinkedIn and X using hashtags such as #SprintToImagineCup, #ImagineCup, #MumTechUp, and #HMNOV25. Many students shared sentiments similar to:  

Students learning Agentic AI at a Sprint to Imagine Cup session in Europe

“The meeting was very informative and inspiring. I learned a lot about the competition and technologies involved, and I’m excited to begin this journey.”

and

“Thank you so much… your explanation made everything easier to understand. Looking forward to attending more sessions!”

Call to Action / Closing

The global Sprint to Imagine Cup movement demonstrates that innovation thrives when community leaders uplift new creators. MVPs are equipping students with the skills, confidence, and AI fluency needed to build solutions for the future. As the Imagine Cup 2026 season continues, now is the perfect time for MVPs and community leaders to host sessions, mentor teams, and amplify student stories—helping shape the next generation of AI innovators.



Resources

Microsoft Learn – Azure AI: https://learn.microsoft.com/azure/ai

Microsoft Copilot Studio: https://learn.microsoft.com/microsoft-copilot-studio

GitHub Agentic AI Samples: https://github.com/microsoft

Imagine Cup Official Site: https://imaginecup.microsoft.com

 


Observability in Generative AI: Building Trust with Systematic Evaluation in Microsoft Foundry


Why observability matters for generative AI

Generative AI systems operate in complex and dynamic environments. Without systematic evaluation and monitoring, these systems can produce outputs that are factually incorrect, irrelevant, biased, unsafe, or vulnerable to misuse.

Observability helps teams understand how their AI systems behave over time. It enables early detection of quality degradation, safety issues, and operational problems, allowing teams to respond before users are impacted. In GenAIOps, observability is not a one-time activity but a continuous process embedded throughout development and deployment.

 

What is observability in generative AI?

AI observability refers to the ability to monitor, understand, and troubleshoot AI systems throughout their lifecycle. It combines multiple signals, including evaluation metrics, logs, traces, and model or agent outputs, to provide visibility into performance, quality, safety, and operational health.

In practical terms:

  • Metrics indicate how well the AI system is performing
  • Logs show what happened during execution
  • Traces explain where time is spent and how components interact
  • Evaluations assess whether outputs meet defined quality and safety standards

Together, these signals help teams make informed decisions about improving their AI applications.
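
To make this concrete, here is a minimal sketch of one way these four signals might be wired around a single model call, using standard Python logging and the OpenTelemetry API. This is not a prescribed Foundry pattern; call_model, evaluate_relevance, and the span and metric names are illustrative assumptions.

```python
# A minimal sketch (not a prescribed Foundry pattern) of emitting all four
# observability signals around a single model call. call_model,
# evaluate_relevance, and the span/metric names are illustrative assumptions.
import logging
from opentelemetry import trace, metrics

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("genai.app")     # logs: what happened
tracer = trace.get_tracer("genai.app")      # traces: where time is spent
meter = metrics.get_meter("genai.app")      # metrics: how well it performs
relevance_hist = meter.create_histogram(
    "gen_ai.evaluation.relevance", description="Relevance score per response"
)

def call_model(prompt: str) -> str:
    # Placeholder for a real model or agent call.
    return f"Echo: {prompt}"

def evaluate_relevance(prompt: str, response: str) -> float:
    # Placeholder evaluator; a real quality evaluator would go here.
    return 1.0 if prompt.split()[0].lower() in response.lower() else 0.0

def answer(prompt: str) -> str:
    with tracer.start_as_current_span("generate_answer") as span:  # trace
        response = call_model(prompt)
        score = evaluate_relevance(prompt, response)               # evaluation
        span.set_attribute("gen_ai.evaluation.relevance", score)
        relevance_hist.record(score)                               # metric
        logger.info("prompt=%r relevance=%.2f", prompt, score)     # log
        return response

if __name__ == "__main__":
    print(answer("Summarize the release notes"))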

 

Evaluators: measuring quality, safety, and reliability

Evaluators are specialized tools used to assess the behavior of generative AI models, applications, and agents. They provide structured ways to measure quality and risk across different scenarios and workloads.

General-purpose quality evaluators

These evaluators focus on language quality and logical consistency. They assess aspects such as clarity, fluency, coherence, and response quality in question-answering scenarios.

Textual similarity evaluators

Textual similarity evaluators compare generated responses with ground truth or reference answers. They are useful when measuring overlap or alignment in tasks such as summarization or translation.
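
As a simplified illustration, a textual similarity check can be as small as a token-level F1 score between the generated response and a reference answer. The toy function below is an illustrative stand-in, not one of the built-in Foundry evaluators.

```python
# A toy textual similarity evaluator: token-level F1 between a generated
# response and a reference answer. Illustrative only.
from collections import Counter

def token_f1(response: str, ground_truth: str) -> float:
    resp = response.lower().split()
    ref = ground_truth.lower().split()
    if not resp or not ref:
        return 0.0
    overlap = sum((Counter(resp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(resp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Paris is the capital of France", "The capital of France is Paris"))  # 1.0
```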

Retrieval‑Augmented Generation (RAG) evaluators

For applications that retrieve external information, RAG evaluators assess whether:

  • Relevant information was retrieved
  • Responses remain grounded in retrieved content
  • Answers are relevant and complete for the user query
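
Focusing on the second bullet above (groundedness), the toy heuristic below flags response sentences whose content words are barely supported by the retrieved context. It is illustrative only: the names and the 0.5 threshold are assumptions, and the real RAG evaluators rely on AI-assisted grading rather than word overlap.

```python
# A toy groundedness check: flag response sentences whose content words are
# barely supported by the retrieved context. Illustrative only; the threshold
# and names are assumptions, not the Foundry RAG evaluators.
import re

def ungrounded_sentences(response: str, context: str, threshold: float = 0.5):
    context_words = set(re.findall(r"[a-z0-9]+", context.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", response.strip()):
        words = re.findall(r"[a-z0-9]+", sentence.lower())
        if not words:
            continue
        support = sum(w in context_words for w in words) / len(words)
        if support < threshold:
            flagged.append((sentence, round(support, 2)))
    return flagged

context = "Microsoft Foundry provides evaluators for quality and safety."
response = "Foundry provides evaluators. It was first released in 1999 by a startup."
print(ungrounded_sentences(response, context))  # flags the second sentence
```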

Risk and safety evaluators

These evaluators help detect potentially harmful or risky outputs, including biased or unfair content, violence, self-harm, sexual content, protected material usage, code vulnerabilities, and ungrounded or fabricated attributes.

Agent evaluators

For tool‑using or multi‑step AI agents, agent evaluators assess whether the agent follows instructions, selects appropriate tools, executes tasks correctly, and completes objectives efficiently.
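
One narrow slice of agent evaluation, tool selection, can be illustrated with a simple accuracy check over an agent trace. The trace structure below is a hypothetical example; real agent evaluators also grade instruction adherence and task completion, typically with model-assisted scoring.

```python
# A toy check for one slice of agent evaluation: did the agent call the
# expected tool at each step? The trace structure is a hypothetical example.
def tool_selection_accuracy(steps: list[dict]) -> float:
    if not steps:
        return 0.0
    correct = sum(1 for step in steps if step["called_tool"] == step["expected_tool"])
    return correct / len(steps)

agent_trace = [
    {"expected_tool": "search_docs",   "called_tool": "search_docs"},
    {"expected_tool": "create_ticket", "called_tool": "send_email"},
]
print(tool_selection_accuracy(agent_trace))  # 0.5
```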

To align with compliance and responsible AI practices, describe these capabilities carefully: use language such as “helps detect” or “helps identify potential risks” rather than making absolute claims about what the evaluators catch.

Observability across the GenAIOps lifecycle

Observability in Microsoft Foundry aligns naturally with three stages of the GenAIOps lifecycle.

  1. Base model selection

Before building an application, teams must select the right foundation model. Early evaluation helps compare candidate models based on:

  • Quality and accuracy for intended scenarios
  • Task performance for specific use cases
  • Ethical considerations and bias indicators
  • Safety characteristics and risk exposure

Evaluating models at this stage reduces downstream rework and helps ensure a stronger starting point for development.
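
A lightweight way to structure such a comparison is to run every candidate over the same prompt set with the same scoring rule and rank the aggregate results. In the sketch below, the candidate callables and the keyword-based score are placeholders; in practice each candidate would be a deployed endpoint scored by the quality, safety, and task evaluators described earlier.

```python
# A sketch of ranking candidate base models on a shared prompt set with a
# shared scoring rule. Candidates and the keyword score are placeholders.
def compare_models(candidates: dict, test_cases: list[tuple[str, str]]) -> dict:
    report = {}
    for name, generate in candidates.items():
        scores = [
            1.0 if keyword.lower() in generate(prompt).lower() else 0.0
            for prompt, keyword in test_cases
        ]
        report[name] = sum(scores) / len(scores)
    # Highest average score first, as a starting point for deeper evaluation.
    return dict(sorted(report.items(), key=lambda kv: kv[1], reverse=True))

test_cases = [
    ("Explain retrieval-augmented generation.", "retrieval"),
    ("What is Copilot Studio used for?", "copilot"),
]
candidates = {
    "candidate-a": lambda p: "Retrieval-augmented generation grounds Copilot answers in your data.",
    "candidate-b": lambda p: "I am not sure.",
}
print(compare_models(candidates, test_cases))  # {'candidate-a': 1.0, 'candidate-b': 0.0}
```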

  2. Preproduction evaluation

Once an AI application or agent is built, preproduction evaluation acts as a quality gate before deployment. This stage typically includes:

  • Testing using evaluation datasets that represent realistic user interactions
  • Identifying edge cases where response quality might degrade
  • Assessing robustness across different inputs and prompts
  • Measuring key metrics such as relevance, groundedness, task adherence, and safety indicators

Teams can evaluate using their own datasets, synthetic data, or simulation-based approaches. When test data is limited, simulators can help generate representative or adversarial prompts.
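
The sketch below shows what a minimal preproduction harness over a small JSONL test set might look like. The field names (query, response, ground_truth) are a common convention assumed here rather than a required schema, and the overlap score is a stand-in for real quality and safety evaluators.

```python
# A minimal preproduction harness over a small JSONL test set. Field names and
# the overlap score are illustrative stand-ins, not a required Foundry schema.
import json
import statistics

dataset_jsonl = """
{"query": "What is Copilot Studio?", "response": "A tool for building copilots.", "ground_truth": "Copilot Studio is a tool for building custom copilots and agents."}
{"query": "What is RAG?", "response": "Retrieval-augmented generation.", "ground_truth": "RAG combines retrieval with generation."}
""".strip()

def overlap_score(row: dict) -> float:
    # Stand-in metric; swap in quality and safety evaluators here.
    truth = set(row["ground_truth"].lower().split())
    resp = set(row["response"].lower().split())
    return len(truth & resp) / len(truth)

rows = [json.loads(line) for line in dataset_jsonl.splitlines()]
scores = [overlap_score(r) for r in rows]
mean = statistics.mean(scores)
print(f"mean overlap score: {mean:.2f}")
if mean < 0.5:  # illustrative quality-gate threshold
    print("Quality gate not met; review the lowest-scoring cases before deploying.")
```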

AI red teaming for risk discovery

Automated AI red teaming can be used to simulate adversarial behavior and probe AI systems for potential weaknesses. This approach helps identify content safety and security risks early. Automated scans are most effective when combined with human review, allowing experts to interpret results and apply appropriate mitigations.

  3. Post-production monitoring

After deployment, continuous monitoring helps ensure AI systems behave as expected in real-world conditions. Key practices include:

  • Tracking operational metrics such as latency and usage
  • Running continuous or scheduled evaluations on sampled production traffic
  • Monitoring evaluation trends to detect quality drift
  • Setting alerts when evaluation results fall below defined thresholds
  • Periodically running red teaming exercises to assess evolving risk

Microsoft Foundry integrates with Azure Monitor and Application Insights to provide dashboards and visibility into these signals, supporting faster investigation and issue resolution.
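
As a rough illustration of continuous evaluation with alerting, the sketch below samples a fraction of production responses, keeps a rolling window of scores, and raises an alert when the rolling average drops below a threshold. The sampling rate, window size, threshold, and alert mechanism are all illustrative assumptions.

```python
# A rough sketch of continuous evaluation with alerting on sampled production
# traffic. Sampling rate, window size, threshold, and alerting are assumptions.
import random
from collections import deque

WINDOW = deque(maxlen=50)          # rolling window of recent evaluation scores
SAMPLE_RATE = 0.1                  # evaluate roughly 10% of production traffic
GROUNDEDNESS_THRESHOLD = 0.7

def alert(message: str) -> None:
    # In practice this might raise an Azure Monitor alert or page the on-call.
    print("ALERT:", message)

def maybe_evaluate(query: str, response: str, evaluator) -> None:
    if random.random() > SAMPLE_RATE:
        return                      # skip: this request was not sampled
    WINDOW.append(evaluator(query, response))
    rolling = sum(WINDOW) / len(WINDOW)
    if rolling < GROUNDEDNESS_THRESHOLD:
        alert(f"Groundedness drifting: rolling average {rolling:.2f} over {len(WINDOW)} samples")
```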

A practical evaluation workflow

A repeatable evaluation process typically follows these steps:

  1. Define what you are evaluating for, such as quality, safety, or RAG performance
  2. Select or generate appropriate datasets, including synthetic data if needed
  3. Run evaluations using built-in or custom evaluators
  4. Analyze results using aggregate metrics and detailed views
  5. Apply targeted improvements or mitigations and re-evaluate

This iterative approach helps teams continuously improve AI behavior as requirements and usage patterns evolve.
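
Steps 4 and 5 often take the form of a regression gate, for example in a CI pipeline: aggregate the new results, compare them against a stored baseline, and fail the run when any metric regresses beyond a tolerance. The metric names and the 0.05 tolerance below are illustrative assumptions.

```python
# Steps 4 and 5 expressed as a regression gate: compare aggregate metrics
# against a stored baseline and fail when any metric regresses beyond a
# tolerance. Metric names and the 0.05 tolerance are illustrative.
def evaluation_gate(results: dict[str, list[float]],
                    baseline: dict[str, float],
                    max_regression: float = 0.05) -> bool:
    failures = []
    for metric, scores in results.items():
        mean = sum(scores) / len(scores)
        floor = baseline.get(metric, 0.0) - max_regression
        if mean < floor:
            failures.append(f"{metric}: {mean:.2f} < {floor:.2f}")
    if failures:
        print("Evaluation gate failed ->", "; ".join(failures))
        return False
    print("Evaluation gate passed")
    return True

# Example: relevance has regressed beyond the tolerance, so the gate fails.
evaluation_gate(
    results={"relevance": [0.60, 0.70, 0.65], "groundedness": [0.90, 0.95]},
    baseline={"relevance": 0.80, "groundedness": 0.90},
)
```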

Operational considerations

When planning observability and evaluation, teams should consider:

  • Regional availability of certain AI-assisted evaluators
  • Networking constraints, such as virtual network support
  • Identity and access requirements, including managed identity roles
  • Cost implications, as evaluation and monitoring features are consumption-based

Reviewing these factors early helps avoid deployment surprises and delays.

Conclusion

Trustworthy generative AI systems are built through continuous measurement, learning, and improvement. Observability provides the foundation to understand how AI applications behave over time, detect issues early, and respond with confidence.

By embedding evaluation and monitoring across model selection, preproduction testing, and production operation, Microsoft Foundry enables teams to make trust measurable and maintain high standards of quality, safety, and reliability as AI systems scale.

Key takeaways

  • Observability is essential for understanding and managing generative AI systems throughout their lifecycle
  • Evaluators help assess quality, safety, RAG performance, and agent behavior in a structured way
  • GenAIOps observability spans base model selection, preproduction evaluation, and post-production monitoring
  • Automated techniques such as AI red teaming help identify risks early and should complement human review
  • Continuous evaluation and monitoring support reliable, safe, and evolving AI systems

Useful resources


The Plan Agent Improvements in VS Code are INCREDIBLE


Beige is back: Remembering the BBC Micro with Raspberry Pi 500+


The BBC Microcomputer System, or BBC Micro, taught a generation how to use personal computers. Raspberry Pi exists partly because of that legacy. Our CEO and co-founder Eben Upton’s own journey began with a Beeb, and when he recently floated the idea of making a Raspberry Pi 500+ look like a BBC Micro, it felt less like a gimmick and more like a polite nod to four decades of British computing.

The BBC Micro was released in 1981. Manufactured by Acorn Computers, it had an 8-bit CPU running at 2MHz, and came in two main variants: the 16KB Model A, initially priced at £299, and the more popular 32KB Model B, priced at £399. According to the Bank of England’s inflation calculator, Model B would set you back something in the region of £1600 today. So, it was expensive to say the least. Despite this, it went on to sell over 1.5 million units, and was found in almost every UK school at the time. The BBC Micro’s entire memory could comfortably fit inside a modern emoji, but at the time it felt revolutionary, offering up a whole new world to the masses.

Back to BASICs

Within minutes of starting the makeover, I discovered that beige spray paint is unsurprisingly not very popular anymore — especially this exact shade, which reminds me of nicotine-stained pub wallpaper. A couple of purchases later, I found one that just about did the job. After a quick disassembly of a Raspberry Pi 500+ (which is designed to be taken apart so you can upgrade the SSD), a coat of primer, and a top coat of RAL 1001 Beige enamel spray paint, we had the base of our imitation Micro.

But that old-school beige was not the classic computer’s only distinguishing feature; the BBC Micro also had a very distinctive set of keycaps. For those above a certain age, the keyboard is instantly recognisable — mostly for its bright red function keys, which seem to cry out “we do something powerful”. In practice, they were programmable macros for BBC BASIC commands (RUN, LIST, etc.), and their vibrant colour made them feel special, almost like hardware buttons rather than just keys.

Because Raspberry Pi 500+ was built with customisation in mind, recreating this look was easy; the keycaps could easily be swapped out using the removal tool included with every purchase. Signature Plastics LLC offer a variety of unique, high-quality keycaps, and they certainly delivered on our request for this project. Within minutes, the transformation was complete. My hat respectfully doffed to an iconic British computer that introduced millions of people to computing.

Microcomputer, major impact

Raspberry Pi’s all-in-one PCs have always been inspired by the home computers of the 1980s, and much like the classics, they help put high-performance, programmable computers into the hands of people all over the world.

Raspberry Pi 500+ is our most premium product yet, giving you a quad-core 2.4GHz processor, 16GB of memory, 256GB of solid-state storage, modern graphics and networking, and a complete Linux desktop, all built into a beautiful mechanical keyboard. In 1981, this would have represented more raw processing power than every BBC Micro in a typical school combined. In simple terms, it delivers computing on an entirely different scale: around a million times more processing power, well over half a million times more memory, and several million times more storage. Not bad for the price of a routine car service — before they “find something”, anyway…

The post Beige is back: Remembering the BBC Micro with Raspberry Pi 500+ appeared first on Raspberry Pi.


0.0.404


2026-02-05

  • Add support for claude-opus-4.6 model
  • /allow-all and /yolo execute immediately
  • MCP servers shut down concurrently for improved performance
  • Cancel --resume session picker to start a new session
  • MCP server configurations default to all tools when tools parameter not specified
  • Add /tasks command to view and manage background tasks
  • Enable background agents for all users
  • Simplify and clarify /delegate command messaging
  • GITHUB_TOKEN environment variable now accessible in agent shell sessions

Go 1.25.7-1 and 1.24.13-1 Microsoft builds now available


A new release of the Microsoft build of Go including security fixes is now available for download. For more information about this release and the changes included, see the table below:

  • Microsoft release v1.25.7-1, built from upstream tag go1.25.7 (release notes)
  • Microsoft release v1.24.13-1, built from upstream tag go1.24.13 (release notes)

The post Go 1.25.7-1 and 1.24.13-1 Microsoft builds now available appeared first on Microsoft for Go Developers.
