Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Coffee and Open Source Conversation - Courtney Yatteau

1 Share
From: Isaac Levin
Duration: 0:00
Views: 0

Courtney Yatteau is a Developer Advocate on Esri's Developer Experience Team, where she helps developers build better web applications with modern JavaScript tools, libraries, and mapping technology. Before Esri, she taught computer science and mathematics, which shaped her approach to making technical concepts clear, approachable, and visual.

You can follow Courtney on Social Media
https://x.com/c_yatteau
https://bsky.app/profile/cyatteau.bsky.social
http://youtube.com/@c_yatteau
https://www.linkedin.com/in/courtneyyatteau/
https://github.com/cyatteau

PLEASE SUBSCRIBE TO THE PODCAST

- Spotify: http://isaacl.dev/podcast-spotify
- Apple Podcasts: http://isaacl.dev/podcast-apple
- Google Podcasts: http://isaacl.dev/podcast-google
- RSS: http://isaacl.dev/podcast-rss

You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com

Coffee and Open Source is hosted by Isaac Levin (https://twitter.com/isaacrlevin)

Read the whole story
alvinashcraft
41 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Making the Map for Everyone!

1 Share
From: Fritz's Tech Tips and Chatter
Duration: 0:00
Views: 42

I've got Blazor Component updates and I'm breaking the map out for ALL streamers to use!


Build Your First AI Search in .NET with Nuclia RAG

1 Share
From: Fritz's Tech Tips and Chatter
Duration: 13:29
Views: 22

Learn how to build AI-powered search with Progress® Nuclia RAG and .NET. In this first video of our 3-part series, you'll create a working "Blazor Migration Assistant" console app that answers questions using retrieval-augmented generation (RAG)—all in just 13 minutes.

In this video, you'll learn:
- What RAG (Retrieval-Augmented Generation) is and why it matters
- How to set up Progress Nuclia and explore the dashboard
- Installing the Progress.Nuclia .NET SDK via NuGet
- Configuring credentials with dotnet user-secrets
- Building a console app that queries a pre-loaded Knowledge Box
- Making your first AskAsync call to get AI-generated answers
- Streaming responses in real-time with AskStreamAsync
- Extracting and displaying citations for source attribution
- Handling errors with ApiResponse
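The retrieve-then-generate loop those bullets walk through can be sketched in a few lines of Python. This is a conceptual illustration only: the document list, the word-overlap scorer, and the citation shape are invented stand-ins, not the Progress.Nuclia SDK or its AskAsync API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is an illustrative stand-in for the real pipeline.

def retrieve(question, documents, top_k=2):
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(question, documents):
    """Ground the answer in retrieved context instead of model memory."""
    context = retrieve(question, documents)
    # A real system would pass `context` to an LLM; here we just
    # surface the retrieved passages as citations.
    return {"question": question, "citations": context}

docs = [
    "Blazor Server renders components on the server over SignalR.",
    "Blazor WebAssembly runs .NET in the browser.",
    "RAG grounds model answers in retrieved documents.",
]
result = answer("How does Blazor Server render components?", docs)
```

A production pipeline swaps the word-overlap scorer for vector search and hands the retrieved context to an LLM, which is the part the Knowledge Box and AskAsync call cover in the video.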

Part 1 of a 3-part series:
🎬 Part 1: Your First AI-Powered Search (this video)
🔜 Part 2: Build a Real App — Blazor web UI with document upload & chat
🔜 Part 3: Production Patterns — multi-tenant, structured output & knowledge graphs

Timestamps:
0:00 — Welcome & series overview
1:40 — What is RAG?
6:00 — SDK installation & setup
7:32 — Progress Nuclia dashboard walkthrough
9:47 — Making your first AskAsync call
10:58 — Streaming responses & citations

Resources & Links:
📦 Progress Agentic RAG: https://www.telerik.com/agenticrag
📚 NuGet Package: https://www.nuget.org/packages/Progress.Nuclia
🔗 SDK Repository: https://github.com/telerik/nuclia-dotnet-sdk

Find me online:
🎥 YouTube: https://youtube.com/csharpfritz
𝕏 X/Twitter: https://x.com/csharpfritz
💻 GitHub: https://github.com/csharpfritz
📺 Twitch: https://twitch.tv/csharpfritz

Hit subscribe and don't miss Part 2 where we build a full Blazor web app with document upload and real-time AI chat!


MVP Champ Spotlight - Uros Babic

1 Share

Uros is recognized as a Most Valuable Professional (MVP) by Microsoft, an award for exceptional community leaders with technical expertise, leadership, speaking experience, online influence, and a commitment to solving real-world problems. Learn more about MVPs and what it takes to become one here: FAQ | Most Valuable Professionals. Within our Security MVPs, Microsoft has hand-selected some of our top collaborative MVPs with a passion for working directly with the Product Group to share community insights with Microsoft and co-create content that helps address community needs. Read the interview below!

What first inspired you to pursue a career in cybersecurity?

My interest in cybersecurity began more than 20 years ago, long before it became a mainstream discipline. Early in my career, I was fascinated by how systems communicate, establish trust, and ultimately fail when that trust is broken. What started as a technical curiosity quickly grew into something far more meaningful.

As technology evolved, I witnessed security incidents become increasingly sophisticated and impactful. Attackers shifted from targeting isolated on‑premises environments to exploiting complex cloud and hybrid ecosystems, where identity, automation, and scale dramatically increased both attack surface and blast radius. Security incidents were no longer purely technical issues—they became business‑critical events with real operational, financial, and human consequences.

This evolution naturally led me toward Zero Trust principles—the idea that trust should never be implicit and must be continuously verified. I saw firsthand how traditional perimeter‑based models failed in cloud‑first and identity‑driven environments, and how modern attackers abused excessive trust, identity misconfigurations, and weak access controls to move laterally and escalate privileges.

What ultimately anchored me in cybersecurity was the realization that this field demands both deep technical expertise and responsibility. Over the past two decades, platforms, threats, and attack techniques have continuously changed—from on‑premises defenses to cloud‑native, identity‑centric attacks—but the core challenge has remained the same: protecting trust.

The need to constantly adapt, learn, and apply Zero Trust thinking to ever‑changing environments is what keeps me motivated in cybersecurity. In this field, standing still is not an option—and that ongoing challenge is exactly what continues to drive me forward.

Can you walk us through your journey to becoming a recognized MVP?

My journey to becoming a Microsoft MVP was long, intentional, and deeply tied to real-world security work across SIEM & XDR and Cloud Security. I spent years working hands‑on with Microsoft Sentinel, Defender XDR, and cloud security technologies, dealing with the realities security teams face every day—complex environments, noisy alerts, identity-driven attacks, and the constant pressure to do more with less. Over time, I realized that security operations and cloud security cannot be treated as separate disciplines; they must work together to be truly effective.

Much of my journey was built quietly, without guarantees or shortcuts. I consistently shared practical experiences—what worked, what failed, and what needed rethinking—rooted in real operational challenges. There were long periods of effort without visibility, recognition, or certainty that it would lead anywhere. But I kept going because the goal was never a title; it was to help others navigate complexity and build better security outcomes.

Being recognized as an MVP in both SIEM & XDR and Cloud Security was especially meaningful because it reflected sustained impact across two demanding domains. That recognition represented patience, persistence, and long‑term contribution—not a single moment or achievement. It reinforced my belief that staying hands‑on, sharing honestly, and consistently giving back to the community ultimately matters.

As a Microsoft Security MVP, I’m excited to keep contributing to the global tech community — sharing insights, exchanging knowledge, and learning together. Here’s to continued innovation, collaboration, and pushing the boundaries of what we can achieve in cybersecurity and AI!

What does being named an MVP mean to you personally and professionally?

Being named a Microsoft MVP is deeply personal to me because it reflects years of consistent blogging and intentional knowledge sharing, rather than a single achievement or milestone. Personally, the MVP award validates the time and effort I’ve invested in writing detailed technical blog posts focused on real‑world security challenges—often based on hands‑on experience, lessons learned the hard way, and practical scenarios from production environments. Blogging has been my primary way of contributing to the community: turning complex topics into clear, actionable guidance that helps others learn faster, avoid common mistakes, and gain confidence in their work.

Professionally, being an MVP represents a responsibility to continue that consistency. It means staying hands‑on, continuing to document real experiences, and mentoring others through written content that is honest, practical, and grounded in reality. I strongly believe blogging is one of the most powerful forms of mentorship—it scales knowledge, creates long‑term value, and supports people I may never meet directly. As an MVP, I also see blogging as a bridge between practitioners and Microsoft: sharing real‑world feedback, highlighting gaps, and helping shape solutions that truly work in day‑to‑day security operations. Ultimately, being an MVP reinforces my commitment to long‑term contribution through blogging, transparency, and community‑driven growth.

What are the biggest security challenges organizations are facing today?

Organizations are facing a rapid evolution in attack tactics, especially around identity, cloud, and unmanaged assets. One major challenge is ransomware targeting cloud servers, where attackers exploit identity access, lateral movement, and misconfigured cloud workloads to reach critical resources. Another growing risk comes from human‑operated ransomware (HumOR) attacks starting on unmanaged or lightly managed devices, which bypass traditional controls and use identity credentials to move into the enterprise environment.

Additionally, adversary‑in‑the‑middle business email compromise (BEC) attacks are becoming more sophisticated, enabling attackers to intercept authentication flows, steal session tokens or credentials, and impersonate trusted users without triggering traditional alerts. Across all these scenarios, the common theme is the abuse of identity and trust relationships rather than direct exploitation of infrastructure. This makes visibility across identity, cloud, endpoints, and email—and the ability to correlate signals across them—a critical challenge for modern security teams.

In your experience, what’s one vulnerability that teams consistently overlook?

Identity misconfigurations are the most consistently overlooked vulnerability I see across organizations. This includes over‑privileged users, legacy and unmanaged service accounts, dormant identities, weak or inconsistently applied conditional access policies, and unmanaged or non‑compliant devices authenticating into trusted environments.

Many security teams continue to focus primarily on perimeter defenses or endpoint protections, while modern attackers increasingly target identity as the primary entry point—and the easiest path to lateral movement and privilege escalation. In cloud and hybrid environments especially, attackers don’t need malware if they can abuse trust relationships, token theft, or misconfigured access controls.

What makes identity risk particularly dangerous is that small gaps—often labeled as low risk or deferred as technical debt—can be easily chained into full attack paths. A single stale service account, excessive directory role, or missing conditional access policy can undermine even the most advanced security tooling.

Treating identity as a core security control, not just an authentication layer, is essential. This means continuous identity hygiene, least‑privilege enforcement, visibility into attack paths, and validating how identity controls behave under real attack scenarios. Organizations that fail to prioritize identity security often discover it only after an incident—when it’s already been exploited.
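The chaining point above lends itself to a toy illustration: treat each identity gap as an edge in a graph, and a full attack path is just a path search. The identities and edges below are hypothetical examples invented for this sketch, not findings from any real environment or tool.

```python
# Toy attack-path search: each edge is a small, individually
# "low risk" gap; chained together they reach a privileged target.
# All identities and edges here are hypothetical examples.
from collections import deque

edges = {
    "phished-user": ["stale-service-account"],      # shared workstation
    "stale-service-account": ["app-registration"],  # old credential
    "app-registration": ["directory-admin"],        # excessive role
}

def attack_path(start, target):
    """Breadth-first search for a path of chained misconfigurations."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            queue.append(path + [nxt])
    return None

path = attack_path("phished-user", "directory-admin")
```

The takeaway matches the interview: no single edge looks alarming on its own, but removing any one of them (deleting the stale account, trimming the directory role) breaks the whole path.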

Can you share a project or achievement you’re particularly proud of?

One project I’m particularly proud of involved helping organizations transition from fragmented security tooling to a unified security operations model built around Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Security Copilot. The objective went well beyond tool consolidation—it was about fundamentally improving how SOC teams investigate threats, respond to incidents, and proactively hunt across their environments.

The work included redesigning detection logic, introducing SOC‑as‑code practices, and implementing automation for incident response and attack disruption. We also embedded Microsoft Security Copilot directly into investigation and threat‑hunting workflows, enabling analysts to quickly summarize incidents, understand attack paths, pivot across multiple data sources, and accelerate investigations using natural‑language queries.

The outcomes were tangible and measurable: reduced alert noise, faster investigation times, greater analyst confidence, and significantly less burnout. Seeing SOC teams shift from reactive firefighting to proactive, intelligence‑driven security operations made this project especially meaningful to me.

I’m also proud to share that I successfully completed the Microsoft Connected Security Program 2025, earning five Microsoft Black Belt badges across key security domains aligned with my MVP categories, SIEM/XDR and Cloud Security:

Microsoft Sentinel SIEM

Microsoft Defender XDR

Microsoft Defender for Cloud

Microsoft Defender for Cloud Apps

Microsoft Defender for Endpoint

This marks three incredible years in the Microsoft Customer Connection Programs. I’m deeply grateful to the amazing Microsoft Security Community team in Redmond—Kristina Quick, Pablo J. Chacón, Katie Ryckman, Linnet Kariuki, Adrian Moore, Kari Feistner, Jeena Cassidy, Rod Trent, and Ashley Martin—for their mentorship, collaboration, and inspiration throughout this journey.

I’m excited to continue this momentum and take on new CCP challenges in 2026, especially around Microsoft Threat Protection Advisors.

How do you balance strong security practices with user experience and productivity?

I focus on risk‑based, adaptive security rather than one‑size‑fits‑all controls. Strong security doesn’t have to create friction if it’s driven by context and automation. By using identity signals, device posture, behavior, and location, organizations can apply stricter controls only when risk increases, while keeping everyday user experience seamless.

In parallel, automation plays a critical role. In the SoftwareOne Global Center of Excellence, we actively apply SOC‑as‑code practices, where detections, response logic, and security workflows are built, versioned, and deployed consistently through automation. This approach reduces human error, speeds up response, and ensures security controls are applied reliably at scale without interrupting users. By combining identity‑driven controls with automated, repeatable SecOps processes, security becomes embedded into daily operations—protecting critical assets while enabling productivity instead of slowing it down.
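The risk-based, adaptive model described here can be sketched as a small decision function over context signals. The signal names and thresholds below are invented for illustration; a real implementation would express this as conditional access policy, not application code.

```python
def access_decision(signals):
    """Map context signals to a control, escalating only as risk grows.

    Signal names and risk weights are illustrative, not a real policy.
    """
    risk = 0
    if not signals.get("device_compliant", False):
        risk += 2
    if signals.get("unfamiliar_location", False):
        risk += 1
    if signals.get("impossible_travel", False):
        risk += 3

    if risk == 0:
        return "allow"        # seamless everyday experience
    if risk <= 2:
        return "require_mfa"  # light-touch step-up
    return "block"            # strict control only at high risk

decision = access_decision({"device_compliant": True,
                            "unfamiliar_location": True})
```

The shape is the point: friction is proportional to risk, so a compliant device from a familiar location sails through while the same account under anomalous conditions gets stepped up or blocked.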

What emerging threats or trends are you paying closest attention to right now?

I’m paying closest attention to the growing abuse of identity and trust relationships, especially in cloud‑first environments. Attacks increasingly bypass traditional malware and instead exploit valid credentials, token theft, MFA fatigue, and misconfigured identities to move laterally and persist quietly. Closely related to this is the rise of human‑operated ransomware, where attackers adapt in real time and leverage legitimate tools, APIs, and automation.

Another key trend is the convergence of cloud, identity, and security operations—attackers no longer distinguish between endpoints, SaaS, or cloud workloads, so defenders can’t either. Finally, I’m watching how AI is being used on both sides: defenders are gaining major advantages in investigation and hunting, while attackers are using AI to scale social engineering and reconnaissance. These trends reinforce the need for unified visibility, strong identity posture, and automation at SOC scale.

How is AI impacting cybersecurity—from both a defensive and offensive perspective?

AI is accelerating cybersecurity on both sides, changing not just the scale but the speed of attacks and defense. From a defensive perspective, AI helps security teams process massive volumes of signals, quickly summarize incidents, identify patterns, and accelerate investigation and threat hunting. Used correctly, AI reduces cognitive load on analysts and enables faster, more consistent decision‑making, especially in complex, multi‑signal environments.

From an offensive perspective, attackers are also using AI to scale social engineering, improve phishing quality, automate reconnaissance, and adapt attacks in real time. This lowers the barrier to entry and increases the effectiveness of identity‑based attacks. The key difference will be how responsibly defenders integrate AI into real security operations—pairing it with strong identity controls, automation, and human oversight. AI doesn’t replace skilled defenders, but it significantly amplifies their impact when embedded into unified security operations.

What tools or approaches do you find most effective in modern security work?

The most effective approach in modern security is unified security operations, where identity, endpoints, cloud workloads, email, SIEM, and XDR are treated as a single operational system. Platforms like Microsoft Defender XDR and Microsoft Sentinel provide this foundation, but real value comes from how teams operate them. High‑quality detections, meaningful threat hunting, and automation at SOC scale are what turn signals into outcomes.

Security Copilot plays an increasingly important role by using generative AI to accelerate investigations, summarize incidents, explain attack paths, and assist with threat hunting across multiple data sources. Combined with automatic attack disruption, which can stop attacks in progress by disabling compromised identities or containing affected assets, this allows SOC teams to act decisively and consistently. When these capabilities are implemented with greater automation, operations become repeatable, reliable, and scalable.

How do you stay up to date in such a rapidly evolving field?

Staying current in such a rapidly evolving field requires continuous, hands‑on engagement, not passive consumption of information. I spend a significant amount of time actively testing new security features, detection logic, and response capabilities in lab environments that mirror real‑world attack scenarios. This allows me to validate how modern defenses behave under pressure—particularly AI‑assisted investigations, automated attack disruption, and identity‑driven controls—and to understand how effectively they integrate into existing SOC workflows and operational processes.

Equally important is learning from real incidents. Post‑incident analysis, attack path reconstruction, and understanding how small misconfigurations turn into large‑scale compromises provide insights that no documentation or release notes ever could. These experiences directly influence how I design detections, tune response automation, and prioritize security controls in practice.

I also stay closely connected to the security community through conferences, deep technical discussions, and collaboration with other practitioners across different industries and regions. Exchanging perspectives with peers helps surface emerging attack patterns, operational challenges, and practical solutions that are often ahead of formal guidance.

Security evolves far too quickly for static knowledge. True relevance comes from experimentation, failure, iteration, and continuous learning. Actively breaking things, validating assumptions, and adapting to new threat models is what keeps skills sharp—and ensures that security strategies remain effective in real‑world environments rather than just on paper.

What role does community involvement play in your work as an MVP?

Community involvement plays a central role in my work as an MVP. The community is where real operational challenges surface first—often long before they appear in official documentation, best‑practice guides, or product roadmaps. Engaging with practitioners exposes the realities of running SIEM, XDR, and cloud security platforms under pressure, in environments shaped by legacy systems, constraints, and constantly evolving threats.

By actively sharing hands‑on experiences, lessons learned, and practical patterns—from Microsoft Sentinel and Defender XDR to Security Copilot and automated response—I aim to contribute knowledge grounded in real‑world operations, not theory alone. Mentoring, presenting, and openly discussing what works (and what doesn’t) helps the entire community move faster and avoid repeating the same mistakes.

Community engagement also serves as a continuous feedback loop. Real scenarios, questions, and incident stories directly inform better detections, more effective automation, and more usable security features. This connection between practitioners and platform capabilities helps bridge the gap between product design and operational reality—and ultimately contributes to stronger, more resilient security solutions.

Just as importantly, the community keeps my own perspective grounded. It continually challenges assumptions, surfaces new attack techniques, and highlights emerging operational pain points. This ensures that my contributions as an MVP in SIEM, XDR, and cloud security remain practical, relevant, and aligned with how SOC teams actually work day to day—not how we wish they worked.

In a field that evolves this quickly, community is not optional; it is an essential part of learning, teaching, and improving security outcomes together.

What advice would you give to aspiring cybersecurity professionals?

Start with strong fundamentals and don’t rush the journey. Identity, networking, operating systems, logging, and cloud architecture matter far more than any single tool. Invest time in building hands‑on experience—set up labs, simulate attacks, analyze logs, and understand how incidents actually unfold from initial access to impact. Theory is important, but real understanding comes from working with real scenarios.

As automation becomes a core part of modern security operations, focus on learning how to design, build, and validate automated workflows. Understand how detections trigger responses, how playbooks behave under different conditions, and how to safely automate containment without breaking business processes. SOAR automation should amplify your effectiveness, not obscure what’s happening. Most importantly, share what you learn through documentation, mentoring, or community discussions—teaching others helps solidify your own understanding and accelerates long‑term growth.

What skills or mindset traits set top security experts apart from the rest?

Top security experts combine deep curiosity, systems thinking, and humility. They don’t just learn tools—they seek to understand how technologies, identities, networks, and people interact as a system. They’re comfortable questioning assumptions, revisiting designs, and adapting as environments and threats evolve. Strong practitioners also value hands‑on validation: they test detections, break their own automations, and learn how incidents and hunting unfold end‑to‑end, not just how they’re described on slides. Just as important is the ability to communicate clearly—explaining risk, tradeoffs, and decisions—because effective security depends as much on collaboration as it does on technical depth.

Looking ahead, what excites you most about the future of cybersecurity?

What excites me most is the shift toward proactive, unified, and automated security operations. Security teams are moving away from isolated tools and manual workflows toward integrated platforms that correlate signals across identity, cloud, endpoints, and email. With better automation and intelligence‑driven workflows, SOCs can interrupt attacks earlier, reduce noise, and focus on real risk instead of constant reaction. When automation is applied thoughtfully—paired with strong fundamentals and human oversight—it has the potential to significantly improve both security outcomes and the day‑to‑day sustainability of security teams. That evolution is what I find most encouraging about the future of cybersecurity.


Latest enhancements for Copilot security, management, and analytics

1 Share

As Copilot becomes part of the daily workflow for more teams, IT and security leaders need clear, practical controls to deploy and manage it confidently—without slowing down adoption or compromising governance.

Let's take a closer look at the new enhancements providing built-in security, management, and governance controls to help give you greater visibility and control over your Copilot deployment.

Secure and Govern Microsoft 365 Copilot – a Foundational Deployment Guide

We are excited to share that we have updated and expanded the deployment blueprint to address oversharing in the Secure and Govern Microsoft 365 Copilot foundational deployment guidance! This guide provides three essential steps for establishing a secure and governed foundation for Copilot:

  1. Remediate oversharing
  2. Implement reliable guardrails
  3. Meet AI-related regulatory obligations

This delivers a foundational path to help every organization get started with Copilot confidently. Read more and download the blueprint at aka.ms/Copilot/SecureGovern!

As part of this deployment guidance, here are the features from Microsoft Purview to help you secure and govern Microsoft 365 Copilot.

Microsoft Purview Data Loss Prevention to safeguard Copilot prompts

Microsoft Purview Data Loss Prevention (DLP) for Microsoft 365 Copilot helps organizations prevent sensitive information from being used in Copilot prompts. Admins can define policies that detect and restrict prompts containing sensitive information types (SITs), ensuring that Copilot does not respond to such inputs or use them in any web grounding. Generally available.

Microsoft Purview Data Loss Prevention for web queries to safeguard sensitive web search

We are also expanding Microsoft Purview DLP for Microsoft 365 Copilot and Copilot Chat to safeguard web searches containing sensitive data. Admins now have the option to define policies that detect and restrict prompts containing sensitive information types (SITs) from being used for web search, while allowing the response to be grounded in Work IQ. This capability currently extends to Microsoft 365 Copilot and agents built in Copilot Studio that are published to Microsoft 365 Copilot. Now available in Public preview.
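The detect-and-restrict behavior described for both the prompt and web-query safeguards can be sketched with simple pattern matching. The patterns and policy shape below are rough illustrations only; Purview ships far richer SIT detectors than a pair of regular expressions.

```python
import re

# Illustrative sensitive-information-type (SIT) patterns. Purview's
# built-in SITs are much more sophisticated; this only sketches the
# detect-then-restrict policy shape.
SIT_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def evaluate_prompt(prompt):
    """Return which SITs a prompt triggers, and whether it is allowed."""
    hits = [name for name, pattern in SIT_PATTERNS.items()
            if pattern.search(prompt)]
    return {"allowed": not hits, "detected_sits": hits}

verdict = evaluate_prompt("Summarize account 4111 1111 1111 1111")
```

In the web-query variant described above, a hit would not block the whole prompt; it would only prevent the sensitive portion from leaving the tenant as a web search, letting the response stay grounded in work data.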

Microsoft Purview Data Security Posture Management: Remediate overshared files

Now, in addition to identifying sharing links across SharePoint sites, this enhanced capability lets admins remediate or disable overshared links at scale. This helps organizations proactively reduce data exposure, strengthen compliance posture, and ensure sensitive files are only accessible to the right people. Generally available.

Purview in Microsoft 365 Admin Center

Finally, AI and IT admins in the Microsoft 365 Admin Center can gain visibility into oversharing risks and drive remediation, understand how many sensitive Copilot interactions are protected, and turn on Microsoft Purview DLP for Copilot right there. This enables secure adoption of Copilot and collaboration with the security team. Generally available.

Organizational messages enhancements

Organizational Messages now includes email as a delivery channel, expanding beyond existing surfaces such as the Windows Taskbar, Windows Notifications, Windows Spotlight, and Teams Popovers. Email enables high‑confidence reach by delivering critical messages directly to users’ inbox. It also supports targeted, timely communication for important updates such as adoption nudges while providing a familiar, auditable channel IT can trust. Generally available.


Usage‑based targeting enables IT admins to drive Copilot adoption by delivering Organizational Messages to users based on real usage behavior, not static group membership. By leveraging dynamic, pre‑defined usage segments, admins can reach the right users with timely, relevant guidance that accelerates awareness, engagement, and value from Copilot investments. Rolling out to general availability this month.

Copilot Dashboard: Expanded access and deeper insights

The Copilot Dashboard is now available to customers with at least one Microsoft 365 Copilot license, and includes new capabilities that help you understand how Copilot is being used, how it is changing work patterns, and where it’s delivering value. Analyze how your organization uses both Microsoft 365 Copilot and Copilot Chat so you can make informed decisions about rollout and enablement. Metrics include total users, usage trends, adoption by group, intensity, retention, and app-level breakdowns. Generally available.

User satisfaction tracking at scale

Understand how users perceive the value of Microsoft 365 Copilot by analyzing real‑time feedback captured in their natural workflow. This feature aggregates thumbs‑up and thumbs‑down reactions to Copilot responses across Microsoft 365 apps, helping you track satisfaction trends over time and compare satisfaction across groups. These insights help you identify where Copilot is resonating—and where additional guidance or enablement may be needed. Generally available.

New Intent Patterns Across Microsoft 365 Apps

Get a deeper understanding of how employees use Copilot to get work done. New intent‑based metrics categorize individual user prompts into common intent categories to better understand Copilot usage. Analyze common Copilot tasks and usage patterns, including activity in the Microsoft 365 Copilot app, Edge, and OneNote, as well as key scenarios in Outlook, Word, Excel, and PowerPoint such as suggested reply, translate, coach, and clean data. These metrics are available in the Copilot Dashboard and advanced reporting in Viva Insights to support deeper behavioral analysis. Rolling out to public preview this month.

Export Copilot Dashboard Data

Extend your analysis beyond the dashboard by exporting Copilot metrics for custom reporting in your own reporting tools. You can download de‑identified Copilot Dashboard data as a CSV file, including weekly metrics covering the past six months, to support offline analysis or integration with other analytics tools. This makes it easier to tailor insights to your organization’s specific reporting needs. Rolling out to general availability this month.
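Once the de-identified CSV is exported, offline analysis can be a small script. The column names and rows below are assumptions made up for illustration, since the actual export schema isn't described here.

```python
import csv
import io

# Hypothetical export rows; the real Copilot Dashboard CSV columns
# may differ -- these names are assumptions for illustration.
export = io.StringIO(
    "week,group,active_users\n"
    "2025-01-06,Sales,120\n"
    "2025-01-06,Engineering,340\n"
    "2025-01-13,Sales,150\n"
    "2025-01-13,Engineering,360\n"
)

# Roll the per-group weekly metrics up into org-wide weekly totals.
totals = {}
for row in csv.DictReader(export):
    totals[row["week"]] = totals.get(row["week"], 0) + int(row["active_users"])
```

The same pattern extends to whatever columns the real export contains: read the file, group by the dimension you care about, and feed the result into your own reporting tools.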

Conclusion & next steps

These latest updates are designed to help IT administrators and security professionals address practical needs when deploying Microsoft 365 Copilot: establish a secure and governed foundation, drive adoption, and understand where it’s delivering value. Learn more in the Get started deploying Copilot and agents playbook and at aka.ms/Copilot/SecureGovern.


Flutter’s Material and Cupertino code freeze

1 Share
Material and Cupertino libraries are frozen and will be moved from the Flutter framework to new packages

We’ve been hard at work preparing to decouple Material and Cupertino from the Framework, and now our first major milestone has arrived! As of April 7th, all contributions to the Material and Cupertino libraries in flutter/flutter are frozen. Our next milestone will be the re-release of these libraries as the material_ui and cupertino_ui packages on pub.dev.

This means that, after the code freeze, no further changes will be accepted to the Material and Cupertino libraries inside flutter/flutter. Development on these libraries will resume in the flutter/packages repository once the new packages are released.

If you write Flutter apps or plugins, but don’t contribute to Material or Cupertino itself, you can stop reading now. This won’t affect you… yet.

After the 3.44 stable release, the new packages will be published and developers will eventually need to migrate. The old Material and Cupertino code will be deprecated in the stable release after 3.44 and deleted some time after that. Of course, when the time comes, we’ll follow up with detailed instructions about this migration.

For those who actively contribute to these libraries or are otherwise invested in their development, here are some things you should know:

What if you have PRs in flight?

Despite the code freeze, we want development on Material and Cupertino to continue with minimal interruption! Any open PRs that touch Material or Cupertino should remain open, and reviewers will continue reviewing and giving feedback as usual. Once the new packages are published, we will provide instructions on how to port these kinds of PRs to flutter/packages. Eventually, your change will go out as a part of a new material_ui or cupertino_ui release.

How about new and existing issues related to Material and Cupertino?

Issues that relate to Material or Cupertino will remain in flutter/flutter as always. This unified issue tracker approach is the same pattern that we follow for other packages in the flutter/packages repo and a few other repositories.

Why freeze the code now?

When we release the 1.0.0 versions of the material_ui and cupertino_ui packages, we want every Flutter developer who is ready to migrate to have a seamless migration process, regardless of which release channel they’re coming from. That means we need to keep the risk of breaking changes to an absolute minimum between the Material and Cupertino libraries in flutter/flutter and in flutter/packages. We can achieve this by freezing the code one stable release cycle ahead and copying that frozen code to the new packages.

The first step in the migration process for Flutter developers is to perform a normal SDK migration to v3.44 or above on any channel. Once there, we know that they have a copy of Material and Cupertino that is frozen. Even if they upgrade their SDK again, that Material and Cupertino code will not change (until it’s deprecated and deleted in the long term). What’s more, we know that the frozen Material and Cupertino code is identical to the code in the 1.0.0 material_ui and cupertino_ui packages, or as close to identical as possible. From there, the developer can migrate from the Material and Cupertino code inside their copy of the SDK onto the material_ui and cupertino_ui packages with minimal friction.
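To make the final step concrete, here is a hypothetical sketch of what the dependency change might look like once the packages ship. The package names come from this post, but the version constraints and the exact pubspec shape are assumptions; the official migration instructions will be the authoritative source.

```yaml
# pubspec.yaml — hypothetical post-migration dependencies.
# Package names are from this post; version constraints are assumptions.
dependencies:
  flutter:
    sdk: flutter
  material_ui: ^1.0.0   # would replace 'package:flutter/material.dart' imports
  cupertino_ui: ^1.0.0  # would replace 'package:flutter/cupertino.dart' imports
```

Because the frozen SDK code and the 1.0.0 packages are intended to be (as close as possible to) identical, swapping the dependency and updating imports should be the bulk of the work.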

How we got here

It’s been a long journey to this point, with many contributions and much feedback from across the community. A few months ago, when I realized that we had test dependencies that would get in the way of decoupling, I posted an issue and figured I was in for a lot of migration work. Instead, contributors from across the community immediately jumped in to help migrate hundreds of tests. The support we received, from first-time contributors to veterans alike, was critical to getting us ready for decoupling. THANK YOU!

What’s next?

After the code freeze, we’ll begin preparing for migration to the new material_ui and cupertino_ui packages. This includes tasks like porting the code over, implementing CI/CD, testing, and setting up docs infrastructure to make sure we can keep the same high quality developer experience that you expect from Flutter.

As the new packages near readiness, we’ll publish more information about how to migrate successfully, so keep an eye out. Also, if you see anything that you think we’ve missed, please jump in with an issue or a PR. We couldn’t have gotten this far without help from the amazing Flutter community, and we can’t wait to see where we’ll go from here.


Flutter’s Material and Cupertino code freeze was originally published in Flutter on Medium, where people are continuing the conversation by highlighting and responding to this story.
