Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

MVP Champ Spotlight- Uros Babic


Uros is recognized as a Most Valuable Professional (MVP) by Microsoft as an exceptional community leader for his technical expertise, leadership, speaking experience, online influence, and commitment to solving real-world problems. Learn more about MVPs and what it takes to become one here: FAQ | Most Valuable Professionals. Within our Security MVPs, Microsoft has hand-selected some of our top collaborative MVPs with a passion for working directly with the Product Group to share community insights with Microsoft and co-create content to help address community needs. Read the interview below!

What first inspired you to pursue a career in cybersecurity?

My interest in cybersecurity began more than 20 years ago, long before it became a mainstream discipline. Early in my career, I was fascinated by how systems communicate, establish trust, and ultimately fail when that trust is broken. What started as a technical curiosity quickly grew into something far more meaningful.

As technology evolved, I witnessed security incidents become increasingly sophisticated and impactful. Attackers shifted from targeting isolated on‑premises environments to exploiting complex cloud and hybrid ecosystems, where identity, automation, and scale dramatically increased both attack surface and blast radius. Security incidents were no longer purely technical issues—they became business‑critical events with real operational, financial, and human consequences.

This evolution naturally led me toward Zero Trust principles—the idea that trust should never be implicit and must be continuously verified. I saw firsthand how traditional perimeter‑based models failed in cloud‑first and identity‑driven environments, and how modern attackers abused excessive trust, identity misconfigurations, and weak access controls to move laterally and escalate privileges.

What ultimately anchored me in cybersecurity was the realization that this field demands both deep technical expertise and responsibility. Over the past two decades, platforms, threats, and attack techniques have continuously changed—from on‑premises defenses to cloud‑native, identity‑centric attacks—but the core challenge has remained the same: protecting trust.

The need to constantly adapt, learn, and apply Zero Trust thinking to ever‑changing environments is what keeps me motivated in cybersecurity. In this field, standing still is not an option—and that ongoing challenge is exactly what continues to drive me forward.

Can you walk us through your journey to becoming a recognized MVP?

My journey to becoming a Microsoft MVP was long, intentional, and deeply tied to real-world security work across SIEM & XDR and Cloud Security. I spent years working hands‑on with Microsoft Sentinel, Defender XDR, and cloud security technologies, dealing with the realities security teams face every day—complex environments, noisy alerts, identity-driven attacks, and the constant pressure to do more with less. Over time, I realized that security operations and cloud security cannot be treated as separate disciplines; they must work together to be truly effective.

Much of my journey was built quietly, without guarantees or shortcuts. I consistently shared practical experiences—what worked, what failed, and what needed rethinking—rooted in real operational challenges. There were long periods of effort without visibility, recognition, or certainty that it would lead anywhere. But I kept going because the goal was never a title; it was to help others navigate complexity and build better security outcomes.

Being recognized as an MVP in both SIEM & XDR and Cloud Security was especially meaningful because it reflected sustained impact across two demanding domains. That recognition represented patience, persistence, and long‑term contribution—not a single moment or achievement. It reinforced my belief that staying hands‑on, sharing honestly, and consistently giving back to the community ultimately matters.

As a Microsoft Security MVP, I’m excited to keep contributing to the global tech community — sharing insights, exchanging knowledge, and learning together. Here’s to continued innovation, collaboration, and pushing the boundaries of what we can achieve in cybersecurity and AI!

What does being named an MVP mean to you personally and professionally?

Being named a Microsoft MVP is deeply personal to me because it reflects years of consistent blogging and intentional knowledge sharing, rather than a single achievement or milestone. Personally, the MVP award validates the time and effort I’ve invested in writing detailed technical blog posts focused on real‑world security challenges—often based on hands‑on experience, lessons learned the hard way, and practical scenarios from production environments. Blogging has been my primary way of contributing to the community: turning complex topics into clear, actionable guidance that helps others learn faster, avoid common mistakes, and gain confidence in their work.

Professionally, being an MVP represents a responsibility to continue that consistency. It means staying hands‑on, continuing to document real experiences, and mentoring others through written content that is honest, practical, and grounded in reality. I strongly believe blogging is one of the most powerful forms of mentorship—it scales knowledge, creates long‑term value, and supports people I may never meet directly. As an MVP, I also see blogging as a bridge between practitioners and Microsoft: sharing real‑world feedback, highlighting gaps, and helping shape solutions that truly work in day‑to‑day security operations. Ultimately, being an MVP reinforces my commitment to long‑term contribution through blogging, transparency, and community‑driven growth.

What are the biggest security challenges organizations are facing today?

Organizations are facing a rapid evolution in attack tactics, especially around identity, cloud, and unmanaged assets. One major challenge is ransomware targeting cloud servers, where attackers exploit identity access, lateral movement, and misconfigured cloud workloads to reach critical resources. Another growing risk comes from human‑operated ransomware (HumOR) attacks starting on unmanaged or lightly managed devices, which bypass traditional controls and use identity credentials to move into the enterprise environment.
Additionally, adversary‑in‑the‑middle business email compromise (BEC) attacks are becoming more sophisticated, enabling attackers to intercept authentication flows, steal session tokens or credentials, and impersonate trusted users without triggering traditional alerts. Across all these scenarios, the common theme is the abuse of identity and trust relationships rather than direct exploitation of infrastructure. This makes visibility across identity, cloud, endpoints, and email—and the ability to correlate signals across them—a critical challenge for modern security teams.
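The cross-domain correlation described above can be sketched in a few lines. This is a toy Python example, not any product's actual logic; the alert fields and source names are hypothetical. It groups alerts by user and flags users who appear across multiple signal sources, a rough proxy for a cross-domain attack chain:

```python
from collections import defaultdict

# Illustrative only: alert records a SOC might pull from identity,
# email, and endpoint tools (field names are hypothetical).
alerts = [
    {"user": "amy", "source": "identity", "title": "Risky sign-in"},
    {"user": "amy", "source": "email", "title": "AiTM phishing click"},
    {"user": "amy", "source": "endpoint", "title": "Suspicious token use"},
    {"user": "bob", "source": "endpoint", "title": "Malware blocked"},
]

def correlate_by_user(alerts, min_sources=2):
    """Group alerts per user and flag users seen across multiple
    signal sources."""
    by_user = defaultdict(list)
    for alert in alerts:
        by_user[alert["user"]].append(alert)
    return {
        user: items
        for user, items in by_user.items()
        if len({a["source"] for a in items}) >= min_sources
    }

suspicious = correlate_by_user(alerts)
```

Here only "amy" is flagged, because her alerts span three distinct sources while "bob" triggers a single endpoint alert; real platforms do this correlation at scale, with time windows and entity resolution on top.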

In your experience, what’s one vulnerability that teams consistently overlook?

Identity misconfigurations are the most consistently overlooked vulnerability I see across organizations. This includes over‑privileged users, legacy and unmanaged service accounts, dormant identities, weak or inconsistently applied conditional access policies, and unmanaged or non‑compliant devices authenticating into trusted environments.

Many security teams continue to focus primarily on perimeter defenses or endpoint protections, while modern attackers increasingly target identity as the primary entry point—and the easiest path to lateral movement and privilege escalation. In cloud and hybrid environments especially, attackers don’t need malware if they can abuse trust relationships, token theft, or misconfigured access controls.

What makes identity risk particularly dangerous is that small gaps—often labeled as low risk or deferred as technical debt—can be easily chained into full attack paths. A single stale service account, excessive directory role, or missing conditional access policy can undermine even the most advanced security tooling.

Treating identity as a core security control, not just an authentication layer, is essential. This means continuous identity hygiene, least‑privilege enforcement, visibility into attack paths, and validating how identity controls behave under real attack scenarios. Organizations that fail to prioritize identity security often discover it only after an incident—when it’s already been exploited.
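As a small illustration of the identity-hygiene point above, here is a hedged Python sketch. The account fields, the 90-day idle threshold, and the sample data are assumptions for illustration, not a specific directory's schema; in practice the inventory would come from a directory export or identity-governance report:

```python
from datetime import date, timedelta

# Hypothetical account inventory (illustrative data only).
accounts = [
    {"name": "svc-legacy-backup", "last_sign_in": date(2023, 1, 10), "privileged": True},
    {"name": "jdoe",              "last_sign_in": date(2026, 3, 30), "privileged": False},
    {"name": "old-contractor",    "last_sign_in": date(2024, 6, 1),  "privileged": False},
]

def dormant_accounts(accounts, today, max_idle_days=90):
    """Return accounts idle beyond the threshold, privileged ones first."""
    cutoff = today - timedelta(days=max_idle_days)
    stale = [a for a in accounts if a["last_sign_in"] < cutoff]
    # Privileged dormant accounts are the riskiest, so sort them first.
    return sorted(stale, key=lambda a: not a["privileged"])
```

Even a report this crude surfaces the exact gap described above: a stale, privileged service account that nobody is watching.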

Can you share a project or achievement you’re particularly proud of?

One project I’m particularly proud of involved helping organizations transition from fragmented security tooling to a unified security operations model built around Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Security Copilot. The objective went well beyond tool consolidation—it was about fundamentally improving how SOC teams investigate threats, respond to incidents, and proactively hunt across their environments.

The work included redesigning detection logic, introducing SOC‑as‑code practices, and implementing automation for incident response and attack disruption. We also embedded Microsoft Security Copilot directly into investigation and threat‑hunting workflows, enabling analysts to quickly summarize incidents, understand attack paths, pivot across multiple data sources, and accelerate investigations using natural‑language queries.

The outcomes were tangible and measurable: reduced alert noise, faster investigation times, greater analyst confidence, and significantly less burnout. Seeing SOC teams shift from reactive firefighting to proactive, intelligence‑driven security operations made this project especially meaningful to me.

I’m also proud to share that I successfully completed the Microsoft Connected Security Program 2025, earning five Microsoft Black Belt badges across key security domains aligned with my MVP categories, SIEM & XDR and Cloud Security:

Microsoft Sentinel SIEM

Microsoft Defender XDR

Microsoft Defender for Cloud

Microsoft Defender for Cloud Apps

Microsoft Defender for Endpoint

This marks three incredible years in the Microsoft Customer Connection Programs. I’m deeply grateful to the amazing Microsoft Security Community team in Redmond—Kristina Quick, Pablo J. Chacón, Katie Ryckman, Linnet Kariuki, Adrian Moore, Kari Feistner, Jeena Cassidy, Rod Trent, and Ashley Martin—for their mentorship, collaboration, and inspiration throughout this journey.

I’m excited to continue this momentum and take on new CCP challenges in 2026, especially around Microsoft Threat Protection Advisors.

How do you balance strong security practices with user experience and productivity?

I focus on risk‑based, adaptive security rather than one‑size‑fits‑all controls. Strong security doesn’t have to create friction if it’s driven by context and automation. By using identity signals, device posture, behavior, and location, organizations can apply stricter controls only when risk increases, while keeping everyday user experience seamless.
In parallel, automation plays a critical role. In the SoftwareOne Global Center of Excellence, we actively apply SOC‑as‑code practices, where detections, response logic, and security workflows are built, versioned, and deployed consistently through automation. This approach reduces human error, speeds up response, and ensures security controls are applied reliably at scale without interrupting users. By combining identity‑driven controls with automated, repeatable SecOps processes, security becomes embedded into daily operations—protecting critical assets while enabling productivity instead of slowing it down.
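The SOC-as-code idea above can be illustrated with a minimal validation gate. Everything here is hypothetical (the field names, severity levels, and sample rule are not a real schema), but it shows the pattern: detections live in version control as data and must pass automated checks before deployment:

```python
# Minimal SOC-as-code sketch: a detection rule is validated before
# a pipeline is allowed to deploy it. Schema is an assumption.
REQUIRED_FIELDS = {"id", "name", "query", "severity", "owner"}
VALID_SEVERITIES = {"low", "medium", "high", "critical"}

def validate_rule(rule):
    """Return a list of problems; an empty list means deployable."""
    problems = []
    missing = REQUIRED_FIELDS - rule.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if rule.get("severity") not in VALID_SEVERITIES:
        problems.append(f"invalid severity: {rule.get('severity')!r}")
    return problems

rule = {
    "id": "det-001",
    "name": "High-risk sign-in",
    "query": "SigninLogs | where RiskLevel == 'high'",  # illustrative query text
    "severity": "high",
    "owner": "soc-team",
}
```

A CI job that runs this check on every commit is what turns "detections as documents" into detections as code: reviewable, versioned, and rejected automatically when malformed.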

What emerging threats or trends are you paying closest attention to right now?

I’m paying closest attention to the growing abuse of identity and trust relationships, especially in cloud‑first environments. Attacks increasingly bypass traditional malware and instead exploit valid credentials, token theft, MFA fatigue, and misconfigured identities to move laterally and persist quietly. Closely related to this is the rise of human‑operated ransomware, where attackers adapt in real time and leverage legitimate tools, APIs, and automation.
Another key trend is the convergence of cloud, identity, and security operations—attackers no longer distinguish between endpoints, SaaS, or cloud workloads, so defenders can’t either. Finally, I’m watching how AI is being used on both sides: defenders are gaining major advantages in investigation and hunting, while attackers are using AI to scale social engineering and reconnaissance. These trends reinforce the need for unified visibility, strong identity posture, and automation at SOC scale.

How is AI impacting cybersecurity—from both a defensive and offensive perspective?

AI is accelerating cybersecurity on both sides, changing not just the scale but the speed of attacks and defense. From a defensive perspective, AI helps security teams process massive volumes of signals, quickly summarize incidents, identify patterns, and accelerate investigation and threat hunting. Used correctly, AI reduces cognitive load on analysts and enables faster, more consistent decision‑making, especially in complex, multi‑signal environments.

From an offensive perspective, attackers are also using AI to scale social engineering, improve phishing quality, automate reconnaissance, and adapt attacks in real time. This lowers the barrier to entry and increases the effectiveness of identity‑based attacks. The key difference will be how responsibly defenders integrate AI into real security operations—pairing it with strong identity controls, automation, and human oversight. AI doesn’t replace skilled defenders, but it significantly amplifies their impact when embedded into unified security operations.

What tools or approaches do you find most effective in modern security work?

The most effective approach in modern security is unified security operations, where identity, endpoints, cloud workloads, email, SIEM, and XDR are treated as a single operational system. Platforms like Microsoft Defender XDR and Microsoft Sentinel provide this foundation, but real value comes from how teams operate them. High‑quality detections, meaningful threat hunting, and automation at SOC scale are what turn signals into outcomes.
Security Copilot plays an increasingly important role by using generative AI to accelerate investigations, summarize incidents, explain attack paths, and assist with threat hunting across multiple data sources. Combined with automatic attack disruption, which can stop attacks in progress by disabling compromised identities or containing affected assets, this allows SOC teams to act decisively and consistently. When these capabilities are implemented with greater automation, operations become repeatable, reliable, and scalable.

How do you stay up to date in such a rapidly evolving field?

Staying current in such a rapidly evolving field requires continuous, hands‑on engagement, not passive consumption of information. I spend a significant amount of time actively testing new security features, detection logic, and response capabilities in lab environments that mirror real‑world attack scenarios. This allows me to validate how modern defenses behave under pressure—particularly AI‑assisted investigations, automated attack disruption, and identity‑driven controls—and to understand how effectively they integrate into existing SOC workflows and operational processes.

Equally important is learning from real incidents. Post‑incident analysis, attack path reconstruction, and understanding how small misconfigurations turn into large‑scale compromises provide insights that no documentation or release notes ever could. These experiences directly influence how I design detections, tune response automation, and prioritize security controls in practice.

I also stay closely connected to the security community through conferences, deep technical discussions, and collaboration with other practitioners across different industries and regions. Exchanging perspectives with peers helps surface emerging attack patterns, operational challenges, and practical solutions that are often ahead of formal guidance.

Security evolves far too quickly for static knowledge. True relevance comes from experimentation, failure, iteration, and continuous learning. Actively breaking things, validating assumptions, and adapting to new threat models is what keeps skills sharp—and ensures that security strategies remain effective in real‑world environments rather than just on paper.

What role does community involvement play in your work as an MVP?

Community involvement plays a central role in my work as an MVP. The community is where real operational challenges surface first—often long before they appear in official documentation, best‑practice guides, or product roadmaps. Engaging with practitioners exposes the realities of running SIEM, XDR, and cloud security platforms under pressure, in environments shaped by legacy systems, constraints, and constantly evolving threats.

By actively sharing hands‑on experiences, lessons learned, and practical patterns—from Microsoft Sentinel and Defender XDR to Security Copilot and automated response—I aim to contribute knowledge grounded in real‑world operations, not theory alone. Mentoring, presenting, and openly discussing what works (and what doesn’t) helps the entire community move faster and avoid repeating the same mistakes.

Community engagement also serves as a continuous feedback loop. Real scenarios, questions, and incident stories directly inform better detections, more effective automation, and more usable security features. This connection between practitioners and platform capabilities helps bridge the gap between product design and operational reality—and ultimately contributes to stronger, more resilient security solutions.

Just as importantly, the community keeps my own perspective grounded. It continually challenges assumptions, surfaces new attack techniques, and highlights emerging operational pain points. This ensures that my contributions as an MVP in SIEM, XDR, and cloud security remain practical, relevant, and aligned with how SOC teams actually work day to day—not how we wish they worked.

In a field that evolves this quickly, community is not optional; it is an essential part of learning, teaching, and improving security outcomes together.

What advice would you give to aspiring cybersecurity professionals?

Start with strong fundamentals and don’t rush the journey. Identity, networking, operating systems, logging, and cloud architecture matter far more than any single tool. Invest time in building hands‑on experience—set up labs, simulate attacks, analyze logs, and understand how incidents actually unfold from initial access to impact. Theory is important, but real understanding comes from working with real scenarios.
As automation becomes a core part of modern security operations, focus on learning how to design, build, and validate automated workflows. Understand how detections trigger responses, how playbooks behave under different conditions, and how to safely automate containment without breaking business processes. SOAR automation should amplify your effectiveness, not obscure what’s happening. Most importantly, share what you learn through documentation, mentoring, or community discussions—teaching others helps solidify your own understanding and accelerates long‑term growth.
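To make the point about safe automation concrete, here is a toy Python playbook with an explicit dry-run mode; `isolate_device` is a hypothetical stand-in for a real containment API, and the severity gating is an illustrative design choice, not guidance from any specific product:

```python
def isolate_device(device_id):
    """Hypothetical stand-in for a real containment API call."""
    return f"isolated:{device_id}"

def playbook(alert, dry_run=True):
    """Contain the device only for high-severity alerts; in dry-run
    mode, report what would happen instead of executing it."""
    if alert["severity"] != "high":
        return "no-action"
    if dry_run:
        return f"would-isolate:{alert['device_id']}"
    return isolate_device(alert["device_id"])

# Validate behavior before ever running against production.
assert playbook({"severity": "low", "device_id": "pc-1"}) == "no-action"
assert playbook({"severity": "high", "device_id": "pc-1"}) == "would-isolate:pc-1"
```

Defaulting `dry_run` to `True` is the key habit: containment only fires when someone deliberately flips the switch, which is exactly how to avoid automation breaking business processes.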

What skills or mindset traits set top security experts apart from the rest?

Top security experts combine deep curiosity, systems thinking, and humility. They don’t just learn tools—they seek to understand how technologies, identities, networks, and people interact as a system. They’re comfortable questioning assumptions, revisiting designs, and adapting as environments and threats evolve. Strong practitioners also value hands‑on validation: they test detections, break their own automations, and learn how incidents and hunting unfold end‑to‑end, not just how they’re described on slides. Just as important is the ability to communicate clearly—explaining risk, tradeoffs, and decisions—because effective security depends as much on collaboration as it does on technical depth.

Looking ahead, what excites you most about the future of cybersecurity?

What excites me most is the shift toward proactive, unified, and automated security operations. Security teams are moving away from isolated tools and manual workflows toward integrated platforms that correlate signals across identity, cloud, endpoints, and email. With better automation and intelligence‑driven workflows, SOCs can interrupt attacks earlier, reduce noise, and focus on real risk instead of constant reaction. When automation is applied thoughtfully—paired with strong fundamentals and human oversight—it has the potential to significantly improve both security outcomes and the day‑to‑day sustainability of security teams. That evolution is what I find most encouraging about the future of cybersecurity.

Read the whole story
alvinashcraft
12 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Latest enhancements for Copilot security, management, and analytics


As Copilot becomes a daily workflow for more teams, IT and security leaders need clear, practical controls to deploy and manage confidently—without slowing down adoption or compromising governance.

Let's take a closer look at the new enhancements providing built-in security, management, and governance controls to help give you greater visibility and control over your Copilot deployment.

Secure and Govern Microsoft 365 Copilot – a Foundational Deployment Guide

We are excited to share that we have updated and expanded the Secure and Govern Microsoft 365 Copilot foundational deployment guide to address oversharing! This guide provides three essential steps for establishing a secure and governed foundation for Copilot:

  1. Remediate oversharing
  2. Implement reliable guardrails
  3. Meet AI-related regulatory obligations

Together, these steps deliver a foundational path to help every organization get started with Copilot with confidence. Read more and download the blueprint at aka.ms/Copilot/SecureGovern!

As part of this deployment guidance, here are the features from Microsoft Purview to help you secure and govern Microsoft 365 Copilot.

Microsoft Purview Data Loss Prevention to safeguard Copilot prompts

Microsoft Purview Data Loss Prevention (DLP) for Microsoft 365 Copilot helps organizations prevent sensitive information from being used in Copilot prompts. Admins can define policies that detect and restrict prompts containing sensitive information types (SITs), ensuring that Copilot does not respond to such inputs or use them in any web grounding. Generally available.
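Conceptually, SIT matching boils down to pattern detection over prompt text. The sketch below is a deliberately simplified stand-in: real Purview SITs use curated patterns, checksums, keyword proximity, and confidence levels, not two bare regexes:

```python
import re

# Toy stand-in for sensitive information type (SIT) matching.
SIT_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def blocked_sits(prompt):
    """Return the names of SITs detected in a prompt, sorted."""
    return sorted(name for name, pat in SIT_PATTERNS.items() if pat.search(prompt))

# A policy engine would block or redact prompts where this is non-empty.
assert blocked_sits("Summarize Q3 revenue trends") == []
```

The policy layer the announcement describes sits on top of detection like this: when a match is found, the prompt is restricted rather than forwarded to the model or to web grounding.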

Microsoft Purview Data Loss Prevention for web queries to safeguard sensitive web search

We are also expanding Microsoft Purview DLP for Microsoft 365 Copilot and Copilot Chat to safeguard web searches containing sensitive data. Admins now have the option to define policies that detect and restrict prompts containing sensitive information types (SITs) from being used for web search, while allowing the response to be grounded in Work IQ. This capability currently extends to Microsoft 365 Copilot and agents built in Copilot Studio that are published to Microsoft 365 Copilot. Now available in Public preview.

Microsoft Purview Data Security Posture Management: Remediate overshared files

Now, in addition to identifying sharing links across SharePoint sites, this enhanced capability enables bulk remediation by allowing admins to remediate or disable overshared links at scale. This helps organizations proactively reduce data exposure, strengthen compliance posture, and ensure sensitive files are only accessible to the right people. Generally available.

Purview in Microsoft 365 Admin Center

Finally, AI and IT admins in the Microsoft 365 Admin Center can gain visibility into oversharing risks and drive remediation, understand how many sensitive Copilot interactions are protected, and turn on Microsoft Purview DLP for Copilot right there. This enables secure adoption of Copilot and collaboration with the security team. Generally available.

Organizational messages enhancements

Organizational Messages now includes email as a delivery channel, expanding beyond existing surfaces such as the Windows Taskbar, Windows Notifications, Windows Spotlight, and Teams Popovers. Email enables high‑confidence reach by delivering critical messages directly to users’ inbox. It also supports targeted, timely communication for important updates such as adoption nudges while providing a familiar, auditable channel IT can trust. Generally available.

 

Usage‑based targeting enables IT admins to drive Copilot adoption by delivering Organizational Messages to users based on real usage behavior, not static group membership. By leveraging dynamic, pre‑defined usage segments, admins can reach the right users with timely, relevant guidance that accelerates awareness, engagement, and value from Copilot investments. Rolling out to general availability this month.

Copilot Dashboard: Expanded access and deeper insights

The Copilot Dashboard is now available to customers with at least 1 Microsoft 365 Copilot license, and includes new capabilities that help you understand how Copilot is being used, how it is changing work patterns, and where it’s delivering value. Analyze how your organization uses both Microsoft 365 Copilot and Copilot Chat so you can make informed decisions about rollout and enablement. Metrics include total users, usage trends, adoption by group, intensity, retention, and app-level breakdowns.​ Generally available.

User satisfaction tracking at scale

Understand how users perceive the value of Microsoft 365 Copilot by analyzing real‑time feedback captured in their natural workflow. This feature aggregates thumbs‑up and thumbs‑down reactions to Copilot responses across Microsoft 365 apps, helping you track satisfaction trends over time and compare satisfaction across groups. These insights help you identify where Copilot is resonating—and where additional guidance or enablement may be needed. Generally available.

New intent patterns across Microsoft 365 apps

Get a deeper understanding of how employees use Copilot to get work done. New intent‑based metrics categorize individual user prompts into common intent categories to better understand Copilot usage. Analyze common Copilot tasks and usage patterns, including activity in the Microsoft 365 Copilot app, Edge, and OneNote, as well as key scenarios in Outlook, Word, Excel, and PowerPoint such as suggested reply, translate, coach, and clean data. These metrics are available in the Copilot Dashboard and advanced reporting in Viva Insights to support deeper behavioral analysis. Rolling out to public preview this month.

Export Copilot Dashboard Data

Extend your analysis beyond the dashboard by exporting Copilot metrics for custom reporting in your own reporting tools. You can download de‑identified Copilot Dashboard data as a CSV file, including weekly metrics covering the past six months, to support offline analysis or integration with other analytics tools. This makes it easier to tailor insights to your organization’s specific reporting needs. Rolling out to general availability this month.
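Assuming an export shaped roughly like the sample below (the real CSV's column names may differ), a few lines of Python are enough to aggregate the weekly metrics offline:

```python
import csv
import io

# Hypothetical shape of an exported Copilot Dashboard CSV.
raw = """\
week,group,active_users
2026-03-02,Sales,120
2026-03-02,Engineering,340
2026-03-09,Sales,150
2026-03-09,Engineering,360
"""

def weekly_totals(csv_text):
    """Sum active users per week across all groups."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["week"]] = totals.get(row["week"], 0) + int(row["active_users"])
    return totals
```

From here the same dictionary feeds a spreadsheet, a BI tool, or a simple week-over-week trend line, which is the point of the de-identified export: the analysis happens in whatever tooling the organization already uses.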

Conclusion & next steps

These latest updates are designed to help IT administrators and security professionals address practical needs when deploying Microsoft 365 Copilot: establish a secure and governed foundation, drive adoption, and understand where it’s delivering value. Learn more in the Get started deploying Copilot and agents playbook and at aka.ms/Copilot/SecureGovern.


Flutter’s Material and Cupertino code freeze

Material and Cupertino libraries are frozen and will be moved from the Flutter framework to new packages

We’ve been hard at work preparing to decouple Material and Cupertino from the Framework, and now our first major milestone has arrived! As of April 7th, all contributions to the Material and Cupertino libraries in flutter/flutter are frozen. Our next milestone will be the re-release of these libraries as the material_ui and cupertino_ui packages on pub.dev.

This means that, after the code freeze, no more changes will be allowed to the Material and Cupertino libraries inside of flutter/flutter. Further development on these libraries will resume in the flutter/packages repository once the new packages are released.

If you write Flutter apps or plugins, but don’t contribute to Material or Cupertino itself, you can stop reading now. This won’t affect you… yet.

After the 3.44 stable release, the new packages will be published and developers will eventually need to migrate. The old Material and Cupertino code will be deprecated in the stable release after 3.44 and deleted some time after that. Of course, when the time comes, we’ll follow up with detailed instructions about this migration.

For those who actively contribute to these libraries or are otherwise invested in their development, here are some things you should know:

What if you have PRs in flight?

Despite the code freeze, we want development on Material and Cupertino to continue with minimal interruption! Any open PRs that touch Material or Cupertino should remain open, and reviewers will continue reviewing and giving feedback as usual. Once the new packages are published, we will provide instructions on how to port these kinds of PRs to flutter/packages. Eventually, your change will go out as a part of a new material_ui or cupertino_ui release.

How about new and existing issues related to Material and Cupertino?

Issues that relate to Material or Cupertino will remain in flutter/flutter as always. This unified issue tracker approach is the same pattern that we follow for other packages in the flutter/packages repo and a few other repositories.

Why freeze the code now?

The moment that we release the 1.0.0 versions of the material_ui and cupertino_ui packages, we think it’s important to have a seamless migration process for every Flutter developer who is ready to migrate, regardless of which release channel they’re coming from. This means that we need to keep the risk of breaking changes to an absolute minimum between the Material and Cupertino libraries in flutter/flutter and in flutter/packages. We can achieve this by freezing the code one stable release cycle ahead and copying that frozen code to the new packages.

The first step in the migration process for Flutter developers is to perform a normal SDK migration to v3.44 or above on any channel. Once there, we know that they have a copy of Material and Cupertino that is frozen. Even if they upgrade their SDK again, that Material and Cupertino code will not change (until it’s deprecated and deleted in the long term). What’s more, we know that the frozen Material and Cupertino code is identical to the code in the 1.0.0 material_ui and cupertino_ui packages, or as close to identical as possible. From there, the developer can migrate from the Material and Cupertino code inside their copy of the SDK onto the material_ui and cupertino_ui packages with minimal friction.

How we got here

It’s been a long journey to this point with many contributions and feedback from across the community. A few months ago when I realized that we had test dependencies that would get in the way of decoupling, I posted an issue and figured I was in for a lot of migration work. Instead, contributors from across the community immediately jumped in to help migrate hundreds of tests. The support we received from first-time contributors to veterans was critical to getting us ready for decoupling. THANK YOU!

What’s next?

After the code freeze, we’ll begin preparing for migration to the new material_ui and cupertino_ui packages. This includes tasks like porting the code over, implementing CI/CD, testing, and setting up docs infrastructure to make sure we can keep the same high quality developer experience that you expect from Flutter.

As the new packages near readiness, we’ll publish more information about how to migrate successfully, so keep an eye out. Also, if you see anything that you think we’ve missed, please jump in with an issue or a PR. We couldn’t have gotten this far without help from the amazing Flutter community, and we can’t wait to see where we’ll go from here.


Flutter’s Material and Cupertino code freeze was originally published in Flutter on Medium, where people are continuing the conversation by highlighting and responding to this story.


Hello Developer: April 2026

The words "Hello World" floating against a space background with the Earth floating beneath.

In this edition:

  • Join us on bilibili and LinkedIn.
  • Catch up on essential sessions before WWDC26.
  • Build a travel app with sample code.
  • Browse the latest edition of our new design gallery.
  • Learn about the biggest-ever update to Analytics in App Store Connect.

Read now


Alternatives to the !important Keyword


Every now and then, I stumble onto an old project of mine, or worse, someone else’s, and I’m reminded just how chaotic CSS can get over time. In most of these cases, the !important keyword seems to be involved in one way or another. And it’s easy to understand why developers rely on it. It provides an immediate fix and forces a rule to take precedence in the cascade.

That’s not to say !important doesn’t have its place. The problem is that once you start using it, you’re no longer working with the cascade; you’re bypassing it. This can quickly get out of hand in larger projects with multiple people working on them, where each new override makes the next one harder.

Cascade layers, specificity tricks, smarter ordering, and even some clever selector hacks can often replace !important with something cleaner, more predictable, and far less embarrassing to explain to your future self.

Let’s talk about those alternatives.

Specificity and !important

Selector specificity is a deep rabbit hole, and a full treatment is beyond the scope of this discussion. That said, to understand why !important exists, we need to look at how CSS decides which rules apply in the first place. I wrote a brief overview on specificity that serves as a good starting point. Chris also has a concise piece on it. And if you really want to go deep into all the edge cases, Frontend Masters has a thorough breakdown.

In short, CSS gives each selector a kind of “weight.” When two rules target the same element, the rule with higher specificity wins. If the specificity is equal, the rule declared later in the stylesheet takes precedence.

  • Inline styles (style="...") are the heaviest.
  • ID selectors (#header) are stronger than classes or type selectors.
  • Class, attribute, and pseudo-class selectors (.btn, [type="text"], :hover) carry medium weight.
  • Type selectors and pseudo-elements (div, p, ::before) have the lowest weight, though the universal selector (*) sits even lower, with a specificity of 0-0-0 compared to the 0-0-1 of type selectors.
/* Low specificity (0,0,1) */
p {
  color: gray;
}

/* Medium specificity (0,1,0) */
.button {
  color: blue;
}

/* High specificity (1,1,0) */
#header .button {
  color: red;
}
<!-- Inline style: sits above any selector specificity -->
<p style="color: green;">Hello</p>

Inline styles being the heaviest also explains why they’re often frowned upon and not considered “clean” CSS since they bypass most of the normal structure we try to maintain.

!important changes this behavior. It skips normal specificity and source order, pushing that declaration to the top within its origin and cascade layer:

p {
  color: red !important;
}

#main p {
  color: blue;
}

Even though #main p is more specific, the paragraph will appear red because the !important declaration overrides it.

Why !important can be problematic

Here’s the typical lifecycle of !important in a project involving multiple developers:

“Why isn’t this working? Add !important. Okay, fixed.”

Then someone else comes along and tries to change that same component. Their rule doesn’t apply, and after some digging, they find the !important. Now they have a choice:

  • remove it and risk breaking something else,
  • or add another !important to override it.

And since no one is completely sure why the first one was added, the safer move often feels like adding another one. This can quickly spiral out of control in larger projects.

On a more technical note, the fundamental problem with !important is that it breaks the intended order of the cascade. CSS is designed to resolve conflicts predictably through specificity and source order. Later rules override earlier ones, and more specific selectors override less specific ones.

A common place where this becomes obvious is theme switching. Consider the example below:

.button {
  color: red !important;
}

.dark .button {
  color: white;
}

Even inside a dark theme, the button stays red. This makes the stylesheet harder to reason about, because the cascade is no longer predictable.

In large teams especially, this makes maintenance and debugging harder. None of this means !important should never be used. There are legitimate cases for it, especially in utility classes, accessibility overrides, or user stylesheets. But if you’re reaching for it as your go-to way to resolve a styling conflict, it’s usually a sign that something else in the cascade needs attention.

Let’s look at alternatives.

Cascade layers

Cascade layers are a more advanced feature of CSS, and there’s a lot of theory on them. For the purposes of this discussion, we’ll focus on how they help you avoid !important. If you want to learn more, Miriam Suzanne wrote a complete guide on CSS Cascade Layers that goes into considerable detail.

In short, cascade layers let you define explicit priority groups in your CSS. Instead of relying on selector specificity, you decide up front which category of styles should take precedence. You declare the layer order first:

@layer reset, defaults, components, utilities;

This establishes priority from lowest to highest. Now you can add styles into those layers:

@layer defaults {
  a:any-link {
    color: maroon;
  }
}

@layer utilities {
  [data-color='brand'] {
    color: green;
  }
}

Even though [data-color='brand'] has lower specificity than a:any-link, the utilities layer takes precedence because it was defined later in the layer stack.

It’s worth noting that specificity still works inside a layer. But between layers, layer order is given priority.
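For instance, two rules inside the same layer still compete on specificity as usual (the selectors here are illustrative):

```css
@layer components {
  .card .title {
    color: navy; /* (0,2,0) wins inside the layer... */
  }

  .title {
    color: gray; /* ...over this (0,1,0) rule, even though it comes later */
  }
}
```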

With cascade layers, you can prioritize entire categories of styles instead of individual rules. For example, your “overrides” layer always takes precedence over your “base” layer. This sort of architectural thinking, instead of reactive fixing, saves a lot of headaches down the line.

One very common example is integrating third-party CSS. If a framework ships with highly specific selectors, you can do this:

@layer framework, components;

@import url('framework.css') layer(framework);

@layer components {
  .card {
    padding: 2rem;
  }
}

Now your component styles automatically override the framework styles, regardless of their selector specificity, as long as the framework isn’t using !important.

And while we’re talking about it, it’s worth noting that using !important with cascade layers is counterintuitive: !important reverses the layer order. It is no longer a quick way to jump to the top of the priorities, but an integrated part of cascade layering; a way for lower layers to insist that some of their styles are essential.

So, if we were to order a set of layers like this:

  1. utilities (most powerful)
  2. components
  3. defaults (least powerful)

Using !important flips things on their head:

  1. !important defaults (most powerful)
  2. !important components
  3. !important utilities
  4. normal utilities
  5. normal components
  6. normal defaults (least powerful)

Notice what happens there: !important generates three new, reversed important layers that supersede the original three, flipping the entire order.
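A quick sanity check of that reversal (the layer and class names are illustrative):

```css
@layer defaults, utilities;

@layer defaults {
  .btn { color: gray !important; } /* important + lowest layer = highest priority */
}

@layer utilities {
  .btn { color: green; } /* normal declaration in the "strongest" layer */
}

/* The button renders gray: the important declaration in the earlier
   (lower) layer beats the normal declaration in the later layer. */
```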

The :is() pseudo

The :is() pseudo-class is interesting because it takes the specificity of its most specific argument. Say you have a component that needs to match the weight of a more specific selector elsewhere in the codebase:

/* somewhere in your styles */
#sidebar a {
  color: gray;
}

/* your component */
.nav-link {
  color: blue;
}

Rather than using !important, you can bump .nav-link up by wrapping it in :is() with a more specific argument:

:is(#some_id, .nav-link) {
  color: blue;
}

Now this has ID-level specificity while matching only .nav-link. It’s worth noting that the selector inside :is() doesn’t have to match an actual element; we’re using #some_id purely to increase specificity in this case.

Note: If #some_id actually exists in your markup, this selector would also match that element, so it’s best to pick an ID that isn’t used anywhere to avoid side effects.

On the flip side, :where() does the opposite. It always resolves to a specificity of (0,0,0), no matter what’s inside it. This is handy for reset or base styles where you want anything downstream to override easily.
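A small sketch of that zero-specificity behavior (the selectors are illustrative):

```css
/* :where() contributes nothing, so this whole selector weighs (0,0,1) */
:where(#sidebar, .content) a {
  color: gray;
}

/* Also (0,0,1): this later rule wins. Written as #sidebar a,
   the rule above would have weighed (1,0,1) and won instead. */
a {
  color: blue;
}
```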

Doubling up a selector

A pretty straightforward way of increasing a selector’s specificity is repeating the selector. This is usually done with classes. For example:

.button {
  color: blue;
}

.button.button {
  color: red;  /* higher specificity */
}

You would generally not want to do this too often as it can become a readability nightmare.

Reordering

CSS resolves ties in specificity by source order, so a rule that comes later is prioritized. This is easy to overlook, especially in larger stylesheets where styles are spread across multiple files.

If a more generic rule keeps overriding a more targeted one and the specificity is the same, check whether the generic rule is being loaded after yours. Flipping the order can fix the conflict without needing to increase specificity.
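For example, with two rules of equal specificity (the class names here are hypothetical), source order alone decides the winner:

```css
/* components.css — specificity (0,1,0) */
.card-title {
  color: navy;
}

/* utilities.css, loaded after components.css — also (0,1,0) */
.text-muted {
  color: gray; /* wins on an element with both classes, purely by order */
}
```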

This is also why it’s worth thinking about stylesheet organization from the start. A common pattern is to go from generic to specific (resets and base styles first, then layout, then components, then utilities).

When using !important does make sense

After all that, it’s worth being clear: !important does have legitimate use cases. Chris discussed this a while back, and the comments there are worth a read too.

The most common case is utility classes. For example, the whole point of a class like .visually-hidden is that it does one thing, everywhere. In these cases, you don’t want a more specific selector quietly undoing it somewhere else. The same is true for state classes like .disabled or generic component styles like .button.

.visually-hidden {
  position: absolute !important;
  width: 1px !important;
  height: 1px !important;
  overflow: hidden !important;
  clip-path: inset(50%) !important;
}

Third-party overrides are another common scenario. !important can be used here to either override inline styles being set in JavaScript or normal styles in a stylesheet that you can’t edit.

From an accessibility point of view, !important is irreplaceable for user stylesheets. Since these are applied on all webpages and there’s virtually no way to guarantee that the stylesheet’s selectors will always have the highest specificity, !important is basically the only reliable way to make sure your styles always take precedence.

Another good example is when it comes to respecting a user’s browser preferences, such as reducing motion:

@media screen and (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.001ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.001ms !important;
  }
}

Wrapping up

The difference between good and bad use of !important really comes down to intent. Are you using it because you understand the CSS Cascade and have made a call that this declaration should always apply? Or are you using it as a band-aid? The latter will inevitably cause issues down the line.

Alternatives to the !important Keyword originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


New in WordPress Studio: Studio CLI on npm & phpMyAdmin Access


We recently shipped two big updates for our local development tool, WordPress Studio:

  1. Studio CLI as a standalone npm module
  2. phpMyAdmin access

One is for the terminal devotee, the other is for anyone who dreads opening a separate database tool — both are for anyone who’d rather just get building.

Studio CLI on npm

Until now, using Studio meant needing to download the desktop app. That changes today.

If you work primarily in the terminal — whether you’re a Linux user or just prefer to keep your hands on the keyboard — you can now install Studio directly via npm and skip the GUI entirely.

The CLI fits naturally in automated test runs, deployment scripts, and AI coding agent workflows — anywhere you’re already working in the terminal and spinning up a WordPress site by hand would slow you down.

How to install

If you already have the Studio desktop app installed, the CLI is already available — just enable Studio CLI for terminal under Preferences. 

The WordPress Studio Settings window showing the Studio CLI for terminal toggle, appearance settings, and more
Did you notice dark mode? That’s new, too. Head to your Preferences to change your app’s appearance.

If you want to install the CLI as a standalone tool, simply run npm install -g wp-studio. Alternatively, if you just want to run it once without installing the command, run npx wp-studio.

From there, you can authenticate with WordPress.com, create and manage local sites, preview in the browser, and run WP-CLI commands. Sync with WordPress.com and Pressable, import, export, and more are on the way.

The CLI and the desktop app are companions, not competitors: you can switch between them freely and they stay in sync. And don’t worry: the desktop app isn’t going anywhere.

phpMyAdmin access

On the desktop side, Studio now includes phpMyAdmin access directly from the Overview tab, giving you a visual interface to manage your site’s database. 

Inspecting or editing your local database used to mean reaching for a separate tool and going through a setup you’d rather not bother with. Now you can start querying tables, checking data, and debugging schema issues in just one click.

An orange arrow pointing to the phpMyAdmin button in the WordPress Studio Open in... menu

More ways to build locally

These two updates push Studio further in the same direction: less friction between you and building on WordPress.

The CLI removes the GUI as a requirement, and phpMyAdmin removes the need to leave the app when you need to get into your database.

If you haven’t tried WordPress Studio yet, this is a good time to start.

Questions or feedback? We’re in GitHub — open an issue to share feedback, bugs, and feature requests.




