
Under the hood: How Firefox suggests tab groups with local AI

Browser popup showing the “Create tab group” menu with color options and AI tab suggestions button.

Background

Mozilla launched Tab Grouping in early 2025, allowing tabs to be arranged into groups with persistent labels. It was the most requested feature in the history of Mozilla Connect. While tab grouping provides a great way to manage tabs and reduce tab overload, it can be a challenge to locate the tabs you want to group when you have many open.

We sought to improve this workflow by providing an AI tab grouping feature that enables two key capabilities:

  • Suggesting a title for a tab group when it is created by the user.
  • Suggesting tabs from the current window to be added to a tab group.

Of course, we wanted this to work without sending any of your data to Mozilla, so we used our local Firefox AI runtime and built an efficient model that delivers these features entirely on your own device. The feature is opt-in and downloads two small ML models the first time the user clicks to run it.

Group title suggestion

Understanding the problem

Suggesting titles for grouped tabs is a challenge because it is hard to understand user intent when tabs are first grouped. In interviews conducted at the start of the project, we found that while tab group names are sometimes generic terms like ‘Shopping’ or ‘Travel’, over half the time they were specific terms such as the name of a video game, a friend, or a town. We also found group names to be extremely short – one or two words.

Diagram showing Firefox tab information processed by a generative AI model to label topics like Boston Travel

Generating a digest of the group

To address these challenges, we adopt a hybrid methodology that combines a modified TF-IDF–based textual analysis with keyword extraction. We identify terms that are statistically distinctive to the titles of pages within a tab group compared to those outside it. The three most prominent keywords, along with the full titles of three randomly selected pages, are then combined to produce a concise digest representing the group, which is used as input for the subsequent stage of processing using a language model.
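
To make the digest step concrete, here is a minimal Python sketch of the idea: score the tokens in the group's titles against titles outside the group, keep the top three keywords, and join them with three sampled titles. The function names, the exact scoring, and the smoothing are assumptions for illustration, not Firefox's actual implementation.

```python
import math
import random
from collections import Counter

def top_keywords(group_titles, other_titles, k=3):
    """Score tokens that are distinctive to the group's titles relative
    to titles outside the group (a TF-IDF-like ratio)."""
    group_counts = Counter(w for t in group_titles for w in t.lower().split())
    other_counts = Counter(w for t in other_titles for w in t.lower().split())
    total = sum(group_counts.values())

    def score(word):
        tf = group_counts[word] / total
        # Add-one smoothing so words unseen outside the group don't blow up.
        idf = math.log((len(other_titles) + 1) / (other_counts[word] + 1))
        return tf * idf

    ranked = sorted(group_counts, key=score, reverse=True)
    return ranked[:k]

def make_digest(group_titles, other_titles):
    """Combine the top keywords with three sampled titles into one digest string."""
    keywords = top_keywords(group_titles, other_titles)
    sampled = random.sample(group_titles, min(3, len(group_titles)))
    return " ".join(keywords) + " | " + " | ".join(sampled)
```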

Generating the label

The digest string is used as input to a generative model that returns the final label. We used a T5-based encoder-decoder model (flan-t5-base) that was fine-tuned on over 10,000 example situations and labels.
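
As a rough illustration of this stage, the digest can be fed to a seq2seq model through the Hugging Face transformers API. The sketch below uses the public flan-t5-base checkpoint as a stand-in; the model Firefox actually ships is a fine-tuned, distilled, and quantized descendant of it, as described below.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Public base checkpoint as a stand-in for the fine-tuned production model.
checkpoint = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

digest = "boston travel hotels | Boston hotel deals | Freedom Trail map | Logan airport guide"
inputs = tokenizer(digest, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)  # group labels are 1-2 words
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```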

One of the key challenges in developing the model was generating training data samples without any user data. To do this, we defined a set of user archetypes and used an LLM API (OpenAI GPT-4) to create sample pages for a user performing various tasks. This was augmented with real page titles from the publicly available Common Crawl dataset. We then used the LLM to suggest short titles for those use cases. The process was first done at a small scale of several hundred group names, which were manually corrected and curated for brevity and consistency. As the process scaled up, the initial 300 group names were passed to the LLM as examples so that the additional examples would meet the same standards.

Shrinking things down

We needed to get the model small enough to run on most computers. Once the initial model was trained, it was compressed into a smaller model using a process known as knowledge distillation. For distillation, we tuned a t5-efficient-tiny model on the token probability outputs of our teacher flan-t5-base model. Midway through the distillation process we also removed two encoder transformer layers and two decoder layers to further reduce the parameter count.
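
For readers unfamiliar with distillation: the student is trained to match the teacher's token probability distributions rather than only hard labels. A minimal PyTorch sketch of the standard distillation loss follows; the temperature value is an assumption, not a detail from the Firefox training setup.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Match the student's softened token distribution to the teacher's
    (the standard knowledge-distillation objective)."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitude stays consistent across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2
```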

Finally, the model parameters were quantized from 32-bit floating point (4 bytes per parameter) to 8-bit integers. In the end, this reduction process shrank the model from 1 GB to 57 MB, with only a modest reduction in accuracy.
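
Firefox delivers models through its own local runtime, but the idea of 8-bit weight quantization is easy to show generically in PyTorch. This sketch dynamically quantizes the linear layers of a small public T5 checkpoint (a stand-in for the distilled student), shrinking each weight from 4 bytes to 1:

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Small public checkpoint as a stand-in for the distilled student model.
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8  # float32 weights -> int8
)
```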

Suggesting tabs 

Understanding the problem

For tab suggestions, we identified a few patterns in how people prefer to group their tabs. Some people group by domain, for instance to easily access all of their work documents. Others prefer grouping all of their tabs together when they are planning a trip. Others still prefer separating their “work” and “personal” tabs.

Our initial approach to suggesting tabs was based on semantic similarity: tabs that are topically similar to the anchor tab are suggested.

Browser pop-up suggesting related tabs for a Boston trip using AI-based grouping

Identifying topically similar tabs

We first convert tab titles to feature vectors locally using a MiniLM embedding model. Embedding models are trained so that similar content produces vectors that are close together in embedding space. Using a similarity measure such as cosine similarity, we can score how similar one tab title or URL is to another.
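
The sketch below shows the idea using the sentence-transformers library and a public MiniLM checkpoint. Firefox runs an equivalent model locally through its AI runtime; the exact checkpoint here is an assumption.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
titles = [
    "Cheap flights to Boston",
    "Freedom Trail walking tour",
    "Rust borrow checker explained",
]
embeddings = model.encode(titles, convert_to_tensor=True)
# Cosine similarity of every title against the first one: the two travel
# titles score close together, while the Rust article scores lower.
print(util.cos_sim(embeddings[0], embeddings))
```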

The similarity score between an anchor tab chosen by the user and a candidate tab is a linear combination of the candidate tab’s similarity to the anchor tab’s group title (if present), to the anchor tab’s title, and to the anchor tab’s URL. Passing this weighted sum through a sigmoid yields a similarity probability, and tabs whose probability exceeds a threshold are suggested for the group.

P(t_i \in g_a \mid t_a) = \sigma\left( w_g \,\mathrm{sim}(t_i, g_a) + w_t \,\mathrm{sim}(t_i, t_a) + w_u \,\mathrm{sim}(u_i, u_a) \right)

where,
w_g, w_t, w_u are the weights,
t_i is the candidate tab title,
t_a is the anchor tab title,
g_a is the anchor group title,
u_i is the candidate URL,
u_a is the anchor URL, and,
σ is the sigmoid function
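
In code, this scoring step is just a weighted sum pushed through a sigmoid. A minimal sketch with hypothetical weights; the real values come out of the logistic regression described next:

```python
import math

def suggestion_probability(sim_group, sim_title, sim_url, w, bias=0.0):
    """Weighted combination of the three similarities, squashed to [0, 1]."""
    z = w["group"] * sim_group + w["title"] * sim_title + w["url"] * sim_url + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for illustration only.
w = {"group": 1.5, "title": 1.0, "url": 0.5}
print(suggestion_probability(0.8, 0.7, 0.4, w))  # high probability -> suggest the tab
```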

Optimizing the weights

To find the weights, we framed the problem as a classification task, calculating precision and recall from the tabs that were correctly classified given an anchor tab. We used synthetic data generated with OpenAI models based on the user archetypes described above.

We initially used a clustering approach to establish a baseline, then switched to logistic regression when we realized that weighting the group, title, and URL features differently improved our metrics.
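
With scikit-learn, that switch amounts to fitting a logistic regression on the three similarity features; the coefficients it learns are the weights in the formula above. A toy sketch with made-up data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: [sim to group title, sim to anchor title, sim to anchor URL]
# Labels: 1 if the candidate tab belongs with the anchor, else 0.
X = np.array([
    [0.9, 0.8, 0.5],
    [0.2, 0.1, 0.3],
    [0.7, 0.6, 0.6],
    [0.1, 0.3, 0.2],
])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)                   # learned weights and bias
print(clf.predict_proba([[0.8, 0.7, 0.4]])[:, 1])  # suggestion probability
```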

Bar chart comparing DBScan and Logistic Regression by precision, recall, and F1 performance metrics

Using logistic regression, we saw an 18% improvement over the baseline.

Performance

While the median number of tabs for people using the feature is relatively small (~25), some “power” users have tab counts reaching the thousands, which could make tab grouping take uncomfortably long.

This was part of the reason we switched from a clustering-based approach to a linear model.

Using our performance framework, we found that the p99 latency of running logistic regression was 33% better than that of a clustering-based method such as KMeans.

Bar chart comparing KMeans and Logistic Regression using percentile metrics p50, p95, and p99

Future work here involves improving the F1 score, for example by adding a time-related component to the inference (we are more likely to group tabs that we opened at the same time) or by fine-tuning an embedding model for our use case.

Thanks for reading

All of our work is open source. If you are a developer, feel free to peruse our model training source code or view our topic model on Hugging Face.

Feel free to try the feature and let us know what you think!


The post Under the hood: How Firefox suggests tab groups with local AI appeared first on The Mozilla Blog.


A Practical Guide To UX Strategy


For years, “UX strategy” felt like a confusing, ambiguous, and overloaded term to me: some sort of roadmap or “grand vision” with a few business decisions attached. Looking back now, I realize that I was wrong all along.

UX strategy isn’t a goal; it’s a journey towards that goal, a journey connecting where UX is today with a desired future state of UX. As such, it guides our actions and decisions, the things we do and don’t do. And its goal is very simple: to maximize our chances of success while considering risks, bottlenecks, and anything else that might endanger the project.

Let’s explore the components of UX strategy, and how it works with product strategy and business strategy to deliver user value and meet business goals.

Strategy vs. Goals vs. Plans

When we speak about strategy, we often speak about planning and goals — but they are actually quite different. While strategy answers “what” we’re doing and “why”, planning is about “how” and “when” we’ll get it done. And the goal is merely a desired outcome of that entire journey.

  • Goals establish a desired future outcome,
  • That outcome typically represents a problem to solve,
  • Strategy shows a high-level solution for that problem,
  • Plan is a detailed set of low-level steps for getting the solution done.

A strong strategy requires making conscious, and oftentimes tough, decisions about what we will do — and just as importantly, what we will not do, and why.

Business Strategy

UX strategy doesn’t live in isolation. It must inform and support product strategy and be aligned with business strategy. All these terms are often slightly confusing and overloaded, so let’s clear them up.

At the highest level, business strategy is about the distinct choices executives make to set the company apart from its competitors. They shape the company’s positioning, objectives, and (most importantly!) competitive advantage.

Typically, this advantage is achieved in two ways: through lower prices (cost leadership) or through differentiation. The latter isn’t about being different, but about being perceived as different by the target audience. And that’s exactly where UX impact steps in.

In short, business strategy is:

  • A top-line vision, basis for core offers,
  • Shapes positioning, goals, competitive advantage,
  • Must always adapt to the market to keep a competitive advantage.

Product Strategy

Product strategy is how a high-level business direction is translated into a unique positioning of a product. It defines what the product is, who its users are, and how it will contribute to the business’s goals. It’s also how we bring a product to market, drive growth, and achieve product-market fit.

In short, product strategy is:

  • Unique positioning and value of a product,
  • How to establish and keep a product in the marketplace,
  • How to keep competitive advantage of the product.

UX Strategy

UX strategy is about shaping and delivering product value through UX. Good UX strategy always stems from UX research and answers business needs. It establishes what to focus on, what our high-value actions are, how we’ll measure success, and, quite importantly, what risks we need to mitigate.

Most importantly, it’s not a fixed plan or a set of deliverables; it’s a guide that informs our actions, but also must be prepared to change when things change.

In short, UX strategy is:

  • How we shape and deliver product value through UX,
  • Priorities, focus + why, actions, metrics, risks,
  • Isn’t a roadmap, intention or deliverables.

Six Key Components of UX Strategy

The impact of good UX typically lives in the differentiation mentioned above. Again, it’s not about how “different” our experience is, but the unique perceived value that users associate with it. And that value is a matter of a clear, frictionless, accessible, fast, and reliable experience wrapped into the product.

I always try to include six key components in any strategic UX work so we don’t end up following a wrong assumption that won’t bring any impact:

  1. Target goal
    The desired, improved future state of UX.
  2. User segments
    Primary users that we are considering.
  3. Priorities
    What we will and, crucially, what we will not do, and why.
  4. High-value actions
    How we drive value and meet user and business needs.
  5. Feasibility
    Realistic assessment of people, processes, and resources.
  6. Risks
    Bottlenecks, blockers, legacy constraints, big unknowns.

It’s worth noting that it’s always dangerous to design a product with everybody in mind. As Jaime Levy noted, by going very broad too early, we often reduce the impact of our design and messaging. It’s typically better to start with a specific, well-defined user segment and then expand, rather than the other way around.

Practical Example (by Alin Buda)

UX strategy doesn’t have to be a big 40-page PDF report or a Keynote presentation. A while back, Alin Buda kindly left a comment on one of my LinkedIn posts, giving a great example of what a concise UX strategy could look like:

UX Strategy (for Q4)

Our UX strategy is to focus on high-friction workflows for expert users, not casual usability improvements. Why? Because retention in this space is driven by power-user efficiency, and that aligns with our growth model.

To succeed, we’ll design workflow accelerators and decision-support tools that reduce time-on-task. As part of this, we’ll need to redesign legacy flows in the Crux system. We won’t prioritize UI refinements or onboarding tours, because they don’t move the needle in this context.

What I like most about this example is just how concise and clear it is. Getting to this level of clarity takes quite a bit of time, but it creates a very precise overview of what we do, what we don't do, what we focus on, and how we drive value.

Wrapping Up

The best path to make a strong case with senior leadership is to frame your UX work as a direct contributor to differentiation. This isn’t just about making things look different; it’s about enhancing the perceived value.

A good strategy ties UX improvements to measurable business outcomes. It doesn’t speak about design patterns, consistency, or neatly organized components. Instead, it speaks the language of product and business strategy: OKRs, costs, revenue, business metrics, and objectives.

Design can’t succeed without a strategy. In the wise words of Sun Tzu, strategy without tactics is the slowest route to victory, and tactics without strategy is the noise before defeat.

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$495.00 (regular $799.00). Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$250.00 (regular $395.00)
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.



Microsoft Launches Magentic Marketplace for AI Agents


Microsoft Research has just launched an open source environment for studying agentic markets, called Magentic Marketplace. In advance of the release, I spoke to Ece Kamar, Managing Director of the AI Frontiers Lab at Microsoft Research.

Kamar’s research group previously developed AutoGen, an agentic development framework that has become popular with Python developers, especially for building multi-agent AI systems. That success, in part, inspired the development of Magentic Marketplace.

“AutoGen is part of the Microsoft Agent Framework [that] was released a month ago,” Kamar told me. “So we were able to get all of that programming layer and ship it on a Microsoft product. And now we use all of the learnings from AutoGen — what people do with AutoGen — to think about what agents are going to become.”

What Is Magentic Marketplace?

The idea of Magentic Marketplace is to allow researchers to simulate a marketplace for AI agents, to test “how agents negotiate, transact, and collaborate under real-world market dynamics.” The marketplace will also monitor safety and fairness in these systems.

Magentic Marketplace high-level.

Although Magentic Marketplace is a research project, it could easily become a commercial project later — similar to how AutoGen has evolved into Microsoft Agent Framework (the result of a recent merger between AutoGen and Semantic Kernel, an SDK I profiled back in April 2023).

“We are expecting that there will be public markets coming,” Kamar said. “We [Microsoft Research] are probably not going to be the team to build them, sitting in research. But […] when you look into some of the latest releases coming in this space, it’s all kind of gearing towards starting to test these marketplaces.”

“I personally believe that a lot of the way we use technology will be rethought, redesigned with these agents in mind,” she added. “And marketplaces is going to be one of the domains I expect to see a lot of activity going on.”

Protocols in a ‘Society of Agents’

Like any good research project, there is a working theory about how AI agents should work. Kamar, whose PhD at Harvard during the 2000s was on the very subject of AI agents, is using the phrase “society of agents” to describe the project’s goals.

“In this notion of ‘society of agents’, it is really about AI agents coming together, interacting, collaborating, negotiating,” she said. “Also, with the supervision of people, and really uncovering how the world is going to look like when we have these agents, how having these agents by our side is going to be able to address some of the inefficiencies we have in the world.”

“In this notion of ‘society of agents’, it is really about AI agents coming together, interacting, collaborating, negotiating.”
– Ece Kamar, Microsoft Research

A key part of the research is testing communications protocols like Model Context Protocol (MCP) and Agent2Agent (A2A), along with emerging payment protocols. For agentic commerce, there isn’t yet a default protocol, though candidates are emerging: OpenAI recently announced the Agentic Commerce Protocol (ACP), Google announced the Agent Payments Protocol (AP2) in September, and others (like Shopify) have been using MCP-UI.

Kamar also expects new protocols to emerge that will help agents collaborate, or for protocols like MCP and A2A to expand for marketplace use cases. For example, what is the right way for agents to show information for a transaction?

Key Challenges and Biases in AI Agent Simulations

Kamar said they also recognize the risks that come with AI agents — like safety and bias — and she described some of the challenges they’ve come across so far in the marketplace simulations.

“One of the things that we are seeing is that, again, while we have these communication protocols [MCP, A2A, et al], the models powering these agents sometimes can get into some kind of a decision paradox. If they have too many choices, they may not be that effective yet in terms of being able to make the right choices.”

Magentic Marketplace in action.

The group has also seen “some biases coming up.”

“For example, one of the biases we have identified is something called a ‘proposal bias.’ The models right now are preferring options that are coming up fast. Like, if you’re a fast agent, you are much more preferred whether you have the best proposal or not.”

So while agents have been able to communicate with each other in the marketplace simulation, there is much work to be done to make multi-agent collaboration a reality. To get to the highest level of utility from these marketplaces, Kamar noted, “we will need to train these agents and build them in different ways.”

She mentioned a couple of the technical issues they’ve come across so far in the simulations. One is what she termed “tool space interference”, which basically means the agents get confused by the proliferation of AI tools. “Right now, MCP has so many different tools,” she said, “and sometimes they are named the same way, or even the name conventions are not there yet; and we are seeing that as this protocol is maturing, there are still issues with it.”

Magentic Marketplace has already shown “the limitations of the existing frontier models when it comes to collaboration and negotiation.”
– Kamar

In fact, Kamar’s group has itself built an open source MCP tool, called MCP Interviewer. She explained that it “helps developers […] kind of interview these tools, look at interference issues, so that they can be more informed about which tools to bring in; and see issues like tool interference before it happens in their real systems.”

The second issue is further down the stack — she noted “the limitations of the existing frontier models when it comes to collaboration and negotiation.” They’ve tried to get LLMs to collaborate with each other to help agents perform a task, and found that model performance degrades with this collaboration.

“So, as a team, we’re also looking into what needs to change in the way models are trained, so that these models can empower stronger agents in terms of their collaboration capabilities,” Kamar said.

Balancing AI Agent Autonomy with Human Supervision

Those of you old enough to remember the dot-com era of the internet will recall that it took several years for people to feel confident entering their credit card information into a web browser to make an online purchase. So how long will it take to feel confident giving our credit cards — or indeed our personal preferences — to an AI agent?

“I think it is for us, for researchers, it is just very important that we are improving the technology and creating clarity around the technology as much as we can,” Kamar said. “And when it is time for these technologies to be in the hands of the people, we are not giving them something that we built but we don’t really understand; but we are giving them something that we truly understand and we have tested, we understood the rough edges and we have worked on improving them.”

She added that her team also considers when human supervision is appropriate in these agentic systems — more commonly referred to in the industry as “human in the loop.”

“If we are going to be building these marketplaces and ecosystems, we can also invest time on understanding and building these layers where, as a user, I still have the control…”
– Kamar

“So I think there is also going to be a spectrum where we are not going to go to full agent autonomy on day one,” she said. “You know, it doesn’t have to be. If we are going to be building these marketplaces and ecosystems, we can also invest time on understanding and building these layers where, as a user, I still have the control — I’m still looking at all the interactions, I’m still looking at the options, I can still ask questions about what the agent is recommending to me.”

Before this interview, I must admit I wasn’t sure why Microsoft would be releasing a simulated marketplace instead of the real thing. But Kamar has convinced me that it’s not only sensible to fully test how agents collaborate before a public marketplace goes live, but it’s actually dangerous not to run the simulations first!

Also, Magentic Marketplace should help us improve the LLMs, protocols and AI tooling that companies will need to make a public agent marketplace viable.

The post Microsoft Launches Magentic Marketplace for AI Agents appeared first on The New Stack.


Handling Complex Forms with Validation and Dynamic Rules

Learn how to build enterprise-grade Blazor forms using Blazorise Validation, with async validators, conditional rules, and dynamically generated fields.

The AI ick

How we feel about AI-generated content, what AI detectors tell us, and why human creativity matters. Also, what is art?

Beware of double agents: How AI can fortify — or fracture — your cybersecurity


AI is rapidly becoming the backbone of our world, promising unprecedented productivity and innovation. But as organizations deploy AI agents to unlock new opportunities and drive growth, they also face a new breed of cybersecurity threats.

There are a lot of Star Trek fans here at Microsoft, including me. One of our engineering leaders gifted me a life-size cardboard standee of Data that lurks next to my office door. So, as I look at that cutout, I think about the Great AI Security Dilemma: Is AI going to be our best friend or our worst nightmare? Drawing inspiration from the duality of the android officer Data, and his evil twin Lore in the Star Trek universe, today’s AI agents can either fortify your cybersecurity defenses — or, if mismanaged — fracture them.

The influx of agents is real. IDC research[1] predicts there will be 1.3 billion agents in circulation by 2028. When we think about our agentic future in AI, the duality of Data and Lore seems like a great way to think about what we’ll face with AI agents and how to avoid double agents that upend control and trust. Leaders should consider three principles and tailor them to fit the specific needs of their organizations.

1. Recognize the new attack landscape

Security is not just an IT issue — it’s a board-level priority. AI agents are more dynamic and adaptive than traditional software, and more likely to operate autonomously. This creates unique risks.

We must accept that AI can be abused in ways beyond what we’ve experienced with traditional software. We employ AI agents to perform well-meaning tasks, but those with broad privileges can be manipulated by bad actors to misuse their access, such as leaking sensitive data via automated actions. We call this the “Confused Deputy” problem. AI Agents “think” in terms of natural language where instructions and data are tightly intertwined, much more than in typical software we interact with. The generative models agents depend on dynamically analyze the entire soup of human (or even non-human) languages, making it hard to distinguish well-known safe operations from new instructions introduced through malicious manipulation. The risk grows even more when shadow agents — unapproved or orphaned — enter the picture. And as we saw in Bring Your Own Device (BYOD) and other tech waves, anything you cannot inventory and account for magnifies blind spots and drives risk ever upward.

2. Practice Agentic Zero Trust

AI agents may be new as productivity drivers, but they can still be managed effectively using established security principles. I’ve had great conversations about this here at Microsoft with leaders like Mustafa Suleyman, cofounder of DeepMind and now Executive Vice President and CEO of Microsoft AI. Mustafa frequently shares a way to think about this, which he outlined in his book The Coming Wave, in terms of Containment and Alignment.

Containment simply means we do not blindly trust our AI agents, and we tightly constrain every aspect of what they do. For example, we cannot let any agent’s access privileges exceed its role and purpose — it’s the same security approach we take to employee accounts, software and devices, what we refer to as “least privilege.” Similarly, we contain by never implicitly trusting what an agent does or how it communicates — everything must be monitored — and when this isn’t possible, agents simply are not permitted to operate in our environment.

Alignment is all about maintaining positive control over an AI agent’s intended purpose, through its prompts and the models it uses. We must only use AI agents trained to resist attempts at corruption, with standard and mission-specific safety protections built into both the model itself and the prompts used to invoke the model. AI agents must resist attempts to divert them from their approved uses. They must execute in a Containment environment that watches closely for deviation from their intended purpose. All this requires strong AI agent identity and clear, accountable ownership within the organization. As part of AI governance, every agent must have an identity, and we must know who in the organization is accountable for its aligned behavior.

Containment (least privilege) and Alignment will sound familiar to enterprise security teams, because they align with some of the basic principles of Zero Trust. Agentic Zero Trust includes “assuming breach,” or never implicitly trusting anything, making humans, devices and agents verify who they are explicitly before they gain access and limiting their access to only what’s needed to perform a task. While Agentic Zero Trust ultimately includes deeper security capabilities, discussing Containment and Alignment is a good shorthand in security-in-AI strategy conversations with senior stakeholders to keep everyone grounded in managing the new risk. Agents will keep joining and adapting at work — some may become double agents. With proper controls, we can protect ourselves.

3. Foster a culture of secure innovation

Technology alone won’t solve AI security. Culture is the real superpower in managing cyber risk — and leaders have the unique ability to shape it. Start with open dialogue: make AI risks and responsible use part of everyday conversations. Keep it cross-functional: legal, compliance, HR and others should have a seat at the table. Invest in continuous education: train teams on AI security fundamentals and clarify policies to cut through noise. Finally, embrace safe experimentation: give people approved spaces to learn and innovate without creating risk.

Organizations that thrive will treat AI as a teammate, not a threat — building trust through communication, learning and continuous improvement.

The path forward: What every company should do

AI isn’t just another chapter — it’s a plot twist that changes everything. The opportunities are huge, but so are the risks. The rise of AI requires ambient security, which executives create by making cybersecurity a daily priority. This means blending robust technical measures with ongoing education and clear leadership so that security awareness influences every choice made. Organizations maintain ambient security when they:

  • Make AI security a strategic priority.
  • Insist on Containment and Alignment for every agent.
  • Mandate identity, ownership and data governance.
  • Build a culture that champions secure innovation.

And it will be important to take a set of practical steps:

  • Assign every AI agent an ID and owner — just like employees need badges. This ensures traceability and control.
  • Document each agent’s intent and scope.
  • Monitor actions, inputs and outputs. Map data flows early to set compliance benchmarks.
  • Keep agents in secure, sanctioned environments — no rogue “agent factories.”

The call to action for every business is: Review your AI governance framework now. Demand clarity, accountability and continuous improvement. The future of cybersecurity is human plus machine — lead with purpose and make AI your strongest ally.

At Microsoft, we know we have a huge role to play in empowering our customers in this new era. In May, we introduced Microsoft Entra Agent ID as a way to help customers place unique identities to agents from the moment they are created in Microsoft Copilot Studio and Azure AI Foundry. We leverage AI in Defender and Security Copilot, combined with the massive security signals we collect, to expose and defeat phishing campaigns and other attacks that cybercriminals may use as entry points to compromise AI agents. We’ve also been committed to a platform approach with AI agents, to help customers safely use both Microsoft and third-party agents on their journey, avoiding complexity and risk that come from needing to juggle excessive dashboards and management consoles.

I’m excited by several other innovations we will be sharing at Microsoft Ignite later this month, alongside customers and partners.

We may not be conversing with Data on the bridge of the USS Enterprise quite yet, but as a technologist, it’s never been more exciting than watching this stage of AI’s trajectory in our workplaces and lives. As leaders, understanding the core opportunities and risks helps create a safer world for humans and agents working together.

Charlie Bell is executive vice president of Microsoft Security, leading teams advancing cybersecurity, compliance, identity and management. With more than 40 years in technology, he’s held leadership roles at Oracle, founded Server Technologies Group and unified engineering at AWS before joining Microsoft, driving innovation and protection for global digital systems.

 

NOTE

[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825

The post Beware of double agents: How AI can fortify — or fracture — your cybersecurity appeared first on The Official Microsoft Blog.
