Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft Plans To Build 100% Native Apps For Windows 11

1 Share
Microsoft is reportedly shifting Windows 11 app development back toward fully native apps. Rudy Huyn, a Partner Architect at Microsoft working on the Store and File Explorer, said in a post on X that he is building a new team to work on Windows apps. "You don't need prior experience with the platform... what matters most is strong product thinking and a deep focus on the customer," he wrote. "If you've built great apps on any platform and care about crafting meaningful user experiences, I'd love to hear from you." Huyn later said in a reply on X that the new Windows 11 apps will be "100% native." TechSpot reports: The description stands out at a time when many of Microsoft's built-in tools, including Clipchamp and Copilot, rely on web technologies and Progressive Web App architectures. The company's commitment to native performance suggests that some long-standing frustrations around responsiveness, memory use, and interface consistency could finally be addressed. For Windows developers, Huyn's comments hint at a change in direction. Microsoft's recent development priorities have leaned heavily on web-based approaches, with Progressive Web Apps (PWAs) replacing or supplementing many native programs. [...] Exactly which applications will be rebuilt, or how strictly "100% native" will be enforced, remains unclear. Some current Microsoft apps classified as native still depend on WebView for specific features. But the renewed emphasis already has developers paying attention.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
18 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Opinion: AI coach or AI ghostwriter? The choice is yours

(Image via Claude)

You are using AI to write, and so are your boss, your intern, and virtually everyone else. That ship has sailed, so arguing over whether to board is a waste of breath.

The real question is: how do you use AI?

AI is a fork in the road disguised as a shortcut. Down one path, it’s a coach that studies your weaknesses, challenges your assumptions, and pushes you past your limits. Down the other, it’s a ghostwriter, and you are putting your name on someone else’s thoughts, slowly forgetting that you ever had your own.

The tension between these two paths is a defining cognitive challenge of our time. Psychologists call what pulls us toward the ghostwriter cognitive offloading, and we have been doing mild versions of it forever, from scribbling shopping lists to storing phone contacts. But with AI we are no longer offloading trivia. We are offloading thought itself. And the ease of doing so is a temptation that people struggle to resist.

The easy path: A downward spiral

When you ask ChatGPT to draft your email or memo, you are not just saving time. You are skipping a cognitive workout. And like any muscle left idle on the couch, the brain responds accordingly: it atrophies.

This easy path creates a vicious cycle.

First, AI addiction: the more we use large language models (LLMs), the harder it becomes to stop. Researchers writing in the Annals of the New York Academy of Sciences describe a dual-factor dependency: functional dependence, where we rely on AI for productivity, and existential dependence, a deeper psychological attachment involving identity, emotional regulation, and even companionship. Unlike our relationship with calculators or spell-checkers, LLMs are colonizing the higher floors of cognition: analysis, synthesis, persuasion, judgment.

Second, we get weaker, and the gap widens. A Microsoft study found a significant negative correlation (r = -0.49) between the frequency of AI tool usage and critical thinking scores. The MIT Media Lab went further, strapping EEG monitors to participants’ heads: ChatGPT users exhibited the weakest brain connectivity of any group tested. They struggled to accurately recall the contents of essays they had just “written.” The tool was doing the thinking; the human was merely holding the steering wheel of a self-driving car. Meanwhile, AI models improve every quarter. The distance between what we can do unassisted and what the machine can do grows like a crack in a windshield, spreading quietly until the whole thing shatters.

Third, cognitive surrender: the act of adopting AI outputs with close to zero scrutiny. In a series of experiments involving more than 1,300 participants, Shaw and Nave found that frequent AI users stopped checking the AI’s work. When the AI was wrong, so were they. They had surrendered not just the labor of thinking but even the responsibility to verify it. The better the AI gets, the more completely we capitulate.

This vicious cycle inspired me to pen the following Zen koan about AI:

Read without reading.
Write without writing.
Think without thinking.
Can you be intelligent by being dumb?

I don’t think so.

The other path: AI as a coach

So much for the ghostwriter. Let’s talk about the coach.

The same tool that threatens to dull your cognitive capacities can, if wielded with discipline, sharpen them instead. The key shift is in your goal and mindset: use AI to improve the quality of your work, not merely the quantity or speed.

Here is what that looks like in practice.

Brainstorm, don’t delegate. Use AI as a sparring partner for ideation. Push it to generate dozens of angles on a topic, then argue with its suggestions. The goal is not to accept its output but to wrestle with it, letting the tangle create ideas neither of you would have generated alone.

Stress-test your arguments. Ask the AI to find the holes in your reasoning, to steelman the opposition, to identify the counterargument you have been avoiding. This is the intellectual equivalent of hiring a boxing coach who spars with you.

Build the skeleton yourself. Construct your own outline and message before consulting the machine. The architecture of an argument is where much of the real thinking lives. If you outsource the blueprint, you are left decorating someone else’s house.

Sharpen your research. Use AI to surface related work, adjacent fields, and sources you might have missed. Let it expand the radius of your awareness without replacing the judgment about what matters.

Polish. Let AI improve your grammar, tighten your diction, and flag unclear passages. This is the digital equivalent of a copy editor, a role that enhances the writer without replacing them.

Improve clarity. Ask the AI whether your prose is doing what you intend. Is the argument landing? Is the structure logical? This turns the machine into a mirror that reflects your thinking back to you with useful annotations.

Review everything. This is non-negotiable: Read the output with the skepticism of an editor, not the gratitude of a customer. Check facts. Verify claims. Ensure the voice is yours. If you cannot explain and defend every sentence, you have not written an article; you have notarized one.

Finally, learn from the process. After each project, examine what the AI revealed about your weaknesses and strengths. Did it consistently improve your transitions? That tells you something. Did it catch logical gaps you missed? That tells you something, too. Treat each collaboration as a tutorial in your own cognitive blind spots, and a celebration of your strengths.

This article was written using exactly the process I have described. I used AI to brainstorm, to pressure-test my argument, to hunt for research and statistics, and to improve my prose. But the thesis is mine. The structure is mine. The voice, the metaphors, the convictions, and the errors are mine. I reviewed every claim. I rewrote passages the AI mangled. I cut suggestions that were technically smooth but intellectually empty.

It took longer than handing the title to an LLM and telling it to “write it,” but that is precisely the point. Every time you open a chat window, you are standing at the fork again. AI coach or AI ghostwriter.

Editor’s note: GeekWire publishes guest opinions to foster informed discussion and highlight a diversity of perspectives on issues shaping the tech and startup community. If you’re interested in submitting a guest column, email us at tips@geekwire.com. Submissions are reviewed by our editorial team for relevance and editorial standards.


Report: Amazon buys 1,300 acres near Columbia River that could become a giant data center

Inside an Amazon data center. (AWS Photo / Noah Berger)

Amazon has purchased 1,300 acres of undeveloped land on the Oregon side of the Columbia River that could one day become a massive computing campus with up to 20 data center buildings, the Oregonian reports.

The Seattle-based tech company on Monday confirmed that it bought the land but declined to provide details on the potential data center.

“Amazon recently purchased land in Boardman, Oregon. Development plans are not final, and Amazon is performing our normal due diligence process as we develop new locations based on customer demand,” a company spokesperson told GeekWire via email.

(Google Gemini)

Johnson Economics, a Portland consulting firm, submitted a land-use proposal for the site last year, the Oregonian reported. The land was previously owned by a giant dairy operation that used it for grazing.

The proposal, submitted to Morrow County, calls for an “exascale” data center — a category significantly larger than the better-known “hyperscale” sites. Johnson Economics said the potential development could include 16 to 20 data center buildings, each measuring 250,000 square feet, with a total investment pegged at $8 billion to $12 billion. The campus could consume 1 gigawatt of power, according to records from the firm cited by the Oregonian.

Amazon has more data centers in Oregon than in any other Pacific Northwest state, with 47 sites, according to research firm Baxtel. Meta has 10 data centers there, and Google owns multiple campuses.

In January, Amazon secured an $83 million contract to develop a large-scale solar and battery storage facility in Oregon, beating out Puget Sound Energy in the bidding process, The Seattle Times previously reported. When complete, the facility is expected to generate 1.2 gigawatts of solar power and store an equivalent amount of energy.

Also in January, Oregon Gov. Tina Kotek announced the creation of a Data Center Advisory Committee to develop policy recommendations for managing the rapid expansion of data centers and other facilities that consume vast amounts of energy and water.

In Washington state, a bill that would have required utilities and data center companies to protect ratepayers from increased power costs and bring transparency to the environmental impacts of the facilities failed this year after Microsoft opposed the measure. The legislation drew on recommendations from Washington's Data Center Workgroup, convened last year by Gov. Bob Ferguson.

The pushback in both states reflects a broader national trend, as communities and elected officials increasingly question the energy demands, water consumption and other impacts of large-scale data center development.


Microsoft Copilot Is Now Injecting Ads Into Pull Requests On GitHub

Microsoft Copilot is reportedly injecting promotional "tips" into GitHub pull requests, with Neowin claiming more than 1.5 million PRs have been affected by messages advertising integrations like Raycast, Slack, Teams, and various IDEs. From the report: According to Melbourne-based software developer Zach Manson, a team member used the AI to fix a simple typo in a pull request. Copilot did the job, but it also took the liberty of editing the PR's description to include this message: "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast." A quick search of that phrase on GitHub shows that the same promotional text appears in over 11,000 pull requests across thousands of repositories. Even merge requests on GitLab aren't safe from the injection. So what's happening? Well, Raycast has a Copilot extension that can do things like create pull requests from a natural language command. The ad directly names Raycast, so you might think that Raycast is injecting the promo into the PRs to market its own app. But it is more likely that Microsoft is the one doing the injecting. If you look at the raw markdown of the affected pull requests, there is a hidden HTML comment, "START COPILOT CODING AGENT TIPS", placed just before the ad tip. This suggests Microsoft is using the comment to insert a "tip" that points back to its own developer ecosystem or partner integrations. UPDATE: Following backlash from developers, Microsoft has removed Copilot's ability to insert "tips" into pull requests. Tim Rogers, principal product manager for Copilot at GitHub, said the tips were intended "to help developers learn new ways to use the agent in their workflow." "On reflection," Rogers said he has since realized that letting Copilot make changes to PRs written by a human without their knowledge "was the wrong judgement call."
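Because HTML comments are invisible in rendered markdown, the marker only shows up when you inspect a PR's raw body. A minimal sketch of checking for it follows; the marker string comes from the report, while the helper name and sample PR body are illustrative:

```python
# Flag PR descriptions carrying the hidden Copilot "tips" marker.
# The marker text is quoted in the report; everything else here
# (function name, sample body) is a hypothetical illustration.

COPILOT_TIP_MARKER = "START COPILOT CODING AGENT TIPS"

def contains_copilot_tip(pr_body: str) -> bool:
    """Return True if a raw PR body contains the hidden marker comment."""
    return COPILOT_TIP_MARKER in pr_body

# A raw body like this renders as just the first line on GitHub;
# the comment and the promo text sit invisibly after it.
body = (
    "Fixes a typo in the README.\n"
    "<!-- START COPILOT CODING AGENT TIPS -->\n"
    "Quickly spin up Copilot coding agent tasks from anywhere "
    "on your macOS or Windows machine with Raycast.\n"
)
print(contains_copilot_tip(body))  # True
```

A plain string check like this is how developers spotted the pattern: searching raw PR bodies for the marker, rather than the rendered description.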

Read more of this story at Slashdot.


‘Transformative’: Amazon and Microsoft celebrate opening of light rail line between Seattle and Eastside

Sound Transit’s Link light rail crossing Lake Washington’s floating bridge. (Sound Transit Photo)

Amazon and Microsoft are on board with the Crosslake Connection.

With this weekend’s grand opening of Sound Transit’s Link light rail service over Lake Washington between Seattle and the Eastside, the region’s biggest tech employers touted what it means for their employees and others.

“I’ve lived here 32 years and I’m incredibly excited to see this thing come to life,” David Zapolsky, chief global affairs and legal officer for Amazon, said in a LinkedIn video post. “It’s going to make things easier for our employees, it’s going to make things easier for residents, it’s going to give access to jobs for people around the region. It’s going to be transformative. Everybody should try it out.”

Zapolsky said Amazon employs 50,000 corporate workers in Seattle, and 15,000 — and growing — in Bellevue.

“The ability to just hop on a train and get from downtown Seattle to downtown Bellevue in 30 minutes is game changing,” he added.

PREVIOUSLY: GeekWire rides the world’s first floating-bridge train — Seattle tech commutes will never be the same

Microsoft President Brad Smith detailed his company’s role in making the transportation milestone a reality. The company also employs about 50,000 workers in the region.

“Microsoft embraced this vision early on, more than two decades ago, because we understood what it could mean for our employees and for the communities where we live and work,” Smith wrote in a Microsoft blog post.

A timeline tracking the journey of rail service to Redmond starts in 2002 with Microsoft donating 10 acres of its headquarters campus land worth $8.7 million for a light rail station that would eventually become the Redmond Technology Station. 

Smith posted a fun video on his LinkedIn and Instagram feeds that includes an appearance by the Seattle Mariners’ racing salmon mascots. In this instance, they race to catch the train from Microsoft to T-Mobile Park, with Smith along for the ride west over the lake.

The opening Saturday of the final 7-mile segment of Sound Transit’s 2 Line and the Crosslake Connection drew thousands of people to the new Judkins Park station in Seattle’s Central District for a kick-off celebration. Gov. Bob Ferguson, Sen. Patty Murray, Sen. Maria Cantwell, Seattle Mayor Katie Wilson, and other officials cut a ribbon to officially open the line.

Sound Transit projects the fully integrated 2 Line will serve about 43,000 to 52,000 daily riders in 2026, with trains running every 10 minutes from approximately 5 a.m. to midnight seven days a week.

Monday morning’s commute will be the first test for how workers on both sides of the lake respond to the new transportation option.


Addressing the OWASP Top 10 Risks in Agentic AI with Microsoft Copilot Studio


Agentic AI is moving fast from pilots to production. That shift changes the security conversation. These systems do not just generate content. They can retrieve sensitive data, invoke tools, and take action using real identities and permissions. When something goes wrong, the failure is not limited to a single response. It can become an automated sequence of access, execution, and downstream impact.

Security teams are already familiar with application risk, identity risk, and data risk. Agentic systems collapse those domains into one operating model. Autonomy introduces a new problem: a system can be “working as designed” while still taking steps that a human would be unlikely to approve, because the boundaries were unclear, permissions were too broad, or tool use was not tightly governed.

The OWASP Top 10 for Agentic Applications (2026) outlines the top ten risks associated with autonomous systems that can act across workflows using real identities, data access, and tools.

This blog is designed to do two things: first, it explores the key findings of the OWASP Top 10 for Agentic Applications. Second, it highlights examples of practical mitigations for risks surfaced in the paper, grounded in Agent 365 and foundational capabilities in Microsoft Copilot Studio.

OWASP helps secure agentic AI around the world

OWASP (the Open Worldwide Application Security Project) is an online community led by a nonprofit foundation that publishes free and open security resources, including articles, tools, and documentation used across the application security industry. In the years since the organization’s founding, OWASP Top 10 lists have become a common baseline in security programs.

In 2023, OWASP identified a security gap that needed urgent attention: traditional application security guidance wasn’t fully addressing the nascent risks stemming from the integration of LLMs into existing applications and workflows. The OWASP Top 10 for Agentic Applications was designed to offer concise, practical, and actionable guidance for builders, defenders, and decision-makers. It is the work of a global community spanning industry, academia, and government, built through an “expert-led, community-driven approach” that includes open collaboration, peer review, and evidence drawn from research and real-world deployments.

Microsoft has been a supporter of the project for quite some time, and members of the Microsoft AI Red Team helped review the Agentic Top 10 before it was published. Pete Bryan, Principal AI Security Research Lead, on the Microsoft AI Red Team, and Daniel Jones, AI Security Researcher on the Microsoft AI Red Team, also served on the OWASP Agentic Systems and Interfaces Expert Review Board.

Agentic AI delivers a whole range of novel opportunities and benefits. However, unless it is designed and implemented with security in mind, it can also introduce risk. OWASP Top 10s have been the foundation of security best practice for years. When the Microsoft AI Red Team gained the opportunity to help shape a new OWASP list focused on agentic applications, we were excited to share our experiences and perspectives. Our goal was to help the industry as a whole create safe and secure agentic experiences.

Pete Bryan, Principal AI Security Research Lead

The 10 failure modes OWASP sees in agentic systems

Read as a set, the OWASP Top 10 for Agentic Applications makes one point again and again: agentic failures are rarely just “bad output”; they are bad outcomes. Many risks show up when an agent can interpret untrusted content as instruction, chain tools, act with delegated identity, and keep going across sessions and systems. Here is a quick breakdown of the types of risk called out in greater detail in the Top 10:

  1. Agent goal hijack (ASI01): Redirecting an agent’s goals or plans through injected instructions or poisoned content.
  2. Tool misuse and exploitation (ASI02): Misusing legitimate tools through unsafe chaining, ambiguous instructions, or manipulated tool outputs.
  3. Identity and privilege abuse (ASI03): Exploiting delegated trust, inherited credentials, or role chains to gain unauthorized access or actions.
  4. Agentic supply chain vulnerabilities (ASI04): Compromised or tampered third-party agents, tools, plugins, registries, or update channels.
  5. Unexpected code execution (ASI05): Turning agent-generated or agent-invoked code into unintended execution, compromise, or escape.
  6. Memory and context poisoning (ASI06): Corrupting stored context (memory, embeddings, RAG stores) to bias future reasoning and actions.
  7. Insecure inter-agent communication (ASI07): Spoofing, intercepting, or manipulating agent-to-agent messages due to weak authentication or integrity checks.
  8. Cascading failures (ASI08): A single fault propagating across agents, tools, and workflows into system-wide impact.
  9. Human–agent trust exploitation (ASI09): Abusing user trust and authority bias to get unsafe approvals or extract sensitive information.
  10. Rogue agents (ASI10): Agents drifting or being compromised in ways that cause harmful behavior beyond intended scope.
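A common mitigation pattern behind several of these entries, particularly tool misuse (ASI02) and identity and privilege abuse (ASI03), is deny-by-default authorization for every tool call. The sketch below illustrates that principle generically; it is not a Copilot Studio or Agent 365 API, and the tool names and scope strings are assumptions for the example:

```python
# Generic deny-by-default gate for agent tool invocations.
# Illustrative only: policy structure, tool names, and scopes
# are assumptions, not any Microsoft product API.

from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    allowed_tools: set            # explicit allowlist of tool names (ASI02)
    scopes: dict = field(default_factory=dict)  # per-tool permitted scopes (ASI03)

def authorize(policy: ToolPolicy, tool: str, requested_scope: str) -> bool:
    """Check a tool call against the policy; anything not granted is denied."""
    if tool not in policy.allowed_tools:
        return False  # tool was never allowlisted
    return requested_scope in policy.scopes.get(tool, set())

policy = ToolPolicy(
    allowed_tools={"search_docs", "send_mail"},
    scopes={"search_docs": {"read"}, "send_mail": {"send:internal"}},
)

print(authorize(policy, "search_docs", "read"))         # True
print(authorize(policy, "send_mail", "send:external"))  # False: scope not granted
print(authorize(policy, "run_shell", "execute"))        # False: tool not allowlisted
```

The point of the sketch is the shape of the control, not the specifics: an agent that can only invoke explicitly granted tools, with explicitly granted scopes, has far less room for goal hijack or privilege escalation to turn into action.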

For security teams, knowing that these issues are top of mind across the global community of agentic AI users is only the first half of the equation. What comes next is addressing each of them through properly implemented controls and guardrails.

Build observable, governed, and secure agents with Microsoft Copilot Studio

In agentic AI, the risk isn’t just what an agent is designed to do, but how it behaves once deployed. That’s why governance and security must span both development (where intent, permissions, and constraints are defined) and operation (where behavior must be continuously monitored and controlled). For organizations building and deploying agents, Copilot Studio provides a secure foundation to create trustworthy agentic AI. From the earliest stages of the agent lifecycle, built-in capabilities help ensure agents are safe and secure by design. Once deployed, IT and security teams can observe, govern, and secure agents across their lifecycle.

In development, Copilot Studio establishes clear behavioral boundaries. Agents are built using predefined actions, connectors, and capabilities, limiting exposure to arbitrary code execution (ASI05), unsafe tool invocation (ASI02), or uncontrolled external dependencies (ASI04). By constraining how agents interact with systems, the platform reduces the risk of unintended behavior, misuse, or redirection through indirect inputs. Copilot Studio also emphasizes containment and recoverability. Agents run in isolated environments, cannot modify their own logic without republishing (ASI10), and can be disabled or restricted when necessary (ASI07, ASI08). For example, if a deployed support agent is coaxed (via an indirect input) to “add a new action that forwards logs to an external endpoint,” it can’t quietly rewrite its own logic or expand its toolset on the fly; changes require republishing, and the agent can be disabled or restricted immediately if concerns arise. These safeguards prevent localized agent failures from propagating across systems and reinforce a key principle: agents should be treated as managed, auditable applications, not unmanaged automation.

To support governance and security during operation, Microsoft Agent 365 will be generally available on May 1. Currently in preview, Agent 365 enables organizations to observe, govern, and secure agents across their lifecycle, providing IT and security teams with centralized visibility, policy enforcement, and protection capabilities for agentic AI.

Once agents are deployed, Security and IT teams can use Agent 365 to gain visibility into agent usage, manage how agents are used, and enforce organizational guardrails across their environment. This includes insights into agent usage, performance, risks, and connections to enterprise data and tools. Teams can also implement policies and controls to help ensure safe and compliant operations. For example, if an agent accesses a sensitive document, IT and security teams can detect the activity in Agent 365, investigate the associated risk, and quickly restrict access or disable the agent before any impact occurs. Key capabilities include:

  • Access and identity controls alongside policy enforcement to ensure agents operate within the appropriate user or service context, helping reduce the risk of privilege escalation and applying guardrails like access packages and usage restrictions (ASI03).
  • Data security and compliance controls to prevent sensitive data leakage and detect risky or non-compliant interactions (ASI09).
  • Threat protection to identify vulnerabilities (ASI04) and detect incidents such as prompt injection (ASI01), tool misuse (ASI02), or compromised agents (ASI10).

Together, these capabilities provide continuous oversight and enable rapid response when agent behavior deviates from expected boundaries.
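The detect-then-restrict loop described above can be sketched in a few lines. This is an illustration of the pattern only; the event fields and the disable hook are hypothetical, not Agent 365 telemetry or controls:

```python
# Hedged sketch of a detect-then-restrict loop for agent oversight.
# Event schema, labels, and the disable_agent callback are all
# hypothetical stand-ins for real governance tooling.

SENSITIVE_LABELS = {"confidential", "highly-confidential"}

def review_event(event: dict, disable_agent) -> str:
    """Quarantine an agent when it touches sensitive-labeled data."""
    if event.get("resource_label") in SENSITIVE_LABELS:
        disable_agent(event["agent_id"])  # cut off the agent before downstream impact
        return "disabled"
    return "allowed"

disabled = []
verdict = review_event(
    {"agent_id": "support-bot-7", "resource_label": "confidential"},
    disable_agent=disabled.append,
)
print(verdict, disabled)  # disabled ['support-bot-7']
```

However the real controls are implemented, the principle is the same one the blog emphasizes: agent activity must be observable as discrete, attributable events, and restriction must be possible per agent, immediately, without waiting on a redeploy.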

Keep learning about agentic AI security

Agentic AI changes not just what software can do, but how it operates, introducing autonomy, delegated authority, and the ability to act across systems. The shift places new demands on how systems are designed, secured, and operated. Organizations that treat agents as privileged applications, with clear identities, scoped permissions, continuous oversight, and lifecycle governance, are better positioned to manage and reduce risk as they adopt agentic AI. Establishing governance early allows teams to scale innovation confidently, rather than retroactively building controls after the agents are embedded in workflows. Here are some resources to look over as the next step in your journey:

OWASP Top 10 for Agentic Applications (2026): The baseline: top risks for agentic systems, with examples and mitigations.

Microsoft AI Red Team: How Microsoft stress-tests AI systems and what teams can learn from that practice.

Microsoft Security for AI: Microsoft’s approach to protecting AI across identity, data, threat protection, and compliance.

Microsoft Agent 365: The enterprise control plane for observing, governing, and securing agents.

Microsoft AI Agents Hub: Role-based readiness resources and guidance for building agents.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


OWASP Top 10 for Agentic Applications content © OWASP Foundation. This content is licensed under CC BY-SA 4.0. For more information, visit https://creativecommons.org/licenses/by-sa/4.0/ 

The post Addressing the OWASP Top 10 Risks in Agentic AI with Microsoft Copilot Studio appeared first on Microsoft Security Blog.
