Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

I'm a senior UX researcher at Microsoft. Here's how I broke into AI without a tech background — and 3 lessons I learned.


I Just Don’t Understand Why You Don’t Update SSMS.


A long time ago in a galaxy far, far away, SQL Server Management Studio was included as part of the SQL Server installer.

Back then, upgrading SSMS was not only a technical problem, but a political one too. Organizations would say things like, “Sorry, we haven’t certified that cool new SQL Server 1982 here yet, so you can’t have access to the installer.” Developers and DBAs were forced to run SSMS from whatever ancient legacy version of SQL Server that their company had certified.

These days, SQL Server Management Studio v22 has:

  • A totally separate standalone installer
  • A totally separate version numbering system (SSMS v22 as opposed to SQL Server’s year-based numbers)
  • No designed-in dependencies (you can run new versions of SSMS on your desktop and connect to any supported version of SQL Server)
  • A much, much, much faster release schedule than SQL Server
  • Relatively few known issues – the list looks long at first, but if you go through ’em, few are relevant to the kind of work you do, and frankly, it’s still a shorter list than most of the previous SSMS versions I’ve used
  • A lot more cool features than the old and busted version you’re running today

And current versions even have a built-in, kick-ass upgrade mechanism:

[Screenshot: Easier than gaining weight on a cruise ship]

You should upgrade.
It keeps improving, quickly.

For example, SSMS v22.2.1 – a seemingly tiny version number change – just got a massive improvement in code completions. T-SQL code completion has never been great – IntelliSense doesn’t even auto-complete foreign key relationships. SSMS v22.2.1’s code completion will make your jaw drop.

For example, I never remember the syntax to write a cursor. It’s the kind of thing I don’t have to do often, and for years, I’ve used text files with stuff like this that I rarely (but sometimes) need quickly. With SSMS’s latest update, I just start typing a comment:

[Screenshot: Declare a cursor]

In that screenshot, see the different text colors? I’d started a comment and just written “Declare a cursor to” – and SSMS started filling in the rest. My goal in this case isn’t to loop through all the tables, though, so I’ll keep typing, explaining that I want to iterate through rows:

[Screenshot: Interesting cursor choice]

SSMS guessed that I wanted to iterate through the Posts table – and that’s SO COOL because SSMS actually looked at the tables in the database that I was connected to! If I try that same thing in the master database’s context, I get a different code completion!
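If you’ve forgotten the boilerplate too, the cursor SSMS was suggesting looks roughly like this (my reconstruction against a hypothetical Posts table with an integer Id column, not the exact text from the screenshot):

-- Rough sketch of the suggested completion: walk the rows of a Posts table
-- one at a time. Swap in whatever per-row work you actually need.
DECLARE @Id INT;

DECLARE post_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id
    FROM   dbo.Posts;

OPEN post_cursor;
FETCH NEXT FROM post_cursor INTO @Id;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CONVERT(VARCHAR(20), @Id);   -- placeholder for the real per-row work
    FETCH NEXT FROM post_cursor INTO @Id;
END;

CLOSE post_cursor;
DEALLOCATE post_cursor;

That shape is exactly the kind of thing I used to keep in those text files.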

Now, this does mean that GitHub Copilot and SSMS are running queries against your server, and sending that data up to the cloud, in order to do code completion. I totally understand that that’s a big security problem for many companies, and … okay, maybe I just answered the question of why some of you aren’t upgrading. But look, you can turn that feature off if you want, and you can track what queries it’s running if you’re curious. Let’s keep moving through the task I have at hand today. I’m not trying to run through the Posts table; I need to do something else, so let’s keep typing:

[Screenshot: Uh, that’s an odd cursor choice]

uh wait what

In the words of Ron Burgundy, that escalated quickly. That is most definitely NOT what I’m trying to do, but that’s the state of AI these days. It’ll gladly help you build a nuclear footgun with speed and ease. Let’s continue typing:

[Screenshot: The cursor I want]

(I don’t really need this specific thing, mind you, dear reader – it’s already built into sp_Blitz – but I’m just using this as an example for something a client asked me to do.) Now that I’ve clearly defined the comment, SSMS starts writing the code for me. I’m going to just tab my way through this, taking SSMS’s code completion recommendations for everything from here on out, just so you can see what it coded for me:

[Screenshot: The completed code]

In a matter of seconds, just by hitting tab and enter to let AI code for me, it’s done! Not only did it write the cursor, but it wrote the dynamic SQL for me to do the task too. Now all I have to do is click execute, and:

[Screenshot: Presto! The power of AI!]

This right here is the part where you expect me to make an AI joke.

But let’s stop for a second and just appreciate what happened. All I needed SSMS to do was build a cursor for me, and it went WAY above and beyond that. It wrote dynamic SQL too, because it understood that in order to get the right checkdb date, the query has to run inside dynamic SQL. That’s pretty impressive. I don’t mind troubleshooting some dynamic SQL that, frankly, I probably would have written incorrectly the first time too!
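If you’re curious what that pattern looks like in general, here’s a rough sketch of the approach (not the exact code Copilot generated for me): a cursor over sys.databases feeding dynamic SQL that reads each database’s last known good CHECKDB date out of the undocumented DBCC DBINFO table results, roughly the same technique sp_Blitz uses under the hood.

-- Rough sketch, not the exact generated code: loop through the online databases
-- and capture each one's last known good DBCC CHECKDB date from the undocumented
-- DBCC DBINFO output. Requires sysadmin.
CREATE TABLE #DbccInfo
(
    ParentObject NVARCHAR(255),
    [Object]     NVARCHAR(255),
    Field        NVARCHAR(255),
    [Value]      NVARCHAR(255)
);

CREATE TABLE #LastGoodCheckDb
(
    DatabaseName        sysname,
    LastGoodCheckDbDate NVARCHAR(255)
);

DECLARE @DatabaseName sysname,
        @StringToExecute NVARCHAR(MAX);

DECLARE database_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM   sys.databases
    WHERE  state_desc = N'ONLINE';

OPEN database_cursor;
FETCH NEXT FROM database_cursor INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- DBCC DBINFO reports on one database at a time, so build the command dynamically.
    SET @StringToExecute = N'DBCC DBINFO (' + QUOTENAME(@DatabaseName, '''')
                         + N') WITH TABLERESULTS, NO_INFOMSGS;';

    TRUNCATE TABLE #DbccInfo;
    INSERT INTO #DbccInfo (ParentObject, [Object], Field, [Value])
        EXEC sys.sp_executesql @StringToExecute;

    -- dbi_dbccLastKnownGood holds the last clean CHECKDB date; MAX() covers
    -- versions that report the field twice.
    INSERT INTO #LastGoodCheckDb (DatabaseName, LastGoodCheckDbDate)
        SELECT @DatabaseName, MAX([Value])
        FROM   #DbccInfo
        WHERE  Field = 'dbi_dbccLastKnownGood';

    FETCH NEXT FROM database_cursor INTO @DatabaseName;
END;

CLOSE database_cursor;
DEALLOCATE database_cursor;

SELECT DatabaseName, LastGoodCheckDbDate
FROM   #LastGoodCheckDb
ORDER BY DatabaseName;

DROP TABLE #DbccInfo, #LastGoodCheckDb;

Hit execute, and you get one row per database with its last clean CHECKDB date – exactly the kind of quoting-heavy dynamic SQL I’d have fumbled on the first try.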

Today, what we have is Baby’s First Code Completions. I can’t get angry about that – I’m elated about it, because we’ve never had code completions before, and now at least we have them! That’s fantastic, and it will absolutely make me more productive – in the places where I choose to use it, judiciously. I can’t rely on it to build whole tools for me out of nothing, but as an expert using it to augment and speed things up, it’s helpful, period.

I expect it to get even better, quickly.

I’m not saying that because I’m optimistic or because I have inside information. Microsoft simply doesn’t have a choice, because the only AI model that SSMS v22.2.1 supports right now is GPT-4.1. That’s so old and underpowered that OpenAI is retiring it this month, so Microsoft is going to have to switch to a newer model – which will automatically give us better code completions.

You’ll see evidence of that in the code completion documentation, and in SSMS v22.2.1 under Tools > Options > Text Editor > Code Completions:

[Screenshot: Text completion settings]

Because I installed the AI components of SSMS, I get a dropdown for Copilot Completions Model. That’s the brains of the operation, the cloud AI model that comes up with the ideas of what you’re trying to code, and codes it for you.

Today, as of this writing, the only option is GPT-4.1, the old and busted one. I’m excited to see which one(s) we get access to next. GitHub Copilot’s list of supported models is huge, and it includes some really heavy hitters that produce spectacular results, like Claude Opus 4.5 and Gemini 3 Pro.

Side note – if you’re on the free Copilot individual tier, you only get 2,000 code completions per month for free. You’re gonna wanna check the box in the above screenshot that says “Show code completions only after a pause in typing” – otherwise you’ll keep getting irrelevant suggestions like how to drop all your databases, ha ha ho ho, and you’ll run out of completion attempts pretty quickly.

So do it. Go update your SSMS, make sure to check the AI tools during the install, sign up for a free GitHub Copilot account if your company doesn’t already give you a paid one, configure SSMS with your Copilot account, and get with the program. You’ll thank me later when it starts auto-completing joins and syntax for you. It’s free, for crying out loud.


Microsoft Brings Copilot Studio Agents Directly Into Visual Studio Code


Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.

In today’s Cloud Wars Minute, I look at how Microsoft is helping developers build and scale AI agents safely inside Visual Studio Code.

Highlights

00:10 — The Microsoft Copilot Studio extension for Visual Studio Code is now generally available, providing developers with the ability to build and manage Copilot Studio agents directly within the IDE. This extension is designed for developers and integrates seamlessly into their workflows.

00:28 — It includes standard Git integration, pull request-based reviews, and auditability, and is tailored to the VS Code UX. The new extension reflects the growing complexity of agents and equips developers with the same best practices they use for app development, including, as Microsoft puts it, source control, pull requests, change history, and repeatable deployments.


01:02 — This extension really benefits developers when they need to manage complex agents, collaborate with multiple stakeholders, and ensure that any changes are made safely. It’s ideal for developers who prefer to build within their IDE while also having an AI assistant available to help them iterate more quickly and productively.

01:30 — The extension introduces important structural support for the development of AI agents. By integrating Copilot Studio directly into VS Code, Microsoft is empowering developers to build more efficiently, without compromising control, access to collaborators, or safety. This is a critical combination as AI agents become increasingly powerful and complex.

02:00 — As these agents continue to evolve, they require the same stringent checks and balances as traditional software. Microsoft’s Copilot Studio extension addresses this by giving developers the tools they need to scale agents responsibly while maintaining performance.




A New Mistral AI Model's Ultra-Fast Translation Gives Big AI Labs a Run for Their Money

“Too many GPUs makes you lazy,” says the French startup's vice president of science operations, as the company carves out a different path than the major US AI companies.

Code smells for AI agents: Q&A with Eno Reyes of Factory

Quality software still needs high-quality code, AI agents or not.

Beyond Pilot Purgatory


The hard truth about AI scaling is that for most organizations, it isn’t happening. Despite billions in investment, a 2025 report from the MIT NANDA initiative reveals that 95% of enterprise generative AI pilots fail to deliver measurable business impact. This isn’t a technology problem; it’s an organizational design problem.

The reason for this systemic failure is surprisingly consistent: Organizations isolate their AI expertise. This isolation creates two predictable patterns of dysfunction. In one model, expertise is centralized into a dedicated team—often called a Center of Excellence (CoE). While intended to accelerate adoption, this structure invariably becomes a bottleneck, creating a fragile “ivory tower” disconnected from the business realities where value is actually created. Business units wait months for resources, incentives become misaligned, and the organization’s overall AI literacy fails to develop.

In the opposite model, expertise is so distributed that chaos ensues. Autonomous business units build redundant infrastructure, hoard knowledge, and operate without coordinated governance. Costs spiral, incompatible technology stacks proliferate, and the organization as a whole becomes less intelligent than its individual parts.

Both approaches fail for the same underlying reason: They treat AI development as a separate activity from the core business.

The numbers confirm this struggle. Gartner predicts that 30% of GenAI projects will be abandoned after proof of concept by 2025 due to poor data quality, inadequate risk controls, and escalating costs. McKinsey’s State of AI in 2025 report reveals that while adoption is high, only one-third of organizations have scaled AI enterprise-wide. Even fewer—just 5%, according to BCG—have built the capabilities to generate significant value at scale.

The organizations that have successfully scaled AI beyond this “pilot purgatory”—companies like JPMorganChase, Walmart, and Uber—didn’t choose between these broken models. They built a third way, discovering through pressure from reality that the only thing that works is an outcome-oriented hybrid architecture. This model combines centralized enablement with distributed execution, aggressive governance with operational autonomy, and technical excellence with a relentless focus on business value.

This isn’t abstract theory. The characteristics of these successful architectures are becoming clear enough to articulate—and specific enough to implement. Here is what actually works.

What Actually Works: Outcome-Oriented Hybrid Architecture

The organizations that have successfully scaled AI share surprising structural similarities—not because they all studied the same framework but because they independently discovered the same operating model through trial and error.

This model has several key characteristics:

Platform teams with product thinking, not project thinking

Rather than treating central AI infrastructure as a cost center or a research lab, successful organizations build it as an internal product with defined customers (the business units), success metrics, and a roadmap.

Airbnb’s “Bighead” platform exemplifies this. The team didn’t just build ML infrastructure; they built a product that product teams could consume. Standardized feature engineering, model training, and deployment pipelines reduced development time from months to weeks. The platform team measured success not by research excellence but by adoption rates and time-to-market reductions for dependent teams.

Uber’s Michelangelo platform followed a similar pattern: develop shared ML infrastructure, price it internally to make resource allocation explicit, measure platform adoption and the business impact of applications built on it, and evolve the platform based on actual usage patterns.

Implementation reality: Platform teams need authority to make technical decisions while remaining accountable for business adoption. They require sustained funding separate from individual project budgeting. They need internal customers who participate in roadmap planning. Most organizations struggle with this because platform thinking requires executives to invest in capability that won’t generate revenue for 18+ months.

Outcome-driven embedded specialists, not isolated teams

Successful organizations don’t ask centralized AI teams to deliver solutions. They embed AI specialists directly into business value streams where they co-own business outcomes.

A telecommunications company we studied restructured its 50-person AI CoE by embedding team members into four core business units. Instead of business units requesting AI solutions, they now had dedicated specialists sitting in weekly operations meetings, understanding real problems, building real solutions, and feeling the pressure of business metrics. The result? Deployment speed increased 60% and adoption tripled.

The model works because:

  • Embedded specialists develop tacit knowledge about business constraints and operational realities that remote teams can never have.
  • They face direct accountability for outcomes, aligning incentives.
  • They become translators between technical and business languages.

Implementation reality: Embedding requires letting go of centralized command-and-control. The embedded specialists report dotted-line to central leadership but are primarily accountable to business unit leadership. This creates tension. Managing that tension (not eliminating it) is essential. Organizations that try to eliminate tension by centralizing authority again lose the benefits of embedding.

Dynamic governance, not static policies

Traditional governance models assume relatively stable, predictable environments where you can write policies in advance and enforce them. AI systems exhibit emergent behavior that governance can’t predict. You need frameworks that adapt as you learn.

JPMorganChase demonstrates this through its multilayered governance approach:

  • The Centralized Model Risk team reviews all AI systems before production deployment using consistent technical standards.
  • Domain-specific oversight committees in lending, trading, and compliance understand business context and risk appetite.
  • Ongoing monitoring systems track model performance, drift, and unintended consequences.
  • Clear escalation protocols activate when algorithmic decisions fall outside acceptable parameters.
  • Continuous improvement mechanisms incorporate lessons from deployed systems back into policies.

Implementation reality: Dynamic governance requires specialists who combine technical AI expertise with organizational knowledge and the authority to make decisions. These are expensive, scarce roles. Most organizations underinvest because governance doesn’t appear to generate direct value, so it gets underfunded relative to its importance.


Capability building, not just capability buying

Organizations that scale AI sustainably invest heavily in building organizational AI literacy across multiple levels:

  • Frontline workers need basic understanding of how to use AI tools and when to trust them.
  • Team leads and domain experts need to understand what AI can and can’t do in their domain, how to formulate problems for AI, and how to evaluate solutions.
  • Technical specialists need deep expertise in algorithm selection, model validation, and system integration.
  • Executives and boards need enough understanding to ask intelligent questions and make strategic decisions about AI investment.

Implementation reality: Capability building is a multiyear investment. It requires systematic training programs, rotation opportunities, and senior engineers willing to mentor junior people. It requires tolerance for people operating at reduced productivity while they’re developing new capabilities.

Measuring What Matters

Organizations caught in pilot purgatory often measure the wrong things. They track model accuracy, deployment cycles, or adoption rates. These vanity metrics look good in board presentations but don’t correlate with business value. Successful organizations understand AI is a means to an end and measure its impact on the business relentlessly.

Business outcomes: Track AI’s direct impact on primary financial and customer metrics.

  • Revenue growth: Does AI increase cross-sell and upsell opportunities through hyperpersonalization? Does it improve customer retention and Net Promoter Score (NPS)?
  • Cost and efficiency: Does AI increase throughput, lower operational cycle times, or improve first-contact resolution rates in customer service?
  • Risk reduction: Does AI reduce financial losses through better fraud detection? Does it lower operational risk by automating controls or reducing error rates?

Operational velocity: This measures time-to-market. How quickly can your organization move from identifying a business problem to deploying a working AI solution? Successful organizations measure this in weeks, not months. This requires a holistic view of the entire system—from data availability and infrastructure provisioning to governance approvals and change management.

Value-realization velocity: How long after deployment does it take to achieve a positive ROI? Organizations that track this discover that technical integration and user adoption are often the biggest delays. Measuring this forces a focus not just on building the model but on ensuring it’s used effectively.

System resilience: When individual components fail—a key person leaves, a data source becomes unavailable, or a model drifts—does your AI capability degrade gracefully or collapse? Resilience comes from modular architectures, shared knowledge, and having no single points of failure. Organizations optimized purely for efficiency are often fragile.

Governance effectiveness: Is your organization proactively catching bias, drift, and unintended consequences, or are problems only discovered when customers complain or regulators intervene? Effective governance is measured by the ability to detect and correct issues automatically through robust monitoring, clear incident response procedures, and continuous learning mechanisms.

The Implementation Reality

None of this is particularly new or revolutionary. JPMorganChase, Walmart, Uber, and other successfully scaling organizations aren’t doing secret magic. They’re executing disciplined organizational design:

Start with business, not technology capability. Identify the key business drivers and the metrics you measure, look at balance sheet levers, and see where AI can unlock value. Don’t build impressive systems for nonproblems.

Address technical debt first. You can’t deploy AI efficiently on fragile infrastructure. Many organizations waste 60%–80% of AI development capacity fighting integration problems that wouldn’t exist with better foundations. This doesn’t mean sacrificing speed; it means adopting a balanced infrastructure with clear integration points.

Design human-AI decision patterns intentionally. The most successful AI implementations don’t try to create fully autonomous systems. Instead, they create hybrid systems where algorithms handle speed and scale while humans maintain meaningful control. Commerzbank’s approach to automating client call documentation exemplifies this: Rather than replacing advisors, the system freed them from tedious manual data entry so they could focus on relationship-building and advice.

The pattern: AI proposes; rules constrain; humans approve; every step is logged. This requires API-level integration between algorithmic and rule-based processing, clear definitions of what gets automated versus what requires human review, and monitoring systems that track override patterns to identify when the algorithm is missing something important.
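To make that pattern concrete, here is a minimal sketch of what the logging side might look like as a data model. The table and column names are purely illustrative (they are not drawn from JPMorganChase, Commerzbank, or anyone else), but they show how every proposal, rule check, and human decision can land in one auditable place:

-- Hypothetical audit trail for "AI proposes; rules constrain; humans approve;
-- every step is logged." All names here are illustrative, not from any real system.
CREATE TABLE dbo.AgentProposals
(
    ProposalId      BIGINT IDENTITY(1,1) PRIMARY KEY,
    ProposedAction  NVARCHAR(MAX) NOT NULL,                            -- what the AI wants to do
    ProposedBy      NVARCHAR(128) NOT NULL,                            -- model or agent identifier
    ProposedAt      DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    RuleCheckStatus NVARCHAR(20)  NOT NULL DEFAULT N'Pending',         -- Pending / Passed / Blocked
    RuleCheckDetail NVARCHAR(MAX) NULL,                                -- which constraint fired, if any
    ReviewStatus    NVARCHAR(20)  NOT NULL DEFAULT N'AwaitingReview',  -- AwaitingReview / Approved / Rejected
    ReviewedBy      NVARCHAR(128) NULL,                                -- the human in the loop
    ReviewedAt      DATETIME2     NULL
);

-- Tracking override patterns then becomes a query: how often do humans reject
-- proposals that sailed through the automated rule checks?
SELECT ProposedBy,
       COUNT(*) AS RejectedAfterPassingRules
FROM   dbo.AgentProposals
WHERE  RuleCheckStatus = N'Passed'
  AND  ReviewStatus    = N'Rejected'
GROUP BY ProposedBy;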

Invest heavily in governance before scaling. Don’t treat it as an afterthought. Organizations that build governance structures first scale much faster because they don’t have to retrofit controls later.

Embed AI expertise into business units but provide platform support. Neither pure centralization nor pure distribution works. The hybrid model requires constant attention to balance autonomy with coordination.

Accept that 18–24 months is a realistic timeline for meaningful scale. Organizations expecting faster transformations are usually the ones that end up with integration debt and abandoned projects.

Build organizational capability, not just buy external talent. The organizations that sustain AI advantage are those that develop deep organizational knowledge, not those that cycle through external consultants.

Why This Still Matters

The reason organizations struggle with AI scaling isn’t that the technology is immature. Modern AI systems are demonstrably capable. The reason is that scaling AI across an enterprise is fundamentally an organizational problem. Scale requires moving AI from skunkworks (where brilliant people build brilliant systems) to operations (where average people operate systems reliably, safely, and profitably).

That’s not a technology problem. That’s an operating-model problem. And operating-model problems require organizational design, not algorithm innovation.

The organizations that figure out how to design operating models for AI will capture enormous competitive advantages. The organizations that continue bolting AI onto 1980s organizational structures will keep funding pilot purgatory.

The choice is structural. And structure is something leadership can control.


