Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft favors Anthropic over OpenAI for Visual Studio Code


Microsoft is adding automatic AI model selection to its Visual Studio Code editor that will automatically pick the best model for "optimal performance." This new auto model feature will select between Claude Sonnet 4, GPT-5, GPT-5 mini and other models for GitHub Copilot free users, but paid users will "primarily rely on Claude Sonnet 4."

It's a tacit admission from Microsoft that the software maker is favoring Anthropic's AI models over OpenAI's latest GPT-5 models for coding and development. Sources familiar with Microsoft's developer plans tell me that the company has been instructing its own developers to use Claude Sonnet 4 in recent m …

Read the full story at The Verge.

Shared by alvinashcraft, 2 hours ago (Pennsylvania, USA)

MCP in Practice


The following was originally published in Asimov Addendum,
September 10, 2025:
https://asimovaddendum.substack.com/p/read-write-act-inside-the-mcp-server

AI Disclosures Project

1. The Rise and Rise of MCP

Anthropic’s Model Context Protocol (MCP) was released in November 2024 as a way to make tools and platforms model-agnostic. MCP works by defining servers and clients. MCP servers are local or remote endpoints where tools and resources are defined. For example, GitHub released an MCP server that allows LLMs to both read from and write to GitHub. MCP clients are the connection from an AI application to MCP servers—they allow an LLM to interact with context and tools from different servers. An example of an MCP client is Claude Desktop, which allows the Claude models to interact with thousands of MCP servers.
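Concretely, MCP messages are JSON-RPC 2.0 frames. Below is a minimal sketch of what a client might send a server when listing and calling tools; the method names follow the MCP spec, but the tool name and arguments are hypothetical:

```python
import json

# A minimal sketch of the JSON-RPC 2.0 framing MCP uses on the wire.
# "tools/list" and "tools/call" are MCP spec method names; the tool
# name and arguments below are invented for illustration.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_issue",  # hypothetical tool on a GitHub-style server
        "arguments": {"repo": "octo/demo", "title": "Bug: crash on start"},
    },
}

# Clients and servers exchange these frames over stdio or HTTP transports.
print(json.dumps(call_request, indent=2))
```

The same framing carries resources and prompts as well; tools are simply the part of the protocol that lets a model act.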

In a relatively short time, MCP has become the backbone of hundreds of AI pipelines and applications. Major players like Anthropic and OpenAI have built it into their products. Developer tools such as Cursor (a coding-focused text editor or IDE) and productivity apps like Raycast also use MCP. Additionally, thousands of developers use it to integrate AI models and access external tools and data without having to build an entire ecosystem from scratch.

In previous work published with AI Frontiers, we argued that MCP can act as a great unbundler of “context”—the data that helps AI applications provide more relevant answers to consumers. In doing so, it can help decentralize AI markets. We argued that, for MCP to truly achieve its goals, it requires support from:

  1. Open APIs: So that MCP applications can access third-party tools for agentic use (write actions) and context (read)
  2. Fluid memory: Interoperable LLM memory standards, accessed via MCP-like open protocols, so that the memory context accrued at OpenAI and other leading developers does not get stuck there, preventing downstream innovation

We expand upon these two points in a recent policy note for those looking to dig deeper.

More generally, we argue that protocols like MCP are foundational “rules of the road” for AI markets: open disclosure and communication standards built into the network itself rather than imposed after the fact by regulators. Protocols are fundamentally market-shaping devices, architecting markets through their permissions, rules, and interoperability, and they can have a big impact on how the commercial markets built on top of them function.

1.1 But how is the MCP ecosystem evolving?

Yet we don’t have a clear idea of the shape of the MCP ecosystem today. What are the most common use cases of MCP? What sort of access is being given by MCP servers and used by MCP clients? Is the data accessed via MCP “read-only” for context, or does it allow agents to “write” and interact with it—for example, by editing files or sending emails?

To begin answering these questions, we look at the tools and context that AI agents use via MCP servers. This gives us a clue about what is being built and what is getting attention. In this article, we don’t analyze MCP clients—the applications that use MCP servers. We instead limit our analysis to what MCP servers are making available for building.

We assembled a large dataset of MCP servers (n = 2,874), scraped from Pulse MCP.[1] We then enriched it with GitHub star-count data on each server. On GitHub, stars are similar to Facebook “likes,” and developers use them to show appreciation, bookmark projects, or indicate usage.

In practice, while there were plenty of MCP servers, we found that the top few garnered most of the attention and, likely by extension, most of the use. Just the top 10 servers had nearly half of all GitHub stars given to MCP servers.
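The concentration measure behind these figures is simple: the top-k share is the fraction of all stars held by the k most-starred servers. A sketch, using invented star counts rather than our dataset:

```python
# Share of all GitHub stars captured by the k most-starred servers.
# The star counts below are invented for illustration, not from the dataset.
def top_k_share(stars, k):
    total = sum(stars)
    top = sum(sorted(stars, reverse=True)[:k])
    return top / total

# A long tail of small servers plus a few dominant ones.
stars = [61_000, 30_000, 18_425, 9_000, 4_000, 2_000] + [100] * 200
print(f"Top-3 share of stars: {top_k_share(stars, 3):.1%}")
```

Applied to the real dataset, this is how the 45.7% (top 10) and 88.3% (top 10%) figures below are computed.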

Some of our takeaways are:

  1. MCP usage appears to be fairly concentrated. This means that, if left unchecked, a small number of servers and (by extension) APIs could have outsize control over the MCP ecosystem being created.
  2. MCP use (tools and data being accessed) is dominated by just three categories: Database & Search (RAG), Computer and Web Automation, and Software Engineering. Together, they received nearly three-quarters (72.6%) of all stars on GitHub (which we proxy for usage).
  3. Most MCP servers support both read (access context) and write (changing context) operations, showing that developers want their agents to be able to act on context, not just consume it.
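The read/write distinction in the last takeaway can be sketched in code. MCP tool annotations include a `readOnlyHint` flag; a client could derive a server’s capability profile from the tools it advertises. The tool list below is hypothetical, and real classification would need to handle tools that omit the hint:

```python
# Sketch: classify a server as read-only, write-only, or read+write from
# the readOnlyHint tool annotation in the MCP spec. The tool list is
# hypothetical; tools without the hint are treated as writes here.
tools = [
    {"name": "search_issues", "annotations": {"readOnlyHint": True}},
    {"name": "create_issue",  "annotations": {"readOnlyHint": False}},
]

def classify(tools):
    reads = any(t.get("annotations", {}).get("readOnlyHint", False)
                for t in tools)
    writes = any(not t.get("annotations", {}).get("readOnlyHint", False)
                 for t in tools)
    if reads and writes:
        return "read+write"
    return "write-only" if writes else "read-only"

print(classify(tools))  # read+write
```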

2. Findings

To start with, we analyzed the MCP ecosystem for concentration risk.

2.1 MCP server use is concentrated

We found that MCP usage is concentrated among several key MCP servers, judged by the number of GitHub stars each repo received.

Despite there being thousands of MCP servers, the top 10 servers make up nearly half (45.7%) of all GitHub stars given to MCP servers (pie chart below) and the top 10% of servers make up 88.3% of all GitHub stars (not shown).

The top 10 servers received 45.7% of all GitHub stars in our dataset of 2,874 servers.

This means that the majority of real-world MCP users are likely relying on the same few services made available via a handful of APIs. This concentration likely stems from network effects and practical utility: Developers gravitate toward servers that solve universal problems like web browsing, database access, and integration with widely used platforms like GitHub, Figma, and Blender. This concentration pattern seems typical of developer-tool ecosystems. A few well-executed, broadly applicable solutions tend to dominate, while more specialized tools occupy smaller niches.

2.2 The top 10 MCP servers really matter

Next, the top 10 MCP servers are shown in the table below, along with their star count and what they do.

Among the top 10 MCP servers, GitHub, Repomix, Context7, and Framelink are built to assist with software development: Context7 and Repomix by gathering context, GitHub by allowing agents to interact with projects, and Framelink by passing on the design specifications from Figma directly to the model. The Blender server allows agents to create 3D models of anything, using the popular open source Blender application. Finally, Activepieces and MindsDB connect the agent to multiple APIs with one standardized interface: in MindsDB’s case, primarily to read data from databases, and in Activepieces to automate services.

The top 10 MCP servers with short descriptions, design courtesy of Claude.

The dominance of agentic browsing, in the form of Browser Use (61,000 stars) and Playwright MCP (18,425 stars), stands out. This reflects the fundamental need for AI systems to interact with web content. These tools allow AI to navigate websites, click buttons, fill out forms, and extract data just like a human would. Agentic browsing has surged, even though it’s far less token-efficient than calling an API. Browsing agents often need to wade through multiple pages of boilerplate to extract slivers of data a single API request could return. Because many services lack usable APIs or tightly gate them, browser-based agents are often the simplest—sometimes the only—way to integrate, underscoring the limits of today’s APIs.

Some of the top servers are unofficial. Both the Framelink and Blender MCP are servers that interact with just a single application, but they are both “unofficial” products. This means that they are not officially endorsed by the developers of the application they are integrating with—those who own the underlying service or API (e.g., GitHub, Slack, Google). Instead, they are built by independent developers who create a bridge between an AI client and a service—often by reverse-engineering APIs, wrapping unofficial SDKs, or using browser automation to mimic user interactions.

It is healthy that third-party developers can build their own MCP servers, since this openness encourages innovation. But it also introduces an intermediary layer between the user and the API, which brings risks around trust, verification, and even potential abuse. With open source local servers, the code is transparent and can be vetted. By contrast, remote third-party servers are harder to audit, since users must trust code they can’t easily inspect.

At a deeper level, the repos that currently dominate MCP servers highlight three encouraging facts about the MCP ecosystem:

  1. First, several prominent MCP servers support multiple third-party services for their functionality. MindsDB and Activepieces serve as gateways to multiple (often competing) service providers through a single server. MindsDB allows developers to query different databases like PostgreSQL, MongoDB, and MySQL through a single interface, while Taskmaster allows the agent to delegate tasks to a range of AI models from OpenAI, Anthropic, and Google, all without changing servers.
  2. Second, agentic browsing MCP servers are being used to get around potentially restrictive APIs. As noted above, Browser Use and Playwright access internet services through a web browser, bypassing the limitations that APIs can impose on what developers are able to build, though they run up against anti-bot protections instead.
  3. Third, some MCP servers do their processing on the developer’s computer (locally), making them less dependent on a vendor maintaining API access. Some MCP servers examined here can run entirely on a local computer without sending data to the cloud—meaning that no gatekeeper has the power to cut you off. Of the 10 MCP servers examined above, only Framelink, Context7, and GitHub rely on just a single cloud-only API dependency that can’t be run locally end-to-end on your machine. Blender and Repomix are completely open source and don’t require any internet access to work, while MindsDB, Browser Use, and Activepieces have local open source implementations.

2.3 The three categories that dominate MCP use

Next, we grouped MCP servers into different categories based on their functionality.

When we analyzed what types of servers are most popular, we found that three dominated: Computer & Web Automation (24.8%), Software Engineering (24.7%), and Database & Search (23.1%).

Software engineering, computer and web automation, and database and search received 72.6% of all stars given to MCP servers.

Widespread use of Software Engineering (24.7%) MCP servers aligns with Anthropic’s economic index, which found that an outsize portion of AI interactions were related to software development.

The popularity of both Computer & Web Automation (24.8%) and Database & Search (23.1%) also makes sense. Before the advent of MCP, web scraping and database search were highly integrated applications across platforms like ChatGPT, Perplexity, and Gemini. With MCP, however, users can now access that same search functionality and connect their agents to any database with minimal effort. In other words, MCP’s unbundling effect is highly visible here.

2.4 Agents interact with their environments

Lastly, we analyzed the capabilities of these servers: Do they allow AI applications only to access data and tools (read), or also to perform agentic operations with them (write)?

Across all but two of the MCP server categories we examined, the most popular MCP servers supported both reading (access context) and writing (agentic) operations—shown in turquoise. The prevalence of servers with combined read and write access suggests that agents are not being built just to answer questions based on data but also to take action and interact with services on a user’s behalf.

Showing MCP servers by category. Dotted red line at 10,000 stars (likes). The most popular servers support both read and write operations by agents. In contrast, almost no servers support just write operations.

The two exceptions are Database & Search (RAG) and Finance MCP servers, where read-only access is common. This is likely because data integrity is critical to ensuring reliability in these domains.

3. The Importance of Multiple Access Points

A few implications of our analysis can be drawn out at this preliminary stage.

First, concentrated MCP server use compounds the risks of API access being restricted. As we discussed in “Protocols and Power,” MCP remains constrained by “what a particular service (such as GitHub or Slack) happens to expose through its API.” A few powerful digital service providers have the power to shut down access to their servers.

One important hedge against API gatekeeping is that many of the top servers avoid relying on a single provider. Two further safeguards are relevant:

  • They offer local processing of data on a user’s machine whenever possible, instead of sending the data to a third-party server. Local processing ensures that a remote provider cannot restrict the functionality.
  • If running a service locally is not possible (e.g., email or web search), the server should still support multiple avenues of getting at the needed context through competing APIs. For example, MindsDB functions as a gateway to multiple data sources, so instead of relying on just one database to read and write data, it goes to great lengths to support multiple databases in one unified interface, essentially making the backend tools interchangeable.
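The MindsDB-style gateway pattern in the second bullet can be sketched as a single query interface routed to interchangeable backends. The backend classes below are hypothetical stand-ins for real database drivers:

```python
# Sketch of the gateway pattern: one interface, many interchangeable
# backends. PostgresBackend/MySQLBackend are hypothetical stand-ins for
# real drivers; a production gateway would translate dialects too.
from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def query(self, sql: str) -> list: ...

class PostgresBackend(Backend):
    def query(self, sql):
        return [("pg", sql)]      # stand-in for a real driver call

class MySQLBackend(Backend):
    def query(self, sql):
        return [("mysql", sql)]   # stand-in for a real driver call

class Gateway:
    """Routes one query interface to whichever backend is configured,
    making the data sources interchangeable from the agent's side."""
    def __init__(self, backend: Backend):
        self.backend = backend

    def query(self, sql: str) -> list:
        return self.backend.query(sql)

gw = Gateway(PostgresBackend())
print(gw.query("SELECT 1"))  # [('pg', 'SELECT 1')]
```

Swapping `PostgresBackend` for `MySQLBackend` changes nothing upstream, which is exactly the interchangeability the bullet describes.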

Second, our analysis points to the fact that current restrictive API access policies are not sustainable. Web scraping and bots, accessed via MCP servers, are probably being used (at least in part) to circumvent overly restrictive API access, complicating the increasingly common practice of banning bots. Even OpenAI is coloring outside the API lines, using a third-party service to access Google Search’s results through web scraping, thereby circumventing its restrictive API.

Expanding structured API access in a meaningful way is vital. This ensures that legitimate AI automation runs through stable, documented endpoints. Otherwise, developers resort to brittle browser automation where privacy and authorization have not been properly addressed. Regulatory guidance could push the market in this direction, as with open banking in the US.

Finally, encouraging greater transparency and disclosure could help identify where the bottlenecks in the MCP ecosystem are.

  • Developers operating popular MCP servers (above a certain usage threshold) or providing APIs used by top servers should report usage statistics, access denials, and rate-limiting policies. This data would help regulators identify emerging bottlenecks before they become entrenched. GitHub might facilitate this by encouraging these disclosures, for example.
  • Additionally, MCP servers above certain usage thresholds should clearly list their dependencies on external APIs and what fallback options exist if the primary APIs become unavailable. This is not only helpful in determining the market structure, but also essential information for security and robustness for downstream applications.

The goal is not to eliminate all concentration in the network, but to ensure that the MCP ecosystem remains contestable, with multiple viable paths for innovation and user choice. By addressing both technical architecture and market dynamics, these suggested tweaks could help MCP achieve its potential as a democratizing force in AI development, rather than merely shifting bottlenecks from one layer to another.


Footnotes

  1. For this analysis, we categorized each repo into one of 15 categories using GPT-5 mini. We then human-reviewed and edited the top 50 servers that make up around 70% of the total star count in our dataset.

Appendix

Dataset

The full dataset, along with descriptions of the categories, can be found here (constructed by Sruly Rosenblat):

https://huggingface.co/datasets/sruly/MCP-In-Practice

Limitations

There are a few limitations to our preliminary research:

  • GitHub stars aren’t a measure of download counts or even necessarily a repo’s popularity.
  • Only the name and description were used when categorizing repos with the LLM.
  • Categorization was subject to both human and AI errors and many servers would likely fit into multiple categories.
  • We only used the PulseMCP list for our dataset; other lists include different servers (e.g., Browser Use isn’t on mcpmarket.com).
  • We excluded some repos from our analysis, such as those that had multiple servers and those we weren’t able to fetch the star count for. We may miss some popular servers by doing this.

MCP Server Use Over Time

The growth of the top nine repos’ star count over time from MCP’s launch date on November 25, 2024, until September 2025.
NOTE: We were only able to track the Browser-Use repo until 40,000 stars; hence the flat line in its graph. In reality, roughly 21,000 stars were added over the next few months (the other graphs in this blog are properly adjusted).



Microsoft leads shift beyond data unification to organization, delivering next-gen AI readiness with new Microsoft Fabric capabilities


We’re in a hinge moment for AI. The experiments are over and the real work has begun. Centralizing data, once the finish line, is now the starting point. The definition of “AI readiness” is evolving as increasingly sophisticated agents demand rich, contextualized data grounded in business operations to deliver meaningful results. What sets leaders apart is the quality of the data platform experience in delivering on the shared meaning, live context and interactivity that helps systems understand the business as it is, not just as a static report. Across industries, frontier firms are dissolving silos and equipping teams with AI agents and reasoning systems that go beyond answers to help people build, explore, decide and act. The result: a new rhythm of work that’s faster, more connected, more explainable and closer to the customer.

Microsoft Fabric: Powering AI‑Ready data innovation enterprise‑wide at FabCon Europe

As the first hyperscaler to fully embrace this paradigm, Microsoft is introducing new capabilities in its fastest-growing data and analytics platform, Microsoft Fabric, at the European Microsoft Fabric Community Conference (FabCon). With Fabric, we are bringing together all of an organization’s data into a single, AI‑ready foundation so every team can turn data into actionable insight with the full context of their business. At FabCon, Microsoft is announcing a major leap forward in AI data readiness with two new capabilities: Graph in Fabric, a low/no-code platform for modeling and analyzing relationships across enterprise data, and Maps in Fabric, which brings geospatial analytics into Fabric so users can visualize and enrich location-based data at scale. Maps joins the recently launched digital twin builder in Microsoft Fabric as part of Real-Time Intelligence.

We’re also expanding Fabric’s capabilities further with new OneLake shortcuts and mirroring sources, a Graph database connecting entities across OneLake, enhanced developer experiences and new security controls — providing everything needed to run mission-critical scenarios on Fabric.

These capabilities mark a fundamental evolution in data strategy for business leaders scaling intelligent AI applications and agents across their organizations.

Train smarter agents with Graph and Maps

The foundation of every successful AI agent isn’t just data — it’s organized knowledge. As businesses accelerate into the AI era, the challenge isn’t gathering more information, but structuring it so agents can reason, connect and act with purpose.

The previews of Graph and Maps in Fabric are designed to help businesses organize their raw data for real-world impact. Graph in Fabric draws on the graph design principles proven at LinkedIn to reveal connections across customers, partners and supply chains, enabling organizations to visualize and query relationships that drive business outcomes.

Maps in Fabric brings geospatial analytics, empowering teams to make location-aware decisions as they respond to operational challenges in real time.

But these aren’t just technical milestones, they’re strategic tools for business leaders. AI is sparking new cross-company collaboration by connecting enterprise data — uniting business functions, accelerating decisions and empowering teams to share and scale value through open data flow. Whether it’s mapping supply chain dependencies or visualizing customer journeys, Graph and Maps help businesses move from isolated data points to a connected, actionable foundation for AI.

Discover how Graph and Maps in Fabric unlock real-time intelligence for AI-driven operations. Get the engineering inside scoop from Corporate Vice President of Messaging and Real-Time Analytics, Yitzhak Kesselman, in his latest blog: “The Foundation for Powering AI-Driven Operations.”

Enhancing developer experiences across Fabric to accelerate AI projects

Fabric is quickly becoming the go-to platform for data developers worldwide. To fuel that momentum, we’re rolling out new tools that make it easier to build, automate and innovate.

The new Fabric Extensibility Toolkit simplifies architecture and automation — so every solution is secure, scalable and aligned to business needs. And with the preview of Fabric Model Context Protocol (MCP), developers can tap into AI-assisted code generation and item authoring right inside familiar environments like Visual Studio Code and GitHub Codespaces.

These updates aren’t just for software developers. They’re for any business leader ready to turn organized data into competitive advantage. Fabric helps teams move from experimentation to enterprise-scale impact, with speed and governance built in.

OneLake: The AI-Ready data foundation

OneLake is the unified data lake at the heart of Fabric. It’s designed to ingest data once and make it instantly usable across analytics, AI and applications to accelerate insight. Today, we’re introducing new features to give teams unprecedented visibility and control with OneLake.

With the addition of mirroring capabilities for Oracle and Google BigQuery, expanded support for data agents and OneLake shortcuts to Azure Blob Storage, organizations can bring all their data together, no matter where it lives.

OneLake shortcut transformations can now convert JSON and Parquet files to Delta tables for instant analysis. OneLake also offers secure governance tools, including a new Secure tab in the catalog for managing permissions and a Govern tab for data oversight.

We’re also releasing the Azure AI Search integration with OneLake. By making this available in the Azure AI Foundry portal, we’re streamlining the experience for developers and data teams, helping them build smarter, more context-aware agents faster.

Our OneLake Table API preview allows apps to discover and inspect tables using Fabric’s security model, and OneLake diagnostics enables workspace owners to capture all data activity and storage operations.

Microsoft Fabric and Azure AI Foundry: A complete data, AI and agent ecosystem

In the AI era, every project is a data project, and success depends on reducing complexity. Microsoft is addressing this head-on by continuing to natively integrate Fabric and Azure AI Foundry together to help simplify how enterprises design, customize and manage AI apps and agents.

Fabric provides a single way to reason over data wherever it resides, delivering the structured, contextualized foundation AI needs. On top of that foundation, Azure AI Foundry enables developers to work with their favorite tools, including GitHub, Visual Studio and Copilot Studio, to efficiently build and scale AI applications and agents, while giving IT leaders visibility into performance, governance and ROI.

By bringing data, models and operations together, Fabric and Azure AI Foundry help businesses accelerate innovation and align AI initiatives with strategic goals. This unified approach eliminates complexity, speeds adoption and creates a platform-first advantage so organizations can unlock new value from their data and lead in the next generation of AI readiness.

Build the foundation, lead the future

The organizations leading this next chapter aren’t just deploying AI, they’re engineering for it. That starts with a foundation where data is unified, governed and now enriched with context so AI apps and agents can act confidently and scale without friction. Graph and Maps, enhanced developer tools, OneLake improvements and integration with Azure AI Foundry push Microsoft Fabric past data unification into AI‑ready, context‑rich data built for tomorrow’s AI challenges.

Those organizations are also skilling up. Thousands of Fabric users have passed their exams, earning more than 50,000 certifications collectively across the Foundry, Fabric Analytics Engineer and Fabric Data Engineer roles.

The future of AI belongs to platforms, not point solutions — ecosystems that connect data, intelligence and action. With that foundation, every agent, app and insight compounds value. Microsoft delivers that platform today, helping organizations unlock new levels of intelligence and impact.

Explore the full spectrum of new features coming to Fabric in today’s blog from Arun Ulagaratchagan, Corporate Vice President of Azure Data: “FabCon Vienna: Build data-rich agents on an enterprise-ready foundation.”

The post Microsoft leads shift beyond data unification to organization, delivering next-gen AI readiness with new Microsoft Fabric capabilities appeared first on The Official Microsoft Blog.


What an MCP implementation looks like at a CRM company

Ryan chats with Karen Ng, EVP of Product at HubSpot, about Model Context Protocol (MCP) and how HubSpot implemented an MCP server for its CRM product.

The Database Migration Disaster— Why Software Development Teams Need Psychological Safety | Shawn Dsouza


Shawn Dsouza: The Database Migration Disaster— Why Software Development Teams Need Psychological Safety

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

Shawn worked with a skilled team migrating a database from local to cloud-based systems, supported by a strong Product Owner. Despite surface-level success in ceremonies, he noticed the team avoided discussing difficult topics. After three months of seemingly smooth progress, they delivered to pre-production only to discover 140 critical issues. The root cause? Unspoken disagreements and tensions that festered beneath polite ceremony facades. The situation deteriorated to the point where a senior engineer quit, teaching Shawn that pausing to address underlying issues doesn't cost time—it builds sustainability.

In this segment, we refer to the episodes with Mahesh Jade, a previous guest on the Scrum Master Toolbox podcast.

Featured Book of the Week: The Advice Trap by Michael Bungay Stanier

Shawn discovered this transformative book when he realized he was talking too much in team meetings despite wanting to add value. The Advice Trap revealed how his instinct to give advice, though well-intentioned, was actually self-defeating. The book taught him to stay curious longer and ask better questions rather than rushing to provide solutions. As Shawn puts it, "The minute you think you have the answer you stop listening"—a lesson that fundamentally changed his coaching approach and helped him become more effective with his teams.

Self-reflection Question: When working with teams, do you find yourself jumping to advice-giving mode, or do you stay curious long enough to truly understand the underlying challenges?

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn’t just about innovation—it’s about coaching!🔥

Angela thought she was just there to coach a team. But now, she’s caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn’t just about the product—it’s about the people.

🚨 Will Angela’s coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

Buy Now on Amazon

[The Scrum Master Toolbox Podcast Recommends]

About Shawn Dsouza
Shawn, a Mangalore native and Software Technology postgraduate from AIMIT, brings 8+ years of IT expertise, excelling as a Scrum Master fostering innovation and teamwork. Beyond technology, he leads SPARK, a social service initiative, and pursues his passion as an aquarist, nurturing vibrant aquatic ecosystems with dedication.

You can link with Shawn Dsouza on LinkedIn.

Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20250916_Shawn_Dsouza_Tue.mp3?dest-id=246429

The PowerShell Podcast: Beginnings, Blue Bars, and the Valley of Despair with Steven Judd


In this episode, host Andrew Pla welcomes back Steven Judd, Microsoft MVP, teacher, and longtime community contributor. Together, they dive into the theme of beginnings, from starting careers in IT, to first encounters with PowerShell, and the importance of resilience while navigating the “valley of despair” in learning. Steven shares his journey from music and business studies into technology, where curiosity, persistence, and a willingness to read the manuals shaped his career.
The conversation also explores how community, conferences, and friendships have been essential to Steven’s growth. From humorous “please clap” moments at Nano Conf to building lasting connections, Steven highlights the power of showing up authentically, persevering through challenges, and helping others along the way.

Key Takeaways

  • The Power of the Basics: Learning commands like Get-Command, Get-Help, and Get-Member (“the tripod”) forms the foundation of a strong PowerShell journey.
  • Resilience in Learning: Navigating the “valley of despair” in tough topics like PowerShell, certificates, or regex is where growth happens. Persistence pays off.
  • Community is Everything: From user groups to conferences, surrounding yourself with peers and mentors accelerates growth and helps combat imposter syndrome.

Guest Bio
Steven Judd is a Microsoft MVP, educator, and veteran PowerShell enthusiast who has been teaching and sharing knowledge in the community for many years. With a background that spans business, music, and IT, Steven brings a unique perspective on learning, resilience, and humor. Known for his approachable teaching style and dad jokes, Steven has helped countless professionals embrace automation, improve their skills, and find their place in the PowerShell community.

Resource Links
