Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Copilot is gaslighting developers and we’re all pretending it’s fine


Microsoft's AI sidekick is writing more code than ever, but the devs maintaining it are quietly losing their sanity. Here's why it's not just a GitHub problem; it's an industry symptom.


Strategic Pattern Selection: When to Use Factory vs Builder vs Prototype vs Object Pool in High-Performance C# Applications


## 1 Set the Stage: Why Creational Pattern Choice Matters in Modern .NET

In high-performance C# applications—particularly those serving thousands of requests per second or processing streaming workloads—the way you create and manage objects is not an implementation detail. It directly shapes throughput, tail latency, and memory stability. This article explores *when and how to choose between Factory, Builder, Prototype, and Object Pool patterns* in modern .NET 9 and early .NET 10 systems. We'll reason through measurable signals—allocation rates, GC CPU, construction complexity—and demonstrate production-grade implementations.

### 1.1 The production reality: throughput, latency SLOs, and GC budgets

Modern .NET applications, from ASP.NET Core APIs to game servers and stream processors, run under *strict Service-Level Objectives (SLOs)*. Teams commit to latency percentiles (say, P95 < 30 ms and P99 < 100 ms) while sustaining tens or hundreds of thousands of allocations per second.

The real constraint isn't just CPU; it's the *memory allocation and collection pipeline*. Every allocation contributes to eventual GC work. As allocation rates grow, you hit Gen0/Gen1 churn, and occasionally Gen2 and Large Object Heap (LOH) collections, which cause visible spikes.
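
To make that trade-off concrete before the patterns themselves, here is a minimal Object Pool sketch using the Microsoft.Extensions.ObjectPool package; the pooled StringBuilder policy and the toy request scenario are illustrative assumptions, not code from the article:

```csharp
using System;
using System.Text;
using Microsoft.Extensions.ObjectPool;

// Rent a StringBuilder from a shared pool instead of allocating one per
// request, trading a small synchronization cost for a lower steady-state
// allocation rate (and therefore less Gen0/Gen1 churn).
var provider = new DefaultObjectPoolProvider();
ObjectPool<StringBuilder> pool =
    provider.Create(new StringBuilderPooledObjectPolicy());

StringBuilder sb = pool.Get(); // rent: no new allocation once the pool is warm
try
{
    sb.Append("order:").Append(42);
    Console.WriteLine(sb.ToString());
}
finally
{
    pool.Return(sb); // the policy clears the builder before it is reused
}
```

The rent/return shape is the signature of the Object Pool pattern: the signal to reach for it is an allocation-heavy hot path, as opposed to complex construction (which points to Builder) or selecting among related types (Factory).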


Microsoft Moves Azure DevOps MCP Server From Preview To General Availability


Microsoft announced in October 2025 that its Azure DevOps MCP Server, a local Model Context Protocol server designed to bring richer context to AI assistants like GitHub Copilot, has exited public preview and become generally available.

By Craig Risi

Treat Your AI Assistant Like an Overconfident Junior Developer


From finishing your sentences in emails to finishing entire blocks of code, AI has come a long way. It’s like having a hyper-eager junior developer on your team – fast, capable, and sometimes overconfident.

But speed isn’t everything. These tools still need guidance, context, and careful oversight.

In this article, Birgitta Böckeler (Distinguished Engineer, Thoughtworks) shares practical strategies for using AI responsibly, helping developers harness its power without sacrificing quality or maintainability.

Clean code makes AI shine

In their early days, tools like GitHub Copilot mostly acted as advanced autocomplete assistants, predicting the next few lines of code. Today, AI has leveled up to agents that can tackle multi-step tasks – refactoring files, running tests, or even updating entire repositories.

AI agents can now fix failing tests, optimize dependencies, and even propose small architecture tweaks. Still, as Birgitta points out, these time-saving powers come with their own set of headaches:

Developers now need to give clearer context, define their goals more precisely, and double-check AI outputs with extra care.

Because these systems lack persistent memory, developers keep session notes or hand-offs to track project state. Birgitta Böckeler notes that AI assistants work best in modular, well-structured codebases where context and dependencies are clear.

In contrast, legacy or entangled systems often cause the AI to misinterpret relationships or overlook hidden dependencies. As a result, productivity improvements depend heavily on the specific context in which they are implemented.

Claims of 80% faster development rarely hold. AI speeds up small tasks, but big architecture, integrations, and testing still need human expertise.

AI can produce code fast, but it needs human oversight

Böckeler also addressed the growing gap between the hype surrounding AI and what it can actually do.

Many online demonstrations show AI building games or applications in mere minutes, but these impressive-looking outputs often exaggerate reality. In most cases, they produce only basic scaffolding or boilerplate code rather than fully functional, production-ready solutions, reminding developers that human oversight and refinement are still essential.

The quality of AI-generated code still depends on professional oversight, since trade-offs, compatibility concerns, and maintainability are inherently contextual and beyond the AI’s current reasoning capacity.

For example, an AI might correctly adjust a memory limit when a process fails, but it can miss deeper dependency conflicts. It may also merge methods incorrectly if compatibility rules are unclear or generate rigid test cases that complicate debugging instead of simplifying it.

Don’t blindly trust AI-generated code

To help developers navigate these realities, Böckeler proposed a useful mental model:

AI assistants should be treated like junior developers. They are fast, capable, and eager to help, but they can also be overconfident and prone to mistakes.

Understanding their limits is key; like mentoring a new team member, trust must be conditional and context-dependent. Blindly accepting AI-generated code can lead to subtle bugs and long-term maintainability issues.

The hidden pitfalls of AI-generated code

Drawing from her own experience, Böckeler emphasized several recurring pitfalls:

  • Superficial fixes: AI often suggests quick solutions that don’t address deeper architectural problems.
  • Problematic test cases: Generated tests can be too brittle or too vague, sometimes requiring as much debugging as the original code.
  • Reinforcing poor design: In messy or poorly structured systems, AI may perpetuate suboptimal design choices, increasing future maintenance costs.
  • Increased code churn: Studies show more rework is needed on AI-generated commits, often within weeks.
  • Unexpected debugging effort: Developers frequently spend more time fixing AI outputs than initially anticipated, highlighting the need for careful oversight and management.

This is why Böckeler recommends a proactive, disciplined approach: AI-generated code should never be accepted at face value but reviewed and thoroughly tested. Checkpoints and version control help roll back unwanted changes, and breaking complex tasks into smaller steps improves AI accuracy.

At the team level, quality control should remain a shared responsibility – automated tests and pull requests aren’t enough.

Monitoring quality metrics and integrating AI gradually helps prevent long-term risks to maintainability and security. Above all, expectations must remain realistic: AI cannot guarantee fixed productivity gains or eliminate the need for experienced developers.

The key lies in responsible use

In closing, Böckeler said that AI coding tools have become a permanent fixture in software development. They are robust, adaptable, and increasingly embedded in professional workflows, but their actual value depends on how responsibly they are used.

Developers must learn not only how to operate these tools, but also how to supervise, evaluate, and sustainably integrate them.

The challenge ahead lies not in automation itself, but in ensuring that it enhances productivity without compromising quality, maintainability, or team cohesion.

The post Treat Your AI Assistant Like an Overconfident Junior Developer appeared first on ShiftMag.


Scale Tiny Projects into a Resilient Data Culture


In today’s fast-paced business environment, the ultimate goal of any data effort is to enable better decisions and drive meaningful organizational outcomes. Too often, data initiatives fail because they treat data or “data culture” as the final product. However, the journey to a data-driven organization doesn’t have to start with massive, complex initiatives. Instead, leaders can strategically select and implement “tiny projects” that serve as stepping stones toward improving results. These small wins, rooted in principles of human-centered design, create momentum, secure buy-in for larger initiatives, and attract more collaborators along the way by focusing on tangible results, not just data collection.

Identifying and Scoping Tiny Projects: Starting with Empathy

The first step in this journey is to identify potential tiny projects that align with your organization’s goals. Crucially, this stage is driven by empathy, the foundational principle of human-centered design (HCD), which means putting the needs and experiences of the people—the users—at the center of the solution.

These projects should be manageable in scope but impactful enough to demonstrate value.

Here are some tips for selecting the right projects:

Focus on pain points (the empathy phase)

Look for areas within your organization where data could alleviate existing challenges. For example, a marketing team might struggle to analyze customer feedback effectively. A tiny project could involve using data analytics to identify key themes in customer sentiment from recent campaigns. This user-driven starting point ensures the solution is relevant and immediately valued.

Leverage existing resources

Consider projects that utilize tools and data already available within your organization. This approach minimizes costs and reduces the time needed for implementation. For instance, a sales team could analyze historical sales data to identify trends and improve forecasting. A great example of this is a project where a team of three—a data analyst, a policy advisor, and a communications staff member—identified over $4M in savings for a major American city. They simply used existing, albeit “dirty,” data to find cost reductions in postal charges.

Set clear objectives

Define specific, measurable goals for each tiny project. This clarity will help teams understand what success looks like and keep them focused. For example, if the goal is to reduce customer churn, aim for a specific percentage reduction within a set time frame.

Showcasing Wins to Build Momentum: Testing and Iteration

Once you’ve identified and scoped your tiny projects, the next step is to execute them effectively and showcase the wins. Celebrating small successes is crucial for building momentum and gaining support for future initiatives. In HCD terms, these tiny projects are rapid prototypes designed for quick testing and feedback.

Here’s how to do it:

Communicate results

Share the outcomes of your tiny projects with the broader organization. Use visual aids like dashboards or infographics to present data in an engaging way. Highlight not just the quantitative results, but also the qualitative benefits, such as improved team collaboration or enhanced customer satisfaction.

Gather testimonials (validating the prototype)

Encourage team members involved in the projects to share their experiences. Personal stories about how data-driven decisions made a difference can resonate more deeply than numbers alone. These testimonials provide qualitative feedback to validate the solution’s impact, illustrating the value of a data culture to skeptics. A powerful example of this is a team of four from a major metro area—including an HR person for the police department, a data analyst, a program manager, and a police officer—who, in less than two days, identified several constraints in their police department’s diversity hiring practices. Using only a small data set, post-it notes, and pens, they leveraged their collective knowledge and experience. Their results were shared with law enforcement leadership and led to direct policy and communication changes.

Create a feedback loop (continuous improvement)

After completing a tiny project, gather feedback from participants and stakeholders. This input can help refine future projects and demonstrates a commitment to continuous improvement, which is central to the iterative nature of HCD. It also fosters a sense of ownership among team members, encouraging them to engage in future initiatives.

Securing Buy-In for Larger Initiatives: Scaling the Design

As you build momentum with tiny projects, you’ll find it easier to secure buy-in for larger data initiatives. The successful prototypes created through the small projects provide the evidence needed to support scaling.

Here are some strategies to help you gain support:

Align with organizational goals

When proposing larger projects, ensure they align with the broader objectives of the organization. Demonstrating how these initiatives can drive strategic goals will make it easier to gain leadership support.

Showcase scalability

Use the successes of tiny projects to illustrate how larger initiatives can build on these foundations. For example, if a small project successfully improved customer insights, propose a larger initiative that expands this analysis across multiple customer segments.

Engage stakeholders early

Involve key stakeholders in the planning stages of larger initiatives. Their input can help shape the project and increase their investment in its success. This collaborative approach fosters a sense of shared ownership and commitment.

Attracting More Collaborators: Designing the Experience

As your organization begins to embrace a data-first culture, you’ll naturally attract more collaborators. It’s not just about a top-down mandate; it’s about creating an environment where people want to be involved. This is where human-centered design is applied to the process itself, making participation intrinsically rewarding.

Here’s how to encourage participation and make your data projects a magnet for talent:

Create cross-functional teams

Encourage collaboration across departments by forming cross-functional teams for data projects. This diversity of perspectives can lead to more innovative solutions and a stronger sense of community.

Offer training and resources

Provide training sessions and resources to help employees feel more comfortable with data tools and analytics. When team members feel equipped to contribute, they’re more likely to engage in data initiatives.

Celebrate collaboration

Recognize and reward collaborative efforts within your organization. Highlighting team achievements reinforces the value of working together and encourages others to join in.

Best Practices for Fostering a Collaborative Environment: HCD in Action

To truly make your data projects a success, you need to set up the right conditions for collaboration. The best results often come from casual, no-pressure environments where a diverse group of people can work together effectively.

Let participants inform their tiny project challenge (user agency)

A powerful way to spark collaboration is to let participants define and choose their own data problem topics. This aligns with the HCD principle of cocreation, instantly building synergy and a shared sense of purpose. It often reveals that people from different departments, many of whom have never met, are facing the exact same challenge from different perspectives. They are often overjoyed to find a kindred spirit to collaborate and innovate with on a solution.

Optimize for interaction by balancing in-person and virtual collaboration

While the digital tools supporting remote work have expanded reach and accessibility, the choice of collaboration method for tiny projects is critical. In-person collaboration remains the most effective way to foster rapid, creative problem-solving: being in the same room allows for spontaneous brainstorming, an immediate shared sense of energy, and the ability to read nonverbal cues, all of which accelerate the HCD empathy and ideation phases. Its strengths are speed, depth of connection, and cocreation quality. Virtual collaboration, by contrast, offers lower cost, greater geographic diversity, and increased participant accessibility, which can be invaluable for gathering a wider range of data perspectives. For truly tiny, complex, or urgent problem-solving, prioritize the high-bandwidth interaction of in-person settings, but leverage virtual tools for asynchronous check-ins, data sharing, and wider organizational inclusion.

Cultivate a “freedom to fail” mindset (psychological safety)

Explicitly state that this is a no-pressure environment where experimentation is encouraged. When people aren’t afraid of making mistakes, they are more willing to try new ideas, challenge assumptions, and learn from what doesn’t work. This psychological safety is crucial for rapid iteration and innovation, the hallmarks of effective HCD.

Ensure a diverse mix of people

A successful project isn’t just about data and technology. Bring together a highly diverse range of people from different departments, with varying levels of experience, and from a variety of disciplines. A project team that includes an HR person, a police officer, a data analyst, and a program manager can uncover insights that a homogeneous group never would.

Design for active collaboration (experiential design)

Move beyond traditional conference room setups. Create a comfortable environment that is suitable for active collaboration. This means having space to stand up, walk around, and use whiteboards or walls for posting ideas. Getting people out from behind their laptops encourages dynamic interaction and shared focus, as HCD principles apply to designing the process experience itself.

Provide healthy food and drinks

Simple as it may seem, offering readily available, healthy, and tasty food and beverages makes a huge difference. It removes a minor distraction, signals that the organization values the team’s time, and fosters a more relaxed, communal atmosphere.

The Value Proposition for Collaborators: Designing for Intrinsic Motivation

The true secret to attracting collaborators isn’t just about providing resources—it’s about making the process personally and professionally rewarding. Tiny projects are an excellent way to do this because they’re inherently fun and self-edifying, and often lead to quick, visible success.

When projects are small and have a clear, rapid path to a solution, people are more willing to participate. They see it as a low-risk opportunity to experiment and have some fun. This is a chance to step away from their regular duties and engage in a different kind of problem-solving. This shift in mindset can be a refreshing and enjoyable experience.

Beyond the enjoyment, tiny projects offer a chance for personal and professional growth. Team members get to learn from their peers in different departments, gaining new skills and perspectives. It’s a form of on-the-job training that is far more engaging and relevant than a traditional workshop. They feel a sense of self-edification as they solve a real-world problem and gain confidence in their abilities.

Finally, these projects are often wildly, visibly, and rapidly successful. Because the scope is small, teams can quickly deliver tangible results. A project that saves a city millions of dollars or leads to direct policy changes in a police department in less than two days is a powerful story.

These successes are great for the organization, but they’re also a massive win for the individuals involved. They get to demonstrate their expertise and showcase the value they can add beyond their job description. This visibility and recognition are powerful motivators, encouraging people to participate in future projects because they want to have fun, be successful, and add value again.

You don't have to run many tiny projects to see the effect. The personal benefits (the fun, the learning, the rapid success) become organizational cultural values that spread quickly to other individuals and parts of the organization. It's a compounding positive feedback loop that transforms a data culture, one small, successful project at a time.

Scaling a Data-First Culture

Ultimately, the goal is to scale a data-first culture that extends beyond individual projects. By starting with tiny projects as HCD prototypes, showcasing wins as validated solutions, securing buy-in, and attracting collaborators through a well-designed process, organizations can create a sustainable environment where data-driven decision-making thrives.

As you embark on this journey, remember that building a resilient data culture is a marathon, not a sprint. Each tiny project is a step toward a larger vision, and with each success, you’ll be laying the groundwork for a future where data is at the heart of your organization’s strategy. Embrace the process, celebrate the wins, and watch as your data culture flourishes.




Advice for Writing Maintainable Python Code


What are techniques for writing maintainable Python code? How do you make your Python more readable and easier to refactor? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder’s Weekly articles and projects.

We discuss a recent article about writing code that is easy to maintain. We cover writing comments, creating meaningful names, avoiding magic numbers, and preparing code for your future self.

We also share several other articles and projects from the Python community, including release news, modifying the REPL, differences between Polars and pandas, generating realistic test data in Python, investigating quasars with Polars and marimo, creating simple meta tags for Django objects, and a GUI toolkit for grids of buttons.

Course Spotlight: Modern Python Linting With Ruff

Ruff is a blazing-fast, modern Python linter with a simple interface that can replace Pylint, isort, and Black—and it’s rapidly becoming popular.

Topics:

  • 00:00:00 – Introduction
  • 00:01:53 – PyTorch 2.9 Release
  • 00:02:38 – Django 6.0 Beta 1
  • 00:03:05 – Handy Python REPL Modifications
  • 00:11:06 – Polars vs pandas: What’s the Difference?
  • 00:17:55 – Faker: Generate Realistic Test Data in Python
  • 00:22:06 – Video Course Spotlight
  • 00:23:35 – Investigating Quasars With Polars and marimo
  • 00:27:37 – Writing Maintainable Code
  • 00:49:48 – buttonpad: GUI Toolkit for Grids of Buttons
  • 00:52:10 – django-snakeoil: Simple Meta Tags for Django Objects
  • 00:54:07 – Thanks and goodbye

Show Links:

  • Handy Python REPL Modifications – Trey uses the Python REPL a lot. In this post, he shows you his favorite customizations to make the REPL even better.
  • Polars vs pandas: What’s the Difference? – Discover the key differences in Polars vs pandas to help you choose the right Python library for faster, more efficient data analysis.
  • Faker: Generate Realistic Test Data in Python – If you want to generate test data with specific types (bool, float, text, integers) and realistic characteristics (names, addresses, colors, emails, phone numbers, locations), Faker can help you do that.
  • Investigating Quasars With Polars and marimo – Learn to visualize quasar redshift data by building an interactive marimo dashboard using Polars, pandas, and Matplotlib. You’ll practice retrieving, cleaning, and displaying data in your notebook. You’ll also build interactive UI components that live-update visualizations in the notebook.

Discussion:

  • Writing Maintainable Code – “Maintainable code can easily be the difference between long-lived, profitable software, and short-lived money pits.” Read on to see just what maintainable code is and how to achieve it.

Download audio: https://dts.podtrac.com/redirect.mp3/files.realpython.com/podcasts/RPP_E273_02_PyCoders.c4030c27e5a2.mp3