Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

As an Engineering Manager, I couldn’t ignore AI if my teams are to survive


Even with years of experience in leadership and change management, I couldn’t escape the familiar phases of change, this time with AI adoption. And that’s natural. Whenever something new arises, our first instinct is fear, especially when it could solve years of “if onlys.”

Yes, the tech industry is in a storm, and being an engineering lead has never been harder. Everyone is looking to you for answers, direction, and vision.

At first, I was in denial, thinking AI was for someone with fewer responsibilities. But then I realized the real question was:

What can I do to help my teams adapt to AI?

The answer was simple: push myself through the change and proactively lead my team’s transformation. Needless to say, it wasn’t easy.

Chapter 1: Denial

Even without an assistant, I am a top performer; I do not need to perform better.

Last year, AI tools arrived in our workspaces, but most of us were still in denial. They made for good morning coffee talk, but using them daily? That felt unreal.

Yeah, right. That will not happen. It’s not that I don’t want to learn something new, but I already have a ton on my plate: five teams to lead, acting as PM for a few products, and coaching a few prospective managers. On top of that, there are parallel technical initiatives I need to oversee. I simply don’t have time to play with shiny new tools; that’s for people not juggling ten parallel topics.

You can see how easy it is to fall into the trap of your own perspective. The painful truth is: even if you keep delivering at your current pace, without adopting AI tools, you won’t be able to keep up in a few months.

How can you tell if you’re stuck in denial?

Start by asking yourself three crucial questions:

  1. Did I hear about a change? New trends in the industry? Internal reorganization? Anything that could be classified as a disruption topic? Actively listen to all sources of information.
  2. Am I feeling like the ‘only one who is overstretched, with no time for anything’? It could mean you’re blinded by your own perspective.
  3. Am I criticizing the change – openly or secretly? Together with the two above, a possible diagnosis is denial.

You don’t want to stay in this state for too long. As a proactive leader, your next step should be to take the time to investigate.

Chapter 2: Anger

I don’t want you! You are not my assistant!

And suddenly you realize the change is real. Your doubt becomes reality. Every topic is now an AI topic, and it’s irritating. It should feel new and interesting, but you feel pushed into it, without the chance to choose. Already overstretched, your natural response is another primary feeling: anger.

I could sense that state of mind in many of my peers and engineers over the last year. It’s not easy to force yourself into a positive mindset instantly.

The message I want to share is this: it’s okay to feel negative, but staying in that state too long can undermine your results. Spend as little time as possible in it, learn to let it go, and give it a try.

Chapter 3: Bargaining

Ok, let’s see what I can really do.

Keeping an open mind is valuable for anyone, no matter their rank or role. As an engineer, rolling up your sleeves should be natural.

So, beyond trying it out of curiosity, I decided to dive deeper: explore the architecture and put the tools to work. Engineers usually transition quickly, but if they get stuck, you can help by highlighting the positive aspects. I used the opportunity to innovate and learn as a positive hook.

Chapter 4: Depression 

I need to relearn everything again – more work for human me… again.

Reality strikes. 

I’ve opened a new Pandora’s box. So much to adapt to, while still maintaining old performance. AI should help me, not add more work. Will I ever escape this loop?

Leading through this phase is about providing support and reminding people of the positive outcomes.

Chapter 5: Acceptance 

We’re friends now. Sorry I was so mean before.

With knowledge comes acceptance.

Yes, that was a big change, but I found a couple of good use cases quickly. Claude Code amazed me, and I even saw a few valid Copilot use cases, even though I despised it at first. I started thinking about all the cases I could explore, and my inner engineer took over.

Now it’s easy to bring others on board and help them through the change. Stay transparent: share the doubts you’ve faced and show your human side.

So, how does a true leader approach AI?

Remember: ignoring changes around you is risky for any organization. It’s natural to fall into denial, but as a leader, it’s crucial to recognize it and take action.

Being aware of the steps that individuals, teams, or the organization need to push through (and helping them do it) is a key leadership skill.

The post As an Engineering Manager, I couldn’t ignore AI if my teams are to survive appeared first on ShiftMag.


Using ACP + Deep Agents to Demystify Modern Software Engineering


This guest post comes from Jacob Lee, Founding Software Engineer at LangChain, who set out to build a coding agent more aligned with how he actually likes to work. Here, he walks through what he built using Deep Agents and the Agent Client Protocol (ACP), and what he learned along the way.

I’ve come to accept that I will delegate an ever-increasing amount of my work as a software engineer to LLMs. I was an early Claude Code superfan, and though my ego still tells me I can write better code situationally than Anthropic’s proto-geniuses in a data center, these days I’m mostly making point edits and suggestions rather than writing modules by hand.

This shift has made me far more productive, but I’ve become increasingly uncomfortable with blindly turning over such a big part of my job to an opaque third party. While training my own model was out of the question for many obvious reasons (and model interpretability is an unsolved problem anyway), the agent harness and UX on top of it is just software, and software IS something I understand. So when I had some free time during my paternity leave, I took a stab at building some tooling to my own specifications.

I work at a startup called LangChain, where we’ve been developing our own set of open-source agentic building blocks, and I settled on building an adapter between our Deep Agents framework and Agent Client Protocol (ACP). My goal was just to build a bespoke coding agent that fit my workflows, but the results were better than I expected. Over the past few months, it’s completely replaced Claude Code as my daily driver, with the added benefit of full observability into my agent’s actions by running LangSmith on top. In this post, I’ll cover how it works and how to set it up for yourself!

Why an IDE + ACP instead of a terminal + TUI?

If you’re not familiar with ACP, it’s an open protocol that defines how a client (most often used with IDEs like WebStorm or Zed) interacts with AI agents. It allows you to do cool things like quickly pass a coding agent the exact context you’re looking at in an IDE.

I’ve gotten quite used to being productive in IDEs over my decade writing software professionally, and I still find them valuable for a few reasons:

  • I do still edit code by hand occasionally. Most often, these are small edits I can make faster than explaining the problem to an agent, or because I can do something in parallel alongside a running agent, like adding debug statements, but this still provides some alpha.
  • IDEs are fantastic interfaces for viewing code in context. I most often use this to understand the general scope of a problem before prompting, or to self-review my current branch, but it’s also often just faster for me to point the agent at a file rather than asking it to grep around.

I previously used Claude Code in a separate terminal pane in an IDE, which worked but always felt like two disconnected tools. In JetBrains IDEs, the agent lives in a native tool window with tight integration. I can @mention the file or block of code I’m currently looking at, and many of my threads are littered with messages like “Take a look at this. Does it look funny? @thisFile”.

How it works

The agent

Though I could have created the various pieces for my agent from scratch, Deep Agents provided a good, opinionated starting point, providing the following:

  • Tools around interacting with the filesystem (read/write/edit_file, ls, grep, etc.).
  • Shell access, which allows the agent to run verifications like lint, tests, and more.
  • A write_todos tool, which encourages the agent to take a planning step that breaks work into steps and tracks progress.
    • In practice, this makes a big difference for longer refactors to keep the agent focused.
  • Capabilities around spawning isolated sub-agents for parallel or compartmentalized work.
    • Each one gets its own context, runs independently, and reports back, keeping the model’s context window manageable.
  • Other important UX features like streaming, cancellation, prompt caching, and context summarization.

I also added some custom middleware that appends information about the current project setup in the system prompt, such as the current directory open in the IDE, whether a git repo was present, package manager detection, and more.
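The middleware itself doesn’t need to be complicated. Here’s a self-contained sketch of the idea; the detection rules and tag names are my own illustrative assumptions, not the actual middleware:

```python
# Sketch of project-context middleware: gather facts about the open
# project and append them to the agent's system prompt. The markers
# checked here are illustrative assumptions.
import os

def project_context(root: str) -> str:
    facts = [f"Current directory: {root}"]
    if os.path.isdir(os.path.join(root, ".git")):
        facts.append("A git repository is present.")
    # Detect the package manager from common lockfiles.
    for marker, manager in [
        ("package-lock.json", "npm"),
        ("yarn.lock", "yarn"),
        ("uv.lock", "uv"),
        ("poetry.lock", "poetry"),
    ]:
        if os.path.exists(os.path.join(root, marker)):
            facts.append(f"Package manager: {manager}")
    return "\n".join(facts)

def with_project_context(system_prompt: str, root: str) -> str:
    # Appended once per session so the agent knows its environment.
    return (
        system_prompt
        + "\n\n<project_setup>\n"
        + project_context(root)
        + "\n</project_setup>"
    )
```

Because the context is computed at session start rather than baked into a static prompt, the same agent adapts to whichever project the IDE currently has open.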

It’s also possible to add skills, tweak the system prompt, add custom tools or MCP servers, and more, directly in Python, rather than having to create a new CLI config option.

The ACP adapter

After deciding on a basic agent setup, I needed to hook that agent into the client via ACP. I created an adapter that implements the ACP interface and handles the session lifecycle, message routing, model switching, and streaming.

One nice surprise was how cleanly the agent’s capabilities mapped onto ACP concepts.

For example:

  • The agent’s planning step (write_todos) maps naturally to agent plans in ACP.
  • Interrupts from the agent (e.g. “I want to run this command”) map to permission requests.
  • Threads and session persistence were nearly 1:1 with Deep Agents checkpointers.
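For instance, the first mapping is mostly a reshaping exercise. A rough sketch, with field names approximating ACP’s plan update (check the protocol spec for the canonical shape):

```python
# Translate a Deep Agents write_todos payload into an ACP-style plan
# update. Field names approximate ACP's session/update "plan"
# notification; treat the exact shape as an assumption.
def todos_to_acp_plan(todos: list[dict]) -> dict:
    status_map = {
        "pending": "pending",
        "in_progress": "in_progress",
        "completed": "completed",
    }
    return {
        "sessionUpdate": "plan",
        "entries": [
            {
                "content": todo["content"],
                "priority": "medium",  # write_todos carries no priority
                "status": status_map.get(todo.get("status", "pending"), "pending"),
            }
            for todo in todos
        ],
    }
```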

This meant I didn’t need to invent much glue logic – the protocol already had good primitives for most of what I wanted. The overall agent runner looks roughly like this, minus the tool call and message formatting:

current_state = None
user_decisions = []
while current_state is None or current_state.interrupts:
    # Check for cancellation
    if self._cancelled:
        self._cancelled = False  # Reset for next prompt
        return PromptResponse(stop_reason="cancelled")

    async for stream_chunk in agent.astream(
        Command(resume={"decisions": user_decisions})
        if user_decisions
        else {"messages": [{"role": "user", "content": content_blocks}]},
        config=config,
        stream_mode=["messages", "updates"],
        subgraphs=True,
    ):
        if stream_chunk.__interrupt__:
            # If Deep Agents interrupts, request next actions from
            # the client via ACP's session/request_permission method
            user_decisions = await self._handle_interrupts(
                current_state=current_state,
                session_id=session_id,
            )
            # Break out of the current Deep Agent stream. The while
            # loop above resumes it with the user decisions
            # returned from the session/request_permission method
            break

        # ...translate LangGraph output into ACP.
        # Tools that do not require interrupts are called
        # internally; their results are just streamed back here as well

    # current_state.interrupts is empty once the agent has finished,
    # which ends the while loop above
    current_state = await agent.aget_state(config)

return PromptResponse(stop_reason="end_turn")

The human-in-the-loop flow was where I spent the most time. When the agent wants to run a shell command or make a file edit that requires approval, the adapter intercepts the interrupt from Deep Agents, and depending on what permissions mode the user has selected and what they have previously approved, either resumes immediately or sends a permission request to the IDE with options to approve, reject, or always-allow that command type.

The always-allow is session-scoped – if you approve uv sync once and choose “always allow”, subsequent uv sync calls skip the prompt automatically, but I made efforts to prevent similar commands such as uv run script.py from bypassing the permission check.
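The core of that session-scoped allow-list can be sketched as a set keyed on the exact command string, so near-matches still go through the permission check (an illustration of the idea, not the adapter’s actual code):

```python
# Session-scoped "always allow" sketch: approvals are keyed on the
# exact command string, so allowing "uv sync" does not let
# "uv run script.py" bypass the permission check.
class PermissionCache:
    def __init__(self) -> None:
        self._always_allowed: set[str] = set()

    def needs_prompt(self, command: str) -> bool:
        # Anything not explicitly always-allowed triggers a new
        # session/request_permission round trip.
        return command not in self._always_allowed

    def record(self, command: str, decision: str) -> None:
        if decision == "always_allow":
            self._always_allowed.add(command)
```

The cache lives on the session object, so closing the session drops all approvals, which is a deliberately conservative default.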

Here’s how the end result looks in WebStorm:

How it went

While I haven’t run formal evals, I was pleasantly surprised by how well my agent performed after only a few iterations. I didn’t actually expect to switch away from Claude Code, and it was a great dogfooding exercise as well, since our OSS team was able to upstream some of my feedback back into Deep Agents itself.

My original goal of regaining code-level, rather than config-level, control over my daily workflows has also been great. When Anthropic had an outage a few weeks ago, I was able to switch over to OpenAI’s gpt-5.4 without skipping a beat, and I even found that it had some interesting quirks. I switch back and forth between models mid-session to gain different perspectives from each model when working on tricky tasks, and have also found open-source models like GLM-5 are quite capable while offering significant cost savings.

Another boon is observability via LangSmith tracing, which allows me to debug and improve my agent when I run into issues. Being able to see exactly what context was passed to the model, which tools it called, and where it went sideways helped me understand behaviors that were previously hidden inside the harness. Here’s an example of what such a trace looks like:

For example, when I noticed that my agent was starting to take wide, slow sweeps of my filesystem, I used a trace to find a bug in my system prompt that told the agent the project was at the filesystem root rather than the current working directory.

Taking back your dev workflows for fun and profit

What started as a small late-night project, squeezed in around caring for my newborn daughter, turned into a huge success, both for my own understanding of agent behavior and for improving my daily workflow.

It proved to me that Claude Code isn’t magic but a bundle of very clever tricks rolled up into a neat package. The harness layer is just software, and software is something any developer can shape to fit how they want to work.

If you’re curious, I’d highly recommend trying an experiment like this yourself. Even a small prototype can teach you a lot about how these systems think and where they break. Clone the repo and follow the setup guide here to get started from source code. I’d love to know what you think. You can reach out to me on X @Hacubu to let me know!

Special thanks to @veryboldbagel and @masondxry for helping productionize the adapter and dealing with my unending questions and feedback!


Junie CLI Now Connects to Your JetBrains IDE


Until now, Junie CLI has worked like any other standalone agent. It was powerful, but disconnected from the workflows you set up for your specific projects. That changes today.

Junie CLI can now connect to your running JetBrains IDE and use its full code intelligence, including the indexing, semantic analysis, and tooling you already rely on. The agent works with your IDE the same way you do. It sees what you see, knows what you’ve been working on, and uses the same build and test configurations you’ve already set up.

No manual setup is required – Junie CLI detects your running IDE automatically. If you have a JetBrains AI subscription, everything works out of the box.

What Junie can do with your IDE

Most AI coding agents operate in isolation. They read your files, guess at your project structure, and attempt to run builds or tests without full context. This can work for simple projects, but it falls apart in real-world codebases, such as monorepos with complex build configurations, projects with hundreds of modules, or test setups that took your team weeks to get right.

Junie doesn’t guess. It asks your IDE, which gives it the power to:

Understand your context

Junie sees what you’re working on right now – which file is open, what code you’ve selected, and which builds and tests you’ve run recently. Instead of scanning your entire repository to understand what’s relevant, it starts with the same context you have.

Run tests without guessing

On a monorepo or any project with a non-trivial test setup, Junie uses the IDE’s pre-configured test runners – no guessing at commands and no broken configurations.

Refactor with precision

When Junie renames a symbol, it uses the IDE’s semantic index to find every usage – searching across files, respecting scope, and handling overloads and variables with the same name that appear in different contexts. This is the kind of refactoring that text-based search gets wrong.

Build and debug complex projects

Junie runs builds and tests using your existing IDE configurations.

Custom build commands, non-obvious test runners, cross-compilation targets – if your IDE understands them, Junie does too.

Use semantic code navigation

From the IDE’s index, Junie accesses the project structure without reading files line by line. Its synonym-aware search finds “variants” when you search for “options”. It navigates code the way you would, not the way grep does.

Installation

Junie CLI’s IDE integration works in all JetBrains IDEs. Support for Android Studio is coming soon.

Make sure your JetBrains IDE is running, then launch Junie CLI in your project directory. It will automatically detect the IDE and prompt you to install the integration plugin. One click, and you’re connected.

If you’re a JetBrains AI subscriber, authentication is automatic, while Bring Your Own Key (for Anthropic, OpenAI, etc.) is also fully supported.

What’s next

This integration is currently in Beta. We’re actively expanding the capabilities Junie can access through your IDE, and your feedback will directly shape what comes next.

Try it out, and let us know what you think.


How WordPress 7.0 Is Building the Foundation for AI-Powered Sites


Until now, every WordPress plugin that integrated AI had to build its own foundations. 

The upcoming WordPress 7.0 changes this by introducing a shared infrastructure that supports how AI works across sites.

AI tools can now discover what a site can do, access AI services through a consistent layer, and trigger actions across plugins without requiring custom integration code for every combination.

The Abilities API: Defining what a site can do

WordPress 6.9 introduced the Abilities API as one of the first foundational pieces of this infrastructure. It gives plugins a standard way to register their capabilities in one place.

Instead of each plugin building its own custom integration, it declares what it exposes: 

  • Specific actions
  • The inputs those actions need
  • What they return
  • Which permissions are required

Those capabilities become discoverable through REST endpoints or the Model Context Protocol (MCP) Adapter.

This means automation tools and AI assistants can interact with WordPress without needing custom code for every plugin. A tool such as Zapier or an AI assistant such as Claude reads what’s available and acts on it.

A practical example: WooCommerce can register capabilities such as updating stock status, retrieving order data, or modifying product attributes. An AI assistant connecting to that site discovers those capabilities automatically. It doesn’t need a bespoke WooCommerce integration — it works with what the plugin has declared.
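To make the discovery step concrete, here is a rough sketch of how an external tool might read a site’s declared abilities over REST. The endpoint path is an assumption for illustration; consult the Abilities API documentation for the canonical route.

```python
# Sketch: discovering a WordPress site's registered abilities over
# REST. The route below is an illustrative assumption, not the
# documented path.
import json
from urllib.request import urlopen

def fetch_abilities(site: str) -> list[dict]:
    # e.g. fetch_abilities("https://example.com")
    with urlopen(f"{site}/wp-json/wp/v2/abilities") as resp:
        return json.load(resp)

def ability_names(abilities: list[dict]) -> list[str]:
    # Each registered ability declares a name alongside its inputs,
    # outputs, and required permissions.
    return [a["name"] for a in abilities]
```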

The WordPress AI Client: One connection to AI models

Before the WordPress AI Client, every plugin that wanted to use AI handled its own integration. Authentication, request formatting, response parsing — all built from scratch, again and again.

The AI Client introduces a shared interface for interacting with AI models. Plugins send prompts through one consistent layer, regardless of the provider.

WordPress 7.0 introduces the Connectors API alongside it. This is a system for managing connections to external services. It also adds a Connectors screen where site owners can configure AI providers in one place. Once configured, those connections are available across plugins without needing additional setup.

Screenshot of the WordPress 7.0 AI Connectors page.

This makes AI interactions composable across plugins. 

A workflow can span multiple tools, such as retrieving WooCommerce product data and passing it through an AI model to generate descriptions, without custom glue code holding it together. 

For developers, this means no more rebuilding the same integrations. For site owners, it means configuring AI once and using it everywhere.

The MCP Adapter: Connecting to external AI tools

MCP is an open standard for how AI assistants communicate with external tools. The WordPress MCP Adapter implements that protocol for WordPress, exposing registered abilities as tools that any MCP-compatible client can discover and call.

The adapter ships separately from WordPress core and was available prior to 7.0, but it becomes significantly more useful with the new AI infrastructure in place. 

Once connected, tools such as Claude, ChatGPT, or Gemini can see what your site can do and trigger actions directly.

This opens up workflows that would previously have required significant manual work or custom scripting, such as:

  • Batch-update hundreds of posts with a single natural language command.
  • Find all WooCommerce products with inconsistent attributes and standardize them at scale.
  • Query order data to identify top-performing products or spot return trends.

How the pieces fit together

Each component handles a different part of the problem. The Abilities API defines what actions a site can perform. The AI Client connects plugins to AI models. The MCP Adapter exposes those actions to external AI assistants.

Here’s what it might look like in a real workflow:

  1. An AI assistant retrieves posts from WordPress.
  2. An AI model analyzes the content.
  3. The assistant triggers an ability to update the metadata.

Each step uses shared infrastructure. This makes these workflows reusable and composable across the ecosystem rather than locked inside a single plugin.

How this works on WordPress.com

On WordPress.com, this infrastructure is already in place.

Site owners can use the AI Assistant directly in the editor and Media Library to create and rewrite content, adjust layouts, generate images, and more. You can also connect the WordPress site to Claude to analyze content, identify gaps, generate ideas, and push updates back to your site.

Screenshot of the WordPress AI Assistant

For development, WordPress Studio provides a local environment where you can use tools such as Claude Code to build and test plugins, themes, and custom functionality. Telex extends this further, letting you generate blocks and themes from prompts and add them to your site.

Screenshot of the WordPress AI Website Builder.

The bottom line

The AI infrastructure in WordPress 7.0 is making AI-powered plugins and workflows possible at scale.

The Abilities API and AI Client are at the core of that shift — a shared infrastructure that gives the entire ecosystem something consistent to build on. 

Together, they represent a meaningful step toward creating a world where WordPress doesn’t just support AI workflows but actively enables them.






Give your Foundry Agent Custom Tools with MCP Servers on Azure Functions

1 Share

This blog post is for developers who have an MCP server deployed to Azure Functions and want to connect it to Microsoft Foundry agents. It walks through why you’d want to do this, the different authentication options available, and how to get your agent calling your MCP tools.

Connecting to Foundry agent through OAuth Identity Passthrough demo

Connect your MCP server on Azure Functions to Foundry Agent

Azure Functions is a great place to host remote MCP servers with scalable infrastructure, built-in auth, and serverless billing. But hosting an MCP server is only half the picture. The real value comes when something actually uses those tools.

Microsoft Foundry lets you build AI agents that can reason, plan, and take actions. By connecting your MCP server to an agent, you’re giving it access to your custom tools, whether that’s querying a database, calling an API, or running some business logic. The agent discovers your tools, decides when to call them, and uses the results to respond to the user.

Why connect MCP servers to Foundry agents?

You might already have an MCP server that works great with VS Code, Visual Studio, Cursor, or other MCP clients. Connecting that same server to a Foundry agent means you can reuse those tools in a completely different context. For example, in an enterprise AI agent that your team or customers interact with. No need to rebuild anything. Your MCP server stays the same; you’re just adding another consumer.

Prerequisites

Before proceeding, make sure you have the following:

  1. An MCP server deployed to Azure Functions. If you don’t have one yet, you can deploy one quickly by following one of the samples:
  2. A Foundry project with a deployed model and a Foundry agent

Authentication options

Depending on where you are in development, you can pick what makes sense and upgrade later. Here’s a summary:

  • Key-based (default): The agent authenticates by passing a shared function access key in a request header. This is the default authentication for HTTP endpoints in Functions. Use during development, or when the MCP server doesn’t require Microsoft Entra authentication.
  • Microsoft Entra: The agent authenticates using either its own identity (agent identity) or the shared identity of the Foundry project (project managed identity). Use agent identity for production scenarios; limit the shared identity to development.
  • OAuth identity passthrough: The agent prompts users to sign in and authorize access, then uses the resulting token to authenticate. Use in production when each user must authenticate with their own identity and user context must be persisted.
  • Unauthenticated access: The agent makes unauthenticated calls. Use during development, or when your MCP server accesses only public information.

Connect your MCP server to your Foundry agent

If your server uses key-based auth or is unauthenticated, it should be relatively straightforward to set up the connection from a Foundry agent.

Microsoft Entra and OAuth identity passthrough require extra setup. Check out the detailed step-by-step instructions for each authentication method.

At a high level, the process looks like this:

  1. Enable built-in MCP authentication: When you deploy a server to Azure Functions, key-based auth is the default. You’ll need to disable that and enable built-in MCP auth instead. If you deployed one of the sample servers in the Prerequisites section, this step is already done for you.
  2. Get your MCP server endpoint URL: For MCP extension-based servers, it’s https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/mcp.
  3. Get your credentials based on your chosen auth method: A managed identity configuration, OAuth credentials, or a function key.
  4. Add the MCP server as a tool in the Foundry portal by navigating to your agent, adding a new MCP tool, and providing the endpoint and credentials.
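For the key-based method, the agent’s requests simply carry the shared key in the standard x-functions-key header. A minimal sketch of what such a request looks like from Python (the URL shape follows step 2; the header is standard Azure Functions behavior):

```python
# Sketch: building a request to a key-protected Azure Functions MCP
# endpoint. The x-functions-key header carries the shared function
# access key.
from urllib.request import Request

def mcp_request(function_app: str, key: str, body: bytes) -> Request:
    url = f"https://{function_app}.azurewebsites.net/runtime/webhooks/mcp"
    return Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "x-functions-key": key,  # shared function access key
        },
        method="POST",
    )
```

The other methods replace this header with an Authorization bearer token issued by Microsoft Entra, but the endpoint URL stays the same.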

Microsoft Entra connection required fields:

  • Name: A unique identifier for your MCP server. You can use your function app name. Example: contoso-mcp-tools
  • Remote MCP Server endpoint: The URL endpoint for your MCP server. Example: https://contoso-mcp-tools.azurewebsites.net/runtime/webhooks/mcp
  • Authentication: The authentication method to use. Example: Microsoft Entra
  • Type: The identity type the agent uses to authenticate. Example: Project Managed Identity
  • Audience: The Application ID URI of your function app’s Entra registration. This value tells the identity provider which app the token is intended for. Example: api://00001111-aaaa-2222-bbbb-3333cccc4444

OAuth identity passthrough required fields:

  • Name: A unique identifier for your MCP server. You can use your function app name. Example: contoso-mcp-tools
  • Remote MCP Server endpoint: The URL endpoint for your MCP server. Example: https://contoso-mcp-tools.azurewebsites.net/runtime/webhooks/mcp
  • Authentication: The authentication method to use. Example: OAuth Identity Passthrough
  • Client ID: The client ID of your function app’s Entra registration. Example: 00001111-aaaa-2222-bbbb-3333cccc4444
  • Client secret: The client secret of your function app’s Entra registration. Example: abcEFGhijKLMNopqRST
  • Token URL: The endpoint your server app calls to exchange an authorization code or credential for an access token. Example: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token
  • Auth URL: The endpoint where users are redirected to authenticate and grant authorization to your server app. Example: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/authorize
  • Refresh URL: The endpoint used to obtain a new access token when the current one expires. Example: https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token
  • Scopes: The specific permissions or resource access levels your server app requests from the authorization server. Example: api://00001111-aaaa-2222-bbbb-3333cccc4444/user_impersonation

Once the server is configured as a tool, test it in the Agent Builder playground by sending a prompt that triggers one of your MCP tools.

Closing thoughts

What I find exciting about this is the composability. You build your MCP server once and it works everywhere: VS Code, Visual Studio, Cursor, ChatGPT, and now Foundry agents. The MCP protocol is becoming the universal interface for tool use in AI, and Azure Functions makes it easy to host these servers at scale and with security.

Are you building agents with Foundry? Have you connected your MCP servers to other clients? I’d love to hear what tools you’re exposing and how you’re using them. Share your thoughts with us!

What’s next

In the next blog post, we’ll go deeper into other MCP topics and cover new MCP features and developments in Azure Functions. Stay tuned!

The post Give your Foundry Agent Custom Tools with MCP Servers on Azure Functions appeared first on Azure SDK Blog.


Stop Wasting Hours Writing Unit Tests: Use GitHub Copilot to Explode Code Coverage Fast

1 Share

Most development teams say they want better test coverage. Fewer teams say out loud why they do not have it: writing unit tests is slow, repetitive, and usually postponed until after the “real work” is done. This is exactly where GitHub Copilot changes the economics. Using GitHub Copilot to generate unit tests for code you […]

The article Stop Wasting Hours Writing Unit Tests: Use GitHub Copilot to Explode Code Coverage Fast was originally published on Build5Nines. To stay up-to-date, Subscribe to the Build5Nines Newsletter.
