Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Despite Chinese hacks, Trump’s FCC votes to scrap cybersecurity rules for phone and internet companies

Two Trump-appointed FCC officials voted to undo the telecom industry's cybersecurity rules. One Democratic commissioner dissented, saying the decision leaves the United States "less safe" at a time when threats are increasing.

Windows 11 tests Microsoft 365 Copilot button in File Explorer, and universal “writing assistant”


File Explorer already offers “Ask Copilot” in the right-click (context) menu, and Microsoft is now testing an “Ask Microsoft 365 Copilot” option in the “Home” tab. In addition, Windows 11 is adding a new universal writing assistant, which lets you use Microsoft 365 Copilot to automatically correct grammatical errors or rewrite text.

Current UI of the Home tab in File Explorer

Right now, the Home tab in File Explorer is quite simple, but in a future release, you will see a new “Ask Microsoft 365 Copilot” option when you hover over recent files. If you click (or tap) the “Ask M365 Copilot” option, it sends your file to the Microsoft 365 Copilot app for a quick summary or insights.

Ask M365 Copilot in File Explorer

How is it different from the “Ask Copilot” in the right-click menu?

Ask Copilot in context menu

We already have one “Ask Copilot” in File Explorer, but it’s visible only when you right-click a file, and it’s geared toward general conversations about the selected file unless you write a detailed prompt.

M365 Copilot with File Explorer

The Microsoft 365 Copilot integration, on the other hand, will live in the “Home” tab of File Explorer, and it has been prompted to handle Office files better.

However, the integrations are really just two sides of the same coin, as the underlying AI is ChatGPT after all.

Microsoft says M365 Copilot integration in File Explorer delivers insights for the selected file without leaving your current context. In reality, it just calls the ChatGPT API and summarizes the selected file. We don’t know whether files are processed on the device or sent to Microsoft’s cloud directly from File Explorer, but it appears to be the latter.

M365 Copilot summaries
File processed using the M365 Copilot app directly from File Explorer | Image Courtesy: WindowsLatest.com

Also, there’s a new folder icon in the Home tab, which lets you open the file’s location directly. This is the only useful change.

These two changes are for everyone (Intel, AMD and Snapdragon).

Universal writing assistant on Windows 11, but only for Copilot+ PCs

Microsoft Edge AI Writing hands-on | Image Courtesy: WindowsLatest.com

Until now, only Microsoft Edge shipped with a writing assistant, powered by a small language model, that let you fix grammatical errors in any text field. More recently, Microsoft retired Microsoft Editor, its alternative to Grammarly, the popular proofreading tool that has since pivoted to AI (sadly).

Now, Microsoft is testing a universal AI-powered writing assistant. The idea is to show a ‘writing assistant’ pop-up when you’re interacting with a text field on a website, say, LinkedIn. It can automatically proofread the text and fix a few errors, and if you want to go a step further, it can also rewrite the text in your preferred tone.

Microsoft 365 Copilot pop up

As you can see in the above screenshot, when you use the Writing Assistance feature on a Copilot+ PC, a small window pops up with options to proofread the text in the field and rewrite it in a chosen tone: auto, concise, friendly, or professional.

This feature is exclusive to Copilot+ PCs (requires NPU) for now.

 

The post Windows 11 tests Microsoft 365 Copilot button in File Explorer, and universal “writing assistant” appeared first on Windows Latest.


Beyond the Hype: How to Use AI to Actually Increase Your Productivity as a Dev


When I started incorporating AI tools into my workflow, I was frustrated at first. I didn’t get the 5x or 10x gains others raved about on social media. In fact, it slowed me down.

But I persisted. Partly because I see it as my professional duty as a software engineer to be as productive as possible, partly because I’d volunteered to be a guinea pig in my organization.

After wrestling with it for some time, I finally had my breakthrough: using AI tools well involves the same disciplines we’ve applied in software development for decades:

  • Break work down into reasonable chunks
  • Understand the problem before trying to solve it    
  • Identify what worked well and what didn’t
  • Tweak variables for the next iteration

In this article, I share the patterns of AI use that have led me to higher productivity. 

These aren’t definitive best practices. AI tools and capabilities are changing too quickly, and codebases differ too much. And that’s before we even take the probabilistic nature of AI into account.

But I do know that incorporating these patterns into your workflow can help you become one of the developers who benefit from AI instead of being frustrated or left behind.

A Cycle for Effective AI Coding

Too many people treat AI like a magic wand that will write their code _and_ do their thinking. It won’t. These tools are just that: tools. Like every developer tool before them, their impact depends on how well you use them.

To get the most from AI tools, you need to constantly tweak and refine your approach. 

The exact process you follow will also differ depending on the capabilities of the tools you use. 

For this article, I’ll assume you’re using an agentic AI tool like Claude Code, or something similar: a well-rounded coding agent with levers you can tweak and a dedicated planning mode, something that more tools are adopting. I’ve found this type of tool to be the most impactful.

With such a tool, an effective AI coding cycle should look something like this:

[Diagram: the four-phase AI coding cycle]

The cycle consists of four phases:

  • Prompting: Giving instructions to the AI
  • Planning: Working with the AI to construct a change plan
  • Producing: Guiding the AI as it makes changes to the code
  • Refining: Using learnings from this iteration to update your approach for the next cycle

You might think this is overly complicated. Surely you could simply go between prompting and producing repeatedly? Yes, you could do that, and it might work well enough for small changes. 

But you’ll soon find that it doesn’t help you write sustainable code quickly. 

Without each step in this loop, you risk the AI tool losing its place or context, causing the quality of its output to plummet. One of the major limitations of these tools is that they will not stop and warn you when this happens; they’ll just keep trying their best. As the operator of the tool, and ultimately the owner of the code, it’s your responsibility to set the AI up for success.

Let’s look at what this workflow looks like in practice.

1. Prompting

AI tools are not truly autonomous: the quality of the output reflects the input you provide. That’s why prompting is arguably the most important phase in the loop: how well you do it will determine the quality of output you get, and by extension, how productive your use of AI will be.

This phase has two main considerations: context management and prompt crafting.

Context Management

A common characteristic of current-gen AI tools is that the quality of their output tends to decrease as the amount of context they hold increases. This happens for several reasons:

  • Poisoning: errors or hallucinations linger in context
  • Distractions: the model reuses mediocre context instead of searching for better info    
  • Confusion: irrelevant details lower output quality
  • Clashes: outdated or conflicting info leads to errors

As long as AI tools have this limitation, you get better results by strictly managing the context.

In practice, this means rather than having one long-running conversation with your agent, you should “wipe” its context in between tasks. Start from a fresh slate each time, and re-prompt it with the information it needs for the next task so that you don’t implicitly rely on accumulated context. With Claude Code, you do this with the /clear slash command. 

If you don’t clear context, tools like Claude will “auto-compact” it, a lossy process that can carry forward errors and reduce quality over time.

If you need any knowledge to persist between sessions, you can have the AI dump it into a markdown file. You can then either reference these markdown files in your tool’s agent file (CLAUDE.md for Claude Code) or mention the relevant files when working on specific tasks and have the agent load them in.

Structure varies, but it might look something like this:

```
.
├── CLAUDE.md
└── docs
    └── agents
        └── backend
            ├── api.md
            ├── architecture.md
            └── testing.md
```

Prompt Crafting

After ensuring you’re working with a clean context window, the next most important thing is the input you provide. Here are the different approaches you can take depending on the task you are dealing with.

Decomposition

Generally, you want to break work down into discrete, actionable chunks. Avoid ambiguous high-level instructions like “implement an authentication system”, as this has too much variability. Instead, think about how you would actually do the work if you were going to do it manually, and try to guide the AI along the same path.

Here’s an example from a document management system task I gave Claude. You can view the whole interaction summary in this GitHub repo.

  • Prompt: “Look at DocumentProcessor and tell me which document types reference customers, projects, or contracts.”
    • Output: AI identified all references
  • Prompt: “Update the mapping functions at {location} to use those relationships and create tests.”
    • Output: Implemented mappings + tests
  • Prompt: “Update documentIncludes to ensure each type has the right relations. Check backend transformers to see what exists.”
    • Output: Filled in missing relationships

Notice how the task is tackled in steps. A single mega-prompt would have likely failed at some point due to multiple touchpoints and compounding complexity. Instead, small prompts with iterative context led to a high success rate. 

Once the task is done, wipe the context again before moving on to avoid confusing the AI.

Chaining

Sometimes you do need a more detailed prompt, such as when tasking the AI with a larger investigation. In this case, you can greatly improve your chances of success by chaining prompts together.

The most common way of doing this is by providing your initial prompt to a separate LLM, such as ChatGPT or Claude chat, and asking it to draft a prompt for you for a specific purpose. Once you’re satisfied with the parameters of the detailed prompt, feed it into your coding agent. 

Here’s an example:

Prompt (ChatGPT): “Draft me a prompt for a coding agent to investigate frontend testing patterns in this codebase, and produce comprehensive documentation that I can provide to an AI to write new tests that follow codebase conventions.”

This prompt produces a fairly detailed second-stage prompt that you can review, refine, and feed to your agent:

[Screenshot: the generated second-stage prompt]

You can see the full output here.

This approach obviously works best when you ensure the output aligns with the reality of your code. For example, this prompt talks about `jest.config.js`, but if you don’t use Jest, you should change this to whatever you do use.

Reuse

Sometimes, you’ll find a pattern that works really well for your codebase or way of working. Often, this will happen after Step 4: Refining, but it can happen at any time. 

When you find something that works well, you should set it aside for reuse. Claude Code has a few ways you can do this, with the most idiomatic one being custom slash commands. The idea here is that if you have a really solid prompt, you can encode it as a custom command for reuse.

For example, one great time saver I found was using an agent to examine a Laravel API and produce a Postman collection. This was something I used to do manually when creating new modules, which can be quite time-consuming.

Using the chaining approach, I produced a prompt that would:

  • Generate a new Postman collection for a given backend module
  • Use the Controller/API test suite to inform the request body values
  • Use the Controller and route definitions to determine the available endpoints

Running the prompt through an agent consistently produced a working Postman collection almost instantly. You can see the prompt here.
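As a concrete illustration, here is a minimal sketch of what encoding such a prompt as a custom slash command could look like, assuming Claude Code's convention of markdown files under `.claude/commands/` and its `$ARGUMENTS` placeholder (the file name and wording are hypothetical, not my actual command):

```md
<!-- .claude/commands/postman.md (invoked as /postman <module-name>) -->
Generate a Postman collection for the $ARGUMENTS backend module.

1. Read the Controller and route definitions to determine the available endpoints.
2. Use the Controller/API test suite to inform realistic request body values.
3. Save the collection as a Postman v2.1 JSON file under postman/.
```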

When you find a valuable pattern or prompt like this, you should consider sharing it with your team, too. Increasing productivity across your team is where the real compounding benefits can happen.

2. Planning

Tools like Claude Code have a planning mode that allows you to run prompts to build context without making any changes. While you don’t always need this functionality, it’s invaluable if you’re dealing with a change with any appreciable amount of complexity.

Typically, the tool will perform an investigation to find all the information it needs to determine what it would do if it weren’t in planning mode. It will then present you with a summary of the intended changes. The key inflection point here is that it allows you to review what the AI is planning to do.

In the screenshot below, I used planning mode to ask Claude what’s needed to add “Login with Google” to an existing app that already supports “Login with Discord”:

[Screenshot: Claude Code’s plan for adding “Login with Google”]

This let me see everything the AI planned to change and decide whether it made sense for my use case.

Important: read the plan carefully! Make sure you understand what the AI is saying, and make sure it makes sense. If you don’t understand or if it seems inaccurate, ask it to clarify or investigate more. 

You should not move on from the planning phase until the plan looks exactly like what you would expect.

If the AI proposes rewriting a huge amount of code, treat it as a red flag. Most development should be evolutionary and iterative. If you break work into small chunks, the AI should propose and make small changes, which in turn will be easier to review. If the plan includes far more changes than you expected, review your input to see if the AI is missing important context.

Once you’ve iterated on the plan, you can give the AI the go-ahead to execute the plan.

3. Producing

During the third phase, the AI will begin to make changes to your codebase. Although the AI will produce most of the output here, you’re not off the hook. You still own any code it produces at your behest, for better or worse. It’s therefore better to see the producing phase as a collaboration between you and the AI: the AI produces code and you’re guiding it in real-time.

To get the most out of your AI tool and spend the least time on rework, you need to guide it. Remember, your goal is maximum productivity—real productivity, not just lines of code. That requires you to actively engage with the tool and work with it as it builds things, rather than just leaving it to its own devices.

If you take sufficient care with creating your prompt and doing planning, there shouldn’t be too many surprises during the actual coding phase. However, AI can still make mistakes, and it will certainly overlook things, especially in larger systems. (This is one of the major reasons why fully “vibe coded” projects break down quickly as they increase in scope. Even when the entire system has been built by AI, it will not remember or know everything that exists in the codebase.)

A day has yet to pass where I haven’t caught the AI making a mistake. The mistakes might be small, like using string literals in place of pre-existing constants, or inconsistent naming conventions. They might not even stop the code from working.

However, if you let these changes through unchecked, it will be the start of a slippery slope that is hard to recover from. Be diligent, and treat any AI-generated code as you would code from another team member. Better still, understand that this code has your name attached to it, and don’t accept anything that you aren’t willing to “own” in perpetuity.

So if you notice a mistake has been made, point it out and suggest how it can be fixed. If the tool deviates from the plan or forgets something, try to catch it early and course-correct. Because your prompts are now small and focused, the features the AI builds should also be smaller. This makes reviewing them easier.

4. Refining

Luckily, rather than constantly fighting the machine and going back and forth on minor issues, the final phase of the loop—refining—offers a more sustainable way to calibrate your AI tool over time.

You might not make a change to your setup after every loop, but every loop will yield insight into what is working well and what needs to change. 

The most common way to tweak the behavior of AI tools is through their steering documents. For instance, Claude Code has CLAUDE.md, and Cursor has Rules.

These steering documents are typically a markdown file that gets loaded into the agent’s context automatically. In it, you can define project-specific rules, style guides, architectures, and more. If you find, for example, that the AI constantly struggles with how to set up mocks in your tests, you can add a section to your doc that explains what it needs to know, with examples it can use for reference, or links to known-good files in the codebase it can look at. 

This file shouldn’t get too big, as it does take up space in the LLM’s context. Treat it like an index, where you include information that is always needed directly in the file, and link out to more specialized information that AI can pull in when needed. 

Here’s an excerpt from one of my CLAUDE.md files that works well:

```md
...
## Frontend
...
### Development Guidelines

For detailed frontend development patterns, architecture, and conventions, see:
**[Frontend Module Specification](./docs/agents/frontend/frontend-architecture.md)**

This specification covers:

- Complete module structure and file organization
- Component patterns and best practices
- Type system conventions
- Testing approaches
- Validation patterns
- State management
- Performance considerations
...
```

The AI understands the hierarchy of markdown files, so it will see that there’s a section about frontend development guidelines, and it will see a link to a module specification. The tool will then decide internally whether it needs this information. For instance, if it’s working on a backend feature, it will skip it, but if it’s working on a frontend module, it will pull in this extra file. 

This feature allows you to conditionally expand and refine the agent’s behavior, tweaking it each time it has trouble in a specific area, until it can work in your codebase effectively more often than not.

Exceptions to the Cycle

There are some cases where it makes sense to deviate from this flow.

For quick fixes or trivial changes, you might only need Prompting → Producing. For anything beyond that, skipping planning and refinement usually backfires, so I don’t recommend it.

Refinement will likely need to be done quite often when first starting or when moving to a new codebase. As your prompts, workflows, and setup mature, the need to refine drops. Once things are dialed in, you likely won’t need to tweak much at all.

Finally, while AI can be a real accelerator for feature work and bug fixes, there are situations where it will slow you down. This varies by team and codebase, but as a rule of thumb: if you’re deep in performance tuning, refactoring critical logic, or working in a highly regulated domain, AI is more likely to be a hindrance than a help.

Other Considerations

Beyond optimizing your workflow with AI tools, a few other factors strongly affect output quality and are worth keeping in mind.

Well-Known Libraries and Frameworks

One thing you’ll notice quickly is that AI tools perform much better with well-known libraries. These are usually well-documented and likely included in the model’s training data. In contrast, newer libraries, poorly documented ones, or internal company libraries tend to cause problems. Internal libraries are often the hardest, since many have little to no documentation. This makes them difficult not only for AI tools but also for human developers. It’s one of the biggest reasons AI productivity can lag on existing codebases.

In these situations, your refinement phase often means creating guiding documentation for the AI so it can work with your libraries effectively. Consider investing time up front to have the AI generate comprehensive tests and documentation for them. Without it, the AI will have to reanalyze the library from scratch every time it works on your code. By producing documentation and tests once, you pay that cost up front and make future use much smoother.

Project Discoverability

The way your project is organized has a huge impact on how effectively AI can work with it. A clean, consistent directory structure makes it easier for both humans and AI to navigate, understand, and extend your code. Conversely, a messy or inconsistent structure increases confusion and lowers the quality of output you get.

For instance, a clean, consistent structure might look like this:

```
.
├── src
│   ├── components
│   ├── services
│   └── utils
├── tests
│   ├── unit
│   └── integration
└── README.md
```

Compare that with this confusing structure:

```
.
├── components
│   └── Button.js
├── src
│   └── utils
├── shared
│   └── Modal.jsx
├── pages
│   ├── HomePage.js
│   └── components
│       └── Card.jsx
├── old
│   └── helpers
│       └── api.js
└── misc
    └── Toast.jsx
```

In the clear structure, everything lives in predictable places. In the confusing one, components are scattered across multiple folders (`components`, `pages/components`, `shared`, `misc`), utilities are duplicated, and old code lingers in `old/`. 

An AI, like any developer, will struggle to build a clear mental model of the project, which increases the chance of duplication and errors. 

If your codebase has a confusing structure and restructuring it is not an option, map out common patterns—even if there are multiple patterns for similar things—and add these to your steering document to reduce the amount of discovery and exploration the AI tool needs to do.
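For instance, a pattern map for the confusing structure above might look something like this (a sketch; the specific rules are assumptions you would tailor to your own codebase):

```md
## Where things live (legacy layout: do not "fix" without asking)
- UI components exist in `components/`, `shared/`, and `pages/components/`; check all three before creating a new one.
- `old/` is deprecated. Never import from it or add files to it.
- New utilities go in `src/utils`, even though legacy helpers exist elsewhere.
```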

Wrapping Up

Adding AI tools to your workflow won’t make you a 10x developer overnight. You might even find that they slow you down a bit initially, as all new tools do. But if you invest the time to learn them and adapt your workflow, the payoff can come surprisingly quickly.

The AI tooling space is evolving fast, and the tools you use today will likely feel primitive a year from now. However, the habits you build and the workflow you develop—the way you prompt, plan, act, and refine—will carry forward in one form or another. Get those fundamentals right, and you’ll not only keep up with the curve, you’ll stay ahead of it.


Building a FastAPI Application & Exploring Python Concurrency


What are the steps to get started building a FastAPI application? What are the different types of concurrency available in Python? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder’s Weekly articles and projects.

We discuss a recent Real Python step-by-step tutorial about programming a FastAPI example application. You practice installing FastAPI, building your first endpoints, adding path and query parameters, and validating endpoints using Pydantic.
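For a flavor of what the tutorial covers, here's a minimal sketch of a FastAPI app with a path parameter, an optional query parameter, and a Pydantic-validated request body (the endpoint and model names are illustrative, not taken from the episode):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# Path parameter (item_id) plus an optional query parameter (q).
@app.get("/items/{item_id}")
def read_item(item_id: int, q: str | None = None):
    return {"item_id": item_id, "q": q}

# FastAPI validates the request body against the Item model via Pydantic.
@app.post("/items/")
def create_item(item: Item):
    return {"name": item.name, "price": item.price}
```

Run it with `uvicorn main:app --reload`, and FastAPI also serves interactive API docs at /docs.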

Christopher covers updates to his Real Python video course about concurrency in Python. The course digs into what concurrency means in Python and why you might want to incorporate it in your code. He describes the different methods and demonstrates how to approach coding using threading, asyncio, and multiprocessing.
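As a rough illustration of the distinction (my own toy example, not taken from the course): asyncio overlaps waits cooperatively on a single thread, threads overlap blocking I/O calls, and for CPU-bound work you would swap in multiprocessing (e.g., ProcessPoolExecutor):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_task(n: int) -> int:
    time.sleep(1)  # stands in for a blocking I/O wait
    return n * n

async def async_task(n: int) -> int:
    await asyncio.sleep(1)  # cooperative wait; other tasks run meanwhile
    return n * n

async def main() -> None:
    # asyncio: three 1-second waits overlap, finishing in about 1s total
    print("asyncio:", await asyncio.gather(*(async_task(n) for n in range(3))))

    # threading: blocking calls overlap across worker threads
    with ThreadPoolExecutor() as pool:
        print("threads:", list(pool.map(blocking_task, range(3))))

asyncio.run(main())
```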

We also share several other articles and projects from the Python community, including a news roundup, the PSF fundraiser campaign for 2025, where Python stores attributes, performance hacks for faster Python code, a project to transform functions into a web interface, and a Python disk-backed cache.

Course Spotlight: Python Descriptors

Learn what Python descriptors are, how the descriptor protocol works, and when descriptors are useful, with practical, hands-on examples.
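As a taste of the protocol (a standard textbook-style example, not taken from the course), a descriptor is just a class implementing `__get__`/`__set__`, which Python invokes automatically on attribute access:

```python
class Positive:
    """Reusable validator implemented with the descriptor protocol."""

    def __set_name__(self, owner, name):
        self.name = name  # remember the attribute name on the owning class

    def __get__(self, instance, owner=None):
        if instance is None:
            return self  # accessed on the class itself
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError(f"{self.name} must be positive")
        instance.__dict__[self.name] = value

class Order:
    quantity = Positive()
    price = Positive()

order = Order()
order.quantity = 3   # OK
# order.price = -1   # would raise ValueError: price must be positive
```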

Topics:

  • 00:00:00 – Introduction
  • 00:02:18 – Django Security Release
  • 00:02:46 – Django Is Now a CVE Numbering Authority (CNA)
  • 00:03:53 – An Annual Release Cycle for Django
  • 00:04:12 – PEP 810: Explicit Lazy Imports (Accepted)
  • 00:04:27 – PSF Board Office Hour Sessions for 2026
  • 00:05:42 – PyCon US 2026: Call for Proposals Open
  • 00:06:15 – PSF Fundraiser campaign for 2025
  • 00:10:12 – A Close Look at a FastAPI Example Application
  • 00:16:36 – Speed Up Python With Concurrency
  • 00:21:08 – __dict__: Where Python Stores Attributes
  • 00:25:59 – Video Course Spotlight
  • 00:27:17 – 10 Smart Performance Hacks for Faster Python Code
  • 00:29:56 – FuncToWeb: Transform Python Functions Into a Web Interface
  • 00:32:48 – python-diskcache: Python Disk-Backed Cache
  • 00:34:07 – Thanks and goodbye


Download audio: https://dts.podtrac.com/redirect.mp3/files.realpython.com/podcasts/RPP_E275_02_PyCoders.a3c2921ff647.mp3

Coaching Product Owners from Isolation to Collaboration | Sara Di Gregorio


Sara Di Gregorio: Coaching Product Owners from Isolation to Collaboration

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

The Great Product Owner: Using User Story Mapping to Break Down PO Isolation

"One of the key strengths is the ability to build a strong collaborative relationship with the Scrum team. We constantly exchange feedback, with the shared goal of improving both our collaboration and our way of working." - Sara Di Gregorio

 

Sara considers herself fortunate—she currently works with Product Owners who exemplify what great collaboration looks like. One of their key strengths is the ability to build strong collaborative relationships with the Scrum team. They don't wait for sprint reviews to exchange feedback; instead, they constantly communicate with the shared goal of improving both collaboration and ways of working. 

These Product Owners involve the team early, using techniques like user story mapping after analysis phases to create open discussions around upcoming topics and help the team understand potential dependencies. They make themselves truly available—they observe daily stand-ups not as passive attendees but as engaged contributors. If the team needs five minutes to discuss something afterward, the Product Owner is ready. They attend Scrum events with genuine interest in working with the team, not just fulfilling an attendance requirement. They encourage open dialogue, even participating in retrospectives to understand how the team is working and where they can improve collaboration.

What sets these Product Owners apart is their communication approach. They don't come in thinking they know everything or that they need to do everything alone. Their mindset is collaborative: "We're doing this together." They recognize that developers aren't just executors—they're users of the product, experts who can provide valuable perspectives.

When Product Owners ask "Why do you want this?" and developers respond with "If we do it this way, we can be faster, and you can try your product sooner," that's when magic happens. Great Product Owners understand that strong communication skills and collaborative relationships create better products, better teams, and better outcomes for everyone involved.

 

Self-reflection Question: How are your Product Owners involving the team early in discovery and analysis, and are they building collaborative relationships or just attending required events?

The Bad Product Owner: The Isolated Expert Who Thinks Teams Just Execute

 

"Sometimes they feel very comfortable in their subject, so they assume they know everything, and the team has only to execute what they asked for." - Sara Di Gregorio

 

Sara has encountered Product Owners who embody the worst anti-pattern: they believe they don't need to interact with the development team because they're confident in their subject matter expertise. They assume they know everything, and the team's job is simply to execute what they ask for. These Product Owners work isolated from the development team, writing detailed user stories alone and skipping the interesting discussions with developers. They only involve the team when they think it's necessary, treating developers as order-takers rather than collaborators who could contribute valuable insights. 

The impact is significant—teams lose the opportunity to understand the "why" behind features, Product Owners miss perspectives that could improve the product, and collaboration becomes transactional instead of transformational. Sara's approach to addressing this anti-pattern is patient but deliberate. She creates space for dialogue and provides training with the Product Owner to help them understand how important it is to collaborate and cooperate with the team. She shows them the impact of including the team from the beginning of feature study. 

One powerful technique she uses is user story mapping workshops, bringing both the team and Product Owner together. The Product Owner explains what they want to deliver from their point of view, but then something crucial happens: the team asks lots of questions to understand "Why do you want this?"—not just "I will do it." 

Through this exercise, Sara watched Product Owners have profound realizations. They understood they could change their mindset by talking with developers, who often are users of the product and can offer perspectives like "If we do it this way, we can be faster, and you can try your product sooner." 

The workshop helps teams understand the big picture of what the Product Owner is asking for while helping the Product Owner reflect on what they're actually asking. It transforms the relationship from isolation to collaboration, from directive to dialogue, from assumption to shared understanding.

 

In this segment, we refer to the User Story Mapping blog post by Jeff Patton.

 

Self-reflection Question: Are your Product Owners writing user stories in isolation, or are they involving the team in discovery to create shared understanding and better solutions?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Sara Di Gregorio

 

Sara is a people-centered Scrum Master who champions trust, collaboration, and real value over rigid frameworks. With experience introducing Agile practices, she fosters empathy, inclusion, and clarity in every team. As an Advanced Scrum Master, she helps teams grow, perform, and deliver with enthusiasm and purpose.

 

You can link with Sara Di Gregorio on LinkedIn.

 





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251121_Sara_Di_Gregorio_F.mp3?dest-id=246429

Episode 501 - How to Network on LinkedIn - Solo Show

1 Share

If you want to check out all the things torc.dev has going on, head to linktr.ee/taylordesseyn for more information on how to get plugged in!





Download audio: https://anchor.fm/s/ce6260/podcast/play/111435293/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2025-10-20%2F33644ef0-67ab-409a-9ca0-8c21649ca1f9.mp3