Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How AI Swarms Are Disrupting Democracy


Every day, millions of pieces of fake content are produced. Videos, audio clips, posts, articles, generated by artificial intelligence, distributed at industrial scale, aimed at shifting public opinion across entire countries. The people producing them are often outside the country being targeted. The people receiving them almost never know they’re fake. And they have no idea how they’re made.

A few years ago, troll farms worked like this: entire buildings full of people, shifts, desks and workers paid to write posts, create fake profiles, comment and pick fights in online discussions. It was expensive, slow, and in the end, the real impact was marginal. Those buildings still exist today, mostly in India, split between teams specializing in scams and teams dedicated to disinformation. They work on commission and they’re mostly AI experts now. They no longer write the articles themselves and no longer do graphic design or photo editing. They have AI agents do everything: agents they create, configure, instruct, and supervise. Hundreds of thousands of autonomous agents that do in one hour what used to take weeks of human labor. Troll farms have become AI farms, producing synthetic content at industrial scale.

The report “From Trolls to Generative AI: Russia’s Disinformation Evolution,” published in February of 2026 by the Centre for International Governance Innovation (CIGI), tells one of these stories, specifically about disinformation campaigns originating from Russia. Networks like CopyCop, a disinformation operation linked to the GRU (Russian military intelligence), use uncensored open-source language models like modified versions of Llama 3, installed on their own servers, to transform press articles into political propaganda and distribute it across hundreds of fake websites without leaving a trace. Because the models run locally, there’s no watermark and no log. The model runs on their hardware, inside their borders, outside any Western jurisdiction.

The paper “How malicious AI swarms can threaten democracy,” published in Science in January 2026, gives a clear picture of what is coming: coordinated swarms of AI agents with persistent identities, memory, and the ability to adapt in real time to people’s reactions. The authors call them “malicious AI swarms”: fully autonomous agents, each producing original content, each one different, each one adapted to context.

They can simulate real communities that appear credible, and they build what we can call synthetic consensus: the illusion that an opinion is widely shared, that a position is held by the majority, when in reality it’s a single operator speaking through thousands of masks.

It works because we humans have bugs too, and the swarms exploit them at a scale that was never possible before or that would have required enormous human resources.

One bug is called the bandwagon effect. Combine it with another, the illusory truth effect: repetition plus apparent source independence equals perceived truth. So if we see the same position expressed by different sources, in different contexts, with different words, on different platforms, we register it as widespread. If we perceive it as widespread, we consider it more credible. And if we consider it credible, we tend to align with it.

Swarms of autonomous agents exploit both mechanisms at the same time, at industrial scale.

What most people still haven’t grasped is the scale. We were used to automation: A system that sent a hundred thousand identical emails, at most changing the name and little else, or made just as many posts and similar comments with minor variations. It automated the publishing, but at its core it was recognizable spam. Our mental model is still that one: If it’s automated, it’s generic. If it’s generic, you can spot it. But that’s a perception error built on years of experience when AI agents didn’t exist. That model is over.

These agents no longer fit the concept of automation, because they make decisions, they radically change the text based on the recipient. They aggregate data from heterogeneous sources in real time: social profiles, public records, leaked databases that you can now buy for a few dollars on any dark web marketplace. Billions of personal records are already out there, scattered across hundreds of breaches accumulated over the years, and AI can cross-reference them, reconcile them, and build a coherent profile of a single person in seconds. The computational cost is negligible: a few cents in tokens to generate a perfectly personalized message. Consider that a single agent with access to a language model and a couple of leak databases can produce thousands of unique pieces of content per day, each calibrated for a different person. Multiply that by a hundred thousand agents working in parallel, twenty-four hours a day, and you have the scale of what’s happening.

Another legacy from the past: “I’m just an ordinary person, why would anyone bother creating content specifically to convince me?” That may once have been true. Today, nobody has to bother, because these agents don’t get tired, don’t sleep, and do nothing else: they find connections, aggregate data, and produce false content calibrated for each of us. The old demographic profiling is over. This is surgical media targeting at industrial scale.

But the capacity to respond and deny is not at industrial scale. If hundreds of thousands of coordinated agents spread a video of a politician saying something they never said, that politician can deny it all they want. The video is there. Millions of people have seen it. The denial arrives later, arrives slower, and will never reach the same scale. It arrives in a world where nobody knows what’s true anymore.

If the same swarms spread the news that a head of state has died, and the news is false, that head of state can make all the videos they want to prove they’re alive. Those videos will probably be dismissed as deepfakes. Because the swarm’s narrative got there first, took root, and at that point any evidence to the contrary looks fabricated.

Whoever controls the swarms today controls the version of the facts. Whoever tries to push back is already at a disadvantage because they have to prove that a real video is real in a world where everyone has learned that videos can be fake.

The attackers are often outside the country being hit: groups aligned with governments that want to shift public opinion in another country, or that target specific demographics, young people for example, through platforms often owned by those very countries.

All of this is a massive threat to democracy because democracy operates on some premises, including that people form opinions based on real information, discuss with each other, and then decide. If the information is fabricated, if the debate is populated by entities that don’t exist, if the consensus we perceive is synthetic, that premise collapses. And with it, the entire mechanism. Elections become the result of who has the best swarms, not who has the best ideas. Public debate becomes a performance where most of the voices are generated, and public opinion stops being public and becomes the product of whoever has the resources to manufacture it.

We grew up thinking that threats to democracy came from coups, censorship, or regime propaganda broadcast on television or in national newspapers. Those were real threats, but they were at least visible. They were things you could identify and fight. Now the threat is bigger and, above all, invisible, personalized, and it operates inside the very channels we use to inform ourselves, to discuss, to participate. It contaminates information from within, to the point where nobody knows which voices are real and which are machines.

What can we do? Watermarking? Pattern detection? Unfortunately, they don’t work. The major AI platforms can embed markers in content generated by their models, true. But the people building autonomous swarms don’t use commercial platforms. They use open-source models with fine-tuning and capabilities that can’t be controlled from outside. And they often have no legal obligation to do anything because there are no global laws that can impose watermarking on every computer in the world. The result is paradoxical: The content produced by those who follow the rules stays marked, and the content produced by those who want to cause harm stays free.

Pattern detection systems have the same limits. They work for a while, then once the detection patterns are identified, the swarms adapt. They’re designed to do exactly that.

And the platforms where all of this circulates have a financial incentive to turn a blind eye. Internal Meta documents made public by Reuters in November 2025 estimated that roughly 10% of Meta’s global 2024 revenue, about $16 billion, came from advertising for scams and prohibited products. Fifteen billion high-risk ads served on average every day to users. The maximum revenue Meta was willing to sacrifice to act against suspicious advertisers was 0.15% of total revenue: $135 million out of $90 billion. When a platform’s business model depends on ad volume, removing the fraudulent ones has a cost that nobody wants to pay. I suspect Meta is not alone in this.

Regulation doesn’t solve this problem either. I’ve worked on the European AI framework, the GPAI task force, the Italian AI law, and I’ve brought my perspective to the UK Parliament. I’ve been in those rooms. Europe has the AI Act, is drafting the GPAI Code of Practice, and has a regulatory apparatus more advanced than any other bloc’s. The United States has no federal regulation, and twenty-eight states have tried to legislate with transparency requirements that amount to fine print. But even the most ambitious European framework has a structural limit: The attacks come from countries that answer to none of these rules. You can regulate your platforms, your developers, your companies. You can’t regulate a building in Saint Petersburg, Shenzhen, or New Delhi, where someone is instructing swarms of agents on open-source models running on local servers, outside any jurisdiction.

One way out is to return to the reputation of sources. Editors, news organizations, journalists with a name and a face. People and organizations that have a professional track record to defend and that risk something when they get it wrong. Sure, they can have political leanings and they can make mistakes. But they have a constraint that no AI agent will ever have: public accountability. A system that generates millions of pieces of false content answers to no one. An editor answers to their audience, to the law, to their reputation. That constraint is the only filter that still holds, and protecting it is the only thing we can do right now, while the laws try to catch up with a technology that moves faster than any legislative process in the world.

Are we completely at the mercy of AI swarms or can we fight back?

Machines should not get to overpower humans, especially when what’s at stake is how we govern ourselves. The antibodies exist. We need to activate them.

The more people understand how swarms work, the less effective they become. A swarm that manufactures fake consensus only works if the people receiving it don’t know synthetic consensus exists. A bit like deepfakes. We know about them now and we often spot them. Once you see how it works, it’s harder to fall for it.

Then we need investment in culture. In spreading digital literacy, which is not learning how to use a computer, but learning to understand the social and cultural effects of the digital world. It means teaching in schools how to verify a source and what the signs of manipulated content are. It means stopping the practice of treating media literacy as a school project and starting to treat it as democratic infrastructure, on the same level as bridges and hospitals. It means funding independent journalism instead of letting it die, strangled by the same mechanisms that reward false content because it generates more engagement. It means demanding that platforms give different visibility to those who have a verifiable reputation versus those who have none.

Because awareness is the only antibody that scales at the same speed as the threat. And unlike regulation or detection systems, awareness doesn’t need to be imposed. It can be built, taught, shared, and spread from person to person.

Before sharing a piece of content, check where it comes from. Before reacting to a video or a statement, stop. Ask yourself whether the source has a name, a history, something to lose. Treat every piece of content as potentially synthetic until a credible, accountable source confirms it. These are habits, not technologies. They cost nothing and they work immediately.

Finally, we need the help and collaboration of the tech community. Those who design platforms, write code, and make decisions about how feeds and ranking algorithms work are making choices that directly shape the information ecosystem. These are choices with democratic consequences. The people making them know it. Many have known it for years. This is the moment to stop treating it as someone else’s problem and to decide which side you’re on. Because the swarms are not waiting.

We can do this. The tools exist, the knowledge is there, and the threat is clear enough that pretending not to see it is already a choice. The question is whether we act now, while the window is still open, or later, when the damage will be harder to reverse.



Read the whole story
alvinashcraft
27 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

PyTorch vs. TensorFlow: Choosing the Right Framework in 2026

PyTorch vs. TensorFlow

Choosing between PyTorch and TensorFlow isn’t about finding the “better” framework – it’s about finding the right fit for your project. Both power cutting-edge AI systems, but they excel in different domains. PyTorch dominates research and experimentation, while TensorFlow leads in production deployment at scale.

The frameworks have evolved significantly since their early days, each building tools and capabilities to support research and production. Despite these improvements, fundamental differences remain in their philosophies, ecosystems, and ideal use cases, which will naturally influence which framework will best fit your project.

This guide examines where each framework shines, compares them across key dimensions, and helps you choose the right tool for your natural language processing, computer vision, and reinforcement learning projects.

What sets PyTorch and TensorFlow apart?

PyTorch and TensorFlow took different approaches from day one. Google launched TensorFlow in 2015, focusing on production deployment and enterprise scalability. Meta released PyTorch in 2016, prioritizing research flexibility and Pythonic development. These roots still shape each framework today.

The key difference between the two lies in computational graphs. PyTorch uses dynamic graphs that execute operations immediately, making debugging natural – you use standard Python tools and inspect tensors at any point. TensorFlow originally required static graphs defined before execution, though version 2.x now defaults to eager execution while retaining optional graph compilation for performance.
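The difference is easiest to see in a few lines of PyTorch, where every operation runs the moment it is written (a minimal sketch; the tensor shapes are arbitrary):

```python
import torch

# PyTorch's eager (dynamic-graph) mode: each line executes immediately,
# so intermediate tensors can be printed or inspected like ordinary values.
x = torch.randn(4, 3)
w = torch.randn(3, 2, requires_grad=True)

h = x @ w              # runs right away -- no separate compile/session step
print(h.shape)         # inspect mid-computation: torch.Size([4, 2])

loss = h.pow(2).mean()
loss.backward()        # gradients are available immediately afterwards
print(w.grad.shape)    # torch.Size([3, 2])
```

A standard Python debugger can pause on any of those lines and show the live tensor values, which is exactly the workflow the dynamic-graph design enables; TensorFlow 2.x behaves similarly in eager mode, with `tf.function` available to compile hot paths.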

Market data shows TensorFlow holds a 37% market share, while PyTorch commands 25%. But the research tells a different story: PyTorch powers 85% of deep learning papers presented at top AI conferences.

PyTorch: Strengths and weaknesses

PyTorch’s Pythonic API treats models like regular Python code, making development feel intuitive from the start. The framework’s dynamic computational graphs execute operations immediately rather than requiring upfront model definition, fundamentally changing how you approach debugging and experimentation.

This design philosophy has made PyTorch the dominant choice in research, where flexibility matters more than deployment infrastructure. However, this research-first design means production deployment tools remain less mature than TensorFlow’s enterprise infrastructure.

PyTorch strengths

  • Intuitive, Pythonic API: Models use standard Python syntax with minimal framework-specific concepts, reducing the learning curve dramatically compared to other frameworks.
  • Dynamic graphs enable natural debugging: Set breakpoints in training loops, inspect tensor values mid-execution, and modify architectures on the fly using tools you already know.
  • Priority access to the latest techniques: Because of its research dominance, when cutting-edge architectures or methods emerge, they’re implemented in PyTorch before anywhere else.
  • Strong ecosystem: Libraries like PyTorch Lightning handle training loops and best practices automatically, letting you focus on model architecture.

PyTorch weaknesses

  • Production deployment tools are less mature: Deployment options lag behind TensorFlow’s battle-tested infrastructure, so you need to do more setup work for production systems.
  • Mobile and edge deployment is limited: PyTorch Mobile is functional but less polished than TensorFlow Lite for smartphones and IoT devices.
  • Dynamic nature complicates optimization: The flexibility that aids development can make optimization for production performance harder without additional tools like TorchScript.
  • Smaller enterprise adoption: Fewer production patterns and case studies compared to TensorFlow’s extensive enterprise documentation.
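The TorchScript step mentioned above is small in code terms. A hedged sketch (the `TinyNet` module and its layer sizes are made up for illustration):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.fc(x))

model = TinyNet().eval()
scripted = torch.jit.script(model)   # compile the eager model to a static graph
out = scripted(torch.randn(1, 8))
print(out.shape)                     # same behavior, now serializable/optimizable
```

The scripted module can be saved with `scripted.save(...)` and loaded in C++ or a serving process without a Python dependency, which is the usual route from research code to deployable artifact.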

TensorFlow: Strengths and weaknesses

TensorFlow’s production ecosystem provides you with a comprehensive infrastructure for deploying models at scale. Google built the framework specifically for enterprise environments where reliability, performance, and deployment flexibility matter most.

This production-first approach created mature tooling for serving, mobile optimization, and MLOps that PyTorch is still catching up to. The trade-off comes in development experience – TensorFlow’s API can feel more complex and less intuitive than PyTorch’s streamlined approach.

TensorFlow strengths

  • Mature production deployment tools: Battle-tested infrastructure with TensorFlow Serving for high-throughput serving, TensorFlow Lite for mobile, and TensorFlow.js for browsers.
  • Superior mobile and edge optimization: TensorFlow Lite delivers industry-standard performance and comprehensive device support for smartphones and edge devices.
  • Strong enterprise adoption: Proven production patterns used by thousands of companies, with extensive documentation for scaling systems serving millions of predictions.
  • Comprehensive MLOps tooling: TensorFlow Extended (TFX) gives you end-to-end pipelines for production ML workflows, from data validation through model monitoring.
  • TPU support for large-scale training: Access to Google’s specialized Tensor Processing Units for training at massive scale with performance advantages over GPU infrastructure.

TensorFlow weaknesses

  • Steeper learning curve: More complexity when implementing custom models or debugging issues, even with Keras integration simplifying high-level operations.
  • More verbose code for custom work: Novel architectures or training procedures require significantly more code compared to PyTorch’s streamlined approach.
  • Larger, less cohesive API: Broader API surface with multiple ways to accomplish the same task creates confusion and longer learning curves.
  • Debugging can be challenging: Graph-related issues may require you to understand TensorFlow’s internal execution model despite eager execution improvements.
  • Slower adoption of research techniques: New methods from research papers typically take longer to appear in TensorFlow compared to PyTorch.

If you’re new to TensorFlow and want a hands-on starting point, check out How to Train Your First TensorFlow Model in PyCharm, where you’ll build and train a simple model step by step using Keras and visualize the results.
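In the same spirit, a complete Keras training run fits in a few lines. This is a hedged sketch on synthetic data; all layer sizes and hyperparameters are arbitrary placeholders:

```python
import numpy as np
import tensorflow as tf

# A tiny regression model trained on random data, just to show the workflow.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
history = model.fit(x, y, epochs=2, verbose=0)   # two quick epochs
preds = model.predict(x, verbose=0)
print(preds.shape)                               # (32, 1)
```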

PyTorch vs. TensorFlow: Head-to-head comparison

Choosing between PyTorch and TensorFlow isn’t always straightforward, and there are many factors to consider. 

The table below provides a high-level head-to-head comparison of PyTorch and TensorFlow so you can quickly assess which framework generally fits your needs. We’ll later consider project-specific scenarios and provide a detailed decision matrix to guide your choice.

| Dimension | PyTorch | TensorFlow |
| --- | --- | --- |
| Learning curve | Easier: Pythonic and intuitive | Steeper: more complex API despite Keras |
| Debugging | Excellent: standard Python tools work naturally | Good: improved with eager execution |
| Production deployment | Improving: TorchServe and TorchScript available | Excellent: mature ecosystem (Serving, Lite, JS) |
| Research/experimentation | Dominant: 85% of deep-learning research papers | Present: but trailing PyTorch in adoption |
| Community ecosystem | Research-focused: Hugging Face, PyTorch Lightning | Enterprise-focused: TFX, strong cloud integration |
| Performance at scale | Strong: DDP for distributed training | Strong: graph optimization, TPU support |
| Industry adoption | Growing: used by 15,800+ companies | Established: used by more than 23,000 companies |

PyTorch vs. TensorFlow for different use cases and applications 

Your framework choice depends heavily on what you’re building. Here’s how PyTorch and TensorFlow stack up for major machine learning domains.

Natural language processing

PyTorch dominates NLP with no signs of slowing. The Hugging Face Transformers library – the de facto standard for working with language models – started as a PyTorch-only framework and later added TensorFlow support as a secondary option. When you’re fine-tuning transformers, implementing custom attention mechanisms, or experimenting with novel architectures, PyTorch’s flexibility accelerates your iteration.

Verdict: PyTorch leads NLP decisively. Choose TensorFlow only if you have specific mobile deployment requirements that override all other considerations.

Computer vision

Computer vision presents a more balanced landscape for your projects. PyTorch benefits from research momentum – when you’re developing novel detection algorithms or experimenting with architectures, you’ll find state-of-the-art implementations appear in PyTorch first. TensorFlow excels for building production CV systems, especially for mobile object detection or on-device image classification, where TensorFlow Lite’s optimization matters most.

For a hands-on example, watch the video on how to build a TensorFlow object detection app, which shows how to take a pre-trained model and turn it into a real-time object detection app running on a robot in PyCharm.

Verdict: Use case dependent. Choose PyTorch for research and novel architectures, TensorFlow when your deployment priorities favor mobile and edge devices.

Reinforcement learning

PyTorch holds a slight edge in reinforcement learning, driven by the research community’s preference for it. When you’re implementing custom RL algorithms, modifying reward functions dynamically, or debugging agent behavior, PyTorch’s flexibility serves you better. TensorFlow offers solid capabilities through TF-Agents for production RL systems at scale.

Verdict: Choose PyTorch for RL research and experimentation or TensorFlow for building large-scale production-grade RL systems like recommendation engines.

Tooling and developer experience in PyCharm

PyCharm provides comprehensive support for both frameworks, streamlining your development workflow regardless of which you choose.

  • Debugging: Set breakpoints in training loops, inspect tensor values, and step through model forward passes using the integrated debugger that works naturally with PyTorch’s dynamic graphs and TensorFlow’s eager execution.
  • Jupyter notebook support: Prototype in notebooks, inspect data transformations visually, then move to scripts for production training with seamless integration.
  • Package management: Handle complex dependency trees and CUDA requirements using virtual environment management to prevent conflicts between frameworks.
  • Remote interpreters: Connect to remote GPU servers, develop locally while training remotely, and sync code automatically to take advantage of powerful hardware without leaving your IDE.
  • TensorBoard integration: Track training metrics, visualize model graphs, and compare experiments within PyCharm using native TensorFlow support or torch.utils.tensorboard for PyTorch.
  • Code completion: Get framework-specific suggestions for layer definitions, optimizer configurations, and data pipeline operations that reduce errors and accelerate development.
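The TensorBoard integration mentioned above works the same way from PyTorch code. A minimal sketch (the `runs/demo` directory and the metric names are arbitrary examples):

```python
from torch.utils.tensorboard import SummaryWriter

# Log one scalar per step; PyCharm's TensorBoard integration (or a plain
# `tensorboard --logdir runs`) can then plot the run.
writer = SummaryWriter(log_dir="runs/demo")
for step in range(3):
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)
writer.close()
```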

Performance, scalability, and deployment

Training performance barely differs between frameworks for most workloads – both handle GPU training efficiently with comparable speeds. TensorFlow gains an edge when you need TPU support for large-scale training, offering more mature integration with Google’s specialized hardware. For multi-GPU scaling, both deliver strong performance with PyTorch’s DDP and TensorFlow’s MirroredStrategy.

Deployment scenarios differentiate the frameworks more clearly. TensorFlow Serving handles production model serving at scale with built-in versioning and A/B testing that PyTorch’s TorchServe can’t yet match in maturity. When deploying to mobile devices or edge hardware, TensorFlow Lite provides industry-standard optimization through quantization and pruning. For browser deployment, TensorFlow.js offers more integrated, optimized inference compared to serving PyTorch models via ONNX Runtime.

Memory management affects development experience – PyTorch’s caching allocator handles GPU memory efficiently with dynamic batch sizes, causing fewer surprises when experimenting with different model configurations.

Community, ecosystem, and library support

PyTorch’s research dominance created a vibrant, innovation-focused community that accelerates development. The PyTorch Conference 2024 saw triple the registrations versus 2023, and when cutting-edge techniques emerge, they appear in PyTorch first. The Hugging Face ecosystem amplifies this advantage – more than 220,000 PyTorch-compatible models versus around 15,000 for TensorFlow makes a tangible difference in development speed.

TensorFlow’s community skews toward production engineering, providing comprehensive enterprise-grade documentation and proven deployment patterns. Google’s backing ensures strong cloud platform integrations, particularly with Google Cloud, offering managed services that reduce operational complexity. The Model Garden provides production-ready implementations optimized for deployment rather than research experimentation.

Learning resources reflect these different audiences – PyTorch tutorials emphasize research workflows and novel implementations, while TensorFlow documentation prioritizes production deployment patterns and enterprise-scale systems.

Choosing the right framework for your project

Many successful teams use both frameworks strategically – researching and experimenting in PyTorch, then deploying in TensorFlow. The frameworks aren’t mutually exclusive. You can use ONNX to enable model conversion between them when needed.

When making a choice, it helps to prioritize factors most relevant to your project: Mobile deployment requirements may override other considerations, research-heavy work might make PyTorch essential, and enterprise support with MLOps integration could tip the scales toward TensorFlow. 

Use the table below to match your project requirements with the framework strengths. 

| Decision factor | PyTorch | TensorFlow |
| --- | --- | --- |
| **By use case** | | |
| Natural language processing | ✅ NLP standard choice | Only if mobile deployment is critical |
| Computer vision | ✅ Research/novel architectures | ✅ Production mobile/edge apps |
| Reinforcement learning | ✅ Research and experimentation | ✅ Large-scale production RL |
| **By experience level** | | |
| Beginner | ✅ More intuitive API | Keras simplifies learning |
| Intermediate/advanced | ✅ Research and prototyping | ✅ Production systems at scale |
| **By project phase** | | |
| Research/experimentation | ✅ Dynamic graphs aid iteration | Graph compilation for optimization |
| Rapid prototyping | ✅ Fast experimentation | Keras for simple models |
| Production deployment | TorchServe improving | ✅ Mature deployment tools |
| **By deployment target** | | |
| Cloud/server | Strong performance | ✅ Strong performance, slight GCP advantage |
| Mobile/edge devices | Basic support via PyTorch Mobile | ✅ TensorFlow Lite industry standard |
| Web applications | Via ONNX Runtime | ✅ TensorFlow.js optimized |
| **By team context** | | |
| Research-focused team | ✅ Natural fit for researchers | If already using TensorFlow |
| Production-focused team | If comfortable with tooling | ✅ Proven enterprise patterns |

When Telling a Manager "You Don't Have a Role" Backfires — A Lesson in Agile Coaching Humility | Peter Merel


Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"A failure is not a failure. A failure is just the first step." - Peter Merel

 

Peter Merel became a Scrum Master by stealth — long before the title existed. Credited in Kent Beck's first XP book and present at the first agile conference, Peter was practicing lightweight processes at Hewlett Packard in the late 1990s. When he took a role at GMAC, the residential finance arm of General Motors, he brought XP practices with him and found early success. After six months of strong results, the project manager, Mike Alakom, sat Peter down and asked the most dangerous management question: "What do I do?" Peter gave what he now calls the stupidest answer possible — "You don't really have a role in this process." The next day, Mike called an all-hands meeting and calmly maneuvered Peter into crediting the entire way of working as Mike's idea. Peter stayed on for another six months, but at arm's length. In hindsight, Peter recognizes Mike did exactly what he should have done.

The second failure came at Commonwealth Bank of Australia, where Peter was brought in to coach agile but was actually being set up to fail — a ripcord the organization could pull when it wasn't ready for change. The delivery manager, Des Webster, told Peter directly: "You were set up to fail." Peter walked away, thinking he'd never return. But six years later, every person he had coached had moved up in the organization, and Peter came back as principal coach for 50,000 people. The CIO declared Agile one of the bank's five pillars. Just because you hit the wall doesn't mean it's the end — it might be the beginning.

 

Self-reflection Question: When was the last time you failed at introducing change, and have you considered that the seeds you planted might still be growing in ways you can't yet see?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Peter Merel

 

Peter Merel was credited in the first agile book (Kent Beck's Extreme Programming Explained), keynoted the first agile conference, invented the first agile training game, founded the XSCALE Alliance, and authored The Agile Way. He developed software by hand for forty years, coached agile in person for twenty, and is now working to revolutionize the AI alignment landscape.

 

You can link with Peter Merel on LinkedIn. You can also find his work at agile.way.pm.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260504_Peter_Merel_M.mp3?dest-id=246429

When Your Words Sound Smooth But Mean Nothing


Bob Galen and Josh Anderson dig into one of the most quietly destructive habits in leadership communication. You know the feeling: you sit through a thirty-minute all-hands, and when it ends, you cannot answer the simplest questions. What did they just say? What do they want me to do?

This episode breaks down why so many leaders fall into the manager-speak trap, why kindness and fear often drive the obfuscation, and why softening the message creates the very anxiety you were trying to prevent.

Bob walks through how he coaches leaders to lead with clarity first and add nuance after. Josh shares what Stephen King's On Writing taught him about plain language and why simple words land harder than polished ones.

Together, they explore the courage required to communicate clearly inside a culture that rewards corporate jargon, the role of relationships in receiving direct feedback, and the practical question every leader should ask before walking out of a room. Does everyone know what we need to do?

If you have ever watched a slide deck full of synergy and strategic alignment leave a room more confused than when everyone walked in, this one is for you.

Stay Connected and Informed with Our Newsletters

Josh Anderson's "Leadership Lighthouse"

Dive deeper into the world of Agile leadership and management with Josh Anderson's "Leadership Lighthouse." This bi-weekly newsletter offers insights, tips, and personal stories to help you navigate the complexities of leadership in today's fast-paced tech environment. Whether you're a new manager or a seasoned leader, you'll find valuable guidance and practical advice to enhance your leadership skills. Subscribe to "Leadership Lighthouse" for the latest articles and exclusive content right to your inbox.

Subscribe here

Bob Galen's "Agile Moose"

Bob Galen's "Agile Moose" is a must-read for anyone interested in Agile practices, team dynamics, and personal growth within the tech industry. The newsletter features in-depth analysis, case studies, and actionable tips to help you excel in your Agile journey. Bob brings his extensive experience and thoughtful perspectives directly to you, covering everything from foundational Agile concepts to advanced techniques. Join a community of Agile enthusiasts and practitioners by subscribing to "Agile Moose."

Subscribe here

Do More Than Listen:

We publish video versions of every episode and post them on our YouTube page.

Help Us Spread The Word: 

Love our content? Help us out by sharing on social media, rating our podcast/episodes on iTunes, or by giving to our Patreon campaign. Every time you give, in any way, you empower our mission of helping as many agilists as possible. Thanks for sharing!





Download audio: https://episodes.captivate.fm/episode/a6b71cfc-76a7-41e1-b91f-7f0c223f33a5.mp3

129. Why Developers Still Matter in the Age of AI - with Tim Corey


In this episode, Rick & Oscar talk with Tim Corey about his unconventional path into software development and how it shaped his focus on teaching fundamentals with clarity. What started as simple YouTube videos grew into a global platform helping developers understand the "why" behind their work. They explore the complexity of modern tools and the importance of strong foundations. The conversation also touches on AI, which Tim sees as a tool that supports developers rather than replaces them.

About this episode, and Tim Corey in particular: you can find Tim on LinkedIn. Check out his website for all his courses, or watch them on his YouTube channel. And listen to his Hope episode.

About Betatalks: watch our videos and follow us on Instagram, LinkedIn, and Bluesky





Download audio: https://www.buzzsprout.com/1622272/episodes/19108485-129-why-developers-still-matter-in-the-age-of-ai-with-tim-corey.mp3

513: Agents Over Chat: The Future of Developer Workflows


James and Frank explore the future of developer workflows powered by AI agents, revealing how developers are shifting from coders to testers and product strategists. They dive into new research-planning-implementation cycles, GitHub's usage-based pricing changes, and why developers must think strategically about model choices and token costs. Plus, reflections on Apple's leadership transition and the need for bold hardware innovation.

Follow Us

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm

Support Merge Conflict





Download audio: https://aphid.fireside.fm/d/1437767933/02d84890-e58d-43eb-ab4c-26bcc8524289/0100c049-052f-46da-986e-469208470c70.mp3