Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Meta Plans Up to 8,000 Job Cuts in New Round of Layoffs


Meta is preparing a major round of layoffs that could cut up to 8,000 jobs as the company restructures and leans further into AI-driven operations.

The post Meta Plans Up to 8,000 Job Cuts in New Round of Layoffs appeared first on TechRepublic.

Read the whole story
alvinashcraft
9 minutes ago
Pennsylvania, USA

7 Best Static Code Analysis Tools


Investing in static code analysis tools might seem straightforward, but finding one that truly fits your team can be tough.

Most tools promise the usual benefits: cleaner code, fewer bugs, better security, and more consistency in code reviews. Yet in reality, there’s a big difference between a tool the team embraces and one that everyone tries to avoid.

Static analysis delivers real value only when it becomes part of everyday development, not just another compliance step at the end of the CI pipeline.

That is also why there is no single “best” tool for everyone. Some platforms are better suited to teams that need centralized quality control, while others offer support for security-heavy workflows, flexible customization, or a more developer-friendly experience. The right choice depends on what you want to improve most.

In this post, we’ll walk through some of the best static code analysis tools and help you figure out which one is the right fit for your team.
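To make concrete what these tools do under the hood, here is a toy check written against Python's standard `ast` module. It flags mutable default arguments, a classic Python pitfall. Real analyzers ship thousands of inspections like this one; this sketch is an illustration of the general technique, not any product's actual implementation.

```python
import ast

# Hypothetical snippet with a deliberate defect: the shared list default.
SOURCE = """
def add_item(item, items=[]):
    items.append(item)
    return items
"""

def find_mutable_defaults(source: str) -> list[str]:
    """Flag function parameters whose default value is a mutable literal."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                # Lists, dicts, and sets are shared across calls when used
                # as defaults, which is almost never what the author wants.
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(
                        f"line {default.lineno}: mutable default in {node.name}()"
                    )
    return findings

print(find_mutable_defaults(SOURCE))
```

The tools below differ mainly in which inspections they ship, how rules can be customized, and where the results surface: in the IDE, in CI, or on a dashboard.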

1. Qodana – built for developer-first teams and out-of-the-box integration

Qodana is JetBrains’ static analysis platform, and it’s built on the same inspection logic many developers already know from JetBrains IDEs. Its biggest advantage is that it does not treat code quality as a separate process. Instead, it extends familiar inspections into team workflows and CI/CD.

That makes Qodana especially strong for teams that care about both detection and adoption. Developers can catch issues locally, and teams can enforce standards in CI, with both sides working from the same logic.

Qodana is a strong fit for:

  • Teams that want code quality checks to feel native to their development environment.
  • Organizations that prioritize maintainability and consistency.
  • Teams that use JetBrains IDEs and want the same inspection logic locally and in CI.
  • Engineering cultures that value guidance over gatekeeping.

Its strength is not in trying to be everything at once. It stands out by helping teams improve code quality in a workflow that developers are more likely to trust and keep using.


2. SonarQube – for teams that need broad language coverage and AI fixes

SonarQube has held the top spot on the market for a while, providing broad language coverage for teams with highly varied tech stacks.

It is a good fit for:

  • Organizations standardizing quality processes across teams.
  • Teams that want centralized dashboards and policy enforcement.
  • Companies looking for a more governance-oriented approach.

One of the limitations is that this model can feel more external to day-to-day development. When static analysis is experienced mainly through gates and reports, adoption often depends more on process enforcement than on developer pull. It’s also worth noting that SonarQube’s pricing model is based on lines of code (LoC).

3. Snyk – for teams choosing static analysis as part of a broader security platform

Snyk makes sense when static analysis is only one part of a larger security strategy. Its main appeal is that code scanning sits alongside other security capabilities, such as dependency, container, and infrastructure analysis.

It is a strong option for:

  • Teams shifting security to earlier in the development process.
  • Organizations that want broader coverage against code and supply chain risks.
  • Companies where security is the main selection criterion.

One of the limitations is its emphasis on security. For teams focused primarily on everyday code quality, maintainability, license audits, and scaling, the experience may feel more security-centered than developer-centered.

4. Semgrep – for teams that want flexibility and custom rules

Semgrep stands out for speed, flexibility, and approachable rule customization. That makes it especially appealing to teams that want more control over how analysis works and what exactly gets flagged.

It works especially well for:

  • AppSec teams that want to write and refine custom rules.
  • Organizations that value flexibility and transparency.
  • Teams that want fast feedback loops and more control over detection logic.

One of the limitations is that flexibility assumes ownership. It delivers the most value when someone on the team is actively maintaining and evolving the rules.

5. Checkmarx – for enterprise-scale AppSec programs

Checkmarx previously partnered with Qodana to provide security vulnerability detection; those checks are now powered by Mend.io. Checkmarx itself still offers broad platform coverage, a deep security focus, and strong alignment with enterprise governance and compliance requirements.

It is a strong fit for:

  • Large enterprises with dedicated AppSec teams.
  • Regulated environments with audit or compliance pressure.
  • Organizations that want centralized security governance.

The downside is complexity. For smaller teams or organizations looking for lightweight adoption, it can feel like more machinery than they actually need.

6. Aikido – best for smaller teams that want broad security coverage

Aikido is an all-in-one security platform that combines multiple security capabilities (such as SAST, SCA, DAST, and CSPM) in one interface. Its positioning focuses on reducing noise, fast onboarding, and developer-friendly workflows, with an AI AutoFix feature for some issue types.

It is a strong option for:

  • Startups and mid-size teams that want a quick setup process.
  • Teams looking for broad security coverage in one place.
  • Organizations that prioritize reducing false positives.

One of the limitations is its focus. Because Aikido is a broader security platform, static analysis is only one part of the experience. For teams focused mainly on code quality and the everyday developer workflow, that broader security-first approach may be less aligned.

7. Codacy – best for teams that want AI-driven code quality and security in one platform

Codacy positions itself as a code quality and security platform for AI-accelerated coding, combining code quality, security, and quality gates in one product. Its current positioning strongly emphasizes AI-focused workflows and developer-facing checks in the IDE.

It is a good fit for:

  • Teams actively using AI coding assistants.
  • Organizations that want code quality and security together.
  • Teams that value easy onboarding and developer-friendly workflows.

One of the limitations is its positioning. Much of the product story is tied to AI-assisted development and broader platform coverage, which may feel less directly centered on static analysis itself. For teams that want inspections closely tied to everyday development and familiar IDE workflows, a more inspection-centered approach may feel more natural.

Which static code analysis tool should you choose?

The right tool depends on what your team needs most.

Some teams prioritize centralized control, others broader security coverage, and others flexibility in rules and configuration.

But if you want static analysis to feel like a natural part of development, Qodana stands out.

Qodana builds on the same inspection logic developers already know from JetBrains IDEs, helping teams align local development, CI checks, and shared code quality standards without turning static analysis into a separate process.

At the same time, Qodana goes beyond basic code quality checks. It includes security analysis capabilities and continues to evolve with more advanced inspections and team-wide quality controls, giving teams a way to scale both quality and security practices together.

The best tool is not the one with the longest feature list. It is the one your team will actually use to write better code.

Want to see how Qodana fits your team’s workflow? Try Qodana for free or request a demo.



Scenario Planning for AI and the “Jobless Future”


We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.

But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?

Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.

This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys is a debate that will only be resolved by time. Both sides have evidence. Both are probably right at some level. And both framings are not terribly helpful for anyone trying to figure out what to do next.

In a scenario planning exercise, you identify two key uncertainties and draw them as crossing vectors, dividing the possibility space into four quadrants. Each quadrant describes a different future. The power of the technique is that you don’t bet on one quadrant. You look for actions that make the most sense across all of them. And you’re not limited to doing this for only one uncertainty. You can repeat the exercise multiple times, each time expanding your sense of possible futures and clarifying your convictions about the most robust strategies for adapting to them.

For AI and jobs, the most obvious crossing vectors to model might seem to be how fast AI grows in its ability to replace human work and how quickly that capability is adopted. This is, in effect, scenario planning about whether the “AI is unprecedented” or “AI is normal technology” camp is correct. That might well be a useful pair of axes.

There’s no question that AI capability is accelerating. SWE-Bench scores for coding went from solving 4.4% of problems in 2023 to 71.7% in 2024, and we saw what was widely described as a “step change” beyond that in December of 2025. Anthropic’s new Mythos model seems to have upped AI capabilities even further. Even before Mythos, McKinsey estimated that today’s technology could in theory automate roughly 57% of current US work hours. But capability is not adoption. Goldman Sachs notes that AI appears to be suppressing hiring more than destroying existing jobs in the near term. Yale’s Budget Lab, analyzing US labor data from 2022 to 2025, found no massive shift in the share of workers across occupations. Deployment, not capability, seems to be the limiting factor.

As a result, it makes sense to me to synthesize these two factors (capability increase and rate of adoption) into a single vector that we can call the scale and size of impact. The question on this axis, therefore, is not just “How good does AI get?” but also “How fast does the economy actually reorganize around it?”

What’s a good second vector to cross with this one? If you’ve read my book WTF? or other things I’ve written about the role of human choices in shaping the future, you probably won’t be surprised that the second vector I’ve chosen reflects my conviction that the future depends on whether AI capability is primarily used to achieve efficiencies in existing work or to do more, to solve new problems and serve more human needs.

When Dorsey says a smaller team can now do the same work, that’s efficiency. When Insilico Medicine uses AI to design a drug for idiopathic pulmonary fibrosis in a fraction of the time traditional discovery takes (with over 173 other AI-discovered drugs also now in clinical development and 15 to 20 entering pivotal Phase III trials this year), that’s not replacing a human job. That’s doing something that wasn’t being done before. But we shouldn’t content ourselves with the idea that the “do more” axis is just about technical breakthroughs. It might be in serving a vastly larger number of people far more effectively and efficiently. When Todd Park says that his company, Devoted Health, “is on a mission to dramatically improve the health and well-being of older Americans,” that is a call to do more. Given the size of the existing markets that need to be transformed, it is likely that even with 10x or 100x efficiency gains from AI, Devoted’s 1,000x mission might require more resources, including people.

What will be scarce?

I’ve always assumed that the “do more” orientation is chiefly a moral argument driven by human judgment about what kind of world we’d prefer to live in. As the IMF noted earlier this year, “Work brings dignity and purpose to people’s lives. That’s what makes the AI transformation so consequential.” A world of concentrated value capture leading to a split between those with capital to invest and a permanent unemployed underclass is the stuff of dystopian science fiction.

But it’s not just a matter of inequality and the importance of work to human self-esteem. I’ve also become convinced that companies that lean into new possibilities and expand markets do better than those that simply do the same things more cheaply. There are a number of fascinating economics arguments for why the jobless future is just not going to happen. These arguments provide useful guidance into the structural changes to the economy that workers, business leaders, and politicians should be planning for.

Noah Smith pointed to a draft economics paper by Garicano, Li, and Wu that helps explain how the trade-offs between efficiency and expanding output might impact jobs. Garicano, Li, and Wu note that “the effect of AI on an occupation depends not just on which tasks AI can perform but also on how costly it is to unbundle those tasks from the job.” They model jobs as bundles of tasks, and distinguish between “strongly bundled” jobs (where the same person has to do multiple interdependent tasks) and “weakly bundled” ones (where tasks can easily be split between a human and an AI). AI replaces the weakly bundled jobs first. But even for weakly bundled jobs, automation only replaces human labor after demand becomes inelastic, after AI is so productive at the task that making more of the output hits diminishing returns.

Until that point, increased productivity from AI can be focused on expanding output rather than shrinking headcount. That is another way of saying that whether AI replaces workers or augments them depends in large part on whether there is unmet demand to absorb the increased productivity. If we use AI only to do the same things more cheaply, we hit that inelastic point fast, and jobs disappear. If we use it to do new things, demand keeps expanding and people keep working. University of Chicago economist Alex Imas believes that just how much demand elasticity there is on a job by job basis is one of the big questions of our time.
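That reasoning can be made concrete with a deliberately stylized model: constant-elasticity demand, with the price of output falling in proportion to productivity. This is my own toy sketch of the mechanism, not the Garicano, Li, and Wu model itself, and every number in it is illustrative only.

```python
def labor_after_ai(productivity_gain: float, demand_elasticity: float,
                   wage: float = 1.0) -> float:
    """Toy model: constant-elasticity demand, price = wage / productivity.
    Returns employment relative to its pre-AI level (pre-AI = 1.0)."""
    price = wage / productivity_gain           # unit cost falls as productivity rises
    quantity = price ** (-demand_elasticity)   # cheaper output induces more demand
    return quantity / productivity_gain        # workers needed to produce that output

# Elastic demand (unmet demand absorbs the productivity gain): employment grows.
print(labor_after_ai(productivity_gain=2.0, demand_elasticity=1.5))  # > 1

# Inelastic demand ("same things, cheaper"): employment shrinks.
print(labor_after_ai(productivity_gain=2.0, demand_elasticity=0.5))  # < 1
```

In this sketch, elasticity exactly equal to 1 leaves employment unchanged: lower prices and higher demand cancel out. The whole argument turns on which side of that line a given occupation's demand sits.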

But that’s not all. In a new essay called “What Will Be Scarce?” Imas points out that when a new technology makes one sector dramatically more productive, one part of the economy shrinks but another grows. When agriculture was mechanized, 40% of the American workforce moved off farms, but the economy actually grew, because people spent their rising real incomes on fundamentally different things. Imas argues, drawing on work by Comin, Lashkari, and Mestieri, that income effects account for over 75% of observed patterns of structural change. As people get richer, they want fundamentally different things.

What are those things? Imas calls it “the relational sector”: goods and services where the human element is itself part of the value; teachers, nurses, therapists, hospitality workers, artisans, performers, personal chefs, community curators, and more. He opens his piece with Starbucks. In pursuit of economic efficiency, the company tried to automate more and more of its operations. CEO Brian Niccol concluded that it was a mistake, that handwritten notes on cups, ceramic mugs, and good seats drove customer satisfaction. More baristas are being hired per store and automation is being rolled back.

But there’s far more to the relational sector than service jobs. Imas identifies a further dimension in what René Girard called mimetic desire, the idea that people don’t just want objects for their functional properties. They want things that others want, and they want them more when they’re scarce and exclusive. (Hobbes and Rousseau made this same point.) Imas’s experimental research shows that willingness to pay roughly doubles when people learn that others will be excluded from a product. And in new work with Graelin Mandel, he finds that AI involvement undermines the perceived exclusivity of a good. Human-made artwork gained 44% in value from exclusivity; AI-generated artwork gained only 21%. The mere involvement of AI made the work feel inherently reproducible.

This means the relational sector has naturally high income elasticity. If AI makes production cheaper and real incomes rise, spending shifts toward goods where the human element matters. This is Baumol’s cost disease working as a feature, not a bug: The sector that resists automation becomes relatively more expensive, and that’s precisely where spending and employment grow. This is an economic mechanism that could power the upper quadrants of the scenario grid that we will look at shortly, not just as a matter of moral choice but as a structural tendency of rich economies getting richer.
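The income-effect side of the argument can be sketched the same way. Assume, hypothetically, that relational goods have an income elasticity above 1; their budget share then rises mechanically as real incomes grow. The function and every parameter here are invented for illustration, not estimates from Imas's work.

```python
def relational_share(income: float, elasticity: float = 1.4,
                     base: float = 0.3) -> float:
    """Toy sketch: spending share on 'relational' goods when their income
    elasticity exceeds 1. All parameter values are illustrative only."""
    # With elasticity > 1, the share grows as income grows (capped at 100%).
    return min(1.0, base * income ** (elasticity - 1.0))

# As real incomes double and double again, the relational share climbs.
for income in (1.0, 2.0, 4.0):
    print(income, round(relational_share(income), 3))
```

The point of the exercise is qualitative: if the human element carries an income elasticity above 1, richer economies spend proportionally more on it, which is the structural tendency the essay describes.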

I’m going to include both Noah’s ideas and Alex’s in my scenario planning exercise, since they fit right in.

Four possible futures

Let’s look at how the two vectors cross each other and give us four futures.

[Figure: the four futures formed by the two crossing vectors]

Upper left: The Augmentation Economy. AI capability grows but adoption is gradual, and workers are augmented rather than replaced. A programmer who once wrote 100 lines of code a day now ships features that used to take a team. A nurse practitioner aided by AI diagnostic tools provides care that once required a specialist. A small business owner uses AI to access legal and financial services previously available only to large corporations. This is the quadrant where the PwC finding about the 56% wage premium makes the most sense. AI becomes a tool that makes individual workers more productive and more valuable, and the gains flow broadly. What makes this a positive, growing economy are at least in part the choices made by employers. They use the increased efficiency to build better services, not just to make them cheaper. Doctors and nurses have more time with patients and less time with paperwork. As services become more efficient, they can be offered to more people at lower cost.

Lower left: The Slow Squeeze. AI grows, adoption is gradual, and the primary use is efficiency. This is in many ways the most insidious quadrant, because it doesn’t look like a crisis. It looks like a normal economy with slightly fewer entry-level jobs each year, slightly more pressure on wages, and slightly less bargaining power for workers. That Stanford study on young software developers is a signal from this quadrant. So is the HBR finding that companies are laying off workers because of AI’s potential, not its performance. The Slow Squeeze is the world where companies use AI to pad margins without passing the gains along or investing in new capabilities.

Lower right: The Displacement Crisis. AI advances fast and is adopted rapidly, almost entirely for efficiency. This is the future the doomsayers warn about, the Citrini Research scenario of unemployment topping 10% and the S&P 500 tanking. Block’s 40% cut is a signal from this quadrant, whether or not Dorsey’s prediction that most companies will follow suit within a year turns out to be right. Deutsche Bank analysts warn that “AI redundancy washing,” companies blaming layoffs on AI that are really driven by other cost-cutting, will be a significant feature of 2026. But the fact that Wall Street rewarded Block with a 20% stock price jump for firing 4,000 people tells you what the current incentive structure is optimizing for.

Upper right: The Great Transformation. AI capability advances rapidly and is adopted fast, but the primary use is to do more, not just the same with less. Whole new industries emerge. The WEF’s projection of 170 million new roles by 2030 comes true, far exceeding the 92 million displaced. AI-driven drug discovery actually delivers on its promise. New forms of education, personalized to every learner, actually reach people the old system never served. The transition is still brutal, because the people losing old jobs and the people getting new ones are not the same people, in the same places, with the same skills. Brookings has identified 6.1 million workers with high AI exposure and low adaptive capacity, 86% of them women in clerical and administrative roles. But the net direction is toward more human capability, not less.

Imas’s framework suggests that this quadrant will feature an explosion of durable jobs in the relational sector. Some of these will be high touch service jobs: doctors, nurses, therapists, teachers, personal trainers, craft producers, experience designers, hospitality workers, and roles that haven’t been invented yet. The relational sector already employs nearly 50 million people in the US. But another big part of it will be creating exclusive products and services that become objects of desire. Art critic Dave Hickey calls this “the big beautiful art market” that happens when industrial products are “sold on the basis of what they mean rather than what they do.” The structural change model predicts that both of these areas will grow as a share of the economy, not because they resist automation as a technical matter but because not being automated is part of their value proposition.

Noah Smith’s taxonomy of future work also helps fill in what life may actually look like across these quadrants. He divides AI-affected jobs into three categories: specialists whose jobs are “strongly bundled” (for example, an experienced engineer whose judgment can’t be separated from the rest of what they do), salarymen (generalists whose value comes from knowing how to wrangle AI and plug its ever-shifting gaps, much like the Japanese corporate model where long-tenured employees rotate between divisions and accumulate firm-specific knowledge rather than portable technical skills), and small businesspeople (entrepreneurs who use AI as leverage to run what would previously have required a much larger team). This is the future that Steve Yegge envisions with its “millions of one-person startups.”

In the upper quadrants, all three categories thrive. Specialists do well because AI expands the scope of what their bundled expertise can accomplish. Salarymen thrive because companies that are doing more, not just doing the same with less, need people who can adapt to constantly changing tool capabilities within the context of their business. And small businesses proliferate because AI gives a one-person shop the productive capacity that used to require a department.

In the lower quadrants, specialists may survive, but salarymen face pressure as companies optimize for headcount reduction rather than capability expansion, and small businesses struggle because the efficiency-first economy compresses the margins they need to exist.
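The grid described above reduces to a small lookup table. A minimal sketch, with axis labels of my own choosing:

```python
from itertools import product

# The two crossing vectors from the essay; the axis labels are mine.
IMPACT = ("gradual", "rapid")            # scale and speed of AI's impact
ORIENTATION = ("efficiency", "do more")  # what the capability is used for

QUADRANTS = {
    ("gradual", "do more"): "The Augmentation Economy",
    ("gradual", "efficiency"): "The Slow Squeeze",
    ("rapid", "efficiency"): "The Displacement Crisis",
    ("rapid", "do more"): "The Great Transformation",
}

for impact, orientation in product(IMPACT, ORIENTATION):
    print(f"impact={impact:7s} use={orientation:10s} -> "
          f"{QUADRANTS[(impact, orientation)]}")
```

Encoding the grid this way makes the exercise's discipline explicit: every signal you collect should be assignable to exactly one cell, and your strategy should hold up across all four.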

News from the future

In scenario planning, once you’ve chosen your vectors and imagined the resulting quadrants, you watch for “news from the future,” data points that signal which direction the world is actually heading. As with any scatter plot, the points are all over the map at first, but over time you start to see the trend lines emerge.

Right now, the signals are mixed.

News from the lower quadrants: Challenger, Gray & Christmas reports that AI was a significant contributing factor in nearly 55,000 US layoffs in 2025. Employee anxiety about AI-driven job loss has jumped from 28% in 2024 to 40% in 2026. 40% of employers globally told the WEF they plan to reduce their workforce where AI can automate tasks within five years. And the entry-level job market is tightening in ways that compound over time even if they don’t show up in headline unemployment numbers. Brookings found that the “gateway” occupations that serve as stepping stones from low-wage to middle-wage work are among the most exposed to AI, threatening career pathways, not just individual jobs.

News from the upper quadrants: The PwC wage premium data. The Vanguard finding that AI-exposed occupations are growing, not shrinking. The explosion of AI drug discovery programs. MIT’s David Autor has shown that 60% of today’s US employment is in job categories that didn’t exist in 1940. New task creation is how technology has always generated new work, and there’s no reason to believe AI is exempt from that pattern, unless we choose to use it only for efficiency.

There may also be some signal in reports that usage among developers is becoming more intensive and continuous, from multistep coding workflows to automated agents running in loops. Some engineers are “tokenmaxxing,” treating AI consumption as a productivity benchmark, reportedly including at companies like Meta. This is driving rapid revenue growth for AI providers but squeezing their margins as infrastructure costs rise. That margin pressure may sound like bad news, but it’s actually a classic pattern by which a technology crosses from “tool” to “infrastructure.” Cloud computing margins were terrible until scale and hardware improvements drove unit costs down, at which point the providers who had built habit and lock-in harvested enormous returns. AI inference costs have been dropping roughly 10x per year, and price competition is accelerating that decline. The margin squeeze is the mechanism by which AI becomes cheap enough to be ubiquitous. And the tokenmaxxing engineers are doing dramatically more iterations, more exploration, with more ambitious scope. That’s “do more” behavior, not efficiency behavior.
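For a sense of scale, a roughly 10x-per-year decline compounds very quickly. A minimal sketch, assuming the cited rate holds steady:

```python
def inference_cost(initial_cost: float, years: float,
                   annual_drop: float = 10.0) -> float:
    """Cost after `years` if unit costs fall `annual_drop`x per year
    (the ~10x rate is the assumption cited in the text)."""
    return initial_cost / (annual_drop ** years)

# A dollar of inference today would cost a tenth of a cent in three years.
print(inference_cost(1.00, 3))
```

At that rate, workloads that are uneconomical today become routine within a product cycle or two, which is what makes the tool-to-infrastructure transition plausible.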

It’s still unclear, though, whether all those tokens are producing real value or whether some of this is the AI equivalent of crypto mining. If most of those tokens are productive, we’re looking at a productivity boom. If many are wasted, the adoption curve may have a big dip in it before industry matures. Either way, though, the direction is toward AI becoming economic and technology infrastructure. It’s important to remember that tokens spent trying out prototypes that are rejected are not necessarily wasted. They can be part of a new development process that’s expanding the space of possibilities.

News that doesn’t fit neatly into any quadrant: We appear to be in what Smith calls a “no-hire, no-fire” economy, where workers hunker down in their current jobs and refuse to switch, and companies keep them rather than hiring new workers. That’s consistent with a world where people sense that their portable technical skills are depreciating, so they cling to the firm-specific knowledge that still makes them valuable where they are. It’s also consistent with the NBER Denmark study finding task reorganization without job loss: AI is replacing tasks, not (yet) jobs. Nonetheless, it is clear that a dearth of entry level positions will be a serious issue.

A University of Pittsburgh researcher has been calling state unemployment offices one by one to assemble the granular data that doesn’t yet exist in federal statistics, because our measurement tools are not yet fine-grained enough to see what’s happening. If you’re confused about whether AI is causing job losses, he put it plainly: The likely problem is a lack of data. If AI is having an impact, we may just not be equipped to see it yet with the instruments we have. We’re getting new data points daily. Asking yourself which future they support can gradually increase your confidence in what is coming.

Robust strategy

The goal of a scenario planning exercise is to stretch your thinking so that you can make strategic choices that make sense regardless of which future unfolds. Scenario planners call this a “robust strategy.”

If you’re a business leader, the robust strategy is not to ask “How many people can I replace with AI?” It’s to ask “What can we do now that we couldn’t do before?” The companies that will thrive across all four quadrants are the ones that use AI to expand what’s possible, not just to shrink how much they have to spend. Aim for the upper right quadrant, and you’ll do better even if the rest of the world chooses otherwise.

That’s not just scenario planning. It’s Clay Christensen on the lessons of disruptive technologies. A disruptive technology is not defined by the markets it destroys but by the new markets and new possibilities it creates. As Christensen observed, RCA didn’t ignore the transistor; its leaders just thought it wasn’t good enough for its current customers. Sony embraced the new technology and created a new market of portable devices where the quality difference between transistors and vacuum tubes just didn’t matter. And of course, as Clay observed, the disruptive technology continues to improve.

If you’re a worker, one element of robust strategy is to band together, as the screenwriters guild did, and to make the case that the productivity gains from AI should be shared with workers and used to amplify their skills and efforts. Don’t resist AI, but instead use it to make yourself even more valuable. Use it to amplify your uniqueness. That is, lean into the augmentation economy. One of the things we’ve learned from the early advances in AI-enabled software engineering is that a great software engineer can get more out of AI than a vibe-coding beginner. This is true of other professions as well. Find ways that your human uniqueness makes the output of AI even more valuable.

Create professional associations that lean into mentorship and an AI-enriched career ladder, but aren’t afraid to take a political stance. The idea that providers of capital are entitled to all of the gains is a pernicious idea that has created an engine of inequality rather than of wide prosperity. It doesn’t have to be that way. Professional associations and other forms of solidarity are a possible source of countervailing power. (But don’t fall into the trap that many unions and professional associations do, of using that power to extract rents rather than increasing value for everyone.) Preferentially choose employers who are investing in training employees for a human + AI future, including at the beginning of the career ladder.

If you’re a specialist, deepen the parts of your expertise that are strongly bundled, the judgment and context and human relationships that can’t be separated from the technical work. If you’re a generalist inside a company, become the person who understands what AI can and can’t do and fills the gaps, whose value comes from adaptability and firm-specific knowledge rather than a fixed set of technical skills. And if you have entrepreneurial instincts, recognize that AI is creating leverage that may make it possible to run a viable business at a scale that previously couldn’t support one.

Imas’s work suggests that the most durable career paths may not be defined by which tasks AI can’t do (a moving target) but by whether the human element is part of what the customer is paying for. A restaurateur, a therapist, a teacher who knows your child, or a guide who knows the trail aren’t jobs that survive because AI hasn’t gotten to them yet. They’re jobs where human involvement is the product.

If you’re an entrepreneur, the robust strategy is the one it has always been: look at the world as it is, determine what work needs doing, and do it. Don’t build AI tools that replace humans doing things that are already being done adequately. Build AI tools that let humans do things that have never been done before.

If you’re a policymaker, the robust strategy is to invest in the transition regardless of how fast displacement turns out to be. Create policies that give workers more of a role in how AI is used. Support positions like those of the writers guild, which allow workers to get a share of the gains from using AI. And if capital runs wild with labor replacement, tax the gains so that the benefits of efficiency can be redistributed. Shorten the working week.

Education and lifelong learning programs, portable benefits, support for geographic mobility, and investment in the industries of the future pay off in every quadrant. So does reducing the regulatory friction that keeps new entrants trapped in old cost structures, funding basic research that the market underinvests in, and building the kind of infrastructure (physical and institutional) that enables rapid adaptation.

The future is up to us

I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.

Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out; one that uses it to expand what is possible will renew itself.

Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.

Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs.




How the Best Companies Use AI

From: AIDailyBrief
Duration: 22:03
Views: 81

A deep dive into what separates the AI leaders from the laggards, drawing on the recent PwC study, McKinsey's AI Transformation Manifesto, George Sivulka's a16z essay on institutional vs. individual AI, and a close look at how Ramp built its internal AI system Glass. The throughline: leading companies treat AI as a growth and opportunity technology, and they build organizational systems that raise the floor for every employee rather than leaving people to figure it out alone.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


997: Rating and Roasting Your Projects


Scott and Wes dig into a huge batch of community-submitted projects, from JSON tools and CSS editors to AI agents, view transitions, and everything in between. It’s a rapid-fire showcase of what developers have been building, including picks like Arrow JS, Sugar High, Drift, and a whole lot more.

Show Notes

Hit us up on Socials!

Syntax: X Instagram TikTok LinkedIn Threads

Wes: X Instagram TikTok LinkedIn Threads

Scott: X Instagram TikTok LinkedIn Threads

Randy: X Instagram YouTube Threads





Download audio: https://traffic.megaphone.fm/FSI2715298792.mp3

When Passion Becomes the Problem — How Pushing for Agile Change Too Fast Creates Resistance | Viktor Glinka


Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"I wanted to change the organization overnight with my eagerness and passion. Instead of helping the system to evolve, I created resistance. I became the problem myself." - Viktor Glinka

Viktor shares one of the most honest failure stories we've heard on the show. Early in his Scrum Master career, he joined a finance organization as a Scrum Master for a newly created department — his first experience in a scaled setup. Each team owned a particular part of the user journey, organized around components. After getting exposed to Large-Scale Scrum (LeSS) through a colleague, Viktor became overexcited. He started pushing for structural changes daily, telling the head of department that the current team composition was wrong and they needed cross-functional feature teams. But he was disconnected from reality. For this particular organization, even having partially cross-functional teams was already a big stretch. Worse, the head of department wasn't even authorized to make the changes Viktor was pushing for. Instead of helping the system evolve, he created resistance. What proved his approach wrong? That same department later received a European Award for being the best mortgage department. It took Viktor a few more years and similar cases to fully absorb the lesson: read the room, develop sensitivity to the system's pace, and stimulate reflection in decision makers rather than pushing your own agenda.

In this episode, we refer to organizational development, LeSS (Large-Scale Scrum), and systems analysis. Viktor also mentions the interview with Bas Vodde on the Scrum Master Toolbox Podcast.

Self-reflection Question: When was the last time you pushed for a change because you believed it was right, without checking whether the system was ready for it? What would happen if you started by asking decision makers what they think would be a good next step?

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

Buy Now on Amazon

[The Scrum Master Toolbox Podcast Recommends]

About Viktor Glinka

Viktor is an organisational consultant and Professional Scrum Master who helps teams and leaders find simpler ways to deliver value while keeping the human side of work at the center. He's practical, curious, and focused on real outcomes rather than buzzwords. His true passion is adaptability - both in business and in personal life.

You can link with Viktor Glinka on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260420_Viktor_Glinka_M.mp3?dest-id=246429