Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Your AI Problem Is a Data Problem

1 Share

A few weeks ago, I sat in a room full of data engineers who were worried about AI automating them out of work the same way auto manufacturing in Detroit was upended half a century ago.

All AI. All the time. That’s what technology professionals are talking about.

Data scientists, data engineers, and data architects are right to sound the alarm. Using AI to solve and automate data problems at the very start of the pipeline is an obvious use case for agentic engineering in data: shifting AI left for automation.

That looms as a threat to the data engineers who own the pipeline underlying the architecture and deliverables. It's a discussion we can no longer avoid. In every field, AI is looming, bringing with it new risks and bigger change.

Introducing AI at that layer can be dangerous, and that's a conversation all its own. You hear horror stories about AI initiatives that failed, and about what failed them.

Agentic frameworks stall because the retrieval layer can't be trusted. RAG pipelines work in demos, then fall apart in production. Problems that should have been solved upstream get patched with governance tools downstream.

The conversation comes back to one thing. The data wasn’t ready.
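To make "ready" concrete, here is a minimal, hypothetical upstream gate for a RAG ingestion step. The field names and thresholds are my own illustration, not a reference implementation: the idea is simply to reject documents that would poison retrieval before they reach the index, rather than building governance around bad answers afterward.

```python
# Hypothetical upstream quality gate for RAG ingestion.
# Field names and the staleness threshold are invented for this sketch.

from datetime import date

MAX_AGE_DAYS = 365  # illustrative: stale facts degrade retrieval quality

def ready_for_index(doc, today=date(2025, 6, 1)):
    """Return True only if the document is safe to embed and index."""
    if not doc.get("body", "").strip():
        return False  # nothing meaningful to embed
    if not doc.get("source"):
        return False  # an answer built on this could never be traced back
    age_days = (today - doc["updated"]).days
    return age_days <= MAX_AGE_DAYS

docs = [
    {"body": "Refund policy: 30 days.", "source": "kb/refunds", "updated": date(2025, 3, 1)},
    {"body": "   ", "source": "kb/empty", "updated": date(2025, 3, 1)},
    {"body": "Old pricing table.", "source": "kb/pricing", "updated": date(2023, 1, 1)},
]
index_queue = [d for d in docs if ready_for_index(d)]
print(len(index_queue))  # only the first document passes the gate
```

A check this small won't save a demo-quality pipeline on its own, but it moves the failure to ingestion time, where someone can own it.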

Don’t neglect the data layer

A Cloudera and Harvard Business Review study from March 2026 found that only 7% of enterprises consider their data completely ready for AI, and over a quarter said it wasn’t ready at all. Another data point: In Informatica’s 2025 CDO Insights survey, 43% of organizations named data quality and readiness as their top obstacle to AI success. Not model performance. Not tooling. Data.

So why does this keep happening?

Organizations are treating AI as a technology procurement decision. Buy the platform, hire the engineers, deploy the models. But the foundation underneath those initiatives—the data layer—is missing.

The data wasn’t governed. The lineage wasn’t tracked. The pipeline was built for reporting, not for model consumption.

Nobody owned the quality problem. And when the model surfaced a confident, wrong answer, nobody could trace it back to find out why. The engineers in that room could easily be part of the solution.

That’s not an AI problem. That’s a data problem that AI made visible.

Readiness starts before the model

Data that feeds AI systems needs to be made consistent and owned. Not owned in the sense of having a name in a RACI chart. Owned in the sense that an engineer or data professional is accountable when it degrades. Lineage matters because AI outputs are only as auditable as the data behind them. Quality matters because model performance in production is directly correlated with what goes in.
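The auditability point can be sketched with a toy lineage graph. Every name below is invented for illustration; the takeaway is only that when a derived dataset records its inputs, a wrong model answer can be walked back to its root sources.

```python
# Toy lineage record: each derived dataset remembers its inputs.
# All dataset names are hypothetical.

lineage = {
    "model_features_v3": {"inputs": ["clean_orders", "clean_customers"]},
    "clean_orders":      {"inputs": ["raw_orders"]},
    "clean_customers":   {"inputs": ["raw_crm_export"]},
    "raw_orders":        {"inputs": []},
    "raw_crm_export":    {"inputs": []},
}

def trace(dataset):
    """Breadth-first walk: every upstream dataset feeding `dataset`, roots last."""
    upstream = []
    queue = list(lineage[dataset]["inputs"])
    while queue:
        name = queue.pop(0)
        if name not in upstream:
            upstream.append(name)
            queue.extend(lineage[name]["inputs"])
    return upstream

print(trace("model_features_v3"))  # intermediate tables first, raw sources last
```

Real lineage systems capture far more (transformations, owners, timestamps), but even this much is enough to answer "where did that number come from?"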

These aren’t new principles. They’re established data engineering practices. They just haven’t been treated as AI deployment fundamentals. That needs to change.

Data readiness closes the gap between AI ambition and AI outcomes. McKinsey’s 2025 State of AI survey found that organizations investing in their data foundations first were more likely to see real financial returns from AI. Without solutions like data contracts between producers and consumers, automated quality monitoring at the pipeline level, and governance frameworks that treat AI as a first-class data consumer rather than an afterthought, your AI spend will be wasted.
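As one hedged illustration, a data contract between a producer and an AI consumer can start as nothing more than a set of per-field checks enforced at the pipeline boundary. The field names and rules below are invented for this sketch, not taken from any particular framework.

```python
# Minimal sketch of a data contract check at a pipeline boundary.
# Fields and rules are illustrative only.

from dataclasses import dataclass

@dataclass
class ContractViolation:
    row: int
    field: str
    reason: str

CONTRACT = {
    "customer_id":    lambda v: isinstance(v, str) and len(v) > 0,
    "signup_date":    lambda v: isinstance(v, str) and len(v) == 10,  # YYYY-MM-DD
    "lifetime_value": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(records):
    """Surface violations instead of silently passing bad rows downstream."""
    violations = []
    for i, rec in enumerate(records):
        for field, check in CONTRACT.items():
            if field not in rec:
                violations.append(ContractViolation(i, field, "missing"))
            elif not check(rec[field]):
                violations.append(ContractViolation(i, field, "failed check"))
    return violations

records = [
    {"customer_id": "c-001", "signup_date": "2024-01-15", "lifetime_value": 120.0},
    {"customer_id": "", "signup_date": "2024-1-5", "lifetime_value": -3},
]
print(len(validate(records)))  # 3 violations, all from the second record
```

The design choice that matters is where this runs: at the producer's boundary, so the producer is accountable before a model ever consumes the data.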

The engineers in that room worried about being automated out of work. But data engineers who understand pipelines, lineage, and quality in depth aren’t facing obsolescence. In fact, there’s a good chance demand for their services will soon spike, as organizations realize their AI initiatives aren’t failing because they hired the wrong AI engineers. Those initiatives failed because the organizations didn’t invest in data infrastructure and the engineers behind it.

The data engineering job isn’t going away. It’s changing shape as it solves a problem we’re all facing and talking about.

For data engineers, AI readiness is a table stakes deliverable now. That means owning the data that feeds AI systems, and building governance frameworks around what AI actually consumes. AI engineers, for their part, have to stop treating the data layer as someone else’s problem. When an agentic framework stalls or a RAG pipeline falls apart in production, the instinct is to look at the model or the retrieval architecture. The data is usually where the answer is. It behooves these two disciplines to share a definition of “done” that includes the data being ready before the model is deployed rather than after it fails.

The AI problem, for most organizations, is a data problem that can be solved by data engineers and data professionals. The sooner that lands in the boardroom, the better the odds that the next initiative doesn’t end up in the abandoned 42%.



Read the whole story
alvinashcraft
44 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

“Friction-maxxing”, Failure, and Learning to Code


In a culture obsessed with optimization (global maximums only, please), the internet has taken a particular enjoyment in finding things to “maxx”: tokenmaxxing, looksmaxxing, funmaxxing, sleepmaxxing, etc. If only we find the right virtue to optimize, perhaps all will be right in our lives. Earlier this year, one of these emerging net-native neologisms caught my attention because of the way it echoes a concept in education research that I think deserves more attention.

To practice what I preach, I drew all of these comics by hand on physical paper, scanned them into drawing software I didn’t know how to use, and proceeded to have many loving confrontations with our design team about “preserving the professional image of JetBrains”. Friction galore!

“Friction-maxxing” is the internet-native’s name for increasing the amount of friction in our passive and hyper-convenient, smooth-city lives. The term is said to have originated in an essay by sociologist Kathryn Jezer-Morton. With endless services and products designed to make our lives more efficient and easier, friction-maxxing is a lifestyle that believes in the value of doing hard things. It might be that embracing and seeking these things out is actually what makes you smarter and happier in the long term.

As silly as it is, taking this idea seriously could hold the key to getting through a computing program with your critical and computational thinking intact. It might also make you happier, smarter, more resilient, and better equipped for the absolutely wild job market we are hurtling toward at top speed.

Me trying to study hard and learn to be useful to my society.

How does all of this apply to learning technical skills? Well, over the past few decades, lots of research, courses, and products have emerged with the express goal of making learning to code easier. Smoother.

It’s a domain with a steep learning curve. Research suggests that introductory CS courses have some of the lowest pass rates among STEM fields. As I discussed in my video Is Programming Actually Hard to Learn?, this reputation isn’t because only 0.6% of human brains are capable of learning to code; it’s more of a cultural belief that becomes a self-fulfilling prophecy reflected in the data. Thankfully, a lot of people are working to change that by helping to make learning computing skills friendlier to all kinds of brains and bodies.

screenshot from the video "is programming actually hard to learn"
Is this helping? Check out JetBrains Academy on YouTube.

If we’ve smooth-maxxed our way to a place where information is ever-present but the time and attention needed to process, learn, and master it is absent, where does that put us? Is anyone actually doing any learning here, or are we just hoarding Coursera courses for a day that never comes?

DO HARD THINGS

As I discussed in a previous piece and (upcoming!) video, AI tutoring tools can have the eerie effect of making you feel like you’re learning more than you actually are. This is, to some extent, the final form of smooth-maxxed education. Simply dunk your brain into the machine, watch passively as it produces magic, debugs your code, explains a concept, and then surface, head empty. A smooth learning experience, yet almost nothing learned.

comic of the head in the tub

I’ve mentioned the importance of developing computational thinking before. Given the uncertainty of how good AI is ultimately going to become at technical disciplines, it’s kind of the only skill I can responsibly say will remain useful. Well, that, spec-driven development, and mastering LLMs; someone should know what’s going on behind the scenes.

In my previous work, I advocated that people pick up these mysterious skills with the clichéd, vague advice: “do hard things.”

 me under a rainbow that says “do hard things!”, an unimpressed audience

Now, let’s actually go a little deeper into the research on learning, friction, and failure, inspired by this (several months out of date) cultural moment of friction-maxxing.

THE RESEARCH

If we lived in a world where Git commits gatekept access to food, maybe babies would evolve to pick up a bit of Python passively by age three. Thankfully, that’s not (yet) the case. Babies expend no effort in learning languages because they benefit from our brain’s capacity for passive neuroplasticity.

While there are many domains of knowledge where experiential, play-based learning is sufficient to impart essential skills, software development is not one of them. Despite being surrounded by technology and code all day, if you want to learn to build software, you’re going to need to put some effort into it.

This “effort” is, in practice, a capacity we develop as adults to engage our active neuroplasticity to learn things through concentrated effort rather than just being a sponge. Adults can achieve the exact same learning outcomes as children; we just need to learn things more incrementally. This is why we learn through courses with structured curricula instead of having an AI read us the most beautiful lines of code ever written before we go to bed.

ai chip reading us to sleep - book: Goodnight Mockoon
“Mockoon” is a popular API mocking tool.

In the brain, activating our active neuroplasticity involves a cocktail of hormones regulating how alert ((nor)epinephrine), motivated (dopamine), and satisfied (serotonin) we are. This alertness or stress we feel in response to a challenging problem is literally the trigger to prepare our brain to learn something new. Failing and making mistakes are especially important, since they activate our memory more effectively than getting everything correct. 

In computing, this productive failure often takes the form of debugging, which, while comparable in enjoyability to eating rocks, is how many senior developers say they built their deep understanding of code and technical systems. 

Contrary to the besties on your short-form feed, learning research disagrees that we need only to “maxx” out on friction and failure to achieve genius status. Too much failure too soon can lead to demonstrably worse learning outcomes. As learners, we have to learn to adequately deal with the discomfort of learning before it sabotages our self-esteem and we stop believing ourselves capable of climbing the learning curve. 

meme: c’mon, do something, but it’s the hormone and a brain, maybe some bugs
By doing hard things like debugging, we send our brains a hormonal signal that it needs to adapt and learn.

In education research, dealing with the bad feelings that come with learning new stuff is known as self-regulation. The good news is, there is an ever-growing catalog of interventions that can help people stay chill enough to succeed in doing (and failing to do) hard things.

The bad news is, self-regulation strategies are almost never taught to students explicitly, especially in computing, where most curricula are allergic to any mention of a “person” with “feelings”. Why is this? I honestly see no good reason for it. My best guess is that maybe for the educators who tend to teach computing skills, these self-regulation practices were obvious or invisible to them. Maybe they happen to be the people who struggled with failure less, due to their own biochemistry or cultural background. 

Nevertheless, this gross oversight can be corrected fairly easily. This excellent paper even made a one-page handout, the “Student’s Guide to Learning from Failure”, which details a wealth of science-backed strategies for managing the hormones bouncing around your wrinkly blob. 

One read-through of the Student’s Guide might give a few good tips, but the important thing is actually putting them into practice. Simply knowing about behavior change strategies does not guarantee long-term change. The sauce is in the doing, the failing, and the re-doing. Most importantly, it’s also in learning when to not do. We need downtime to integrate new knowledge and rest to regulate our bodies. Could it be that the most productive friction in education is to be found not in seeking out more information, but in slowing down and integrating the information we already know? Possibly, but I need some time to think about it.

Goodbye! Check out our free courses and student pack below!

If you liked this, check out our series How to Learn to Program in an AI World: Is It Still Worth Learning to Code?, Learning to Think in an AI World: 5 Lessons for Novice Programmers, Should You use AI to Learn to Code?, and How to Prepare for the Future of Programming.

Clara Maine is a technical content creator for JetBrains Academy. She has a formal background in Artificial Intelligence but finds herself most comfortable exploring its overlaps with education, philosophy, and creativity. She writes, produces, and performs videos about learning to code on the JetBrains Academy YouTube channel.


Kubernetes at Uber with Lucy Sweet


Our guest is Lucy Sweet, a Staff Software Engineer at Uber and the lead for the Kubernetes Node Lifecycle Working Group. Imagine trying to move millions of compute cores and thousands of microservices to a brand-new platform, all without dropping a single user request, ride, or delivery. Sounds like an absolute logistical nightmare, right? Well, today we sit down with someone who actually lived to tell the tale: Lucy. In this episode, we dive deep into Uber's monumental infrastructure journey: moving away from their in-house system to Kubernetes. We'll unpack the reality of running at this scale, why it's always DNS, and why building things for fun is worth it.

Do you have something cool to share? Some questions? Let us know:

- web: kubernetespodcast.com

- mail: kubernetespodcast@google.com

- twitter: @kubernetespod

- bluesky: @kubernetespodcast.com

 

News of the week

Links from the interview

 





Download audio: https://traffic.libsyn.com/secure/e780d51f-f115-44a6-8252-aed9216bb521/KPOD266.mp3?dest-id=3486674

Merging Three Companies Into One Platform — When Founders Can't Let Go and Leaders Won't Decide | Mukhtar Kadiri


Mukhtar Kadiri: Merging Three Companies Into One Platform — When Founders Can't Let Go and Leaders Won't Decide

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"A lot of times, conflict arises because people don't understand each other. The first thing you need to do is make sure they understand each other." - Mukhtar Kadiri

 

Mukhtar brings us a challenge from a merger and acquisition program where a dominant software company acquired two competitors simultaneously — both solving the same market gap, each with their own platform, their own founders still in place, and their own fierce loyalties. The mission: merge three platforms into one. But the technical challenge was the easy part. The real complexity was human — founders who'd built their companies from scratch watching their babies potentially get retired, teams losing people to low morale and uncertainty, and leadership paralyzed by the knowledge that every decision would make somebody unhappy. Together, Mukhtar and Vasco explore a four-step approach to navigating these high-stakes disagreements: first, create a feeling of time abundance — never rush a decision that requires buy-in. Second, get each side to present their perspective with only clarifying questions, no judgment. Third, name the disagreement explicitly — turn emotions into concrete, debatable statements. And fourth, co-create an alternative solution that doesn't come from either original position, because co-creation builds commitment. Mukhtar adds a critical fifth element: steel-manning — having each side articulate the other's argument as if defending it. When people feel genuinely understood, even "disagree and commit" becomes possible.

 

In this episode, we refer to steel-manning and the concept of disagree and commit.

 

Self-reflection Question: When you're facilitating a disagreement between two strong positions, do you rush toward a decision — or do you invest the time to make sure both sides can articulate each other's argument before you even think about next steps?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Mukhtar Kadiri

 

Mukhtar Kadiri is a PM career coach with 15+ years in project management. He specializes in helping project and program managers land $100–300K roles. He's been named the #1 PM in Canada. He also has a LinkedIn following of 67K+ professionals. He shares practical insights for FREE on LinkedIn, where he talks about job search, career growth, and thriving as a PM.

 

You can link with Mukhtar Kadiri on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260513_Mukhtar_Kadiri_W.mp3?dest-id=246429

Introducing Azure Container Apps Express!


Three years ago, a 15-second cold start was industry-leading. Today, developers and AI agents expect sub-second. The speed bar has moved, and the tooling needs to move with it.

After running Azure Container Apps for years, we've learned something important: for most developers, the ACA environment is an unnecessary construct. It adds provisioning time, configuration surface, and cognitive overhead — when all you really want is to run your app with scaling, networking, and operations handled for you.

At the same time, a new class of workloads has emerged. Agent-first platforms — systems where AI agents deploy endpoints on demand, spin up tool-use APIs, and tear them down when work is done — demand an even more radical focus on speed and simplicity. Every second of provisioning delay is wasted agent productivity.

Today, we're launching Azure Container Apps Express in Public Preview — the fastest, simplest way to go from a container image to an internet-reachable app on Azure, ready for many production-style workloads.

 

What Is ACA Express?

ACA Express removes the infrastructure decisions. There's no environment to provision, no networking to configure, no scaling rules to write. You bring a container image, Express handles everything else.

Behind the scenes, Express runs your container on pre-provisioned capacity with sensible defaults baked in — so you skip environment setup without giving up ACA's serverless model. There's more coming in this space soon — keep watching.

Here's what that means in practice:

  • Instant provisioning — your app is running in seconds, not minutes
  • Sub-second cold starts — fast enough for interactive UIs and on-demand agent endpoints
  • Scale to and from zero — automatic, no configuration required (full scaling controls coming soon)
  • Per-second billing — pay only for what you use
  • Production-ready defaults — ingress, secrets, environment variables, and observability are built in

Express is purpose-built for two audiences: developers who want to ship fast (SaaS apps, APIs, web dashboards, prototypes) and agents that deploy on demand (MCP servers, tool-use endpoints, multi-step workflow APIs, human-in-the-loop UIs). If you've ever waited for an ACA environment to provision, only to realize you didn't need half of the configuration options it asked you for — Express is your answer.

 

What You Can Do Today

Note: West Central US is currently the only available region. We will expand to more regions over the coming days.

Express is in Public Preview starting today. It's a deliberate early ship — there's a meaningful feature gap compared to the existing Azure Container Apps offering, and we're filling it fast. New capabilities are landing on a rapid cadence throughout the preview, and by Microsoft Build in June, Express should be close to feature-complete.

For the current list of supported features, known gaps, and what's on the way, see the Express documentation.

We'd rather put valuable technology in your hands early and iterate with you than wait behind closed doors for perfection.

 

Who Is Express For?

  • SaaS apps and APIs: Deploy and scale without infrastructure planning
  • AI app frontends: Chat UIs and copilot frontends that scale with usage spikes
  • MCP servers: Expose API endpoints for AI agents in seconds
  • Agent workflows: Spin up endpoints on demand, tear down when done
  • Prototypes and startups: Go from idea to production in minutes
  • Web dashboards: Internal tools with instant availability

 

Get Started

Note: Documentation links may not be available yet. They will become available throughout the day.

 

Express is available now in Public Preview. Try it:

Have questions? Check the Azure Container Apps Express FAQ for answers to common questions about pricing, limits, regions, and the road to GA.

We're building Express in the open and we want to hear from you. Tell us what features matter most, what works, and what doesn't — reach out on the Azure Container Apps GitHub or in the comments below.


How Braze’s CTO is rethinking engineering for the agentic era

Jon Hyman, co-founder and CTO of Braze, shares how he's led the company's engineering organization over nearly 15 years of growth — and how they transformed into an AI-first team in just a few months.