Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
Azure Resource Mover: What Actually Moves, What Doesn’t

1 Share
All sample PowerShell companion code for this blog can be found here. Azure has plenty of tools that do one thing really well, and Azure Resource Mover fits right into that category. If you need to move supported resources across regions without rebuilding from scratch, this is your tool. The trick is knowing what it was built for, what it refuses to touch, and how to use it without creating a surprise outage. This guide walks through what Resource Mover is good at, what it is not, how to...

Read the whole story
alvinashcraft
54 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase | Lou Franco


BONUS: Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase

In this fascinating conversation, veteran software engineer and author Lou Franco shares hard-won lessons from decades at startups, Trello, and Atlassian. We explore his book "Swimming in Tech Debt," diving deep into the 8 Questions framework for evaluating tech debt decisions, personal practices that compound over time, team-level strategies for systematic improvement, and leadership approaches that balance velocity with sustainability. Lou reveals why tech debt is often the result of success, how to navigate the spectrum between ignoring debt and rewriting too much, and practical techniques individuals, teams, and leaders can use starting today.

The Exit Interview That Changed Everything

"We didn't go slower by paying tech debt. We went actually faster, because we were constantly in that code, and now we didn't have to run into problems." — Lou Franco

 

Lou's understanding of tech debt crystallized during an exit interview at Atalasoft, a small startup where he'd spent years. An engineer leaving the company confronted him: "You guys don't care about tech debt." Lou had been focused on shipping features, believing that paying tech debt would slow them down. But this engineer told a different story — when they finally fixed their terrible build and installation system, they actually sped up. They were constantly touching that code, and removing the friction made everything easier. This moment revealed a fundamental truth: tech debt isn't just about code quality or engineering pride. It's about velocity, momentum, and the ability to move fast sustainably. Lou carried this lesson through his career at Trello (where he learned the dangers of rewriting too much) and Atlassian (where he saw enterprise-scale tech debt management). These experiences became the foundation for "Swimming in Tech Debt."

Tech Debt Is the Result of Success

"Tech debt is often the result of success. Unsuccessful projects don't have tech debt." — Lou Franco

 

This reframes the entire conversation about tech debt. Failed products don't accumulate debt — they disappear before it matters. Tech debt emerges when your code survives long enough to outlive its original assumptions, when your user base grows beyond initial expectations, when your team scales faster than your architecture anticipated. At Atalasoft, they built for 10 users and got 100. At Trello, mobile usage exploded beyond their web-first assumptions. Success creates tech debt by changing the context in which code operates. This means tech debt conversations should happen at different intensities depending on where you are in the product lifecycle. Early startups pursuing product-market fit should minimize tech debt investments — move fast, learn, potentially throw away the code. Growth-stage companies need balanced approaches. Mature products benefit significantly from tech debt investments because operational efficiency compounds over years. Understanding this lifecycle perspective helps teams make appropriate decisions rather than applying one-size-fits-all rules.

The 8 Questions Framework for Tech Debt Decisions

"Those 8 questions guide you to what you should do. If it's risky, has regressions, and you don't even know if it's gonna work, this is when you're gonna do a project spike." — Lou Franco

 

Lou introduces a systematic framework for evaluating whether to pay tech debt, inspired by Bob Moesta's push-pull forces from product management. The 8 questions create a complete picture:

 

  1. Visibility — Will people outside the team understand what we're doing?

  2. Alignment — Does this match our engineering values and target architecture?

  3. Resistance — How hard is this code to work with right now?

  4. Volatility — How often do we touch this code?

  5. Regression Risk — What's the chance we'll introduce new problems?

  6. Project Size — How big is this to fix?

  7. Estimate Risk — How uncertain are we about the effort required?

  8. Outcome Uncertainty — How confident are we the fix will actually improve things?

 

High volatility and high resistance with low regression risk? Pay the debt now. High regression risk with no tests? Write tests first, then reassess. Uncertain outcomes on a big project? Do a spike or proof of concept. The framework prevents both extremes — ignoring costly debt and undertaking risky rewrites without proper preparation.
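As a rough illustration of how the answers combine, the decision logic above can be sketched as a small scoring function. The field names, the 1-to-5 scale, and the thresholds below are assumptions made for illustration, not taken from the book:

```python
from dataclasses import dataclass

@dataclass
class DebtAssessment:
    """Scores from 1 (low) to 5 (high) for each of the 8 questions.
    Field names and thresholds are illustrative, not from the book."""
    visibility: int
    alignment: int
    resistance: int
    volatility: int
    regression_risk: int
    project_size: int
    estimate_risk: int
    outcome_uncertainty: int

def recommend(a: DebtAssessment) -> str:
    # High-friction, frequently touched code with low risk: fix it now.
    if a.volatility >= 4 and a.resistance >= 4 and a.regression_risk <= 2:
        return "pay the debt now"
    # Risky changes without a safety net: build the safety net first.
    if a.regression_risk >= 4:
        return "write tests first, then reassess"
    # Big projects with uncertain estimates or outcomes: de-risk first.
    if a.project_size >= 4 and (a.estimate_risk >= 4 or a.outcome_uncertainty >= 4):
        return "run a spike or proof of concept"
    return "defer; revisit at the next backlog review"

print(recommend(DebtAssessment(3, 4, 5, 5, 1, 2, 2, 2)))  # pay the debt now
```

In practice the questions feed a conversation rather than a formula, but even a toy version like this makes the trade-offs explicit enough to argue about in a backlog meeting.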

Personal Practices That Compound Daily

"When I sit down at my desk, the first thing I do is I pay a little tech debt. I'm looking at code, I'm about to change it, do I even understand it? Am I having some kind of resistance to it? Put in a little helpful comment, maybe a little refactoring." — Lou Franco

 

Lou shares personal habits that create compounding improvements over time. Start each coding session by paying a small amount of tech debt in the area you're about to work — add a clarifying comment, extract a confusing variable, improve a function name. This warms you up, reduces friction for your actual work, and leaves the code slightly better than you found it. The clean-as-you-go philosophy means tech debt never accumulates faster than you can manage it. But Lou's most powerful practice comes at the end of each session: mutation testing by hand. Before finishing for the day, deliberately break something — change a plus to minus, a less-than to less-than-or-equal. See if tests catch it. Often they don't, revealing gaps in test coverage. The key insight: don't fix it immediately. Leave that failing test as the bridge to tomorrow's coding session. It connects today's momentum to tomorrow's work, ensuring you always start with context and purpose rather than cold-starting each day.

Mutation Testing: Breaking Things on Purpose

"Before I'm done working on a coding session, I break something on purpose. I'll change a plus to a minus, a less than to a less than equals, and see if tests break. A lot of times tests don't break. Now you've found a problem in your test." — Lou Franco

 

Manual mutation testing — deliberately breaking code to verify tests catch the break — reveals a critical gap in most test suites. You can have 100% code coverage and still have untested behavior. A line of code that's executed during tests isn't necessarily tested — the test might not actually verify what that line does. By changing operators, flipping booleans, or altering constants, you discover whether your tests protect against actual logic errors or just exercise code paths. Lou recommends doing this manually as part of your daily practice, but automated tools exist for systematic discovery: Stryker (for JavaScript, C#, Scala) and MutMut (for Python) can mutate your entire codebase and report which mutations survive uncaught. This isn't just about test quality — it's about understanding what your code actually does and building confidence that changes won't introduce subtle bugs.
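A hand-run mutation test is easy to demonstrate. In the sketch below (all function names are illustrative), a test with full line coverage still fails to kill a plus-for-minus mutant, while a stronger assertion catches it:

```python
def apply_discount(price: float, percent: float) -> float:
    """Original: subtract the discount."""
    return price - price * percent / 100

def apply_discount_mutated(price: float, percent: float) -> float:
    """Hand-mutated copy: '-' flipped to '+', as in Lou's exercise."""
    return price + price * percent / 100

def weak_test(fn) -> bool:
    # Executes every line, but a 0% discount cannot tell '-' from '+'.
    return fn(100.0, 0.0) == 100.0

def strong_test(fn) -> bool:
    # A nonzero discount distinguishes the mutant from the original.
    return fn(100.0, 10.0) == 90.0

print(weak_test(apply_discount_mutated))    # True  -> mutant survives, test gap found
print(strong_test(apply_discount_mutated))  # False -> mutant killed
```

The surviving mutant under `weak_test` is exactly the "100% coverage, untested behavior" gap described above; tools like Stryker and mutmut automate this loop across a whole codebase.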

Team-Level Practices: Budgets, Backlogs, and Target Architecture

"Create a target architecture document — where would we be if we started over today? Every PR is an opportunity to move slightly toward that target." — Lou Franco

 

At the team level, Lou advocates for three interconnected practices. First, create a target architecture document that describes where you'd be if starting fresh today — not a detailed design, but architectural patterns, technology choices, and structural principles that represent current best practices. This isn't a rewrite plan; it's a North Star. Every pull request becomes an opportunity to move incrementally toward that target when touching relevant code. Second, establish a budget split between PM-led feature work and engineering-led tech debt work — perhaps 80/20 or whatever ratio fits your product lifecycle stage. This creates predictable capacity for tech debt without requiring constant negotiation. Third, hold quarterly tech debt backlog meetings separate from sprint planning. Treat this backlog like PMs treat product discovery — explore options, estimate impacts, prioritize based on the 8 Questions framework. Some items fit in sprints; others require dedicated engineers for a quarter or two. This systematic approach prevents tech debt from being perpetually deprioritized while avoiding the opposite extreme of engineers disappearing into six-month "improvement" projects with no visible progress.

The Atlassian Five-Alarm Fire

"The Atlassian CTO's 'five-alarm fire' — stopping all feature development to focus on reliability. I reduced sync errors by 75% during that initiative." — Lou Franco

 

Lou shares a powerful example of leadership-driven tech debt management at scale. The Atlassian CTO called a "five-alarm fire" — halting all feature development across the company to focus exclusively on reliability and tech debt. This wasn't panic; it was strategic recognition that accumulated debt threatened the business. Lou worked on reducing sync errors, achieving a 75% reduction during this focused period. The initiative demonstrated several leadership principles: willingness to make hard calls that stop revenue-generating feature work, clear communication of why reliability matters strategically, trust that teams will use the time wisely, and commitment to see it through despite pressure to resume features. This level of intervention is rare and shouldn't be frequent, but it shows what's possible when leadership truly prioritizes tech debt. More commonly, leaders should express product lifecycle constraints (startup urgency vs. mature product stability), give teams autonomy to find appropriate projects within those constraints, and require accountability through visible metrics and dashboards that show progress.

The Rewrite Trap: Why Big Rewrites Usually Fail

"A system that took 10 years to write has implicit knowledge that can't be replicated in 6 months. I'm mostly gonna advocate for piecemeal migrations along the way, reducing the size of the problem over time." — Lou Franco

 

Lou lived through Trello's iOS navigation rewrite — a classic example of throwing away working code to start fresh, only to discover all the edge cases, implicit behaviors, and user expectations baked into the "old" system. A codebase that evolved over several years contains implicit knowledge — user workflows, edge case handling, performance optimizations, and subtle behaviors that users rely on even if they never explicitly requested them. Attempting to rewrite this in six months inevitably misses critical details. Lou strongly advocates for piecemeal migrations instead. The Trello "Decaffeinate Project" exemplifies this approach — migrating from CoffeeScript to TypeScript incrementally, with public dashboards showing the percentage remaining, interoperable technologies allowing gradual transition, and the ability to pause or reverse if needed. Keep both systems running in parallel during migrations. Use runtime observability to verify new code behaves identically to old code. Reduce the problem size steadily over months rather than attempting big-bang replacements. The only exception: sometimes keeping parallel systems requires scaffolding that creates its own complexity, so evaluate whether piecemeal migration is actually simpler or if you're better off living with the current system.
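A migration dashboard like the Decaffeinate percentage can be driven by a very small script. The sketch below just counts files by extension; the exact metric Trello used is an assumption here, and `migration_progress` is a hypothetical helper name:

```python
from pathlib import Path

def migration_progress(root: str) -> float:
    """Percentage of files already migrated from CoffeeScript to TypeScript.

    Counts by file extension only. This is a deliberately simple metric,
    in the spirit of the Decaffeinate dashboard's "percent remaining";
    the real project's exact method is not documented here.
    """
    coffee = sum(1 for _ in Path(root).rglob("*.coffee"))
    ts = sum(1 for _ in Path(root).rglob("*.ts"))
    total = coffee + ts
    return 100.0 if total == 0 else 100.0 * ts / total
```

Run nightly in CI and plotted over time, even a crude number like this gives the team and leadership a shared, visible definition of "done" for the migration.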

Making Tech Debt Visible Through Dashboards

"Put up a dashboard, showing it happen. Make invisible internal improvements visible through metrics engineering leadership understands." — Lou Franco

 

One of tech debt's biggest challenges is invisibility — non-technical stakeholders can't see the improvement from refactoring or test coverage. Lou learned to make tech debt work visible through dashboards and metrics. The Decaffeinate Project tracked percentage of CoffeeScript files remaining, providing a clear progress indicator anyone could understand. When reducing sync errors, Lou created dashboards showing error rates declining over time. These visualizations serve multiple purposes: they demonstrate value to leadership, create accountability for engineering teams, build momentum as progress becomes visible, and help teams celebrate wins that would otherwise go unnoticed. The key is choosing metrics that matter to the business — error rates, page load times, deployment frequency, mean time to recovery — rather than pure code quality metrics like cyclomatic complexity that don't translate outside engineering. Connect tech debt work to customer experience, reliability, or developer productivity in ways leadership can see and value.

Onboarding as a Tech Debt Opportunity

"Unit testing is a really great way to learn a system. It's like an executable specification that's helping you prove that you understand the system." — Lou Franco

 

Lou identifies onboarding as an underutilized opportunity for tech debt reduction. When new engineers join, they need to learn the codebase. Rather than just reading code or shadowing, Lou suggests having them write unit tests in areas they're learning. This serves dual purposes: tests are executable specifications that prove understanding of system behavior, and they create safety nets in areas that likely lack coverage (otherwise, why would new engineers be confused by the code?). The new engineer gets hands-on learning, the team gets better test coverage, and everyone wins. This practice also surfaces confusing code — if new engineers struggle to understand what to test, that's a signal the code needs clarifying comments, better naming, or refactoring. Make onboarding a systematic tech debt reduction opportunity rather than passive knowledge transfer.

Leadership's Role: Constraints, Autonomy, and Accountability

"Leadership needs to express the constraints. Tell the team what you're feeling about tech debt at a high level, and what you think generally is the appropriate amount of time to be spent on it. Then give them autonomy." — Lou Franco

 

Lou distills leadership's role in tech debt management to three elements. First, express constraints — communicate where you believe the product is in its lifecycle (early startup, rapid growth, mature cash cow) and what that means for tech debt tolerance. Are we pursuing product-market fit where code might be thrown away? Are we scaling a proven product where reliability matters? Are we maintaining a stable system where operational efficiency pays dividends? These constraints help teams make appropriate trade-offs. Second, give autonomy — once constraints are clear, trust teams to identify specific tech debt projects that fit those constraints. Engineers understand the codebase's pain points better than leaders do. Third, require accountability — teams must make their work visible through dashboards, metrics, and regular updates. Autonomy without accountability becomes invisible engineering projects that might not deliver value. Accountability without autonomy becomes micromanagement that wastes engineering judgment. The balance creates space for teams to make smart decisions while keeping leadership informed and confident in the investment.

AI and the Future of Tech Debt

"I really do AI-assisted software engineering. And by that, I mean I 100% review every single line of that code. I write the tests, and all the code is as I would have written it, it's just a lot faster. Developers are still responsible for it. Read the code." — Lou Franco

 

Lou has a chapter about AI in his book, addressing the elephant in the room: will AI-generated code create massive tech debt? His answer is nuanced. AI can accelerate development tremendously if used correctly — Lou uses it extensively but reviews every single line, writes all tests himself, and ensures the code matches what he would have written manually. The problem emerges with "vibe coders" — non-developers using AI to generate code they don't understand, creating unmaintainable messes that become someone else's problem. Developers remain responsible for all code, regardless of how it's generated. This means you must read and understand AI-generated code, not blindly accept it. Lou also raises supply chain security concerns — dependencies can contain malicious code, and AI might introduce vulnerabilities developers miss. His recommendation: stay six months behind on dependency updates, let others discover the problems first, and consider separate sandboxed development machines to limit security exposure. AI is a powerful tool, but it doesn't eliminate the need for engineering judgment, testing discipline, or code review practices.

The Style Guide Beyond Formatting

"Have a style guide that goes beyond formatting to include target architecture. This is the kind of code we want to write going forward." — Lou Franco

 

Lou advocates for style guides that extend beyond tabs-versus-spaces formatting rules to include architectural guidance. Document patterns you want to move toward: how should components be structured, what state management approaches do we prefer, how should we handle errors, what testing patterns should we follow? This creates a shared understanding of the target architecture without requiring a massive design document. When reviewing pull requests, teams can reference the style guide to explain why certain approaches align with where the codebase is headed versus perpetuating old patterns. This makes tech debt conversations less personal and more objective — it's not about criticizing someone's code, it's about aligning with team standards and strategic direction. The style guide becomes a living document that evolves as the team learns and technology changes, capturing collective wisdom about what good code looks like in your specific context.

Recommended Resources

Some of the resources mentioned in this episode include: 

 

About Lou Franco

 

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups as well as at Trello and Atlassian, he's seen tech debt from both sides, as coder and as leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum.

 

You can link with Lou Franco on LinkedIn and learn more at LouFranco.com.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251213_Lou_Franco_BONUS.mp3?dest-id=246429

Why Agentic AI Will Be Tech's Biggest Winner of 2026 – How to Integrate Agentic AI Into Your Business Today


Agentic AI is no longer science fiction — it’s the most powerful shift in artificial intelligence since the launch of ChatGPT. In 2026, agentic AI will separate winners from losers in tech, business, and productivity.

 

This article explains everything you need to know: what agentic AI actually is, why it’s exploding in 2026, real-world examples, and — most importantly — how you can start integrating agentic AI into your workflow or company right now.

 

Why Agentic AI Will Be Tech's Biggest Winner of 2026

 


What Is Agentic AI? The Simple Definition

Agentic AI refers to artificial intelligence systems that can independently set goals, make decisions, take actions, and iterate until the objective is completed — without constant human guidance.

 

Unlike traditional chatbots that only respond when you talk to them, an agentic AI can:

  • Understand a high-level goal (“Book me the cheapest business-class flight to Tokyo next month with a hotel under $300/night”)
  • Break it into subtasks
  • Use tools (search the web, open browsers, send emails, fill forms)
  • Handle obstacles and re-plan in real time
  • Deliver the final result (itinerary + bookings) while you sleep
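The loop those bullets describe (goal, subtasks, tool use, re-planning) can be sketched in a few lines. Everything below is illustrative: `plan` and `execute` stand in for an LLM planner and real tool calls, and no specific framework's API is assumed:

```python
from typing import Callable, Optional

def run_agent(goal: str,
              plan: Callable[[str], list],
              execute: Callable[[str], Optional[str]],
              max_retries: int = 2) -> list:
    """Minimal agentic loop: plan subtasks, execute each with tools,
    and retry on failure instead of giving up.

    `plan` and `execute` are placeholders for an LLM planner and tool
    calls; in a real system each would be a model or API invocation.
    """
    results = []
    for task in plan(goal):
        for _attempt in range(max_retries + 1):
            outcome = execute(task)
            if outcome is not None:   # tool call succeeded
                results.append(outcome)
                break
        else:
            # All retries exhausted: record the failure for a human.
            results.append(f"FAILED: {task}")
    return results
```

The retry-or-escalate structure is what separates this from a reactive chatbot: the loop owns the goal until every subtask either succeeds or is explicitly handed back.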

 

Dario Amodei (CEO of Anthropic), Sam Altman (OpenAI), and even Elon Musk have all publicly stated that the future belongs to agentic systems. In 2025 we already see the first generation; by 2026 they will be mainstream.

 

 

Reactive AI vs Agentic AI: The Critical Difference

Reactive AI (ChatGPT, Claude, Gemini today) vs. Agentic AI (2025–2026 wave):

  • Initiative: only acts when prompted → can start tasks autonomously
  • Tool use: limited or one tool at a time → orchestrates dozens of tools in loops
  • Memory & context: short conversation window → long-term memory plus project history
  • Error handling: gives up or asks you → retries, re-plans, finds workarounds
  • Goal completion: provides information → delivers the finished outcome

 

 

Why 2026 Will Be the Year of Agentic AI

Several unstoppable trends are converging in 2026:

  1. Model capability leap – GPT-5 class models + o3-style reasoning chains are live.
  2. Tool-use & browser control – Chrome extensions, APIs, and computer-use endpoints are now reliable.
  3. Memory & state – Vector databases + infinite context make long-running agents possible.
  4. Price collapse – Reasoning tokens are dropping below $1 per million; running an agent for hours costs pennies.
  5. Enterprise adoption – Salesforce, Microsoft Copilot Studio, and ServiceNow all ship agent builders in 2025–2026.
  6. Startup explosion – Over $20 billion poured into agentic startups in 2024–2025 alone.

 

Prediction: By the end of 2026, the majority of knowledge workers will have at least one personal agentic AI working for them 24/7.

 

 

7 Real-World Agentic AI Examples Already Winning in 2025–2026

  1. Devin by Cognition – The first fully autonomous software engineer (already writing production code at startups).
  2. Adept ACT-1 & OpenAI Operator – Can operate any software through the browser like a human.
  3. Salesforce Agentforce – Autonomous sales & service agents closing deals without human reps.
  4. Microsoft Copilot Workspace – Turns a GitHub issue into a merged PR automatically.
  5. MultiOn + BrowserGPT – Personal agents that shop, book travel, apply to jobs for you.
  6. ManasAI & Replit Agent – Full-cycle app development in minutes.
  7. Anthropic Computer Use – Claude can now control your Mac/PC and get work done while you sleep.

 

 

How to Integrate Agentic AI Into Your Business Today (Step-by-Step)

Step 1: Identify repetitive high-value workflows
Step 2: Choose your stack (see tools below)
Step 3: Start with narrow scoped agents
Step 4: Add memory (Pinecone, Supabase Vector, or Qdrant)
Step 5: Implement human-in-the-loop approvals for money/risk
Step 6: Scale to multi-agent teams
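Step 5's human-in-the-loop approval can be as simple as a gate in front of any action that moves money or carries risk. The function name, callbacks, and dollar threshold below are illustrative assumptions, not any platform's API:

```python
from typing import Callable

def execute_with_approval(action: str,
                          amount_usd: float,
                          approve: Callable[[str, float], bool],
                          run: Callable[[str], str],
                          auto_limit_usd: float = 50.0) -> str:
    """Human-in-the-loop gate: actions above a spending limit require
    explicit human approval before the agent may run them.

    The $50 auto-approve limit is a made-up default; tune it to the
    blast radius of the actions the agent can take.
    """
    if amount_usd > auto_limit_usd and not approve(action, amount_usd):
        return "rejected"
    return run(action)
```

In practice `approve` would post to Slack or a ticketing queue and block on a human; the point is that the agent never crosses the risk threshold on its own.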

 

Real example: A marketing agency reduced content production cost by 89% using a researcher → writer → editor → publisher agent swarm.

 

 

Top 10 Agentic AI Tools & Platforms You Can Use Right Now (2025–2026)

  1. CrewAI – Open-source multi-agent framework (most popular)
  2. LangGraph (by LangChain) – State machine for complex agents
  3. AutoGen (Microsoft) – Multi-agent conversation framework
  4. n8n + OpenAI – No-code agent builder
  5. Browserless + Puppeteer agents – Full browser control
  6. Adept, Anthropic Computer Use, OpenAI Operator – Official endpoints
  7. Replit Agent – Best for developers
  8. MultiOn – Consumer-facing personal agent
  9. Salesforce Agentforce – Enterprise-grade
  10. Zapier Central – No-code agents connected to 6000+ apps

 

 

Risks and Challenges of Agentic AI (And How to Mitigate Them)

  • Infinite loops → Set budget & time caps
  • Hallucinated actions → Use structured output + human review
  • Security → Never give full credentials; use OAuth scopes
  • Cost overruns → Monitor token usage in real time

 

The Future Beyond 2026: Multi-Agent Systems & Artificial Superintelligence

By 2027–2028 we will see:

  • Companies made of 100% AI agents (no human employees)
  • Personal AI operating systems that manage your entire digital life
  • The first sparks of AGI emerging from massive agent swarms

 

 

Frequently Asked Questions About Agentic AI

What exactly is agentic AI?

Agentic AI is artificial intelligence that can autonomously pursue complex goals using tools, reasoning, and iteration — without constant human input.

When will agentic AI become mainstream?

2026 is the breakout year. The required capabilities (reasoning, tool use, memory) are already here in late 2025.

Can I build my own agentic AI today?

Yes! With CrewAI, LangGraph, or even Zapier Central you can have a working agent in under an hour.

Is agentic AI the same as AGI?

No. Agentic AI is narrow-to-medium scope autonomy. AGI would be human-level or beyond across every domain.

 

 

Summary & Key Takeaways

  • Agentic AI = AI that independently completes goals using tools and reasoning.
  • 2026 is the inflection point — the tech is ready now in late 2025.
  • Early adopters are already 5–10× more productive.
  • You can start integrating agentic AI today with free/open-source tools.
  • The winners of the next decade will be those who master agentic workflows first.

 

Don’t wait for 2026. The agentic AI revolution has already started — and the gap between those who adopt now and those who wait will be measured in years, not months.

 


IoT Coffee Talk: Episode 291- "Making an Enterprise Agentic Mess"

From: IoT Coffee Talk
Duration: 1:08:04
Views: 5

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Rob, Debbie, Dimitri, David, Marc, Pete, and Leonard jump on Web3 to host a discussion about:

🎶 🎙️ BAD KARAOKE! 🎸 🥁 "Man in the Box" by Alice in Chains
🐣 Costco neocloud services. An idea for a great new service of obsolete GPUs.
🐣 The AI hypster pivot toward the AI bubble implosion, I-told-you-so pivot.
🐣 Think twice before trashing an analyst if you don't know nothin'.
🐣 Leonard pronounces Dimitrios Spiliopoulos' name perfectly!
🐣 Is physical AI cool or corny?
🐣 Losing your corporate identity by calling yourself an "AI company" when you are anything but.
🐣 Is AI a thing if Matthew McConaughey says AI is a thing?
🐣 Is ditching cloud smart if any prospect of good AI and agentic AI needs cloud?
🐣 Thoughts from the Data Dive. What is the state of agentic AI security?
🐣 The "gently used" GPU problem according to Rob and David.
🐣 Will Costco release a Kirkland GPU to go with your Kirkland wine collection?

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62


Daily Reading List – December 12, 2025 (#684)


Happy Friday. Things haven’t yet started winding down for the holiday season, but that may start NEXT week.

[article] The 5 AI Tensions Leaders Need to Navigate. Good list. I could argue that this applies to many leadership situations, not just AI-driven ones.

[article] How do daily stand-ups boost team performance? Some people love ’em, others hate ’em. But stand-ups (done well) are apparently a big booster of safer environments that result in better outcomes.

[blog] Product engineering teams must own supply chain risk. Platforms need to take care of more of this. This aligns with my “shift down” concept, where we can’t expect product teams to handle it all.

[blog] Build with Gemini Deep Research. Use this amazing capability from the new Interactions API that’s part of Gemini.

[blog] Introducing GPT-5.2. Maybe some scrambling happening over at OpenAI, but you can count on them shipping great model after model.

[article] Google launched its deepest AI research agent yet — on the same day OpenAI dropped GPT-5.2. Totally coincidental. Model shops and tech vendors never try to steal each other’s thunder. No sir, never happens.

[article] InfoQ Java Trends Report 2025. Not shockingly, a lot of AI on the list. But more than that has kept Java vibrant.

[blog] Taming Vibe Coding: The Engineer’s Guide. Here’s some good practical advice on creating more consistency and personalization for your AI-driven coding exercises.

[article] What If? AI in 2026 and Beyond. It’s worth reading through this pile of thoughts as you plan your AI approach next year.

[article] How Google’s TPUs are reshaping the economics of large-scale AI. How do you erode the GPU moat? A competitor can make it easier for developers to switch while accessing premium functionality.

[blog] Before You Build a Private Cloud, Ask This One Question. My contention is that there are VERY few actual private clouds out there. Mostly some nicely automated VM infrastructure. Keith offers a good question you should ask before going down this path.

[blog] Bringing state-of-the-art Gemini translation capabilities to Google Translate. There are some potentially life-changing capabilities called out here. I can’t wait to use some of these on my next international trip.

[blog] Enterprise Agents Have a Reliability Problem. Are companies doing ok with off-the-shelf AI tools but struggling to build their own? Or the opposite? What’s a recipe for success?

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




Improved Gemini audio models for powerful voice interactions

An upgraded Gemini 2.5 Native Audio model across Google products and live speech translation in the Google Translate app.