Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

IoT Coffee Talk: Episode 291- "Making an Enterprise Agentic Mess"

1 Share
From: IoT Coffee Talk
Duration: 1:08:04
Views: 5

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Rob, Debbie, Dimitri, David, Marc, Pete, and Leonard jump on Web3 to host a discussion about:

🎢 πŸŽ™οΈ BAD KARAOKE! 🎸 πŸ₯ "Man in the Box" by Alice in Chains
🐣 Costco neocloud services. An idea for a great new service built on obsolete GPUs.
🐣 The AI hypster pivot toward the AI-bubble-implosion, I-told-you-so narrative.
🐣 Think twice before trashing an analyst if you don't know nothin'.
🐣 Leonard pronounces Dimitrios Spiliopoulos' name perfectly!
🐣 Is physical AI cool or corny?
🐣 Losing your corporate identity by calling yourself an "AI company" when you are anything but.
🐣 Is AI a thing if Matthew McConaughey says AI is a thing?
🐣 Is ditching cloud smart if any prospect of good AI and agentic AI needs cloud?
🐣 Thoughts from the Data Dive. What is the state of agentic AI security?
🐣 The "gently used" GPU problem according to Rob and David.
🐣 Will Costco release a Kirkland GPU to go with your Kirkland wine collection?

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62

Read the whole story
alvinashcraft
16 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Daily Reading List – December 12, 2025 (#684)

1 Share

Happy Friday. Things haven’t yet started winding down for the holiday season, but that may start NEXT week.

[article] The 5 AI Tensions Leaders Need to Navigate. Good list. I could argue that this applies to many leadership situations, not just AI-driven ones.

[article] How do daily stand-ups boost team performance? Some people love ’em, others hate ’em. But stand-ups (done well) are apparently a big booster of safer environments that result in better outcomes.

[blog] Product engineering teams must own supply chain risk. Platforms need to take care of more of this. This goes with my “shift down” concept, where we can’t expect product teams to handle it all.

[blog] Build with Gemini Deep Research. Use this amazing capability from the new Interactions API that’s part of Gemini.

[blog] Introducing GPT-5.2. Maybe some scrambling happening over at OpenAI, but you can count on them shipping great model after model.

[article] Google launched its deepest AI research agent yet β€” on the same day OpenAI dropped GPT-5.2. Totally coincidental. Model shops and tech vendors never try to steal each other’s thunder. No sir, never happens.

[article] InfoQ Java Trends Report 2025. Not shockingly, a lot of AI on the list. But more than that has kept Java vibrant.

[blog] Taming Vibe Coding: The Engineer’s Guide. Here’s some good practical advice on creating more consistency and personalization for your AI-driven coding exercises.

[article] What If? AI in 2026 and Beyond. It’s worth reading through this pile of thoughts as you plan your AI approach next year.

[article] How Google’s TPUs are reshaping the economics of large-scale AI. How do you erode the GPU moat? A competitor can make it easier for developers to switch while accessing premium functionality.

[blog] Before You Build a Private Cloud, Ask This One Question. My contention is that there are VERY few actual private clouds out there. Mostly some nicely automated VM infrastructure. Keith offers a good question you should ask before going down this path.

[blog] Bringing state-of-the-art Gemini translation capabilities to Google Translate. There are some potentially life-changing capabilities called out here. I can’t wait to use some of these on my next international trip.

[blog] Enterprise Agents Have a Reliability Problem. Are companies doing ok with off-the-shelf AI tools but struggling to build their own? Or the opposite? What’s a recipe for success?

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




Improved Gemini audio models for powerful voice interactions

1 Share
An upgraded Gemini 2.5 Native Audio model across Google products and live speech translation in the Google Translate app.

You can now have more fluid and expressive conversations when you go Live with Search.

1 Share
When you go Live with Search, you can have a back-and-forth voice conversation in AI Mode to get real-time help and quickly find relevant sites across the web. And now, …

The future of AI-powered software optimization (and how it can help your team)

1 Share

When was the last time you heard someone ask in a standup, “How could we do this more sustainably?”

Topics like green software and carbon efficiency are unfortunately rarely at the top of busy development teams’ priority lists. What’s more, there are very few “green software practitioners” out there. But we believe we’re at a unique moment in time where this can all change. The next generation of AI-enabled developer tooling has the opportunity to create near-effortless, always-on engineering for sustainability. 

The GitHub Next and GitHub Sustainability teams have been collaborating to prove this concept and value through a series of internal and external pilot projects. 

We call it Continuous Efficiency. 

Making the case for Continuous Efficiency

We believe that once it’s ready for broader adoption, Continuous Efficiency will have the potential to make a significant positive impact for developers, businesses, and sustainability.

For developers

Digital sustainability and green software are intrinsically aligned to “efficiency,” which is at the core of software engineering. Many developers would benefit from performant software, better standardization of code, change quality assurance, and more.

For businesses

Building for sustainability has measurable business value, including:

  • Reducing power and resource consumption
  • Increasing efficiency
  • Better code quality
  • Improved user experience
  • Lower costs

Despite this, sustainability rarely makes it onto the roadmap, priority list, or even the backlog. But imagine a world in which the codebase could continuously improve itself…

A graphic showing Continuous Efficiency = Green Software + Continuous AI

Continuous Efficiency means effortless, incremental, validated improvements to codebases for increased efficiency. It’s an emergent practice based on a set of tools and techniques that we are starting to develop and hope to see the developer community expand on.

This emerges at the intersection of Continuous AI and Green Software.

Continuous AI is AI-enriched automation applied to software collaboration. We are exploring LLM-powered automation in platform-based software development and CI/CD workflows.

Green Software is designed and built to be more energy-efficient and have a lower environmental impact. This practice tends to result in software that is cheaper, more performant, and more resilient. 

Continuous Efficiency in (GitHub) Action(s)

While Continuous Efficiency is a generally applicable concept, we have been building implementations on a specific GitHub platform and infrastructure called Agentic Workflows. It’s publicly available and open source, but currently in “research demonstrator” status (read: experimental prototype, pre-release, subject to change and errors!). Agentic Workflows is an experimental framework for exploring proactive, automated, event-driven agentic behaviors in GitHub repositories, running safely in GitHub Actions.

Our work in this space has been focused on two areas:

  1. Implementing rules and standards

With modern LLMs and agentic workflows, we can now express engineering standards and code-quality guidelines directly in natural language and apply them at a scale that was previously unattainable.

This capability goes far beyond traditional linting and static analysis approaches in three important ways:

  • Declarative, intent-based rule authoring: you describe the intent in natural language and the model interprets and implements it (no need for hard-coded patterns or logic).
  • Semantic generalizability: a single high-level rule can be applied across diverse code patterns, programming languages and architectures, giving far broader coverage than conventional tools and approaches.
  • Intelligent remediation: this approach comprehensively resolves issues and violations through agentic, platform-integrated actions, like writing a pull request or adding comments and suggested edits to a change.

Examples of our work:

Case study: Code base reviews
Green software rules implementation

We have implemented a wide range of standard and specific Green Software rules, tactics and patterns. These can be applied fully agentically to entire codebases and repos.

Example:
We teamed up with the resolve project to scan their codebase with a number of rules, and agentically delivered proposed improvements. 
The outputs weren’t all perfect—but one of the recently approved and merged pull requests makes a small performance improvement by “hoisting” RegExp literals from within hot functions. 

The project gets 500M+ downloads per month on npm. So this small impact will scale!
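The kind of hoisting described above can be sketched in plain JavaScript (the function and pattern here are illustrative, not taken from the resolve codebase):

```javascript
// Before: the RegExp literal sits inside a hot function, so it is
// re-evaluated at each call (modern engines cache literals, but the
// hoisted form still avoids per-call work and makes intent explicit).
function hasTrailingSlashInline(path) {
  return /\/$/.test(path);
}

// After: the RegExp is hoisted to module scope and created exactly once.
const TRAILING_SLASH_RE = /\/$/;

function hasTrailingSlash(path) {
  return TRAILING_SLASH_RE.test(path);
}
```

One caveat: hoisting is only safe for stateless patterns; a regex with the global (`/g`) flag carries `lastIndex` state, so sharing one instance across calls can change behavior.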
Case study: Implementing standards
Web sustainability guidelines (WSG)

The W3C WSG is a great resource to help people make web products and services more sustainable. We implemented the Web Development section into a set of 20 agentic workflows, so now the guidelines can be used by AI too!

Example:
We have run the WSG workflows on a number of GitHub and Microsoft web properties and found opportunities and built resolutions to improve them—ranging from deferred loading to using native browser features and latest language standards.
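One of the native browser features alluded to here is built-in deferred (lazy) loading of offscreen media, which needs no JavaScript at all; a minimal illustrative snippet (the file names and URL are placeholders):

```html
<!-- Offscreen media is fetched only when the user scrolls near it -->
<img src="energy-usage-chart.png" alt="Energy usage chart" loading="lazy">
<iframe src="https://example.com/embed" title="Embedded report" loading="lazy"></iframe>
```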
  2. Heterogeneous performance improvement

Performance engineering is notoriously difficult because real-world software is profoundly heterogeneous. Every repository brings a different mix of languages and architectures, and even within a single codebase, the sources of performance issues can span from algorithmic choices to cache behavior to network paths.

Expert performance engineers excel at navigating this complexity, but the sheer variety and volume of work across the industry demands better tooling and scalable assistance. 

We’ve been thinking about the “grand challenge” of how to build a generic agent that can walk up to any piece of software and make demonstrable performance improvements. One that could navigate the vast ambiguity and heterogeneity of software in the wild—no small task!

Semi-automatic performance engineering aims to meet that need with an automated, iterative workflow where an agent researches, plans, measures, and implements improvements under human guidance. The process begins with “fit-to-repo” discovery—figuring out how to build, benchmark, and measure a given project—before attempting any optimization. Modern LLM-based agents can explore repositories, identify relevant performance tools, run microbenchmarks, and propose targeted code changes. 

Early results vary quite dramatically, but some show promise that guided automation can meaningfully improve software performance at scale.

Case study: 
Daily perf improver

Daily Perf Improver is a three-phase workflow, intended to run in small daily sprints. It can do things like:
(1) Research and plan improvements
(2) Infer how to build and benchmark the repository
(3) Iteratively propose measured optimizations

Example: On a recent focused pilot on FSharp.Control.AsyncSeq, it has already delivered real gains by producing multiple accepted pull requests, including a rediscovered performance bug fix and verified microbenchmark-driven optimizations.

Daily Perf Improver Research Demonstrator

How to build and run agentic workflows

GitHub agentic workflows enable you to write automation in natural language (Markdown) instead of traditional YAML or scripts. You author a workflow in a .md file that begins with a YAML-like “front matter” (defining triggers, permissions, tools, safe-outputs, etc.), followed by plain-English instructions. At build time you run the gh aw compile command (part of the agentic workflows CLI) which compiles the Markdown into a standard GitHub Actions workflow (.yml) that can be executed by the normal GitHub Actions runtime.
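As a rough sketch of the shape described above (the specific front-matter fields are illustrative and may not match the current gh aw schema exactly), a workflow file might look like:

```markdown
---
on:
  schedule:
    - cron: "0 6 * * 1-5"   # weekday mornings
permissions:
  contents: read
safe-outputs:
  pull-request:
---

# Daily efficiency pass

Look for regular-expression literals declared inside frequently called
functions. Where hoisting them to module scope is safe, apply the change
and open a pull request summarizing the expected impact.
```

Running `gh aw compile` then turns this Markdown into a standard `.yml` GitHub Actions workflow.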

When the compiled workflow runs, it launches an AI agent (for example via GitHub Copilot CLI, or other supported engines like Claude Code or OpenAI Codex) inside a sandboxed environment. The agent reads the repository’s context, applies the human-written natural-language instructions (for example “look for missing documentation, update README files, then open a pull request”), and produces outputs such as comments, pull requests, or other repository modifications. Because it’s running in the GitHub Actions environment, permission boundaries, safe-output restrictions, logs, auditability, and other security controls remain. 

How we build our Continuous Efficiency Workflows (with agents, of course!)

Our internal process for creating Continuous Efficiency workflows follows a simple, repeatable pattern:

  1. Define the intent: based on a public standard or a domain-specific engineering requirement.
  2. Author the workflow in Markdown: using structured natural language, guided interactively by the create-agentic-workflow agent.
  3. Compile to YAML: turning the Markdown into a standard GitHub Actions workflow.
  4. Run in GitHub Actions: executing the workflow on selected repositories.

Want to get involved in Continuous Efficiency?

If you’re a developer who loves the experimentation phase, you can get started running agentic workflows in GitHub Actions right now! There is a range of examples you can immediately try out (including a “Daily performance improver”), or you can author your own using natural language.

GitHub Sustainability will soon be publishing rulesets, workflows, and more—if you’re interested in being an early adopter or design partner, please get in touch with me.

The post The future of AI-powered software optimization (and how it can help your team) appeared first on The GitHub Blog.


GitHub Updates Spark, Its AI Prompt-Based App Builder

1 Share
GitHub Spark, an AI app-generation tool separate from Copilot and still in public preview, gains enterprise, billing, and UI upgrades in its latest update.