Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Solving Advent of Code in Rust, With Just Enough AI


A year ago, I wrote a blog post about solving Advent of Code puzzles using Rust as the implementation language. I believe it’s still relevant if you plan to use Rust this year. In one section, I advised limiting the use of AI and demonstrated how to disable the relevant functionality in RustRover, either partially or completely, while solving puzzles. Just a year later, we live in a very different world when it comes to using AI in software development. Yet here’s what Eric Wastl, the creator of Advent of Code, writes about using AI:

Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve – no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

In this blog post, I want to argue with Eric. After all, when we go to a gym, isn’t it because there are specific tools there that help us get stronger? All those dumbbells, kettlebells, barbells with weight plates, pull-up bars, and various machines – we use them for a reason, right? We want to get stronger, so we go to a gym and use the tools. Why not use the tools that help us grow our coding skills?

Note the shift here. I fully agree with Eric: we shouldn’t use AI to solve the puzzles, but we can (and even should) use AI along the way. Why? Because being able to apply AI to writing code is a must-have skill in today’s world. Also, using AI isn’t a simple yes-or-no decision – it’s a spectrum. I’ll elaborate on that shortly. But first, I’d like to invite you to join the Solving Advent of Code 2025 in Rust contest and share the following message from the RustRover team.

Solve Advent of Code 2025 Puzzles in Rust and Win Prizes

Before we explain how to enter this year’s contest, we’d like to address last year’s Advent of Code in Rust. Unfortunately, we were unable to send the prizes to the winners because we overlooked an important logistical detail when launching the competition – we didn’t ask participants to ensure that their GitHub profiles included an email address or social media handle. We’re truly sorry about this.

To avoid the same issue this year, please make sure your email or social media handle is listed on your GitHub profile, so Santa can deliver your well-earned gifts. 🎁

As a gesture of appreciation, we’d also like to congratulate the three winners of the 2024 challenge, and we’re ready to send out the long-overdue prizes. Well done – great minds, great solutions!

  1. Duro – 4900
  2. Mark Janssen – 4769
  3. Michel Krämer – 4742

Thank you for participating and for your patience. We hope you’ll join us again this year, continue solving Advent of Code challenges in Rust, and keep contributing to the Rust community.

How to Enter the Contest

  • Make sure your GitHub profile includes an email address or social media handle.
  • Go to the Leaderboard section of your Advent of Code profile and enter one of the codes below:
    • Leaderboard 1: 4223313-0557c16e
    • Leaderboard 2: 2365659-de227312
  • Complete at least three Advent of Code puzzles in Rust.
  • Share your solutions on GitHub and add aoc-2025-in-rust to the Topics field of your repository. To do this, click the gear icon in the top right-hand corner of your repository page and edit the Topics list.

By competing for the top positions on our leaderboards, you can win one of the Amazon Gift Card prizes. As a small apology for last year’s issue, we’re offering five prizes instead of three:

  • 1st place – USD 150
  • 2nd place – USD 100
  • 3rd place – USD 70
  • 4th place – USD 50
  • 5th place – USD 30

Plus, USD 20 gift cards for five randomly selected participants.

GitHub Template

We’ve prepared a GitHub template to help you quickly set up your Advent of Code project in Rust. You don’t have to use it for your solutions, but it streamlines the setup and lets you focus on what really matters.

To use it:

  1. Log in to GitHub.
  2. Click Use this template (please don’t fork).
  3. Once the setup is complete, clone the project in RustRover.
  4. Don’t forget to add aoc-2025-in-rust to the Topics field of your repository. 

Skills We Train While Solving Coding Puzzles

Alright, if you aim to compete on the leaderboards, then no AI for you. Code completion in an IDE? Well, I don’t know – maybe ed and rustc are all you need to demonstrate your puzzle-solving power. That way, you show that it’s all about the speed of your brain, your keyboard, your CPU, your network card, and your internet provider. Read the rest if you’re not competing with anyone.

Advent of Code is great precisely because it exercises so many real-world engineering muscles. Some of these muscles benefit from AI spotting you; others atrophy if you let AI do the heavy lifting. Here’s a closer look at which skills belong in each category.

Structuring code for a single puzzle and for the whole competition. Advent of Code puzzles are small, but the whole event is long. Structuring your solution so it doesn’t become a pile of spaghetti by Day 7 is a real skill. Should you use AI here? Absolutely yes – as a reviewer, not as a decision-maker. Ask AI to suggest module layouts, compare different folder structures, or propose ways to reuse code across days. But don’t outsource the structural thinking itself. Knowing how to architect small but flexible solutions is one of the main professional skills AoC trains, and AI should support your design, not replace it.
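For instance, one lightweight way to keep days uniform is a shared trait plus one module per day. A minimal sketch – the trait name `Solution` and module `day01` are my own placeholders, not a prescription from the post:

```rust
// One trait shared by every day; one module per day keeps Day 7 from
// becoming spaghetti. Names here are illustrative placeholders.
pub trait Solution {
    fn part1(&self, input: &str) -> String;
    fn part2(&self, input: &str) -> String;
}

pub mod day01 {
    pub struct Day01;

    impl super::Solution for Day01 {
        fn part1(&self, input: &str) -> String {
            // Placeholder logic: count non-empty lines.
            input.lines().filter(|l| !l.is_empty()).count().to_string()
        }
        fn part2(&self, _input: &str) -> String {
            "todo".to_string()
        }
    }
}

fn main() {
    let day = day01::Day01;
    println!("day 1, part 1: {}", day.part1("a\n\nb\nc"));
}
```

A `Vec<Box<dyn Solution>>` indexed by day then gives you a single runner binary for the whole event.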

Reading the problem text and coming up with an initial idea. This skill is core to the spirit of Advent of Code. Reading carefully, extracting requirements, noticing tricky edge cases, and forming an initial idea – that’s exactly what Eric wants humans to practice. And he’s right: don’t use AI here. Don’t ask for summaries, hints, or solution outlines. Let your own brain wrestle with the puzzle text. This is one of the purest forms of algorithmic problem solving, and it’s the part AI takes away if you let it.

Choosing the right library and the right level of abstraction. Rust has plenty of useful crates, but AoC often rewards sticking to the standard library. Should AI help? Sure – but in moderation. Asking, “Is there a crate for fast grid manipulation?” or “Is there a simple way to parse this with nom?” mirrors real-world development. As long as you make the final call yourself, AI here acts like a knowledgeable colleague pointing you toward options, not handing you the solution.

Choosing the right data structure. This is both an AoC skill and a general CS one. Selecting between vectors, hash maps, BTreeMaps, VecDeque, or a custom struct requires understanding the trade-offs. AI can help explain those trade-offs or remind you of performance characteristics. But don’t ask AI which data structure solves the puzzle. The puzzle is making that choice. Use AI to deepen understanding, not to skip the thinking.
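A quick sketch of those trade-offs in code – the character-counting task here is my own toy example, not from any puzzle:

```rust
use std::collections::{BTreeMap, HashMap, VecDeque};

// HashMap: O(1) average lookups, but iteration order is unspecified.
fn char_counts(s: &str) -> HashMap<char, usize> {
    let mut counts = HashMap::new();
    for c in s.chars() {
        *counts.entry(c).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = char_counts("aabbbc");
    assert_eq!(counts[&'b'], 3);

    // BTreeMap trades O(log n) lookups for sorted iteration – handy
    // when a puzzle asks for "the smallest key that …".
    let sorted: BTreeMap<char, usize> = counts.into_iter().collect();
    assert_eq!(sorted.keys().next(), Some(&'a'));

    // VecDeque pushes and pops at both ends in O(1): the natural BFS queue.
    let mut frontier: VecDeque<u32> = VecDeque::from([1, 2]);
    assert_eq!(frontier.pop_front(), Some(1));
    println!("all checks passed");
}
```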

Parsing the input into a convenient structure. AI shines here. Parsing is often tedious, repetitive, and not the focus of the puzzle. If you’d rather not spend 20 minutes writing yet another loop splitting lines on spaces, let AI write the initial parser. You’ll still check it, tweak it, and integrate it with your own logic, but AI can save your cognitive energy for the interesting bits.

Choosing the right algorithm. This is the heart of competitive puzzle solving. Deciding whether something requires BFS, DP, a custom state machine, or a greedy approach is a deeply human skill – and one that Advent of Code trains extremely well. This is another area where I’d say: no AI. If you rely on the model to pick the algorithm, you’ve skipped the actual puzzle. You can use AI afterward to compare approaches or learn alternatives, but not during the solving phase.

Picking the right language feature for the job. Rust is full of elegant features – iterators, pattern matching, ownership tricks, generics, traits, lifetimes. Sometimes AI can remind you of a syntactic trick or propose a more idiomatic expression of your idea. That’s fine, as long as the idea itself is yours. Using AI to teach you small idioms or propose cleaner code is actually great training, but avoid asking AI to “rustify” a solution you don’t understand.

Adding a visualization. Visualizations aren’t typically required, but they’re fun and often deepen understanding. Here, AI is extremely useful – whether generating a quick plot with plotters, helping build a tiny TUI with ratatui, or producing a debug print layout. This is auxiliary work, not the core puzzle, so go ahead and use the tools.

Testing your code. Do you need tests for AoC? Strictly speaking, no. But writing a couple of targeted tests is a great habit: testing parsing, edge cases, or parts of your algorithm. AI is a good assistant here: it can generate test scaffolding, propose property tests, or create sample data variations. As long as you understand what the tests check, this is a safe area to lean on AI.

Benchmarking solutions. Benchmarking is a professional skill AoC can absolutely help train – especially on later days, when naive solutions melt your CPU. AI can help you set up criterion benchmarks or interpret microbenchmarking results, but you should decide what to measure and why. Benchmarking is partly technical and partly philosophical: what trade-offs matter? AI can help with the technical part.

Learning stuff along the way. This is the most important skill of all. Every Advent of Code teaches something: an algorithm you forgot, a data structure you never used, a Rust feature you always meant to try. Learning with AI is natural and encouraged. Just ask it questions like a mentor, not like a puzzle-solver-for-hire. Explanation, context, and examples? Great. Solutions to the actual puzzle? Skip those.

Learning how to prompt coding agents to get a fully functional solution. Remember, Eric Wastl writes, “If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.” But is that really true? In my experience, not quite. Most “AI prompting exercises” out there are either artificial or oversimplified. Advent of Code puzzles, on the other hand, are perfect precisely because they aren’t designed for AI. Many of them can’t be solved in one shot, even by a strong coding agent, such as Junie or Claude Agent. They require iterative refinement, mid-course corrections, and careful steering – exactly the techniques you need to master if you want to use AI effectively in real-world development. In other words, these puzzles become some of the best training grounds for learning how to prompt coding agents. You’ll learn how to break down problems, feed partial context, debug collaboratively with the model, and guide it away from dead ends. These are practical, valuable skills, and Advent of Code offers an endless supply of opportunities to practice them.

Implementing Your Personal AI Strategy for AoC in RustRover

At this point, the key question becomes: which skills do you want to train this year? There’s no single correct answer. Maybe you want to sharpen pure algorithmic thinking with zero AI help. Maybe you want to practice integrating AI into your daily workflow. Maybe you want something in between.

The important part is: make this a conscious choice, not an accidental default.

Sample strategies

Here are a few possible “AI strategies” you can adopt:

  • Strategy A: Pure human mode. No AI, no inline completion, no chat. You open RustRover, you read the problem, you write the code. This maximizes training of problem understanding, algorithm selection, data structures, and Rust fluency. It’s also the closest to what Eric Wastl has in mind.
  • Strategy B: Assisted implementation mode. You solve the puzzle on paper (or in your head) first: understand the problem, pick the algorithm, decide on data structures. Only then do you let AI help with implementation details: parsing, boilerplate, small refactorings, docs. This is a great mode if you want to protect the “thinking” parts while still practicing how to collaborate with AI in code.
  • Strategy C: Agent steering mode. You deliberately practice guiding an AI coding agent toward a fully working solution. You still read and understand the puzzle, but you treat the model as a junior pair programmer: you prompt, correct, re-prompt, adjust the approach, and iterate. This is ideal if your goal is to improve at prompting coding models, debugging their output, and managing multi-step interactions.

You can even mix and match strategies across puzzle parts and days. For example, you might tackle Part 1 in Pure human mode to fully engage with the core problem, then switch to Assisted implementation or Agent steering for Part 2, where the twist often builds on the same logic. And on particularly difficult days, you might choose a more AI-assisted strategy from the start.

Using RustRover’s AI Assistant chat

For this section, I’m talking about direct chat with a model inside RustRover’s AI Assistant, not higher-level agents like Junie. You’re essentially talking to the model about your codebase and puzzle, asking it to:

  • explain parts of your solution or the standard library;
  • suggest refactorings and idiomatic Rust patterns;
  • help with parsing and data wrangling;
  • generate tests or benchmarks.

The goal is to keep you in charge of the solution, while the model helps with “muscle work” and with explanations.

If you want to train prompting as a skill, treat each puzzle as a mini lab:

  • Set constraints explicitly. “I already chose the algorithm: Dijkstra’s algorithm on a grid. Don’t change the approach, just help me implement it idiomatically in Rust.”
  • Provide context. Paste the relevant part of the puzzle description and your current code, and explain what you’re stuck on: “Parsing is done, I now need to maintain a priority queue of states. Help me implement this using BinaryHeap.”
  • Iterate, don’t restart. Instead of “rewrite everything”, use prompts like “Here is the current solution and the bug I see. Propose a minimal fix.”

This way, you’re not just getting answers; you’re practicing how to drive a coding model effectively.
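As an aside on the BinaryHeap prompt above: `BinaryHeap` is a max-heap, so Dijkstra-style code usually wraps priorities in `Reverse`. A minimal sketch, with an invented `(cost, node)` state shape:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// BinaryHeap pops the largest item; wrapping each state in Reverse
// makes it pop the smallest cost first, as Dijkstra requires.
fn pop_in_cost_order() -> Vec<(u32, &'static str)> {
    let mut heap = BinaryHeap::new();
    heap.push(Reverse((5, "b")));
    heap.push(Reverse((2, "a")));
    heap.push(Reverse((9, "c")));

    let mut order = Vec::new();
    while let Some(Reverse(state)) = heap.pop() {
        order.push(state); // cheapest state comes out first
    }
    order
}

fn main() {
    println!("{:?}", pop_in_cost_order());
}
```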

Tweaking inline completion settings

Finally, inline completion can quietly shape the way you write code – sometimes too much. For Advent of Code, consider tuning it to match your chosen strategy:

  • If you’re in Pure human mode, you might want to turn inline completion off completely, or at least make it less aggressive.
  • If you’re in Assisted implementation mode, keep inline completion on, but be disciplined: accept suggestions only for clearly mechanical code (loops, parsing, simple matches), not for the core algorithm.
  • If you’re in Agent steering mode, you can let inline completion be quite active, but you should review what it proposes and ask the chat assistant to explain non-obvious pieces.

The key idea: your RustRover setup should reflect your personal AI training plan for Advent of Code, not the other way around.

Conclusion

Advent of Code remains one of the best ways to sharpen your coding skills, and AI doesn’t have to diminish that experience – it can enhance it when used intentionally. We shouldn’t let AI solve the puzzles for us, but we can absolutely let it help us write better, cleaner, and faster Rust code. The real challenge is choosing which skills you want to train: from algorithms and data structures to testing, visualization, and prompting coding agents effectively. With the right strategy, AoC becomes not just a seasonal tradition but a focused workout for both your problem-solving mind and your AI collaboration skills. RustRover gives you all the knobs and switches you need to fine-tune that strategy, from chat-based assistance to inline completion settings.

Most importantly, Advent of Code is fun – and every puzzle you attempt, no matter how you solve it, makes you a better engineer. So pick your approach, open RustRover, and go solve some puzzles.

Read the whole story
alvinashcraft
7 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

332: 2025 Re:Invent Predictions Draft – May The Odds Be Ever In Your Favor


Welcome to episode 332 of The Cloud Pod – where the forecast is always cloudy! It’s Thanksgiving week, which can only mean one thing: AWS Re:Invent predictions! In this special episode, Justin, Jonathan, Ryan, and Matt engage in the annual tradition of drafting their best guesses for what AWS will announce at the biggest cloud conference of the year. Justin is the reigning champion (probably because he actually reads the show notes), but with a reverse snake draft order determined by dice roll, anything could happen. Will Werner announce his retirement? Is Cognito finally getting a much-needed overhaul? And just how many times will “AI” be uttered on stage? Grab your turkey and let’s get predicting!

Titles we almost went with this week:

  • Roll For Initiative: The Re:Invent Prediction Draft
  • Justin’s Winning Streak: A Study in Actually Doing Your Homework
  • Serverless GPUs and Broken Dreams: Our Re:Invent Wishlist
  • Shooting in the Dark: AWS Predictions Edition
  • We’re Never Good at This, But Here We Go Again
  • Vegas Odds: What Happens at Re:Invent, Gets Predicted Wrong

AWS Re:Invent Predictions 2025

The annual prediction draft is here! Draft order was determined by dice roll: Jonathan first, followed by Ryan, Justin, and Matt in last position. As always, it’s a reverse order format, with points awarded for each correct prediction announced during the Tuesday, Wednesday, and Thursday keynotes.

Jonathan’s Predictions

  1. Serverless GPU Support – An extension to Lambda or a different service that provides on-demand serverless GPU/inference capability. Likely with requirements for pre-warmed provisioned instances.
  2. Agentic Platform for Continuous AI Agents – A service that allows agents to run continuously with goals or instructions, performing actions periodically or on-demand in the real world. Think: running agents on a schedule that can check conditions and take automated actions.
  3. Werner Vogels Retirement Announcement – Werner will announce that this is his last Re:Invent keynote and that he is retiring.

Ryan’s Predictions

  1. New Trainium 3 Chips, Inferentia, and Graviton Chips – New generation of AWS custom silicon across training, inference, and general compute.
  2. Expanded Model Availability in Bedrock – AWS will significantly expand the number of models available in Bedrock, potentially via partnerships or integrations with additional providers.
  3. Major Refresh to AWS Organizations – UI-based or functionality refresh providing better visibility into SCPs, OU mappings, and stack sets across organizations.

Justin’s Predictions

  1. New Nova Model with Multi-modal Support – Launch of Nova Premier or Nova Sonic with multi-modal capabilities, bringing Amazon’s foundational model to the next level.
  2. OpenAI Partnership Announcement – AWS and OpenAI will announce a strategic partnership, potentially bringing OpenAI models to Bedrock (likely announced on stage).
  3. Advanced Agentic AI Capabilities for Security Hub – Enhanced features for Security Hub adding Agentic AI to help automate SOC team operations.

Matt’s Predictions

  1. Model Router for Bedrock – A service to route LLM queries to different AI models, simplifying the process of testing and selecting models for different use cases.
  2. Well-Architected Framework Expansion – New lenses or significant updates to the Well-Architected Framework beyond the existing Generative AI and Sustainability lenses.
  3. End User Authentication That Doesn’t Suck – A new or significantly revamped end-user authentication service (essentially Cognito 2.0) that actually works well for client portals.

Tiebreaker: How Many Times Will “AI” or “Artificial Intelligence” Be Said On Stage?

If we end in a tie (or nobody gets any predictions correct, which is historically possible), we go to the tiebreaker!

  • Matt – 200
  • Justin – 160
  • Ryan – 99
  • Jonathan – 1

Honorable Mentions

Ideas that didn’t make the cut but might just surprise us:

Jonathan:

  • Mathematical proof/verification that text was generated by Amazon’s LLMs (watermarking for AI output)
  • Marketplace for AI work – publish and monetize AI-based tools with Amazon handling billing
  • New consumer device to accompany Nova models (smarter Alexa replacement with local inference)

Ryan:

  • FinOps AI recommender for model usage and cost optimization
  • Savings plans or committed use discounts for Bedrock use cases

Matt:

  • Sustainability/green dashboard improvements
  • AI-specific features for Aurora or DSQL

Justin:

  • Big S3 vectors announcement and integration to Bedrock
  • FinOps service for Kubernetes
  • Amazon Q Developer with autonomous coding agents
  • New GPU architecture combining training/inference/Graviton capabilities
  • Amazon Bedrock model marketplace for revenue share on fine-tuned models

Quick Hits From the Episode

  • 00:02 – Is it really Re:Invent already? The existential crisis begins.
  • 01:44 – Jonathan reveals why Justin always wins: “Because you read the notes.”
  • 02:54 – Matt hasn’t been to a Re:Invent session since Image Builder launched… eight years ago.
  • 05:03 – Jonathan comes in hot with serverless GPU support prediction.
  • 06:57 – The inference vs. training cost debate – where’s the real ROI?
  • 09:30 – Matt’s picks get systematically destroyed by earlier drafters.
  • 14:09 – The OpenAI partnership prediction causes draft chaos.
  • 16:24 – Jonathan drops the Werner retirement bombshell.
  • 19:12 – Justin’s Security Hub prediction: “Please automate the SOC teams.”
  • 19:46 – Everyone hates Cognito. Matt’s prediction resonates with the universe.
  • 21:47 – Tiebreaker time: Jonathan goes with 1 out of pure spite.
  • 24:08 – Honorable mentions include mathematical AI verification and a marketplace for AI work.

Re:Invent Tips (From People Who Aren’t Going)

Since none of us are attending this year, here’s what we remember from the good old days:

  • Chalk Talks remain highly respected and valuable for deep technical content
  • Labs and hands-on sessions are a better use of your time than keynotes you can watch online
  • Networking on the expo floor and in hallways is where the real value happens
  • Don’t try to see everything – focus on what matters to your work
  • Stay hydrated – Vegas is dry and conferences are exhausting

Closing

And that is the week in the cloud! We’re taking Thanksgiving week off, so there won’t be an episode during Re:Invent. We’ll record late that week and have a dedicated Re:Invent recap episode the following week. If you’re heading to Las Vegas, have a great time and let us know how it goes!

Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod





Download audio: https://episodes.castos.com/5e2d2c4b117f29-10227663/2248333/c1e-rodobw3vd7f0wpw1-wwpr3jx7id5x-ua5nu7.mp3

Episode 13: UWP Apps with Daniel Paulino


This week our guest is Daniel Paulino, builder of multiple popular UWP apps like Nightingale REST client and Ambie White Noise. You can find Daniel on GitHub https://github.com/dpaulino and on Bluesky https://bsky.app/profile/kidjenius.bsky.social





Download audio: https://media.transistor.fm/a1784dd0/280d226d.mp3

ESLint v10.0.0-alpha.1 released


Highlights

This version of ESLint is not ready for production use and is provided to gather feedback from the community before releasing the final version. Please let us know if you have any problems or feedback by creating issues on our GitHub repo.

Most of the highlights of this release are breaking changes, and are discussed further in the migration guide. There are summaries of the significant changes below. (Less significant changes are included in the migration guide.)

This prerelease version of ESLint has a separate documentation section.

Removed deprecated SourceCode methods

The following SourceCode methods are no longer available:

  • getTokenOrCommentBefore() - Use getTokenBefore() with the { includeComments: true } option instead
  • getTokenOrCommentAfter() - Use getTokenAfter() with the { includeComments: true } option instead
  • isSpaceBetweenTokens() - Use isSpaceBetween() instead
  • getJSDocComment() - No replacement

Users of plugins that haven’t updated their code yet can use the @eslint/compat utility in the meantime.

Installing

Since this is a pre-release version, you will not automatically be upgraded by npm. You must specify the next tag when installing:

npm i eslint@next --save-dev

You can also specify the version directly:

npm i eslint@10.0.0-alpha.1 --save-dev

Migration Guide

As there are a lot of changes, we’ve created a migration guide describing the breaking changes in great detail along with the steps you should take to address them. We expect that most users should be able to upgrade without any build changes, but the migration guide should be a useful resource if you encounter problems.

Breaking Changes

  • fa31a60 feat!: add name to configs (#20015) (Kirk Waiblinger)
  • 3383e7e fix!: remove deprecated SourceCode methods (#20137) (Pixel998)
  • 501abd0 feat!: update dependency minimatch to v10 (#20246) (renovate[bot])
  • ca4d3b4 fix!: stricter rule tester assertions for valid test cases (#20125) (唯然)

Features

Documentation

Chores

  • 0b14059 chore: package.json update for @eslint/js release (Jenkins)
  • d6e7bf3 ci: bump actions/checkout from 5 to 6 (#20350) (dependabot[bot])
  • 139d456 chore: require mandatory headers in rule docs (#20347) (Milos Djermanovic)
  • 3b0289c chore: remove unused .eslintignore and test fixtures (#20316) (Pixel998)
  • a463e7b chore: update dependency js-yaml to v4 [security] (#20319) (renovate[bot])
  • ebfe905 chore: remove redundant rules from eslint-config-eslint (#20327) (Milos Djermanovic)
  • 88dfdb2 test: add regression tests for message placeholder interpolation (#20318) (fnx)
  • 6ed0f75 chore: skip type checking in eslint-config-eslint (#20323) (Francesco Trotta)

It’s not the AI, it’s what you do with it


Let’s draw a parallel between today’s software engineers and Renaissance painters.

For centuries, painters thrived because society depended on their ability to capture reality. But when a better tool arrived – the camera – the ground beneath them shifted. The craft didn’t disappear, but what society valued in it changed forever.

“We’re facing something similar,” said Tejas Kumar (Developer Advocate, IBM) at the Shift conference in Kuala Lumpur. Citing tools like Cursor, Lovable, Bolt.new, v0, and Windsurf, he reminded developers that coding agents are already writing software faster (and often more reliably) than humans.

Developers must rely on first-principles reasoning

The data backs this up. Job openings across S&P 500 companies dropped sharply after ChatGPT’s release. Yes, market cycles and the zero-interest-rate hiring bubble played their part, but the trend points to something bigger: the profession is being reshaped, and developers need to understand how to stay relevant in the years ahead.

To navigate this landscape, Kumar argued, developers need to lean on first-principles reasoning. The term gets tossed around often in tech circles, but it’s rarely defined clearly. He offered a straightforward explanation:

First-principles reasoning means starting from what is invariant – the parts that never change – and building your understanding from there.

Invariants are the fundamental laws of reality – things like gravity, light, or the rising and setting of the sun. They remain constant, no matter what tools we create.

Returning to the Renaissance analogy, Kumar explained that both painters and cameras are simply different ways of capturing the same invariant: light. The tools change, but the underlying truth stays the same.

This approach, he argued, helps us understand the deep value of AI.

The goal isn’t to cling to the tools we’ve always used but to identify the underlying invariant that AI supports. In this case, it is reclaiming time and human agency – giving developers the freedom to focus on meaning while delegating repetitive work to machines.

The invariant in AI? It gives us back lost time

The point became unmistakable when Tejas demonstrated a multi-step AI agent in real time. Without typing a single keystroke, he watched as the agent opened Chrome, searched for the event schedule, parsed the results, and added the correct session to his calendar.

While my hands were off, what could I have been doing? I could’ve been at the gym. Out on a run. Playing with children I don’t have yet – but pray for every day. I could have been doing something meaningful. Instead, I’ve outsourced this tedious work to my agent – and in return, I get back life.

AI does not simply automate tasks. It returns lost hours and makes room for creativity, rest, curiosity, and focus. That, he argued, is the true invariant AI addresses.


Breakthroughs don’t need new toys – they need new tricks

As the keynote wrapped, Kumar recounted the telescope’s origins. In 1608, Dutch glassmakers used clear spyglasses horizontally to scan the horizon. A year later, Galileo pointed the same tool upward, unlocking new worlds.

“He literally saw Jupiter. He saw Saturn,” Tejas said. “A tool used differently became the telescope we know today.”

This story illustrates a timeless lesson: transformative breakthroughs often come not from inventing new tools, but from using existing ones in unexpected ways. In today’s era of open-source models, MCP servers, frameworks like LangFlow, and an unprecedented supply of freely accessible AI technologies, Kumar posed a question to developers that was simple, but profound:

How are we using these tools and how might we use them differently to achieve more, or even discover entirely new possibilities?

In the age of AI you need to be CREATIVE

Tejas invited developers to embrace the moment rather than fear it. Never before have engineers had access to such a vast array of powerful open-source tools. LangFlow itself, he reminded the audience, is fully MIT-licensed and easy to self-host, letting anyone build scalable agents through a visual interface.

But his message went beyond tools or licenses. It was a call to creativity, a call to agency – a reminder for developers to lift their gaze, imagine new possibilities, and see where these tools can take us when used in unexpected ways.

The post It’s not the AI, it’s what you do with it appeared first on ShiftMag.


Lightning-as-a-service for agriculture

Darryl Lyons, co-founder and Chief Rainmaker at Rainstick, joins the show to dive into advancements in AgTech and how Rainstick is using bioelectricity to enhance agricultural productivity.