Kotlin 2.3.0-RC is out!
Kotlin 2.3.0-RC has landed! Learn about the new features, explore the improvements, and get ready for the upcoming stable release.
This is a submission for the AI Challenge for Cross-Platform Apps - WOW Factor
I built a "WOW Factor" home screen for a premium, artisan coffee shop called the "Artisan Brew House."
The goal was to create a design that feels modern, cozy, and truly premium, moving far beyond a basic layout. The app uses a dark, multi-layered gradient background with warm, coffee-toned highlights to create a stunning visual vibe.
The entire UI is built 100% from XAML code, using LinearGradientBrush for backgrounds, Border with CornerRadius for "glass" and "card" effects, and FontIcon for all iconography (no images were used).
Here is the full-page screenshot of the app running on Windows Desktop, built from a single Uno Platform codebase.
For this submission, I focused on building and running the app on Windows Desktop (using the net10.0-desktop framework).
The power of Uno Platform is that this single XAML codebase is designed to be cross-platform, allowing it to be compiled for Android, iOS, macOS, Linux, and WebAssembly with minimal effort.
The design is highly creative and interactive, focusing on a premium user experience:
A ScrollViewer contains "Popular Brew" cards. Each card uses a RadialGradientBrush and FontIcon to create a beautiful, minimalist visual for each drink. Semi-transparent "glass" buttons (Background="#22FFFFFF") allow quick access to "My Rewards" and "Find Store."
The "WOW Factor" of this app is its world-class, premium design achieved entirely with 100% XAML code.
It proves that you don't need to rely on static images to create a rich, visually stunning, and modern UI. The use of multiple gradients, "glass" borders, layered elements (like the "Hero Card"), and fluid layout creates an experience that truly makes the user say "Wow."
The Syntax team brings us their annual Holiday Gift Guide! They’ve curated the best gadgets, tools, food, and even kitchen essentials for the dev in your life — plus a few treats anyone would love to unwrap.
I’ve gathered the latest Kotlin highlights for you – from the Kotlin Reddit AMA and documentation updates to learning programs and Google Summer of Code 2025 projects. Whether you’re here to stay up to date or just looking for something interesting to explore, there’s plenty to dive into.
In this special episode, Dawid Dahl introduces Augmented AI Development (AAID)—a disciplined approach where professional developers augment their capabilities with AI while maintaining full architectural control. He explains why starting with software engineering fundamentals and adding AI where appropriate is the opposite of most frameworks, and why this approach produces production-grade software rather than technical debt.
"Two of the fundamental developer principles for AAID are: first, don't abandon your brain. And the second is incremental steps."
Dawid's Augmented AI Development framework stands in stark contrast to "vibecoding"—which he defines strictly as not caring about code at all, only results on screen. AAID is explicitly designed for professional developers who maintain full understanding and control of their systems. The framework is positioned on the furthest end of the spectrum from vibe coding, requiring developers to know their craft deeply. The two core principles—don't abandon your brain, work incrementally—reflect a philosophy that AI is a powerful collaborator, not a replacement for thinking. This approach recognizes that while 96% of Dawid's code is now written by AI, he remains the architect, constantly steering and verifying every step.
In this segment we refer to Marcus Hammarberg's work and his book The Bungsu Story.
"You should start with software engineering wisdom, and then only add AI where it's actually appropriate. I think this is super, super important, and the entire foundation of this framework. This is a hill I will personally die on."
What makes AAID fundamentally different from other AI-assisted development frameworks is its starting point. Most frameworks start with AI capabilities and try to add structure and best practices afterward. Dawid argues this is completely backwards. AAID begins with 50-60 years of proven software engineering wisdom—test-driven development, behavior-driven development, continuous delivery—and only then adds AI where it enhances the process. This isn't a minor philosophical difference; it's the foundation of producing maintainable, production-grade software. Dawid admits he's sometimes "manipulating developers to start using good, normal software engineering practices, but in this shiny AI box that feels very exciting and new." If the AI wrapper helps developers finally adopt TDD and BDD, he's fine with that.
"Every time I prompt an AI and it writes code for me, there is often at least one or two or three mistakes that will cause catastrophic mistakes down the line and make the software impossible to change."
Test-driven development isn't just a nice-to-have in AAID—it's essential. Dawid has observed that AI consistently makes 2-3 mistakes per prompt that could have catastrophic consequences later. Without TDD's red-green-refactor cycle, these errors accumulate, making code increasingly difficult to change. TDD answers the question "Is my code technically correct?" while acceptance tests answer "Is the system releasable?" Both are needed for production-grade software. The refactor step is where 50-60 years of software engineering wisdom gets applied to make code maintainable. This matters because AAID isn't vibe coding—developers care deeply about code quality, not just visible results. Good software, as Dave Farley says, is software that's easy to change. Without TDD, AI-generated code becomes a maintenance nightmare.
"When I hear 'our AI can now code for over 30 hours straight without stopping,' I get very afraid. You fall asleep, and the next morning, the code is done. Maybe the tests are green. But what has it done in there? Imagine everything it does for 30 hours. This system will not work."
Dawid sees two diverging paths for AI-assisted development's future. The first—autonomous agents working for hours or days without supervision—terrifies him. The marketing pitch sounds appealing: prompt the AI, go to sleep, wake up to completed features. But the reality is technical debt accumulation at scale. Imagine all the decisions, all the architectural choices, all the mistakes an AI makes over 30 hours of autonomous work. Dawid advocates for the stark contrast: working in extremely small increments with constant human steering, always aligned to specifications. His vision of the future isn't AI working alone—it's voice-controlled confirmations where he says "Yes, yes, no, yes" as AI proposes each tiny change. This aligns with DORA metrics showing that high-performing teams work in small batches with fast feedback loops.
"Without Dave Farley, this framework would be totally different. I think he does everything right, basically. With this framework, I want to stand on the shoulders of giants and work on top of what has already been done."
AAID explicitly requires product discovery and specification phases before AI-assisted coding begins. This is based on Dave Farley's product journey model, which shows how products move from idea to production. AAID starts at the "executable specifications" stage—it requires input specifications from prior discovery work. This separates specification creation (which Dawid is addressing in a separate "Dream Encoder" framework) from code execution. The prerequisite isn't arbitrary; it acknowledges that AI-assisted implementation works best when the problem is well-defined. This "standing on shoulders of giants" approach means AAID doesn't try to reinvent software engineering—it leverages decades of proven practices from TDD pioneers, BDD creators, and continuous delivery experts.
"When the AI decides to check the box [in task lists], that means this is the definition of done. But how is the AI taking that decision? It's totally ad hoc. It's like going back to the 1980s: 'I wrote the code, I'm done.' But what does that mean? Nobody has any idea."
Dawid is critical of current AI frameworks like SpecKit, pointing out fundamental flaws. They start with AI first and try to add structure later (backwards approach). They use task lists with checkboxes where AI decides when something is "done"—but without clear criteria, this becomes ad hoc decision-making reminiscent of 1980s development practices. These frameworks "vibecode the specs," not realizing there's a structured taxonomy to specifications that BDD already solved. Most concerning, some have removed testing as a "feature," treating it as optional. Dawid sees these frameworks as over-engineered, process-centric rather than developer-centric, often created by people who may not develop software themselves. AAID, in contrast, is built by a practicing developer solving real problems daily.
"The first thing developers should do is learn the fundamentals. They should skip AI altogether and learn about BDD and TDD, just best practices. But when you know that, then you can look into a framework, maybe like mine."
Dawid's advice for developers interested in AI-assisted coding might seem counterintuitive: start by learning fundamentals without AI. Master behavior-driven development, test-driven development, and software engineering best practices first. Only after understanding these foundations should developers explore frameworks like AAID. This isn't gatekeeping—it's recognizing that AI amplifies whatever approach developers bring. If they start with poor practices, AI will help them build unmaintainable systems faster. But if they start with solid fundamentals, AI becomes a powerful multiplier that lets them work at unprecedented speed while maintaining quality. AAID offers both a dense technical article on dev.to and a gentler game-like onboarding in the GitHub repo, meeting developers wherever they are in their journey.
About Dawid Dahl
Dawid is the creator of Augmented AI Development (AAID), a disciplined approach where developers augment their capabilities by integrating with AI, while maintaining full architectural control. Dawid is a software engineer at Umain, a product development agency.
You can link with Dawid Dahl on LinkedIn and find the AAID framework on GitHub.
The pressure is on. Every conference, every tech blog, every corner of the internet is buzzing with AI agents, autonomous workflows, and the promise of a revolution powered by large language models (LLMs). As a developer, it’s easy to feel like you need to integrate AI into every feature and deploy agents for every task.
But what if the smartest move isn’t to use AI, but to know when not to?
This isn’t a contrarian take for the sake of it; it’s a call for a return to engineering pragmatism. The current hype cycle often encourages us to reach for the most complex, exciting tool in the box, even when a simple screwdriver would do the job better. Just as you wouldn’t spin up a Kubernetes cluster to host a static landing page, you shouldn’t use a powerful, probabilistic LLM for a task that is, and should be, deterministic.
This post is a guide to cutting through the noise. We’ll explore why using AI indiscriminately is an anti-pattern, and lay out a practical framework for deciding when and how to use AI and agents effectively. All of this will help ensure you’re building solutions that are robust, efficient, and cost-effective.
Would you use a cloud-based AI service to solve 1 + 1? Of course not. It sounds absurd. Yet, many of the AI implementations I see today are the developer equivalent of that very question. We’re so mesmerized by what AI can do that we forget to ask if it should do it.
Using an LLM for a simple, well-defined task is a classic case of over-engineering. It introduces three significant penalties that deterministic, traditional code avoids.
Cost
API calls to powerful models like GPT-5 or Claude 4.5 are not free. Let’s say you need to validate if a user’s input is a valid email address. You could send this string to an LLM with a prompt like, “Is the following a valid email address? Answer with only ‘true’ or ‘false’.”
A simple regex in JavaScript, however, is free and executes locally in microseconds.
// Validates the basic shape of an email address locally, with no API call.
function isValidEmail(email) {
  const regex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return regex.test(email);
}
Now, imagine this check runs on every keystroke in a form for thousands of users. The cost of the LLM approach quickly spirals from negligible to substantial. The regex remains free, forever.
Latency
Every API call to an LLM is a network round-trip. It involves sending your prompt, waiting for the model to process it, and receiving the response. For the user, this translates to noticeable lag.
Consider a simple data transformation: converting a string from snake_case to camelCase. A local function is instantaneous. An LLM call could take anywhere from 300 milliseconds to several seconds. In a world where user experience is paramount, introducing such a bottleneck for a trivial task is a step backward.
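The local version of that transformation is a one-liner. As a minimal sketch:

```javascript
// Convert snake_case identifiers to camelCase locally -- no network round-trip.
function snakeToCamel(str) {
  return str.replace(/_([a-z0-9])/g, (_, ch) => ch.toUpperCase());
}

console.log(snakeToCamel("user_account_id")); // "userAccountId"
```

It runs in microseconds on every keystroke, with zero marginal cost, which is exactly the point of the comparison.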
Reliability
This is the most critical issue for developers. LLMs are probabilistic; your code should be deterministic. When you run a function, you expect the same input to produce the same output, every single time. This is the bedrock of reliable software.
LLMs don’t offer that guarantee. They can hallucinate or misinterpret context. If your AI-powered email validator suddenly decides a valid email is invalid, you have a bug that is maddeningly difficult to reproduce and debug. For core application logic, you need 100% reliability.
So, if we shouldn’t use AI for simple, deterministic tasks, what is it actually good for?
The answer is simple: use AI for problems that are difficult or impossible to solve with traditional code. LLMs excel where logic is fuzzy, data is unstructured, and the goal is generation or interpretation, not calculation.
The guiding principle should be: Use deterministic code for deterministic problems. Use probabilistic models for probabilistic problems.
This applies to both in-application logic and our own development processes.
Here are a few areas where integrating an LLM into your application is the right tool for the job: summarizing unstructured text, extracting entities from free-form documents, classifying sentiment, powering conversational interfaces, and generating first-draft content.
Beyond integrating AI into your applications, remember its immense utility as a personal productivity tool: drafting code, generating tests, summarizing unfamiliar codebases, and writing documentation. Using AI for these auxiliary tasks boosts your efficiency without introducing its probabilistic nature into your core application logic.
The hype around AI gets even more intense when we talk about AI agents. An agent is often presented as a magical entity that can autonomously use tools, browse the web, and solve complex problems with a single prompt.
This autonomy is both a great strength and a significant risk. Letting an LLM decide which tool to use or determine next steps introduces another layer of non-determinism. What if it chooses the wrong tool? What if it gets stuck in a loop, burning through your budget?
Before jumping to a fully autonomous agent, we should look at a spectrum of patterns that offer more structure and reliability. In their excellent article, “Building effective agents,” the team at Anthropic draws a crucial architectural distinction:
Workflows are systems where LLMs and tools are orchestrated through predefined code paths, while agents are systems where the LLM dynamically directs its own processes and tool usage.
The key takeaway is to start with the simplest solution and only add complexity when necessary. The image below visualizes this spectrum, moving from predictable workflows to inference-driven agents:
Before building complex workflows, we need to understand the fundamental unit: what Anthropic calls the Augmented LLM. This isn’t just a base model; it’s an LLM enhanced with external capabilities. The two most important augmentations are:
These two building blocks—Tool Use and RAG—are the foundation upon which more sophisticated and reliable systems are built.
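As a rough sketch of the idea (the `callModel` function and tool registry below are hypothetical stubs, not a real SDK), an augmented LLM is just a model call wrapped with a tool-dispatch step:

```javascript
// Sketch of an "augmented LLM": the model's reply may request a tool,
// which we execute locally before producing the final answer.
// `callModel` is a stub standing in for a real LLM API call.
const tools = {
  getOrderStatus: (orderId) => ({ orderId, status: "shipped" }),
};

function callModel(prompt) {
  // Stubbed model: pretend it requests a tool when it sees an order ID.
  const match = prompt.match(/order (\w+)/);
  if (match) return { toolCall: { name: "getOrderStatus", args: [match[1]] } };
  return { text: "No tool needed." };
}

function augmentedLLM(prompt) {
  const reply = callModel(prompt);
  if (reply.toolCall) {
    const result = tools[reply.toolCall.name](...reply.toolCall.args);
    return `Tool result: ${JSON.stringify(result)}`;
  }
  return reply.text;
}

console.log(augmentedLLM("Where is order A42?"));
```

A real implementation would loop, feeding the tool result back to the model, but the control flow stays in your code, which is what keeps the pattern predictable.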
Now, let’s see how these building blocks can be assembled into the structured, predictable workflows that Anthropic recommends as alternatives to fully autonomous agents.
The simplest multi-step pattern. A task is broken down into a fixed sequence of steps, where the output of one LLM call becomes the input for the next. A step within this chain could absolutely involve the LLM using a tool or performing a RAG query.
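Structurally, prompt chaining is just a fold over a fixed list of steps. In this sketch, each `step` function stands in for one LLM call (the pipeline stages are hypothetical examples, not a real API):

```javascript
// Prompt chaining: a fixed sequence where each step's output feeds the next.
function chain(input, steps) {
  return steps.reduce((acc, step) => step(acc), input);
}

// Hypothetical pipeline: outline -> draft -> summary.
const outline = (topic) => `Outline for: ${topic}`;
const draft = (o) => `${o} | Draft written`;
const summarize = (d) => `${d} | Summarized`;

console.log(chain("onboarding email", [outline, draft, summarize]));
// "Outline for: onboarding email | Draft written | Summarized"
```

Because the sequence is fixed in code, you can log, test, and retry each step independently, which is what makes this more reliable than a free-running agent.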
In this pattern, an initial LLM call classifies an input and directs it to one of several specialized, downstream workflows. This is perfect for customer support, where you might route a query to a “refund process” (which uses a tool to access an orders database) or a “technical support” handler (which uses RAG to search documentation).
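A minimal sketch of routing, with a keyword stub standing in for the classifier LLM call (the handler names are illustrative, not from any real system):

```javascript
// Routing: a classifier sends each query to a specialized handler.
// In production the classifier would be an LLM call; here it is a stub.
function classify(query) {
  return /refund|money back/i.test(query) ? "refund" : "technical";
}

const handlers = {
  refund: (q) => `Refund workflow handling: "${q}"`,
  technical: (q) => `Tech support searching docs for: "${q}"`,
};

function route(query) {
  return handlers[classify(query)](query);
}

console.log(route("I want my money back"));
```

Only the classification step is probabilistic; once a route is chosen, each handler is ordinary deterministic code.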
This workflow involves running multiple LLM calls simultaneously and aggregating their outputs. For instance, you could have one LLM call use RAG to find relevant policy documents while another call uses a tool to check the user’s account status.
A central “orchestrator” LLM breaks down a complex task and delegates sub-tasks to specialized “worker” LLMs, which in turn use the tools and retrieval capabilities necessary to complete their specific job.
This creates an iterative refinement loop. One LLM call generates a response, while a second “evaluator” LLM provides feedback. The first LLM then uses this feedback to improve its response. The evaluator’s criteria could be informed by information retrieved via RAG.
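The loop itself can be sketched in a few lines. Both `generate` and `evaluate` below are stubs standing in for LLM calls, and the feedback rule is invented for illustration:

```javascript
// Evaluator-optimizer: one call generates, a second scores the result,
// and the loop repeats until the evaluator is satisfied or a cap is hit.
function generate(prompt, feedback) {
  return feedback ? `${prompt} (revised: ${feedback})` : prompt;
}

function evaluate(draft) {
  // Stub evaluator: accepts any draft that has been revised once.
  return draft.includes("revised")
    ? { ok: true }
    : { ok: false, feedback: "cite a source" };
}

function refineLoop(prompt, maxRounds = 3) {
  let draft = generate(prompt, null);
  for (let i = 0; i < maxRounds; i++) {
    const verdict = evaluate(draft);
    if (verdict.ok) return draft;
    draft = generate(prompt, verdict.feedback);
  }
  return draft; // best effort after maxRounds
}

console.log(refineLoop("Explain caching"));
```

The `maxRounds` cap matters: it bounds cost and prevents the generate-evaluate pair from looping indefinitely, the exact failure mode autonomous agents risk.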
Before you write import openai, pause and ask yourself a few questions: Is the task deterministic? Could a simple function or an existing library handle it? What would latency, API cost, and an occasional wrong answer cost you? The goal is not to use AI; the goal is to solve a problem efficiently.
Being a great developer isn’t about chasing every new trend. It’s about building robust, efficient, and valuable software. AI and agents are incredibly powerful tools, but they are just that: tools. True innovation comes from knowing your toolbox inside and out and picking the right tool for the job.
Focus on the customer’s pain points. Solve them in the most efficient and reliable way possible. Sometimes that will involve a cutting-edge LLM, but often, it will be a simple, elegant piece of deterministic code. That is the path to building things that last.
The post You don’t need AI for everything: A reality check for developers appeared first on LogRocket Blog.