Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Artisan Brew House: A "WOW Factor" UI built with Uno Platform


This is a submission for the AI Challenge for Cross-Platform Apps - WOW Factor

What I Built

I built a "WOW Factor" home screen for a premium, artisan coffee shop called the "Artisan Brew House."

The goal was to create a design that feels modern, cozy, and truly premium, moving far beyond a basic layout. The app uses a dark, multi-layered gradient background with warm, coffee-toned highlights to create a stunning visual vibe.

The entire UI is built 100% from XAML code, using LinearGradientBrush for backgrounds, Border with CornerRadius for "glass" and "card" effects, and FontIcon for all iconography (no images were used).

Demo

Here is the full-page screenshot of the app running on Windows Desktop, built from a single Uno Platform codebase.


Cross-Platform Magic

For this submission, I focused on building and running the app on Windows Desktop (using the net10.0-desktop framework).

The power of Uno Platform is that this single XAML codebase is designed to be cross-platform, allowing it to be compiled for Android, iOS, macOS, Linux, and WebAssembly with minimal effort.

Interactive Features

The design is highly creative and interactive, focusing on a premium user experience:

  • Brand Identity: A strong, elegant header section that establishes the "Artisan Brew House" brand.
  • "Today's Masterpiece": Instead of a boring list, I designed a "Hero Card" that showcases the main special brew. It features a glowing icon, descriptive tags, and a prominent "Order Now" button.
  • "Most Loved" Carousel: A horizontal ScrollViewer contains "Popular Brew" cards. Each card uses a RadialGradientB ![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ret6780m4ko8ol9b9x71.png)rush and FontIcon to create a beautiful, minimalist visual for each drink.
  • Quick Access Cards: At the bottom, two "glass effect" cards (Background="#22FFFFFF") allow for quick access to "My Rewards" and "Find Store."
  • Main CTA: A final "Explore Full Menu" button with a gradient background serves as the main call-to-action for the entire screen.

The Wow Factor

The "WOW Factor" of this app is its world-class, premium design achieved entirely with 100% XAML code.

It proves that you don't need to rely on static images to create a rich, visually stunning, and modern UI. The use of multiple gradients, "glass" borders, layered elements (like the "Hero Card"), and fluid layout creates an experience that truly makes the user say "Wow."

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

958: 2025 Holiday Gift Guide


The Syntax team brings us their annual Holiday Gift Guide! They’ve curated the best gadgets, tools, food, and even kitchen essentials for the dev in your life — plus a few treats anyone would love to unwrap.

Show Notes

Hit us up on Socials!

Syntax: X Instagram Tiktok LinkedIn Threads

Wes: X Instagram Tiktok LinkedIn Threads

Scott: X Instagram Tiktok LinkedIn Threads

Randy: X Instagram YouTube Threads





Download audio: https://traffic.megaphone.fm/FSI1409561011.mp3

How vibe coding can make your software headaches worse, experts warn

What may start as 'move fast and break things' too often becomes 'move fast and break everything, then spend a fortune rebuilding it.'

Kodee’s Kotlin Roundup: Too Many News to Keep Quiet About


I’ve gathered the latest Kotlin highlights for you – from the Kotlin Reddit AMA and documentation updates to learning programs and Google Summer of Code 2025 projects. Whether you’re here to stay up to date or just looking for something interesting to explore, there’s plenty to dive into.

Where You Can Learn More

YouTube Highlights


AI Assisted Coding: Augmented AI Development - Software Engineering First, AI Second With Dawid Dahl


BONUS: Augmented AI Development - Software Engineering First, AI Second

In this special episode, Dawid Dahl introduces Augmented AI Development (AAID)—a disciplined approach where professional developers augment their capabilities with AI while maintaining full architectural control. He explains why starting with software engineering fundamentals and adding AI where appropriate is the opposite of most frameworks, and why this approach produces production-grade software rather than technical debt.

The AAID Philosophy: Don't Abandon Your Brain

"Two of the fundamental developer principles for AAID are: first, don't abandon your brain. And the second is incremental steps."

 

Dawid's Augmented AI Development framework stands in stark contrast to "vibecoding"—which he defines strictly as not caring about code at all, only results on screen. AAID is explicitly designed for professional developers who maintain full understanding and control of their systems. The framework is positioned on the furthest end of the spectrum from vibe coding, requiring developers to know their craft deeply. The two core principles—don't abandon your brain, work incrementally—reflect a philosophy that AI is a powerful collaborator, not a replacement for thinking. This approach recognizes that while 96% of Dawid's code is now written by AI, he remains the architect, constantly steering and verifying every step.

In this segment we refer to Marcus Hammarberg's work and his book The Bungsu Story.

Software Engineering First, AI Second: A Hill to Die On

"You should start with software engineering wisdom, and then only add AI where it's actually appropriate. I think this is super, super important, and the entire foundation of this framework. This is a hill I will personally die on."

 

What makes AAID fundamentally different from other AI-assisted development frameworks is its starting point. Most frameworks start with AI capabilities and try to add structure and best practices afterward. Dawid argues this is completely backwards. AAID begins with 50-60 years of proven software engineering wisdom—test-driven development, behavior-driven development, continuous delivery—and only then adds AI where it enhances the process. This isn't a minor philosophical difference; it's the foundation of producing maintainable, production-grade software. Dawid admits he's sometimes "manipulating developers to start using good, normal software engineering practices, but in this shiny AI box that feels very exciting and new." If the AI wrapper helps developers finally adopt TDD and BDD, he's fine with that.

Why TDD is Non-Negotiable with AI

"Every time I prompt an AI and it writes code for me, there is often at least one or two or three mistakes that will cause catastrophic mistakes down the line and make the software impossible to change."

 

Test-driven development isn't just a nice-to-have in AAID—it's essential. Dawid has observed that AI consistently makes 2-3 mistakes per prompt that could have catastrophic consequences later. Without TDD's red-green-refactor cycle, these errors accumulate, making code increasingly difficult to change. TDD answers the question "Is my code technically correct?" while acceptance tests answer "Is the system releasable?" Both are needed for production-grade software. The refactor step is where 50-60 years of software engineering wisdom gets applied to make code maintainable. This matters because AAID isn't vibe coding—developers care deeply about code quality, not just visible results. Good software, as Dave Farley says, is software that's easy to change. Without TDD, AI-generated code becomes a maintenance nightmare.

The Problem with "Prompt and Pray" Autonomous Agents

"When I hear 'our AI can now code for over 30 hours straight without stopping,' I get very afraid. You fall asleep, and the next morning, the code is done. Maybe the tests are green. But what has it done in there? Imagine everything it does for 30 hours. This system will not work."

 

Dawid sees two diverging paths for AI-assisted development's future. The first—autonomous agents working for hours or days without supervision—terrifies him. The marketing pitch sounds appealing: prompt the AI, go to sleep, wake up to completed features. But the reality is technical debt accumulation at scale. Imagine all the decisions, all the architectural choices, all the mistakes an AI makes over 30 hours of autonomous work. Dawid advocates for the stark contrast: working in extremely small increments with constant human steering, always aligned to specifications. His vision of the future isn't AI working alone—it's voice-controlled confirmations where he says "Yes, yes, no, yes" as AI proposes each tiny change. This aligns with DORA metrics showing that high-performing teams work in small batches with fast feedback loops.

Prerequisites: Product Discovery Must Come First

"Without Dave Farley, this framework would be totally different. I think he does everything right, basically. With this framework, I want to stand on the shoulders of giants and work on top of what has already been done."

 

AAID explicitly requires product discovery and specification phases before AI-assisted coding begins. This is based on Dave Farley's product journey model, which shows how products move from idea to production. AAID starts at the "executable specifications" stage—it requires input specifications from prior discovery work. This separates specification creation (which Dawid is addressing in a separate "Dream Encoder" framework) from code execution. The prerequisite isn't arbitrary; it acknowledges that AI-assisted implementation works best when the problem is well-defined. This "standing on shoulders of giants" approach means AAID doesn't try to reinvent software engineering—it leverages decades of proven practices from TDD pioneers, BDD creators, and continuous delivery experts.

What's Wrong with Other AI Frameworks

"When the AI decides to check the box [in task lists], that means this is the definition of done. But how is the AI taking that decision? It's totally ad hoc. It's like going back to the 1980s: 'I wrote the code, I'm done.' But what does that mean? Nobody has any idea."

 

Dawid is critical of current AI frameworks like SpecKit, pointing out fundamental flaws. They start with AI first and try to add structure later (backwards approach). They use task lists with checkboxes where AI decides when something is "done"—but without clear criteria, this becomes ad hoc decision-making reminiscent of 1980s development practices. These frameworks "vibecode the specs," not realizing there's a structured taxonomy to specifications that BDD already solved. Most concerning, some have removed testing as a "feature," treating it as optional. Dawid sees these frameworks as over-engineered, process-centric rather than developer-centric, often created by people who may not develop software themselves. AAID, in contrast, is built by a practicing developer solving real problems daily.

Getting Started: Learn Fundamentals First

"The first thing developers should do is learn the fundamentals. They should skip AI altogether and learn about BDD and TDD, just best practices. But when you know that, then you can look into a framework, maybe like mine."

 

Dawid's advice for developers interested in AI-assisted coding might seem counterintuitive: start by learning fundamentals without AI. Master behavior-driven development, test-driven development, and software engineering best practices first. Only after understanding these foundations should developers explore frameworks like AAID. This isn't gatekeeping—it's recognizing that AI amplifies whatever approach developers bring. If they start with poor practices, AI will help them build unmaintainable systems faster. But if they start with solid fundamentals, AI becomes a powerful multiplier that lets them work at unprecedented speed while maintaining quality. AAID offers both a dense technical article on dev.to and a gentler game-like onboarding in the GitHub repo, meeting developers wherever they are in their journey.

 

About Dawid Dahl

 

Dawid is the creator of Augmented AI Development (AAID), a disciplined approach where developers augment their capabilities by integrating with AI, while maintaining full architectural control. Dawid is a software engineer at Umain, a product development agency.

 

You can link with Dawid Dahl on LinkedIn and find the AAID framework on GitHub.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251126_Dawid_Dahl_W.mp3?dest-id=246429

You don’t need AI for everything: A reality check for developers


The pressure is on. Every conference, every tech blog, every corner of the internet is buzzing with AI agents, autonomous workflows, and the promise of a revolution powered by large language models (LLMs). As a developer, it’s easy to feel like you need to integrate AI into every feature and deploy agents for every task.


But what if the smartest move isn’t to use AI, but to know when not to?

This isn’t a contrarian take for the sake of it; it’s a call for a return to engineering pragmatism. The current hype cycle often encourages us to reach for the most complex, exciting tool in the box, even when a simple screwdriver would do the job better. Just as you wouldn’t spin up a Kubernetes cluster to host a static landing page, you shouldn’t use a powerful, probabilistic LLM for a task that is, and should be, deterministic.

This post is a guide to cutting through the noise. We’ll explore why using AI indiscriminately is an anti-pattern, and lay out a practical framework for deciding when and how to use AI and agents effectively. All of this will help ensure you’re building solutions that are robust, efficient, and cost-effective.

The “1+1” problem: AI over-engineering

Would you use a cloud-based AI service to solve 1 + 1? Of course not. It sounds absurd. Yet, many of the AI implementations I see today are the developer equivalent of that very question. We’re so mesmerized by what AI can do that we forget to ask if it should do it.

Using an LLM for a simple, well-defined task is a classic case of over-engineering. It introduces three significant penalties that deterministic, traditional code avoids.

1. The cost penalty 💰

API calls to powerful models like GPT-5 or Claude 4.5 are not free. Let’s say you need to validate if a user’s input is a valid email address. You could send this string to an LLM with a prompt like, “Is the following a valid email address? Answer with only ‘true’ or ‘false’.”

A simple regex in JavaScript, however, is free and executes locally in microseconds.

// Deterministic email check: same input, same output, no API call.
function isValidEmail(email) {
  const regex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return regex.test(email);
}

Now, imagine this check runs on every keystroke in a form for thousands of users. The cost of the LLM approach quickly spirals from negligible to substantial. The regex remains free, forever.

2. The latency penalty ⏳

Every API call to an LLM is a network round-trip. It involves sending your prompt, waiting for the model to process it, and receiving the response. For the user, this translates to noticeable lag.

Consider a simple data transformation: converting a string from snake_case to camelCase. A local function is instantaneous. An LLM call could take anywhere from 300 milliseconds to several seconds. In a world where user experience is paramount, introducing such a bottleneck for a trivial task is a step backward.
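That local function is trivial to write. Here is a minimal sketch of the snake_case-to-camelCase conversion, which runs in microseconds with no network round-trip:

```javascript
// Convert snake_case to camelCase locally — no API call, no latency.
function toCamelCase(str) {
  return str.replace(/_([a-z])/g, (_, letter) => letter.toUpperCase());
}

toCamelCase("user_first_name"); // → "userFirstName"
```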

3. The reliability penalty 🎲

This is the most critical issue for developers. LLMs are probabilistic; your code should be deterministic. When you run a function, you expect the same input to produce the same output, every single time. This is the bedrock of reliable software.

LLMs don’t offer that guarantee. They can hallucinate or misinterpret context. If your AI-powered email validator suddenly decides a valid email is invalid, you have a bug that is maddeningly difficult to reproduce and debug. For core application logic, you need 100% reliability.

How to find AI’s sweet spot

So, if we shouldn’t use AI for simple, deterministic tasks, what is it actually good for?

The answer is simple: use AI for problems that are difficult or impossible to solve with traditional code. LLMs excel where logic is fuzzy, data is unstructured, and the goal is generation or interpretation, not calculation.

The guiding principle should be: Use deterministic code for deterministic problems. Use probabilistic models for probabilistic problems.

This applies to both in-application logic and our own development processes.

AI in your application

Here are a few areas where integrating an LLM into your application is the right tool for the job:

  • Analyzing unstructured data: Data analysis and AI are a great match. You can’t write a regex to understand the sentiment of a thousand user reviews. But an LLM can parse them, identify common themes, and provide a nuanced summary in seconds.
  • Natural language interfaces: Building a chatbot that understands human language is a classic AI problem. It allows users to send requests like, “show me last quarter’s sales reports for the European region” instead of navigating complex filter menus.
  • Generative and creative tasks: AI is unparalleled at generating content, from marketing emails to product descriptions. In these cases, the AI acts as a powerful assistant, not as a critical piece of application logic.

AI as your developer tool

Beyond integrating AI into your applications, remember its immense utility as a personal productivity tool. Using AI for these auxiliary tasks boosts your efficiency without introducing its probabilistic nature into your core application logic. You can use LLMs to:

  • Brainstorm ideas: Stuck on an architecture decision? Ask an LLM for different approaches and their pros and cons.
  • Generate boilerplate code: Need a quick function in a new language or a basic React component? LLMs can draft it in seconds.
  • Write documentation: Have a complex piece of code? Ask an LLM to explain it or write the initial draft of its documentation.
  • Draft communications: From rejection emails for job applicants to internal team updates, LLMs can help you craft clear and concise messages.

The agent hype train: Finding a more structured path

The hype around AI gets even more intense when we talk about AI agents. An agent is often presented as a magical entity that can autonomously use tools, browse the web, and solve complex problems with a single prompt.

This autonomy is both a great strength and a significant risk. Letting an LLM decide which tool to use or determine next steps introduces another layer of non-determinism. What if it chooses the wrong tool? What if it gets stuck in a loop, burning through your budget?

Before jumping to a fully autonomous agent, we should look at a spectrum of patterns that offer more structure and reliability. In their excellent article, “Building effective agents,” the team at Anthropic draws a crucial architectural distinction:

  • Workflows are systems where LLMs and tools are orchestrated through predefined code paths.
  • Agents are systems where LLMs dynamically direct their own processes and tool usage.
(Image: AI automation workflow. Credit: Zapier)

The key takeaway is to start with the simplest solution and only add complexity when necessary. The image below visualizes this spectrum, moving from predictable workflows to inference-driven agents:

The building block: The Augmented LLM

Before building complex workflows, we need to understand the fundamental unit: what Anthropic calls the Augmented LLM. This isn’t just a base model; it’s an LLM enhanced with external capabilities. The two most important augmentations are:

  1. Tool Use (function calling): This gives the LLM the ability to interact with the outside world. You define a set of available “tools” (your own functions or external APIs) with clear descriptions. The LLM doesn’t execute the code itself. Instead, it generates a structured JSON object specifying which tool it wants to use and with what arguments. Your application then receives this request, executes the deterministic code, and returns the result to the LLM. This gives you the best of both worlds: the LLM’s reasoning and your code’s reliability.
  2. Retrieval (RAG): Retrieval-Augmented Generation (RAG) is a pattern designed to combat hallucinations and ground an LLM in specific facts. Instead of relying on its training data, the system first retrieves relevant documents from a trusted source (like your company’s knowledge base) and injects them into the prompt as context. This follows a simple, powerful flow: Search → Augment → Generate.

These two building blocks—Tool Use and RAG—are the foundation upon which more sophisticated and reliable systems are built.
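The tool-use loop described above can be sketched in a few lines. The model call below is mocked with a canned structured response (a real integration would call a provider SDK), and the tool name and argument shape are hypothetical:

```javascript
// Hypothetical tool registry: plain deterministic functions your app controls.
const tools = {
  getOrderStatus: ({ orderId }) => ({ orderId, status: "shipped" }),
};

// Mocked model call — stands in for a real LLM SDK. It returns a structured
// tool request; it never executes code itself.
function mockLLM(prompt) {
  return { tool: "getOrderStatus", args: { orderId: "A-1001" } };
}

// The application, not the model, runs the deterministic code.
function runWithTools(prompt) {
  const request = mockLLM(prompt);
  const tool = tools[request.tool];
  if (!tool) throw new Error(`Unknown tool: ${request.tool}`);
  return tool(request.args);
}

runWithTools("Where is order A-1001?"); // → { orderId: "A-1001", status: "shipped" }
```

The split is the point: the LLM only proposes which tool to call; your code decides whether and how to execute it.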

Composable workflows

Now, let’s see how these building blocks can be assembled into the structured, predictable workflows that Anthropic recommends as alternatives to fully autonomous agents.

Workflow 1: Prompt chaining

The simplest multi-step pattern. A task is broken down into a fixed sequence of steps, where the output of one LLM call becomes the input for the next. A step within this chain could absolutely involve the LLM using a tool or performing a RAG query.
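A prompt chain reduces to function composition. In this sketch, `summarize` and `translate` are hypothetical stand-ins for individual LLM calls:

```javascript
// Each step stands in for one LLM call; the output of one feeds the next.
const summarize = (text) => `summary of: ${text}`;
const translate = (text) => `translated: ${text}`;

// Run a fixed sequence of steps, threading the output through.
function chain(input, steps) {
  return steps.reduce((acc, step) => step(acc), input);
}

chain("long report", [summarize, translate]);
// → "translated: summary of: long report"
```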

Workflow 2: Routing

In this pattern, an initial LLM call classifies an input and directs it to one of several specialized, downstream workflows. This is perfect for customer support, where you might route a query to a “refund process” (which uses a tool to access an orders database) or a “technical support” handler (which uses RAG to search documentation).
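In code, routing is one classifier call followed by a plain dispatch table. The classifier here is mocked with a regex; in practice it would be a single LLM call:

```javascript
// Mocked classifier — a real implementation would be one LLM call.
function classify(query) {
  return /refund|money back/i.test(query) ? "refund" : "technical";
}

// Specialized downstream handlers, each a deterministic workflow.
const handlers = {
  refund: (q) => `refund workflow handles: ${q}`,
  technical: (q) => `technical support handles: ${q}`,
};

function route(query) {
  return handlers[classify(query)](query);
}

route("I want my money back"); // dispatched to the refund handler
```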

Workflow 3: Parallelization

This workflow involves running multiple LLM calls simultaneously and aggregating their outputs. For instance, you could have one LLM call use RAG to find relevant policy documents while another call uses a tool to check the user’s account status.
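In JavaScript, this fan-out maps naturally onto `Promise.all`. Both calls below are mocked async functions standing in for concurrent LLM requests:

```javascript
// Two mocked async LLM calls that can run concurrently.
const findPolicies = async (query) => `policies for: ${query}`;
const checkAccount = async (user) => `account status for: ${user}`;

// Fire both requests at once and aggregate their outputs.
async function parallelLookup(query, user) {
  const [policies, account] = await Promise.all([
    findPolicies(query),
    checkAccount(user),
  ]);
  return { policies, account };
}
```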

Workflow 4: Orchestrator-workers

A central “orchestrator” LLM breaks down a complex task and delegates sub-tasks to specialized “worker” LLMs, which in turn use the tools and retrieval capabilities necessary to complete their specific job.

Workflow 5: Evaluator-optimizer

This creates an iterative refinement loop. One LLM call generates a response, while a second “evaluator” LLM provides feedback. The first LLM then uses this feedback to improve its response. The evaluator’s criteria could be informed by information retrieved via RAG.

A framework for the pragmatic developer

Before you write import openai, pause and ask yourself a few questions. The goal is not to use AI; the goal is to solve a problem efficiently.

  1. What is the core business problem? Are you trying to save time, reduce costs, or create a new capability? Start with the problem, not the solution.
  2. Can this be solved with traditional, deterministic code? If it’s data validation, a CRUD operation, or a simple transformation, the answer is yes. Use code.
  3. If not, does the problem involve unstructured data, natural language, or generation? If yes, an LLM is likely a good fit.
  4. Which is the simplest, most constrained AI pattern that solves the problem? Start with a single “Augmented LLM” call using Tool Use or RAG. If that’s not enough, compose them into a structured workflow (Chaining, Routing, etc.) before even considering a fully autonomous agent.
  5. Do I truly need autonomous decision-making? Only consider a full agentic loop if the problem is so dynamic that you cannot pre-define the workflow. Even then, implement strict guardrails, monitoring, and human-in-the-loop validation.

Conclusion

Being a great developer isn’t about chasing every new trend. It’s about building robust, efficient, and valuable software. AI and agents are incredibly powerful tools, but they are just that: tools. True innovation comes from knowing your toolbox inside and out and picking the right tool for the job.

Focus on the customer’s pain points. Solve them in the most efficient and reliable way possible. Sometimes that will involve a cutting-edge LLM, but often, it will be a simple, elegant piece of deterministic code. That is the path to building things that last.

The post You don’t need AI for everything: A reality check for developers appeared first on LogRocket Blog.
