Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Mr. Bones: A Pirate-Voiced Halloween Chatbot Powered by Docker Model Runner


My name is Mike Coleman, and I'm a staff solution architect at Docker. This year I decided to turn a Home Depot animatronic skeleton into an AI-powered, live, interactive Halloween chatbot. The result: kids walk up to Mr. Bones, a spooky skeleton in my yard, ask it questions, and it answers back — in full pirate voice — with actual conversational responses, thanks to a local LLM powered by Docker Model Runner.

Why Docker Model Runner?

Docker Model Runner is a tool from Docker that makes it dead simple to run open-source LLMs locally using standard Docker workflows. I pulled the model like I’d pull any image, and it exposed an OpenAI-compatible API I could call from my app. Under the hood, it handled model loading, inference, and optimization.

For this project, Docker Model Runner offered a few key benefits:

  • No API costs for LLM inference — unlike OpenAI or Anthropic
  • Low latency because the model runs on local hardware
  • Full control over model selection, prompts, and scaffolding
  • API-compatible with OpenAI — switching providers is as simple as changing an environment variable and restarting the service

That last point matters: if I ever needed to switch to OpenAI or Anthropic for a particular use case, the change would take seconds.
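To make that concrete, here is a minimal Python sketch of how an app might build such a call. This is illustrative rather than the project's actual code; the endpoint path, port, and model tag are assumptions about a typical Docker Model Runner setup:

```python
import os

# Assumed defaults: Docker Model Runner exposes an OpenAI-compatible API.
# The host/port and model tag below are illustrative, not from the project.
BASE_URL = os.environ.get("LLM_BASE_URL", "http://localhost:12434/engines/v1")
MODEL = os.environ.get("LLM_MODEL", "ai/llama3.1")

def build_chat_request(user_text: str) -> dict:
    """Build an OpenAI-style chat-completion payload for Mr. Bones."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are Mr. Bones, a friendly pirate."},
            {"role": "user", "content": user_text},
        ],
        "max_tokens": 80,  # short replies keep latency low for waiting kids
    }

# Switching providers means changing LLM_BASE_URL (and LLM_MODEL) - nothing else.
url = f"{BASE_URL}/chat/completions"
payload = build_chat_request("What is your name?")
```

Because the payload shape is the standard OpenAI one, repointing BASE_URL at any other OpenAI-compatible endpoint is essentially the whole migration.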

System Overview


Figure 1: System overview of Mr. Bones answering questions in pirate language

Here’s the basic flow:

  1. Kid talks to skeleton
  2. Pi 5 + USB mic records audio
  3. Vosk STT transcribes speech to text
  4. API call to a Windows gaming PC with an RTX 5070 GPU
  5. Docker Model Runner runs a local LLaMA 3.1 8B (Q4 quant) model
  6. LLM returns a text response
  7. ElevenLabs Flash TTS converts the text to speech (pirate voice)
  8. Audio sent back to Pi
  9. Pi sends audio to skeleton via Bluetooth, which moves the jaw in sync

Figure 2: The controller box that holds the Raspberry Pi that drives the pirate

That Windows machine isn’t a dedicated inference server — it’s my gaming rig. Just a regular setup running a quantized model locally.
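The numbered flow above can be sketched as one function per interaction turn. This is an illustrative Python outline, not the project's actual code; each stage is injected as a callable so the hardware and cloud services can be stubbed out:

```python
def run_interaction(record_audio, transcribe, ask_llm, synthesize, play):
    """One turn of the Mr. Bones pipeline (steps 2-9 above)."""
    audio = record_audio()        # Pi 5 + USB mic
    text = transcribe(audio)      # Vosk STT, running locally on the Pi
    if not text.strip():
        return None               # nothing intelligible heard; stay silent
    reply = ask_llm(text)         # Docker Model Runner on the gaming PC
    speech = synthesize(reply)    # ElevenLabs Flash TTS (pirate voice)
    play(speech)                  # out to the skeleton over Bluetooth
    return reply
```

Keeping the stages decoupled like this also makes it easy to swap any one of them (say, a different STT engine) without touching the rest.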

The biggest challenge with this project was balancing response quality (in-character and age-appropriate) with response time. With that in mind, four key areas needed a little extra emphasis: model selection, how to do text-to-speech (TTS) processing efficiently, fault tolerance, and setting up guardrails.

Consideration 1: Model Choice and Local LLM Performance

I tested several open models and found LLaMA 3.1 8B (Q4 quantized) to be the best mix of performance, fluency, and personality. On my RTX 5070, it handled real-time inference fast enough for the interaction to feel responsive.

At one point I was struggling to keep Mr. Bones in character, so I tried OpenAI's ChatGPT API, but response times averaged 4.5 seconds.

By revising the prompt and letting Docker Model Runner serve the right model locally, I got that down to 1.5 seconds. That's a huge difference when a kid is standing there waiting for the skeleton to talk.

In the end, GPT-4 was only nominally better at staying in character and avoiding inappropriate replies. With a solid prompt scaffold and some guardrails, the local model held up just fine.

Consideration 2: TTS Pipeline, from Kokoro to ElevenLabs Flash

I first tried using Kokoro, a local TTS engine. It worked, but the voices were too generic. I wanted something more pirate-y, without adding custom audio effects.

So I moved to ElevenLabs, starting with their multilingual model. The voice quality was excellent, but latency was painful — especially when combined with LLM processing. Full responses could take up to 10 seconds, which is way too long.

Eventually I found ElevenLabs Flash, a much faster model. That helped a lot. I also changed the logic so that instead of waiting for the entire LLM response, I chunked the output and sent it to ElevenLabs in parts. Not true streaming, but it allowed the Pi to start playing the audio as each chunk came back.

This turned the skeleton from slow and laggy into something that felt snappy and responsive.
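A minimal Python sketch of that chunking idea (illustrative, not the project's code; splitting at sentence boundaries, with an assumed per-chunk character budget):

```python
import re

def chunk_for_tts(text: str, max_chars: int = 120) -> list:
    """Split an LLM reply at sentence boundaries so each piece can be sent
    to the TTS service while earlier pieces are already playing."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)     # ship this chunk to TTS now
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

With short, sentence-sized chunks, time-to-first-audio depends only on the first sentence, not the whole reply.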

Consideration 3: Weak Points and Fallback Ideas

While the LLM runs locally, the system still depends on the internet for ElevenLabs. If the network goes down, the skeleton stops talking.

One fallback idea I’m exploring: creating a set of common Q&A pairs (e.g., “What’s your name?”, “Are you a real skeleton?”), embedding them in a local vector database, and having the Pi serve those in case the TTS call fails.
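Even before adding embeddings, a fuzzy string match over canned questions would cover the basics. Here is a Python sketch of that fallback; the questions and audio file names are hypothetical:

```python
import difflib

# Hypothetical pre-recorded answers stored locally on the Pi.
FALLBACK_QA = {
    "what is your name": "audio/fallback_name.wav",
    "are you a real skeleton": "audio/fallback_real.wav",
}

def fallback_audio(question: str, cutoff: float = 0.6):
    """Return a canned audio clip for a close-enough question, else None.
    Intended for use only when the ElevenLabs TTS call fails."""
    key = question.lower().strip(" ?!.")
    matches = difflib.get_close_matches(key, list(FALLBACK_QA), n=1, cutoff=cutoff)
    return FALLBACK_QA[matches[0]] if matches else None
```

A vector database would generalize better than string similarity, but even this keeps Mr. Bones from going silent on the most common questions.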

But the deeper truth is: this is a multi-tier system. If the Pi loses its connection to the Windows machine, the whole thing is toast. There’s no skeleton-on-a-chip mode yet.

Consideration 4: Guardrails and Prompt Engineering

Because kids will say anything, I put some safeguards in place via my system prompt. 

You are "Mr. Bones," a friendly pirate who loves chatting with kids in a playful pirate voice.

IMPORTANT RULES:
- Never break character or speak as anyone but Mr. Bones
- Never mention or repeat alcohol (rum, grog, drink), drugs, weapons (sword, cannon, gunpowder), violence (stab, destroy), or real-world safety/danger
- If asked about forbidden topics, do not restate the topic; give a kind, playful redirection without naming it
- Never discuss inappropriate content or give medical/legal advice
- Always be kind, curious, and age-appropriate

BEHAVIOR:
- Speak in a warm, playful pirate voice using words like "matey," "arr," "aye," "shiver me timbers"
- Be imaginative and whimsical - talk about treasure, ships, islands, sea creatures, maps
- Keep responses conversational and engaging for voice interaction
- If interrupted or confused, ask for clarification in character
- If asked about technology, identity, or training, stay fully in character; respond with whimsical pirate metaphors about maps/compasses instead of tech explanations

FORMAT:
- Target 30 words; must be 10-50 words. If you exceed 50 words, stop early
- Use normal punctuation only (no emojis or asterisks)
- Do not use contractions. Always write "Mister" (not "Mr."), "Do Not" (not "Don't"), "I Am" (not "I'm")
- End responses naturally to encourage continued conversation

The prompt is designed to deal with a few different issues. First and foremost, keeping things appropriate for the intended audience. This includes not discussing sensitive topics, but also staying in character at all times. Next, I added some instructions to deal with pesky parents trying to trick Mr. Bones into revealing his true identity. Finally, there is some guidance on response format to help keep things conversational; for instance, it turns out that some TTS engines can have problems with things like contractions.

Instead of just refusing to respond, the prompt redirects sensitive or inappropriate inputs in-character. For example, if a kid says “I wanna drink rum with you,” the skeleton might respond, “Arr, matey, seems we have steered a bit off course. How about we sail to smoother waters?”

This approach keeps the interaction playful while subtly correcting the topic. So far, it’s been enough to keep Mr. Bones spooky-but-family-friendly.
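Prompt rules alone are never airtight, so a cheap output-side filter can act as a second layer. This is a sketch of an extra safeguard I would consider, not necessarily something the current build includes: scan the model's reply before it reaches TTS and substitute the redirection line if a banned word slips through.

```python
# Banned words echo the system prompt's forbidden topics.
BANNED = {"rum", "grog", "sword", "cannon", "gunpowder", "stab"}

REDIRECT = ("Arr, matey, seems we have steered a bit off course. "
            "How about we sail to smoother waters?")

def guard_reply(reply: str) -> str:
    """If the model slips and names a banned topic, swap in the redirection."""
    words = {w.strip(".,!?'\"").lower() for w in reply.split()}
    return REDIRECT if words & BANNED else reply
```

Defense in depth is cheap here: the check is a set intersection, so it adds effectively zero latency.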


Figure 3: Mr. Bones is powered by AI and talks to kids in pirate-speak with built-in safety guardrails.

Final Thoughts

This project started as a Halloween goof, but it’s turned into a surprisingly functional proof-of-concept for real-time, local voice assistants.

Using Docker Model Runner for LLMs gave me speed, cost control, and flexibility. ElevenLabs Flash handled voice. A Pi 5 managed the input and playback. And a Home Depot skeleton brought it all to life.

Could you build a more robust version with better failover and smarter motion control? Absolutely. But even as he stands today, Mr. Bones has already made a bunch of kids smile — and probably a few grown-up engineers think, “Wait, I could build one of those.” 

Source code: github.com/mikegcoleman/pirate


Figure 4: Aye aye! Ye can build a Mr. Bones too and bring smiles to all the young mateys in the neighborhood!


Renaming prefabs in Lens Studio


Is this really worth a blog post? Well, maybe it is not, but it is something that made me scratch my head. And maybe other people are stymied by it as well, so I thought I'd write a little something. The issue is this. Suppose I have a prefab MyAwesomePrefab, like this:

prefab1

Lens Studio unfortunately doesn’t know the concept of prefab variants like Unity does, so if I want another prefab that is almost the same, I need to copy it. Let’s call it MyOtherPrefab. I add an extra text. The top scene object is still called MyAwesomePrefab, so let’s rename that, and hit Apply.

prefab2

You might think we are done, but this is where it gets odd. Because if you drag both MyAwesomePrefab and MyOtherPrefab into the scene…

prefab3

They both show up as MyAwesomePrefab.

This, of course, can be highly confusing. Maybe it’s a bug, or maybe I simply don’t understand how this is supposed to work. However, this is how I fix it. Fortunately, all files Lens Studio creates are text files. I don’t know what you call this format, but somewhere you will find this:

fix

Change that to MyOtherPrefab, save the file, and if you now drag MyOtherPrefab on the scene…

prefab4

Important note: save your project first before starting the rename. If you haven’t saved it, Lens Studio makes a kind of temporary project in a temporary location, and that may cause issues when you do things like this. Even better: commit your project to Git before doing scary things like this. A prudent developer always takes small steps so it’s easy to recover from oopsies.

No code this time, as there is no code to share ;)


Getting Started with Flutter Lint and Static Analysis


Cover

Static analysis in Flutter (and Dart) is the process of examining code for errors and style issues without running the app. It allows you to catch bugs and enforce best practices early, before executing a single line of code.

Adopting static analysis represents a fundamental "shift left" in the software development lifecycle (SDLC). This philosophy advocates for moving quality assurance and testing activities to the earliest possible stages of development.

Linting is a subset of static analysis that focuses on style and best-practice rules. Flutter’s analysis tool (often called the Dart/Flutter analyzer) uses a set of rules (called lints) to ensure your code follows the Dart style guide and Effective Dart recommendations.

This guide will walk you through setting up and mastering Flutter's linting and static analysis capabilities, from the basics to advanced configurations.

The Anatomy of Flutter Static Analysis

Let's start by exploring different parts of static analysis in Flutter.

The Core Engine: The Dart Analyzer

Everything starts with the Dart Analyzer, the engine that analyzes your code. It’s not just a command-line tool; it’s the backbone behind the instant feedback you see in your IDE. When you mistype a variable or mix incompatible types, it’s the analyzer that notices.

It does this by turning your source code into an Abstract Syntax Tree (AST), a structured model of your code. From there, it scans and validates everything against a defined set of rules. This tight connection between the analyzer, your IDE, and your project setup creates an immediate feedback loop where that familiar wavy underline appears the moment something's off.
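As a loose analogy (in Python rather than Dart, and vastly simplified compared to the real analyzer), a single AST-based rule check looks like this, in the spirit of the avoid_print lint:

```python
import ast

def find_print_calls(source: str) -> list:
    """Tiny AST-based 'lint': flag print() calls in Python source.
    Illustrative only; the Dart analyzer applies many rules in this fashion."""
    tree = ast.parse(source)          # source code -> abstract syntax tree
    issues = []
    for node in ast.walk(tree):       # visit every node, check it against the rule
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            issues.append(f"line {node.lineno}: avoid print in production code")
    return issues
```

The real analyzer runs hundreds of such checks over the tree, which is why it can pinpoint the exact line and column of each violation.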

The Rulebook: analysis_options.yaml

The behavior of the analyzer is governed by one file: analysis_options.yaml. Sitting at the root of your Flutter project, it tells the analyzer what to enforce and what to ignore. You can enable or disable rules, adjust their severity, and even exclude certain files or directories.

In teams, this file becomes the single source of truth for maintaining consistent code quality and style across all contributors.

The Official Starting Point: flutter_lints

Beyond language-level checks, Flutter also encourages best practices through linter rules. The easiest way to get started is with the flutter_lints package, Google’s officially recommended rule set for Flutter apps.

Built on top of the general Dart lints, it gives you a strong foundation that matches the Dart Style Guide and Effective Dart recommendations. Instead of debating style or rule configurations, you can start coding with confidence, knowing your project already follows community standards for readability and maintainability.

Going Beyond Defaults to Build Incredible Applications

While flutter_lints provides an excellent foundation, production-grade applications and growing teams quickly hit its limits. You need more than just style checking; you need to enforce architecture, manage code complexity, and ensure long-term maintainability. This is where dcm.dev comes in: a complete static analysis solution designed specifically for Dart and Flutter.

DCM is a productivity and efficiency tool. Instead of spending weeks developing rules or searching for rules that probably don't exist, you can plug DCM in and gain access to:

  • 450+ Pre-Built Rules in Addition to Flutter Lint Rules: A massive, well-tested library of rules covering performance, style, leaks, and best practices for both Dart and Flutter. Many of the rules come with auto-fixes that can save hours of effort!

  • Advanced Code Metrics: Go beyond simple lints with metrics like Cyclomatic Complexity, Lines of Code, and others to objectively measure code health and identify areas for refactoring.

  • Monitor Your Code Quality Trends: With the DCM Dashboard, quickly access the latest state of all open issues across all your projects within your organization, and observe changes over time to easily spot unexpected regressions and act quickly to resolve them.

DCM Dashboard

  • More Features: That's not all: DCM has loads of other features, including asset quality checks, health commands (such as finding unused code and files), and integration with AI-assisted tooling via an MCP server to ensure high-quality code generation, with more to explore and more to come!

In short, dcm.dev allows your team to stand on the shoulders of static analysis experts, letting you focus your valuable time and energy on what matters most: building incredible applications.

DCM is a code quality tool that helps your team move faster by reducing the time spent on code reviews, finding tricky bugs, identifying complex code, and unifying code style.

Setting Up Flutter Static Analysis (Flutter lint rules)

Setting up static analysis (linting) in Flutter is straightforward, especially with the official lint rule sets provided by Dart and Flutter. Recent versions of Flutter come with a default set of lint rules out-of-the-box. If you created your project with Flutter 2.5 (stable) or newer, you likely already have static analysis enabled by default.

You can verify this by checking your pubspec.yaml for flutter_lints:

pubspec.yaml
dev_dependencies:
  flutter_test:
    sdk: flutter
  flutter_lints: ^6.0.0

and looking for an analysis_options.yaml file in the project root (next to pubspec).

...
├── android
├── assets
├── build
├── ios
├── lib
├── linux
├── macos
├── pubspec.lock
├── pubspec.yaml
├── analysis_options.yaml
├── test
├── web
└── windows

If your project is older or missing these, follow these steps to enable the latest Flutter lints:

  • In your project root, run the command:

      flutter pub add --dev flutter_lints
  • Create a file named analysis_options.yaml at the root of your project (in the same directory as your pubspec.yaml). In this file, include the recommended Flutter lint rules by adding a single line:

    include: package:flutter_lints/flutter.yaml
  • After adding the file, run flutter pub get (if you added the dependency manually) to ensure the package is installed. The analyzer will automatically start using these rules in your IDE. You can also run the analyzer manually (see next section) to verify everything is set up correctly.

flutter analyze

Below is an example analysis_options.yaml content for a Flutter project. This is similar to what Flutter’s template provides by default.

analysis_options.yaml
# Activate recommended lints for Flutter apps (from flutter_lints package)
include: package:flutter_lints/flutter.yaml

linter:
  # You can customize the lint rules below
  rules:
    # avoid_print: false # Uncomment to disable the `avoid_print` rule
    # prefer_single_quotes: true # Uncomment to enable the `prefer_single_quotes` rule

Running the Analyzer and Interpreting Results

With the setup complete, the analysis can be run in two primary ways, both of which yield the same results.

From the Command Line

To perform a static analysis of the entire project from the terminal, run the following command from the project's root directory:

flutter analyze

This command invokes the Dart analyzer, which will scan all Dart files in the project according to the rules defined in analysis_options.yaml.

For example, running the analyzer on a sample project might produce output like this:

Analyzing flutter_test_cases...

info • Constructors for public widgets should have a named 'key' parameter • lib/widgets/repaint_boundary_to_image.dart:22:7 • use_key_in_widget_constructors
info • Invalid use of a private type in a public API • lib/widgets/repaint_boundary_to_image.dart:24:3 • library_private_types_in_public_api
info • Don't invoke 'print' in production code • lib/widgets/snapshot_widget.dart:112:5 • avoid_print
info • Don't invoke 'print' in production code • lib/widgets/snapshot_widget.dart:132:5 • avoid_print

4 issues found. (ran in 1.5s)

In this example, it flags missing 'key' parameters, a private type used in a public API, and print calls in production code. If you use a print statement and the avoid_print lint is enabled, you would see a warning like this:

info • Don't invoke 'print' in production code • lib/widgets/snapshot_widget.dart:132:5 • avoid_print

Each issue includes the file, line number, and the name of the lint rule (so you can look up details if needed).

Running flutter analyze regularly (or keeping your IDE’s analysis on) is highly recommended. It gives you quick feedback on code quality and potential errors. If issues are found, you can fix them as you go. In fact, some lint warnings have automated fixes; you can run

dart fix --dry-run
dart fix --apply

to auto-apply trivial fixes suggested by the analyzer. This can update your code (for example, adding missing const keywords or removing unused imports) according to the linter’s recommendations.

Getting Started with DCM

You might want to go beyond defaults and start with DCM as explained earlier. Getting started with DCM is super simple!

  • Get a proper license that works for you and your team. You can start for free or request a trial for your team

  • Install DCM depending on your operating system

  • Set up your IDE, whether it's VS Code or IntelliJ

  • Activate your DCM license: dcm activate --license-key=YOUR_KEY

  • Finally, integrate DCM by starting with our recommended set of rules, or enable any of the 450+ rules, run advanced code health commands, enable metrics, integrate with your AI-assisted tooling, and more.

    analysis_options.yaml
    dcm:
      extends:
        - recommended

Now go ahead and run DCM commands.

Available commands:
analyze Analyze Dart code for lint rule violations.
analyze-assets Analyze image assets for incorrect formats, names, exceeding size, and missing high-resolution images.
analyze-structure Analyze Dart project structure.
analyze-widgets Analyze Flutter widgets for quality, usages, and duplication.
calculate-metrics Collect code metrics for Dart code.
check-code-duplication Check for duplicate functions, methods, and test cases.
check-dependencies Check for missing, under-promoted, over-promoted, and unused dependencies.
check-exports-completeness Check for exports completeness in *.dart files.
check-parameters Check for various issues with function, method and constructor parameters.
check-unused-code Check for unused code in *.dart files.
check-unused-files Check for unused *.dart files.
check-unused-l10n Check for unused localization in *.dart files.
fix Apply fixes for fixable analysis issues.
format Format *.dart files.
init Set up DCM.
run Run multiple passed DCM commands at once.

Inside the IDE

The true power of the analyzer is its real-time integration with IDEs. As code is written in VS Code or Android Studio, the analyzer runs continuously in the background. Any violations of the rules in analysis_options.yaml will appear almost instantly:

  • Inline Highlighting: The specific line of code with the issue will be underlined. Hovering over the underlined code will display a tooltip with the same error message seen on the command line.

Inline Highlighting

  • Problems/Dart Analysis Tab: A dedicated panel in the IDE (e.g., the "Problems" tab in VS Code) will aggregate all issues across the entire project, allowing for easy navigation to each problem area.

Problem View

From Defaults to Custom Configuration

While the flutter_lints package provides an excellent default, professional development often requires a more tailored approach. Customizing the analysis_options.yaml file allows a team to enforce stricter checks, adopt specific coding conventions, and ultimately take full ownership of their code quality standards.

This file becomes more than just a configuration; it evolves into a living document that codifies a team's development philosophy, defining "what good code looks like" for their specific project.

Let's see what we can do!

Enforcing Maximum Type Safety

Dart's type system is powerful, but the analyzer can be configured to be even stricter, catching potential runtime errors during development. This is done using the language key under the analyzer entry in analysis_options.yaml.

analysis_options.yaml
analyzer:
  language:
    strict-casts: true
    strict-inference: true
    strict-raw-types: true

  • strict-casts: true: This flag ensures that the analyzer reports a potential error when an implicit downcast might fail at runtime. For example, if a List<Object> is implicitly cast to a List<String>, this flag will raise an issue because the cast could fail if the list contains non-string elements.

  • strict-inference: true: This forces the type inference engine to be more conservative. It will report an issue if it cannot determine a precise type and would otherwise default to dynamic. This encourages developers to be more explicit with their type annotations, reducing ambiguity.

  • strict-raw-types: true: This flag ensures that when a generic class is used, a type argument is always provided. For example, it would flag List myList and encourage the developer to specify the type, such as List<int> or List<dynamic>, making the code's intent explicit.

Customizing Flutter Linter Rules

Beyond the top-level analyzer settings, individual linter rules can be managed under the linter key. This allows for fine-grained control over the project's coding style and conventions.

analysis_options.yaml
include: package:flutter_lints/flutter.yaml

linter:
  rules:
    # Disable a rule from the included set
    avoid_print: false
    # Enable additional rules
    require_trailing_commas: true
    avoid_positional_boolean_parameters: true

Managing Code Exclusions and Severity

Sometimes, a specific rule must be violated in a controlled manner, or a rule's importance needs to be elevated. The analyzer provides mechanisms for both scenarios.

Suppressing Flutter Rules

There are two ways to exclude code from a specific analysis rule:

  • For a single line: Place a comment directly above the line of code.

    // ignore: avoid_print
    print('Debug log');
  • For an entire file: Add a comment at the top of the Dart file.

    // ignore_for_file: prefer_const_constructors

    import 'package:flutter/material.dart';
    // ... rest of the file

Changing Lint Rule Severity

It is also possible to change the severity of a rule. For example, a team might decide that a specific style violation is so important that it should be treated as an error, not just a warning. This is configured under the analyzer.errors key.

analysis_options.yaml
analyzer:
  errors:
    # Treat missing required parameters as an error instead of a warning
    missing_required_param: error

    # Just warn (not error) if const could be used
    prefer_const_constructors: warning # (opt-in if you want const guidance)

    # set the 'lines_longer_than_80_chars' warning to just info
    lines_longer_than_80_chars: info

Exclude files or directories

Sometimes you might want to ignore generated code or specific files (say, your .g.dart files or build/ directory) from analysis to avoid false warnings. You can add an exclude list under the analyzer section:

analysis_options.yaml
analyzer:
  exclude:
    - build/**
    - lib/generated_plugin_registrant.dart

This level of control allows teams to fine-tune the analyzer's behavior to match their project's specific quality gates and priorities.

DCM Rule and Metric Configuration

Going beyond defaults with an advanced lint tool for Flutter like DCM also comes with the huge benefit of configuring lots of rules and metrics to tailor the analysis to your specific needs.

Here is just one example from those 450+ rules, avoid-banned-imports:

analysis_options.yaml
dcm:
  rules:
    - avoid-banned-imports:
        entries:
          - paths: ['some/folder/.*\.dart', 'another/folder/.*\.dart']
            deny: ['package:flutter/material.dart']
            message: 'Do not import Flutter Material Design library, we should not depend on it!'
            severity: error
          - paths: ['core/.*\.dart']
            deny: ['package:flutter_bloc/flutter_bloc.dart']
            message: 'State management should not be used inside the "core" folder.'

Integrating Static Analysis into Your Workflow

Once the ruleset is defined in analysis_options.yaml, the next step is to integrate static analysis deeply into the development workflow. This involves providing more context to the analyzer through code annotations and automating checks to ensure that quality standards are consistently upheld by the entire team.

The Power of Annotations

The meta package provides a set of special annotations that allow developers to communicate their intentions directly to the analysis tools. These annotations provide hints that the analyzer cannot deduce on its own, enabling it to provide more accurate and helpful warnings.

To use them, first add the package as a dependency:

flutter pub add meta

Key annotations include:

  • @immutable: When applied to a class, this annotation asserts that all of its fields are final. The analyzer will then flag any subclasses that are not also marked as immutable. This is essential for Flutter widgets to ensure their properties cannot change after construction, which is a core principle of the framework's declarative UI.

  • @mustCallSuper: When a method in a subclass overrides a method from a superclass that is annotated with @mustCallSuper, the analyzer will issue a warning if the subclass's method does not include a call to the superclass's method (e.g., super.dispose()). This is critical for preventing resource leaks in the StatefulWidget lifecycle.

Read more about annotations in the Flutter meta package documentation.

Preparing for Publication: The pana Score

For developers who intend to publish packages to the official Dart package repository, pub.dev, static analysis plays a direct and visible role in how the package is perceived by the community. When a package is published, an automated tool called pana (Package ANAlysis) runs a series of checks to generate a quality score, which is prominently displayed on the package's page.

A key category in this evaluation is "Pass static analysis," which executes dart analyze (or flutter analyze) on the package's code. A clean analysis report is essential for achieving a high score. This score acts as a powerful social signal; it tells other developers that the package author is disciplined, adheres to community best practices, and has produced code that is likely to be reliable and well-maintained.

Therefore, mastering static analysis is not just about internal code quality; it is a critical component of building a positive public reputation within the Flutter ecosystem.

Flutter Lint Rules and Automation with Continuous Integration (CI/CD)

The ultimate safety net for maintaining code quality is to automate the analysis process using a Continuous Integration (CI) pipeline. The goal is to create an automated, impartial gatekeeper that prevents code violating the established analysis rules from ever being merged into the project's main branch.

For teams using platforms like GitHub, this can be achieved with a GitHub Actions workflow. The following is a simple, copy-paste-ready example of a workflow file (.github/workflows/analyze.yml) that runs flutter analyze on every push and pull request to the main branch:

name: Flutter Analyze

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: subosito/flutter-action@v2
        with:
          channel: 'stable'
      - run: flutter pub get
      - run: flutter analyze

You can replicate this workflow on your favorite CI tool as well.

DCM Goes Beyond the Basics on Your Favorite CI/CD Platform

DCM also works seamlessly with major CI/CD platforms like GitHub Actions, GitLab CI/CD, Azure DevOps, Bitbucket Pipelines and Codemagic.

More importantly, DCM provides flexible output formats for CI pipelines (console, JSON, Checkstyle, Code Climate, GitLab, GitHub), so you can easily integrate it into your existing build dashboards or pipelines.

Check out more details on our DCM CI/CD Integrations.

Conclusion

By properly configuring package:flutter_lints and customizing your analysis_options.yaml file, you create a robust development environment that catches issues early and guides your team toward Flutter best practices.

Remember that linting is not about rigid enforcement but about creating a consistent, readable, and maintainable codebase that your entire team can work with effectively. Choose rules that make sense for your project and team, and don't hesitate to adjust them as your project evolves.



From Missing in Action to Present and Collaborative—The Product Owner Spectrum | Darryl Wright


Darryl Wright: The PONO—Product Owners in Name Only and How They Destroy Teams

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

The Great Product Owner: Collaborative, Present, and Clear in Vision

 

"She was collaborative, and that meant that she was present—the opposite of the MIA product owner. She came, and she sat with the team, and she worked with them side by side. Even when she was working on something different, she'd be there, she'd be available." - Darryl Wright

 

Darryl shares an unusual story about one of the best Product Owners he's ever encountered—someone who had never even heard of Agile before taking the role. Working for a large consulting company with 170,000 staff worldwide, they faced a difficult project that nobody wanted to do. Darryl suggested running it as an Agile project, but the entire team had zero Agile experience. The only person who'd heard of Agile was a new graduate who'd studied it for one week at university—he became the Scrum Master. The executive sponsor, with her business acumen and stakeholder management skills, became the Product Owner despite having no idea what that meant. 

The results were extraordinary: an 18-month project completed in just over 7 months, and when asked about the experience, the team's highest feedback was how much fun they had working on what was supposed to be an awful, difficult project. Darryl attributes this success to mindset—the team was open and willing to try something new. 

The Product Owner brought critical skills to the role even without technical Agile knowledge: She was collaborative and present, sitting with the team and remaining available. She was decisive, making prioritization calls clearly so nobody was ever confused about priorities. She had excellent communication skills, articulating the vision with clarity that inspired the team. Her stakeholder management capabilities kept external pressures managed appropriately. And her business acumen meant she instantly understood conversations about value, time to market, and customer impact. 

Without formal training, she became an amazing Product Owner simply by being open, willing, and committed. As Darryl reflects, going from never having heard of the role to being an inspiring Product Owner in 7 months was incredible—one of the most successful projects and teams he's ever worked with.

 

Self-reflection Question: If you had to choose between a Product Owner with deep Agile certification and no business skills, or one with strong business acumen and willingness to learn—which would serve your team better?

The Bad Product Owner: The PONO—Product Owner in Name Only

 

"The team never saw the PO until the showcase. And so, the team would come along with work that they deemed was finished, and the product owner had not seen it before because he wasn't around. So he would be seeing it for the first time in the showcase, and he would then accept or reject the work in the showcase, in front of other stakeholders." - Darryl Wright

 

The most destructive anti-pattern Darryl has witnessed was the MIA—Missing in Action—Product Owner, someone who was a Product Owner in Name Only (PONO). This senior business person was too busy to spend time with the team, only appearing at the sprint showcase. The damage this created was systematic and crushing. The team would build work without Product Owner engagement, then present it in the showcase, hoping to be proud of their accomplishment.

The PO, seeing it for the first time, would accept or reject the work in front of stakeholders. When he rejected it, the team was crushed, deflated, demoralized, and made to look like fools in front of senior leaders—essentially thrown under the bus. This pattern violates multiple principles of Agile teamwork. First, there's no feedback loop during the sprint, so the team works blind, hoping they're building the right thing. Second, the showcase becomes a validation ceremony rather than a collaborative feedback session, creating a dynamic of subservience rather than curiosity. The team seeks approval instead of engaging as explorers discovering what delivers customer value together. Third, the PO positions themselves as judge rather than coach—extracting themselves from responsibility for what's delivered while placing all blame on the team. 

As Deming's quote reminds us, "A leader is a coach, not a judge." When the PO takes the judge role, they're betraying fundamental Agile values. 

The responsibility for what the team delivers belongs strictly to the Product Owner; the team owns how it's delivered. 

When Darryl encounters this situation as a Scrum Master, he lobbies intensely with the PO: "Even if you can't spare any other time for the entire sprint, give us just one hour the night before the showcase." That single hour lets the team preview what they'll present, getting early yes/no decisions so they never face public rejection. The basic building block of any Agile or Scrum way of working is an empowered team—and this anti-pattern strips all empowerment away.

 

Self-reflection Question: Does your Product Owner show up as a coach who's building something together with the team, or as a judge who pronounces verdicts? How does that dynamic shape what your team is willing to try?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Darryl Wright

 

Darryl is an Agile Coach and Instructor dedicated to helping organisations and leaders be both successful and humane. With over two decades in IT delivery and business leadership, he champions Agile ways of working to create thriving workplaces where people are happy, productive, and deliver products customers truly love.

 

You can link with Darryl Wright on LinkedIn, and visit Darryl's website at www.organa.com.au.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251031_Darryl_Wright_F.mp3?dest-id=246429

Mini book: AI Assisted Development: Real World Patterns, Pitfalls, and Production Readiness


AI is no longer a research experiment or a novelty: it is part of the software delivery pipeline. Teams are learning that integrating AI into production is less about model performance and more about architecture, process, and accountability. In this issue on AI Assisted Development, we examine what happens after the proof of concept and how AI changes the way we build and operate systems.

By InfoQ

An Open-Source ChatGPT App Generator


OpenAI released ChatGPT apps just a couple of days ago. Such apps are incredibly interesting from a UX perspective, because sometimes a chat interface simply won't cut it; sometimes you need a graphical user interface. For such cases, there are "ChatGPT apps."

So, what is a ChatGPT app? It's a fully functional user interface with buttons, dropdown lists, checkboxes, and everything else you can create on the web. It can be as complex as Google Maps or as simple as an email collection form. It is basically "an app" hosted inside your AI chatbot. You can try a simple example of such an app by clicking here.
