
Alex is the founder and CEO of Efficient Power Conversion, a leading manufacturer of GaN FETs.
He is also the inventor of the original power MOSFET and the HEXFET at International Rectifier, and a former CEO of International Rectifier (a company founded by his father!).
https://epc-co.com
We cover everything from inventing the power MOSFET on his first day on the job to silicon physics, AI data centres and humanoid robots. Enjoy.
This week, we discuss NVIDIA GTC, token machines, token budgets, and an AWS outage that may or may not involve AI. Plus, Matt reviews The Wizard of Oz at The Sphere.
Watch the YouTube Live Recording of Episode 564
This episode of The Modern .NET Show is supported, in part, by RJJ Software's Strategic Technology Consultation Services. If you're an SME (Small to Medium Enterprise) leader wondering why your technology investments aren't delivering, or you're facing critical decisions about AI, modernization, or team productivity, let's talk.
"For me it's born out of, I mean the old phrase right, that necessity is the mother of invention. And I want to make games actually, but I think there's a missing middleware in the industry at the moment for certain types of game developers."— Ben Bowen
Hey everyone, and welcome back to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. I'm your host Jamie Taylor, bringing you conversations with the brightest minds in the .NET ecosystem.
Today, we're joined by Ben Bowen to talk about TinyFFR - a cross-platform library for .NET which allows developers to render 3D models. TinyFFR came from Ben spotting that there is a gap in the Games Development tools market: somewhere between 3D modelling software and a full-blown game engine.
"I, personally, believe that a library or software middleware is only really as good as the documentation that comes with it. You probably drive away 90% of the potentially interested parties if you're just saying to them, 'hey, if you want to learn how to use this, you'd better go spelunking through the source code or looking at examples.'"— Ben Bowen
Along the way, we talked about the importance of really good quality documentation. And it should come as no surprise to you that we talked about this because the documentation for TinyFFR is fantastic. Seriously folks, when you're done listening to this episode, go check out Ben's Hello Cube tutorial for TinyFFR and you'll see what I mean.
Before we jump in, a quick reminder: if The Modern .NET Show has become part of your learning journey, please consider supporting us through Patreon or Buy Me A Coffee. Every contribution helps us continue bringing you these in-depth conversations with industry experts. You'll find all the links in the show notes.
Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.
The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-8/from-zero-to-3d-ben-bowen-on-tinyffrs-rapid-net-rendering/
Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend.
And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch.
You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.
Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show.
Editing and post-production services for this episode were provided by MB Podcast Services.
The full KotlinConf’26 schedule is finally live, and it’s packed!
With parallel tracks, deep-dive sessions, and back-to-back talks, planning your time can feel overwhelming. When almost every session looks interesting, deciding where to spend your time isn’t easy.
To help you navigate it all, the Kotlin team has selected a few talks worth adding to your list. Whether you’re an intermediate or advanced Kotlin developer looking to sharpen your expertise, part of a multiplatform team solving cross-platform challenges, building robust server-side systems, or exploring AI-powered applications in Kotlin, these are sessions you might want to check out.
These talks are perfect if you want to build on your foundations, understand where Kotlin is heading, and sharpen practical skills you can apply in your day-to-day work.
Programming languages are shaped by their defaults – what’s safe, convenient, and practical. But defaults evolve, and yesterday’s good idea can become today’s source of friction. This session explores how languages rethink and change their defaults, including mutability, null-safety, and deeper object analysis. With examples from C#, Java, Swift, Dart, and Kotlin, you’ll gain insight into how Kotlin continues to evolve and what those changes mean for everyday development.
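As a concrete illustration of the kind of default the talk examines (this sketch is mine, not from the session), Kotlin makes non-nullability and immutability the defaults, so the opt-outs are explicit:

```kotlin
// Non-nullability is the default: a plain String parameter can never hold null.
// Opting into nullability (String?) forces the caller's case to be handled.
fun greet(name: String?): String {
    // The elvis operator makes the null branch visible at the call site.
    return "Hello, ${name ?: "stranger"}!"
}

fun main() {
    // `val` (immutable binding) is the idiomatic default; `var` is the exception.
    val anonymous = greet(null)
    val named = greet("Grace")
    println(anonymous) // Hello, stranger!
    println(named)     // Hello, Grace!
}
```

The point the session generalizes: each of these was once a deliberate design choice, and other languages (C#, Swift, Dart) arrived at similar defaults by different routes.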
Data is messy, and drawing the right conclusions takes more than generating a pretty chart. In this practical session, Adele will walk you through analyzing a real-world powerlifting dataset using Kotlin tools. You’ll explore how to understand and validate data, work with Postgres and DataFrame, and visualize results with Kandy – all directly from your IDE. It’s a hands-on introduction to doing thoughtful, reliable data science in Kotlin.
Modern terminals can do far more than print text. In this deep dive, Jake explores how command-line apps communicate with terminals – from colors and sizing to advanced features like frame sync, images, and keyboard events. Using Kotlin, he covers OS-specific APIs, JVM vs. Kotlin/Native challenges, and reusable libraries that help you unlock the full power of the terminal.
Ten years after Kotlin 1.0, the language continues to evolve quickly. This talk examines recent stable and preview features, unpacking their design and implementation to reveal what they tell us about Kotlin’s direction. You’ll leave with a deeper understanding of how the language is shaped and how those insights can influence your own Kotlin code.
This session explores how Koog can power the intelligent core of a Compose Multiplatform app. It demonstrates building AI-driven applications using local tools across Android, iOS, and desktop, connecting to an MCP server with the Kotlin MCP SDK, and integrating both cloud and on-device LLMs. It’s a practical look at bringing full-stack AI into real Kotlin applications.
Ready to go deeper? These sessions dive into compiler internals, language design, architecture, and performance, making them ideal for experienced developers who want to explore Kotlin beneath the surface.
Metro is both a multiplatform DI framework and a sophisticated Kotlin compiler plugin. This advanced session breaks down how Metro works inside the compiler, what code it generates, and how its “magic” actually happens. If you’re comfortable with DI frameworks and curious about compiler-level mechanics, this is a rare behind-the-scenes look.
What if Kotlin could enforce that certain objects never escape their intended scope? This talk introduces a proposed design for enforceable locality – lightweight, limited-lifetime objects that prevent leaks and enable safer APIs. Beyond bug prevention, locality opens the door to advanced control patterns, effect-like behavior, and strong backwards compatibility, all while integrating cleanly into today’s Kotlin ecosystem.
Kotlin Multiplatform native builds come with a key constraint: one native binary per project. This session explores what happens when multiple binaries enter the picture, the architectural impact on large systems, and strategies for splitting compilation into manageable parts. It’s a practical look at scaling Kotlin/Native in complex, multi-repository environments.
Instead of showing how to use OkHttp, this talk opens it up. You’ll explore its interceptor-based architecture, connection lifecycle management, caching state machines, URL decoding, and performance optimizations. From generating HTTPS test certificates to extending the library in multiple ways, this session is a masterclass in reading and learning from high-quality Kotlin code.
Kotlin Multiplatform continues to expand what’s possible across devices and platforms. These sessions showcase the latest advancements, real-world journeys, and forward-looking tooling shaping the cross-platform landscape.
This session explores what’s new in Compose Multiplatform and how it continues to improve shared UI across iOS, web, desktop, and Android. You’ll get a hands-on look at recent platform advances, including faster rendering, improved input handling, richer iOS interop, web accessibility improvements, and a smoother developer experience with unified previews, mature Hot Reload, and a growing ecosystem. It’s a practical update on how Compose Multiplatform is becoming an even stronger choice for cross-platform UI.
Go behind the scenes of Sony’s six-year journey from an early, risky experiment with Kotlin Multiplatform to the global success of the Sony | Sound Connect app. From high-speed BLE and background execution to migrating from React Native to Compose Multiplatform, this talk explores technical trade-offs, stakeholder skepticism, and hard-earned architectural lessons. It’s a real-world story of betting on KMP early and scaling it globally.
Swift Export aims to make calling shared Kotlin code from Swift more idiomatic and natural. This session looks at the current experimental state of Swift Export, demonstrates the transition from the old Objective-C bridge to the new approach, and highlights supported features, current limitations, and practical adoption guidance. By the end, you’ll be able to evaluate whether Swift Export is ready for your team.
Discover how Filament, a real-time physically-based rendering engine, can bring dynamic visual effects into your Compose Multiplatform UI. Through practical examples, you’ll explore materials, shaders, lighting, and touch-reactive animations – all without diving too deep into low-level graphics code. It’s a hands-on introduction to building expressive, animated interfaces.
With Kotlin/Wasm reaching Beta and supported in modern browsers, full-stack Kotlin is closer than ever. This talk walks through building a complete web app using Kotlin/Wasm, Compose Multiplatform, Coroutines, Exposed, and Ktor – unifying the frontend, backend, and database in one ecosystem. It’s a practical guide to building performant, fully Kotlin-powered web applications.
Kotlin is increasingly used to power large-scale backend systems. These talks explore how it handles high-performance workloads, large migrations, and mission-critical platforms in the real world.
Discover how Google Search uses server-side Kotlin and coroutines to enable low-latency, highly asynchronous streaming code paths at massive scale. This session explores Qflow, a data-graph interface language connecting asynchronous definitions with Kotlin business logic, along with coroutine instrumentation for latency tracking and critical path analysis. It’s a deep look at building “asynchronous by default” systems.
Uber introduced Kotlin into its massive Java monorepo to modernize backend development without disrupting scale. This talk shares how the JVM Platform team built the business case, addressed tooling and static analysis gaps, overcame skepticism, and enabled thousands of engineers to adopt Kotlin. It’s a practical story of large-scale language evolution inside a global engineering organization.
Adopting Kotlin in a payment platform is a strategic decision about risk, trust, and long-term ROI. This session examines how Kotlin was integrated into a global EMV/PCI ecosystem – from Android terminals to gateways – using null-safety, sealed hierarchies, and value classes to eliminate entire classes of production issues. You’ll see architectural outcomes, measurable compliance gains, and a practical framework for positioning Kotlin as a strategic bet in regulated industries.
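The features this session names map onto a small amount of code. As a hedged, self-contained sketch (my own illustration, not the speaker's material), sealed hierarchies and value classes can make invalid payment states unrepresentable:

```kotlin
// A value class wraps a primitive with domain meaning at near-zero runtime cost,
// so an amount can never be confused with some other Long.
@JvmInline
value class AmountCents(val value: Long) {
    init { require(value >= 0) { "Amount cannot be negative" } }
}

// A sealed hierarchy makes every possible outcome explicit and exhaustive.
sealed interface PaymentResult
data class Approved(val amount: AmountCents, val authCode: String) : PaymentResult
data class Declined(val reason: String) : PaymentResult
object NetworkError : PaymentResult

fun describe(result: PaymentResult): String = when (result) {
    // The compiler rejects this `when` if a new subtype is added but unhandled.
    is Approved -> "approved ${result.amount.value} cents (${result.authCode})"
    is Declined -> "declined: ${result.reason}"
    NetworkError -> "retry later"
}

fun main() {
    println(describe(Approved(AmountCents(1999), "A1B2"))) // approved 1999 cents (A1B2)
    println(describe(NetworkError))                        // retry later
}
```

This is the sense in which null-safety and exhaustive matching "eliminate entire classes of production issues": the unhandled case becomes a compile error rather than a runtime incident.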
AI is rapidly becoming part of modern application development. If you’re exploring agents, LLM integrations, or AI-assisted coding, these sessions will give you both strategy and hands-on insight.
Agentic systems introduce probabilistic behavior and real risk. This talk introduces Eval-Driven Development (EDD), an engineering-first approach to making AI agents reliable. Using Koog, you’ll see how to test agents at multiple layers, collect meaningful metrics, detect regressions, generate synthetic test cases with LLMs, and build continuous evaluation loops that prevent silent degradation in production.
Many AI agents fail when moving beyond demos. This session introduces Koog 1.0.0-RC and explains how its structured, type-safe architecture enables scalable, production-ready agents across JVM and KMP targets. You’ll explore cost control, strongly typed workflows, state persistence, observability with OpenTelemetry and Langfuse, and integrations across the Kotlin ecosystem – all focused on building agents that actually scale.
Improving AI-generated Kotlin code requires more than better prompts. This talk explores practical strategies, evaluation techniques, and lessons from advancing Kotlin code generation in real-world agents. You’ll learn how to measure quality, refine outputs, and apply tools and best practices that ensure reliability, readability, and maintainability, even as models continue to evolve.
This is just a glimpse of the many great sessions waiting for you at KotlinConf’26. With dozens of talks across multiple tracks, the hardest part might simply be choosing which ones to attend. Don’t forget to dive into the full schedule, plan your agenda, and get ready for three days packed with ideas, insights, and conversations with the global Kotlin community.
Most companies are buying AI tools. Very few are investing in AI literacy. There’s a difference, and it’s costing engineering teams more than you think.
Over the past two years of building AI-powered systems and teaching engineers how to work with these tools, I’ve noticed the same pattern everywhere: leadership buys the tools, sends a few Slack messages about “exploring AI,” maybe shares some ChatGPT prompts, and then wonders why adoption is spotty and results are underwhelming.
Here’s what I’ve learned: giving engineers access to AI tools without structured learning is like giving them access to AWS without understanding infrastructure. Sure, something will happen. But it won’t be pretty, and it definitely won’t scale.
When I look at how engineering teams are actually using AI, I see the same three groups everywhere:
The power users (10-15%): They’ve figured it out on their own. They’re using Claude for architecture reviews, automating workflows with n8n, and building custom agents. They’re dramatically more productive.
The experimenters (30-40%): They’re using ChatGPT occasionally. Mostly for code snippets or debugging. They sense there’s more potential but don’t know how to access it.
The skeptics (40-50%): They tried it once, got mediocre results, and decided it’s overhyped. Or they’re quietly worried about job security and avoiding it altogether.
There’s nothing wrong with the tools. The problem is that we’re treating AI literacy like something engineers will just “pick up” organically, the same way they learn a new framework or language.
But AI isn’t a framework. It’s so much more than that, and working with it well requires intentional skill development.
After building AI-powered systems and teaching engineers how to work effectively with these tools, here’s what actually works:
Don’t start with “here’s what Claude can do.” Start with “here’s the work you do every day that AI can amplify.”
In every real example I use, the pattern is the same: when engineers see AI as a tool that makes their work better, not a replacement, adoption skyrockets.
Most engineers treat prompts like Google searches. That’s like treating SQL as keyword matching.
Good prompt engineering isn’t something you learn from a blog post. It’s a skill that needs practice and feedback.
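To make "prompts are not searches" concrete, here is a hypothetical sketch (the names and structure are my own, not from the article) of assembling a structured prompt with a role, context, constraints, and an output contract, where a search-style prompt would be a single bare question:

```kotlin
// A structured prompt carries role, context, constraints, and an output contract.
data class Prompt(
    val role: String,
    val context: String,
    val task: String,
    val constraints: List<String>,
    val outputFormat: String,
) {
    fun render(): String = buildString {
        appendLine("You are $role.")
        appendLine("Context: $context")
        appendLine("Task: $task")
        constraints.forEach { appendLine("Constraint: $it") }
        append("Respond as: $outputFormat")
    }
}

fun main() {
    val prompt = Prompt(
        role = "a senior Kotlin reviewer",
        context = "a coroutine-heavy service with flaky integration tests",
        task = "review this diff for concurrency bugs",
        constraints = listOf("cite the exact line", "suggest a concrete fix"),
        outputFormat = "a markdown list, one finding per bullet",
    )
    println(prompt.render())
}
```

The template itself matters less than the habit: every field above is a decision the engineer makes before asking, which is exactly the practice-and-feedback loop the article describes.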
One of the most valuable things you can do is document what works for your team.
The best setup I’ve seen is an internal wiki documenting exactly that.
This turns individual learning into organizational knowledge.
Junior engineers are the most worried about AI, and ironically, they often benefit the most.
Instead of hiding AI usage or treating it as “cheating,” the most effective approach is pairing junior engineers with seniors specifically to learn how to work with AI effectively.
The focus isn’t speed. It’s judgment. How do you evaluate AI output? When do you trust it? When do you dig deeper?
This builds confidence instead of anxiety.
Here’s what doesn’t work: bringing in an external consultant for a one-day workshop, covering AI “in general,” and calling it done.
Here’s what does work: structured, ongoing learning that’s specific to your stack, your problems, and your team’s actual workflow.
The best AI training programs I’ve seen are internal AI literacy programs tailored to how your team actually works. Don’t just teach tools; focus on what needs to be done.
Let me be blunt: AI training isn’t free. It takes time, focus, and often outside expertise to design well.
But the returns are dramatic.
Teams that invest in structured AI literacy programs consistently report the same shift: the fear of AI replacing jobs turns into excitement about AI amplifying capabilities.
The fanciest tools only get you so far. The teams that win are the ones that invest in teaching their engineers how to think with AI, not just use it.
If you’re responsible for an engineering team, here’s what to do:
1. Audit current AI usage: Asking “who has ChatGPT?” isn’t enough. Ask instead: “Who’s using AI effectively, and what are they doing differently?”
2. Identify your power users: Find the engineers who’ve figured it out. Document their workflows. Turn them into internal mentors.
3. Start small: Pick one workflow (code review, documentation, debugging) and design a structured learning experiment around it.
4. Make AI literacy an explicit goal: Add it to performance reviews, career development plans, and onboarding.
5. Invest in structured training: Whether you build it internally or bring in outside expertise, treat AI education as infrastructure, not a one-off event.
In 2026, every company has access to the same AI tools. Claude, ChatGPT, Cursor, and GitHub Copilot are all commodity infrastructure now.
The competitive advantage isn’t the tools. It’s how well your engineers can use them.
The companies that invest in AI literacy today will have engineering teams that are 2-3x more productive, more confident, and more innovative than their competitors.
The ones that skip this step will wonder why they’re not seeing results, despite spending thousands on AI subscriptions.
AI isn’t replacing engineers. But engineers with AI literacy will replace engineers without it.
The question is: which team are you building?
Alexandra Spalato is a SaaS builder and AI workflow specialist who builds AI-powered systems and teaches engineers how to work effectively with AI tools.
The post Your engineering team’s AI training is probably failing: How to fix it appeared first on LogRocket Blog.