An anonymous reader shared this report from Futurism:
In November, Amazon leaders sent an internal memo to employees, pushing them to use its in-house code generating tool, Kiro, over third-party alternatives from competitors. "While we continue to support existing tools in use today, we do not plan to support additional third party, AI development tools," the memo read, as quoted by Reuters at the time. "As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them."
It was an unusual development, considering the tens of billions of dollars the e-commerce giant has invested in its competitors in the space, including Anthropic and OpenAI... Half a year later, Amazon is singing a dramatically different tune. As Business Insider reports, Amazon is officially throwing in the towel, succumbing to growing calls among employees for access to OpenAI's Codex and Anthropic's Claude... Given the unfortunate optics of opening the floodgates for Codex and Claude Code, an Amazon spokesperson told the publication in a statement that teams are still "primarily using" Kiro, claiming that 83 percent of engineers at the company are leaning on it.
Microsoft employees eligible for the company’s first-ever voluntary retirement program learned the details Thursday, including cash payments of up to nine months of base pay, up to five years of healthcare coverage, and continued stock vesting.
A small team inside Microsoft led by Corporate Vice President Omar Shahine is building “Project Lobster,” an OpenClaw-based agent designed to work around the clock on behalf of knowledge workers within the Microsoft 365 ecosystem.
By combining massive floating power generators with onsite AI hardware, Panthalassa turns the ocean into a self-sustaining computing powerhouse — all without needing a single mile of underwater power cables.
Amazon VP Yunyan Wang is now Chewy’s CTO; Smartsheet names new CFO; and a longtime Microsoft exec joins NetApp’s C-suite, among other tech moves.
Microsoft’s 2026 Work Trend Index finds that the biggest barrier to AI at work isn’t the technology or the workers — it’s the organizations around them.
Beyond massive curtains inside its Everett, Wash., R&D facility, Helion is betting that a downsized testbed can answer key technology questions as the company races to meet deadlines.
Contrary to popular myths, our taxes are relatively low, haven’t exploded skyward, and are nowhere near the point of creating serious damage to the commercial sphere.
Amazon launched Amazon Supply Chain Services, bundling freight, distribution, fulfillment, and parcel shipping into a single offering for any business.
New Xbox CEO Asha Sharma is winding down Gaming Copilot on mobile and canceling its console launch, barely a year after Microsoft debuted the AI feature.
More than 300 members of the Pacific Northwest tech scene packed the Showbox SoDo to honor the year’s top startups, founders, leaders and deal makers across a dozen awards categories.
I'm finally back from my travels! Brazil and Gamescom Latam were awesome, but it's time to get back to work; there's so much stuff on my to-do list!
I got back Wednesday at 6am, took a quick nap, and got right to work on building challenge #4 for my Game Dev Practice Lab. This is the first 2D challenge, which a lot of people have been asking about. It is already live, and later today I will publish the FREE YouTube video talking about challenges #2 and #3. So whether or not you can afford it doesn't matter: go learn by DOING!
Unity AI has just entered Open Beta, meaning anyone who wants to try it can; no need to request access. You can try it out here.
I'm currently researching Unity AI for a super detailed tutorial, and so far I am finding the tool genuinely helpful! This is not mindless AI hype, and it's not even focused on AI generation. Rather, it's capable of building genuinely useful things: custom editor tools, helping you analyze the Profiler, taking actions in your project, seeing your project visually, doing basic level design, organizing your codebase, fixing bugs (with context of your project), and a bunch more.
Like I said, I'm still in the research phase, but thankfully Unity invited me to be a guest on their Unity AI livestream showcase, which was quite educational. They showed how they built a really nice demo and shared a ton of useful hints for getting the most out of these tools.
The demo is a vehicle combat game set in an arena: the player controls a car while NPC trucks spawn and try to destroy them. The demo was built by a handful of the developers WITH AI, not replaced by it. Meaning, for this AI to be useful, it required actual developers who know how to get the most out of it.
You can watch the full livestream but here are the main useful takeaways that I got myself:
Build systems and learn. At 13:00 they showcase terrain deformation, which I personally found really nice, and they describe how they used AI to help come up with that system. This is something I've wanted to research for ages, and I think AI is super useful in this process: you can ask it to help you build such a system like they did, and then, importantly, ask it to help you understand WHY it works. The fact that this AI exists inside Unity means it has context of Unity itself and of game development, so it's much better at building systems like these, which rely on shaders and rendering, than a generic LLM.
Use it for editor tools. At 18:00 they showcase a simple editor tool made with AI that helps position all the modular pieces to make an arena of any size. Super important takeaway: Ask AI to give you tons of sliders and settings so you can manually make it perfect, because the AI output will not be perfect.
Great for research. At 26:00 they wanted to have some robots cheering but did not know which approach would be best, so they just asked the AI, and it gave them 3 possible options with their pros and cons, which is another great learning example. After picking an option (in this case Vertex Animated Textures), the AI helped implement that system (with another custom tool).
Profiler Analysis. At 33:20 the game was having performance issues, so they asked the AI what could be the cause, and it identified a very niche issue: one ProBuilder mesh had insanely long triangles, which apparently tank performance. This is one of those things that is hard to find and easy to fix, and "finding" things is where AI excels.
There are a lot more uses they showed in the livestream, like making a Hexagon shader to protect the robots, generating the UI and the code behind it, generating the statue and vehicles, and more.
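The editor-tool takeaway is worth a concrete sketch. Below is a minimal, hypothetical version of that kind of tool: a Unity editor window that places modular pieces in a circle and exposes plenty of sliders so you can hand-tune what the AI scaffolds. Names and defaults here are my own, not from the demo.

```csharp
// Hypothetical sketch of an AI-scaffolded arena-placement editor tool.
// Lots of sliders are exposed on purpose: the AI output will not be
// perfect, so manual fine-tuning controls are essential.
using UnityEngine;
using UnityEditor;

public class ArenaBuilderWindow : EditorWindow
{
    float radius = 20f;          // distance of pieces from the center
    int pieceCount = 24;         // how many modular pieces to place
    float pieceScale = 1f;       // uniform scale applied to each piece
    GameObject modularPiece;     // prefab assigned by the user

    [MenuItem("Tools/Arena Builder")]
    static void Open() => GetWindow<ArenaBuilderWindow>("Arena Builder");

    void OnGUI()
    {
        radius = EditorGUILayout.Slider("Radius", radius, 5f, 100f);
        pieceCount = EditorGUILayout.IntSlider("Pieces", pieceCount, 4, 128);
        pieceScale = EditorGUILayout.Slider("Piece Scale", pieceScale, 0.1f, 5f);
        modularPiece = (GameObject)EditorGUILayout.ObjectField(
            "Piece Prefab", modularPiece, typeof(GameObject), false);

        if (GUILayout.Button("Build Arena") && modularPiece != null)
        {
            for (int i = 0; i < pieceCount; i++)
            {
                // Evenly distribute pieces around a circle.
                float angle = i * Mathf.PI * 2f / pieceCount;
                var pos = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius;
                var piece = (GameObject)PrefabUtility.InstantiatePrefab(modularPiece);
                piece.transform.position = pos;
                piece.transform.localScale = Vector3.one * pieceScale;
                piece.transform.LookAt(Vector3.zero); // face the arena center
            }
        }
    }
}
```

The point is not this exact tool; it's the pattern of asking the AI for a tool whose every magic number is a slider, so you stay in control of the final result.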
Some general best practices:
Break problems down. Don't ask "build me an entire game"; instead ask "help me with this tiny specific task", then ask for another one, and so on. Many tiny tasks instead of huge ones.
Be as descriptive as possible; the more detail, the better.
Generate characters in T-pose to make them easier to animate.
When generating sprites, ask for a solid color background (like green) to make it easier to remove later.
Ask for editor tools with tons of fine-tuning sliders.
Ask it brainstorming questions instead of asking for a specific output: "I have some robots in the stands and I want a shield to protect them, give me 5 possible approaches for solving this problem."
Use screenshots and images to visually guide the AI on visual tasks (like level design or particle effects).
Drag related prefabs and assets in to give more context to the AI.
Use Plan mode to come up with a plan before you attempt any changes.
Again the main thing here is this is NOT "AI will make the game for you" but rather "your skills + AI will help you make better games faster". As always it's a tool meant to help you, not replace you. So give it a try here and see how it helps you in your workflow.
In terms of pricing, here is their page. On the Free Unity Personal plan you get 1000 free credits you can try out, and after that you can pay $10 per month. If you're on the Unity Pro tier you get 2000 credits per month. Then you can buy more separately if you need them. Different tasks require different credits, for example generating a sprite is 5-10 credits, and a quick query is 2-5 credits, so it feels like 1000 is actually a pretty decent amount. Example: This demo which had very heavy usage of AI (with lots of trial and error) was built with around 1800 credits.
At the end of the livestream I asked if this demo would be available for download so we could inspect all of it and they mentioned how they hadn't considered it but maybe.
I am very, very curious to see how people adopt these AI tools. From my limited research so far (into this tool and AI in general), I would say learning how to use these tools is the most important thing. Knowing the best practices for breaking tasks down and prompting correctly is crucial to getting the best results, so I look forward to seeing how those best practices develop. Stay tuned for my dedicated video tutorial coming out in the near future.
Affiliate
Surprising Bundle 94% OFF, FREE Shader
Unity is running a surprising bundle! Surprising because I instantly spotted that it contains one of my most recommended assets of all time, one that I definitely did not expect to be bundled!
It’s worth it for that asset alone (Asset Inventory) and on top of that getting all the other assets is just an awesome bonus. The whole thing has a huge discount and it’s mostly made up of new assets so you probably don’t have any yet. (I only had one asset myself)
Get it HERE and use coupon DANIELILETT at checkout to get it for FREE!
Looking for Characters and Weapons in a realistic style for your games? Check out this awesome HumbleBundle!
It's made by Bugrimov Maksim, who is one of the best publishers of realistic assets. This pack has a mountain of characters in all styles, alongside a ton of weapons with first-person animations.
Valve is such a weird company, but usually weird in a good way (although I do wish they would lower their 30% cut for indie games).
They just released their latest piece of hardware, the Steam Controller, and it went out of stock almost instantly. You can't currently buy it on the official site for $99, although there are already scalpers on eBay selling them for $300.
While you wait for more official stock, you can actually build one yourself! By that I mean that Valve has just released the CAD files for the Steam Controller and its Puck under a slightly restrictive Creative Commons license, which means people can now freely download the official shell files, modify them, 3D print accessories, and make all sorts of weird custom creations, as long as it is non-commercial.
They also included engineering drawings and "keep out" zones so people do not accidentally block things like the antenna or other critical areas.
Now technically you can't really build your own complete Steam Controller, these files are for the external shell and not the internals, but still this is pretty fun for them to do! Usually things like console shells are proprietary so it's nice to see a big company just put them out for anyone to build third party accessories for. There's a nice Valve-like message on the GitLab page: "Your Steam Controller is yours, and you have the right to do with it what you want."
I am now curious to see what people will build from this. A giant Steam Controller? A cursed ergonomic shell? A phone mount? A clever puck holder with flashing lights? This is one of those stories that is just fun, and I really hope the community goes crazy with creativity.
Glue (or more technically, adhesives) is a fascinating thing. How do you connect one thing to another thing? It is also a surprisingly complex topic since you need different glue types to glue different objects. Gluing a Hat to a piece of Wood? You need something different than if you were gluing Metal to Plastic.
Here is a fun website that shows you what you need to glue THIS to THAT. You just pick the this and that from the dropdown menu and it tells you what works best, neat!
Plus there's a Trivia page! Did you know that "The Aztec Indians in Central America used animal blood mixed with cement as a mortar for their buildings"?
I love super niche websites like these, so silly but so useful when you really need them. Now I know that if I ever want to glue Ceramic to some Rubber, I should be using Household Goop.
This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the code in C# are 100% mine.
Hi!
Some of these started as small pet projects.
i.e.: a quick helper for a demo, a tiny tool for a conference, a library to avoid repeating the same code again and again, or one of those “I’ll just build this in one evening” ideas that somehow becomes a real thing.
And now, thanks to GitHub Copilot, many of these experiments are becoming open source, free NuGet packages that I hope are useful to everyone.
Some are focused on AI. Some are focused on local models. Some help with embeddings, speech, QR codes, document processing, MCP tools, agents, and developer productivity.
In other words: a beautiful collection of useful chaos.
Evaluations and testing
These packages help with evaluation workflows, reporting, synthetic test data, and xUnit integration. The idea is simple: AI applications should not rely only on “it worked once in my demo.”
They need repeatable checks.
They need quality gates.
They need tests that can run again tomorrow, when the model, prompt, data, or weather in the cloud changes.
Because yes, AI apps are fun. But “trust me bro, the prompt is good” is not a testing strategy.
Agents and orchestration
The agent-orchestration packages are focused on coordinating multiple AI agents through a lightweight workflow.
That is a nice mental model for agent-based workflows. It also fits very well with GitHub Copilot, SQUAD-style automation, and repo-based development experiments.
Because once you have more than one agent, you need orchestration.
Otherwise, congratulations, you invented a very expensive group chat.
AOT-friendly mapping for .NET
The ElBruno.AotMapper family is focused on compile-time DTO mapping.
This project is especially interesting for modern .NET workloads because it avoids runtime reflection and generates mapping code at compile time.
That makes it useful for:
NativeAOT apps
trimmed applications
cloud-native workloads
serverless scenarios
places where startup time and predictability matter
In short: less runtime magic, more generated code that you can actually inspect.
And sometimes that is exactly what you want.
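To make the idea concrete, here is a hedged sketch of what compile-time mapping buys you: the mapper is plain, generated-style code with no reflection, so it survives trimming and NativeAOT. The types and names below are illustrative; they are not taken from the ElBruno.AotMapper package itself.

```csharp
// Illustrative types, not from the package.
public record PersonEntity(string Name, int Age, string InternalNotes);
public record PersonDto(string Name, int Age);

public static class PersonMapper
{
    // The kind of method a source generator would emit: readable,
    // inspectable, and free of PropertyInfo lookups or Reflection.Emit,
    // which is what makes it safe for trimmed and NativeAOT builds.
    public static PersonDto ToDto(PersonEntity source) =>
        new(source.Name, source.Age);
}
```

Compare this with reflection-based mappers, which resolve properties at runtime and can break once the trimmer removes members it thinks are unused.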
Local embeddings
The ElBruno.LocalEmbeddings family is one of the most useful groups for AI developers building RAG, semantic search, local-first AI apps, or privacy-aware demos.
This family gives you local embedding generation in .NET, including support for text embeddings, image embeddings, vector data integrations, Kernel Memory, and NPU-specific packages.
The NPU packages are especially cool because they connect directly with the AI PC story.
Not every embedding call needs to cross the internet.
Sometimes the best cloud call is the one you did not make.
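As a concrete sketch of what you do with local embeddings once you have them, here is a minimal semantic-search ranking by cosine similarity. `IEmbedder` is a hypothetical stand-in for whatever local embedding API you use; it is not the package's real interface.

```csharp
using System;
using System.Linq;

// Hypothetical abstraction over a local embedding model.
public interface IEmbedder { float[] Embed(string text); }

public static class SemanticSearch
{
    // Cosine similarity between two embedding vectors.
    public static double Cosine(float[] a, float[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    // Rank documents by similarity to the query, best match first.
    public static string[] Rank(IEmbedder embedder, string query, string[] docs)
    {
        var q = embedder.Embed(query);
        return docs.OrderByDescending(d => Cosine(q, embedder.Embed(d))).ToArray();
    }
}
```

Everything here runs on your machine; no embedding call crosses the internet.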
Local LLMs
The ElBruno.LocalLLMs packages are focused on running and integrating local language models from .NET.
These are useful for local chat, local model experimentation, and RAG-style workflows.
Cloud models are amazing. But sometimes you want the model running right there next to your code, your logs, your fan noise, and your questionable coffee.
MarkItDotNet: from files to AI-ready Markdown
The ElBruno.MarkItDotNet family is probably one of the biggest and most useful areas in this package collection.
This family is about converting, preparing, enriching, chunking, indexing, validating, and syncing content for AI workflows.
In a typical RAG project, everyone wants the fancy chat UI.
But before that, someone has to solve the real problem:
Can we clean and prepare the documents first?
That is where this package family fits.
It helps turn files into AI-ready Markdown and supports scenarios like document conversion, chunking, metadata extraction, citations, Azure AI Search, Whisper transcription, and quality checks.
Because every RAG project eventually becomes a document-cleanup project wearing an AI hat.
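As one small example of that preparation work, here is a simplified sketch of fixed-size chunking with overlap, the kind of step this package family automates. This is my own illustration, not the package's actual implementation.

```csharp
using System;
using System.Collections.Generic;

public static class Chunker
{
    // Split text into chunks of at most `size` characters, where each
    // chunk overlaps the previous one by `overlap` characters so that
    // sentences cut at a boundary still appear whole in one chunk.
    public static List<string> Chunk(string text, int size = 500, int overlap = 50)
    {
        var chunks = new List<string>();
        for (int start = 0; start < text.Length; start += size - overlap)
        {
            int len = Math.Min(size, text.Length - start);
            chunks.Add(text.Substring(start, len));
            if (start + len >= text.Length) break; // reached the end
        }
        return chunks;
    }
}
```

Real pipelines add metadata, citations, and quality checks on top, but the overlap trick alone already avoids a common RAG failure mode: answers split across two chunks.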
Text-to-image generation
The ElBruno.Text2Image packages provide a .NET-friendly way to work with image generation.
Tool routing
This is useful when working with agents that have access to many tools.
Sending every possible tool to the model all the time is not always a good idea. It burns tokens, increases noise, and makes the model work harder than needed.
Tool routing helps select the most relevant tools for the task.
Agents are great.
Agents with 97 tools in context are a token bonfire.
Search, ranking, and retrieval
These packages help with retrieval, ranking, and search scenarios.
Desktop utilities
AspireMonitor helps with Aspire monitoring. ClockTray helps with Windows tray clock scenarios. OllamaMonitor gives quick visibility into local Ollama runtime status.
Tiny tools. Big quality-of-life improvement.
The best kind of yak shaving.
Hugging Face helpers
The Hugging Face downloader packages help with downloading models and related assets from Hugging Face.
OllamaSharp extensions
This package adds useful extensions around OllamaSharp, especially for scenarios where local LLM calls can take longer and timeout management becomes important.
If you have ever waited for a local model to respond and wondered whether it was thinking, frozen, or silently judging your prompt, this one makes sense.
MemPalace: memory for AI apps
The MemPalace family is focused on memory infrastructure for AI apps and agents.
Modern .NET applications are increasingly distributed, integrating APIs, background services, and external AI systems. With the rise of AI coding tools such as GitHub Copilot and frameworks like the Microsoft Agent Framework, developers can now generate large portions of application logic.
This raises a question: When AI can generate much of the code, what becomes the core responsibility of a .NET developer?
This article answers that question through a practical case study of the shift, highlighting how architecture, contracts, and observability via .NET Aspire take center stage. As systems become more dynamic and AI-driven, observability and orchestration become just as important as implementation.
To explore this in an easy-to-understand way, I built a simple full-stack e-commerce application (“flowershop”) using:
• Vue.js (frontend)
• ASP.NET Core Web API (backend)
• Microsoft Agent Framework (agent orchestration)
• .NET Aspire (distributed tracing and system visibility)
• GitHub Copilot (AI-assisted development)
Now, let’s explore!
1. System Overview
The application includes these features:
• Product browsing and checkout
• AI-powered chat assistant
• Automated product description generation
• End-to-end observability using .NET Aspire
This is the final UI (Vue.js + AI Assistant):
Figure 1. Vue.js frontend with product listing, admin form, and AI assistant.
The frontend communicates with ASP.NET Core APIs, which orchestrate AI agents and external services.
2. Architecture: Orchestrating AI in .NET
Sales Assistant Flow
Figure 2. Sales Assistant architecture (Vue.js → ASP.NET Core API → Agent → LLM).
In this architecture:
• The Vue.js client sends requests to the API
• The API routes requests to a Sales Agent
• The agent interacts with the LLM and backend tools
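The routing step can be sketched as a minimal ASP.NET Core API that hands the chat request to an agent abstraction. `ISalesAgent` and `EchoAgent` below are hypothetical stand-ins so the sketch is self-contained; they are not the Microsoft Agent Framework's actual types.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;
using System.Threading.Tasks;

var builder = WebApplication.CreateBuilder(args);

// A real agent (wrapping the LLM and backend tools) would be registered here.
builder.Services.AddSingleton<ISalesAgent, EchoAgent>();

var app = builder.Build();

// The Vue.js client posts a message; the API routes it to the agent.
app.MapPost("/api/chat", async (ISalesAgent agent, [FromBody] string message) =>
    Results.Ok(await agent.AskAsync(message)));

app.Run();

// Hypothetical abstraction over the sales agent.
public interface ISalesAgent
{
    Task<string> AskAsync(string message);
}

// Stand-in implementation so the sketch runs without an LLM.
public class EchoAgent : ISalesAgent
{
    public Task<string> AskAsync(string message) =>
        Task.FromResult($"(agent) You asked: {message}");
}
```

Keeping the agent behind an interface like this is also what makes the flow traceable: the API boundary is the natural place to attach Aspire instrumentation.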
Writer Flow
Figure 3. Writer flow for generating product descriptions.
When a product image is uploaded:
• The API forwards the request to an agent
• The agent interacts with the LLM for image understanding
• Additional context is retrieved
• A final description is generated
Observability and Orchestration with .NET Aspire
Figure 4. .NET Aspire tracing of LLM interactions and tool calls.
Using .NET Aspire, I was able to:
• Trace full request flows: Vue.js → API → Agent → LLM → Tool calls
• Inspect tool usage: GetFlowerDetails, PlaceOrder, SearchFlowersByOccasion
• Monitor latency, token usage, and execution paths
This is essential because AI systems are non-deterministic; without observability, their behavior is difficult to understand and debug.
3. Implementation Challenges
During implementation, I encountered several challenges:
Ambiguous Specifications
Initial GitHub issues were short and informal. This led to:
• Misinterpreted requirements
• Inconsistent outputs
AI requires structured and explicit instructions.
Loss of Control
AI-generated pull requests often:
• Ignored coding conventions
• Required heavy revision
Effort shifted from writing code to reviewing and testing it manually.
Debugging Complexity
AI-generated logic was difficult to trace and fix.
AI accelerates generation, but not understanding.
4. Evolving the Development Approach
To address these challenges, the process was refined.
Structured Issue Definition
Issues were rewritten using Markdown, clear requirements, and acceptance criteria, improving clarity and reducing ambiguity [1][2].
API Contract Design
Explicit API contracts were introduced to align frontend and backend components, ensuring clear interfaces and predictable integration [3][4].
Contracts become critical when AI generates both sides of a system.
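For example, an explicit contract can be as simple as a pair of shared record types that both the AI-generated frontend code and the AI-generated backend code must satisfy. The shapes below are illustrative, not taken from the flowershop repo.

```csharp
using System;

// Hypothetical request/response contract for placing an order.
// Freezing these shapes up front means frontend and backend can be
// generated (and reviewed) independently against the same interface.
public record CreateOrderRequest(int FlowerId, int Quantity, string CustomerEmail);
public record CreateOrderResponse(Guid OrderId, decimal Total, string Status);
```

When either side drifts from the contract, the break shows up at compile time or in contract tests, instead of as a silent integration bug between two AI-generated halves.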
Instruction and Agent Design
Custom instructions and configurations were used to guide coding conventions, architecture, and workflow [5][6].
Figure 5. Setting up GitHub Copilot in the project.
Continuous Learning
Improving outcomes required continuous learning from official documentation and evolving frameworks [7][8].
AI amplifies the need for technical knowledge; it does not replace it.
5. Key Lessons for .NET Developers
• Design before generating code
• Be explicit across the system
• Treat AI as a co-engineer
• Invest in observability (Aspire is critical)
6. Discussion: The Shift in Developer Responsibility
AI does not remove responsibility—it redistributes it to a higher level.
Key questions remain:
• Which responsibilities should remain human-controlled?
• Who is accountable for AI-generated code?
• How might teams adapt workflows to integrate AI effectively?
• What skills are required to remain effective in this new paradigm?
7. Conclusion
AI-assisted development in .NET is not just about generating code; it is about building systems that integrate AI reliably. With tools like GitHub Copilot, Microsoft Agent Framework, and especially .NET Aspire, developers gain new capabilities, but also new responsibilities.
Success depends on:
• Clear architecture
• Strong contracts
• Well-defined orchestration
• Deep observability
8. Source Code
The full implementation is available on GitHub: 👉 GitHub Repo