Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
151548 stories · 33 followers

Anthropic Now Leads OpenAI in Annualized Revenue

From: AIDailyBrief
Duration: 10:22
Views: 851

Anthropic reports a $30 billion annualized revenue run rate and closes a multi-gigawatt compute deal with Google and Broadcom. OpenAI and Anthropic face enormous model-training costs and use accounting methods that exclude training to show near-term profitability. Google commercializes Gemma 4 with an on-device dictation app, while Meta prepares a partly proprietary model release and internal token-maxing practices reshape engineering culture.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/

Read the whole story
alvinashcraft
37 minutes ago
Pennsylvania, USA

Android Weekly Issue #722

Articles & Tutorials
Sponsored
Microsoft, Meta, Amazon, and DoorDash trust Maestro. So do hundreds of Android teams who got tired of flaky, hard-to-maintain UI tests. Find elements, build and run tests visually with Maestro Studio. And yeah, it's free.
Jaewoong Eum explains how Compose hot reload works on real devices, covering its pipeline, supported changes, and instant literal patching.
Nav Singh explores Kotlin 2.4.0's new collection sorting-validation extensions that efficiently check ordering without re-sorting.
Sponsored
Shipping white-label apps used to mean repeating the same steps and signing in and out of Google Play Console dozens of times per release. With Runway, ship everything in one place, just once.
Tezov shows how, with Koin annotations and the Koin compiler, you can completely bypass expect/actual.
Thomas Künneth explores Android's AppFunctions API for making app capabilities discoverable and executable by AI agents.
Place a sponsored post
We reach out to more than 80k Android developers around the world, every week, through our email newsletter and social media channels. Advertise your Android development related service or product!
Libraries & Code
A Kotlin Multiplatform library providing JSON Pointer implementation and integration with Kotlin Serialization JSON.
A sample Android app for experimenting with on-device Gemini Nano via the ML Kit Prompt API.
Videos & Podcasts
Stevdza-San demonstrates how to set up and use a local database on the web target in Kotlin Multiplatform.
Philipp Lackner demonstrates how attackers can hack Android in-app purchases and how to protect against it.
Firebase covers the March highlights including AI Studio integration and hybrid AI inference for Android.
Sergio Sastre demonstrates using behavior tests as guardrails when refactoring Android code with Gemini in Android Studio.
Dave Leeds explores how coroutine context is preserved across multi-coroutine flows in Kotlin.

How Intelligent is AI?


It’s been almost exactly three years since I recorded my “On Losing Our Jobs to AI” video on YouTube. I have had time to use the tooling and talk to a lot of people about the effect on our industry. This is a follow-up.

In my career, there have been two types of developer: those who crave to understand how everything works, and those who don’t mind just following a pattern or some found code to get it to work. I don’t have a bias against either type of developer, and I would say that most organizations can benefit from both.

But now that we’re in the Claude era (or whatever coding agent you’re using), I think we have to judge our agents too. I’ve used Claude to convert projects, make prototypes, and refactor quite a bit. And in that time, I’ve come to the conclusion that AI is that second type of developer.

AI, by and large, has built its models from the other things it was fed, whether the goal is to create songs, videos, or code. It mimics what we think we want. And that is fine. But I think we do ourselves a disservice when we think we can prompt our way to finished products. I think we’re seeing re-hiring of a lot of devs, as someone has to own the code that Claude (et al.) creates.

Is this different than what we’ve always done? I don’t think so. When we create a new project (e.g. npm init, dotnet new), we are asking some collective intelligence to start us off without having to build everything from scratch. Prompt engineering is just that. I don’t believe in curating a prompt to the point that it contains the entire spec for a solution. Instead, I want a prompt that will handle the 80% case. That’s it. I own the code after that. You can’t maintain that code by asking Claude to find a specific bug and fix it unless you then understand what it gave you.

This 80% case is a tough one for me, honestly. I love creating the first 80% of a project; I get bored in the final 20% (which is why I’m an architect, trainer, and consultant instead of a day-to-day developer). I see the productivity benefit, but I still see a bright future for most devs. It might affect boot-camp developers, which is a shame, as they were promised a future that might not exist much longer.

What do you think?


If you liked this article, see Shawn's courses on Pluralsight.

The Fastest Possible HTTP Queries with Marten


I’ve been piddling this weekend with testing out JasperFx Software‘s soon to be officially curated AI Skills. To test and refine those new skills, I’ve been using my buddies Chris Woodruff and Joseph Guadagno‘s MoreSpeakers application as a sample application to port to Wolverine and Marten (and a half dozen others too so far).

I’m sure that you’ll be positively shocked to know that it’s taken quite a few minor corrections and “oh, yeah” enhancements to the guidance in the skills to get the translated code exactly where I’d want it. It’s not exactly this bad, but what it most reminds me of is my experience coaching youth basketball teams of very young kids, when I’d constantly kick myself after the first game for all the very basic basketball rules and strategies I’d forgotten to tell them about.

Anyway, on to the Marten and Wolverine part of this. Consider this HTTP endpoint in the translated system:

public static class GetExpertiseCategoriesEndpoint
{
    [WolverineGet("/api/expertise")]
    public static Task<IReadOnlyList<ExpertiseCategory>> Get(IQuerySession session, CancellationToken ct)
        => session.Query<ExpertiseCategory>()
            .Where(c => c.IsActive)
            .OrderBy(c => c.Sector)
            .ThenBy(c => c.Name)
            .ToListAsync(ct);
}

This is a pretty common pattern: run a query against the database, then stream the results down to the HTTP response. I’ll write a follow-up post later to discuss the greater set of changes, but let’s take that endpoint code above and make it a whole lot more efficient by utilizing Marten.AspNetCore‘s ability to just stream JSON right out of the database like this:

public static class GetExpertiseCategoriesEndpoint
{
    [WolverineGet("/api/expertise")]
    // It's an imperfect world. I've never been able to come up with a syntax
    // option that would eliminate the need for this attribute that isn't as ugly
    // as using the attribute, so ¯\_(ツ)_/¯
    [ProducesResponseType<ExpertiseCategory[]>(200, "application/json")]
    public static Task Get(IQuerySession session, HttpContext context)
        => session.Query<ExpertiseCategory>()
            .Where(c => c.IsActive)
            .OrderBy(c => c.Sector)
            .ThenBy(c => c.Name)
            .WriteArray(context);
}

The version above is 100% functionally equivalent to the first version, but it’s a lot more efficient at runtime, because it writes the JSON directly from the database (Marten is already storing state using PostgreSQL’s JSONB type) right to the HTTP response, byte by byte.

And just to be silly and be even more serious about the optimization, let’s introduce Marten’s compiled query feature that completely eliminates the runtime work of having to interpret the LINQ expression into an executable plan for executing the query:

// Compiled query — Marten pre-compiles the SQL and query plan once,
// then reuses it for every execution. Combined with WriteArray(),
// the result streams raw JSON from PostgreSQL with zero C# allocation.
public class ActiveExpertiseCategoriesQuery : ICompiledListQuery<ExpertiseCategory>
{
    public Expression<Func<IMartenQueryable<ExpertiseCategory>, IEnumerable<ExpertiseCategory>>> QueryIs()
        => q => q.Where(c => c.IsActive)
            .OrderBy(c => c.Sector)
            .ThenBy(c => c.Name);
}

public static class GetExpertiseCategoriesEndpoint
{
    [WolverineGet("/api/expertise")]
    [ProducesResponseType<ExpertiseCategory[]>(200, "application/json")]
    public static Task Get(IQuerySession session, HttpContext context)
        => session.WriteArray(new ActiveExpertiseCategoriesQuery(), context);
}

That’s slightly uglier code that we had to go out of our way to write compared to the simpler, original mechanism, but that’s basically how performance optimization generally goes!

At no point is it ever trying to deserialize the actual ExpertiseCategory objects in memory. There are of course some limitations or gotchas:

  • There’s no anti-corruption layer of any kind, and this can only send down exactly what is persisted in the Marten database. I’ll tackle this in more detail in a follow-up post about the conversion, but I’m going to say I don’t really think this is a big deal at all, and we can introduce some kind of mapping later if we want to change what’s actually stored or how the JSON is served up to the client.
  • You may have to be careful to make Marten’s JSON storage configuration match what HTTP clients want — which is probably just using camel casing and maybe opting into Enum values being serialized as strings.
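As a sketch of that second point, the JSON defaults can be aligned with client expectations when Marten is registered. This is a minimal sketch assuming recent Marten versions — the `UseSystemTextJsonForSerialization` overload and the `EnumStorage`/`Casing` enums are what I believe current Marten exposes, and the "marten" connection string name is a placeholder — so check your version's docs:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Persist (and therefore serve) JSON with camel-cased property names and
    // enums stored as strings, so the raw JSONB that WriteArray() streams
    // matches what typical JavaScript clients expect.
    opts.UseSystemTextJsonForSerialization(EnumStorage.AsString, Casing.CamelCase);
});
```

Because WriteArray() bypasses serialization entirely at read time, this storage-side configuration is the only place left to shape the JSON.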

But now, let’s compare the code above to what the original version using EF Core had to do. Let’s say it’s about a wash in how long it takes Marten and EF Core to translate the LINQ expression itself; beyond that:

  1. EF Core has to parse the LINQ expression and turn that into both SQL and some internal execution plan about how to turn the raw results into C# objects
  2. EF Core executes the SQL statement, and if this happens to be a .NET type that has nested objects or collections, this could easily be an ugly SQL statement with multiple JOINs — which Marten doesn’t have to do at all
  3. EF Core has to loop around the database results and create .NET objects that map to the raw database results
  4. The original version used AutoMapper in some places to map the internal entities to the DTO types that were going to be delivered over HTTP. That’s a very common .NET architecture, but that’s more runtime overhead and Garbage Collection thrashing than the Marten version
  5. My buddies used an idiomatic Clean/Onion Architecture approach, so there are a couple of extra layers of indirection in their endpoints that require a DI container to build more objects on each request, so there’s even more GC thrashing. It’s not obvious at all, but in the Wolverine versions of the endpoint, there’s absolutely zero usage of the DI container at runtime (that’s not true for every possible endpoint of course).
  6. ASP.NET Core feeds those newly created objects into a JSON serializer and writes the results down to the HTTP response. The ASP.NET Core team has optimized the heck out of that process, but it’s still overhead.
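For contrast, the six steps above correspond roughly to code shaped like this. This is a hypothetical sketch of the conventional approach, not the actual MoreSpeakers code; `AppDbContext`, `ExpertiseCategoryDto`, and the AutoMapper usage are assumptions for illustration:

```csharp
public class GetExpertiseCategoriesHandler
{
    private readonly AppDbContext _db;   // EF Core DbContext, built by the DI container (step 5)
    private readonly IMapper _mapper;    // AutoMapper, also resolved per request (step 5)

    public GetExpertiseCategoriesHandler(AppDbContext db, IMapper mapper)
    {
        _db = db;
        _mapper = mapper;
    }

    public async Task<List<ExpertiseCategoryDto>> Handle(CancellationToken ct)
    {
        // Steps 1-3: LINQ translation, SQL execution, entity materialization
        var entities = await _db.ExpertiseCategories
            .Where(c => c.IsActive)
            .OrderBy(c => c.Sector)
            .ThenBy(c => c.Name)
            .ToListAsync(ct);

        // Step 4: map the entities onto DTOs, allocating a second set of objects
        return _mapper.Map<List<ExpertiseCategoryDto>>(entities);
    }

    // Step 6 happens outside this class: ASP.NET Core serializes the returned
    // DTO list to JSON and writes it to the response.
}
```

Every one of those allocations is work the WriteArray() version simply never does.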

The whole point of that exhaustive list is just to illustrate how much more efficient the Marten version potentially is than the typical .NET approach with EF Core and Clean Architecture.

I’ll come back later this week with a bigger post on the differences in structure between the original version and the Critter Stack result. It’s actually turning out to be a great exercise for me, because the problem domain and domain model mapping of MoreSpeakers lend themselves to a good example of using DCB to model Event Sourcing. Stay tuned for that one!




Game Dev advice is contradictory! And this AI is too dangerous to release!


Hello and Welcome, I’m your Code Monkey!

I’ve been working like crazy lately, trying to keep up with a ton of projects while also building a backlog of videos since later this month I will be travelling to Gamescom Latam! I’ve never been so I’m really looking forward to that event!

Oh and I just started Challenge #2 on my Game Dev Practice Lab, this one is on making a nice Traffic Controller! The demo scene includes cars driving around, and it's YOUR job to make sure they obey traffic rules (traffic lights, stop signs) while not crashing into each other and driving smoothly! Fun!

By the way, the 20% OFF Launch coupon is still active until the end of this month.

  • Game Dev: Game Advice Contradictory

  • Tech: AI too dangerous

  • Gaming: Triple I Initiative

  • Fun: Humans fly by the Moon!



Game Dev

Is Game Dev Advice Contradictory?

If you're looking for advice on Game dev (or literally anything) then you can find tons of it on the internet. And if you attempt to apply all of it you might discover some things are contradictory, except that's not exactly right.

One developer on Reddit posted about exactly this. Things like "TikTok is amazing for marketing" versus "TikTok does not convert into wishlists", also "launch a demo as soon as possible" versus "time your demo for festivals", or "publishers are useless" versus "my publisher was essential."

And the truth is quite simply: ALL of those can be correct!

What is missing is simply nuance, advice in Game Dev is extremely situational. It depends on the genre, the visuals, the platform, the current market, your budget, your goals, your audience, and just the specific game you're making. A tactic that works amazingly well for one game might completely fail for another. There is no one-size-fits-all checklist for success.

But more importantly, that advice is actually NOT contradictory. TikTok is indeed excellent for marketing; you can have a video go viral and get millions of views. At the same time, it is true that TikTok views don't convert too well into wishlists. 10k views on a YouTube video will likely convert into wishlists better than 100k on TikTok. But even if they convert less, it is still quite valuable to have those 100k eyeballs on your game: maybe next time they are randomly browsing Steam they'll see your game and wishlist it. Both are true.

On demos, you DO want to publish your demo as soon as possible, meaning as soon as you have your game idea in a good playable state. At the same time, launching a demo when it's just a grey-box prototype might harm your initial wishlist velocity; that's the nuance behind "as soon as possible". And you ALSO want to time Next Fest, getting into it with a hot demo planned just before your release date.

Same thing for publishers, you have good and bad publishers. A good one will massively help your game reach the next level of success, and a bad one will ruin you so you might as well self-publish. That's not conflicting, that's just analyzing what is a good publisher vs a bad one, that's the nuance.

So these are actually pretty much all great pieces of advice, but you have to adapt them to your own situation and to the specific stage you're at in the development process.

Meaning that the real skill is NOT memorizing universal rules. It is learning how to judge context. What type of game are you making? Who is the audience? What is your actual goal? Are you trying to maximize wishlists, get funding, grow a community, or just finish and ship something small?

That is also why two successful developers can give seemingly opposite advice and both be right. If you regularly listen to successful devs (like on the Jonas Tyroller podcast) you will see how everyone finds different paths to success.

So yes, definitely listen to advice, but learn to analyze the nuance behind it and adapt it to your specific situation.

I am actually thinking of doing a full video on this exact topic/post just to try to explain all the nuance behind all of this seemingly (but not really) conflicting advice.


Affiliate

FREE VFX, and 99% OFF Bundle!

Oh wow this might be the biggest HumbleBundle in ages!

It includes Tools, VFX, Meshes, Textures, UI, 2D, 3D, a bit of everything! Thousands and thousands of objects. And it's all at an insane discount: worth €2,332, and you can get it for just €15!

Get it HERE!

The Publisher of the Week this time is Hovl Studio, a publisher with some of my favorite VFX packs!

Get the FREE Map Track Markers VFX which is a nice collection of effects for map markers, perfect for any RTS or Strategy game.

Get it HERE and use coupon HOVL2026 at checkout to get it for FREE!

The Unity Asset Store is starting their Spring Sale tomorrow!

As always you will be able to get most of the top assets at 50% OFF, and maybe even some Flash Deals! My own Code Monkey Toolkit will be on sale! And I’m currently finishing up an update for it adding a bunch more tools. If you already own it then you get the new tools in a FREE update.


Tech

This AI is too dangerous to release!

Anthropic just announced Claude Mythos Preview, a new model it says is especially strong at cybersecurity tasks. So strong that they say it is actually dangerous! They think that if they released it publicly, bad actors would use it to find vulnerabilities in all kinds of software and cause tons of damage. Therefore they are not releasing it broadly. Instead, it is being offered as an invitation-only research preview where selected partners use it for defensive security work. They say the model has already helped find thousands of serious vulnerabilities.

Is this reality or just hype? It's hard to know. Back in 2019, OpenAI said GPT-2 was too dangerous to release, but it was eventually released. Then the same thing happened when they first worked on Sora: it was too good at creating deepfakes and misinformation, which meant they didn't want to release it. But eventually they did (and eventually killed it).

So is this new model really that good? Is it genuinely dangerous? Or is it just boosting the hype machine even further? These AI companies are looking to go public this year, so it pays them to exaggerate just how good their tools are.

But if it is genuinely true, then this could be quite interesting; it could genuinely be dangerous if AIs like these can suddenly find zero-day exploits in all manner of software. And at the same time, when used for good, they could also be very powerful.

I think it definitely pays to keep paying attention to the state of AI and see if these tools can help you in your workflow. If they do continue genuinely improving, then I think you'd be shooting yourself in the foot by not using them. For example, just last week I created a very useful AI tool to help me edit videos. I paste my raw video transcript into ChatGPT and ask it to help me find mistakes (where I speak the same line twice), then I made a tool to help me add markers in Premiere, and that helps me speed up the editing process quite a bit. I would be less productive without this tool. I recommend you analyze your own workflow to see where AI might help you.


Find out why 200K+ engineers read The Code twice a week

Staying behind on tech trends can be a career killer.

But let’s face it, no one has hours to spare every week trying to stay updated.

That’s why over 200,000 engineers at companies like Google, Meta, and Apple read The Code twice a week.

Here’s why it works:

  • No fluff, just signal – Learn the most important tech news delivered in just two short emails.

  • Supercharge your skills – Get access to top research papers and resources that give you an edge in the industry.

  • See the future first – Discover what’s next before it hits the mainstream, so you can lead, not follow.

Join 200,000+ engineers who read The Code to stay ahead of the curve.


Gaming

Awesome Indie Games Event!

The Triple-I initiative just had their 2026 event! This is an awesome yearly event, it might be my favorite showcase all year since I'm always so interested in every game shown. And this year was no different!

The whole idea behind this event is 40 announcements in 45 minutes: no hosts, no ads, just games! The full video is here (it's longer because after the event they showcased more of some games).

Honestly I think about 70% of the games shown looked super awesome to me, I wish I had time to play them all!

Graveyard Keeper 2! The original is one of my favorite games of all time, it's one of those games where I couldn't stop playing and in just a few days I spent over 40 hours on it, super awesome. If you haven't played it and you like Stardew Valley but wanted it to be a bit more like a dark comedy then I highly recommend. The sequel seems to be more of the same which to me is awesome!

Thick as Thieves is another one looking great, this is the game by Warren Spector which has been in development for quite some time. It's an Immersive Stealth Action game where you team up or go alone into a mansion to steal all kinds of valuables.

Romestead is another one for Stardew-like fans, but this time set in Ancient Rome! Awesome theme for this type of game. Or check out Crop which is a spooky and creepy Stardew-like.

Solarpunk is an awesome colony building game in the sky with lots of green tech.

Dead as Disco is all about rhythm fighting action to the beat with an impressive visual style. Valor Mortis is another one with insanely impressive visuals where you fight with a sword and pistol in Victorian? French? times.

Warhammer Survivors puts Space Marines in pixel art against endless hordes. Windrose is for pirate fans, and Shift at Midnight is a Papers, Please-inspired horror supermarket sim.

Also lots of updates for games that are already out. Oxygen Not Included, Brotato, Clover Pit, Rift of the Necrodancer, and more.

Finally the show ended with the announcement of Don't Starve Elsewhere which is a full on sequel! I played the original years ago when it came out and loved it! Haven't touched it since then even though they have massively expanded it. Maybe this sequel is just what I need for a fresh start.

Just for fun I also read my own Game Dev Report from last year, where I covered the 2025 Triple-I Initiative. Most of the games I mentioned are actually not out yet. Star Birds and Shapez 2 are out and are huge hits! And Timberborn (which showed an update last year) just hit 1.0!

I really love this event! This one and the PC Gaming Show are probably my two favorite events of the year because they mostly showcase the games that I love playing. I love strategy, management, crafting, building! As opposed to things like The Game Awards which (while still being awesome!) mostly focus on Shooters and Action Adventure.



Fun

Humans return from the moon!

The NASA Artemis 2 mission was a huge success! Humans left Earth, went to the Moon, flew around it and returned home safely.

It's been over 50 years since humans last went to the Moon on Apollo 17, and almost 60 since Apollo 11, when Neil Armstrong and Buzz Aldrin famously stepped on the Moon (with Michael Collins orbiting above). Buzz Aldrin is still alive to witness this mission!

Thankfully nowadays in 2026 we have much better tech than 50 years ago, so this mission comes full of gorgeous images and videos that you can see. Plus a fun bonus, the web app they made to showcase the capsule as it was travelling was made with Unity!

This is just the start of the Artemis program!

  • Artemis 1 (2022) was uncrewed.

  • Artemis 2 (2026) took humans around the Moon.

  • Artemis 3 (2027) will test docking and commercial landers from SpaceX and Blue Origin.

  • Artemis 4 (2028) will land on the Moon!

  • Artemis 5 (late 2028) and beyond will begin the process of establishing a human Moon colony!

I love space exploration, and I hope I get to go to the Moon one day! Considering I'm 37, that means I have about 30 years for "Moon tourism" to become a thing. Will it happen in time? Or just after my lifetime? This is one good step towards that!




Get Rewards by Sending the Game Dev Report to a friend!

(please don’t try to cheat the system with temp emails, it won’t work, just makes it annoying for me to validate)

Thanks for reading!

Code Monkey




DNA-Level Encryption Developed by Researchers to Protect the Secrets of Bioengineered Cells

The biotech industry's engineered cells could become an $8 trillion market by 2035, notes Phys.org. But how do you keep them from being stolen? Their article notes "an uptick in the theft and smuggling of high-value biological materials, including specially engineered cells."

In Science Advances, a team of U.S. researchers presents a new approach to genetically securing precious biological material. They created a genetic combination lock in which the locking, or encryption, process scrambled the DNA of a cell so that its important instructions were non-functional and couldn't be easily read or used. The unlocking, or decryption, process involves adding a series of chemicals in a precise order over time — like entering a password — to activate recombinases, which then unscramble the DNA to its original, functional form.

They created a biological keypad with nine distinct chemicals, each acting as a one-digit input. By using the same chemicals in pairs to form two-digit inputs, where two chemicals must be present simultaneously to activate a sensor, they expanded the keypad to 45 possible chemical inputs without introducing any new chemicals (the 9 single-chemical inputs plus the 36 unordered pairs of distinct chemicals). They also added safety penalties — if someone tampers with the system, toxins are released — making it extremely unlikely for an unauthorized person to access the cells.

"The researchers conducted an ethical hacking exercise on the test lock and found that random guessing yielded a 0.2% success rate, remarkably close to the theoretical target of 0.1%."

Read more of this story at Slashdot.
