Microsoft has been talking about plans for an Xbox mobile gaming store for a couple of years, and the company now plans to launch it in July. Speaking at the Bloomberg Technology Summit earlier today, Xbox president Sarah Bond revealed the launch date and how Microsoft is going to avoid Apple’s strict App Store rules.
“We’re going to start by bringing our own first-party portfolio to [the Xbox mobile store], so you’re going to see games like Candy Crush show up in that experience, games like Minecraft,” says Bond. “We’re going to start on the web, and we’re doing that because that really allows us to have it be an experience that’s accessible across all devices, all countries, no matter what and independent of the policies of closed...
In this MongoDB video, we'll explore the intricacies of database performance tuning, focusing on MongoDB's pivotal role in enhancing application and database efficiency. The session, led by product managers Xiaochen and Frank, delves into performance tuning patterns and best practices, offering insights into MongoDB's query and server performance capabilities. The interactive session promises to equip viewers with practical knowledge to apply to their performance management tasks, ensuring an informative and enriching learning experience.
⏱️ Timestamps ⏱️ Introduction and Performance Tuning Overview [00:00:00 - 00:06:56] The session begins with an introduction by the speakers, Xiaochen and Frank, who discuss their roles and the purpose of the talk. They engage with the audience to understand their professional backgrounds and set the stage for discussing performance tuning patterns and best practices in MongoDB.
Understanding Audience and Performance Tuning Scope [00:06:56 - 00:13:52] The speakers continue to interact with the audience, gauging their familiarity with various roles related to database management. They emphasize the importance of understanding the context, problems, forces, and solutions in performance tuning.
Performance Tuning Patterns and Categories [00:13:52 - 00:20:48] Xiaochen and Frank introduce the 12 performance tuning patterns organized into four categories: big picture, query and indexes, scaling, and putting it all together. They explain the importance of starting with smart performance requirements and understanding the system as a whole.
Deep Dive into Selected Performance Tuning Patterns [00:20:48 - 00:27:44] The speakers delve into specific performance tuning patterns, discussing the benefits of indexing and the trade-offs involved. They also touch on the use of compound indexes and the new feature of persistent query settings in MongoDB.
Audience Participation and Pattern Selection [00:27:44 - 00:34:36] The audience is invited to vote on which performance tuning patterns they would like to learn more about. The winning pattern is "Design Schemas," which the speakers agree to discuss further.
Design Schemas and Conclusion [00:34:36 - 00:41:47] Xiaochen provides insights into designing schemas in MongoDB, emphasizing the need to tailor the schema to the application's queries. He discusses the trade-offs between embedding and referencing documents and concludes the session by directing the audience to further resources and offering to answer questions after the talk.
Shoelace creator Cory LaViska joins Amal & Jess to tell them all about the forward-thinking library of web components that just joined the Font Awesome family to create Web Awesome.
Changelog++ members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
Speakeasy – Production-ready, Enterprise-resilient, best-in-class SDKs crafted in minutes. Speakeasy takes care of the entire SDK workflow to save you significant time, delivering SDKs to your customers in minutes with just a few clicks! Create your first SDK for free!
CrabNebula Cloud – Join Tauri’s “DevTools Premium” waitlist — If you’re building with Tauri, this might be the best news you hear all week! DevTools Premium is right around the corner. It’s not just about finding and fixing issues; it’s about understanding, optimizing, and perfecting the application development process. Join the waitlist today!
In this post, let’s look at how Wolverine allows you to either control the parallelism of your background processing, or restrict the processing to be strictly sequential.
To review, in previous posts we were “publishing” a SignupRequest message from a Minimal API endpoint to Wolverine like so:
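As a sketch of what that endpoint looked like (the route and handler shape here are assumptions reconstructed from the description; the key point is handing the message to Wolverine’s IMessageBus):

```csharp
// Hypothetical Minimal API endpoint that accepts a signup request
// and publishes it to Wolverine for background processing. The
// "/signup" route is illustrative, not from the original post.
app.MapPost("/signup", (SignupRequest request, IMessageBus bus)
    => bus.PublishAsync(request));
```

The endpoint itself does no real work; it just publishes the message and returns, which is what lets the processing happen off the HTTP request path.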
In this particular case, our application has a message handler for SignupRequest, so Wolverine has a sensible default behavior of publishing the message to a local, in-memory queue where each message will be processed asynchronously in the background, on a separate thread from the original HTTP request.

So far, so good? By default, each message type gets its own local, in-memory queue, with a default “maximum degree of parallelism” equal to the number of detected processors (Environment.ProcessorCount). In addition, the local queues do not enforce strict ordering by default.
But now, what if you do need strict sequential ordering? Or if you want to restrict or expand the number of parallel messages that can be processed? Or, to get really wild, constrain some messages to running sequentially while other messages run in parallel?
First, let’s see how we could alter the parallelism of our SignUpRequest to an absurd degree and say that up to 20 messages could theoretically be processed at one time by the system. We’ll do that by breaking into the UseWolverine() configuration and adding this:
builder.Host.UseWolverine(opts =>
{
    // The other stuff...

    // Make the SignUpRequest messages be published with even
    // more parallelization!
    opts.LocalQueueFor<SignUpRequest>()

        // A maximum of 20 at a time because why not!
        .MaximumParallelMessages(20);
});
Easy enough, but now let’s say that we want all logical event messages in our system to be handled in the sequential order that our process publishes these messages. An easy way to do that with Wolverine is to have each event message type implement Wolverine’s IEvent marker interface like so:
public record Event1 : IEvent;
public record Event2 : IEvent;
public record Event3 : IEvent;
To be honest, the IEvent and corresponding IMessage and ICommand interfaces were originally added to Wolverine just to ease transitioning a codebase from NServiceBus to Wolverine, and those types carry little actual meaning for Wolverine. The only thing Wolverine uses them for is “knowing” that a type is an outbound message, so that its diagnostics can automatically preview the message routing for any type implementing one of these interfaces.
Revisiting our UseWolverine() code block again, we’ll add that publishing rule like this:
builder.Host.UseWolverine(opts =>
{
    // Other stuff...

    opts.Publish(x =>
    {
        x.MessagesImplementing<IEvent>();
        x.ToLocalQueue("events")

            // Force every event message to be processed in the
            // strict order they are enqueued, and one at a
            // time
            .Sequential();
    });
});
With the code above, our application would be publishing every single message where the message type implements IEvent to that one local queue named “events” that has been configured to process messages in strict sequential order.
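For context, Wolverine discovers handlers by naming convention. A minimal, hypothetical handler for one of these events might look like the following (the class name and the logging body are illustrative only; the Handle method and message-type parameter follow Wolverine’s convention):

```csharp
public class Event1Handler
{
    // Wolverine finds this method by its name ("Handle") and the
    // message type of its first parameter. Because Event1 implements
    // IEvent, instances are routed to the sequential "events" queue,
    // so this handler runs for one message at a time, in order.
    public void Handle(Event1 message)
    {
        Console.WriteLine("Processed Event1");
    }
}
```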
Summary and What’s Next
Wolverine makes it very easy to do background processing within your application, to control the desired degree of parallelism, and to process a subset of messages in strict sequential order when that’s valuable instead.
To be honest, this series is what I go to when I feel like I need to write more Critter Stack content for the week, so it might be a minute or two before there’s a follow-up. There’ll be at least two more posts: one on scheduling message execution, and an example of using Wolverine’s local processing capabilities to implement the producer/consumer pattern.
Each day, I just read whatever pops into my feeds and newsletters. I’m not looking for a theme, but sometimes one pops out at me. Today? It seemed like a lot of content focused on optimization and doing things the right way. For example, check out items below about improving dev experience, efficient hosting of streaming platforms, doing CI well, controlling ops metrics volume, and scaling Kubernetes.
[paper] Capabilities of Gemini Models in Medicine. There are 30+ pages of description and data in this new paper, and it may inspire you for use cases outside of medicine.
[article] How is Flutter Platform-Agnostic? This framework renders interfaces across desktop, web, and mobile. How does it do that? Good deep dive here.
[blog] Optimizing CI in Google Cloud Build. Darren wrote a fantastic post that’s helpful whether you’re using the Google Cloud services he mentions, or not.
Microsoft has quietly started testing an intriguing change to the Windows 11 Start menu that could introduce a floating panel full of “companion” widgets. Windows watcher Albacore discovered the new Start menu feature in the latest test versions of Windows 11 that Microsoft has released publicly.
While Microsoft has not yet announced this feature, the “Start menu Companions” appear to be a way to allow developers to extend the Windows 11 Start menu with widget-like functionality that lives inside a floating island that can be docked next to the Start menu. It looks like developers will be able to build apps that provide widget-like information through adaptive cards — a platform-agnostic way of displaying UI blocks of information.