Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Random.Code() - Excluding Properties From Records in C#, Part 2

1 Share
From: Jason Bock
Duration: 1:01:49
Views: 10

I'm going to keep working on my source generator for property exclusion in records. Maybe I can get far enough to write tests....

https://github.com/JasonBock/Transpire/issues/44

#dotnet #csharp

Read the whole story
alvinashcraft
31 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Your Website Is Running Code You’ve Never Seen - Scott Helme - NDC Security 2026

1 Share
From: NDC
Duration: 57:07
Views: 90

This talk was recorded at NDC Security in Oslo, Norway. #ndcsecurity #ndcconferences #security #developer #softwaredeveloper

Attend the next NDC conference near you:
https://ndcconferences.com
https://ndc-security.com/

Subscribe to our YouTube channel and learn every day: @NDC

Follow our Social Media!

https://www.facebook.com/ndcconferences
https://twitter.com/NDC_Conferences
https://www.instagram.com/ndc_conferences

#applicationsecurity #javascript

If your website includes third-party JavaScript, you are running code you probably haven’t reviewed, can’t inspect in production, and don’t control when it changes. That code runs with full access to the DOM, user data, authentication state, business logic, and more.

This session explores what that risk really means and what can go wrong. You can’t secure what you can’t see — but the browser can.


The Foundation of AI: Why Your Data Platform Matters Most | Building the AI Era Ep. 2

1 Share
From: MongoDB
Duration: 5:39
Views: 10

Learn more about building AI applications with MongoDB: https://mdb.link/66dkG0w_luQ-AI
Subscribe to the MongoDB for Developers YouTube Channel: https://www.youtube.com/@MongoDBDevelopers?sub_confirmation=1
Subscribe to MongoDB YouTube→ https://mdb.link/subscribe

Speed and control don’t have to be a tradeoff.

In Episode 2 of Building the AI Era, we show how AI‑powered data and a unified operational data platform help teams move faster and maintain the governance, reliability, and precision enterprises require. You’ll see why traditional approaches struggle to balance speed and safety, and how modern architectures close that gap.

In this video, you will learn:
- Why data remains foundational to AI inferencing and application success
- How to hit the high bar for user experience set by fast-growing consumer products
- The essential characteristics of a database for AI: speed, flexibility, and ease of deployment
- How the MongoDB Application Modernization Platform (AMP) removes friction from the modernization journey

00:00:00 Introduction: Building the AI Era
00:00:40 The Foundational Role of Data Platforms in the AI Stack
00:01:03 Navigating Emerging Tech and the User Experience Bar
00:01:40 Technical Debt: The Existential Threat to Modernization
00:02:17 Redefining AI ROI and Measuring Business Value
00:03:26 Key Database Features for AI Success
00:04:47 Modernizing Legacy Stacks with MongoDB AMP

Visit Mongodb.com → https://mdb.link/MongoDB
Read the MongoDB Blog → https://mdb.link/Blog
Read the Developer Blog → https://mdb.link/developerblog
MongoDB for Developers YouTube Channel → https://www.youtube.com/@MongoDBDevelopers


New In-App Purchase and subscription data now available in Analytics

1 Share

Analytics in App Store Connect receives its biggest update since its launch, including a refreshed user experience that makes it easier to measure the performance of your apps and games. Updates include:

  • More than 100 new metrics. Now you can access monetization and subscription data in Analytics to better understand the performance of your In-App Purchases and offers.
  • New cohort capabilities. Analyze user behavior based on common attributes — such as download date, download source, offer start date, and more — to measure how a particular group of users performs over time. For example, if you’ve expanded your app to a new region, you can monitor how long it takes users in that region to make a purchase compared to other more established regions. Cohort data is aggregated to ensure user privacy.
  • New peer group benchmarks. Discover how you stack up to peers with two new monetization benchmarks: download-to-paid conversion and proceeds per download. Benchmarks incorporate differential privacy techniques to protect individual developer performance while also providing meaningful and actionable insights.
  • Two new subscription reports. Export these via the Analytics Reports API to perform offline analysis and integrate Analytics into your own data systems.
  • Additional filters. Apply up to seven filters to your selected metrics at once, allowing you to drill down further and uncover additional insights.
  • App Store Analytics Guide. This new guide in App Store Connect Help enables you to develop a data-driven strategy and understand App Store tools and features you can use to grow your business.

Learn about measuring performance with Analytics

Read the new Analytics guide


Everything you should know about the SQL Server Resource database

1 Share

Every SQL Server instance contains a database that most people never query, never back up, and never even see in Object Explorer. Yet, without it, SQL Server would not start. Enter the SQL Server Resource database.

This article explains what the SQL Server Resource database is, why it exists, and how it affects patching, upgrades, and troubleshooting – without the mythology that often surrounds it.

What is the SQL Server Resource database?

The SQL Server Resource database is a hidden, read-only system database that contains:

  • System object definitions
  • System stored procedures
  • System views
  • Internal metadata required by the SQL Server engine

Logically, these objects appear to live in the master database. Physically, they do not. Instead, they live in two files:

  • mssqlsystemresource.mdf
  • mssqlsystemresource.ldf

These files are installed with the instance binaries (in the Binn folder from SQL Server 2008 onwards), and the Resource database is not listed as a database in normal system views.

Why does the SQL Server Resource database even exist?

To understand why the SQL Server Resource database exists, it helps to understand what came before it.

Before SQL Server 2005

In SQL Server 2000 and earlier:

  • System objects physically lived in master
  • Patching replaced or modified system objects directly
  • Upgrades were intrusive and risky
  • Rollbacks were difficult or impossible

The master database was both:

  • A configuration database
  • A code container

That coupling caused real problems.

It was common – and still is – for users to create objects in the master database, because it’s the only place where code can be created and used by all databases. (I wish it was possible to create true libraries in SQL Server!)

The problem, then, was that the upgrade/patching code that needed to modify the master database couldn’t be sure of the state of that database – making it far more likely for updates to fail.

The Resource database was introduced in SQL Server 2005 to solve that very specific problem: to decouple system code from system configuration.


In practical terms:

  • Code should be patchable
  • Configuration should be preserved

The Resource database contains code, and the master contains instance state. This allows for a much cleaner separation.

What lives where in the SQL Server Resource database

The SQL Server Resource database contains:

  • Definitions for system catalog views such as:
    • sys.objects
    • sys.tables
    • sys.indexes
  • System stored procedures
  • Internal functions
  • Metadata required for query compilation and execution

When you run:

SELECT * FROM sys.objects;

SQL Server is reading metadata from the Resource database. The master database, by contrast, holds instance-level state rather than system object definitions:

  • Logins
  • Endpoints
  • Configuration settings
  • Linked servers
  • Database metadata
  • Service-level state

The master database references system objects; it does not own their definitions.
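You can see this split from any database: sys.all_objects surfaces system objects projected from the Resource database, while sys.objects lists only objects the database itself owns. A quick check from master:

USE master;

-- No row: sys.objects lists only locally-owned, user-created objects
SELECT COUNT(*) FROM sys.objects
    WHERE name = N'objects' AND schema_id = SCHEMA_ID(N'sys');

-- One row: sys.all_objects adds system objects from the Resource database
SELECT COUNT(*) FROM sys.all_objects
    WHERE name = N'objects' AND schema_id = SCHEMA_ID(N'sys');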

How does SQL Server use the Resource database?

At startup:

  • SQL Server starts with minimal functionality
  • master is brought online
  • The Resource database is attached internally
  • System objects become visible through metadata views

If the Resource database is missing or corrupted:

  • SQL Server will not start
  • You cannot rebuild it independently
  • Recovery requires reinstallation or file restoration
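When the instance is down, the place to look is the ERRORLOG file in the instance’s Log folder, which records the failure before startup aborts. Once the instance is back up (for example, after restoring the files), you can search the log from T-SQL with the undocumented xp_readerrorlog procedure (parameters: log number, log type 1 = SQL Server, search string):

-- Search the current error log for Resource database messages
EXEC xp_readerrorlog 0, 1, N'mssqlsystemresource';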

Why is the SQL Server Resource database read-only?

The Resource database is intentionally read-only.

This prevents:

  • Accidental modification
  • Drift between instances
  • Corruption caused by user activity

It also ensures:

  • Consistent system object definitions
  • Predictable patch behavior
  • Repeatable upgrades

Allowing writes here would reintroduce the same fragility SQL Server had before 2005.

Patching and the SQL Server Resource database

When you apply a cumulative update or service pack:

  • SQL Server replaces the Resource database files
  • System object definitions are updated atomically
  • master and user databases are untouched

This design is why:

  • Patching does not modify user metadata
  • Rollbacks are possible
  • Version consistency is easier to maintain


Why you don’t back up the SQL Server Resource database

You will often hear:

You don’t need to back up the Resource database.

That statement is correct – but incomplete. Backups of it aren’t useful because:

  • The Resource database is version-specific
  • It is replaced during patching
  • Restoring it across versions is unsupported

A backup does not provide a meaningful recovery path. Instead, you protect the Resource database indirectly by protecting:

  • SQL Server installation media
  • Cumulative update installers
  • System database backups (master, msdb, distribution)
  • Encryption keys (SMK, certificates, DMKs)

If the Resource database is lost, recovery is reinstallation – not restore.
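The system database backups in that list are ordinary backups; a minimal sketch (the disk paths are hypothetical – substitute your own backup location):

-- Routine protection for the configuration side of the split
BACKUP DATABASE master TO DISK = N'C:\Backups\master.bak' WITH INIT, CHECKSUM;
BACKUP DATABASE msdb   TO DISK = N'C:\Backups\msdb.bak'   WITH INIT, CHECKSUM;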

Common myths about the SQL Server Resource database

System objects live in master

Incorrect. They appear to live in master, but do not.

You can modify system procedures

You can override behavior in limited ways, but you cannot safely modify the underlying definitions.

Corruption in master affects system code

Usually, it does not. System code lives elsewhere.

The Resource database is optional

Incorrect. SQL Server cannot run without the Resource database.

How to view the SQL Server Resource database (carefully)

The Resource database does not appear in sys.databases or sys.master_files, so you cannot list its files through the usual catalog views; the files live on disk with the instance binaries. You can, however, confirm which Resource database version an instance is using:

SELECT SERVERPROPERTY('ResourceVersion') AS ResourceVersion,
    SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdate;

You can also attach a copy for inspection:

CREATE DATABASE resource_copy
    ON (FILENAME = '...\mssqlsystemresource.mdf')
    FOR ATTACH;

This should be done:

  • Read-only
  • For investigation only
  • Never for modification
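To enforce that, you can flip the attached copy to read-only immediately and drop it when you are done (the name resource_copy follows the snippet above; always attach a copy of the files, never the live ones):

ALTER DATABASE resource_copy SET READ_ONLY;

-- When finished inspecting:
-- DROP DATABASE resource_copy;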

Why does understanding the SQL Server Resource database matter?

Understanding the SQL Server Resource database helps you:

  • Diagnose startup failures
  • Understand patch behavior
  • Explain why system objects change after updates
  • Avoid dangerous assumptions about master
  • Understand SQL Server architecture accurately

The SQL Server Resource database: in summary

The SQL Server Resource database is invisible by design, but fundamental by necessity. It exists to make SQL Server:

  • Safer to patch
  • Easier to upgrade
  • More resilient to failure
  • Cleaner in architecture

FAQs: The SQL Server Resource database

1. What is the SQL Server Resource database?

A hidden, read-only system database that stores SQL Server system object definitions and internal metadata.

2. Where is the SQL Server Resource database stored?

In mssqlsystemresource.mdf and mssqlsystemresource.ldf, alongside system databases.

3. Why does the SQL Server Resource database exist?

To separate system code from configuration, making patching and upgrades safer.

4. Can you back up or restore the SQL Server Resource database?

No. It’s version-specific and replaced during patching.

5. What happens if the SQL Server Resource database is missing or corrupted?

SQL Server won’t start; recovery requires reinstalling or restoring the files.

6. Does patching modify master in SQL Server?

No. Updates replace the Resource database files, not master.

7. Are system objects in master in SQL Server?

No. They appear in master but physically live in the Resource database.

The post Everything you should know about the SQL Server Resource database appeared first on Simple Talk.


Free SQL Server Performance Monitoring: Which Version Is Right For You?

1 Share

Summary

In this video, I delve into my new, completely free, open-source SQL Server monitoring tool, discussing which edition—full or light—you might find most useful based on your specific needs. The full edition creates a database and agent jobs to continuously collect data, offering complete control over the collection schedule and allowing for easy customization of stored procedures. It’s ideal for scenarios where you need constant data collection and want the flexibility to manage the monitoring tool as needed. The light edition, on the other hand, uses an embedded DuckDB instance inside itself and collects data only while it’s open, making it perfect for quick triage or situations with limited server access, though it will not collect data while you’re away from your computer. Both editions offer alerts and notifications for critical issues like blocking and high CPU usage, providing a seamless experience regardless of which edition you choose.

Chapters

  • *00:00:00* – Introduction
  • *00:06:28* – Features and Capabilities
  • *00:10:37* – Setup and Installation Process
  • *00:13:59* – Platform Support and Configuration <– Thanks, AI

Full Transcript

Erik Darling here with Darling Data, and we’re going to continue talking a little bit more about my new, completely free, open-source SQL Server monitoring tool, and we’re going to spend some time in this video talking about which edition of the monitoring tool you might want to use, because there are two of them. They have two somewhat different design philosophies and two somewhat different, I think, usage patterns, it’s important to figure out which one you might find the most useful. So, there’s a full edition, which I believe I talked about in the last video a little bit, it creates a database on your server. It creates some agent jobs on your server, and it starts pulling data into that database on a schedule. You have complete control of the schedule. You can change how frequently it runs, you can change whether things run or not, and even if there’s, you know, like, you want to change one of the stored procedures, you can do that, right? You can, like, the code is right there, it’s not encrypted or anything, nothing’s hidden from you. Again, completely naked and vulnerable to the world. There are two ways to install it. There is a command line installation process, and there is a GUI front-end installation process, whichever one you find easier, or if you, I mean, if you need to do something programmatic to a bunch of servers, then the command line is probably for you.

It spins up, like I said, some agent jobs, three of them. One of them is to do data collection, one of them manages data retention, and the other one, like, goes and looks for if the agent job, either of our agent jobs is hung, and knocks them out so that nothing, nothing weird happens, right? Because it’s important to monitor the monitoring tool. But then, what you do is you open up a dashboard, and you point it to the server, and the dashboard finds the performance monitor database, and it goes and starts pulling data from it. So, it’s pretty easy there, and there’s pretty standard monitoring stuff. There’s a bunch of tabs, and, you know, you look through them and make a bunch of wise decisions.

The light edition is a little bit different. The light edition does not create a database on your server. The light edition uses an embedded sort of DuckDB version inside of itself and starts pulling data in. This is where things start to differentiate a little bit. The full edition is going to be constantly, via the agent jobs, pulling in data. The light edition only pulls in data when it’s open. So, if you close down, if you close light edition, if you, you know, go on vacation, shut your computer down, just kidding, who, no one does either of those things.

It will not be collecting data while you’re away. Probably, if you, you know, don’t want anything weird happening, you might want to stick, if you, and you, like, if you’re, like, depending on, like, what sort of access you have, you might just want to stick light edition on a jump box or something and have it constantly pulling stuff in, and that way anyone can go in and look at the dashboard when they want. I mean, you could do that with full edition, too, but, you know, some other stuff, you know, like I talked about in yesterday’s video, you know, when I was first designing the full edition of this, I did sort of picture scenarios where someone would want to, you know, maybe have the monitoring database, and they would, they might want to, like, just, you know, back it up, zip it up, send it to a, send it to someone for analysis without having to give them full access to the server, the whole other thing, right?

It’s a fairly easy thing to do, and, you know, if you’re, if you’re collecting data for, like, a week or a few days or something, there’s not going to be a huge database backup to, you know, manage and install. It’s not like a multi-terabyte database or anything. We do store a reasonable amount of data in there. By default, it’s up to 30 days, but you can manage that as well. If you want more data, less data, you can change the retention policies.

Right. So, which edition kind of fits your needs is going to depend on how you use things and what your level of access is and probably what your job title is. You know, like if, like, you know, like the full edition, if you want, like, you know, like, guaranteed that it’s always going to be up there collecting data, then, you know, you probably want the agent jobs running collecting stuff. Light edition, if, you know, anything happens to it, if, like, you know, you lose network connectivity, you shut your computer down, Windows update kicks in and ruins your life, you’re going to have gaps in collection.

If you want to do a really quick triage, the light edition is probably better for you because it just, like, you can just open it, point it at a server, it’ll start immediately pulling in data. It does make an attempt to pull in, like, a bit of historical stuff to, like, populate some of the dashboards, but, like, not everything in SQL Server plays nicely with that.

Wait stats and other perfmon counters, other things like that, they’re, like, aggregate since the server started up. So, you know, if you need to support Azure SQL database, then the light edition is the only one that does that because, you know, Azure SQL DB only supports one database, so we couldn’t create a monitoring database and start pulling in data.

So, you know, depending on your level of access to the server and what you’re allowed to create and do there, the full edition might just not be possible for you, like, depending on, like, if you’re, like, if you’re not allowed to, you know, like, create agent jobs without, like, you know, compliance and all the other stuff getting involved or whatever other change management stuff going in, light edition’s a little bit easier to sneak in and say, oh, I don’t know, I’m just checking out some stuff here, let’s run sp_WhoIsActive.

So, you know, you know, like, for the, like, consultants, though, like, people out there who might be interested in using this, you know, like, it is, I think the full edition is useful for a lot of reasons, like, you know, again, it’s always collecting data. Someone can take a backup and send you the backup.

You can run analysis on whatever, like, whatever data is in there without someone having to give you full access to the server, schedule a call, whatever it is. Light edition, you know, if you, you know, like, like, are getting started with a client or, like, you know, like, I don’t know, like, let’s say you’re me and you sign a client and then you’re like, well, you know, we can have our first call in, like, you know, three, four days or something, why don’t you get this running, collecting some data so we have stuff, like, good stuff to look at when we kick things off.

Like, I don’t have to sit there and, like, you know, run a bunch of scripts and collect stuff and, you know, dig through things and be like, oh, well, these wait stats, oh, the server’s been up for 2,000 hours. Well, I don’t know how, I don’t know which of these wait stats is relevant anymore. That really bad one could have happened 1,900 hours ago and never again, right?
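The 2,000-hours problem described here is why monitoring tools sample and diff wait stats rather than reading the cumulative totals. A rough sketch of that interval technique (this is an illustration, not code from the tool itself):

-- Snapshot sys.dm_os_wait_stats, wait, snapshot again, and diff:
-- only waits accumulated during the interval remain
SELECT wait_type, wait_time_ms
    INTO #w1 FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:00:30';

SELECT w2.wait_type,
       w2.wait_time_ms - w1.wait_time_ms AS interval_wait_ms
    FROM sys.dm_os_wait_stats AS w2
    JOIN #w1 AS w1 ON w1.wait_type = w2.wait_type
    WHERE w2.wait_time_ms > w1.wait_time_ms
    ORDER BY interval_wait_ms DESC;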

Like, it’s just a nice way to sort of get some preamble data before you start talking to someone. We do also provide some alerts and notifications. So for stuff like blocking, deadlocks, high CPU, connection changes, stuff like that, like connection changes, meaning, like, server being unreachable, important stuff, kind of.

Those do generate notifications. And the notification thresholds are all configurable as well. It’s not like, you know, like everything in here, I try to make it so that you can customize it to make sense to you as much as I can.

The only thing that would not be easy there would be managing two separate presentation modes. Right now, the dashboard is all dark mode. Light mode hurt my eyes while I was making it.

It’s a lot of late nights on this. So, like, it will fire off notifications. And there are a few different ways to get the notifications. There’s this, like, you get system tray pop-ups when stuff happens.

And you can set up SMTP to send you emails. I tested that with a weird, like, dev SMTP thing. And it worked wonderfully.

If you, you know, again, if you hit any problems, just, you know, open an issue on GitHub. I’ll get to it as soon as I can. And then the other neat thing here is that you can just, you can either acknowledge or silence notifications. So, like, if, you know, you, if you’re, like, you know, looking at the server and you’re, like, well, there’s 70 deadlocks.

I’m, okay, I saw those. You can just click acknowledge on them. If there’s a specific notification you don’t care about, you can just silence it. You can silence notifications for an entire server.

Like, whatever works for you. And then also both of them include MCP server analysis. Now, something that seems to be confusing to people is in my setup stuff, like, I use Claude. And so the setup instructions that I have to add an MCP server are geared towards that.

You can use it with any LLM. You can use it with whatever you want that supports MCP servers. So it’s not limited to Claude.

You can use whatever you want with it. It just really is, like, I use Claude specifically. So that’s what I geared stuff towards, right? There’s nothing specific about an MCP server or, like, the descriptions or the tools in here that are Claude only. You can use whatever you want with it.

And the cool thing here is that, like, as far as security goes, it’s not like, you know, it just binds to your local host. So, you know, you’re not sending your data all over the place. You’re asking specific questions about just the data that has been collected by the monitoring tool.

Like, it can see database and table names and stuff because that’ll be in various collection things and, like, see server names and stuff because that’s what you’re connecting to. But, like, it’s not going out to your user databases. It does not have permissions or privileges beyond the collected data set.

So nothing outside of that gets touched, looked at. It’s just collected performance data. And that’s nice, too, because it really helps to narrow the focus of what goes on in there. It all works off tools that are specific to the tables and schema that I have and some views that I have set up in there.

So it’s not like, you know, it has to go and crunch numbers itself. Like, everything is pretty well laid out. So you can ask, you know, you can ask questions of your collected performance data and make life very easy for doing analysis because you’re not, like, again, if you’re not the type of person who does a lot of performance monitoring regularly or even if you are and you just want to get quick answers without looking at, you know, a bunch of dashboards and everything else.

Like, if you want to, like, sort of get a, like, like, you want to get a story to tell before you start looking through things, you know, it’s very easy to do that, right? So that’s probably about good here. If you want to check, oh, I guess there’s a little bit of platform support stuff to talk about.

All this stuff is somewhat configurable. You know, it’s a little annoying that on RDS, if you want to use the block process report, that’s outside of my control. You need to use a parameter group for that.

Azure SQL DB is fixed at 20 seconds. But for on-prem in Azure managed instance, I will auto-configure the block process report stuff for you, run the SP configure stuff, create the extended events to read the block process and deadlock reports. So, like, I do as much setup as I can, but that’s one thing that kind of sticks out.
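The sp_configure step described here looks roughly like this on on-prem or managed instance (the 20-second threshold mirrors the fixed Azure SQL DB value; choose what suits your environment):

-- Emit a blocked process report for blocking longer than 20 seconds
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 20;
RECONFIGURE;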

So, like, the default trace stuff, like, if you’re, if you have it turned on, it’ll read from it. If you don’t, it’ll just say it’s not turned on. There’s also a specific trace that I use.

Like, one thing that I really liked about SQL Sentry back when it was good was that it used trace to find stuff. So, you can get, like, some, like, good performance metrics pulled out of that. And so, you know, I use that.

So, I have something similar set up in mind to give you a good experience in that realm. But, you know, pretty, pretty simple stuff there. Really, the only, really the big thing is that the old, like, Azure SQL database that’s only supported by the Lite version, again, because of the database creation thing.

So, if you want to check that out, go to code.erikdarling.com, and you can download it for free. If you want to contribute to the project, it is open source. If you want to support it as an open source project, that is available to you.

And if your company requires some sort of support contract or you need to have some other vendor validation stuff before you run software in your environment, the purchase terms for that are at training.erikdarling.com under the monitoring header. So, just head over there if you require additional stuff before you get going with an open source project. Anyway, thank you for watching.

Hope you learned something. Hope you enjoyed yourselves. I hope you try out this free open source SQL Server Monitoring Tool, and I will see you in tomorrow’s video where we will dig a little bit deeper into some of the inner workings of it. All right.

Thank you for watching.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

The post Free SQL Server Performance Monitoring: Which Version Is Right For You? appeared first on Darling Data.
