Track SQL Server Configuration Changes Using the Error Log
If you work with SQL Server long enough, you or someone you work with will eventually want to know, “Did anyone change that setting?” And even if you cannot prove it one way or the other off the top of your head, you might just be in luck.
The good news is that SQL Server already tracks configuration changes in a few places. You just need to know where to look and what each option can (and cannot) do for you.
The easiest option: standard reports
The simplest place to start is the SSMS GUI. Right-click the server → Reports → Standard Reports, and look for the Configuration Changes History report.
This standard report reads from SQL Server’s default trace, which, as the name suggests, is enabled by default and is normally already running. That means you don’t need to set up or enable anything; the data is just there, ready to view, unless of course the trace was purposely disabled or stopped.
However, the default trace comes with an important limitation: it is small by design. Each trace file is capped at about 20 MB, with up to 5 rollover files, so roughly 100 MB in total. You cannot change that size, so older entries will eventually roll off as new activity fills the trace.
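If you prefer T-SQL over the report, you can also query the default trace directly with sys.fn_trace_gettable. This is just a sketch: filtering on the message text is a simple but crude approach, and the exact events captured can vary by version, so treat it as a starting point:

USE master;
GO

-- Locate the current default trace file, then search it for
-- configuration-change messages (sketch; text filtering is an
-- assumption, not the exact logic the standard report uses)
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT StartTime, TextData, LoginName, HostName, ApplicationName
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE TextData LIKE N'%Configuration option%'
ORDER BY StartTime DESC;

One advantage over the error log: the trace also records who made the change (LoginName, HostName), not just what changed.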
If you want your own custom tracking, Extended Events is usually a better choice than building a new trace. Microsoft has largely moved the ecosystem toward Extended Events, and that’s where you should focus any new work.
When you need better control: auditing and Extended Events
SQL Server Auditing and Extended Events can also capture configuration changes, but there is one big catch: you must set them up before the change happens.
If you only think about tracking after the fact, auditing and Extended Events won’t help you. Once a setting is changed, they cannot retroactively tell you when it happened (unless they were already recording).
That is why many DBAs still start with the default trace or the SQL Server error log. Those options are already there, and they can answer the question right now, even if you did not plan for it.
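For completeness, here is a sketch of what that advance setup could look like with SQL Server Audit. The file path is made up, and SERVER_OPERATION_GROUP is my assumption for the action group that covers sp_configure changes; verify the action groups against the documentation for your version before relying on this:

USE master;
GO

-- Sketch: a server audit writing to files, plus a server audit
-- specification (SERVER_OPERATION_GROUP is assumed to capture
-- sp_configure changes; confirm for your version)
CREATE SERVER AUDIT ConfigChangeAudit
    TO FILE (FILEPATH = N'C:\SQLAudit\');
GO

CREATE SERVER AUDIT SPECIFICATION ConfigChangeAuditSpec
    FOR SERVER AUDIT ConfigChangeAudit
    ADD (SERVER_OPERATION_GROUP)
    WITH (STATE = ON);
GO

ALTER SERVER AUDIT ConfigChangeAudit WITH (STATE = ON);

The key point stands either way: this only helps for changes made after the audit is enabled.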
The SQL Server error log
Another powerful, often overlooked place to check is the SQL Server error log. Every time a configuration option is changed with sp_configure, SQL Server usually writes a message into the error log that includes the phrase “Configuration option”.
Because of that, the error log becomes a built-in change history for key settings, as long as the log files are still present. You can search this log in several ways:
In SSMS, by opening the SQL Server logs.
In a text editor, by opening the .log files directly.
Programmatically, using sys.xp_readerrorlog from T-SQL.
The sys.xp_readerrorlog procedure is especially handy because it lets you search and filter the log without leaving SQL Server. Microsoft documents it here: sp-readerrorlog
Please note: You may notice that Microsoft documents sp_readerrorlog as the supported way to read SQL Server error logs, while this post uses sys.xp_readerrorlog instead. In practice, both procedures read the same underlying log files and return the same core information; the main difference is that sp_readerrorlog is a wrapper that first checks your permissions and then calls xp_readerrorlog under the hood. The supported version also accepts a slightly simpler parameter list, while xp_readerrorlog exposes extra parameters such as a start date, end date, and sort order, which makes it more flexible for ad-hoc troubleshooting and filtered searches. Because of this extra flexibility, many DBAs, including examples in this blog post, continue to rely on xp_readerrorlog in day-to-day work, even though it remains an undocumented extended stored procedure.
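For comparison, here is a minimal call through the supported wrapper, which accepts only the log number, log type, and the two search strings:

USE master;
GO

-- sp_readerrorlog takes up to four parameters:
-- log number (0 = current), log type (1 = SQL Server error log),
-- and two optional search strings
EXEC sp_readerrorlog 0, 1, N'Configuration option';

Anything beyond that, such as start/end time filters or sort order, requires dropping down to xp_readerrorlog as shown in the examples that follow.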
Examples:
Here is a basic example that searches the current SQL Server error log for configuration changes:
USE master;
GO

EXEC sys.xp_readerrorlog
    0,                       -- LogNumber: 0 = current error log
    1,                       -- LogType: 1 = SQL Server error log
    N'Configuration option', -- SearchString1: text to search for
    NULL,                    -- SearchString2: additional text filter (none)
    NULL,                    -- StartTime: no start-time filter
    NULL,                    -- EndTime: no end-time filter
    N'desc';                 -- Sort order: newest entries first
This returns all entries in the current log file that contain Configuration option, ordered newest first.
If you want to narrow it by time, you can pass a start-time filter:
USE master;
GO

EXEC sys.xp_readerrorlog
    0,                       -- LogNumber: 0 = current error log
    1,                       -- LogType: 1 = SQL Server error log
    N'Configuration option', -- SearchString1: text to search for
    NULL,                    -- SearchString2: additional text filter (none)
    '2026-05-10 16:00:00',   -- StartTime: start datetime filter
    NULL,                    -- EndTime: no end-time filter
    N'desc';                 -- Sort order: newest entries first
Now you only see configuration-change lines from that specific point onward.
You can also search for a specific setting by adding a second text filter. For example, this looks for configuration messages that also contain the word Optimize:
USE master;
GO

EXEC sys.xp_readerrorlog
    0,                       -- LogNumber: 0 = current error log
    1,                       -- LogType: 1 = SQL Server error log
    N'Configuration option', -- SearchString1: primary text to search for
    N'Optimize',             -- SearchString2: secondary text to search for
    NULL,                    -- StartTime: no start-time filter
    NULL,                    -- EndTime: no end-time filter
    N'desc';                 -- Sort order: newest entries first
These patterns make it easy to quickly zero in on the exact change you are investigating.
What happens after a restart
Here is an important detail that trips people up: every time SQL Server restarts, it starts a new error log file. The previous file is renamed to errorlog.1, the one before that becomes errorlog.2, and so on.
By default, SQL Server keeps 6 of these archived log files (7 files in total, counting the current one), but you can configure it to keep more (up to 99 archived logs). If needed, you can also manually rotate the log without restarting by running sp_cycle_errorlog. This is useful if you want to “start fresh” while still keeping the older logs around.
Because of this cycling behavior, older configuration changes may move into errorlog.1, errorlog.2, and so on. The good news is that sys.xp_readerrorlog can read those archived logs too. The bad news is that the oldest logs will eventually be removed unless you copy that data somewhere else.
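If you need to search the current log and all archived logs in one pass, a simple loop over the log number works. This sketch assumes the default retention of 6 archived logs; adjust the upper bound to match your server’s configuration:

USE master;
GO

-- Search the current log (0) and archived logs (1 through 6).
-- Assumes the default retention of 6 archived logs; adjust as needed.
DECLARE @log int = 0;
WHILE @log <= 6
BEGIN
    EXEC sys.xp_readerrorlog @log, 1, N'Configuration option';
    SET @log += 1;
END;

Each call returns its own result set, one per log file, with the log number telling you which file the hit came from.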
So while the error log is a great tool for recent history, it is still not a permanent, long-term auditing solution.
Usefulness beyond configuration changes
sys.xp_readerrorlog is not only useful for tracking configuration changes. You can also use it to search for errors, warnings, startup messages, and other events.
For example, say you want to check for severity 16 errors containing a specific error number over the last 24 hours:
USEmaster;
GODECLARE@start_timedatetime=GETDATE()-1;
EXECsys.xp_readerrorlog
0,-- LogNumber: 0 = current error log1,-- LogType: 1 = SQL Server error logN'Severity: 16',-- SearchString1: primary text to search forN'41145',-- SearchString2: secondary text to search for@start_time,-- StartTime: start datetime filter (yesterday)NULL,-- EndTime: no end-time filterN'desc';-- Sort order: newest entries first
This kind of pattern is very handy when troubleshooting a server and you want to quickly focus on a specific kind of error.
The bottom line
Here is how I usually think about this in practice:
Need to do a quick check? Use the Standard Reports in SSMS. They read from the default trace and give you an easy, visual way to see recent changes.
Need more detail, but don’t want to set up a new system? Use sys.xp_readerrorlog against the SQL Server error log. It is fast, already available, and works great for recent configuration changes.
For long-term auditing and compliance: Use Extended Events or SQL Server Auditing. These tools require planning and setup, but they are the right choice if you need a reliable, long-lasting change history.
So if someone tells you that a setting was never touched, check the default trace first. If that has already rolled off, search the error log. Only then, if you need a permanent answer, invest in a more structured auditing solution.
Patch release for Aspire 13.3 with a fix for container tunnel startup when tunnel-dependent containers use WaitFor().
🐛 Fixes
🚇 Fix WaitFor() for tunnel-dependent containers — The container tunnel implementation that shipped in Aspire 13.3 deadlocked at startup when tunnel-using containers waited on other resources, because resource waits blocked ResourceStarting before the tunnel initialization could complete. Container and tunnel startup have been refactored to cooperate correctly, and additional tunnel-dependent containers can now be started at any point during the application lifecycle. Also improves error reporting for container tunnel failures. (#16988, backported via #16993)
Learn how Aspire deployment works: targets add pipeline steps, commands enter the pipeline, and compute environments, parameters, and environments shape each run.
When I need to pick up the client IP Address in ASP.NET Core I always forget where to find the connection information.
It's also useful to remember that if requests are proxied we need to return the forwarded IP address, rather than the proxy's IP Address.
So here is a small, ready-to-use helper extension method for the HttpRequest class that makes this more easily accessible:
/// <summary>
/// Returns the client IP Address for a request.
///
/// Checks proxy forwarding first, then the actual ip
/// and returns null if not available.
/// </summary>
/// <param name="request">HttpRequest instance</param>
/// <returns>
/// IP Address or null if not available. Note local IP
/// tends to get returned as IPv6 `::1` value.
/// </returns>
public static string GetClientIpAddress(this HttpRequest request)
{
    if (request == null) return null;

    // Note: behind chained proxies, X-Forwarded-For can contain a
    // comma-separated list; the client IP is the first entry
    return request.Headers["X-Forwarded-For"].FirstOrDefault()?.Split(',')[0].Trim() ??
           request.HttpContext?.Connection?.RemoteIpAddress?.ToString();
}
You can also find this as part of the HttpRequestExtensions class in the Westwind.AspNetCore package here which provides a host of other small, but frequently used extensions.
Summary
Nothing new here, but given how often I fumble for this value, creating this wrapper and putting a reminder here for quick lookup seems worth the effort 😄
CyberGym benchmark scores over time, showing the rapid improvement in AI vulnerability discovery capabilities. Microsoft’s multi-model MDASH system (top right) tops the leaderboard at 88.4%. (CyberGym / UC Berkeley)
Mythos has been MDASH’d.
A new AI-powered system from Microsoft surpassed a headline-grabbing rival from Anthropic on a leading cybersecurity benchmark, using more than 100 specialized AI agents working together across multiple AI models to find real-world software vulnerabilities.
Microsoft’s system, codenamed MDASH, was introduced this week alongside the disclosure of 16 new vulnerabilities it found in different versions of Windows, including four “critical” remote code execution flaws fixed in this month’s Patch Tuesday release.
The company, which has faced persistent criticism over security lapses, is betting that multiple models can discover vulnerabilities at a pace that individual models can’t match.
MDASH, derived from the term “multi-model agentic scanning harness,” works by running specialized AI agents through a staged pipeline. Different agents scan code for potential vulnerabilities, then a separate set of agents debate whether each finding is real and exploitable, and a final stage constructs proof-of-concept attacks to confirm the bugs exist.
By comparison, Anthropic’s Mythos, which raised concerns over its ability to find and exploit software vulnerabilities when it was previewed earlier this year, is a single AI model running inside an agent framework. Anthropic restricted its release to a handful of companies through a consortium called Project Glasswing, which includes Microsoft.
OpenAI’s GPT-5.5 and others on the leaderboard are also single-model systems.
MDASH scored 88.45% on the CyberGym benchmark, a test developed by UC Berkeley researchers that measures how well AI systems can reproduce real-world vulnerabilities across 1,507 tasks drawn from 188 open-source software projects.
Mythos Preview was second at 83.1%, followed by GPT-5.5 at 81.8%.
The benchmark gives each system a description of a known vulnerability and an unpatched codebase, and measures whether it can produce a working attack that triggers the bug.
The scores on the CyberGym leaderboard are self-reported by the companies, including Anthropic’s Mythos result. The benchmark code is public, but no independent party has verified any of the scores. Also, benchmark results don’t necessarily reflect real-world performance.
The results also highlight growing concerns about AI’s use as an offensive hacking tool. The same capabilities that allow AI to find vulnerabilities in friendly hands can be used to discover them for exploitation by attackers. Microsoft said MDASH is being used internally by its security engineering teams and will be entering a limited private preview with customers.
Microsoft is telling customers to expect bigger Patch Tuesdays going forward as AI accelerates the discovery of vulnerabilities.
Microsoft Edge is adding a new feature that will allow its Copilot AI chatbot to gather information from all of your open tabs. When you start a conversation with Copilot, you can ask the chatbot questions about what's in your tabs, compare the products you're looking at, summarize your open articles, and more.
In its announcement, Microsoft says you can "select which experiences you want or leave off the ones you don't." The company is retiring Copilot Mode as well, which could similarly draw information from your tabs but offered some agentic features, like the ability to book a reservation on your behalf. Microsoft has since folded these …