1165. Today, we talk with Joan Houston Hall about the monumental task of documenting how Americans speak. We explore the Dictionary of American Regional English (DARE), looking at the unique folk words that survive outside of standard dictionaries and how "word wagons" traveled the country to map the "egg turners," "pogonips," and "oncers" that define our regional identities.
"Dictionary of American Regional English" (DARE)
Support DARE by visiting the University of Wisconsin's giving page.
🔗 Join the Grammar Girl Patreon.
🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)
🔗 Watch my LinkedIn Learning writing courses.
🔗 Subscribe to the newsletter.
🔗 Take our advertising survey.
🔗 Get the edited transcript.
🔗 Get Grammar Girl books.
| HOST: Mignon Fogarty
| Grammar Girl is part of the Quick and Dirty Tips podcast network.
| Theme music by Catherine Rannus.
| Grammar Girl Social Media: YouTube. TikTok. Facebook. Threads. Instagram. LinkedIn. Mastodon. Bluesky.
I always really look forward to reading Redgate’s annual State of the Database Landscape report, and 2026 was no exception. I love the level of detailed statistics they collected from many thousands of database professionals worldwide, from DBAs to senior leaders, and from so many different organizations. There’s a lot to digest, so here are my thoughts on it. Take a look and feel free to share your own experiences and insights in the comments.
One thing called out early in the report is that local knowledge, manual controls and operational practices no longer work. Most organizations are now working with multiple database systems, and no one individual is sufficiently across all of them.
As the number of databases at each organization increases rapidly (and not just the number of different types of databases), operational practices are becoming more challenging. I didn’t use to see many clients with thousands of databases, but now it’s becoming quite routine. In the past, it was just a select few, very specific types of customer who operated multiple platforms.
For example, I worked with mining companies who had one database per drilling rig, and they had tens of thousands of them. I always thought they were a great example of why AutoClose shouldn’t be removed as an option from SQL Server. If you have tens of thousands of databases on an instance, and only a handful are being used at any point in time, you don’t want them all online.
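As a minimal sketch of what that option does (the database name here is hypothetical), AUTO_CLOSE is set per database, so a rarely-touched database shuts down cleanly and releases its resources once the last connection closes:

-- Hypothetical database name: enable AUTO_CLOSE so the database shuts down
-- cleanly and releases its resources when the last user connection closes.
ALTER DATABASE [Rig04217_Telemetry] SET AUTO_CLOSE ON;

-- Check which databases on the instance currently have the option enabled.
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;

On an instance hosting tens of thousands of mostly idle databases, that behavior is exactly what you want; on a busy transactional database, the constant open-and-close cycles would hurt, which is why it stays an opt-in setting.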
But today, pools of cloud-based databases backing SaaS applications are a common cause. Many SaaS vendors choose to use separate databases as one of the core isolation boundaries for their applications. It certainly makes it harder to accidentally show one customer’s data to another.
Elsewhere, I was interested to see the mention that organizations exposed to auditing are getting better at database monitoring. The pressure from audits is clearly helping. After all, consistent monitoring is essential – and clearly of interest to Redgate, given their Redgate Monitor tool.
Lastly, it was disappointing – but not surprising – to see that adoption of Database DevOps (45%) and of built-in CI/CD tools (38%) remains sluggish.
I was interested to see that the percentage of organizations working with just a single database increased in 2025 (to 26%) but fell again in 2026 (to 16%). Shared practices across multiple toolsets continue to be a challenge. Manual processes increase risk directly, and respondents clearly say they want cross-database tooling.
I’m sure that cloud-based systems are influencing that. The report notes that the number of fully on-premises estates has fallen from 53% in 2021 to just 20% in 2026, and that hybrid architectures are now the default. From my work, I see that many cloud vendors do a good job in the cloud but still do a poor job with hybrid.
I liked seeing flexibility now cited (57%) as a key driver for cloud adoption. So often in the past, scalability seemed to be claimed as the most important factor. I’ve always valued both. We depend on scalability in our systems and routinely scale resources dynamically to keep costs under control but performance up when needed. But so many times, I’ve been at client sites and thought that flexibility was what they were missing.
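As a small, hedged example of that kind of dynamic scaling (this assumes Azure SQL Database; the database name and service objectives are placeholders), the service tier can be changed with plain T-SQL ahead of a busy period and dropped back down afterwards:

-- Hypothetical Azure SQL Database: scale up ahead of a known busy period...
ALTER DATABASE [SalesDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S6');

-- ...and scale back down afterwards to keep costs under control.
ALTER DATABASE [SalesDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');

Script that on a schedule and you get exactly the kind of flexibility I mean – something a lengthy hardware procurement cycle can never match.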
For example, one site wanted to test Always On Availability Groups and was determined to do so with their own hardware. It took them 4 weeks to get a suitable quote for the equipment and another 5 weeks for it to arrive. And, when it finally did, they told me they couldn’t take a power outage on their racks to install it for another 4 weeks. If we had done the work in Azure VMs, it would have been completed in a single day.
The report noted that data security has become a board-level concern. This is an area that I’m not so sure about. I’ve done work for many large financial organizations where the board is made well aware of security issues, but then decides against resolving those issues, based on cost.
The report notes that risk is increasing. I wish our local laws here in Australia were keeping pace with that trend. It was interesting to see the statistic that 58% of organizations are willing to accept higher risk for efficiency gains. Why wouldn’t they, if there’s no personal risk?
I think that if personal liability were attached to these decisions by board members, we’d be seeing very different decisions. With concepts like “trading whilst insolvent”, board members in my country (Australia) understand that they become personally liable. However, they can still make poor security decisions that lose members’ or shareholders’ funds, and seem to be able to just shrug it off. I hope that changes.
And I’ll talk about AI more in a moment, but this just makes things even more challenging.
For many years, I’ve been telling my clients that, for high-risk financial systems, a move to cloud-based environments is inevitable. These are the same people who used to tell me that their hesitance in “moving to the cloud” was based on security concerns.
In Australia, we have many regulations around computing systems that financial organizations must meet. And when I work in banks here, even larger ones, I see how they struggle to meet those requirements. Yet the best cloud providers have started with those regulations as a base level. I believe that as the requirements increase, the only way these systems will ever have a chance of meeting the requirements will be in high-end cloud providers. They just won’t be able to meet the requirements by themselves.
A vendor who is managing many millions of cloud-based databases is going to have procedures in place that normal organizations just can’t compete with.
I like John Q Martin’s quote: “one of the main reasons that many organizations adopt cloud infrastructure for database and other workloads is the prospect of significant cost savings. However, most fail at this.”
The biggest issue here is that far too many organizations still try to migrate database applications to the cloud, when those applications aren’t suitable in their current form.
As an example, I was helping a local bank that had moved their overnight processing system from an on-premises server to a popular cloud vendor’s infrastructure. They called me because their overnight processing was now taking over 12 hours, when it used to take 2 hours. And worse, they had reserved cloud-based infrastructure that was, in theory, 10 times more powerful than their existing on-premises systems.
That application could never have moved successfully without the rework we ended up helping them do on it.
Back when SQL Server introduced Data Quality Services, I thought that finally there would be some action in this area. But data quality issues remain unsolved. Everyone knows how important they are, yet nobody is solving them.
I found it interesting that, in the report, Gartner predicts that 60% of AI projects without AI-ready data will be abandoned by the end of 2026. I can believe that. In the end, it doesn’t matter how good your analytics look if the underlying data is nonsense, or even just poor quality.
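To make that concrete (the table and column names below are hypothetical), even a few trivial profiling checks will usually surface the kind of quality problems that quietly undermine analytics and AI projects:

-- Hypothetical customer table: simple profiling checks that routinely expose
-- missing or malformed emails, impossible dates, and duplicated keys.
SELECT COUNT(*) AS TotalRows,
       SUM(CASE WHEN Email IS NULL OR Email NOT LIKE '%_@_%._%' THEN 1 ELSE 0 END) AS SuspectEmails,
       SUM(CASE WHEN DateOfBirth > SYSDATETIME() THEN 1 ELSE 0 END) AS FutureBirthDates,
       COUNT(*) - COUNT(DISTINCT CustomerKey) AS DuplicateKeys
FROM dbo.Customer;

If a query like that returns anything other than zeros in the last three columns, the AI project built on top of the data is already in trouble.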
The challenge here is that everyone says the words ‘Artificial Intelligence’, but they often mean very different things.
The report says that the proportion of organizations with no plans to adopt AI has fallen to 13%, yet only 44% say they already are. So, what are the rest doing? Sounds like a lot of just talking about it. And the stats show that the larger organizations are making the biggest strides – yet you’d think the smaller ones would have the most flexibility to allow them to get started with the technology.
Jeff Foster’s quote, “everyone wants to move faster with AI, but few are truly ready for it”, is telling – and matches well with what I see around my client base.
I work with AI-based systems every single day and now cannot imagine working without them. If you aren’t, it’s time to get involved. And I don’t just mean using ChatGPT, useful as it can be.
As an example, one area I love using AI tools for is generating realistic test data. I still want test data that stretches the use of data types and parsing, but I also want test data that looks real but isn’t. AI tools can do a great job of that. And they are clever. For example, I can ask for a particular ethnic mix in customer details, and I get back realistic names. I’ve learned enough Mandarin over the years to know that when I ask for 20% Chinese names, they’ve used common Chinese family names, and so on.
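For illustration only (the table and every row below are hypothetical and entirely fictitious), the kind of output I’m describing looks something like this – names that are plausible at a glance, plus values that deliberately exercise Unicode, embedded apostrophes, and boundary dates:

-- Hypothetical, entirely fictitious test rows of the kind an AI tool can generate
-- on request: realistic-looking names with a requested ethnic mix, plus values
-- that stretch data types and parsing (Unicode, embedded apostrophes, a leap day).
INSERT INTO dbo.TestCustomer (GivenName, FamilyName, DateOfBirth, CreditLimit)
VALUES (N'Siobhán', N'O''Brien', '1999-02-28', 0.01),
       (N'伟', N'张', '1988-11-05', 25000.00),
       (N'Lars', N'Sørensen', '2000-02-29', 999999.99);

None of those people exist, but a tester, a parser, and a report all behave as if they could.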
Another example is code review. I recently had a Gen AI tool reformat some code for me. That’s a good example of “busy work” that AI can really help with. But I was so pleased when, while formatting the code, it pointed out a logical bug in one of the use cases covered by the code. Nice!
But it’s not just limited to productivity – I believe that AI-based tooling is the only way we will ever deal with the onslaught of security issues heading our way.
I’ll finish by saying that I continue to be amazed at the level of interest in chatbots, given how poorly they work today, at least in my experience. Is there any chatbot that you’ve ever interacted with that’s done a great job for you? What percentage of them are just annoying and make it look like the organization can’t be bothered to have humans talk to you?
It’s a great report, full of fascinating statistics and insights. These were just some of my thoughts, but it’s well worth a full read for yourself – and why not share your takeaways from the report in the comments down below?
The post My thoughts on Redgate’s 2026 State of the Database Landscape report appeared first on Simple Talk.
The Decentralized Identifier Working Group has published Decentralized Identifiers (DIDs) v1.1 as a W3C Candidate Recommendation Snapshot. This document specifies the DID syntax, a common data model, core properties, serialized representations, DID operations, and an explanation of the process of resolving DIDs to the resources that they represent.
Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. They may refer to any subject and have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities, so as to enable the controller of a DID to prove control over it without requiring permission from any other party.
Comments are welcome via GitHub issues by 5 April 2026.
The JSON-LD Working Group published today a First Public Working Draft of YAML-LD 1.0. [JSON-LD11] is a JSON-based format to serialize Linked Data. In recent years, [YAML] has emerged as a more concise format to represent information that had previously been serialized as [JSON], including API specifications, data schemas, and Linked Data.
This document defines YAML-LD as a set of conventions on top of YAML which specify how to serialize Linked Data [LINKED-DATA] as [YAML] based on JSON-LD syntax, semantics, and APIs.
Since YAML is more expressive than JSON, both in the available data types and in the document structure (see [RFC9512]), this document identifies constraints on YAML such that any YAML-LD document can be represented in JSON-LD.