Learn how to create an SSMS offline installer for easy access without an internet connection. Get started with our guide.
The post SSMS 22 Offline Installation appeared first on MSSQLTips.com.

If you want to see my “velocity trumps everything” doctrine at work, you don’t have to look any further than the AI news headlines coming thick and fast about investments, valuations, and all the related hype. As a reminder, my doctrine is that velocity has replaced authority as the organizing principle of information. What and who moves fastest wins. Truth and facts are optional and get lost in the race to dominate attention.
However, if you take a moment to scratch beneath the surface, add some numbers, and think for yourself, a lot does not add up. That is a good reason to be pragmatic. Caveats are a great silent gift of Shakespeare’s language. But no one wants to be a party pooper and talk logic and sense when you have the Internet monster to feed. But the reality is very real, and kind of boring.
I bring this up because of the dust-up about Nvidia and OpenAI. This is a good example of why we should all exercise some modicum of caution when reporting and when reading the news in a mega hype cycle. Nvidia apparently has some second thoughts about its $100 billion OpenAI investment, the Wall Street Journal reports.
First, let me catch you up on the past.
Last September, OpenAI and Nvidia stood together at Nvidia’s headquarters to announce a $100 billion deal. The largest computing project in history, Jensen Huang called it. Nvidia’s stock jumped 4 percent. Market cap pushed toward $4.5 trillion. As we now know, the deal was never real. It was a memorandum of understanding. A press release dressed up as a partnership.
Now, if you read the original press release, you could have easily picked up these three basic facts. First, it was a letter of intent, not a binding deal, for Nvidia to help deploy at least 10 gigawatts of AI data center compute for OpenAI. Nvidia clearly said it may invest up to $100 billion over time as each gigawatt is deployed. That can take forever, especially since first deployments were not expected until the second half of 2026. In short, everything is as laid out in the press release, and no one should be surprised by what is coming to light.
The Journal now reports that talks never progressed beyond the early stages. Nvidia CEO Huang has been telling associates for months that the agreement was nonbinding and not finalized. He has also privately criticized what he called a lack of discipline in OpenAI’s business approach. Actually, this is a sucker punch. Why?
Over the weekend, Huang told reporters that his company would “absolutely be involved” in OpenAI’s latest funding round. “We will invest a great deal of money, probably the largest investment we’ve ever made,” he said. But will it be over $100 billion? “No, no, nothing like that,” he replied. The man is playing the announcement economy like Miles Davis played the trumpet. Yes, I am listening to Miles this morning. And why would he not? OpenAI is rumored to be one of Nvidia’s largest customers. If OpenAI lags, it will impact Nvidia’s sales. This is what it means to have a tiger by the tail.
Given that Nvidia is privy to the progress made by others, such as Anthropic — Nvidia committed up to $10 billion in November — and many Chinese AI companies, Huang probably has a much better understanding of the AI economy. He probably has a good idea which company is being smart about business and which is not. Read between the lines, and that is a pretty strong condemnation of OpenAI and its business practices.
Still, the whole brouhaha about Nvidia suddenly backing away from its OpenAI commitment is a good example of the momentum and noise that dominate. Just as the reaction to the original deal was crazy at one extreme, the reaction to the Journal story sits at the other.
This is how the new announcement economy works. You declare a massive number. The headlines write themselves. The stock moves. Mission accomplished. Whether the deal actually closes becomes almost irrelevant. The momentum already happened. Remember Stargate?
A whopping $500 billion for AI infrastructure. The president at the podium. SoftBank, Oracle, OpenAI logos everywhere. Great theater. The actual committed capital? Far murkier. But who cares when you have already won the news cycle.
Now we are hearing about SoftBank potentially putting another $30 billion into OpenAI. Amazon, maybe $50 billion. These numbers get reported breathlessly, without much interrogation. Total up OpenAI’s announced commitments and you get $1.4 trillion. More than one hundred times its revenue. The math does not need to work. It just needs to generate headlines. OpenAI is racing to go public by the end of this year. Every splashy announcement builds the narrative. Every trillion-dollar figure shapes the IPO story. Sam Altman understands this game better than almost anyone.
What is my takeaway after having lived through multiple bubbles and hype cycles? Big-talking press releases are nothing more than strategic posturing. In the case of AI news, such announcements are signaling dominance in AI infrastructure. Of course, the investment community sees all that and reacts. Everyone dreams of future revenue and massive growth. And do not get me started on analysts. They get a chance to talk up the partnership, get themselves in the media, and retain their jobs. These days, podcasts and social media then simply amplify everything.
Velocity is everything. Reality is kind of boring.
To be clear, I am a hundred percent a believer in our transition to the new AI world. I am just old-fashioned enough to not be impressed by press releases and media announcements that are meant to impress and shock you. The more pragmatic way is my way. Sure, it will not make me many friends, but who needs faux friends when you have facts.
February 2, 2026. San Francisco
Photo by Igor Omilaev on Unsplash
What is a cozy fantasy? In this post, we explore the genre. We include the elements of cozy fantasy – with plenty of examples.
Cozy fantasy is a sub-genre of the fantasy genre. A cozy fantasy can have magic, dragons, and fairies. It can even have the undead. But because there is no saving the world, or the universe, required, cozy fantasy is considered ‘low fantasy’.
For starters, cozy fantasies are often described as having:
Yes. In fact, Discworld, Hogwarts, and adventures beyond the Shire may be considered by some to be on the more active side of cozy fantasies. Other books in the genre exchange epic adventures for character journeys, action for a slow-paced, relationship-building, community-rich, positive, feel-good read.
Does that mean bad things don’t happen? On the contrary. The comfort that cozy fantasy offers is that they do, that life is not always easy for the characters, and that there are, as in real life, problems that need working through and resolution. Just as readers of Romance know there will be a happy ever after, readers of cozy fantasy enjoy knowing the characters will be okay, recovery can be achieved, magic is gentle, and problems will, in the end, be solved.
Here are the top five cozy fantasies on Goodreads:
If you need help creating a setting, buy The Setting Workbook from our shop.
So, if you enjoy fantasy, but would like something a little softer, gentler, where the slaying of dragons, or defeating evil empires isn’t on the agenda, and where raucous taverns give way to cake shops, and conversations, then cozy fantasy may be the genre for you.
[Cozy fantasy is a sub-genre of the broader cozy fiction genre, which includes cozy mystery and cozy horror.]
If you would like to learn how to write a book, sign up for one of the rich and in-depth workbooks and courses that Writers Write offers, and get your book off to a great start.
Source for image: Pixabay

by Elaine Dodge. Author of The Harcourts of Canada series and The Device Hunter, Elaine trained as a graphic designer, then worked in design, advertising, and broadcast television. She now creates content, mostly in written form, including ghostwriting business books for clients across the globe, but would much rather be drafting her books and short stories.
Top Tip: Find out more about our workbooks and online courses in our shop.
The post What Is A Cozy Fantasy? appeared first on Writers Write.
This week’s off to a hot start at work, and in the industry as a whole. It’s also earnings season across tech, so we can see who is making what type of real progress with AI.
[blog] Beyond Just Looking: Gemini 3 Now Has Agentic Vision. This is a bigger deal than we realize. Instead of “best guess” image processing, our model now does an agentic loop to truly understand an image.
[article] The Five Skills I Actually Use Every Day as an AI PM (and How You Can Too). Here’s a great challenge to PMs. If you’re not doing these types of activities, you’re going to quickly see people encroaching on your domain.
[blog] High-performance inference meets serverless compute with NVIDIA RTX PRO 6000 on Cloud Run. Truly impressive. Which other serverless stack is offering up to 44 vCPUs and 170+ GiB of RAM per instance and letting you run 70B parameter models on demand?
[blog] Summarizing Too Big for Context with MapReduce and LLMs. Smart approach from Wei here. If you’ve got a ton of input data, you can do a Map-Reduce-style exercise to distill the information.
[article] AWS’s inevitable destiny: becoming the next Lumen. It’s lucrative to be the backbone, but mindshare disappears.
[blog] The Rise of Coding Agent Orchestrators. Agent harnesses and orchestrators are going to have a big year. As will the management layers around them.
[blog] A Javelit Frontend for the Deep Research Agent. OK, now I know what Javelit is, and why it’s great for building data apps without messing with the frontend.
[blog] Kubernetes Rolling Updates for Reliable Deployments. Solid post about how good Kubernetes has gotten at supporting rolling updates for your workloads.
[blog] LiteRT: The Universal Framework for On-Device AI. I don’t understand this space very well, but I read this to learn more. Getting cross-platform acceleration for AI workloads from a single framework is a good deal.
[article] Elon Musk’s SpaceX has acquired his AI company, xAI. The man goes big and plays to win, that’s for sure. The renewed investments in space are pretty exciting.
[article] Waymo raises $16B to scale robotaxi fleet internationally. Let’s go! Great to see this fantastic engineering get deployed more widely.
Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:
This is a servicing release of Windows Package Manager v1.12. If you find any bugs or problems, please help us out by filing an issue.
- winget mcp for assistance on configuring your client
- Font as an InstallerType and NestedInstallerType
- UTF-8 BOM encoding when the schema header is on the first line
- Font experimental feature to accurately reflect fonts as the required setting value
- OnLaunch updates
- Font Install and Uninstall via manifest and package source for user and machine scopes has been added
A sample Font manifest can be found at:
https://github.com/microsoft/winget-pkgs/tree/master/fonts/m/Microsoft/FluentFonts/1.0.0.0
At this time, installation and removal of fonts is only supported for fonts installed via a WinGet package.
Fonts must either be the Installer itself or a .zip archive of NestedInstaller fonts.
A new explicit source for fonts, "winget-font", has been added.
winget search font -s winget-font
This source is not yet accepting public submissions.
The following snippet enables experimental support for fonts via winget settings. The winget font list command will list installed font families and the number of installed font faces.
{
  "$schema": "https://aka.ms/winget-settings.schema.json",
  "experimentalFeatures": {
    "fonts": true
  }
}

The font 'list' command has been updated with a new '--details' option for an alternate view of the installed fonts.
This release only contains bug fixes for App Installer, and no changes to winget.
Full Changelog: v1.12.460...v1.12.470
AI is ubiquitous in both the consumer and enterprise sectors. Yet few organizations are realizing AI’s full potential. Why? AI agents must make decisions and take actions based on a limited subset of overall data. Result: too much guesswork, the occasional hallucination, and failure to extract full value from AI.
The downfall of enterprise AI, then, is agents that falter without a comprehensive understanding of data, both customer- and business-derived. Companies need to be able to pivot from simple data ingestion to sophisticated content collection, integration, and curation that enable AI agents to respond accurately and take appropriate actions.
This can only be accomplished by advancing from traditional prompt engineering to context engineering, which combines a 360-degree view of the customer and a complete enterprise view of a dynamically changing business.
Many companies implementing AI are data-rich. They use large language models (LLMs) that pull data from all over the internet. They have in-house models that access data from customer databases and product documentation libraries.
Their agents access these pools of information to guide their decisions. Sometimes they get it right. But too often, they take the wrong action or recommend an incorrect response. What is missing is end-to-end context.
Here is a common example: A person wants to buy a car, so before finalizing their purchase, they go on the manufacturer’s website to research the various options. This data is captured in the car maker’s systems, and over the following weeks, AI directs a series of marketing actions to generate interest in the car model. Without full context, the marketing agent doesn’t recognize that the person has already purchased the car.
This breakdown occurs when one system contains the details of a car purchase, another has records on the individual buyer, and a separate application tracks customer engagement details (such as website visits). Robbed of the rich context of data locked inside information silos, AI digital engagement agents only know that someone researched a car. They’ve missed the opportunity to promote extended warranties and maintenance plans.
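The silo failure above can be sketched in a few lines. This is a hypothetical toy, not Salesforce’s implementation; all system names, record fields, and action labels here are invented for illustration:

```python
# Toy sketch of the car-buyer example: a marketing agent that only sees
# engagement data versus one with a unified view of all records.

sales_system = [
    {"customer_id": "c1", "event": "purchase", "model": "Sedan X"},
]
engagement_system = [
    {"customer_id": "c1", "event": "website_research", "model": "Sedan X"},
]

def siloed_next_action(engagement_records):
    # The agent only sees the engagement silo, so it still pushes a sale.
    for r in engagement_records:
        if r["event"] == "website_research":
            return "send_purchase_promotion"   # wrong: the car is already bought
    return "no_action"

def unified_next_action(all_records):
    # With a unified view, the purchase event is visible alongside engagement.
    events = {r["event"] for r in all_records}
    if "purchase" in events:
        return "offer_extended_warranty"       # right: post-purchase upsell
    if "website_research" in events:
        return "send_purchase_promotion"
    return "no_action"

print(siloed_next_action(engagement_system))                  # send_purchase_promotion
print(unified_next_action(sales_system + engagement_system))  # offer_extended_warranty
```

The point of the sketch is only that the same agent logic produces the right action once the purchase record is no longer locked in a separate system.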
Far from rare, such examples are all too common in agentic AI. Enterprises may be data-rich but are context-poor.
For AI to respond contextually, data needs to be fluid, harmonized, and unified. The walls between silos must be removed.
Achieving this requires several key elements:
Data catalog: The data catalog provides a single view of data across systems. This gives apps and AI agents a map of all assets residing in on-premises systems, the cloud, data lakes, and legacy infrastructure.
Data lineage: Consider this a data verification layer. It traces the full journey of data from origin to consumption, showing every change or transformation along the way. Data lineage enables AI agents to know where any piece of data came from, how it was produced, whether it aligns with organizational governance and regulatory compliance policies, whether it is secure and trustworthy, and whether it reflects the most current knowledge.
Connected signals and actions: Apps and AI agents rely on signals from every system to interpret what’s happening and trigger secure, meaningful actions.
Unified data context: There must be a central repository within an agentic AI architecture that collects, synthesizes, harmonizes, and unifies all information. This context interface for apps and AI agents must operate in real time without requiring file copying or data movement. Whether an AI agent is analyzing a trend or processing a product return, it must provide a single, shared, up-to-the-second view of the customer and the business, aligned with all relevant policies.
Enterprise understanding: Apps and AI agents should not have to relearn the business from scratch. They must act in accordance with the definitions, rules, and principles that underlie each portion of the business. If they don’t, they may appear “AI smart” but “corporate stupid.” Why? Deep metadata intelligence in the enterprise is unavailable to customer-facing systems.
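As a rough illustration of how a data catalog with lineage might back a single lookup for an agent, here is a minimal toy sketch; the class names, fields, and lineage strings are invented for this example, not a real product API:

```python
# Toy sketch: a catalog entry carries its system of origin and lineage,
# so an agent can resolve any asset and audit where it came from.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    system: str                                   # where the asset lives
    lineage: list = field(default_factory=list)   # transformations since origin

class UnifiedContext:
    """Central map of all assets across systems (catalog + lineage)."""
    def __init__(self):
        self.catalog = {}

    def register(self, asset: DataAsset):
        self.catalog[asset.name] = asset

    def resolve(self, name: str) -> DataAsset:
        # One lookup spans every system, so the agent never guesses a silo.
        return self.catalog[name]

ctx = UnifiedContext()
ctx.register(DataAsset("customer_purchases", "on-prem CRM",
                       lineage=["ingested 2026-02-01", "deduplicated"]))
asset = ctx.resolve("customer_purchases")
print(asset.system, asset.lineage)
```

In a real architecture the catalog would federate live systems rather than hold copies, but the shape is the same: one resolve call, with provenance attached.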
Enterprise context is vital in defining core business entities and their interrelationships. This context encompasses historical records, master data management (of products, suppliers, assets, and more), business rules, regulatory compliance, and organizational workflows. Comprehensive customer and enterprise records must be unified to supply AI agents with a shared data vocabulary that helps them infer the right context for the right situation at the right time.
Case in point: Large enterprises typically include numerous accounts and corporate entities. The names of various entities may be similar, but there are hierarchies, as well as specific rules and tax schemes that apply by geography and industry. In such a complex organizational structure, if names are entered incorrectly or data is assigned to the wrong corporate entity, AI-based errors are practically inevitable.
Only the complete unification of customer and enterprise metadata and systems can prevent costly errors and keep AI agents and apps supplied with the applicable context. This way, organizations can consolidate all enterprise and customer data and connect related data from multiple sources to transform trusted context into a meaningful story.
Learn more about Data 360 from Salesforce and how it transforms scattered, fragmented enterprise data into one complete view of your business to fuel real-time workflows, better decision making, and more intelligent agents.
The post Unlocking AI’s full potential: Why context is everything appeared first on The New Stack.