
Couchbase’s Data API in Practice With NodeRed


Couchbase Capella has a new Data API. If you’re wondering why it matters when we already have SDKs, the answer is addressed in our documentation, but here’s a quick comparison:

  • Data API: HTTP-based, language and runtime agnostic, and easy to integrate with zero dependencies, with trade-offs in latency, throughput, and resiliency features.
  • SDKs: Native libraries with richer features and better performance, better suited to workloads where scale and resilience matter.

Some use cases:

  • Functions for FaaS/Serverless: AWS Lambda, Google Cloud Functions, Azure Functions, Netlify Functions
  • SaaS integrations: Zapier, IFTTT, Relay, Make, N8N, Flowwise, Node-RED
  • Scripting: Jenkins Pipeline or GitHub Actions
  • Internal tools: Dashboard, Grafana

All of these use cases can be implemented today, but they would require either deploying and managing your own Couchbase SDK-backed API, or ensuring the SDKs were available wherever the code runs, which in some cases is impossible.

With that, let’s see how this all works in practice with a use case example.

Node-RED Example

Node-RED enables low-code programming for event-driven applications. It’s visual, simple, lightweight, and it runs on a wide range of hardware platforms. However, while it supports external modules, some of them, especially those that depend on native libraries such as our Node.js SDK, can be difficult to use. This presents a perfect excuse to try the new Data API.

Below is a simple use case that scrapes data from Luma to find out what’s going on in Paris. You can see the results below. Note that the top-level flow is the ingestion and the second is the debugging query.

Ingestion Flow

  • Start: An inject node that triggers the flow every 72 hours.
  • Query luma: An HTTP request to https://luma.com/paris.
  • Extract events: An HTML parser that retrieves the list of events as a String.
  • Convert to JSON: A JSON parser that turns this String into a JSON object.
  • parse_to_events: A function that takes this object and creates a new one containing only the required data.

The code looks like this:

var newMsg = { payload: msg.payload[0].content.props.pageProps.initialData.data.events };
return newMsg;

  • forEach: A Split node that splits the events JSON array from the previous step into one message per event.
  • Create event in Couchbase: An HTTP request sent to the Capella Data API. Whatever is in the payload of the previous step becomes the body of the request, and you can use Mustache templating for the URL field. The step is equivalent to running this curl command:

curl -X PUT --user api:**** -H 'Content-Type: application/json' \
   -d '{"event":"data",...}' \
   https://snzd9hz3unnntl7.data.cloud.couchbase.com/v1/buckets/events/scopes/paris/collections/luma/documents/fd5hds83

  • Debug1: A debug step to see the results of all the requests. 

Debug Flow

  • Start: An inject node that must be triggered manually.
  • SQL_Query: A function that returns a JSON object representing the query to run.

var query = { statement: "SELECT * FROM events.paris.luma ;" };
var newMsg = { payload: query };
return newMsg;

  • Query events in Couchbase: An HTTP Request that runs the given query. It would look like the following curl command:

curl -X POST --user api:**** -H 'Content-Type: application/json' \
   -d '{"statement":"SELECT * FROM events.paris.luma ;"}' \
   https://snzd9hz3unnntl7.data.cloud.couchbase.com/_p/query/query/service
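For the scripting use cases mentioned earlier (Jenkins, GitHub Actions, and the like), the same query can be issued from a short Python script using only the standard library. This is a sketch: the endpoint is the placeholder hostname from this post, and the credentials are assumptions.

```python
import base64
import json
import urllib.request

# Placeholder endpoint from this post; substitute your cluster's Data API URL.
DATA_API = "https://snzd9hz3unnntl7.data.cloud.couchbase.com"

def build_query_request(statement, user, password):
    # The Data API query endpoint takes a JSON body with a "statement" key.
    body = json.dumps({"statement": statement}).encode("utf-8")
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{DATA_API}/_p/query/query/service",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
        },
    )

req = build_query_request("SELECT * FROM events.paris.luma;", "api", "secret")
# urllib.request.urlopen(req) would send the query to Capella.
```

This mirrors the curl command above: basic authentication against the Data API, a JSON body, and a plain POST, with no SDK installed.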

Try It Yourself

Node-RED can easily be run on your machine with:

docker run -it -p 1880:1880 --name mynodered nodered/node-red

Then go to http://127.0.0.1:1880/ and follow the instructions. You can either create the nodes by hand or import the flow from this JSON export:

[
   {
  	"id":"f6f2187d.f17ca8",
  	"type":"tab",
  	"label":"Event Ingestion",
  	"disabled":false,
  	"info":""
   },
   {
  	"id":"5039574f789798bf",
  	"type":"inject",
  	"z":"f6f2187d.f17ca8",
  	"name":"Start",
  	"props":[
     	{
        	"p":"payload"
     	},
     	{
        	"p":"topic",
        	"vt":"str"
     	}
  	],
  	"repeat":"259200",
  	"crontab":"",
  	"once":false,
  	"onceDelay":0.1,
  	"topic":"",
  	"payload":"",
  	"payloadType":"date",
  	"x":90,
  	"y":60,
  	"wires":[
     	[
        	"a7e82076addaf2bf"
     	]
  	]
   },
   {
  	"id":"65dcb4aeead12a54",
  	"type":"debug",
  	"z":"f6f2187d.f17ca8",
  	"name":"debug 1",
  	"active":true,
  	"tosidebar":true,
  	"console":false,
  	"tostatus":false,
  	"complete":"payload",
  	"targetType":"msg",
  	"statusVal":"",
  	"statusType":"auto",
  	"x":580,
  	"y":120,
  	"wires":[
    	 
  	]
   },
   {
  	"id":"a7e82076addaf2bf",
  	"type":"http request",
  	"z":"f6f2187d.f17ca8",
  	"name":"Query Luma",
  	"method":"GET",
  	"ret":"txt",
  	"paytoqs":"ignore",
  	"url":"https://luma.com/paris",
  	"tls":"",
  	"persist":false,
  	"proxy":"",
  	"insecureHTTPParser":false,
  	"authType":"",
  	"senderr":false,
  	"headers":[
    	 
  	],
  	"x":290,
  	"y":60,
  	"wires":[
     	[
        	"c463b094ea88ec73"
     	]
  	]
   },
   {
  	"id":"c463b094ea88ec73",
  	"type":"html",
  	"z":"f6f2187d.f17ca8",
  	"name":"Extract Events",
  	"property":"payload",
  	"outproperty":"payload",
  	"tag":"#__NEXT_DATA__",
  	"ret":"compl",
  	"as":"single",
  	"chr":"content",
  	"x":480,
  	"y":60,
  	"wires":[
     	[
        	"3be4c6d9e4554672"
     	]
  	]
   },
   {
  	"id":"3be4c6d9e4554672",
  	"type":"json",
  	"z":"f6f2187d.f17ca8",
  	"name":"Convert to JSON",
  	"property":"payload[0].content",
  	"action":"obj",
  	"pretty":true,
  	"x":690,
  	"y":60,
  	"wires":[
     	[
        	"9b66605f179021d0"
     	]
  	]
   },
   {
  	"id":"9b66605f179021d0",
  	"type":"function",
  	"z":"f6f2187d.f17ca8",
  	"name":"parse_to_events",
  	"func":"var newMsg = { payload: msg.payload[0].content.props.pageProps.initialData.data.events };\nreturn newMsg;",
  	"outputs":1,
  	"timeout":0,
  	"noerr":0,
  	"initialize":"",
  	"finalize":"",
  	"libs":[
    	 
  	],
  	"x":900,
  	"y":60,
  	"wires":[
     	[
        	"1924a3c26b9520bd"
     	]
  	]
   },
   {
  	"id":"1924a3c26b9520bd",
  	"type":"split",
  	"z":"f6f2187d.f17ca8",
  	"name":"foreach",
  	"splt":"1",
  	"spltType":"len",
  	"arraySplt":1,
  	"arraySpltType":"len",
  	"stream":true,
  	"addname":"",
  	"property":"payload",
  	"x":120,
  	"y":120,
  	"wires":[
     	[
        	"7e3a02b180875fed"
     	]
  	]
   },
   {
  	"id":"7e3a02b180875fed",
  	"type":"http request",
  	"z":"f6f2187d.f17ca8",
  	"name":"create event in couchbase",
  	"method":"PUT",
  	"ret":"txt",
  	"paytoqs":"query",
  	"url":"https://snzd9hz3unnntl7.data.cloud.couchbase.com/v1/buckets/events/scopes/paris/collections/luma/documents/{{{payload.event.url}}}",
  	"tls":"",
  	"persist":true,
  	"proxy":"",
  	"insecureHTTPParser":false,
  	"authType":"basic",
  	"senderr":false,
  	"headers":[
    	 
  	],
  	"x":370,
  	"y":120,
  	"wires":[
     	[
        	"65dcb4aeead12a54"
     	]
  	]
   },
   {
  	"id":"f309559bde533ab6",
  	"type":"inject",
  	"z":"f6f2187d.f17ca8",
  	"name":"Start",
  	"props":[
     	{
        	"p":"payload"
     	},
     	{
        	"p":"topic",
        	"vt":"str"
     	}
  	],
  	"repeat":"",
  	"crontab":"",
  	"once":false,
  	"onceDelay":0.1,
  	"topic":"",
  	"payload":"",
  	"payloadType":"date",
  	"x":90,
  	"y":200,
  	"wires":[
     	[
        	"49244e9d100517aa"
     	]
  	]
   },
   {
  	"id":"7945b9a86fa49919",
  	"type":"http request",
  	"z":"f6f2187d.f17ca8",
  	"name":"query events in couchbase",
  	"method":"POST",
  	"ret":"txt",
  	"paytoqs":"query",
  	"url":"https://snzd9hz3unnntl7.data.cloud.couchbase.com/_p/query/query/service",
  	"tls":"",
  	"persist":true,
  	"proxy":"",
  	"insecureHTTPParser":false,
  	"authType":"basic",
  	"senderr":false,
  	"headers":[
     	{
        	"keyType":"other",
        	"keyValue":"Accept",
        	"valueType":"other",
        	"valueValue":"application/json"
     	}
  	],
  	"x":520,
  	"y":200,
  	"wires":[
     	[
        	"24b1dd899b6d2ef1"
     	]
  	]
   },
   {
  	"id":"24b1dd899b6d2ef1",
  	"type":"debug",
  	"z":"f6f2187d.f17ca8",
  	"name":"debug 2",
  	"active":true,
  	"tosidebar":true,
  	"console":false,
  	"tostatus":false,
  	"complete":"payload",
  	"targetType":"msg",
  	"statusVal":"",
  	"statusType":"auto",
  	"x":740,
  	"y":200,
  	"wires":[
    	 
  	]
   },
   {
  	"id":"49244e9d100517aa",
  	"type":"function",
  	"z":"f6f2187d.f17ca8",
  	"name":"SQL Query",
  	"func":"var query = { statement: \"SELECT * FROM events.paris.luma ;\" }\nvar newMsg = { payload: query };\nreturn newMsg",
  	"outputs":1,
  	"timeout":0,
  	"noerr":0,
  	"initialize":"",
  	"finalize":"",
  	"libs":[
    	 
  	],
  	"x":270,
  	"y":200,
  	"wires":[
     	[
        	"7945b9a86fa49919"
     	]
  	]
   }
]

You will need a Capella instance with the Data API enabled. It’s available in our Free Tier and easy to test: go to cloud.couchbase.com, open your cluster, go to the Connect tab, and click Enable Data API. Enablement can take up to 20 minutes, so in the interim you can follow the instructions about allowed IP addresses and credentials.

You now have everything you need to use the Data API, specifically the URL endpoint of the API. You can also access the Reference Documentation for more information.

We hope you enjoy this new Capella feature, and all the use cases that are now available to you.


The post Couchbase’s Data API in Practice With NodeRed appeared first on The Couchbase Blog.


Stop Wrestling with JavaScript: htmxRazor Gives ASP.NET Core the Component Library It Deserves


Here is an uncomfortable truth the ASP.NET Core community has been avoiding for too long: server-rendered web development should not require you to adopt React, Vue, or Angular just to get a decent set of UI components.

For years, .NET developers have been stuck choosing between two bad options. You can wire up Bootstrap by hand, bolting htmx attributes onto generic HTML and writing the same boilerplate for every project. Or you can adopt Blazor, pulling in a 2 MB WebAssembly runtime to get component abstractions that were designed for a completely different rendering model.

Neither path respects the developer who chose server rendering on purpose.

That gap is exactly why htmxRazor exists.

What htmxRazor Actually Is

htmxRazor is an open-source UI component library built as ASP.NET Core Tag Helpers. It ships 72 production-ready components across 10 categories: buttons, form controls, dialogs, tabs, navigation, carousels, data visualization, feedback indicators, and more. Every component renders clean, semantic HTML on the server and treats htmx attributes as first-class properties.

That last point matters. This is not a generic component library with htmx compatibility tacked on as an afterthought. When you write hx-get, hx-post, hx-target, or hx-swap on an htmxRazor component, the Tag Helper understands those attributes and renders them correctly within its own markup structure.

Setup takes two lines of code in Program.cs:


builder.Services.AddhtmxRazor();
app.UsehtmxRazor();

Register the Tag Helpers in _ViewImports.cshtml, and you are building with components immediately. No webpack configuration. No npm install. No bundler. One NuGet package.

The Problem with the Status Quo

Think about what happens on a typical ASP.NET Core project that uses htmx today.

You start with raw HTML or Bootstrap. You add htmx for interactivity. Then you spend hours wiring up form validation, building accessible dialogs, creating tab components, and making everything work with ASP.NET Core’s model binding. You repeat this work on the next project. And the next.

Blazor solves the component problem but introduces its own complexity. You need WebAssembly or SignalR, you lose the simplicity of standard HTTP request/response patterns, and you carry a runtime that dwarfs the size of most applications it serves. For teams that chose Razor Pages or MVC because they wanted a lighter model, Blazor feels like trading one set of problems for another.

htmxRazor targets the developers caught in that middle ground: people who like server rendering, who like htmx, and who want real components without the overhead.

72 Components, Zero Client-Side Runtime

The component catalog covers real application needs across ten categories.

Actions include buttons with eight variants (brand, success, danger, neutral, and more) plus button groups and dropdowns. Forms give you inputs with model binding, textareas, selects, comboboxes, checkboxes, switches, radio groups, sliders, ratings, color pickers, file inputs, and number inputs. All of these integrate with ASP.NET Core’s ModelExpression for automatic label generation, type detection, and validation message rendering.

Feedback components cover callouts, badges, tags, spinners, skeleton loaders, progress bars, progress rings, and tooltips. Navigation includes tabs, breadcrumbs, tree views, and carousels. Overlays provide dialogs, drawers, and collapsible details panels.

Beyond the basics, htmxRazor includes imagery components (icons with 43 built-in glyphs, avatars, animated images, before/after comparisons, zoomable frames), formatting helpers (bytes, dates, numbers, relative time), utility components (copy buttons, QR codes, animations, popups, popovers), and composite patterns like active search, infinite scroll, lazy loading, and polling.

Every one of these components renders server-side. The total bundle shipped to the browser is CSS plus the htmx script, roughly 14 KB gzipped. Compare that to Blazor’s 2 MB WebAssembly runtime or even Bootstrap’s 24 KB JavaScript plus 22 KB CSS.

A Real Design System, Not a Bootstrap Wrapper

htmxRazor owns its entire visual system. It does not depend on Bootstrap, Tailwind, or any external CSS library.

The design system is built on CSS custom properties (design tokens) that control colors, spacing, typography, borders, shadows, and every other visual decision. Components use BEM naming with an rhx- prefix, keeping CSS specificity predictable and collisions nonexistent.

Theming works through a single attribute on the <html> element:


<html data-rhx-theme="light">

Switch to dark mode by changing that value to "dark", or toggle it programmatically with RHX.toggleTheme(). Every component responds to the theme change automatically because all styling flows through the token system.

You can override any design token with standard CSS:


:root {
    --rhx-color-brand-500: #6366f1;
    --rhx-radius-md: 0.75rem;
}

This means htmxRazor adapts to your brand identity without requiring you to fork the source or fight against opinionated defaults.

Accessibility Is Not an Afterthought

Every component ships with semantic HTML, appropriate ARIA attributes, keyboard navigation, and screen reader support. This was a design goal from the start, not something bolted on after the component catalog was complete.

Form components generate proper <label> associations. Dialogs trap focus. Interactive elements respond to keyboard events. The markup each component produces passes automated accessibility checks.

For teams working under WCAG compliance requirements, this saves significant effort compared to building accessible patterns from scratch on every project.

1,436 Unit Tests

The library includes 1,436 unit tests covering Tag Helper rendering behavior. These tests verify that components produce correct HTML structure, that attributes propagate properly, that model binding generates expected output, and that accessibility markup is present.

This is not a weekend experiment or a proof of concept. The test suite reflects the kind of coverage you need before trusting a component library in production applications.

Who Should Care About This

If you are building with ASP.NET Core and you have already chosen (or are considering) htmx for interactivity, htmxRazor removes the boilerplate between your decision and your working UI.

If you are evaluating whether to adopt Blazor or stick with Razor Pages, htmxRazor offers a third option: keep the server-rendered model you prefer, get real components, and add interactivity with htmx’s simple attribute-based approach.

If you are building internal tools, admin panels, or line-of-business applications where developer productivity matters more than chasing the newest client-side tooling, 72 prebuilt components with model binding and validation integration will save you weeks of work per project.

Get Started

Install from NuGet:

dotnet add package htmxRazor

See every component with live examples at htmxRazor.com.

Browse the source, file issues, or contribute at github.com/cwoodruff/htmxRazor.

The project is MIT licensed and accepting contributions. If server-rendered components with htmx integration solve a problem you have been working around, give htmxRazor a look and let the project know what you think.

The post Stop Wrestling with JavaScript: htmxRazor Gives ASP.NET Core the Component Library It Deserves first appeared on Chris Woody Woodruff | Fractional Architect.


AI Crash Course: Tokens, Prediction and Temperature


We often describe AI models as “thinking,” but what is actually happening when an AI model “thinks”? When it’s drafting a response to us, how does it know what to say?

One of the most tempting (and common) misunderstandings related to AI models is the perception that they “think”—or have awareness of any kind, for that matter.

This is primarily a language problem: we (meaning humans) like to use the words and experiences that we’re most familiar with as a shorthand to communicate complex ideas. After all, how many times have you seen a webpage slowly loading and heard someone say “hang on, it’s thinking about it”? We “wake” computers up from being in “sleep” mode, we initiate network “handshakes,” we get annoyed with memory-”hungry” programs.

In the same way, we often describe AI models as “thinking,” sometimes even including the directive to “take as much time as you need to think about this” when prompting them! But what is actually happening when an AI model “thinks”? When it’s drafting a response to us, how does it know what to say?

The short answer is that AI models (especially text-focused LLMs, which we’ll use as the example for the rest of this article) are highly advanced token prediction machines. They use neural networks (a type of machine learning algorithm) to identify patterns across large contexts. Based on decades of research about how sentences are structured in a given language (like the prevalence of various words, and the statistical likelihood that one specific word will follow another), modern AI models are able to combine tokens into words, and then words into sentences.

Predictive Language Models

For the long answer … we actually have to start all the way back in the 1940s. Cryptography and cipher-breaking technology was developing at a breakneck pace in an attempt to intercept and decrypt enemy communications during WWII. If you could recognize and crack even one or two letters in an enciphered communication, these new predictive methods could be used to help determine what the other letters were likely to be.

For example, in English “E” is the most commonly used letter, and “T” and “H” are often used together. If we know that one letter in a word is “T,” we can calculate the likelihood that the next letter will be “H” (spoiler alert: it’s pretty high). This same probability calculation can be extended from letters to words, from words to phrases, and from phrases to sentences. If you’re interested in the true deep dive, you can still read the 1950 paper published about these learnings: “Prediction and Entropy of Printed English” (which, by the way, is where those earlier facts about “E,” “T” and “H” come from). If you want the overview, watch The Imitation Game (actually, just watch The Imitation Game anyway; it’s a great movie).

Fast-forward to today: computers have offered us ways to analyze huge amounts of language data in ways that were simply not available in the 1950s. Our knowledge on this topic and our ability to predict content has only gotten better over the last 70+ years.

When we’re training large language models (LLMs), most of what we’re doing is giving them these huge samples of language—which, in turn, allows them to leverage these predictive models to more accurately identify and generate specific word, phrase and sentence combinations. You can think of it like the predictive text on your smartphone, but with the dial turned up to 1000 because it’s not just looking at samples of how you text, it’s looking at millions of samples demonstrating various ways that humans have communicated in a given language over hundreds of years.
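The counting idea behind this can be sketched with a toy bigram predictor in Python. This is deliberately tiny and nothing like a production LLM; it only illustrates frequency-based next-word prediction:

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny sample corpus.
corpus = "the cat sat on the mat and the cat slept".split()
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Choose the statistically most frequent successor, in the spirit of
    # the 1940s frequency analysis described above.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

An LLM does something of the same flavor at vastly greater scale: over tokens rather than whole words, and with a neural network rather than a lookup table.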

Tokens

However, it would be a bit of a misrepresentation to say that LLMs are “thinking” in words. In fact, LLMs process language via tokens, which can be (but aren’t always) entire words. Tokens are the smallest units that a given language can be broken down into by a model.

If you’re familiar with design systems, you might have heard of design tokens. Design tokens are the smallest values in a design system: hex colors, font sizes, opacity percentages and so on. In the same way, language tokens can be thought of as the smallest pieces that words can be broken down into. This is commonly aligned with prefixes, suffixes, root words, possessives, contractions, etc., but can also include units that aren’t necessarily based on human language structure.

This is done for both flexibility and efficiency: for example, if you can train an English-based model to recognize “draw” and “ing,” then you don’t have to explicitly teach it “drawing.” The same idea can be extended to things like “has” or “should” + “n’t” and “make” or “teach” + “er.” This can also help it make “educated guesses” at user input words that weren’t included in its training material. So if a user says they’re “regoogling” something, the LLM can identify the prefix “re-”, the name “Google” and the suffix “-ing” and cobble together something reasonably close to a working definition.
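To make the idea concrete, here is a toy greedy longest-match tokenizer in Python. The vocabulary is hand-picked purely for illustration; real tokenizers such as BPE or WordPiece learn their subword vocabularies from data:

```python
# Hand-picked subword vocabulary for illustration only.
VOCAB = {"draw", "ing", "teach", "er", "make", "should", "n't", "re"}

def tokenize(word):
    tokens, i = [], 0
    while i < len(word):
        # Greedily take the longest vocabulary entry matching at position i,
        # falling back to a single character for unknown material.
        match = next((word[i:j] for j in range(len(word), i, -1)
                      if word[i:j] in VOCAB), word[i])
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("drawing"))    # ['draw', 'ing']
print(tokenize("shouldn't"))  # ['should', "n't"]
```

Note how "drawing" never needs to appear in the vocabulary: it decomposes into pieces the tokenizer already knows, which is exactly the flexibility described above.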

Because of the intrinsic role they play in AI functionality, tokens have become one of the primary ways we measure various AI models. Tokens are used to measure the data that models are trained on (total tokens seen during training), how much a model can process at a given time (known as the context window), and—as you already know if you’re a developer building apps that integrate with popular foundation models—API usage (both input and output) for the purposes of monetization.

Temperature

Adjusting these predictive computations that determine which tokens are most likely to follow other tokens is also part of how we can shape the model’s responses. The temperature of an AI model refers to how often the model will choose tokens that are less statistically likely.

A model with a low temperature is more conservative; when selecting the next word in its predictive text chain, it will choose options that have a higher percentage of occurrence. For instance, a low temperature model would be far more likely to say “My favorite food is pizza” than “My favorite food is tteokbokki,” assuming it was trained on data where “pizza” followed the words “My favorite food is” 70% of the time and “tteokbokki” only followed 15% of the time. Increasing the temperature of the model increases the percentage of times the model will choose the less-popular token by flattening the probability distribution; lowering the temperature sharpens the distribution, making less-common responses less likely.
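Mechanically, temperature divides the log-scores before they are renormalized into probabilities. A minimal Python sketch using the illustrative pizza/tteokbokki numbers above (the counts are invented for illustration, as the article itself notes):

```python
import math

# Illustrative next-token counts after "My favorite food is".
counts = {"pizza": 70, "tteokbokki": 15, "sushi": 15}

def next_token_probs(temperature):
    # Dividing log-scores by T before renormalizing: T < 1 sharpens the
    # distribution (conservative), T > 1 flattens it (more novel picks).
    scaled = {w: math.log(c) / temperature for w, c in counts.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {w: math.exp(v) / z for w, v in scaled.items()}

print(round(next_token_probs(1.0)["pizza"], 2))  # 0.7, the raw frequency
print(round(next_token_probs(0.5)["pizza"], 2))  # 0.92: sharper
print(round(next_token_probs(5.0)["pizza"], 2))  # 0.4: flatter
```

Even at low temperature the less-common tokens keep a non-zero probability, so sampling can still occasionally pick them.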

To be clear, these are made up statistics for the purpose of illustration—if we aren’t training a model ourselves, we cannot know what the actual percentage of occurrence is for these kinds of things (unless the people doing the training offer to share that information, which is rare).

A model with a low temperature is more predictable, whereas a model with a high temperature will be more novel—but also more prone to mistakes. As IBM says: “A high temperature value can make model outputs seem more creative but it's more accurate to think of them as being less determined by the training data.”

Ultimately, the temperature of the model should be determined based on its purpose and acceptable room for error. If you’re using an AI model in a professional application to answer questions about a company’s products, you probably want a very low temperature; the tolerance for error in that situation is low, and you don’t want the AI to offer less-common results. However, if you’re using a model personally to help you brainstorm D&D campaign ideas, a higher temperature could offer you less common suggestions (plus, you’re probably less bothered in this situation by results that don’t make sense).

Regardless of temperature, however, it’s important to acknowledge that if content is included in the training data, there’s some chance (no matter how low) that it will be selected for inclusion in a model’s response. Even with a very low temperature model, there’s still a non-zero chance that it will choose the less popular answer. Why not just always set models at the most conservative temperature? Mostly because, at that point, we could just program a set of dedicated responses—most users of LLMs (and generative AI models, in general) want the “intelligence” that comes with not getting exactly the same answer every time. After all, LLMs aren’t retrieving sentences from training data via a lookup-table; their primary benefit is in their ability to generate new sequences token-by-token based on what they’ve “learned.”

Bias

Finally, it’s worth noting that this also plays into how bias occurs in AI systems. To return to the food example we used when discussing temperature: it’s entirely possible for us to curate a dataset in which “tteokbokki” occurs more often than “pizza” and then train a model on that. In that case, if we were to ask the model about the food most people like the best, it would be more likely to say “tteokbokki” even though that’s (probably) not reflective of the general population.

Obviously, this is less of a concerning issue if we’re just talking about food—but more concerning for issues related to sex, gender, race, disability and more. If a model is trained on data where doctors are more often referred to with he/him pronouns, it will in turn be more likely to return content identifying doctors as male. If slurs or hate speech are included in significant percentages, that content will be returned by the model at a level reflective of its training data (unless actively mitigated, as described below). This can be further reinforced by feedback and responses from users that are referenced by the model as context or in post-training.

As you might imagine, this is a common issue for models trained on information scraped from the internet: from chat logs, message boards, forums and more. It is possible to counteract this by excluding harmful content from the training data or by including data that intentionally balances occurrences of specific content (i.e., including the phrases “She is a doctor.” and “They are a doctor.” at equal percentages to “He is a doctor.”). It can also (sometimes) be filtered on the output side, by building in checks for specific words and prompting the model to re-create the response if it includes forbidden content. However, this must be an intentional choice implemented by those responsible for creating the training data and maintaining the model.


Semantic Kernel in C#: Complete AI Orchestration Guide


Master Semantic Kernel in C# with this complete guide. Learn plugins, agents, RAG, and vector stores to build production AI applications with .NET.


MSSQL Extension for VS Code: Query Profiler, ADS Migration Toolkit & More


The MSSQL Extension for VS Code continues to evolve, delivering features that make SQL development more integrated, more powerful, and more developer-friendly. In version v1.40.0, we’re introducing the ADS Migration Toolkit, Basic Database Management, Flat File Import, Database Backup & Restore, Database Object Search, and Query Profiler — six capabilities that bring seamless Azure Data Studio migration, essential database operations, and real-time performance monitoring directly into your development workflow inside Visual Studio Code.

What’s new in MSSQL extension for VS Code v1.40

This release introduces six major capabilities designed to streamline the SQL development experience:

  • ADS Migration Toolkit — Migrate your saved connections, connection groups, settings, and key bindings from Azure Data Studio to VS Code in a single guided flow. Includes the MSSQL Database Management Keymap companion extension for ADS-aligned shortcuts like F5 to run queries, Ctrl+L for estimated plans, and more.
  • Database Object Search — Find tables, views, stored procedures, and functions across your databases instantly. Search by name or use type prefixes (t:, v:, sp:, f:), filter by schema, switch databases from a dropdown, and script objects directly from the results panel.
  • Basic Database Management (Public Preview) — Create, rename, and drop databases directly from the Object Explorer, with support for advanced options like collation, recovery model, and compatibility level. Generate T-SQL scripts for any operation.
  • Flat File Import (Public Preview) — Import CSV and TXT files into a new SQL Server table with a step-by-step guided wizard. Automatically infer column names and data types, customize the schema before import, and set primary keys and nullability — all without leaving VS Code.
  • Database Backup & Restore (Public Preview) — Back up databases to disk or Azure Blob Storage with support for full, differential, and transaction log backup types. Restore from existing backup sets, .bak files, or Azure Blob Storage URLs, with options to drop active connections and manage backup history.
  • Query Profiler (Public Preview) — Capture and monitor real-time database activity powered by Extended Events, directly inside VS Code. Select from profiling templates, apply column filters, manage multiple concurrent sessions, export captured events to CSV, and open existing .xel files — all without switching tools. Supported for SQL Server (on-premises and cloud) and Azure SQL Database.

ADS Migration Toolkit 

First, the Azure Data Studio Migration Toolkit helps users smoothly transition from Azure Data Studio to the MSSQL extension for Visual Studio Code by importing their existing environment. It preserves key elements such as database connections, connection groups, supported settings, and familiar SQL keybindings, allowing users to continue working productively in VS Code with minimal disruption.

Key highlights 

  • Imports existing database connections and connection groups from Azure Data Studio into the MSSQL Object Explorer, maintaining organization and metadata like server, database, and authentication details.
  • Migrates supported editor and SQL-related settings to align behavior between Azure Data Studio and VS Code.
  • Preserves familiar workflows by enabling ADS-style SQL keybindings via the MSSQL Database Management Keymap extension.
  • Provides a guided migration flow through the MSSQL extension, allowing users to selectively import connections, groups, settings, and keybindings.

migration dialog image

Database Object Search

Additionally, the Database Object Search in the MSSQL extension for Visual Studio Code lets you quickly find tables, views, functions, and stored procedures.

To open Database Object Search, right-click on the server or database node in the object explorer and select Search Database Objects from the menu. This opens a searchable list of objects in the selected database.

In the search view, type an object name (partial matches work) or use type prefixes—t: (table), v: (view), f: (function), sp: (stored procedure)—for example, t:<TableName> or sp:<StoredProcedureName>. You can also switch databases from the left dropdown, filter by type or schema, and refresh results.
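Functionally, a filtered search such as `t:Order` behaves much like a query against the system catalog. A rough T-SQL equivalent, sketched here for illustration (the search term and type filter are examples, not the extension's actual implementation), would be:

```sql
-- Approximate what t:Order does: find user tables whose name matches the term
SELECT s.name AS [schema], o.name AS [object], o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type = 'U'                 -- 'U' = user table; 'V', 'P', 'FN' cover views, procs, functions
  AND o.name LIKE N'%Order%'       -- partial matches, as in the search box
ORDER BY s.name, o.name;
```

Running the same query with a different `o.type` filter mirrors the `v:`, `sp:`, and `f:` prefixes.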

database object search view image

Each result row includes an Actions menu (…) with common operations like full scripting options, Edit Data, Modify Data, Copy Object Name, etc.

database object search actions image

Basic Database Management (Preview)

Next, Database Management Tools (Preview) in the MSSQL extension let you perform common database administration tasks directly from the editor UI, reducing context switching and simplifying day‑to‑day database management. The preview focuses on essential operations—creating, renaming, and dropping databases—through guided dialogs that surface relevant information and safeguards.

Key highlights 

  • Create new databases quickly from a dialog where you specify the database name and owner, with the database appearing immediately in the server’s Databases list.
  • Rename existing databases in place without leaving the editor, with simple confirm and cancel controls and automatic refresh in the object tree.
  • Drop databases through a dedicated dialog that clearly shows database details.
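Since each dialog can generate the corresponding T-SQL script, the operations above map roughly to statements like these (database and owner names are placeholders):

```sql
-- Create a database and set its owner
CREATE DATABASE [SalesDb];
ALTER AUTHORIZATION ON DATABASE::[SalesDb] TO [sa];

-- Rename a database in place
ALTER DATABASE [SalesDb] MODIFY NAME = [SalesDbArchive];

-- Drop a database, first closing any active connections
ALTER DATABASE [SalesDbArchive] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SalesDbArchive];
```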

Flat File Import (Preview)

Furthermore, the Import Flat File (Preview) feature in the MSSQL extension lets you quickly create a new SQL Server table and load data from structured text files through a guided, end‑to‑end workflow. The experience analyzes the file, previews inferred schema and data, allows schema adjustments, and completes table creation and data import in a single flow, reducing manual setup and scripting.

Key highlights 

  • Guided wizard that analyzes flat files and previews inferred schema and sample data before import.
  • Supports common text‑based formats, including .csv and .txt files containing tabular data.
  • Allows schema customization, including column names, data types, primary keys, and nullability.
  • Creates a new table and imports data in one continuous workflow from the MSSQL extension UI.
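As an illustration, for a CSV with columns such as OrderId, OrderDate, and Amount, the schema the wizard infers and lets you adjust might resemble the following (table name, column names, and types are hypothetical):

```sql
-- Hypothetical table generated by the Import Flat File wizard
CREATE TABLE dbo.SalesImport (
    OrderId   INT            NOT NULL PRIMARY KEY,  -- set as primary key in the wizard
    OrderDate DATE           NULL,
    Amount    DECIMAL(18, 2) NULL,
    Region    NVARCHAR(50)   NULL                   -- type and width inferred from sample rows
);
```

In the wizard you can change any inferred type, toggle nullability, or pick a different primary key before the table is created and the data loaded.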

flat file dialog image

Database Backup and Restore (Preview)

In addition, Database Backup and Restore (Preview) in the MSSQL extension for Visual Studio Code provides a guided, UI‑based experience for performing SQL Server backup and restore operations directly from Object Explorer. These preview features reduce the need for manual scripting while still allowing users to generate equivalent T‑SQL when needed, and support both local and Azure‑based workflows.

Key highlights 

  • Launch Backup Database and Restore Database directly from the database context menu in Object Explorer.
  • Create full, differential, or transaction log backups, with optional copy‑only support that does not affect the backup chain.
  • Save backups either to disk or to Azure Blob Storage using a guided Azure account and storage selection flow.
  • Restore databases from existing backup history, local .bak files, or Azure Blob Storage URLs, with visibility into available backup sets and metadata.
  • Generate the equivalent T‑SQL script for backup and restore actions directly from the dialogs.
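Because the dialogs can emit the equivalent T-SQL, the backup types above correspond roughly to statements like these (paths and database names are illustrative):

```sql
-- Full backup to disk
BACKUP DATABASE [SalesDb]
TO DISK = N'C:\Backups\SalesDb_full.bak'
WITH INIT, NAME = N'SalesDb full backup';

-- Differential backup (changes since the last full backup)
BACKUP DATABASE [SalesDb]
TO DISK = N'C:\Backups\SalesDb_diff.bak'
WITH DIFFERENTIAL;

-- Transaction log backup
BACKUP LOG [SalesDb]
TO DISK = N'C:\Backups\SalesDb_log.trn';

-- Copy-only full backup that does not affect the backup chain
BACKUP DATABASE [SalesDb]
TO DISK = N'C:\Backups\SalesDb_copyonly.bak'
WITH COPY_ONLY;

-- Restore from a .bak file, replacing the existing database
RESTORE DATABASE [SalesDb]
FROM DISK = N'C:\Backups\SalesDb_full.bak'
WITH REPLACE, RECOVERY;
```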

restore dialog image

Query Profiler

Finally, the MSSQL extension now includes Query Profiler, a real-time database activity monitor powered by Extended Events, directly inside VS Code. Query Profiler replaces the need to switch to external profiling tools by letting you capture, filter, and analyze live T-SQL activity without leaving your editor.

To try it, right-click any server in Object Explorer and select Launch Query Profiler (Preview), or run MSSQL: Launch Query Profiler from the Command Palette.

Key highlights

  • Real-time event streaming – Capture live query activity in a scrollable events grid with columns like EventClass, TextData, Duration, and DatabaseName.
  • Built-in templates – Choose from five profiling templates (Standard_OnPrem, TSQL_OnPrem, TSQL_Locks, TSQL_Duration, Standard_Azure) with automatic selection for Azure SQL Database targets.
  • Column filters – Filter events by text values, numeric thresholds, or search across all columns with the quick filter.
  • Multiple concurrent sessions – Run several profiling sessions at once, each with its own connection and template, and switch between them with the session selector.
  • Export to CSV – Save captured events for offline review or sharing.
  • Open XEL files – Load existing Extended Events trace files (.xel) into the Profiler grid for review with full filtering capabilities.
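The profiling templates are built on Extended Events sessions. A minimal hand-rolled session that captures completed batches, sketched here for illustration (the session name is hypothetical, and the actual templates collect richer event and action sets), would look like:

```sql
-- Minimal Extended Events session capturing completed T-SQL batches
CREATE EVENT SESSION [ProfilerSketch] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed (
        ACTION (sqlserver.database_name, sqlserver.client_app_name)
    )
    ADD TARGET package0.ring_buffer;   -- in-memory target; Query Profiler streams events live instead

ALTER EVENT SESSION [ProfilerSketch] ON SERVER STATE = START;
-- ... workload runs, events are captured ...
ALTER EVENT SESSION [ProfilerSketch] ON SERVER STATE = STOP;
DROP EVENT SESSION [ProfilerSketch] ON SERVER;
```

Query Profiler manages the session lifecycle for you; the sketch is only to show what the underlying mechanism looks like in T-SQL.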

query profiler context menu image

query profiler filters image

Query Profiler helps developers identify slow queries, spot performance bottlenecks, and understand how their application interacts with the database during development and testing. It works with SQL Server (on-premises, VMs, containers) and Azure SQL Database.


SQL Database Projects Publish Dialog (GA)

We’ve made some updates to SQL Database Projects in VS Code that streamline database development workflows. 

The Publish Dialog is now generally available. You can configure connections, review deployment options, and publish schema changes directly from the editor without writing SqlPackage commands manually. For teams using CI/CD, the dialog generates the equivalent command that you can drop into your pipeline scripts. 

We’ve also expanded item templates to cover more database object types. When you add a stored procedure, view, table, or other schema object to your project, you’ll start with structured code instead of an empty file, making it faster to build out your database schema. 

Both features are available in the latest version of the SQL Database Projects extension for VS Code. 

Other updates

  • SQL Server 2025 containers on ARM — Restored support for SQL Server 2025 containers on ARM-based macOS devices.
  • SQL Database Projects bug fixes — Fixed several issues related to database references and building projects in VS Code.
  • Quality improvements — Enhanced performance and usability of the query results grid, fixed accessibility issues affecting error messages and UI feedback, and addressed edge case errors in GitHub Copilot Agent Mode when switching connections.

Conclusion

The v1.40 release introduces the ADS Migration Toolkit, Basic Database Management, Flat File Import, Database Backup & Restore, Database Object Search, and Query Profiler — six major updates that streamline Azure Data Studio migration, essential database operations, and real-time performance monitoring. Together, these capabilities make the MSSQL extension more powerful, more integrated, and more developer-friendly than ever.

If there’s something you’d love to see in a future update, here’s how you can contribute:

  • 💬 GitHub discussions – Share your ideas and suggestions to improve the extension
  • ✨ New feature requests – Request missing capabilities and help shape future updates
  • 🐞 Report bugs – Help us track down and fix issues to make the extension more reliable


Thanks for being part of the journey—happy coding! 🚀

The post MSSQL Extension for VS Code: Query Profiler, ADS Migration Toolkit & More appeared first on Azure SQL Dev Corner.

When You Think You’re Done


Suggested reading: The Pros & Cons Of Plotting & Pantsing

The post When You Think You’re Done appeared first on Writers Write.
