Modern data migrations rarely involve a simple lift-and-shift; they require transformation, cleansing, and enrichment so applications can immediately leverage the destination platform’s strengths. Couchbase Capella’s Eventing service enables event-driven, inline transformations as data arrives, allowing teams to reshape schemas, normalize values, enrich with metadata, and prepare documents for SQL++, Search, Analytics, and mobile sync from the outset.

A representative Airbnb listing document illustrates common transformation needs: decimal normalization, identifier handling, nested fields, and flattening for query performance.
Example fields include listing_url, host metadata, address with coordinates, and decimal wrappers for fields like bathrooms and price using the MongoDB extended JSON format.
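For reference, a trimmed sketch of such a listing in MongoDB extended JSON form might look like the following (the field values here are illustrative, not taken verbatim from the dataset):

```javascript
// Illustrative listing document in MongoDB extended JSON form:
// decimal values arrive wrapped in $numberDecimal, coordinates are
// nested under address.location, and host metadata is a sub-document.
const sampleListing = {
  _id: "10006546",
  listing_url: "https://www.airbnb.com/rooms/10006546",
  name: "Ribeira Charming Duplex",
  property_type: "House",
  bathrooms: { $numberDecimal: "1.0" },
  price: { $numberDecimal: "80.00" },
  host: {
    host_id: "51399391",
    host_name: "Ana",
    host_location: "Porto, Portugal"
  },
  address: {
    street: "Porto, Porto, Portugal",
    country: "Portugal",
    location: { type: "Point", coordinates: [-8.61308, 41.1413] }
  }
};

// The $numberDecimal wrapper is why doc.price is not yet a number:
console.log(typeof sampleListing.price);                     // "object"
console.log(parseFloat(sampleListing.price.$numberDecimal)); // 80
```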
Export the documents to JSON using mongoexport with --jsonArray to produce a clean array for batch import into Couchbase.

Syntax example:
mongoexport \
  --uri="mongodb+srv://cluster0.xxxx.mongodb.net/test" \
  --username=Test \
  --password=Test_123 \
  --authenticationDatabase admin \
  --collection=listingAndReviews \
  --out=listingAndReviews.json \
  --jsonArray
Code: Eventing function (OnUpdate)
function OnUpdate(doc, meta) {
  try {
    // Directly process every document mutation in the source bucket
    var newId = doc._id ? doc._id.toString() : meta.id;
    var transformedDoc = {
      listingId: newId,
      url: doc.listing_url,
      title: doc.name,
      summary: doc.summary,
      type: doc.property_type,
      room: doc.room_type,
      accommodates: doc.accommodates,
      bedrooms: doc.bedrooms,
      beds: doc.beds,
      bathrooms: parseFloat(doc.bathrooms?.$numberDecimal || doc.bathrooms) || null,
      price: parseFloat(doc.price?.$numberDecimal || doc.price) || null,
      picture: doc.images?.picture_url,
      host: {
        id: doc.host?.host_id,
        name: doc.host?.host_name,
        location: doc.host?.host_location
      },
      address: {
        street: doc.address?.street,
        country: doc.address?.country,
        coordinates: doc.address?.location?.coordinates
      },
      migrated_at: new Date().toISOString()
    };
    // Write the transformed document under the normalized key in the destination bucket
    dst_bucket[newId] = transformedDoc;
  } catch (e) {
    log("Error during transformation:", e);
  }
}
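Outside Capella, the transformation logic can be smoke-tested in plain Node.js by stubbing the Eventing bindings (`dst_bucket` and `log`). A minimal sketch follows; the function body is trimmed to the decimal-normalization fields, and the sample mutation values are hypothetical:

```javascript
// Stub the Eventing bindings so the handler runs outside Capella.
var dst_bucket = {};                    // stands in for the destination bucket binding
function log() { console.log.apply(console, arguments); }

// Trimmed version of the handler above, covering decimal normalization only.
function OnUpdate(doc, meta) {
  var newId = doc._id ? doc._id.toString() : meta.id;
  dst_bucket[newId] = {
    listingId: newId,
    bathrooms: parseFloat(doc.bathrooms?.$numberDecimal || doc.bathrooms) || null,
    price: parseFloat(doc.price?.$numberDecimal || doc.price) || null
  };
}

// Feed one sample mutation shaped like the exported extended JSON.
OnUpdate(
  { _id: "10006546", bathrooms: { $numberDecimal: "1.5" }, price: { $numberDecimal: "80.00" } },
  { id: "10006546" }
);

console.log(dst_bucket["10006546"].bathrooms); // 1.5
console.log(dst_bucket["10006546"].price);     // 80
```

This makes it easy to iterate on field mappings locally before deploying the full function to the Eventing service.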
Ingest exported JSON into a temporary collection (listingAndReviewsTemp) using cbimport with list format and Capella’s TLS certificate.
Syntax example:
cbimport json \
  -c couchbases://cb.xxxx.cloud.couchbase.com \
  -u MyUser \
  -p MyPassword \
  --bucket Test \
  --scope sample_airbnb \
  --collection listingAndReviewsTemp \
  --format list \
  --file listingAndReviews.json \
  --cacert MyCert.pem

Keep the destination collection empty during this phase—Eventing will populate it post-transformation.
Event-driven migration turns a one-time port into a durable transformation pipeline that produces clean, query-ready data in Capella with minimal post-processing work. By exporting from MongoDB, importing into a temp collection, and applying a controlled Eventing transform, the destination model is ready for SQL++, Search, Analytics, and App Services on day one.
Spin up a Capella environment and test this pattern end-to-end with a small sample to validate mappings, performance, and query behavior before scaling.
Sign up for a free tier cluster to run your experiment today!

The post Event-Driven Data Migration & Transformation using Couchbase Eventing Service appeared first on The Couchbase Blog.
We’re working on a new feature to display direct results in your address bar as you type, so that you can skip the results page and get to the right site or answer faster.
Every major browser today supports a feature known as “search suggestions.” As you type in the address bar, your chosen search engine offers real-time suggestions for searches you might want to perform.

This is a helpful feature, but these suggestions always take you to a search engine results page, not necessarily the information or website you’re ultimately looking for. This is ideal for the search provider, but not always best for the user.
For example, flight status summaries on a search results page are convenient, but it would be more convenient to show that information directly in the address bar:

Similarly, people commonly search for a website when they don’t know or remember the exact URL. Why not skip the search?

Another common use case is searching for recommendations, where Firefox can show highly relevant results from sources around the web:

The truth is, browser address bars today are largely a conduit to your search engine. And while search engines are very useful, a single and centralized source for finding everything online is not how we want the web to work. Firefox is proudly independent, and our address bar should be too.
We experimented with the concept several years ago, but didn’t ship it [1] because we have an extremely high standard for privacy and weren’t satisfied with any design that would send your raw queries directly to us. Even though these are already sent to your search engine, Firefox is built on the principle that even Mozilla should not be able to learn what you do online. Unlike most search engines, we don’t want to know who’s searching for what, and we want to enable anyone in the world to verify that we couldn’t know even if we tried.
We now have the technical architecture to meet that bar. When Firefox requests suggestions, it encrypts your query using a new protocol we helped design called Oblivious HTTP. The encrypted request goes to a relay operated by Fastly, which can see your IP address but not the text. Mozilla can see the text, but not who it came from. We can then return a result directly or fetch one from a specialized search service. No single party can connect what you type to who you are.
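The split-knowledge property described above can be illustrated with a toy model. This is emphatically not the real Oblivious HTTP protocol (which uses HPKE public-key encryption); the base64 "encryption", party names, and query text below are simplified stand-ins chosen only to make the information flow visible:

```javascript
// Toy model of the Oblivious HTTP trust split: the relay handles identity
// but only ciphertext; the gateway decrypts but never sees the sender.
// NOT real OHTTP; base64 stands in for HPKE encryption for readability.
function encryptForGateway(query) {
  return Buffer.from(query, "utf8").toString("base64"); // ciphertext stand-in
}

function relay(request) {
  // The relay sees who is asking (the IP) but only opaque bytes.
  const seenByRelay = { ip: request.ip, payload: request.payload };
  // It forwards the payload with the sender identity stripped.
  return { seenByRelay, forwarded: { payload: request.payload } };
}

function gateway(forwarded) {
  // The gateway decrypts the query but receives no identity.
  const query = Buffer.from(forwarded.payload, "base64").toString("utf8");
  return { seenByGateway: { query } };
}

const request = { ip: "203.0.113.7", payload: encryptForGateway("flight UA 42 status") };
const { seenByRelay, forwarded } = relay(request);
const { seenByGateway } = gateway(forwarded);

console.log(seenByRelay.payload.includes("flight")); // false: relay never sees plaintext
console.log("ip" in forwarded);                      // false: gateway never sees the IP
console.log(seenByGateway.query);                    // "flight UA 42 status"
```

No single party in this sketch can connect the query text to the IP address, which is the property the real protocol provides cryptographically.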

Firefox will continue to show traditional search suggestions for all queries and add direct results only when we have high confidence they match your intent. As with search engines, some of these results may be sponsored to support Firefox, but only if they’re highly relevant, and neither we nor the sponsor will know who they’re for. We expect this to be useful to users and, hopefully, help level the playing field by allowing Mozilla to work directly with independent sites rather than mediating all web discovery through the search engine.
Running this at scale is not trivial. We need the capacity to handle the volume and servers close to people to avoid introducing noticeable latency. To keep things smooth, we are starting in the United States and will evaluate expanding into other geographies as we learn from this experience and observe how the system performs. The feature is still in development and testing and will roll out gradually over the coming year. [2]
[1] We did ship an experimental version that users could enable in settings, as well as a small set of locally-matched suggestions in some regions. Unfortunately, the former had too little reach to be worth building features for, and the latter had very poor relevance and utility due to the technical limitations (most notably, the size of the local database).
[2] Where the feature is available, you can disable it by unchecking “Retrieve suggestions as you type” in the “Search” pane in Firefox settings. If this box is not yet available in your version of Firefox, you can pre-emptively disable it by setting browser.urlbar.quicksuggest.online.enabled to false in about:config.
The post Better search suggestions in Firefox appeared first on The Mozilla Blog.
As both developers and stewards of significant open source projects, we’re watching AI coding tools create a new problem for open source maintainers.
AI assistants like GitHub Copilot, Cursor, Codex, and Claude can now generate hundreds of lines of code in minutes. This is genuinely useful, but it has an unintended consequence: reviewing machine-generated code is very costly.
The core issue: AI tools have made code generation cheap, but they haven’t made code review cheap. Every incomplete PR consumes maintainer attention that could go toward ready-to-merge contributions.
At Discourse, we’re already seeing this accelerating across our contributor community. In the next year, every engineer maintaining open source projects will face the same challenge.
We need a clearer framework for AI-assisted contributions that acknowledges the reality of limited maintainer time.
A binary system works extremely well here. On one side there are prototypes that simply demonstrate an idea. On the other side there are ready for review PRs that meet a project’s contribution guidelines and are ready for human review.
The new tooling makes it trivial to create a change set and lob it over the fence. It can create a perverse dynamic where project maintainers spend disproportionate effort reviewing lopsided AI-generated code that took contributors seconds to create and will now take many hours to review.
This can be frustrating, time consuming, and demotivating. On one side is a contributor who spent a few minutes fiddling with AI prompts; on the other is an engineer who needs to spend many hours or even days deciphering alien intelligence.
This is not sustainable and is extremely destructive.
AI coding agents such as Claude Code, Codex, Cursor CLI and more have unlocked the ability to ship a “new kind” of change set, the prototype.
The prototype is a live demo. It does not meet a project’s coding standards. It is not code you vouch for or guarantee is good. It lacks tests, may contain security issues and most likely would introduce an enormous amount of technical debt if merged as is.
That said it is a living demo that can help make an idea feel more real. It is also enormously fun.
Think of it as a delightful movie set.
Prototypes are incredibly easy to explore on projects such as Discourse, where enabling tooling exists, using tools like dv:
% dv new my-experiment
% dv branch my-amazing-prototype
% dv ls
total 1
* my-amazing-prototype Running 1 minute ago http://localhost:4200
# finally visit http://localhost:4200 to see in action
Prototypes are great vehicles for exploring ideas. In fact, you can ship multiple prototypes that demonstrate completely different solutions to a single problem, which helps decide on the best approach.
Prototypes, video demos and simple visual mockups are great companions. The prototype has the advantage that you can play with it and properly explore the behavior of a change. The video is faster to consume. Sometimes you may want them all.
If you are vibe coding and prototyping, there are some clear rules you should follow.
Maybe you will be lucky and an idea you had will get buy-in, maybe someone else may want to invest the time to drive a prototype into a production PR.
Prototyping is fun and incredibly accessible. Anyone can do it using local coding agents, or even coding agents on the cloud such as Jules, Codex cloud, Cursor Cloud, Lovable, v0 and many many more.
This heavily lowers the bar needed for prototyping. Product managers can prototype, CEOs can prototype, designers can prototype, etc.
However, this new fun opens a new series of questions you should explore with your team.
When you introduce prototyping into your company you need to negotiate these questions carefully and form internal consensus, otherwise you risk creating large internal attitude divides and resentment.
Prototypes, what are they good for? Absolutely something.
I find prototypes incredibly helpful in my general development practices.
Sadly, as the year progresses, I expect many open source projects to receive many prototype-level PRs. Not everyone will have read this blog post or even agree with it.
As a maintainer dealing with external contributions:
A ready-to-review PR is the traditional PR we submit.
We reviewed all the machine-generated code and vouch for all of it. We ran the tests and like them, we like the code structure, we read every single line of code carefully, and we made sure the PR meets the project’s guidelines.
All the questionable code the agents generated along the way has been fixed, and we are happy to stamp our very own personal brand on the code.
Projects tend to have a large set of rules around code quality, code organisation, testing and more.
We may have used AI assistance to generate a ready-to-review PR; fundamentally, though, this does not matter: we vouch for the code and stand behind it meeting both our brand and the project’s guidelines.
The distance from a prototype to a ready-to-review PR can be deceptively vast. It may take days of engineering to turn a complex prototype into production-ready code.
This large distance was communicated as well by Andrej Karpathy in the Dwarkesh Podcast.
For some kinds of tasks and jobs and so on, there’s a very large demo-to-product gap where the demo is very easy, but the product is very hard.
…
For example, in software engineering, I do think that property does exist. For a lot of vibe coding, it doesn’t. But if you’re writing actual production-grade code, that property should exist, because any kind of mistake leads to a security vulnerability or something like that.
A Veracode survey found that only 55% of generation tasks resulted in secure code.
Our models are getting better by the day, and everything depends on an enormous number of parameters, but the core message stands: LLMs can and do generate insecure code.
The root cause for the distance between project guidelines and a prototype is AI alien intelligence.
Many engineers I know fall into two camps: one camp finds the new class of LLMs intelligent, groundbreaking, and shockingly good; the other camp thinks of all LLM-generated content as “the emperor’s new clothes”: the code they generate is “naked”, fundamentally flawed, and poison.
I like to think of the new systems as neither. I like to think about the new class of intelligence as “Alien Intelligence”. It is both shockingly good and shockingly terrible at the exact same time.
Framing LLMs as “Super competent interns” or some other type of human analogy is incorrect. These systems are aliens and the sooner we accept this the sooner we will be able to navigate the complexity that injecting alien intelligence into our engineering process leads to.
Over the past few months I have been playing a lot with AI agents. One project I am particularly proud of is dv. It is a container orchestrator for Discourse, that makes it easy to use various AI agents with Discourse.
I will often run multiple complete and different throwaway Discourse environments on my machines to explore various features. This type of tooling excels at vibe engineering prototypes.
Interestingly, dv was mostly built using AI agents with very little human intervention, and some of the code is a bit off brand. That said, unlike Discourse or many of the other open source gems I maintain, it is a toy project.
Back on topic, dv has been a great factory for prototypes on Discourse. This has been wonderful for me. I have been able to explore many ideas while catching up on my emails and discussions on various Discourse sites.
First, you must be respectful of the rules of any project you contribute to; seek them out and read them prior to contributing. For example, Cloud Hypervisor says no to AI-generated code in order to avoid licensing risks.
That said, there is a trend among many developers of banning AI outright. Some go so far as to say “AI not welcome here, find another project.”
This feels extremely counterproductive and fundamentally unenforceable to me. Much of the code AI generates is indistinguishable from human code anyway. You can usually tell when a prototype is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.
The new LLM tooling can be used in a tremendous number of ways, ranging from simple code reviews and renamings within a file to complete change set architecture.
Given the enormous mess and diversity here I think the healthiest approach is to set clear expectations. If I am submitting a PR it should match my brand and be code I vouch for.
As engineers it is our role to properly label our changes. Is our change ready for human review or is it simply a fun exploration of the problem space?
Human code review is increasingly becoming a primary bottleneck in software engineering. We need to be respectful of people’s time and protect our own engineering brands.
Prototypes are fun, and they can teach us a lot about a problem space. But when it comes to sending contributions to a project, treat all code as code you wrote: put your stamp of ownership and approval on whatever you build, and only then send a PR you vouch for.
Happy Monday! The SSMS team is excited to announce the release of SQL Server Management Studio (SSMS) Preview 4. Building on feedback from our community and the momentum of previous previews, this release brings new features, important bug fixes, and further improvements to your SSMS experience.
This release introduces several bug fixes reported via the SSMS Developer Community as well as a few feature enhancements. Learn more about SSMS 22 Preview 4, including how to download and install and what known issues exist, by visiting our documentation site. To update an existing installation of SSMS 22, launch the Visual Studio Installer and select Update or check for updates within SSMS by going to Help > Check for Updates.
Updates to the connection dialog include a new Reset button, allowing you to clear the fields in the Connection Properties section.
Also included in this release: a fix for the feedback item New Query from Object Explorer Does Not Inherit App Name from Highlighted Database.
Additional support for SQL Server 2025 features was added in this release, including options to create Vector and JSON indexes. From Object Explorer, right-click Indexes > New Index > JSON index… or Vector Index… Also in Object Explorer, you can now view information for dimension and base type parameters for Vector data types.
A bunch of snippet files were added, including snippets for creating various index types, creating and altering external models for AI embeddings, and managing security and schema objects. View and manage available snippets by going to Tools > Code Snippets Manager… or insert snippets directly into your TSQL code by right-clicking in the query editor and selecting Insert Snippet…
SSMS 22 Preview 4 includes a few UI improvements, including a fix for a scrolling behavior issue with the query editor. Now, when the results grid has focus but the mouse pointer is hovering over the query editor, you can scroll using the mouse wheel. We also resolved an issue that caused poor color contrast in the status bar when certain custom colors were selected in the Connection Dialog.
We introduced a new Help menu item to resolve an authentication issue that can occur when you are recently added to an Entra ID group and try to connect to a server with the new access. SSMS caches the Entra ID token for a period of time, but you can bypass this refresh limit by going to Help > Clear Entra ID Token Cache and reconnecting to your server to pick up a new Entra ID token.
A few other things were fixed in this release, including:
Thanks for reading! Keep leaving us feedback and suggestions at aka.ms/ssms-feedback. Stay tuned for more updates!
Posted by Rob Orgiu, Android Developer Relations Engineer

We’re excited to announce that Material 3 Adaptive 1.2.0 is now stable!
This release continues to build on the foundations of previous versions, expanding support to more breakpoints for window size classes and new strategies to place display panes automatically.
This stable release is built on top of WindowManager 1.5.0 support for large and extra large breakpoints, and introduces the new reflow and levitate strategies for ListDetailPaneScaffold and SupportingPaneScaffold.
WindowManager 1.5.0 introduced two new breakpoints for the window width size class to support even bigger windows than the Expanded window size class. The Large (L) and Extra-large (XL) breakpoints can be enabled by adding the following parameter to the currentWindowAdaptiveInfo() call in your codebase:
This flag enables the library to also return L and XL breakpoints whenever they’re needed.
Arranging content and display panes in a window is a complex task that needs to take into account many factors, starting with window size. With the new Material 3 Adaptive library, two new technologies can help you achieve an adaptive layout with minimal effort.
With reflow, panes are rearranged when the window size or aspect ratio changes: a second pane is placed beside the first when the window is wide enough, or reflowed underneath the first whenever the window is taller. The technique also applies when the window becomes smaller: content reflows to the bottom.
Reflowing a pane based on the window size
While reflowing is a great option in many cases, there might be situations in which content needs to be either docked to a side of the window or levitated on top of it. The levitate strategy not only docks the content but also allows you to customize features like draggability, resizability, and even the background scrim.
Levitating a pane from the side to the center based on the aspect ratio
Both the reflow and levitate strategies can be declared inside the Navigator constructor using the adaptStrategies parameter, and both strategies can be applied to list-detail and supporting pane scaffolds:
To learn more about how to leverage these new adaptive strategies, see the Material website and the complete sample code on GitHub.