
Qualcomm Announces AI Chips To Compete With AMD and Nvidia

Qualcomm has entered the AI data center chip race with its new AI200 and AI250 accelerators, directly challenging Nvidia and AMD's dominance by promising lower power costs and high memory capacity. CNBC reports: The AI chips are a shift from Qualcomm, which has thus far focused on semiconductors for wireless connectivity and mobile devices, not massive data centers. Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in a system that fills up a full, liquid-cooled server rack. Qualcomm is matching Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow as many as 72 chips to act as one computer. AI labs need that computing power to run the most advanced models. Qualcomm's data center chips are based on the AI parts in Qualcomm's smartphone chips called Hexagon neural processing units, or NPUs. "We first wanted to prove ourselves in other domains, and once we built our strength over there, it was pretty easy for us to go up a notch into the data center level," Durga Malladi, Qualcomm's general manager for data center and edge, said on a call with reporters last week.

Read more of this story at Slashdot.


Event-Driven Data Migration & Transformation using Couchbase Eventing Service


Modern data migrations rarely involve a simple lift-and-shift; they require transformation, cleansing, and enrichment so applications can immediately leverage the destination platform’s strengths. Couchbase Capella’s Eventing service enables event-driven, inline transformations as data arrives, allowing teams to reshape schemas, normalize values, enrich with metadata, and prepare documents for SQL++, Search, Analytics, and mobile sync from the outset.

Objectives

      • Deliver a repeatable, event-driven migration pattern from any relational or non-relational database to Couchbase Capella that transforms data in flight for immediate usability in applications and analytics. In this example, MongoDB Atlas is the source database.
      • Provide a minimal, production-ready reference implementation using cbimport and Capella Eventing to convert source schemas (e.g., decimals, nested structures, identifiers) into query-optimized models
      • Outline operational guardrails, prerequisites, and validation steps so teams can execute confidently with predictable outcomes and rollback options if needed

Why event‑driven migration

      • Inline transformation reduces post-migration rework by applying schema normalization and enrichment as documents arrive, thereby accelerating cutover and lowering risk
      • Eventing functions keep transformations source-controlled and auditable, so changes are consistent, testable, and repeatable across environments
      • The result is Capella-ready data that supports SQL++, Full‑Text Search, Vector Search, Analytics, and App Services without interim refactoring phases

Prerequisites

      • Install MongoDB Database Tools (includes mongoexport, mongoimport, etc.)
      • Download Couchbase CLI
      • Capella paid account and cluster access, with allowed IP addresses configured and the Capella root certificate downloaded and saved as ca.pem
      • Create the following artifacts in Couchbase Capella:
          1. A bucket named Test
          2. A scope named sample_airbnb under the Test bucket
          3. Two collections named listingAndReviewsTemp and listingAndReviews
          4. A JavaScript Eventing function named dataTransformation
            Watch the videos below to see the Capella setup and cluster access steps.
      • Credentials with read/write access to target bucket/scope/collections and CLI tools installed for cbimport and MongoDB export utilities.
      • Connection strings for MongoDB Atlas (source) and Couchbase Capella (target), plus a temporary collection for initial ingestion before transformation.

Source example using MongoDB Atlas

A representative Airbnb listing document illustrates common transformation needs: decimal normalization, identifier handling, nested fields, and flattening for query performance.

Example fields include listing_url, host metadata, address with coordinates, and decimal wrappers for fields like bathrooms and price using the MongoDB extended JSON format.
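
For illustration, a trimmed source document in MongoDB extended JSON might look like the following (the field names match those referenced by the transformation later in this post; the values are hypothetical):

{
  "_id": "10006546",
  "listing_url": "https://www.airbnb.com/rooms/10006546",
  "name": "Charming Riverside Duplex",
  "summary": "Bright two-floor apartment near the river.",
  "property_type": "House",
  "room_type": "Entire home/apt",
  "accommodates": 8,
  "bedrooms": 3,
  "beds": 5,
  "bathrooms": { "$numberDecimal": "1.0" },
  "price": { "$numberDecimal": "80.00" },
  "images": { "picture_url": "https://example.com/photo.jpg" },
  "host": {
    "host_id": "51399391",
    "host_name": "Ana",
    "host_location": "Porto, Portugal"
  },
  "address": {
    "street": "Porto, Porto, Portugal",
    "country": "Portugal",
    "location": { "type": "Point", "coordinates": [ -8.61308, 41.1413 ] }
  }
}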

Eventing transformation pattern

      • Use a temporary collection as the Eventing source (listingAndReviewsTemp) and a destination collection (listingAndReviews) for the transformed documents to keep migration idempotent and testable.
      • Convert MongoDB extended JSON decimals to native numbers, rename fields for domain readability, derive a Couchbase key from the original _id, and stamp documents with migrated_at.

Step 1: Export from MongoDB

Export documents to JSON using mongoexport with --jsonArray to produce a clean list for batch import into Couchbase.

Follow along with this video of the Mongo export command execution:

Syntax example:

mongoexport \
  --uri="mongodb+srv://cluster0.xxxx.mongodb.net/test" \
  --username=Test \
  --password=Test_123 \
  --authenticationDatabase admin \
  --collection=listingAndReviews \
  --out=listingAndReviews.json \
  --jsonArray

Step 2: Deploy Eventing

      • Configure the Eventing function with the temp collection as source (listingAndReviewsTemp) and the target collection (listingAndReviews) as the destination, then deploy to transform and write documents automatically.
      • Monitor success metrics and logs in Eventing; verify counts and random samples in Data Tools to confirm fidelity and schema conformance.
      • Watch the video for setup and deployment

Code: Eventing function (OnUpdate)

function OnUpdate(doc, meta) {
 try {
   // Process every document mutation arriving in the source collection (listingAndReviewsTemp)
   var newId = doc._id ? doc._id.toString() : meta.id;
 
   var transformedDoc = {
      listingId: newId,
      url: doc.listing_url,
      title: doc.name,
      summary: doc.summary,
      type: doc.property_type,
      room: doc.room_type,
      accommodates: doc.accommodates,
      bedrooms: doc.bedrooms,
      beds: doc.beds,
      bathrooms: parseFloat(doc.bathrooms?.$numberDecimal || doc.bathrooms) || null,
      price: parseFloat(doc.price?.$numberDecimal || doc.price) || null,
      picture: doc.images?.picture_url,
      host: {
        id: doc.host?.host_id,
        name: doc.host?.host_name,
        location: doc.host?.host_location
      },
      address: {
        street: doc.address?.street,
        country: doc.address?.country,
        coordinates: doc.address?.location?.coordinates
      },
      migrated_at: new Date().toISOString()
   };
 
   // Write the transformed document under the derived key via the dst_bucket binding alias (the destination collection)
   dst_bucket[newId] = transformedDoc;
 
 } catch (e) {
     log("Error during transformation:", e);
 }
}

Step 3: Import to temporary collection

Ingest exported JSON into a temporary collection (listingAndReviewsTemp) using cbimport with list format and Capella’s TLS certificate.

Syntax example:

cbimport json \
  -c couchbases://cb.xxxx.cloud.couchbase.com \
  -u MyUser \
  -p MyPassword \
  --bucket Test \
  --scope sample_airbnb \
  --collection listingAndReviewsTemp \
  --format list \
  --file listingAndReviews.json \
  --cacert MyCert.pem

Watch the Couchbase data import steps:

Keep the destination collection empty during this phase—Eventing will populate it post-transformation.


Validation checklist

      • Document counts between the source and the transformed destination align within expected variances for filtered fields and transformations
      • Numeric fields parsed from extended JSON (e.g., price, bathrooms) match expected numeric ranges, and keys map one-to-one with original IDs
      • Representative queries in SQL++ (lookup by host, geospatial proximity by coordinates, price range filters) return expected results on the transformed data; see the sample queries after this checklist
      • Documents imported into the listingAndReviewsTemp collection receive generated UUID keys
      • The Eventing function re-keys each transformed document using the original MongoDB _id value, so destination keys match the source identifiers rather than the generated UUIDs
      • Watch the import validation video
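
As a sketch of what those validation queries might look like in SQL++ (using the bucket, scope, and collection names from this walkthrough; the filter values are only examples):

/* Compare document counts between the temp and transformed collections */
SELECT COUNT(*) AS temp_count FROM `Test`.`sample_airbnb`.`listingAndReviewsTemp`;
SELECT COUNT(*) AS final_count FROM `Test`.`sample_airbnb`.`listingAndReviews`;

/* Spot-check that keys follow the original _id and numeric fields were converted */
SELECT META(l).id, l.listingId, l.price, l.bathrooms, l.host.name AS host_name
FROM `Test`.`sample_airbnb`.`listingAndReviews` AS l
WHERE l.price BETWEEN 50 AND 500
LIMIT 10;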

Operational tips

      • Run in small batches first to validate Eventing performance and backfill posture; scale up once transformation throughput is stable (see the export sketch after these tips)
      • Keep the Eventing function versioned; test changes in non-prod with identical collections and a snapshot of export data before promoting
      • Apply a TTL to the temporary listingAndReviewsTemp collection to reduce storage costs. Read more about TTL in the Couchbase docs
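
For example, a first pass might export only a small slice of the source collection before running the full migration (mongoexport's --limit flag caps the number of exported documents; connection values are placeholders, as in the syntax example above):

mongoexport \
  --uri="mongodb+srv://cluster0.xxxx.mongodb.net/test" \
  --username=Test \
  --password=Test_123 \
  --authenticationDatabase admin \
  --collection=listingAndReviews \
  --limit=100 \
  --out=listingAndReviews-sample.json \
  --jsonArray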

Expanded use cases

      • E-commerce: Normalize prices and currencies, enrich with inventory status, and denormalize SKU attributes for fast product detail queries
      • IoT pipelines: Aggregate sensor readings by device/time window and flag anomalies on ingest to reduce downstream processing latency
      • User profiles: Standardize emails/phone numbers, derive geo fields, and attach consent/audit metadata for compliance-ready datasets
      • Multi-database consolidation: Harmonize heterogeneous schemas into a unified model that fits Capella’s SQL++, FTS, and Vector Search features
      • Content catalogs: Flatten nested media metadata, extract searchable keywords, and precompute facets for low-latency discovery experiences
      • Financial records: Convert decimal and date types, attach lineage and reconciliation tags, and route exceptions for manual review on ingest

What’s next

      • Add incremental sync by reusing the temp collection as a landing zone for deltas and letting Eventing upsert into the destination for continuous migration
      • Layer FTS and vector indexes over transformed documents to enable semantic and hybrid search patterns post-cutover without reindexing cycles
      • Continuously stream data from various relational and non-relational sources to Couchbase for live data migration scenarios using data streaming or ETL technologies

Conclusion

Event-driven migration turns a one-time port into a durable transformation pipeline that produces clean, query-ready data in Capella with minimal post-processing work. By exporting from MongoDB, importing into a temp collection, and applying a controlled Eventing transform, the destination model is ready for SQL++, Search, Analytics, and App Services on day one.

Start for free

Spin up a Capella environment and test this pattern end-to-end with a small sample to validate mappings, performance, and query behavior before scaling.

Sign up for a free tier cluster and run your experiment today!

The post Event-Driven Data Migration & Transformation using Couchbase Eventing Service appeared first on The Couchbase Blog.


Better search suggestions in Firefox


We’re working on a new feature to display direct results in your address bar as you type, so that you can skip the results page and get to the right site or answer faster.

Every major browser today supports a feature known as “search suggestions.” As you type in the address bar, your chosen search engine offers real-time suggestions for searches you might want to perform.

A Firefox browser window with a gray gradient background. The Google search bar shows “mozilla.” Google suggestions below include “mozilla firefox,” “mozilla thunderbird,” “mozilla careers,” “mozilla vpn,” and “mozilla foundation.”

This is a helpful feature, but these suggestions always take you to a search engine results page, not necessarily the information or website you’re ultimately looking for. This is ideal for the search provider, but not always best for the user.

For example, flight status summaries on a search results page are convenient, but it would be more convenient to show that information directly in the address bar:

A Firefox browser window with an orange gradient background. The Google search bar shows “ac 8170.” The result displays an Air Canada flight from Victoria (YYJ) to Vancouver (YVR), showing departure and arrival times and that it’s “In flight” or “On time.”

Similarly, people commonly search for a website when they don’t know or remember the exact URL. Why not skip the search?

A Firefox browser window with a green gradient background. The Google search bar shows “mdn.” Below, the top result is “Mozilla Developer Network — Your blueprint for a better internet,” with Google suggestions like “mdn web docs,” “mdn array,” and “mdn fetch.”

Another common use case is searching for recommendations, where Firefox can show highly relevant results from sources around the web:

A Firefox browser window with a gradient pink-to-purple background. The Google search bar shows the query “bike repair boston.” Below it, Google suggestions and a featured result for “Ballantine Bike Shop” appear, showing address, rating, and hours.

The truth is, browser address bars today are largely a conduit to your search engine. And while search engines are very useful, a single and centralized source for finding everything online is not how we want the web to work. Firefox is proudly independent, and our address bar should be too.

We experimented with the concept several years ago, but didn’t ship it [1] because we have an extremely high standard for privacy and weren’t satisfied with any design that would send your raw queries directly to us. Even though these are already sent to your search engine, Firefox is built on the principle that even Mozilla should not be able to learn what you do online. Unlike most search engines, we don’t want to know who’s searching for what, and we want to enable anyone in the world to verify that we couldn’t know even if we tried.

We now have the technical architecture to meet that bar. When Firefox requests suggestions, it encrypts your query using a new protocol we helped design called Oblivious HTTP. The encrypted request goes to a relay operated by Fastly, which can see your IP address but not the text. Mozilla can see the text, but not who it came from. We can then return a result directly or fetch one from a specialized search service. No single party can connect what you type to who you are.

A simple black-and-white diagram with three rounded rectangles labeled “Firefox,” “Relay (Operated by Fastly),” and “Mozilla.” Double arrows connect them, showing a two-way flow between Firefox ↔ Relay ↔ Mozilla.

Firefox will continue to show traditional search suggestions for all queries and add direct results only when we have high confidence they match your intent. As with search engines, some of these results may be sponsored to support Firefox, but only if they’re highly relevant, and neither we nor the sponsor will know who they’re for. We expect this to be useful to users and, hopefully, help level the playing field by allowing Mozilla to work directly with independent sites rather than mediating all web discovery through the search engine.

Running this at scale is not trivial. We need the capacity to handle the volume and servers close to people to avoid introducing noticeable latency. To keep things smooth, we are starting in the United States and will evaluate expanding into other geographies as we learn from this experience and observe how the system performs. The feature is still in development and testing and will roll out gradually over the coming year. [2]


[1] We did ship an experimental version that users could enable in settings, as well as a small set of locally-matched suggestions in some regions. Unfortunately, the former had too little reach to be worth building features for, and the latter had very poor relevance and utility due to the technical limitations (most notably, the size of the local database).

[2] Where the feature is available, you can disable it by unchecking “Retrieve suggestions as you type” in the “Search” pane in Firefox settings. If this box is not yet available in your version of Firefox, you can pre-emptively disable it by setting browser.urlbar.quicksuggest.online.enabled to false in about:config.

Take control of your internet

Download Firefox

The post Better search suggestions in Firefox appeared first on The Mozilla Blog.


Your vibe coded slop PR is not welcome


As both developers and stewards of significant open source projects, we’re watching AI coding tools create a new problem for open source maintainers.

AI assistants like GitHub Copilot, Cursor, Codex, and Claude can now generate hundreds of lines of code in minutes. This is genuinely useful, but it has an unintended consequence: reviewing machine generated code is very costly.

The core issue: AI tools have made code generation cheap, but they haven’t made code review cheap. Every incomplete PR consumes maintainer attention that could go toward ready-to-merge contributions.

At Discourse, we’re already seeing this accelerating across our contributor community. In the next year, every engineer maintaining open source projects will face the same challenge.

We need a clearer framework for AI-assisted contributions that acknowledges the reality of limited maintainer time.

A binary system works extremely well here. On one side there are prototypes that simply demonstrate an idea. On the other side there are ready for review PRs that meet a project’s contribution guidelines and are ready for human review.

The lack of proper labeling and rules is destructive to the software ecosystem

The new tooling is making it trivial to create a change set and lob it over the fence. It can introduce a perverse system where project maintainers spend disproportionate effort reviewing lopsided AI generated code that took seconds for contributors to create and now will take many hours to review.

This can be frustrating, time-consuming and demotivating. On one side there is a contributor who spent a few minutes fiddling with AI prompts; on the other side you have an engineer who needs to spend many hours or even days deciphering alien intelligence.

This is not sustainable and is extremely destructive.

The prototype

AI coding agents such as Claude Code, Codex, Cursor CLI and more have unlocked the ability to ship a “new kind” of change set, the prototype.

The prototype is a live demo. It does not meet a project’s coding standards. It is not code you vouch for or guarantee is good. It lacks tests, may contain security issues and most likely would introduce an enormous amount of technical debt if merged as is.

That said it is a living demo that can help make an idea feel more real. It is also enormously fun.

Think of it as a delightful movie set.

Prototypes, especially on projects such as Discourse where enabling tooling exists, are incredibly easy to explore using tools like dv.

% dv new my-experiment
% dv branch my-amazing-prototype
% dv ls
total 1
* my-amazing-prototype Running 1 minute ago http://localhost:4200

# finally visit http://localhost:4200 to see in action

Prototypes are great vehicles for exploring ideas. In fact you can ship multiple prototypes that demonstrate completely different solutions to a single problem which help decide on the best approach.

Prototypes, video demos and simple visual mockups are great companions. The prototype has the advantage that you can play with it and properly explore the behavior of a change. The video is faster to consume. Sometimes you may want them all.

If you are vibe coding and prototyping there are some clear rules you should follow

  1. Don’t send pull requests (not even drafts), instead lean on branches to share your machine generated code.
  2. Share a short video AND/OR links to a branch AND/OR quotes of particular interesting code from the prototype in issues / or forum posts.
  3. Show all your cards, explain you were exploring an idea using AI tooling, so people know the nature of the change you are sharing.

Maybe you will be lucky and an idea you had will get buy-in, maybe someone else may want to invest the time to drive a prototype into a production PR.

When should you prototype?

Prototyping is fun and incredibly accessible. Anyone can do it using local coding agents, or even coding agents on the cloud such as Jules, Codex cloud, Cursor Cloud, Lovable, v0 and many many more.

This heavily lowers the bar needed for prototyping. Product managers can prototype, CEOs can prototype, designers can prototype, etc.

However, this new fun opens a new series of questions you should explore with your team.

  • When is a prototype appropriate?
  • How do designers feel about them?
  • Are they distracting? (are links to the source code too tempting)?
  • Do they take away from human creativity?
  • How should we label and share prototypes?
  • Is a prototype forcing an idea to jump the queue?

When you introduce prototyping into your company you need to negotiate these questions carefully and form internal consensus, otherwise you risk creating large internal attitude divides and resentment.

The value of the prototype

Prototypes, what are they good for? Absolutely something.

I find prototypes incredibly helpful in my general development practices.

  • Grep on steroids. I love that prototypes often act as a way of searching through our large code base, isolating all the little areas that may need changing to achieve a change
  • I love communicating in paragraphs, but I am also a visual communicator. I love how easily a well constructed prototype can communicate a design idea I have, despite me not being that good in Figma.
  • I love that there is something to play with. It often surfaces many concerns that could have been missed by a spec. The best prototype is tested, during the test you discover many tiny things that are just impossible to guess upfront.
  • The crazy code LLMs generate is often interesting to me, it can sometimes challenge some of my thinking.

The prototype - a maintainers survival guide

Sadly, as the year progresses, I expect many open source projects to receive many prototype level PRs. Not everyone will have read this blog post or even agree with it.

As a maintainer dealing with external contributions:

  • Protect yourself and your time. Timebox initial reviews of large change sets, focus on determining if it was “vibe coded” vs leaving 100 comments on machine generated code that took minutes to generate.
  • Develop an etiquette for dealing with prototypes pretending to be PRs. Point people at contribution guidelines, give people a different outlet. “I am closing this but this is interesting, head over to our forum/issues to discuss”
  • Don’t feel bad about closing a vibe coded, unreviewed, prototype PR!

The ready to review PR

A ready to review PR is the traditional PR we submit.

We reviewed all the machine generated code and vouch for all of it. We ran the tests and like the tests, we like the code structure, we read every single line of code carefully, and we made sure the PR meets the project’s guidelines.

All the crazy code the agents generated along the way has been fixed, and we are happy to stamp our very own personal brand on the code.

Projects tend to have a large set of rules around code quality, code organisation, testing and more.

We may have used AI assistance to generate a ready to review PR; fundamentally, though, this does not matter: we vouch for the code and stand behind it meeting both our brand and the project’s guidelines.

The distance from a prototype to a ready to review PR can be deceptively vast. There may be days of engineering taking a complex prototype and making it production ready.

This large distance was also described by Andrej Karpathy on the Dwarkesh Podcast.

For some kinds of tasks and jobs and so on, there’s a very large demo-to-product gap where the demo is very easy, but the product is very hard.

For example, in software engineering, I do think that property does exist. For a lot of vibe coding, it doesn’t. But if you’re writing actual production-grade code, that property should exist, because any kind of mistake leads to a security vulnerability or something like that.

A Veracode survey found that only 55% of code generation tasks resulted in secure code (source).

Our models are getting better by the day, and everything really depends on an enormous number of parameters, but the core message stands: LLMs can and do generate insecure code.

On alien intelligence

The root cause for the distance between project guidelines and a prototype is AI alien intelligence.

Many engineers I know fall into two camps. One camp finds the new class of LLMs intelligent, groundbreaking and shockingly good. The other camp thinks of all LLM generated content as “the emperor’s new clothes”: the code they generate is “naked”, fundamentally flawed and poison.

I like to think of the new systems as neither. I like to think about the new class of intelligence as “Alien Intelligence”. It is both shockingly good and shockingly terrible at the exact same time.

Framing LLMs as “Super competent interns” or some other type of human analogy is incorrect. These systems are aliens and the sooner we accept this the sooner we will be able to navigate the complexity that injecting alien intelligence into our engineering process leads to.

Playing to alien intelligence strength, the prototype

Over the past few months I have been playing a lot with AI agents. One project I am particularly proud of is dv. It is a container orchestrator for Discourse, that makes it easy to use various AI agents with Discourse.

I will often run multiple complete and different throwaway Discourse environments on my machines to explore various features. This type of tooling excels at vibe engineering prototypes.

Interestingly, dv was mostly built using AI agents with very little human intervention, and some of the code is a bit off brand. That said, unlike Discourse or many of the other open source gems I maintain, it is a toy project.

Back on topic, dv has been a great factory for prototypes on Discourse. This has been wonderful for me. I have been able to explore many ideas while catching up on my emails and discussions on various Discourse sites.

On banning AI contributions, prototypes and similar

Firstly, you must be respectful of the rules of any project you contribute to; seek them out and read them prior to contributing. For example: Cloud Hypervisor says no AI generated code to avoid licensing risks.

That said, there is a trend among many developers of banning AI. Some go so far as to say “AI not welcome here, find another project.”

This feels extremely counterproductive and fundamentally unenforceable to me. Much of the code AI generates is indistinguishable from human code anyway. You can usually tell a prototype that is pretending to be a human PR, but a real PR a human makes with AI assistance can be indistinguishable.

The new LLM tooling can be used in a tremendous number of ways, from simple code reviews and renamings within a file to complete change set architecture.

Given the enormous mess and diversity here I think the healthiest approach is to set clear expectations. If I am submitting a PR it should match my brand and be code I vouch for.

As engineers it is our role to properly label our changes. Is our change ready for human review or is it simply a fun exploration of the problem space?

Why is this important?

Human code review is increasingly becoming a primary bottleneck in software engineering. We need to be respectful of people’s time and protect our own engineering brands.

Prototypes are fun, and they can teach us a lot about a problem space. But when it comes to sending contributions to a project, treat all code as code you wrote, put your stamp of ownership and approval on whatever you build and only then send a PR you vouch for.


Announcing the Release of SSMS 22 Preview 4


Happy Monday! The SSMS team is excited to announce the release of SQL Server Management Studio (SSMS) 22 Preview 4. Building on feedback from our community and the momentum of previous previews, this release brings new features, important bug fixes, and further improvements to your SSMS experience.

What's new in SSMS 22 Preview 4

This release introduces several bug fixes reported via the SSMS Developer Community as well as a few feature enhancements.  Learn more about SSMS 22 Preview 4, including how to download and install and what known issues exist, by visiting our documentation site. To update an existing installation of SSMS 22, launch the Visual Studio Installer and select Update or check for updates within SSMS by going to Help > Check for Updates.

 

Connection Dialog

Updates to the connection dialog include a new Reset button – allowing you to clear the fields in the Connection Properties section.

Quickly clear the Connection Properties field by selecting Reset.

Also included in this release: a fix for the feedback item New Query from Object Explorer Does Not Inherit App Name from Highlighted Database.

SQL Server 2025 Support

Additional support for SQL Server 2025 features was added in this release, including options to create Vector and JSON indexes. From Object Explorer, right-click Indexes > New Index > JSON index… or Vector Index… Also in Object Explorer, you can now view information for dimension and base type parameters for Vector data types.
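
For reference, the underlying T-SQL looks roughly like the sketch below. This is a non-authoritative sketch based on SQL Server 2025 preview syntax; the table, column, and index names are hypothetical, and the exact index options may differ in your build:

-- Hypothetical table with a vector column and a JSON column (SQL Server 2025 preview types)
CREATE TABLE dbo.Articles
(
    ArticleId INT PRIMARY KEY,
    Embedding VECTOR(768) NOT NULL,
    Metadata  JSON NOT NULL
);

-- Approximate nearest neighbor index over the vector column (preview syntax; options may vary)
CREATE VECTOR INDEX IX_Articles_Embedding
    ON dbo.Articles (Embedding)
    WITH (METRIC = 'cosine', TYPE = 'diskann');

-- JSON index over the JSON column (preview syntax)
CREATE JSON INDEX IX_Articles_Metadata
    ON dbo.Articles (Metadata);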

 

Several snippet files were added, including snippets for creating various index types, creating and altering external models for AI embeddings, and managing security and schema objects. View and manage available snippets by going to Tools > Code Snippets Manager… or insert snippets directly into your TSQL code by right-clicking in the query editor and selecting Insert Snippet.

Query Editor and Status Bar

SSMS 22 Preview 4 includes a few UI improvements, including a fix for a scrolling behavior issue in the query editor. Now, when the results grid has focus but the mouse pointer is hovering over the query editor, you can scroll using the mouse wheel. We also resolved an issue that caused color contrast problems in the status bar when certain custom colors were selected from the Connection Dialog.

 

Scroll the Query Editor using the mouse wheel, even when the Results Window has focus.

Clear Entra ID Token Cache

We introduced a new Help menu item to resolve an authentication issue that can occur when you are recently added to an Entra ID group and try to connect to a server using that new access. SSMS caches the Entra ID token for a period of time, but you can bypass this refresh limit by going to Help > Clear Entra ID Token Cache and re-connecting to your server to pick up a new Entra ID token.

 

Clear your Entra ID Token Cache by navigating to the Help Menu

 

A dialog will confirm your action to clear the Entra ID user token cache before proceeding.

Other Bug Fixes

A few other things were fixed in this release, including:

 

Thanks for reading! Keep leaving us feedback and suggestions at aka.ms/ssms-feedback. Stay tuned for more updates!


Material 3 Adaptive 1.2.0 is stable


Posted by Rob Orgiu, Android Developer Relations Engineer


We’re excited to announce that Material 3 Adaptive 1.2.0 is now stable!


This release continues to build on the foundations of previous versions, expanding support to more breakpoints for window size classes and new strategies to place display panes automatically.

What’s new in Material 3 Adaptive 1.2.0

This stable release is built on top of WindowManager 1.5.0 support for large and extra large breakpoints, and introduces the new reflow and levitate strategies for ListDetailPaneScaffold and SupportingPaneScaffold. 


New window size classes: Large and Extra-large



WindowManager 1.5.0 introduced two new breakpoints for the width window size class to support even bigger windows than the Expanded window size class. The Large (L) and Extra-large (XL) breakpoints can be enabled by adding the following parameter to the currentWindowAdaptiveInfo() call in your codebase:

currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true)

This flag enables the library to also return L and XL breakpoints whenever they’re needed.
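
As a rough sketch of how the opted-in size class might drive layout decisions (this assumes the Large and Extra-large width breakpoints start at 1200dp and 1600dp respectively, and that the window size class exposes a minWidthDp property, per WindowManager 1.5.0; the pane counts are only an example):

@Composable
fun panesForCurrentWindow(): Int {
    // Opt in to the Large and Extra-large width breakpoints
    val adaptiveInfo = currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true)
    val minWidthDp = adaptiveInfo.windowSizeClass.minWidthDp
    return when {
        minWidthDp >= 1600 -> 3 // Extra-large (XL): room for three panes
        minWidthDp >= 1200 -> 2 // Large (L): two panes with more breathing room
        minWidthDp >= 840 -> 2  // Expanded
        else -> 1               // Compact or Medium
    }
}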


New adaptive strategies: reflow and levitate

Arranging content and display panes in a window is a complex task that needs to take into account many factors, starting with window size. With the new Material 3 Adaptive library, two new strategies can help you achieve an adaptive layout with minimal effort.


With reflow, panes are rearranged when the window size or aspect ratio changes, placing a second pane to the side of the first one when the window is wide enough, or reflowing the second pane underneath the first pane whenever the window is taller. This technique also applies when the window becomes smaller: content reflows to the bottom.


Reflowing a pane based on the window size


While reflowing is an incredible option in many cases, there are situations in which the content needs to be either docked to a side of the window or levitated on top of it. The levitate strategy not only docks the content, but also allows you to customize features like draggability, resizability, and even the background scrim.


Levitating a pane from the side to the center based on the aspect ratio


Both the reflow and levitate strategies can be declared inside the Navigator constructor using the adaptStrategies parameter, and both strategies can be applied to list-detail and supporting pane scaffolds:


val navigator = rememberListDetailPaneScaffoldNavigator<Nothing>(
    adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(
        detailPaneAdaptStrategy = AdaptStrategy.Reflow(
            reflowUnder = ListDetailPaneScaffoldRole.List
        ),
        extraPaneAdaptStrategy = AdaptStrategy.Levitate(
            alignment = Alignment.Center
        )
    )
)


To learn more about how to leverage these new adaptive strategies, see the Material website and the complete sample code on GitHub.

