
Get AI-Ready With Erik: Vector Index Intricacies

Summary

In this video, I delve into the fascinating world of vector indexes in SQL Server, a topic that might seem a bit dry but is incredibly powerful for certain types of data analysis and search operations. Vector indexes are fundamentally different from traditional B-tree indexes; they create a graph where each vector (a series of floating point numbers) becomes a node, and edges connect similar vectors. This unique structure allows for faster searches by navigating the graph rather than scanning every single vector, making it particularly useful for tasks like content similarity search. I walk you through how vector search works using SQL Server’s preview function, demonstrating its efficiency compared to exact vector distance calculations. Along the way, we explore concepts like recall and see firsthand the trade-offs between exact and approximate searches. To give you a practical example, I run a demo comparing the results of an exact search with those from vector search on a sample dataset, highlighting both their similarities and differences. If you’re curious about diving deeper into AI and SQL Server 2025, be sure to check out my course “Get AI Ready with Erik,” where you can learn more advanced techniques and get hands-on experience.

Topics

`SQL Server`, `Vector Indexes`, `B-Tree Indexes`, `Columnstore Indexes`, `Graph Search`, `Greedy Search Algorithm`, `Vector Distance`, `Recall (Information Retrieval)`, `AI in SQL Server`, `Post Embeddings`, `Docker Container Networking`, `Exact Search`, `Approximate Search`, `Vector Search Function`, `SQL Server 2025`

Chapters

  • *00:00:00* – Introduction
  • *00:00:31* – Vector Indexes vs Columnstore
  • *00:01:02* – Graph-Based Structure of Vector Indexes
  • *00:01:50* – Searching the Graph
  • *00:02:19* – Greedy Search Algorithm
  • *00:03:08* – Life Metaphor for Search Strategy
  • *00:03:47* – Vector Search Function Overview
  • *00:04:32* – Efficiency of Vector Search
  • *00:05:12* – Non-Sargable Predicate Comparison
  • *00:06:03* – Recall in Approximate Searches
  • *00:07:28* – Critique of Microsoft’s Efforts
  • *00:08:45* – Exact vs Vector Search Example

Full Transcript

Erik Darling here with Darling Data, here to talk to you about some boring stuff about vector indexes, because they’re not like regular indexes, right? It’s not like a B-tree index at all. It’s a completely different structure, which is, you know, why, I believe, much in the way that, you know, columnstore, well, actually, no, I’m lying. Columnstore is much closer to a normal index than a vector index is, but the only reason I bring that up is because when Microsoft first released columnstore indexes, they struggled mightily with the columnstore index making the table it was created on read-only. We’re going to talk more about this other stuff, but currently, vector indexes do that too. Vector indexes are not like B-tree indexes in that the way that data is searched and, you know, written out is a lot different, right? It’s not a B-tree where you have pages just sort of linking to each other and you can seek and do all this other stuff within it. It’s a completely different sort of structure, which is also probably why Microsoft is struggling so mightily with getting the creation of them to be fast.

Anyway, let’s get on with things here. DiskANN indexes build a graph, basically, where each vector that you have, that series of floating point numbers, is a node in the graph, and then edges connect similar vectors. So you can sort of seek around that graph, but it’s not really the same thing as a B-tree seek.

Whenever you search, you navigate the graph instead of scanning everything. So, using the vector search function, which is in preview, you can seek within that graph. Whereas with vector distance, you have to basically scan everything, measure the distance, and then spit out that distance, plus any filtering or ordering that you apply as a result of that runtime calculation. It’s not stored anywhere.

But vector indexes use a greedy search, which is a problem-solving strategy whose premise is: if you make the best local choice at each step, then the hope is that it will lead to the best global solution. So it’s sort of like, if you do everything right in life, I mean, look, you’re still going to die, but maybe some good stuff will happen to you along the way.

You know, more likely, you’re going to watch crappy musicians get rich and famous and terrible actors make millions and millions of dollars, and you’re just going to, you know, work your butt off and have to watch training videos about AI and SQL Server. But the search algorithm that gets used is called greedy search, right? I’m always searching for new ways to be greedier, because I hear that the greedier you are, the more money you make.

So I’m always just trying to figure out how I can be greedier. So far, it hasn’t worked, right? But you start at an entry point: you get to a node, and you fan out and look at all the neighbors of that node.

And then you move to whatever neighbor is the closest to your query, right? So if you have, like, a 0.5 here, you’d be like, well, what’s closest to this? Like 0.6, 0.7, 0.8, 0.9, and you’re like, ah, 0.6 is the closest, I’ll go to you.

And then you kind of repeat until you don’t find a closer neighbor, at which point you might backtrack and try alternative paths to see if you find something better, a better path through the graph. And then you return the best candidates that showed up in there.
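The hop-to-the-closest-neighbor loop described above can be sketched in a few lines of Python. This is a toy illustration, not SQL Server’s actual DiskANN implementation; the graph, the one-dimensional “vectors”, and the function name are all made up for the example:

```python
# A toy sketch of the greedy graph search described above: start at
# an entry point, hop to whichever neighbor is closest to the query,
# and stop when no neighbor improves on the current node.

def greedy_search(graph, vectors, entry, query):
    """graph: node -> list of neighbor nodes; vectors: node -> float."""
    def dist(node):
        return abs(vectors[node] - query)

    current = entry
    while True:
        # Fan out: look at all the neighbors of the current node.
        best = min(graph[current], key=dist, default=current)
        if dist(best) < dist(current):
            current = best   # move to the closer neighbor
        else:
            return current   # no neighbor is closer; stop here

# One-dimensional "vectors" keep the example readable.
vectors = {"a": 0.1, "b": 0.4, "c": 0.6, "d": 0.9}
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

print(greedy_search(graph, vectors, "a", 0.55))  # hops a -> b -> c
```

A real DiskANN index also keeps a candidate list and backtracks, as the transcript mentions; this sketch only shows the core greedy step.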

Why is it faster? Because you don’t examine every single vector, right? When you use the vector search function, SQL Server is able to look at a vector, figure out what’s closest to it, and move to that, rather than just running vector distance on everything, figuring out what that distance is, and then going on with it. You can think of vector distance as always being like a non-sargable predicate: like if you were to say DATEDIFF in days between two columns is greater than four.

SQL Server doesn’t know any of that ahead of time. It has to run that function for every row that you want to compare, figure out what the difference in days between two date columns is, and then it can figure out if those rows meet that predicate. It doesn’t know any of that ahead of time unless you create a computed column and do all the other stuff. So vector search is faster because the graph will guide the search to other relevant things that are similar to it. Search time with vector distance, like I showed you in another video, gets slower as your data gets bigger, because you have more things to compare and figure out the distance between.
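The brute-force side of that comparison, computing a distance for every row at query time, looks roughly like this in Python. This is a hypothetical sketch of exact nearest-neighbor search; the cosine-distance formula is standard, but the row data and function names are invented:

```python
import math

# Brute-force exact nearest-neighbor search: the vector distance
# situation described above. The distance is computed for EVERY row
# at query time (like a non-sargable DATEDIFF predicate), so the
# cost grows linearly with the size of the table.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

def exact_top_k(rows, query, k):
    # rows: list of (id, vector); every row is scored, then sorted.
    scored = [(cosine_distance(vec, query), rid) for rid, vec in rows]
    scored.sort()
    return [rid for _, rid in scored[:k]]

rows = [(1, [1.0, 0.0]), (2, [0.9, 0.1]), (3, [0.0, 1.0]), (4, [0.7, 0.7])]
print(exact_top_k(rows, [1.0, 0.0], 2))  # ids of the two closest vectors
```

A graph index avoids exactly this all-rows scoring pass, which is why the exact approach degrades as the table grows.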

In the AI world, there’s a concept of recall. Recall is, like, what fraction of true neighbors did an approximate search find? Because vector search is an approximate search, whereas vector distance is an exact search. So you can think of it as: if an exact search went through all of the neighbors and it found ABCDEFGHIJ, a DiskANN, like, a vector index search, might find ABCDEFGHXY. Recall is how much of the approximate search matches what an exact search would find. So in that case up above, where only the last two are different, IJ versus XY, the recall would be 80%.
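That recall calculation is simple to sketch. The function name is mine; the A-through-J example is the one from the transcript:

```python
# Recall as described above: what fraction of the exact search's
# true neighbors did the approximate search also find?

def recall(exact, approximate):
    exact, approximate = set(exact), set(approximate)
    return len(exact & approximate) / len(exact)

exact_results = list("ABCDEFGHIJ")    # exact search's top 10
approx_results = list("ABCDEFGHXY")   # approximate search's top 10

print(recall(exact_results, approx_results))  # 0.8
```

Eight of the ten exact results appear in the approximate results, so recall is 8/10, or 80%.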

Microsoft Research reports 95%-plus recall on billion-point benchmarks using their DiskANN indexes. Which, you know, I mean, it’s good, right? 95% plus.

Great. You know, it’s just, you wish that they were generally available. You wish they didn’t make the tables you create them on read-only. You might even wish that creating them didn’t take the gargantuan effort that it does. You might even say, I don’t know, maybe put the Fabric down and dedicate some engineers to this thing that seems important.

I don’t know. Right? Stop, stop fussing about with these gag gifts to the world.

Like, no one needs Fabric. We have Databricks, we have Snowflake, we have other things that already do this job. Right? Showing up late to the party with your pants off.

Anyway, let’s give ourselves a single query vector. And that single query vector is going to represent the search phrase, Docker container networking. Right?

So this is the vector that we care about. Using exact search, right, with this vector distance function, we’re going to find the top 20 rows that have the closest distance. So, again, lower number is better.

We’re going to find the top 20 closest matches to Docker container networking in the post embeddings table. And then we’re going to use vector search down here. I’m going to talk more about vector search, but we’re going to use it here to see how closely its results match.

Because from this one, right, we’ve got the top 20 by exact search here, and we’re going to get the top 20 here. That’s this thing here:

top_n equals 20. So we’re going to get the top 20 rows that come out of this function, right? This is only going to return 20 rows, and since we’re dumping it into a temp table, we don’t need to worry about that one. This one, we were saying, give us the top 20 ordered by which ones are the closest neighbors.

So using vector search, we can do sort of the same thing, right? We hit the post embeddings table, we look at the embedding, or rather, we tell it which column to use here.

For some reason you can’t alias this thing. We’re going to say we want it to be similar to the vector embedding that we found before, using the cosine metric, and give us the top 20 rows from that. And if we look at what came back from those, what we’re going to see is, of course (well, I say of course because I’ve done this demo before), that both of these things found 20 rows, right?

But only 16 of those rows overlapped, meaning that there is a difference in the search results between the exact search and what vector search found. If you want to find out what that difference is, I would highly recommend you buy my course, Get AI Ready with Erik, which, if you use this coupon, will get you a hundred dollars off the price of admission. That link is down in the video description.

You can click on this fully assembled, pre-made link, and you can buy it and learn all sorts of additional things about AI and SQL Server 2025. All right. Thank you for watching.

I hope you enjoyed yourselves. I hope you learned something and I will see you in tomorrow’s video where we will do, oh, I don’t know, something equally vector-y and search-y. All right. Goodbye.

 

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

The post Get AI-Ready With Erik: Vector Index Intricacies appeared first on Darling Data.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

The Great (ai) Game vs AI Theater


“The game is so large that one sees but a little at a time.”

To understand AI, its stakes and its long-term impact, you have to step away from the cacophony of headlines. And instead take the time to think of it as the Great Game.

The Great Game was the 19th century strategic rivalry between the British Empire and Russia over Central Asia. Subsequent versions of this have played out over control of oil, for example. Then there was the Cold War, arguably the greatest game, with global nuclear annihilation at stake.

The game changes. The playbook doesn’t.

Neither side wanted direct war. Both wanted dominance. So they competed through proxies, influence, positioning, and long-horizon maneuvering. It was about who controlled the board, not just who won the battle.

The Great Game is how we describe any era-defining geopolitical competition where the stakes are civilizational, the timeline is generational, and the weapons are economic and technological as much as military.

AI is the new Great Game.

If AI is the great game of the 21st century (and it is), then winning requires what great games always require. Long-term strategy, sustained focus, and the discipline to play beyond the next quarter, the next news cycle, the next fundraise.

China understands this. The United States, at this moment, seems scattered, or at least that’s what this rube can see from the outside.

This week, Beijing released its 15th Five-Year Plan, laying out its ambitions to aggressively adopt AI throughout its economy and dominate emerging technologies. This is a continuation of a deliberate, compounding strategy. China spent the last five years targeting robotics, EVs, and post-industrial manufacturing. They planned their way into dominating everything from renewable energy to automobiles. They want to do the same with AI, with or without Nvidia GPUs.

Open-source AI is the new flagship strategy for China. They see it as a competitive advantage against the United States. They’re already playing the game. And we are raking chief executives over the coals.

On our end, we have what I’d call AI theater.

I have yet to read a clear, comprehensive outline of what our long-term strategy is to win the great game. Instead, what we have is virtue signaling on all sides, vapid commentary, a lot of words, and deals that go nowhere or don’t mean much in the larger scheme of things. This new deal about energy prices is a good example.

Yesterday at the White House, the President gathered the CEOs of Google, Microsoft, Meta, Amazon, Oracle, OpenAI, and xAI to sign something called the “Ratepayer Protection Pledge,” a voluntary promise not to raise your electricity bill. A pinky promise dressed up as policy, timed perfectly for the midterms.

And then there’s OpenAI. They are taking four weeks of revenue, multiplying by twelve, sprinkling some pixie dust on the result, and fast-tracking to a public offering. Cash in before the party ends? Or cash in and end the party? Either way, this isn’t a company executing a strategy to win the great game. This is an opportunist executing a strategy to get as much from the public markets as it can, so there isn’t much left for its rivals, Anthropic and SpaceX.

Neither is a plan.

The contrast isn’t really about authoritarianism versus democracy, or central planning versus markets. I don’t want a Five-Year Plan with Xi’s name on it. But I do want to know what America’s long-term strategy for winning actually is. Who trains the talent? Where does the compute go, and for whose benefit? How does AI get woven into the industries that actually employ people, manufacturing, healthcare, logistics, infrastructure?

Right now the answer is simple. Let the companies figure it out. Let the politicians take photos with the companies. And hope the voters don’t notice the electricity bill going up anyway. I know we don’t do long term, but maybe like with everyone else who is going to adapt for AI, the thinking about AI itself has to evolve.

From the outside, it seems China has a better game plan. The question is when, or whether, Washington will have one too.

“When everyone is dead the Great Game is finished. Not before.” — Mahbub Ali, Kim (1901)




Analysis → Implementation → Reflection – a practical technique for issue resolution with agentic AI by Dean Kerr


This article presents Analysis / Implementation / Reflection, a simple pattern for resolving issues. While the core of this pattern, the implementation, is quite conventional, there are a couple of novel additions. In the analysis phase, the agent is used to explore the issue and create a suitable harness to evaluate the solution, while the reflection phase probes the agent to provide a qualitative assessment of the implementation. Together, these provide a high degree of confidence in the solution (both functional correctness and overall quality) and ensure that you, the developer, are comfortable with the solution and able to ‘own’ the outcome.

Establishing Project Baselines

Image depicting the generate copilot instructions dropdown

Before diving into any issue, it is useful to establish project baselines by generating Copilot instructions for your workspace. These help define coding standards, library preferences and architectural patterns before the agent attempts to make any adjustments to the codebase. They are included by default in every chat prompt, saving you the headache of including them manually each time, or expecting the agent to infer these standards itself.

Take the time to review and adapt these appropriately, don’t rely on the AI to pick up everything. It can be particularly common to have unwritten rules and/or assumptions in a software project; this instructions file would be a great place to write any of those down.

Revisit the instructions file as you work with agents in the codebase. Agents frequently need fine-tuning and guidance to avoid common pitfalls such as stalling indefinitely when choosing to run tests in watch mode. As you build up these instructions, you should start to get more reliable output the first time you use agents.
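As a rough illustration, a workspace instructions file along these lines might capture those standards and unwritten rules. Everything below, the file path, standards, commands and folder names, is hypothetical, not taken from the article:

```markdown
<!-- .github/copilot-instructions.md (hypothetical example) -->
# Copilot instructions

## Coding standards
- TypeScript strict mode; no `any` without a justifying comment.
- Prefer the existing `httpClient` wrapper over raw `fetch`.

## Testing
- Run tests with `npm test -- --run`, never in watch mode
  (agents stall indefinitely waiting for watch-mode output).

## Unwritten rules
- The `legacy/` folder is frozen: fix bugs there, never refactor.
```

The watch-mode line is exactly the kind of pitfall guidance the paragraph above describes: written down once, it applies to every future chat.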

Faster Issue Contextualisation

Getting up to speed on a new issue drains your daily cognitive bandwidth. Being able to shorten this isn’t just about saving time but also preserving your mental energy to use elsewhere.

Agents are a great tool for summarising what can be dense, comment-heavy issues, stripping away non-essential or irrelevant conversation. Try prompting your agent to distil issues into core requirements, blockers and current consensus, allowing you to focus more quickly on the technical challenge at hand.

Try not to limit yourself to text-based summarisation either; modern multimodal models can digest almost anything attached to a ticket. Screenshots, stack traces, database schemas and even video recordings of a bug can all be fed to the agent to build robust context before you write a single line of code.

Analysis → Implementation → Reflection loop

Diagram illustrating the analysis -> implementation -> reflection loop

Closing the feedback loop, effectively allowing an agent to evaluate the quality of its own output, is especially important for AI augmented software development. With the right constraints (e.g. a tightly bounded problem and comprehensive tests) agents can port large codebases with relative ease and swiftness.

Scaling this approach down to individual issues is just as powerful. Breaking the work into three distinct phases gives the flexibility to interject whenever the agent starts to struggle or drift off course.

Analysis

Once you’ve understood the issue - ideally using the faster contextualisation techniques mentioned earlier - it’s time to build a lightweight harness to let the agent iterate in a feedback loop. A typical approach involves asking the agent to analyse the provided issue and, adopting a TDD approach, write relevant failing tests that reproduce the bug or define the new feature. Be warned: the agent may stumble even at this first hurdle, especially if the issue is difficult to reproduce (if at all).

Take a moment here to reflect. Are the generated tests appropriate and relevant, and do they offer good coverage? Are there any potential gaps or missing edge cases? Your goal here is to verify that the AI agent fully understands the issue.

At this point you should have a decent idea in your head of a rough solution to the issue, giving you a solid point of comparison for the final reflection step.

Implementation

With failing tests in place, the implementation phase should drive itself. Prompt the agent to implement a solution and use the test suite as the primary feedback mechanism (the classic Red/Green loop). The trick here is to instruct the agent to iterate until all tests pass.

Example Prompt:

Now that we have failing tests, please implement the code required to resolve the issue. After your first attempt, automatically run the test suite. Use the test output as your primary feedback mechanism. If the tests fail, analyse the error, adjust your implementation, and run the tests again. Continue this Red/Green iteration loop until all tests pass. Crucially: Once all tests pass, stop immediately and await my review. Do not proceed to refactoring or further tasks.

Reflection

Arguably, this is the most important step of the loop, where you take on responsibility for the outlined solution. This involves using reflective prompting to challenge the agent’s outlined solution, particularly if it deviates from how you assumed the issue would be solved.

Typical lines of questioning include:

Architectural Integrity

  • What alternative solutions did you consider, and why did you choose this one? Outlines other potential pathways to a solution that may be preferable

  • What edge cases or unexpected inputs could cause this implementation to fail? Gets the model to look for cracks in its own logic

  • Are there any specific scenarios where this solution might introduce a race condition or state inconsistency? Useful for any asynchronous or multi-threaded work

Maintainability

  • If a junior team member had to maintain this code in six months, what part would be the hardest for them to understand? Neat trick to force AI to identify complex and/or unreadable logic that may warrant refactoring or commenting

Security & Performance

  • Are there any hidden performance bottlenecks or scaling issues in this approach? Good catch-all for performance / inefficiency

  • What potential security implications or vulnerabilities does this change introduce? A mandatory sanity check, especially relevant if the solution handles user input or involves authentication

Making the model defend and justify its logic is a great way to uncover edge cases or simpler paths that were glossed over in the first pass. You may also find it useful to switch to a completely different model at this point and ask it to run a blind code review on the newly implemented changes.

Managing the context window

Keeping an eye on your context window usage, particularly when you’re deep in a long-running chat thread with an agent, can save you from diminishing performance.

There are various levers you can pull to improve context efficiency. A major one to explore is the Model Context Protocol (MCP), which lets your AI tools fetch specific, bite-sized context from your local environment on demand, saving you from pasting in whole files and burning through your token limits.

Even with these tools, if context limits are a frequent problem, consider how you can break down larger pieces of work across multiple, isolated chats. Often, a bloated context window means you’re simply trying to solve too much at once within the agent’s current capabilities.

Debugging AI Chat

Agents have a big problem with explainability. Nobody can explain exactly how models work ‘under the hood’, and because we are using bleeding-edge tools, the harness layer itself will frequently break in unexpected ways.

However, you can give yourself a head start by lifting the lid a little. Quite often, the misbehaviour isn’t a deep problem; it’s just bad input at the harness level. It’s worth digging into that layer via the chat debug view (equivalent tooling exists elsewhere), which lets you see what context, prompts and tools were used when talking to an agent. This added transparency can help you course-correct effectively.


Ideological Resistance to Patents, Followed by Reluctant Pragmatism


Naresh Jain has long been uncomfortable with software patents. But a direct experience of patent aggression, together with the practical constraints faced by startups, led him to resort to defensive patenting as a shield in this asymmetric legal environment.

more…


Can coding agents relicense open source through a “clean room” implementation of code?


Over the past few months it's become clear that coding agents are extraordinarily good at building a weird version of a "clean room" implementation of code.

The most famous version of this pattern is when Compaq created a clean-room clone of the IBM BIOS back in 1982. They had one team of engineers reverse engineer the BIOS to create a specification, then handed that specification to another team to build a new ground-up version.

This process used to take multiple teams of engineers weeks or months to complete. Coding agents can do a version of this in hours - I experimented with a variant of this pattern against JustHTML back in December.

There are a lot of open questions about this, both ethically and legally. These appear to be coming to a head in the venerable chardet Python library.

chardet was created by Mark Pilgrim back in 2006 and released under the LGPL. Mark retired from public internet life in 2011 and chardet's maintenance was taken over by others, most notably Dan Blanchard who has been responsible for every release since 1.1 in July 2012.

Two days ago Dan released chardet 7.0.0 with the following note in the release notes:

Ground-up, MIT-licensed rewrite of chardet. Same package name, same public API — drop-in replacement for chardet 5.x/6.x. Just way faster and more accurate!

Yesterday Mark Pilgrim opened #327: No right to relicense this project:

[...] First off, I would like to thank the current maintainers and everyone who has contributed to and improved this project over the years. Truly a Free Software success story.

However, it has been brought to my attention that, in the release 7.0.0, the maintainers claim to have the right to "relicense" the project. They have no such right; doing so is an explicit violation of the LGPL. Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a "complete rewrite" is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a "clean room" implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.

Dan's lengthy reply included:

You're right that I have had extensive exposure to the original codebase: I've been maintaining it for over a decade. A traditional clean-room approach involves a strict separation between people with knowledge of the original and people writing the new implementation, and that separation did not exist here.

However, the purpose of clean-room methodology is to ensure the resulting code is not a derivative work of the original. It is a means to an end, not the end itself. In this case, I can demonstrate that the end result is the same — the new code is structurally independent of the old code — through direct measurement rather than process guarantees alone.

Dan goes on to present results from the JPlag tool - which describes itself as "State-of-the-Art Source Code Plagiarism & Collusion Detection" - showing that the new 7.0.0 release has a max similarity of 1.29% with the previous release and 0.64% with the 1.1 version. Other release versions had similarities more in the 80-93% range.

He then shares critical details about his process, highlights mine:

For full transparency, here's how the rewrite was conducted. I used the superpowers brainstorming skill to create a design document specifying the architecture and approach I wanted based on the following requirements I had for the rewrite [...]

I then started in an empty repository with no access to the old source tree, and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code. I then reviewed, tested, and iterated on every piece of the result using Claude. [...]

I understand this is a new and uncomfortable area, and that using AI tools in the rewrite of a long-standing open source project raises legitimate questions. But the evidence here is clear: 7.0 is an independent work, not a derivative of the LGPL-licensed codebase. The MIT license applies to it legitimately.

Since the rewrite was conducted using Claude Code there are a whole lot of interesting artifacts available in the repo. 2026-02-25-chardet-rewrite-plan.md is particularly detailed, stepping through each stage of the rewrite process in turn - starting with the tests, then fleshing out the planned replacement code.

There are several twists that make this case particularly hard to confidently resolve:

  • Dan has been immersed in chardet for over a decade, and has clearly been strongly influenced by the original codebase.
  • There is one example where Claude Code referenced parts of the codebase while it worked, as shown in the plan - it looked at metadata/charsets.py, a file that lists charsets and their properties expressed as a dictionary of dataclasses.
  • More complicated: Claude itself was very likely trained on chardet as part of its enormous quantity of training data - though we have no way of confirming this for sure. Can a model trained on a codebase produce a morally or legally defensible clean-room implementation?
  • As discussed in this issue from 2014 (where Dan first openly contemplated a license change) Mark Pilgrim's original code was a manual port from C to Python of Mozilla's MPL-licensed character detection library.
  • How significant is the fact that the new release of chardet used the same PyPI package name as the old one? Would a fresh release under a new name have been more defensible?

I have no idea how this one is going to play out. I'm personally leaning towards the idea that the rewrite is legitimate, but the arguments on both sides of this are entirely credible.

I see this as a microcosm of the larger question around coding agents for fresh implementations of existing, mature code. This question is hitting the open source world first, but I expect it will soon start showing up in Compaq-like scenarios in the commercial world.

Once commercial companies see that their closely held IP is under threat I expect we'll see some well-funded litigation.

Tags: licensing, mark-pilgrim, open-source, ai, generative-ai, llms, ai-assisted-programming, ai-ethics, coding-agents


Bringing Robotics AI to Embedded Platforms: Dataset Recording, VLA Fine‑Tuning, and On‑Device Optimizations
