Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Modernize your workflows: Amazon WorkSpaces now gives AI agents their own desktop (preview)


Enterprises face a significant challenge when deploying AI agents: the desktop and legacy applications that power most business workflows are simply inaccessible to modern AI systems. According to a 2024 Gartner report, 75% of organizations run legacy applications that lack modern APIs, and 71% of Fortune 500 companies operate critical processes on mainframe systems without adequate programmatic access. For many organizations, this has meant choosing between delaying AI adoption or undertaking expensive and risky modernization projects.

Today, we are announcing that Amazon WorkSpaces now enables AI agents to securely operate desktop applications without requiring application modernization. The same managed virtual desktops that millions of employees use and trust can now also serve AI agents, turning WorkSpaces into infrastructure for scaling enterprise productivity, not just delivering it. Because agents operate within your existing WorkSpaces environment, there are no APIs to build, no application migrations to plan, and no new infrastructure to manage.

Some of our customers had an early opportunity to give their agents a WorkSpace. Chris Noon, Director at Nuvens Consulting, shared with us, “WorkSpaces lets our clients give AI agents the same secure, governed desktop environment their employees already use — no custom API integrations, full audit trails, and enterprise-grade isolation out of the box. For regulated industries, that’s not a nice-to-have — it’s the baseline.”

Secure cloud desktop access for AI agents
With WorkSpaces, AI agents can securely access and operate desktop applications running inside managed WorkSpaces environments to complete complex business workflows. Agents authenticate through AWS Identity and Access Management (IAM) and connect via WorkSpaces, with complete audit trails available through AWS CloudTrail and Amazon CloudWatch. Because agents operate within secure WorkSpaces environments rather than on local machines, your existing security controls and compliance policies remain fully intact.

Amazon WorkSpaces supports the industry-standard Model Context Protocol (MCP), which means WorkSpaces works with any agent framework, such as LangChain, CrewAI, and Strands Agents.

Let’s try it out
To set up a WorkSpaces environment for AI agents, I started in the AWS Management Console by creating a new WorkSpaces Applications stack—the environment definition that controls how agents connect and what they’re allowed to do.

From the Amazon WorkSpaces console, I chose Create stack and configured the basics: name, fleet association, and VPC endpoints. In Step 3 of the stack creation workflow, I noticed the new AI agents section with two options. The first, No AI agent access, is the default configuration for standard WorkSpaces designed for people. The second, Add AI Agents, allows AI agents to securely access and operate applications using their own identity and permissions. I selected Add AI Agents to enable agent connections on this stack.

WorkSpaces screenshot

Next, I enabled storage before configuring the agent access settings that define how agents interact with the desktop.

WorkSpaces screenshot

Under Agent features, I enabled three capabilities. Computer input allows the agent to click, type, and scroll within the desktop. Computer vision allows the agent to capture screenshots of the desktop, which is how it “sees” the application. Finally, screenshot storage configures where session screenshots are stored for audit and debugging.

WorkSpaces screenshot

Under Desktop screen layout, I set the screen resolution to 1280×720 and image format to PNG. The resolution determines the fidelity of what the agent sees during a session—a complex application with dense UI elements might benefit from higher resolution, while a terminal-style interface works well at 720p.

WorkSpaces screenshot

With my stack configured, WorkSpaces exposes a managed MCP endpoint. I pointed my agent framework to this endpoint, provided IAM credentials for authentication, and my agent began interacting with the desktop applications installed on the fleet’s image.
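
To make that concrete, here is a minimal sketch of what pointing an agent framework at the endpoint can look like, using the Strands Agents SDK and the MCP Python client. The endpoint URL is a placeholder and the IAM authentication details are omitted; use the endpoint and credentials your own stack provides.

# Minimal sketch: connect a Strands agent to a remote MCP endpoint.
# The URL below is a placeholder, not a real WorkSpaces endpoint, and
# authentication is left out; supply your own IAM credentials.
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

WORKSPACES_MCP_URL = "https://example.invalid/mcp"  # placeholder

desktop = MCPClient(lambda: streamablehttp_client(WORKSPACES_MCP_URL))

with desktop:
    # Desktop tools (click, type, screenshot) exposed by the stack
    tools = desktop.list_tools_sync()
    agent = Agent(tools=tools)
    agent("Open the pharmacy application and check the most recent refill.")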

To see this in action, here’s an agent built with the Strands Agents SDK and Amazon Bedrock handling a prescription refill: looking up the patient record, searching for the medication, placing the order, and confirming a successful refill, all inside a sample pharmacy system with no API.

The application doesn’t know an agent is driving it. Nothing about the software was modified, rebuilt, or integrated. The agent worked with it exactly as it exists today.

Now available
This feature is available today in public preview at no additional cost in the US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland, Paris, London), and Asia Pacific (Tokyo, Mumbai, Sydney, Seoul, Singapore) Regions.

Get started building today using our GitHub repo, or visit the WorkSpaces page for more details.


GPT-5.5 Instant System Card


GPT-5.5 Instant: smarter, clearer, and more personalized

GPT-5.5 Instant updates ChatGPT’s default model with smarter, more accurate answers, reduced hallucinations, and improved personalization controls.

Apps That See: Bringing Vision AI to Your Projects


I was wearing a t-shirt with a partial Reka logo at the edge of the frame. I never said the word "Reka" in that segment. The model caught the logo, connected it to the topic I was discussing, and mentioned it unprompted in the output it generated.

That is not a transcript trick. The model was watching.

At the AI Agents Conference 2026, I gave a talk called "Apps That See" — six live demos showing how to build applications that understand images and video. Every project is open source and ready to clone. This post walks through each one so you have enough context to pick it up, run it, and adapt it to something useful in your own work.

Vision AI Is Accessible Now

Not long ago, working with visual AI meant GPU clusters, specialized teams, and weeks of training. Today, a compressed 4B model like Qwen or Gemma 3 runs on a regular laptop and handles image description well enough to prototype. Step up to a 7B model like Reka Edge and the quality improves meaningfully. It also runs locally: a gaming PC with a decent GPU is enough. No server required.

For tasks that need more power, cloud APIs give you faster results without local hardware requirements. The tradeoff is that your images and video go to a third-party provider. For corridor cameras or stock photos, that is usually acceptable. For private or sensitive content, local is the better default.

The practical pattern: start local to build and test, then decide whether the task actually requires cloud.

What You Can Build With This

  • Accessibility: Describe a scene in real time for visually impaired users, or identify objects on demand.
  • Content creation: Extract structure from a video and turn it into a blog post, caption set, or highlight reel.
  • Productivity: Search through thousands of videos for a specific object or topic, even when the title gives no indication of the content.
  • Automation: Trigger actions only when specific visual conditions are met, such as an unrecognized person entering a room.
  • Fun: Most developers' first contact with AI is building something for themselves, and that is a perfectly valid starting point.


Demo 1: Caption This — Generate a Prompt from Any Image


Source: fboucher/caption-this

If you work with image generation models, you end up with a lot of images to test and compare. Writing the text prompt that would reproduce a specific image is tedious. This tool does it for you: give it an image, get back a prompt you can use to regenerate something similar.

The demo uses an HTTP client extension in VS Code to call the API directly, no SDK. Pass an image, ask for a plain-text prompt that would recreate it. One prompt detail that improved results noticeably: adding "no markdown" to the instruction.

POST https://api.reka.ai/v1/chat
Content-Type: application/json

{
  "model": "reka-flash",
  "messages": [{
    "role": "user",
    "content": [
      { "type": "image_url", "image_url": { "url": "https://..." } },
      { "type": "text", "text": "Write a prompt in plain text, no markdown, that would generate the exact same image." }
    ]
  }]
}

One thing to know when testing this across different models: some accept an image URL directly, others require the image as a base64-encoded string. Same task, same prompt, different input contract. If you plan to swap models in your app, account for this difference from the start.
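
A small sketch of how you might absorb that difference in Python; the helper name and the assumption of PNG input are mine, not from any particular SDK:

import base64

def image_part(source: str) -> dict:
    """Build an image message part from either a URL or a local file path."""
    if source.startswith(("http://", "https://")):
        # Providers that accept URLs get the URL passed through untouched.
        return {"type": "image_url", "image_url": {"url": source}}
    # Otherwise inline the file as a base64 data URL (assumed PNG here).
    with open(source, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {"type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{encoded}"}}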

Demo 2: Media Library — Compare Vision Models Side by Side


Source: fboucher/media-library

This is a web app that connects to multiple vision backends and lets you switch between them at runtime. The motivation: benchmark Reka Edge running locally — via OpenRouter or directly through the Reka API — against other models on real tasks.

Object detection surfaces the biggest portability problem. Some models return bounding boxes in an HTML-style bracket format with pixel coordinates. Others use a 2D box structure with a different coordinate scheme. If you code against one format and then swap models, your rendering breaks. There is no standard here — handle the differences at the application layer, not the model layer.
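
As a rough illustration of that application-layer handling, here is a sketch that normalizes both shapes into one internal representation; the two input formats are invented for the example, since the real ones vary by model:

import re

def parse_box(raw, width: int, height: int) -> tuple:
    """Normalize a bounding box into fractional (x1, y1, x2, y2) coordinates.

    Accepts a bracketed pixel-coordinate string like "[120, 40, 480, 310]"
    or a dict of already-fractional values like {"x1": 0.1, "y1": 0.05,
    "x2": 0.6, "y2": 0.7}. Both shapes are invented examples, not any
    specific model's output.
    """
    if isinstance(raw, dict):
        return (raw["x1"], raw["y1"], raw["x2"], raw["y2"])
    nums = [float(n) for n in re.findall(r"-?\d+\.?\d*", raw)]
    x1, y1, x2, y2 = nums[:4]
    return (x1 / width, y1 / height, x2 / width, y2 / height)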

The app uses the OpenAI API format as the common interface across all backends. Any model with a compatible endpoint can be swapped in with minimal changes. It does not eliminate the per-model quirks, but it reduces the friction of switching to a configuration change rather than a rewrite.
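
A minimal sketch of that common-interface pattern with the openai Python package; the base URL, model name, and environment variables here are placeholders for whichever backend you configure:

import os
from openai import OpenAI

# Swapping backends becomes a configuration change rather than a rewrite:
# point base_url at any OpenAI-compatible endpoint, local or hosted.
client = OpenAI(
    base_url=os.environ.get("VISION_BASE_URL", "http://localhost:11434/v1"),
    api_key=os.environ.get("VISION_API_KEY", "not-needed-for-local"),
)

response = client.chat.completions.create(
    model=os.environ.get("VISION_MODEL", "reka-edge"),
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }],
)
print(response.choices[0].message.content)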

Video input is supported too, though far fewer models handle it than images. Of the models tested, Reka Edge is the standout for video — the others either reject it or behave inconsistently.

Demo 3: Video2Blog — Turn a Video into a Structured Post


Source: fboucher/video2blog

I built this for myself. I do a lot of tutorial videos and I wanted a tool that would turn a recording into a structured blog post without me having to write one from scratch.

The tool sends the video to a vision model with a detailed prompt: target structure, tone, format, and an instruction to flag moments where a screenshot would add value. The model returns timestamps — it cannot extract frames itself, but it tells you exactly where to look, and you pull them locally with ffmpeg.
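
That frame-pull step is simple enough to show; a short sketch, with the timestamp and file paths as illustrative values:

import subprocess

def grab_frame(video_path: str, timestamp: str, out_path: str) -> None:
    """Extract a single frame at the given timestamp (HH:MM:SS) with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-ss", timestamp, "-i", video_path,
         "-frames:v", "1", "-y", out_path],
        check=True,
    )

grab_frame("tutorial.mp4", "00:04:12", "screenshot-01.png")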

That creates one architectural quirk worth knowing: the video lives in two places. ffmpeg needs it locally to extract frames. The hosted model needs it uploaded to analyze content. For a one-evening project it works well enough, and I use it often enough that it has paid for itself many times over.

After the first draft, you stay in a conversation loop: change the tone, translate to French, swap a timestamp, restructure a section. The model holds context and iterates with you until the result is what you want.

Demo 4: Video Analyzer — Search and Query Your Video Library


Source: reka-ai/api-examples-dotnet

Most video search runs on titles, descriptions, and transcribed audio. This demo searches by what is actually visible on screen.

The app pre-indexes a video library by sending each video through a vision model ahead of time. When a query arrives, the heavy work is already done. A search for "robot arm" returns the right video — a clip of a robotic arm animation. It also returns a false positive: fast-moving hands apparently looked close enough to fool the model. Useful, not perfect, and worth designing around in your UX.
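
A rough sketch of that index-ahead, search-later split, assuming a describe_video() helper that calls whichever vision backend you use:

import json
from pathlib import Path

def build_index(video_dir: str, describe_video, index_path: str = "index.json") -> None:
    """One-time pass: cache a text description of what is visible in each video."""
    index = {str(p): describe_video(p) for p in Path(video_dir).glob("*.mp4")}
    Path(index_path).write_text(json.dumps(index, indent=2))

def search(query: str, index_path: str = "index.json") -> list[str]:
    """Cheap query-time step: match the query against the cached descriptions."""
    index = json.loads(Path(index_path).read_text())
    return [path for path, text in index.items() if query.lower() in text.lower()]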

The Q&A feature goes further. You pick a video and ask a specific question. "What database was used?" returned MySQL — and noted it was running in a Docker container. The model identified that from watching the screen, not from audio. No transcript needed.

From there, you can generate study materials from any recorded session. The demo produces a multiple-choice quiz with answer options, correct answers, and explanations. The model is doing comprehension, not transcription.

Demo 5: Roast My Life — What the Model Actually Sees


Source: reka-ai/api-examples-python

I never mentioned the pictures on my wall. The model did.

In a video about Python and AI, the model's generated blog post made a remark about the artwork hanging behind me. I had said nothing about it. The model noticed, mentioned it, and moved on as if it were obvious.

Then there was the t-shirt moment described at the top of this post. A partial logo, half out of frame, no mention of it anywhere in the audio — and the model connected it to the topic anyway.

This demo is named Roast My Life because the model ends up commenting on things you never intended to share. But the real point is what it reveals: a vision model is not a smarter transcript. It is watching. The larger models do this particularly well, and once you see it, it changes how you think about what these tools can do — and what they will pick up without you asking.

Demo 6: N8N Automation — No-Code Video Clipping Pipeline


Source: N8N Reka Vision integration

Vision AI does not always need custom code. This demo wires everything together in N8N, a visual workflow tool, with no programming required.

The trigger is a new video published to YouTube. The workflow finds an engaging clip, reformats it from horizontal to vertical, adds captions in a specific style (all lowercase, specific colors — chosen to be obviously distinct from any default), and sends an email with the finished clip attached. The whole thing runs automatically.

For developers, this pattern is worth knowing even if you code everything else. Many real business workflows have a vision AI step that fits cleanly into a larger automation, and a no-code tool is often the fastest way to ship it.


Watch the Full Talk

The demos above are the written version. The live version, with the actual code running, models responding in real time, and a few things going sideways in interesting ways, is on YouTube.


All the Code

The demos span Python, C#, raw HTTP, Go, and N8N. Vision AI is not tied to a specific stack — if your environment can make an HTTP request, it can call a vision model.

All projects:

  • fboucher/caption-this
  • fboucher/media-library
  • fboucher/video2blog
  • reka-ai/api-examples-dotnet
  • reka-ai/api-examples-python
  • N8N Reka Vision integration

Astronomers May Have Detected an Atmosphere Around a Tiny, Icy World Past Pluto

1 Share
"The Associated Press is reporting on a new study in Nature Astronomy suggesting that a tiny, icy world beyond Pluto harbors a thin, delicate atmosphere that may have been created by volcanic eruptions or a comet strike," writes longtime Slashdot reader fahrbot-bot. From the report: Just 300 miles (500 kilometers) or so across, this mini Pluto is thought to be the solar system's smallest object yet with a clearly detected global atmosphere bound by gravity, said lead researcher Ko Arimatsu of the National Astronomical Observatory of Japan. This so-called minor planet -- formally known as (612533) 2002 XV93 -- is considered a plutino, circling the sun twice in the time it takes Neptune to complete three solar orbits. At the time of the study, it was more than 3.4 billion miles (5.5 billion kilometers) away, farther than even Pluto, the only other object in the Kuiper Belt with an observed atmosphere. This cosmic iceball's atmosphere is believed to be 5 million to 10 million times thinner than Earth's protective atmosphere, according to the the study [...]. It's 50 to 100 times thinner than even Pluto's tenuous atmosphere. The likeliest atmospheric chemicals are methane, nitrogen or carbon monoxide, any of which could reproduce the observed dimming as the object passed before the star, according to Arimatsu. Further observations, especially by NASA's Webb Space Telescope, could verify the makeup of the atmosphere, according to Arimatsu.

Read more of this story at Slashdot.


What (un)exactly do you mean by semantic search?

1 Share
Ryan welcomes Bryan O’Grady, Head of Field Research and Solutions Architecture at Qdrant, to discuss the differences between traditional text search engines powered by Lucene and modern vector databases, when exact-match search fits workloads like logs and security analytics versus when semantic search fits user-facing discovery and non-exact results, and how Qdrant is growing into video embeddings and local-agent contexts.