Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

I Just Don’t Understand Why You Don’t Update SSMS.


A long time ago in a galaxy far, far away, SQL Server Management Studio was included as part of the SQL Server installer.

Back then, upgrading SSMS was not only a technical problem, but a political one too. Organizations would say things like, “Sorry, we haven’t certified that cool new SQL Server 1982 here yet, so you can’t have access to the installer.” Developers and DBAs were forced to run SSMS from whatever ancient legacy version of SQL Server that their company had certified.

These days, SQL Server Management Studio v22 has:

  • A totally separate standalone installer
  • A totally separate version numbering system (SSMS v22 as opposed to SQL Server’s year-based numbers)
  • No designed-in dependencies (you can run new versions of SSMS on your desktop and connect to any supported version of SQL Server)
  • A much, much, much faster release schedule than SQL Server
  • Pretty few known issues – the list looks long at first, but if you go through ’em, few are relevant to the kind of work you do, and frankly, it’s still a shorter list than most of the previous SSMS versions I’ve used
  • A lot more cool features than the old and busted version you’re running today

And current versions even have a built-in, kick-ass upgrade mechanism:

Easier than gaining weight on a cruise ship

You should upgrade.
It keeps improving, quickly.

For example, SSMS v22.2.1 – a seemingly tiny version number change – just got a massive improvement in code completions. T-SQL code completion has never been great – IntelliSense doesn’t even auto-complete foreign key relationships. SSMS v22.2.1’s code completion will make your jaw drop.

For example, I never remember the syntax to write a cursor. It’s the kind of thing I don’t have to do often, and for years, I’ve used text files with stuff like this that I rarely (but sometimes) need quickly. With SSMS’s latest update, I just start typing a comment:

Declare a cursor

In that screenshot, see the different text colors? I’d started a comment and just written “Declare a cursor to” – and SSMS has started to fill in the rest. My goal in this case isn’t to loop through all the tables, though, so I’ll keep typing, explaining that I want to iterate through rows:

Interesting cursor choice

SSMS guessed that I wanted to iterate through the Posts table – and that’s SO COOL because SSMS actually looked at the tables in the database that I was connected to! If I try that same thing in the master database’s context, I get a different code completion!

Now, this does mean that GitHub Copilot & SSMS are running queries against your server and sending that data up to the cloud in order to do code completion. I totally understand that that’s a big security problem for many companies, and … okay, maybe I just answered that question about why some of you aren’t upgrading. But look, you can turn that feature off if you want, and you can track what queries it’s running if you’re curious. Let’s keep moving on through the task at hand today. I’m not trying to run through the Posts table – I need to do something else – so let’s keep typing:

Uh that's an odd cursor choice

uh wait what

In the words of Ron Burgundy, that escalated quickly. That is most definitely NOT what I’m trying to do, but that’s the state of AI these days. It’ll gladly help you build a nuclear footgun with speed and ease. Let’s continue typing:

The cursor I want

(I don’t really need this specific thing, mind you, dear reader – it’s already built into sp_Blitz – but I’m just using this as an example for something a client asked me to do.) Now that I’ve clearly defined the comment, SSMS starts writing the code for me. I’m going to just tab my way through this, taking SSMS’s code completion recommendations for everything from here on out, just so you can see what it coded for me:

The completed code

In a matter of seconds, just by hitting tab and enter to let AI code for me, it’s done! Not only did it write the cursor, but it wrote the dynamic SQL for me to do the task too. Now all I have to do is click execute, and:

Presto! The power of AI!

This right here is the part where you expect me to make an AI joke.

But let’s stop for a second and just appreciate what happened. All I needed SSMS to do was build a cursor for me, and it went WAY above and beyond that. It wrote dynamic SQL too, because it understood that to get the right CHECKDB date, the query has to run inside dynamic SQL. That’s pretty impressive. I don’t mind troubleshooting some dynamic SQL that, frankly, I probably would have written incorrectly the first time too!
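
If you’re reading this without the screenshots, here’s roughly the shape of that pattern – my own sketch, not a paste of Copilot’s output: a cursor over sys.databases that runs DBCC DBINFO inside dynamic SQL to grab each database’s last known good CHECKDB date.

-- Sketch only: loop through online databases and report each one's last known good CHECKDB date
DECLARE @DatabaseName sysname, @StringToExecute nvarchar(max);

CREATE TABLE #DBInfo (ParentObject varchar(255), [Object] varchar(255), Field varchar(255), [Value] varchar(255));

DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    /* DBCC DBINFO only reports on the current database, so it has to run inside dynamic SQL */
    SET @StringToExecute = N'USE ' + QUOTENAME(@DatabaseName) + N';
        INSERT INTO #DBInfo EXEC (''DBCC DBINFO() WITH TABLERESULTS, NO_INFOMSGS'');';
    EXEC sp_executesql @StringToExecute;

    SELECT @DatabaseName AS DatabaseName, [Value] AS LastKnownGoodCheckDB
    FROM #DBInfo
    WHERE Field = 'dbi_dbccLastKnownGood';

    TRUNCATE TABLE #DBInfo;
    FETCH NEXT FROM db_cursor INTO @DatabaseName;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;
DROP TABLE #DBInfo;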

Today, what we have is Baby’s First Code Completions. I can’t get angry about that – I’m elated about it, because we’ve never had code completions before, and now at least we have them! That’s fantastic, and it will absolutely make me more productive – in the places where I choose to use it, judiciously. I can’t rely on it to build whole tools for me out of nothing, but as an expert, using it to augment and speed things up, it’s helpful, period.

I expect it to get even better, quickly.

I’m not saying that because I’m optimistic or because I have inside information. Microsoft simply doesn’t have a choice, because the only AI model that SSMS v22.2.1 supports right now is GPT-4.1. That’s so old and underpowered that OpenAI is retiring it this month, so Microsoft is going to have to switch to a newer model – which will automatically give us better code completions.

You’ll see evidence of that in the code completion documentation, and under SSMS v22.2.1’s Tools > Options, under Text Editor, Code Completions:

Text completion settings

Because I installed the AI components of SSMS, I get a dropdown for Copilot Completions Model. That’s the brains of the operation, the cloud AI model that comes up with the ideas of what you’re trying to code, and codes it for you.

Today, as of this writing, the only option is GPT-4.1, the old and busted one. I’m excited to see which one(s) we get access to next. GitHub Copilot’s list of supported models is huge, and it includes some really heavy hitters that produce spectacular results, like Claude Opus 4.5 and Gemini 3 Pro.

Side note – if you’re on the free Copilot individual tier, you only get 2,000 code completions per month for free. You’re gonna wanna check the box in the above screenshot that says “Show code completions only after a pause in typing” – otherwise you’ll keep getting irrelevant suggestions like how to drop all your databases, ha ha ho ho, and you’ll run out of completion attempts pretty quickly.

So do it. Go update your SSMS, make sure to check the AI tools during the install, sign up for a free GitHub Copilot account if your company doesn’t already give you a paid one, configure SSMS with your Copilot account, and get with the program. You’ll thank me later when it starts auto-completing joins and syntax for you. It’s free, for crying out loud.


Microsoft Brings Copilot Studio Agents Directly Into Visual Studio Code


Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.

In today’s Cloud Wars Minute, I look at how Microsoft is helping developers build and scale AI agents safely inside Visual Studio Code.

Highlights

00:10 — The Microsoft Copilot Studio extension for Visual Studio Code is now generally available, providing developers with the ability to build and manage Copilot Studio agents directly within the IDE. This extension is designed for developers and integrates seamlessly into their workflows.

00:28 — It includes standard Git integration, pull-request-based reviews, auditability, and is tailored to the VS Code UX. The new extension reflects the growing complexity of agents and equips developers with the same best practices they use for app development, including, as Microsoft puts it, source control, pull requests, change history, and repeatable deployments.


01:02 — This extension really benefits developers when they need to manage complex agents, collaborate with multiple stakeholders, and ensure that any changes are made safely. It’s ideal for developers who prefer to build within their IDE while also having an AI assistant available to help them iterate more quickly and productively.

01:30 — The extension introduces important structural support for the development of AI agents. By integrating Copilot Studio directly into VS Code, Microsoft is empowering developers to build more efficiently, without compromising control, access for collaborators, or safety. This is a critical combination as AI agents become increasingly powerful and complex.

02:00 — As these agents continue to evolve, they require the same stringent checks and balances as traditional software. Microsoft’s Copilot Studio extension addresses this by giving developers the tools they need to scale agents responsibly while maintaining performance.




A New Mistral AI Model's Ultra-Fast Translation Gives Big AI Labs a Run for Their Money

“Too many GPUs makes you lazy,” says the French startup's vice president of science operations, as the company carves out a different path than the major US AI companies.

Code smells for AI agents: Q&A with Eno Reyes of Factory

Quality software still needs high-quality code, AI agents or not.

Beyond Pilot Purgatory


The hard truth about AI scaling is that for most organizations, it isn’t happening. Despite billions in investment, a 2025 report from the MIT NANDA initiative reveals that 95% of enterprise generative AI pilots fail to deliver measurable business impact. This isn’t a technology problem; it’s an organizational design problem.

The reason for this systemic failure is surprisingly consistent: Organizations isolate their AI expertise. This isolation creates two predictable patterns of dysfunction. In one model, expertise is centralized into a dedicated team—often called a Center of Excellence (CoE). While intended to accelerate adoption, this structure invariably becomes a bottleneck, creating a fragile “ivory tower” disconnected from the business realities where value is actually created. Business units wait months for resources, incentives become misaligned, and the organization’s overall AI literacy fails to develop.

In the opposite model, expertise is so distributed that chaos ensues. Autonomous business units build redundant infrastructure, hoard knowledge, and operate without coordinated governance. Costs spiral, incompatible technology stacks proliferate, and the organization as a whole becomes less intelligent than its individual parts.

Both approaches fail for the same underlying reason: They treat AI development as a separate activity from the core business.

The numbers confirm this struggle. Gartner predicts that 30% of GenAI projects will be abandoned after proof of concept by the end of 2025 due to poor data quality, inadequate risk controls, and escalating costs. McKinsey’s State of AI in 2025 report reveals that while adoption is high, only one-third of organizations have scaled AI enterprise-wide. Even fewer—just 5%, according to BCG—have built the capabilities to generate significant value at scale.

The organizations that have successfully scaled AI beyond this “pilot purgatory”—companies like JPMorganChase, Walmart, and Uber—didn’t choose between these broken models. They built a third way, discovering through pressure from reality that the only thing that works is an outcome-oriented hybrid architecture. This model combines centralized enablement with distributed execution, aggressive governance with operational autonomy, and technical excellence with a relentless focus on business value.

This isn’t abstract theory. The characteristics of these successful architectures are becoming clear enough to articulate—and specific enough to implement. Here is what actually works.

What Actually Works: Outcome-Oriented Hybrid Architecture

The organizations that have successfully scaled AI share surprising structural similarities—not because they all studied the same framework but because they independently discovered the same operating model through trial and error.

This model has several key characteristics:

Platform teams with product thinking, not project thinking

Rather than treating central AI infrastructure as a cost center or a research lab, successful organizations build it as an internal product with defined customers (the business units), success metrics, and a roadmap.

Airbnb’s “Bighead” platform exemplifies this. The team didn’t just build ML infrastructure; they built a product that product teams could consume. Standardized feature engineering, model training, and deployment pipelines reduced development time from months to weeks. The platform team measured success not by research excellence but by adoption rates and time-to-market reductions for dependent teams.

Uber’s Michelangelo platform followed a similar pattern: develop shared ML infrastructure, price it internally to make resource allocation explicit, measure platform adoption and the business impact of applications built on it, and evolve the platform based on actual usage patterns.

Implementation reality: Platform teams need authority to make technical decisions while remaining accountable for business adoption. They require sustained funding separate from individual project budgeting. They need internal customers who participate in roadmap planning. Most organizations struggle with this because platform thinking requires executives to invest in capability that won’t generate revenue for 18+ months.

Outcome-driven embedded specialists, not isolated teams

Successful organizations don’t ask centralized AI teams to deliver solutions. They embed AI specialists directly into business value streams where they co-own business outcomes.

A telecommunications company we studied restructured its 50-person AI CoE by embedding team members into four core business units. Instead of business units requesting AI solutions, they now had dedicated specialists sitting in weekly operations meetings, understanding real problems, building real solutions, and feeling the pressure of business metrics. The result? Deployment speed increased 60% and adoption tripled.

The model works because:

  • Embedded specialists develop tacit knowledge about business constraints and operational realities that remote teams can never have.
  • They face direct accountability for outcomes, aligning incentives.
  • They become translators between technical and business languages.

Implementation reality: Embedding requires letting go of centralized command-and-control. The embedded specialists report dotted-line to central leadership but are primarily accountable to business unit leadership. This creates tension. Managing that tension (not eliminating it) is essential. Organizations that try to eliminate tension by centralizing authority again lose the benefits of embedding.

Dynamic governance, not static policies

Traditional governance models assume relatively stable, predictable environments where you can write policies in advance and enforce them. AI systems exhibit emergent behavior that governance can’t predict. You need frameworks that adapt as you learn.

JPMorganChase demonstrates this through its multilayered governance approach:

  • The Centralized Model Risk team reviews all AI systems before production deployment using consistent technical standards.
  • Domain-specific oversight committees in lending, trading, and compliance understand business context and risk appetite.
  • Ongoing monitoring systems track model performance, drift, and unintended consequences.
  • Clear escalation protocols activate when algorithmic decisions fall outside acceptable parameters.
  • Continuous improvement mechanisms incorporate lessons from deployed systems back into policies.

Implementation reality: Dynamic governance requires specialists who combine technical AI expertise with organizational knowledge and the authority to make decisions. These are expensive, scarce roles. Most organizations underinvest because governance doesn’t appear as a direct cost center. It gets underfunded relative to its importance.


Capability building, not just capability buying

Organizations that scale AI sustainably invest heavily in building organizational AI literacy across multiple levels:

  • Frontline workers need basic understanding of how to use AI tools and when to trust them.
  • Team leads and domain experts need to understand what AI can and can’t do in their domain, how to formulate problems for AI, and how to evaluate solutions.
  • Technical specialists need deep expertise in algorithm selection, model validation, and system integration.
  • Executives and boards need enough understanding to ask intelligent questions and make strategic decisions about AI investment.

Implementation reality: Capability building is a multiyear investment. It requires systematic training programs, rotation opportunities, and senior engineers willing to mentor junior people. It requires tolerance for people operating at reduced productivity while they’re developing new capabilities.

Measuring What Matters

Organizations caught in pilot purgatory often measure the wrong things. They track model accuracy, deployment cycles, or adoption rates. These vanity metrics look good in board presentations but don’t correlate with business value. Successful organizations understand AI is a means to an end and measure its impact on the business relentlessly.

Business outcomes: Track AI’s direct impact on primary financial and customer metrics.

  • Revenue growth: Does AI increase cross-sell and upsell opportunities through hyperpersonalization? Does it improve customer retention and Net Promoter Score (NPS)?
  • Cost and efficiency: Does AI increase throughput, lower operational cycle times, or improve first-contact resolution rates in customer service?
  • Risk reduction: Does AI reduce financial losses through better fraud detection? Does it lower operational risk by automating controls or reducing error rates?

Operational velocity: This measures time-to-market. How quickly can your organization move from identifying a business problem to deploying a working AI solution? Successful organizations measure this in weeks, not months. This requires a holistic view of the entire system—from data availability and infrastructure provisioning to governance approvals and change management.

Value-realization velocity: How long after deployment does it take to achieve a positive ROI? Organizations that track this discover that technical integration and user adoption are often the biggest delays. Measuring this forces a focus not just on building the model but on ensuring it’s used effectively.

System resilience: When individual components fail—a key person leaves, a data source becomes unavailable, or a model drifts—does your AI capability degrade gracefully or collapse? Resilience comes from modular architectures, shared knowledge, and having no single points of failure. Organizations optimized purely for efficiency are often fragile.

Governance effectiveness: Is your organization proactively catching bias, drift, and unintended consequences, or are problems only discovered when customers complain or regulators intervene? Effective governance is measured by the ability to detect and correct issues automatically through robust monitoring, clear incident response procedures, and continuous learning mechanisms.

The Implementation Reality

None of this is particularly new or revolutionary. JPMorganChase, Walmart, Uber, and other successfully scaling organizations aren’t doing secret magic. They’re executing disciplined organizational design:

Start with business, not technology capability. Identify key business drivers and values that you measure, look at balance sheet levers, and see how AI can unlock value. Don’t build impressive systems for nonproblems.

Address technical debt first. You can’t deploy AI efficiently on fragile infrastructure. Many organizations waste 60%–80% of AI development capacity fighting integration problems that wouldn’t exist with better foundations. This doesn’t mean leaving speed behind but adopting a balanced infrastructure with clear integration points.

Design human-AI decision patterns intentionally. The most successful AI implementations don’t try to create fully autonomous systems. Instead, they create hybrid systems where algorithms handle speed and scale while humans maintain meaningful control. Commerzbank’s approach to automating client call documentation exemplifies this: Rather than replacing advisors, the system freed them from tedious manual data entry so they could focus on relationship-building and advice.

The pattern: AI proposes; rules constrain; humans approve; every step is logged. This requires API-level integration between algorithmic and rule-based processing, clear definitions of what gets automated versus what requires human review, and monitoring systems that track override patterns to identify when the algorithm is missing something important.

Invest heavily in governance before scaling. Don’t treat it as an afterthought. Organizations that build governance structures first scale much faster because they don’t have to retrofit controls later.

Embed AI expertise into business units but provide platform support. Neither pure centralization nor pure distribution works. The hybrid model requires constant attention to balance autonomy with coordination.

Accept that 18–24 months is a realistic timeline for meaningful scale. Organizations expecting faster transformations are usually the ones that end up with integration debt and abandoned projects.

Build organizational capability, not just buy external talent. The organizations that sustain AI advantage are those that develop deep organizational knowledge, not those that cycle through external consultants.

Why This Still Matters

The reason organizations struggle with AI scaling isn’t that the technology is immature. Modern AI systems are demonstrably capable. The reason is that scaling AI in the enterprise is fundamentally an organizational problem. Scale requires moving AI from skunkworks (where brilliant people build brilliant systems) to operations (where average people operate systems reliably, safely, and profitably).

That’s not a technology problem. That’s an operating-model problem. And operating-model problems require organizational design, not algorithm innovation.

The organizations that figure out how to design operating models for AI will capture enormous competitive advantages. The organizations that continue bolting AI onto 1980s organizational structures will keep funding pilot purgatory.

The choice is structural. And structure is something leadership can control.




OpenClaw: An AI Lobster That Gets Work Done


OpenClaw is an open‑source AI assistant you run on your own machine. It works inside the chat apps you use and can learn your habits. People in early 2026 paid attention because it runs on your device and you can extend it as needed. It also has a lobster mascot.

What makes OpenClaw different?

OpenClaw works more like a coworker than another chat window. Three features make it stand apart:

  • Full system access. Unlike typical chatbots, OpenClaw isn’t confined to a browser tab. It runs as a service on your computer. It can read and write files, run shell commands and execute scripts. It can organize your photo library, refactor code or set up cron jobs without your direct oversight.
  • Persistent memory. During onboarding it asks for your name, time zone and a few preferences. OpenClaw remembers conversations, notices patterns in your behaviour and builds context over time. Users compare its memory to keeping daily notes in Obsidian. The assistant stores its memory as plain Markdown files on disk.
  • Proactive heartbeat. OpenClaw can wake up by itself to run background tasks. Its “heartbeat” can check your inbox, send morning summaries or run automations without a prompt. This feature makes it more like a background service than a simple program.

It can control browsers, fill out forms and collect data from websites. It supports many messaging channels, such as Telegram, WhatsApp, Discord, Slack and iMessage. You can chat in the app you like. A community builds skills and plugins. The assistant can even generate new skills to tackle tasks.

Before you install

OpenClaw has deep access to your computer, so it’s best to set it up on a dedicated machine or a virtual private server. You’ll need:

  • Supported operating system: macOS, Linux, or Windows via WSL 2. A Raspberry Pi or a spare Mac mini can host it.
  • Node.js ≥ 22: The installer will install Node if it’s missing.
  • API keys: You need credentials for a large language model provider such as Anthropic or OpenAI. If you want web searches, get a Brave Search API key.

Quick install

The simplest way to get started is with the official installer script. It installs all dependencies, sets up the CLI and runs the onboarding wizard:

# On macOS or Linux
curl -fsSL https://openclaw.ai/install.sh | bash

# On Windows (PowerShell)
iwr -useb https://openclaw.ai/install.ps1 | iex

The script installs Node if it’s missing. It downloads OpenClaw and runs the onboarding wizard. After it finishes, check the CLI with openclaw doctor. View the gateway status with openclaw status or openclaw health.
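
If you want to see those checks in one place, they look like this at the terminal (these are the same commands named above; output varies by machine):

# Verify the install after the script finishes
openclaw doctor    # checks the CLI installation
openclaw status    # shows the gateway status
openclaw health    # alternative gateway health check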

Alternative install methods

If you prefer to do things yourself or want to work on the code, there are other options:

  • Global npm install: If you already have Node installed, run npm install -g openclaw@latest. If the sharp dependency fails because of a global libvips, use SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install -g openclaw@latest. After installing, run openclaw onboard --install-daemon.
  • From source: Clone the repository, install dependencies, build the UI and run the onboarding wizard:
  git clone https://github.com/openclaw/openclaw.git
  cd openclaw
  pnpm install
  pnpm ui:build # installs UI dependencies on first run
  pnpm build
  pnpm run openclaw onboard

This method suits developers who want to work with OpenClaw’s internals.

Onboarding your lobster

Once OpenClaw is installed, run the onboarding wizard. Type:

openclaw onboard

The wizard guides you through creating a local gateway, setting up your workspace, connecting messaging channels and installing system services. There are two paths:

  • QuickStart: Picks sensible defaults. It uses a local gateway bound to localhost, a default workspace folder, port 18789 and an auto‑generated auth token. It does not use Tailscale and uses an allow‑list for new contacts.
  • Advanced: Lets you customise more. You can set the workspace location, network binding, daemon installation and choose which channels and skills to enable.

What does the wizard configure?

During onboarding you’ll be prompted for:

  1. Model provider and authentication: choose an Anthropic API key, an OpenAI Codex subscription, a plain OpenAI API key or another provider such as Moonshot or MiniMax.
  2. Workspace location: by default it stores files under ~/.openclaw/workspace. This folder holds configuration files, memory notes and skills.
  3. Gateway settings: choose the port. Decide if the gateway binds to localhost or a specific IP. Decide whether to expose it via Tailscale. Even on loopback, keep an auth token so clients must authenticate.
  4. Messaging channels: connect Telegram, WhatsApp, Discord, Google Chat, Signal or iMessage by providing bot tokens or scanning QR codes. Unknown senders get a pairing code; approve them with openclaw pairing approve <channel> <code>.
  5. Daemon installation: on macOS it installs a LaunchAgent; on Linux or WSL 2 it installs a systemd user unit so the assistant keeps running when you log out.
  6. Skills: the wizard can install some skills without extra steps. You can add more later via openclaw skills install <package>.

If you skip onboarding (for example by passing the --no-onboard flag), you can run openclaw onboard --install-daemon at any time.

Configuring web tools and search

OpenClaw has built‑in tools for browsing and web search. To enable Brave Search, run:

openclaw configure --section web

Then paste your Brave Search API key. Without a key, the agent falls back to a basic web fetch tool.

Talking to your assistant

Once the gateway is running, you can chat with your lobster in several ways:

  1. Control UI: To chat through a web dashboard, run openclaw dashboard after onboarding. It launches a local web interface.
  2. Messaging apps: Connect your chat service during onboarding. After pairing, send messages as you do. OpenClaw supports WhatsApp, Telegram, Discord, Slack, Signal, iMessage and more. If you turn on text‑to‑speech, it can send audio messages.
  3. Command‑line: Use openclaw message <channel> <recipient> <message> to send a command, or openclaw browser <url> to have the agent browse a site; a couple of examples follow this list. There are also commands for viewing memory, installing skills and checking health. See the CLI reference for details.
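
For instance, the two CLI commands above might be invoked like this (the channel, recipient, message and URL are made‑up placeholders, not values from the docs):

# Hypothetical examples – substitute your own channel, recipient and URL
openclaw message telegram "@yourname" "Summarize today's unread email"
openclaw browser https://example.com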

What can OpenClaw do?

Because OpenClaw has system access and persistent memory, it can handle many tasks. Here are a few ideas to get you started:

  • Inbox and calendar management: ask it to monitor your email, triage messages and draft responses. It can reschedule meetings by checking your calendar and sending confirmation messages.
  • Daily briefings and reports: set up a morning summary and send it via your chat app with text and audio.
  • File organization and backups: have it compare local folders to cloud backups, move files around or clean up your downloads directory.
  • Custom automations: use the skills system to build custom workflows. For example, you could create a skill that transcribes voice messages using the Whisper API or set up a cron job that watches an RSS feed and triggers a workflow.
  • Code and scripting tasks: it can write and execute scripts, refactor code or set up development environments.

When you need something new, ask the assistant to “install a skill for X” or write one for itself. OpenClaw can search the skills hub and install or create the code.

Stay safe and be mindful of costs

Giving an AI agent access to your machine is risky. Misconfigured gateways or unverified skills may expose sensitive files or API keys. Run the wizard on a machine you control, keep your software updated and pair with known contacts. Start with read‑only permissions and increase them as you gain trust.

OpenClaw is free to download, but it uses paid APIs for model inference, web search and voice features. These costs can add up if you leave the agent running all the time or use large models. Monitor your usage and set limits in your API provider’s dashboard.


OpenClaw is not just another chatbot. It runs on your machine and uses large language models. It keeps a memory of your interactions and can schedule tasks. The install process is short, and the onboarding wizard guides you through configuration. With care and thoughtful use, OpenClaw can be a reliable personal assistant.

Whether you are a developer, a productivity enthusiast or just curious, OpenClaw may interest you. Name your new assistant and avoid running it on the same machine that holds sensitive documents.


