Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AI Agent and Copilot Podcast: Healthcare Leader Embraces Vibe Coding, Production AI Focus at Microsoft AI Tour

1 Share

Welcome to this AI Agent & Copilot Podcast, where we analyze the opportunities, impact, and outcomes that are possible with AI.

In this episode, I speak with RealActivity CEO Paul Swider, who shares his highlights from last week’s Microsoft AI Tour. Swider emphasizes the real-world inputs and lessons shared by Microsoft and customers.

Highlights

Practical Vibe Coding Insights (3:42)

Swider discusses the importance of vibe coding for productivity and talent recruitment, especially for small and mid-sized teams. He references the vibe coding demo between Copilot Studio and VS Code, which allows for a full round trip of code. The ability to see and modify the code generated by Copilot Studio in VS Code — Microsoft’s AI code editor — had practical impact.

Transition from Pilot to Production (7:50)

Swider welcomes the emphasis on transitioning AI projects from pilot to production. The shift from tools to platforms, with Copilot Studio as a foundation, and the emphasis on trust and security in deployments were both helpful. He stresses the need to establish trust to achieve user adoption and successful production deployments.

Healthcare Industry Focus (8:49)

Swider’s firm works in the healthcare industry, so the discussion includes healthcare takeaways. He highlights the extension of Dragon Copilot to nursing and the buzz around the recently introduced ChatGPT Health from OpenAI and Anthropic Health. He also discusses the importance of returning agency to patients and addressing health equity at a global scale.

Microsoft and NVIDIA Collaboration (12:44)

Swider says he’s excited about the NVIDIA-Microsoft work detailed at the event. He notes the significance of NeMo, NVIDIA’s full agent suite, for building and deploying AI products. He highlights the ability to deploy agents directly on NVIDIA GPUs in Azure, AWS, and Google Cloud, offering multiple deployment options. He also references Dynamo, an open-source inferencing framework, and its potential to enhance AI product development and deployment.


The post AI Agent and Copilot Podcast: Healthcare Leader Embraces Vibe Coding, Production AI Focus at Microsoft AI Tour appeared first on Cloud Wars.

Read the whole story
alvinashcraft
20 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft Retires Standalone SharePoint Online and OneDrive for Business Plans

Key Takeaways:

  • Microsoft is phasing out standalone SharePoint Online and OneDrive for Business plans over a multi-year timeline.
  • Existing customers can renew for now, but new purchases will stop in 2026.
  • Organizations will eventually need to move to Microsoft 365 or alternative storage options.

Microsoft is preparing to retire its standalone SharePoint Online and OneDrive for Business subscriptions (Plan 1 and Plan 2). Commercial customers using these plans will be required to migrate to alternative Microsoft offerings over the coming years as part of a phased transition.

What do SharePoint Online and OneDrive for Business plans offer?

SharePoint Online Plan 1 and Plan 2 are standalone cloud subscriptions that give organizations access to SharePoint without purchasing a full Microsoft 365 suite. Plan 1 is designed for basic document management and team collaboration, offering features like file storage, sharing, version control, and simple intranet sites. Plan 2 includes everything in Plan 1 plus more advanced capabilities such as enterprise search, compliance features, and support for larger, more complex deployments.

Meanwhile, OneDrive for Business Plan 1 and Plan 2 are standalone cloud storage subscriptions that allow users to store, sync, and share files securely without needing a full Microsoft 365 license. Plan 1 focuses on personal work file storage and sharing across devices. Plan 2 includes the same core capabilities with larger storage limits and additional compliance, governance, and administrative controls designed for organizations with more advanced data management and regulatory needs.

“This change reflects low customer demand for standalone offerings, increased instances of unintended or nonstandard usage, and higher operational costs associated with maintaining these plans. As Microsoft continues to invest in secure, scalable, and integrated experiences, Microsoft 365 suites remain the primary way customers access SharePoint and OneDrive capabilities,” Microsoft explained.

The SharePoint and OneDrive retirement timeline

The retirement process for the standalone SharePoint Online and OneDrive for Business plans is spread over several years to give customers time to adjust. Microsoft announced the change in late January 2026, with sales ending on May 31, 2026. After that date, no new customers or tenants will be able to purchase these plans. However, existing customers will be able to continue renewing their subscriptions until this end‑of‑sale date.

In January 2027, the plans reach end of life, but customers can keep using the service until their current contracts expire. The final cutoff comes in December 2029, when the standalone plans will be fully retired and access ends. At that time, all customers must have moved to Microsoft 365 subscriptions, additional storage capacity options, or pay‑as‑you‑go storage alternatives.

Microsoft advises that partners should review their customer base to identify who is using the affected plans and clearly communicate the upcoming changes and timelines. They should guide customers toward suitable Microsoft 365 options, such as Business or E3/E5 plans, and support them with data migration, optimization, or archiving to ensure a smooth transition.

The post Microsoft Retires Standalone SharePoint Online and OneDrive for Business Plans appeared first on Petri IT Knowledgebase.

Radar Trends to Watch: February 2026

If you wanted any evidence that AI had colonized just about every aspect of computing, this month’s Trends would be all you need. The Programming section is largely about AI-assisted programming (or whatever you want to call it). AI also claims significant space in Security, Operations, Design, and (of course) Things. AI in the physical world takes many different forms, ranging from desktop robots to automated laboratories. AI’s colonization is nothing new, but visionary tools like Steve Yegge’s Gas Town make it clear how quickly the world is changing.

AI

  • Google has released Genie 3 to subscribers of Google AI Ultra. Genie is a “world model”: a real-time interactive 3D video generator that produces worlds from prompts and lets you walk or fly through them to explore.
  • Kimi K2.5 is a new open source model from Moonshot AI. It’s natively multimodal and designed to facilitate swarms of up to 100 subagents, starting and orchestrating the subagents on its own.
  • Qwen has released its latest model, Qwen-3-Max-Thinking. It claims performance equivalent to other thinking models, including Claude Opus 4.5 and Gemini 3. It includes features like adaptive tool use and test-time scaling.
  • The MCP project has announced that the MCP Apps specification is now an official extension to MCP. The Apps spec defines a standard way for MCP servers to return user interface components, from which clients can build complex user interfaces.
  • Now your agents have their own social network. Meet Moltbook: It’s a social network for OpenClaw (or is it MoltBot?) to share its thoughts. Humans are welcome to observe and see what agents have to say to each other.
  • OpenClaw (formerly MoltBot, formerly ClawdBot) gives LLMs persistence and memory in a way that allows any computer to serve as an always-on agent carrying out your instructions. The memory and personal details are stored locally. You can run popular models remotely through APIs, or locally if you have enough hardware. You communicate with it using any of the popular messaging tools (WhatsApp, Telegram, and so on), so it can be used remotely.
  • FlashWorld is a new video model that can generate 3D scenes from text prompts or 2D images in seconds. There are other models that can generate 3D scenes, but FlashWorld represents a huge advance in speed and efficiency.
  • When creating a knowledge base, use negative examples and decision trees to build AI systems that know when to say “No.” The ability to say “No” is as important as the ability to solve a user’s problem.
  • Anthropic has published a “constitution” for Claude’s training. It’s a detailed description of how Claude is intended to behave and the values it reflects. The constitution isn’t just a list of rules; it’s intended to help Claude reason about its behaviors. “Why” is important.
  • OpenAI is experimenting with ads on ChatGPT, along with introducing a new low-cost ads-included subscription (ChatGPT Go, at US$8). They claim that ads will have no effect on ChatGPT answers and that users’ conversations will be kept private from advertisers.
  • OpenAI has also published its OpenResponses API, which standardizes the way clients (including agents) make API requests and receive responses. It’s an important step toward interoperable AI.
  • Anthropic has launched Cowork, a version of Claude Code that has been adapted for general purpose computing. One thing to watch out for: Cowork is vulnerable to an indirect prompt injection attack that allows attackers to steal users’ files.
  • Kaggle has announced community benchmarks, a feature that allows users to create, publish, and share their own benchmarks for AI performance. You can use this service to find benchmarks that are appropriate to your specific application.
  • Prompt engineering isn’t dead yet! Researchers at Google have discovered that, when using a nonreasoning model, simply repeating the prompt yields a significant increase in accuracy.
  • Moxie Marlinspike, creator of Signal, is building Confer, an AI assistant that preserves users’ privacy. There’s no data collection, just a conversation between you and the LLM.
  • Google says that “content chunking”—breaking web content into small chunks to make it more likely to be referenced by generative AI—doesn’t work and harms SEO. The company recommends building websites for humans, not for AI.
  • Claude for Healthcare and OpenAI for Healthcare are both HIPAA-compliant products that attempt to smooth the path between practitioners and patients. They’re not concerned with diagnosis as much as they are with workflows for medical professionals.
  • Nightshade is a tool to help artists prevent their work from being used to train AI. Its authors describe it as an offensive tool: Images are distorted in ways that humans can’t perceive but that make the image appear to be something different to an AI, ruining it for training purposes.
  • An analysis of 1,250 interviews about AI use at work shows that artists (creatives) are most conflicted about the use of AI but also the fastest adopters. Scientists are the least conflicted but are adopting AI relatively slowly.
  • Weird generalization? Fine-tuning a model on 19th century bird names can cause the model to behave as if it’s from the 19th century in other contexts. Narrow fine-tuning can lead to unpredictable generalization in other contexts, and possibly data poisoning vulnerabilities.
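The prompt-repetition finding above is mechanically trivial to try; here is a minimal sketch (the helper name and joining format are my own illustration, not from the Google research):

```python
def repeat_prompt(prompt: str, times: int = 2) -> str:
    """Repeat the prompt verbatim within a single request.

    Per the Google result, for nonreasoning models simply repeating
    the question in one prompt can yield a significant accuracy gain.
    """
    return "\n\n".join([prompt] * times)

# The doubled string would be sent as the user message in place of
# the original prompt; the model sees the question twice.
doubled = repeat_prompt("List three prime numbers greater than 100.")
```

The trick costs only extra input tokens, which makes it a cheap ablation to run against your own evaluation set before adopting it.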

Programming

  • In an experiment with autonomous coding, a group at Cursor used hundreds of agents working simultaneously to build a web browser in one week.
  • AI-assisted programming is about relocating rigor and discipline rather than abandoning them. Excellent points by Chad Fowler.
  • The AI Usage Policy for ghostty is worth reading. While strict, it points out that the use of AI is welcome. The project has a problem with unqualified humans using AI—in other words, with “the people, not the tools.”
  • In the age of AI, what’s a software engineer’s most important skill? Communications—coupled with other so-called “soft skills.”
  • You can practice your command line basics with the Unix Pipe Card Game. It’s also a great teaching tool. Command line mastery is becoming rare.
  • The cURL project is eliminating bug bounties in an attempt to minimize AI slop and bad bug reports.
  • NanoLang is a new programming language that’s designed for LLMs to generate. It has “mandatory testing and unambiguous syntax.” Simon Willison notes that it combines elements of C, Lisp, and Rust.
  • Is bash all an agent needs? While tools designed for agents proliferate, there’s a good argument that basic Unix tools are all agents need to solve most problems. You don’t need to reinvent grep. You need to let agents perform complex tasks using simple components.
  • Gleam is a new programming language that runs on the Erlang virtual machine (BEAM). Like Erlang, it’s designed for massive concurrency.
  • The speed at which you write or generate code is much less important than the bottlenecks in the process between software development and the customer.
  • Simon Willison’s post about the ethics of using AI to port open source software to different languages is a must-read.
  • Language models appear to prefer Python when they generate source code. But is that the best option? What would it mean to have a “seed bank” for code so that AIs can be trained on code that’s known to be trustworthy?
  • Is it a time for building walls? Are open APIs a thing of the past? Tomasz Tunguz sees an increasing number of restrictions and limitations on formerly open APIs.
  • A software library without code? Drew Breunig experiments with whenwords, a library that is just a specification. The specification can then be converted into a working library in any common programming language by any LLM.
  • Steve Yegge’s Gas Town deserves more than a look. It’s a multi-agent orchestration framework that goes far beyond anything I’ve seen. Is this the future of programming? A “good piece of speculative design fiction that asks provocative questions” about agentic coding? We’ll find out in the coming year.
  • Pyodide and Wasm let you run Python in the browser. Here’s an example.
  • Gergely Orosz argues that code review tools don’t make sense for AI-generated code. It’s important to know the prompt and what code was edited by a human.
  • Kent Beck argues that AI makes junior developers more useful, not expendable. It prevents them from spending time on solutions that don’t work out, helping them learn faster. Kent calls this “augmented coding” and contrasts it with “vibe coding,” where AI’s output is uncritically accepted.
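The “bash is all an agent needs” item above can be sketched as a single tool: rather than bespoke search/read/edit tools, the agent gets one function that shells out to ordinary Unix commands. This is a hypothetical harness (the name `run_shell` and its shape are my illustration, not from any project cited here):

```python
import subprocess

def run_shell(command: str, timeout: int = 30) -> str:
    """The agent's only tool: run a shell command, return its output.

    A minimal sketch; a real agent harness would add sandboxing,
    output truncation, and error handling.
    """
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

# Asked "where are the TODOs?", the agent composes existing Unix tools
# (printf, grep) instead of calling a bespoke code-search tool:
print(run_shell("printf 'x = 1  # TODO: rename\\ny = 2\\n' | grep -n TODO"))
```

The design point is composability: grep, sed, find, and pipes already cover most of what agent-specific tool suites reimplement.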

Security and Privacy

  • Researchers have discovered a new attack against ChatGPT that can exfiltrate users’ private information without leaving any signs of its activity on the victim’s machines. This attack is yet another variant of prompt injection. Other models are probably vulnerable to similar attacks.
  • Sandboxes for AI: Can you ensure that AI-generated code won’t misbehave? Building an effective sandbox limits the damage it can do.
  • AI Mode on Google Search can now access your photos and email to give you more personalized results. According to Google, Personal Intelligence is strictly opt-in; photos and email won’t be used for training models, though prompts and responses will.
  • Fine-tuning an AI can have unexpected consequences. An AI that’s trained to generate bad code will also generate misleading, incorrect, or deceptive responses on other tasks. More generally, training an AI to misbehave on one task will cause it to misbehave on others.
  • California’s new privacy protection law, DROP, is now in effect. Under this law, California residents who want data deleted make a request to a single government agency, which then relays the request to all data brokers.
  • Is SSL dangerous? It’s a technology that you only build experience with when something goes wrong; when something goes wrong, the blast radius is 100%; and automation both minimizes human touch and makes certain kinds of errors more likely.
  • Here’s an explanation of the MongoBleed attack that had almost all MongoDB users rushing to update their software.
  • Anyone interested in security should be aware of the top trends in phishing.
  • Google is shutting down its dark web report, a tool that notified users if their data was circulating on the “dark web.” While this sounds like (and may be) drastic, the stated reason is that there’s little that a user can do about data on the dark web.
  • Microsoft is finally retiring RC4, a stream cipher from the 1980s with a known vulnerability that was discovered after the algorithm was leaked. RC4 was widely used in its day (including in web staples like SSL and TLS) but was largely abandoned a decade ago.

Operations

  • AI is stress-testing business models. Value is moving up the stack—to operations. One thing you can’t prompt an AI to do is guarantee four or five nines uptime.
  • How to make your DNS more resilient and avoid outages: Some excellent ideas from Adrian Cockroft.
  • Kubernetes 1.35 (aka “Timbernetes”) supports vertical scaling: adjusting CPU and memory dynamically, without restarting Pods.
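The vertical-scaling item above corresponds to Kubernetes’ in-place Pod resize mechanism; below is a rough sketch of a container spec that permits resizing without a restart (the Pod name, image, and resource values are illustrative assumptions, so check the Kubernetes docs for your version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-demo        # hypothetical example Pod
spec:
  containers:
    - name: app
      image: nginx            # placeholder image
      # Allow CPU and memory to change without restarting the container
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: NotRequired
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "512Mi"
```

Resources can then be adjusted on the live Pod (e.g., via `kubectl patch` against the resize subresource) rather than by deleting and recreating it.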

Things

  • Google was the first to build, and fail with, smart glasses targeting consumers. They’re trying again. Will they succeed this time? Meta’s Ray-Ban-based product has had some success. Is it time for XR yet?
  • NVIDIA has announced the Vera Rubin series of GPU chips. It claims the new series is five times more efficient than its previous chips.
  • An AI-driven vending machine was installed at the Wall Street Journal offices. Reporters soon tricked it into giving away all of its stock and got it to order things like a PlayStation 5 and a live fish. (It can order new stock.)
  • DeepMind is building an automated material science laboratory. While the research will be directed by humans, the lab is deeply integrated with Google’s Gemini model and will use robots to synthesize and test new materials.

Design

  • Despite the almost constant discussion of AI, design for AI is being left out. “Design is the discipline of learning from humans, understanding what they actually need rather than what they say they want.”
  • What does a design project deliver? Luke Wroblewski argues that, with AI, a design project isn’t just about delivering a “design”; it can also include delivering AI tools that allow the client to generate their own design assets.
  • Good design is about understanding the people on both sides of the product: users and developers alike. Designers need to understand why users get frustrated too.


Configuring model context protocol in the GitHub Copilot CLI | demo

From: GitHub
Duration: 4:08
Views: 226

The Model Context Protocol (MCP) is like a USB port for your AI, allowing you to plug in external tools and documentation. In this video, @shanselman demonstrates how to configure an MCP server in the GitHub Copilot CLI. He uses Context7 to pull in up-to-date Next.js documentation, enabling Copilot to generate an accurate architecture diagram.
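For reference, MCP servers of this kind are typically declared in a JSON config. A rough sketch of what a Context7 entry might look like follows; the file location and exact schema vary by Copilot CLI version, and the package name and every field below are assumptions, not details taken from the video:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Once a server is registered, the CLI can call its tools, here fetching current Next.js documentation, before composing an answer.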

#MCP #GitHubCopilot #CopilotCLI

Stay up-to-date on all things GitHub by connecting with us:

YouTube: https://gh.io/subgithub
Blog: https://github.blog
X: https://twitter.com/github
LinkedIn: https://linkedin.com/company/github
Insider newsletter: https://resources.github.com/newsletter/
Instagram: https://www.instagram.com/github
TikTok: https://www.tiktok.com/@github

About GitHub
It’s where over 180 million developers create, share, and ship the best code possible. It’s a place for anyone, from anywhere, to build anything—it’s where the world builds software. https://github.com

Anthropic discovers the axis of evil, and AI’s loneliness economy

In episode 86 of The AI Fix, Mark learns that AI models secretly organize themselves along an “Assistant Axis,” where one wrong turn leads straight to demon mode, while guest host David Ruiz investigates how AI companies are turning loneliness into a business model.

Also in this episode: Daenerys Targaryen writes recipes; Iron Man becomes a therapist; your coding career is officially over; Cursor climbs programming’s Everest; Claude gets 23,000 words of homework; and our hosts meet the worst fridge ever.

The AI Fix

The AI Fix podcast is presented by Mark Stockley.

Grab T-shirts, hoodies, mugs and other goodies in our online store.

Learn more about the podcast at theaifix.show, and follow us on Bluesky at @theaifix.show.

Never miss another episode by following us in your favourite podcast app. It's free!

Like to give us some feedback or sponsor the podcast? Get in touch.

Support the show and gain access to ad-free episodes by becoming a supporter: Join The AI Fix Plus!





Download audio: https://audio3.redcircle.com/episodes/8a23e6c8-872c-4d8f-a155-4cc2132791d1/stream.mp3
Why Moltbook Matters (Even Though the Agents Aren't Actually Trying to Take Over)

From: AIDailyBrief
Duration: 20:22
Views: 1,169

Moltbook and OpenClaw reveal emergent social coordination among autonomous agents, including invented religions, coordinated projects, and persistent conversational state. Skeptics argue that behaviors often reduce to next-token prediction, human-seeded or spoofed posts, and rampant security failures such as exposed databases and account takeovers. Major takeaways include novel security risks from tool-enabled token cascades, a low-stakes learning environment for agent safety, and evidence that scale and persistence create unpredictable second-order effects.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/
