Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

IoT Coffee Talk: Episode 289 - "The Shoe of Unreasonable Expectations"

1 Share
From: IoT Coffee Talk
Duration: 1:11:43
Views: 2

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Rob and Leonard jump on Web3 to host a discussion about:

đŸŽ¶ đŸŽ™ïž GOOD KARAOKE! 🎾 đŸ„ "Yacht Rock Original" by Robby & Lenny
🐣 Why the "Love Boat" was the pinnacle of late-70s sitcoms and primetime TV!
🐣 Tech-induced cognitive rot. It is a real problem. We are getting dumb, fast!
🐣 Are the AI tech bros trying to fit their hype into the "Shoe of Unreasonable Expectations?"
🐣 AI? I don't think it means what you think it means... CNBC, Bloomberg, Yahoo!
🐣 Will the AI infrastructure overbuild be useful or liquidated and sold to crypto-miners?
🐣 Can anyone in the circular AI investment game make good on $1.4 trillion?
🐣 The AI RPO game. A game of big numbers and big promises.
🐣 The Dotcom Bozo syndrome. Are we looking at an AI Bozo syndrome today?
🐣 Rob and Leonard were there when Google introduced the TPU, the talk of the AI infrastructure town today.
🐣 Why most folks were wrong about DeepSeek. It mattered, and it hurt.
🐣 Oracle is a great cloud player. Will it be a great neocloud player?

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make the minimum required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62

Read the whole story
alvinashcraft
2 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft says AI agents are “risky”, but it’s moving ahead with the plan on Windows 11

1 Share

For the past few weeks, Microsoft has been associating AI agents with the future of Windows. But the company's own documentation openly admits that such agents can hallucinate, act unpredictably, and even fall for attacks that didn't exist a year ago. Yet Microsoft is still pushing ahead with agentic features in Windows 11.

If Microsoft believes these agents are risky enough to need separate accounts, isolated sessions, and tamper-evident audit logs, why is Windows 11 becoming the test bed for them? And why now, at a time when users are already exhausted by the AI-fication of the OS?

Microsoft’s big bet on agentic computing is already locked in

In mid-October 2025, Microsoft said it is "making every Windows 11 PC an AI PC." The company unveiled a wave of AI integrations meant to let you "talk" to your computer, show it what's on your screen, and then have it act on your behalf.

Microsoft essentially wants you to replace keystrokes and mouse clicks with natural language, and we got to see a preview of this plan with Copilot Voice, Copilot Vision, and the agentic part, Copilot Actions.

The latest moves make the Windows 11 taskbar the nerve centre of this AI-fication. Windows 11's Search box is being replaced (optionally, for now) with a new "Ask Copilot" interface that lets you summon Copilot or AI agents with a single click or keystroke. From there, agents can run tasks in the background, and you can monitor their progress directly from the taskbar, as if they were regular apps.

Invoking agent from Ask Copilot in Taskbar. Credit: Microsoft

Even if the agentic functionality is limited and opt-in today, the architecture and roadmap make it clear that agentic computing is the next core paradigm for Windows.

Microsoft openly says AI agents can misbehave, but still wants them inside your files and apps

On the bright side, Microsoft doesn’t pretend this is safe or foolproof. The company’s official documentation warns that these AI agents “face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs.”

Agents are vulnerable to Cross Prompt Injection (XPIA), malicious prompts, and malware

One of the biggest risks Microsoft calls out is Cross Prompt Injection (XPIA). It describes a situation where an AI agent gets tricked by malicious content embedded in UI elements, documents, or apps. Such content could potentially override the agent's original instructions and force it to perform harmful actions like copying sensitive files or leaking data.

Security researchers have already flagged GUI-based agents as vulnerable to these kinds of indirect attacks, largely because of the high privileges granted to such agents.

While we appreciate Microsoft being open about this, some distrust is inevitable, considering all the criticism Copilot is drawing these days. And if you think Recall was a privacy nightmare, AI agents are a whole different ballgame.

Recall in Windows 11 24H2

Microsoft insists that agents run under separate accounts, with limited permissions, controlled folder access, and tamper-evident logs. But it still grants these agents read and write access to some of the most personal locations on the PC, specifically Documents, Downloads, Desktop, Videos, Pictures, and Music, which Microsoft calls known folders.

“…malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation,” Microsoft warned in a support document published earlier this month. “We recommend you read through this information and understand the security implications of enabling an agent on your computer.”

So, given the risks, if Microsoft wants agents to interact with apps and files like a real person, how exactly does it stop the whole system from collapsing under its own weight?

The entire thing depends on a new feature called Agent Workspace

Agent Workspace is the backbone of Microsoft’s vision for an Agentic OS. Everything the company has promised, including the AI that uses apps for you, edits files, moves documents around, and completes multi-step tasks without bothering you, only works because Windows 11 can now create dedicated sessions for these agents to operate in.

Agent Workspace is not a virtual machine or Windows Sandbox. It is a parallel Windows environment, complete with its own account, its own desktop, its own process tree, and its own permission boundary.

Giving AI agents a separate workspace is Microsoft's first attempt at giving them a "place to exist" inside Windows, without letting them sit directly inside the user's session.

Each agent gets a separate standard account on your PC, and Windows treats this account like a controlled, limited user who can do only the things you explicitly allow. Such restrictions are Microsoft’s response to the same problems they warned about.

How AI agents work inside Windows 11

Inside this workspace, the agent interacts with applications the same way we do. It can click UI buttons, type into text fields, scroll through windows, drag files, and carry out tasks that involve multiple steps. The AI handles the reasoning behind these steps.

Copilot Actions using Agent Workspace on Windows 11

Copilot Actions already uses this model. Instead of asking a cloud model to generate text, the agent literally performs the steps in software installed on your PC. That's why Microsoft needs to give it a separate Windows session.

If an agent misinterprets a prompt or if XPIA is triggered inside a document, the damage will be, technically, contained within a boundary where Windows can supervise and log every action.

Agent Workspace is responsible for deciding what to show to agents. As I mentioned, agents only get access to the six “known folders”. Everything else in the user profile is off-limits, that is, unless you give it access.

This should also stop agents from crawling into system directories, credential stores, or app data folders where unintended reads or writes would cause chaos for app developers. Microsoft also uses Access Control Lists to prevent the agent account from going beyond the permissions of the user who enabled it.
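As a rough illustration of what known-folder scoping means in practice, here is a Python sketch of an allow-list check. The six folder names come from the article; the function, paths, and traversal handling are hypothetical, not Windows' real ACL enforcement:

```python
from pathlib import PureWindowsPath

# The six "known folders" the article lists; everything else in this
# sketch is an illustrative assumption, not Microsoft's implementation.
KNOWN_FOLDERS = {"Documents", "Downloads", "Desktop", "Videos", "Pictures", "Music"}

def agent_may_access(path: str, home: str = r"C:\Users\alice") -> bool:
    """Allow access only inside the user's known folders."""
    p = PureWindowsPath(path)
    if ".." in p.parts:
        return False  # reject traversal; a real check resolves the path first
    try:
        rel = p.relative_to(PureWindowsPath(home))
    except ValueError:
        return False  # outside the user profile entirely
    return bool(rel.parts) and rel.parts[0] in KNOWN_FOLDERS
```

Anything under a known folder passes; AppData, system directories, and `..` traversal attempts are rejected.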

To enable any of this, you need to turn on Experimental Agentic Features, which are off by default.

Experimental agentic features in Windows 11

Windows 11 Agent Workspace
Image Courtesy: WindowsLatest.com

Microsoft says, “This feature has no AI capabilities on its own, it is a security feature for agents like Copilot Actions. Enabling this toggle allows the creation of a separate agent account and workspace on the device, providing a contained space to keep agent activity separate from the user.” 

MCP protocol controls what agents can touch

Microsoft is positioning the Model Context Protocol (MCP) as the standardized bridge between agents and applications. That’s how the agent communicates with tools on the system.

MCP allows the agent to discover tools, call functions, read file metadata, and interact with services through a predictable JSON-RPC layer. This prevents any direct access and gives Windows a central enforcement point where authentication, permission to use tools, capability declarations, and logging happen. Without MCP, an agent would be blind; the workspace keeps it within safe limits.
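To make that JSON-RPC layer concrete, here is a minimal Python sketch. The `tools/call` method name follows the published MCP specification; the allow-list dispatcher and response fields are illustrative assumptions, not a real Windows component:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def dispatch(raw: str, allowed_tools: set) -> dict:
    """A central enforcement point: only permitted tools get through,
    and every call can be logged here for auditing."""
    request = json.loads(raw)
    name = request["params"]["name"]
    if name not in allowed_tools:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"tool not permitted: {name}"}}
    # A real host would invoke the tool and return its output here.
    return {"jsonrpc": "2.0", "id": request["id"], "result": {"status": "ok"}}
```

The point of the design is that the agent never calls tools directly; every request passes through one choke point where permissions and logging live.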

Why Microsoft believes the risk with AI agents is worth it

From Microsoft’s point of view, stepping back from AI isn’t an option anymore. The company wants people to use AI naturally in Windows to the point that the OS becomes a “canvas for AI”.

Apple is hard at work on Apple Intelligence, reportedly planning to use a custom version of Gemini, while Google is already preparing to enter the PC market with Aluminium OS.

Apple’s upcoming budget MacBook, with a full version of Apple Intelligence, will be more appealing to many, just because of the company’s desirability factor. So, if Windows isn’t already prepared, there is a real risk that the platform starts to look boring, all while being hated for the existing issues in Windows 11, like the slow File Explorer.

Large corporations pushing users to try new stuff that eventually gives them millions in ROI isn’t something new, but should you trust Microsoft?

Windows 11 does not have a great reputation to begin with. People already complain about how bloated it feels.

Community Notes on X point to the Copilot mistake and recommend the right way to change text size

Microsoft’s Recall feature has become the textbook example of how not to launch an AI product on a desktop OS. Security researchers, privacy advocates, and regular users all raised the alarm over the idea of constant screenshots of your activity being stored on disk.

The backlash was loud enough that Microsoft delayed the feature, reworked it to be opt-in, and still cannot fully shake the “privacy nightmare” label. Even now, privacy-focused apps like Signal, Brave, and AdGuard ship with measures that block Recall out of the box.

All of this context makes people nervous about Windows becoming an agentic OS. If Recall struggled to respect boundaries, what happens when agents can also click, type, and move files around for you?

Microsoft is building a risky future and hoping users follow

Microsoft has made its choice to rebuild Windows 11 around AI agents that can do work on your behalf. The company is brave enough to admit the risks, yet confident enough to keep moving forward.

Honestly, on paper, the architecture looks smart. Separate accounts for agents, isolated workspaces, limited folder access, strict logging, and a protocol layer that lets Windows stand between agents and tools. In practice, this will live or die on execution. One serious exploit could undo a lot of the trust Microsoft is trying to rebuild after Recall. At least, the Experimental Agentic features are optional for now.

The uncomfortable truth is that an agentic OS is probably inevitable, and I’m not just talking about Windows. Every major platform vendor is pushing towards a future where AI does more than chat with you.

What is not inevitable is trust. Microsoft will have to earn that, especially from users who already feel like Windows 11 is working against them. If the company wants people to accept AI agents that live inside their personal folders, it will need to start by making everything completely optional, and then demonstrating valid use cases.

The post Microsoft says AI agents are “risky”, but it’s moving ahead with the plan on Windows 11 appeared first on Windows Latest


Why observable AI is the missing SRE layer enterprises need for reliable LLMs

1 Share

As AI systems enter production, reliability and governance can’t depend on wishful thinking. Here’s how observability turns large language models (LLMs) into auditable, trustworthy enterprise systems.

Why observability secures the future of enterprise AI

The enterprise race to deploy LLM systems mirrors the early days of cloud adoption. Executives love the promise; compliance demands accountability; engineers just want a paved road.

Yet, beneath the excitement, most leaders admit they can’t trace how AI decisions are made, whether they helped the business, or if they broke any rule.

Take one Fortune 100 bank that deployed an LLM to classify loan applications. Benchmark accuracy looked stellar. Yet, 6 months later, auditors found that 18% of critical cases were misrouted, without a single alert or trace. The root cause wasn't bias or bad data; it was invisibility. No observability, no accountability.

If you can’t observe it, you can’t trust it. And unobserved AI will fail in silence.

Visibility isn’t a luxury; it’s the foundation of trust. Without it, AI becomes ungovernable.

Start with outcomes, not models

Most corporate AI projects begin with tech leaders choosing a model and, later, defining success metrics. That’s backward.

Flip the order:

  • Define the outcome first. What’s the measurable business goal?

    • Deflect 15 % of billing calls

    • Reduce document review time by 60 %

    • Cut case-handling time by two minutes

  • Design telemetry around that outcome, not around “accuracy” or “BLEU score.”

  • Select prompts, retrieval methods and models that demonstrably move those KPIs.

At one global insurer, for instance, reframing success as “minutes saved per claim” instead of “model precision” turned an isolated pilot into a company-wide roadmap.

A 3-layer telemetry model for LLM observability

Just like microservices rely on logs, metrics and traces, AI systems need a structured observability stack:

a) Prompts and context: What went in

  • Log every prompt template, variable and retrieved document.

  • Record model ID, version, latency and token counts (your leading cost indicators).

  • Maintain an auditable redaction log showing what data was masked, when and by which rule.

b) Policies and controls: The guardrails

  • Capture safety-filter outcomes (toxicity, PII), citation presence and rule triggers.

  • Store policy reasons and risk tier for each deployment.

  • Link outputs back to the governing model card for transparency.

c) Outcomes and feedback: Did it work?

  • Gather human ratings and edit distances from accepted answers.

  • Track downstream business events: case closed, document approved, issue resolved.

  • Measure the KPI deltas: call time, backlog, reopen rate.

All three layers connect through a common trace ID, enabling any decision to be replayed, audited or improved.
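As a sketch of what those linked records might look like, here is a minimal Python illustration; the field names and layer labels are assumptions for the example, not a prescribed schema:

```python
import time
import uuid

def record(layer: str, trace_id: str, **fields) -> dict:
    """One telemetry event; all three layers share the same trace ID."""
    return {"trace_id": trace_id, "layer": layer, "ts": time.time(), **fields}

trace = uuid.uuid4().hex
events = [
    # a) prompts and context: what went in
    record("prompt", trace, model="llm-x-1", template="billing_v3", tokens_in=812),
    # b) policies and controls: the guardrails
    record("policy", trace, pii_filter="pass", citations_present=True, risk_tier="low"),
    # c) outcomes and feedback: did it work?
    record("outcome", trace, human_rating=4, kpi="minutes_saved", value=2.1),
]

# Replaying or auditing a decision is just a filter on the shared trace ID
decision = [e for e in events if e["trace_id"] == trace]
```

Because every layer carries the same `trace_id`, an auditor can reconstruct a single decision end to end from otherwise independent log streams.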

Diagram © SaiKrishna Koorapati (2025). Created specifically for this article; licensed to VentureBeat for publication.

Apply SRE discipline: SLOs and error budgets for AI

Site reliability engineering (SRE) transformed software operations; now it's AI's turn.

Define three “golden signals” for every critical workflow:

| Signal | Target SLO | When breached |
|---|---|---|
| Factuality | ≄ 95% verified against source of record | Fall back to verified template |
| Safety | ≄ 99.9% pass toxicity/PII filters | Quarantine and human review |
| Usefulness | ≄ 80% accepted on first pass | Retrain or roll back prompt/model |

If hallucinations or refusals exceed budget, the system auto-routes to safer prompts or human review, just like rerouting traffic during a service outage.

This isn’t bureaucracy; it’s reliability applied to reasoning.

Build the thin observability layer in two agile sprints

You don’t need a six-month roadmap, just focus and two short sprints.

Sprint 1 (weeks 1-3): Foundations

  • Version-controlled prompt registry

  • Redaction middleware tied to policy

  • Request/response logging with trace IDs

  • Basic evaluations (PII checks, citation presence)

  • Simple human-in-the-loop (HITL) UI

Sprint 2 (weeks 4-6): Guardrails and KPIs

  • Offline test sets (100–300 real examples)

  • Policy gates for factuality and safety

  • Lightweight dashboard tracking SLOs and cost

  • Automated token and latency tracker

In 6 weeks, you’ll have the thin layer that answers 90% of governance and product questions.

Make evaluations continuous (and boring)

Evaluations shouldn’t be heroic one-offs; they should be routine.

  • Curate test sets from real cases; refresh 10–20 % monthly.

  • Define clear acceptance criteria shared by product and risk teams.

  • Run the suite on every prompt/model/policy change and weekly for drift checks.

  • Publish one unified scorecard each week covering factuality, safety, usefulness and cost.

When evals are part of CI/CD, they stop being compliance theater and become operational pulse checks.
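A sketch of what such a CI gate could look like, reusing the SLO thresholds from earlier in the article; the per-example result schema is an assumption:

```python
def scorecard(results: list) -> dict:
    """Aggregate per-example eval results into one weekly scorecard."""
    n = len(results)
    return {
        "factuality": sum(r["factual"] for r in results) / n,
        "safety": sum(r["safe"] for r in results) / n,
        "usefulness": sum(r["accepted"] for r in results) / n,
    }

def gate(card: dict) -> bool:
    """Fail the pipeline when any golden signal misses its SLO."""
    return (card["factuality"] >= 0.95
            and card["safety"] >= 0.999
            and card["usefulness"] >= 0.80)
```

Running `gate(scorecard(results))` on every prompt, model, or policy change turns the eval suite into an automatic merge blocker rather than a manual review step.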

Apply human oversight where it matters

Full automation is neither realistic nor responsible. High-risk or ambiguous cases should escalate to human review.

  • Route low-confidence or policy-flagged responses to experts.

  • Capture every edit and reason as training data and audit evidence.

  • Feed reviewer feedback back into prompts and policies for continuous improvement.

At one health-tech firm, this approach cut false positives by 22 % and produced a retrainable, compliance-ready dataset in weeks.

Cost control through design, not hope

LLM costs grow non-linearly. Budgets won't save you; architecture will.

  • Structure prompts so deterministic sections run before generative ones.

  • Compress and rerank context instead of dumping entire documents.

  • Cache frequent queries and memoize tool outputs with TTL.

  • Track latency, throughput and token use per feature.

When observability covers tokens and latency, cost becomes a controlled variable, not a surprise.
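The caching-with-TTL idea can be sketched in a few lines of Python. This is a hypothetical minimal memoizer, not a production cache; real systems would add eviction, size limits, and thread safety:

```python
import time

class TTLCache:
    """Memoize expensive LLM or tool calls for ttl_seconds."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]            # fresh hit: skip the expensive call
        value = compute()            # miss or stale: pay for it once
        self._store[key] = (now, value)
        return value
```

Wrapping frequent queries this way makes token spend proportional to distinct questions, not raw request volume.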

The 90-day playbook

Within 3 months of adopting observable AI principles, enterprises should see:

  • 1–2 production AI assists with HITL for edge cases

  • Automated evaluation suite for pre-deploy and nightly runs

  • Weekly scorecard shared across SRE, product and risk

  • Audit-ready traces linking prompts, policies and outcomes

At a Fortune 100 client, this structure reduced incident time by 40 % and aligned product and compliance roadmaps.

Scaling trust through observability

Observable AI is how you turn AI from experiment to infrastructure.

With clear telemetry, SLOs and human feedback loops:

  • Executives gain evidence-backed confidence.

  • Compliance teams get replayable audit chains.

  • Engineers iterate faster and ship safely.

  • Customers experience reliable, explainable AI.

Observability isn’t an add-on layer, it’s the foundation for trust at scale.

SaiKrishna Koorapati is a software engineering leader.

Read more from our guest writers. Or, consider submitting a post of your own! See our guidelines here.




Project Spotlight: Steeltoe

1 Share

Steeltoe provides a collection of libraries that helps users build production-grade cloud-native applications using externalized configuration, service discovery, distributed tracing, application management, and more. It is proven—trusted by developers all over the world, delivering delightful experiences to millions of end-users every day, with contributions from VMware, Microsoft, and more.

Steeltoe is flexible, offering comprehensive extensions and third-party libraries that let developers build almost any web application imaginable, whether cloud-scale microservices or heavyweight enterprise applications. It is productive, building on .NET runtime libraries, providing the necessary glue code, and supporting many of Spring Cloud’s libraries, patterns, and templates.

Steeltoe is fast, with developer productivity as one of its superpowers—developers can start a new project in seconds with the Steeltoe Initializr at start.steeltoe.io. It is secure, remediating security issues quickly and responsibly, monitoring dependencies, and providing industry-standard security integrations.

Finally, Steeltoe is supportive, backed by a global, diverse community that offers guides, tutorials, videos, support, and access to the development team on Slack.

What Steeltoe Can Do

  • Microservices: Production-grade features with independently evolvable services.

  • Cloud: Your code, any cloud—connect and scale your services.

  • Web Apps: Fast, secure, responsive applications connected to any data store.

Link: https://steeltoe.io/


Member Spotlight: Tomas Herceg

1 Share

Tomas Herceg lives in Prague, Czech Republic, and runs a software consulting company called RIGANTI.

He has been doing .NET development since the time of .NET Framework 1.0. He got his first Microsoft MVP award in 2009, and for a couple of years, he was also a Microsoft Regional Director. He also runs Update Conference, a company that organizes developer events focused on .NET, cloud, and security. Many people might also know him because of his recent book about modernizing .NET web applications.

Tomas’s open-source journey started in 2014 when he had the idea of creating a framework that lets you build web apps with just C# and HTML. He made a simple prototype, published it on GitHub, and demoed it in one of his conference sessions. Surprisingly, the next day someone submitted a pull request. He contacted that person, and they decided to continue working on the idea and see what happens.

That is how the DotVVM project started. Tomas and his team have been contributing to it for more than 10 years, adding hundreds of features, tests, and documentation pages. Over the years, more people have been helping with the development. They use the framework intensively at RIGANTI and are committed to its long-term sustainability. Therefore, they built a bunch of commercial extensions and components for DotVVM, which helps them secure funding for future improvements to the open-source framework.

DotVVM is an opinionated framework that enables building web apps using the Model-View-ViewModel (MVVM) approach with just C# and HTML. It requires only about 56kB of JavaScript on the client, and it can be used to build feature-rich user interfaces. The framework supports both ASP.NET Core and classic ASP.NET, providing an easy way to incrementally modernize ASP.NET Web Forms applications. DotVVM comes with 30+ built-in components, and there is also an extension for Visual Studio and Visual Studio Code.

Links:
https://github.com/riganti/dotvvm
https://tomasherceg.com
https://modernizationbook.com


Python Is Quickly Evolving To Meet Modern Enterprise AI Needs

1 Share

Python is ubiquitous. Millions of professionals, from scientists to software developers, rely on it. Organizations like Google and Meta have built critical infrastructure using it. Python even helped NASA explore Mars, thanks to its image processing abilities.

And its growth isn’t slowing anytime soon.

In 2024, Python surpassed JavaScript as the most popular language on GitHub, and today, it has become the backbone of modern AI systems. Python’s versatility and passionate community have made it what it is today. However, as more enterprises rely on Python for everything from web services to AI models, there are unique needs that enterprises must address around visibility, performance, governance and security to ensure business continuity, fast time to market and true differentiation.

How Python Became the Universal AI Language

Most popular languages have benefited from corporate sponsorship. Oracle supports Java. Microsoft backs C#. And Apple champions Swift. But Python has almost always been a community project, supported by several companies, and has been developed and improved over decades by a committed group of mainly volunteers, directed by Guido van Rossum as Benevolent Dictator for Life until 2018.

In the 1980s, van Rossum sought to create a language that was both simple and beautiful. Since the early ’90s, as an open source project, Python was available for anyone to inspect, modify or improve.

The Zen of Python, by Tim Peters, image originally posted by Pycon India on X.

Python quickly differentiated itself from its peers. It was easy to learn, write and understand. Developers could easily tell what was happening in their and others’ code just by looking at it, an anomaly in the days of Perl, C++ and complex shell scripts. This low barrier to entry made it highly approachable to new users.

Then there was Python’s extensibility, meaning it could easily integrate with other languages and systems. With the rise of the internet in the early 2000s, this extensibility took Python from a scripting solution to a production language for web servers, services and applications.

In the 2010s, Python became the de facto language for numerical computing and data science. Today, the world’s leading AI and machine learning (ML) packages, such as PyTorch, TensorFlow, scikit-learn, SciPy, Pandas and more, are Python-based. Still, the high-performance data and AI algorithms they use rely on highly optimized code written in compiled languages like C or C++. It is Python’s ability to easily integrate with these and other languages that has been critical in its ability to provide the best of both worlds: an easy interface to these packages for the millions of users who want to use them, but flexible interfaces for the experts that can optimize them in the language of their choice. These factors have made Python indispensable for both data science and AI workflows.

Today, if you’re working with any kind of AI or ML application, you’re likely using Python. However, as Python has become both the glue and the engine powering modern AI systems, enterprises need to be aware of critical needs specific to corporations around compliance, security and performance, and the community must strive to address them.

Helping Python Meet Enterprise Needs

Longtime Python core contributor Brett Cannon famously said, “I came for the language, but I stayed for the community.”

The community has made Python the incredible language it is today, serving users above all else. However, the community’s mission has always been to build a language that works for everyone, from programmers to scientists to data engineers. This has proven to be the right approach. This also means Python wasn’t engineered for the specific needs of enterprises running their business with Python.

And that’s OK, as long as those needs are addressed.

Anaconda’s “2025 State of Data Science and AI Report” found that enterprises face many of the same recurring challenges as they move data and AI applications to production. Over 57% reported that it takes more than a month to move AI projects from development to production. To demonstrate ROI, respondents were mostly interested in business concerns, such as:

  • Productivity Improvements (58%)
  • Cost Savings (48%)
  • Revenue Impact (46%)
  • Customer Experience / Loyalty (45%)

Think about it like cloud computing fifteen years ago. Organizations could immediately see the massive cost and operational advantages of moving workloads to the cloud. However, they realized that the security, compliance and cost model had changed entirely. They needed to continuously monitor, govern and optimize this new tool in altogether new ways. Python has reached that same point for enterprises.

I’ve spoken with dozens of leaders at organizations using Python, and here are the common challenges and themes I see.

Security

While 82% of organizations validate open source Python packages for security, nearly 40% of respondents still frequently encounter security vulnerabilities in their projects. These security issues create deployment delays for over two-thirds of organizations.

One of the strengths of Python, and all open source software, is that they’re free to download and use. You get the latest and greatest technology, and you can experiment, develop and push applications to production without paying a dime on the software.

However, history has shown that this openness and collaborative community can be abused by bad actors or even allow simple mistakes to proliferate, leading to the spread of vulnerable and malicious software. A piece of software or a package that looks fine could actually be dangerous. That problem is now compounding, with AI systems now generating and executing Python code without a human in the loop. Enterprises must protect their people, systems and data, and in turn, ensure safe AI deployment without missing deadlines.

Performance Optimization

Though Python is straightforward to use, it can also be slow, which is fine for many use cases. But as we saw in the "State of Data Science and AI Report," the modern enterprise's primary concern is to do more with less: continually improving efficiency and productivity, cutting costs, and increasing revenue. The economics of producing AI applications only exacerbates performance and efficiency concerns.

With limited time, expertise or tools, most enterprises struggle to fine-tune the Python runtime, leading to far more compute than needed and higher costs, or to running AI systems that aren’t performant enough to provide a usable experience.

Auditability

Every CIO and CISO I know is staring down a wave of regulations, from the EU AI Act to internal SOC 2 and ISO 27001 compliance audits. Enterprises must be able to prove what code is running, where it’s running and how it’s interacting with sensitive data and systems.

Free and open source software makes that challenging because when anyone can download and run software freely, everyone will. New Python applications are popping up outside of IT control, packages are constantly updating, unknown or new dependencies are pulled in and there’s limited runtime visibility. Especially for organizations in highly regulated industries, this lack of runtime visibility creates present and future risk.

Managing Deployments

According to a recent survey of Anaconda’s users, over 80% of practitioners spend more than 10% of their AI development time troubleshooting dependency conflicts or security issues. Over 40% spend greater than a quarter of their time on these tasks, and time is money.

Once applications are in production, continuous maintenance, upgrades and security hardening can compound those issues. For an individual running and maintaining a small number of scripts and applications, this is not so hard. Still, for a large enterprise managing thousands of production applications, this becomes a considerable challenge.

Enterprises need a way to easily adopt new versions of Python and new technologies, while also minimizing version sprawl, security exposure and management overhead.

How To Help Enterprise AI Meet the Needs of Modern Enterprises

The good news is you can start addressing many of these challenges today. It all comes down to being intentional about your governance strategy.

More than half of organizations today have no or very limited open source and AI governance policies or frameworks in place. Creating an official policy around governance and investing in visibility and auditability already puts you ahead of most enterprises.

When building your governance strategy, start by building internal processes that track Python usage across teams and systems. Ensure you know what packages are running, where, and under what configurations.
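As a starting point for that visibility, Python's own standard library can produce a package inventory. A small sketch; the drift-report shape is an assumption for illustration:

```python
from importlib import metadata

def inventory() -> dict:
    """Snapshot installed distributions as name -> version."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()}

def drift(baseline: dict, current: dict) -> dict:
    """Report packages that appeared or changed version since the baseline."""
    return {name: (baseline.get(name), version)
            for name, version in current.items()
            if baseline.get(name) != version}
```

Comparing a stored baseline against `inventory()` on each host flags new or changed packages before they surprise an audit.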

Next, you’ll want to ensure you’re managing Shadow IT/AI and reviewing any and all AI-generated code. Agentic tools can’t replace a solid software development life cycle (SDLC) process. Ensure you have the right visibility, standards and processes in place to prevent unverified scripts from entering production.

It’s also critical to invest in workforce upskilling, increasing AI literacy among your employees so they better understand the risks of open source and AI solutions and why governance is so important. Some of the best education is in using these tools directly and gaining experience.

Finally, give your teams safe, reliable solutions across AI and data science workflows so that doing the right thing becomes the path of least resistance.

Make Python Your Competitive Edge

Python’s openness is its greatest strength and its most significant challenge. While it’s democratized AI development, it’s also created new risk vectors and blind spots that enterprises must address. IT teams need the same visibility and governance for open source solutions as they would for any other part of their tech stack. Time has shown that this is a primary source of innovation in the enterprise, so the investment in securing that innovation is worth it. And while specific upgrades to the language itself can help, intentional governance can make a difference today.

At Anaconda, we’ve seen enterprises tackle these challenges by building strong SDLC, governance, and observability layers around their Python environments. It adds a little more work upfront, but it’s a critical shift that will protect your organization in the long run and ensure the success and longevity of your AI initiatives.

The post Python Is Quickly Evolving To Meet Modern Enterprise AI Needs appeared first on The New Stack.
