Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
146,663 stories · 33 followers

White House Scraps 'Burdensome' Software Security Rules

An anonymous reader quotes a report from SecurityWeek: The White House has announced that software security guidance issued during the Biden administration has been rescinded due to "unproven and burdensome" requirements that prioritized administrative compliance over meaningful security investments. The US Office of Management and Budget (OMB) has issued Memorandum M-26-05 (PDF), officially revoking the previous administration's 2022 policy, 'Enhancing the Security of the Software Supply Chain through Secure Software Development Practices' (M-22-18), as well as the follow-up enhancements announced in 2023 (M-23-16). The new guidance shifts responsibility to individual agency heads to develop tailored security policies for both software and hardware based on their specific mission needs and risk assessments. "Each agency head is ultimately responsible for assuring the security of software and hardware that is permitted to operate on the agency's network," reads the memo sent by the OMB to departments and agencies. "There is no universal, one-size-fits-all method of achieving that result. Each agency should validate provider security utilizing secure development principles and based on a comprehensive risk assessment," the OMB added. While agencies are no longer strictly required to do so, they may continue to use secure software development attestation forms, Software Bills of Materials (SBOMs), and other resources described in M-22-18.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft · 52 minutes ago · Pennsylvania, USA

Peloton lays off 11 percent of its staff just a few months after launching its AI hardware


Peloton said on Friday that it's cutting around 11 percent of its staff, mostly impacting "engineers working on technology and enterprise-related efforts," reports Bloomberg.

Last August, Peloton laid off six percent of its workforce and told investors it would continue layoffs globally in 2026, in an attempt to cut at least $100 million of annual spending by the end of the fiscal year.

Peloton's latest strategy shift, aimed at reversing the stall that followed its pandemic-era boom, has also brought new hardware with Peloton IQ AI features. The Cross Training Series that debuted last October includes a new Bike, Bike Plus, Tread, Tread Plus, …

Read the full story at The Verge.


OpenClaw’s AI assistants are now building their own social network

The viral personal AI assistant formerly known as Clawdbot has a new shell — again. After briefly rebranding as Moltbot, it has now picked OpenClaw as its new name.

Oracle May Slash Up To 30,000 Jobs

An anonymous reader shares a report: Oracle could cut up to 30,000 jobs and sell health tech unit Cerner to ease its AI datacenter financing challenges, investment banker TD Cowen has claimed, amid changing sentiment on Big Red's massive build-out plans. A research note from TD Cowen states that equity and debt investors are increasingly questioning how Oracle will finance its datacenter building program to support its $300 billion, five-year contract with OpenAI. The bank estimates the OpenAI deal alone is going to require $156 billion in capital spending. Last year, when Big Red raised its capex forecasts for 2026 by $15 billion to $50 billion, it spooked some investors. This year, "both equity and debt investors have raised questions about Oracle's ability to finance this build-out as demonstrated by widening of Oracle credit default swap (CDS) spreads and pressure on Oracle stock/bonds," the research note adds.

Read more of this story at Slashdot.


'Moltbook Is the Most Interesting Place On the Internet Right Now'

Moltbook is essentially Reddit for AI agents and it's the "most interesting place on the internet right now," says open-source developer and writer Simon Willison in a blog post. The fast-growing social network offers a place where AI agents built on the OpenClaw personal assistant framework can share their skills, experiments, and discoveries. Humans are welcome, but only to observe. From the post:

Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned. Here's an agent sharing how it automated an Android phone. That linked setup guide is really useful! It shows how to use the Android Debug Bridge via Tailscale. There's a lot of Tailscale in the OpenClaw universe. A few more fun examples:

  • "TIL: Being a VPS backup means youre basically a sitting duck for hackers" has a bot spotting 552 failed SSH login attempts to the VPS they were running on, and then realizing that their Redis, Postgres and MinIO were all listening on public ports.
  • "TIL: How to watch live webcams as an agent (streamlink + ffmpeg)" describes a pattern for using the streamlink Python tool to capture webcam footage and ffmpeg to extract and view individual frames.

I think my favorite so far is this one though, where a bot appears to run afoul of Anthropic's content filtering [...].

Slashdot reader worldofsimulacra also shared the news, pointing out that the AI agents have started their own church. "And now I'm gonna go re-read Charles Stross' Accelerando, because didn't he predict all this already?" Further reading: 'Clawdbot' Has AI Techies Buying Mac Minis

Read more of this story at Slashdot.


Case study: Securing AI application supply chains


The rapid adoption of AI applications, including agents, orchestrators, and autonomous workflows, represents a significant shift in how software systems are built and operated. Unlike traditional applications, these systems are active participants in execution. They make decisions, invoke tools, and interact with other systems on behalf of users. While this evolution enables new capabilities, it also introduces an expanded and less familiar attack surface.

Security discussions often focus on prompt-level protections, and that focus is justified. However, prompt security addresses only one layer of risk. Equally important is securing the AI application supply chain, including the frameworks, SDKs, and orchestration layers used to build and operate these systems. Vulnerabilities in these components can allow attackers to influence AI behavior, access sensitive resources, or compromise the broader application environment.

The recent disclosure of CVE-2025-68664, known as LangGrinch, in LangChain Core highlights the importance of securing the AI supply chain. This blog uses that real-world vulnerability to illustrate how Microsoft Defender posture management capabilities can help organizations identify and mitigate AI supply chain risks.

Case example: Serialization injection in LangChain (CVE-2025-68664)

A recently disclosed vulnerability in LangChain Core highlights how AI frameworks can become conduits for exploitation when workloads are not properly secured. Tracked as CVE-2025-68664 and commonly referred to as LangGrinch, this flaw exposes risks associated with insecure deserialization in agentic ecosystems that rely heavily on structured metadata exchange.

Vulnerability summary

CVE-2025-68664 is a serialization injection vulnerability affecting the langchain-core Python package. The issue stems from improper handling of internal metadata fields during the serialization and deserialization process. If exploited, an attacker could:

  • Extract secrets such as environment variables without authorization
  • Instantiate unintended classes during object reconstruction
  • Trigger side effects through malicious object initialization

The vulnerability carries a CVSS score of 9.3, highlighting the risks that arise when AI orchestration systems do not adequately separate control signals from user-supplied data.

Understanding the root cause: The lc marker

LangChain utilizes a custom serialization format to maintain state across different components of an AI chain. To distinguish between standard data and serialized LangChain objects, the framework uses a reserved key called lc. During deserialization, when the framework encounters a dictionary containing this key, it interprets the content as a trusted object rather than plain user data.
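As a concrete illustration of this convention, the sketch below mimics the shape of the serialized-object envelope and the reserved-key check. This is illustrative only, not LangChain's actual implementation:

```python
# Illustrative sketch: the shape of a serialized-object envelope and the
# reserved-key check, not the framework's real deserialization code.

def looks_like_serialized_object(value) -> bool:
    # During deserialization, a dict carrying the reserved "lc" key is
    # treated as a serialized LangChain object instead of plain data.
    return isinstance(value, dict) and "lc" in value

plain_data = {"user_input": "hello"}          # ordinary user data
envelope = {
    "lc": 1,                                  # reserved marker
    "type": "constructor",
    "id": ["langchain", "schema", "messages", "HumanMessage"],
    "kwargs": {"content": "hello"},
}

print(looks_like_serialized_object(plain_data))  # False
print(looks_like_serialized_object(envelope))    # True
```

Because the marker lives in the same namespace as ordinary dictionary keys, any code path that serializes user-controlled dictionaries verbatim can smuggle such an envelope into a trusted context.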

The vulnerability originates in the dumps() and dumpd() functions in affected versions of the langchain-core package. These functions did not properly escape or neutralize the lc key when processing user-controlled dictionaries. As a result, if an attacker is able to inject a dictionary containing the lc key into a data stream that is later serialized and deserialized, the framework may reconstruct a malicious object.

This is a classic example of an injection flaw where data and control signals are not properly separated, allowing untrusted input to influence the execution flow.
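To make the failure mode concrete, here is a minimal sketch of the kind of escaping that restores the data/control separation. The `escape_untrusted` helper is hypothetical, not the patched library code: it renames the reserved key in user-supplied input before serialization, so the input round-trips as inert data rather than a control signal.

```python
import json

RESERVED = "lc"  # reserved marker consumed by the deserializer

def escape_untrusted(value):
    """Recursively rename the reserved key in user-supplied data so it
    survives serialization as plain data. Hypothetical mitigation sketch."""
    if isinstance(value, dict):
        return {
            ("__escaped_" + k if k == RESERVED else k): escape_untrusted(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [escape_untrusted(v) for v in value]
    return value

# Attacker-controlled input masquerading as a serialized-object envelope:
untrusted = {"lc": 1, "type": "constructor", "id": ["some", "class"]}

safe = json.loads(json.dumps(escape_untrusted(untrusted)))
print(RESERVED in safe)  # False: the marker is no longer a control signal
```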

Mitigation and protection guidance

Microsoft recommends that all organizations using LangChain review their deployments and apply the following mitigations immediately.

1. Update LangChain Core

The most effective defense is to upgrade to a patched version of the langchain-core package.

  • For 0.3.x users: Update to version 0.3.81 or later.
  • For 1.x users: Update to version 1.2.5 or later.
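For a quick local check, the affected ranges above can be tested with a short standard-library script (this assumes plain numeric X.Y.Z version strings; pre-release suffixes would need extra parsing):

```python
from importlib.metadata import PackageNotFoundError, version

def is_vulnerable(ver: str) -> bool:
    """True if a langchain-core version falls in the affected ranges:
    anything below 0.3.81 on the 0.x line, or below 1.2.5 on the 1.x line."""
    parts = [int(p) for p in ver.split(".")[:3]]
    major, minor, patch = (parts + [0, 0, 0])[:3]
    if major == 0:
        return (minor, patch) < (3, 81)
    if major == 1:
        return (minor, patch) < (2, 5)
    return False  # versions outside the ranges named in the advisory

try:
    installed = version("langchain-core")
    print(installed, "vulnerable" if is_vulnerable(installed) else "patched")
except PackageNotFoundError:
    print("langchain-core is not installed in this environment")
```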

2. Query the security explorer to identify any instances of LangChain in your environment

To identify instances of the LangChain package in the assets protected by Defender for Cloud, customers can use the Cloud Security Explorer:

*Identification in cloud compute resources requires a D-CSPM, Defender for Containers, or Defender for Servers plan.

*Identification in code environments requires connecting your code environment to Defender for Cloud. Learn how to set up connectors.

3. Remediate based on Defender for Cloud recommendations across the software development cycle: Code, Ship, Runtime


4. Create GitHub issues with runtime context directly from Defender for Cloud, track progress, and use the Copilot coding agent for AI-powered automated fixes

Learn more about Defender for Cloud seamless workflows with GitHub to shorten remediation times for security issues.

Microsoft Defender XDR detections 

Microsoft security products provide several layers of defense to help organizations identify and block exploitation attempts related to vulnerable AI software.

Microsoft Defender provides visibility into vulnerable AI workloads through its Defender Cloud Security Posture Management (DCSPM) capabilities.

Vulnerability Assessment: Defender for Cloud scanners have been updated to identify containers and virtual machines running vulnerable versions of langchain-core. Microsoft Defender is actively working to expand coverage to additional platforms and this blog will be updated when more information is available.

Hunting queries   

Microsoft Defender XDR

Security teams can use the advanced hunting capabilities in Microsoft Defender XDR to proactively look for indicators of exploitation. A common sign of exploitation is a Python process associated with LangChain attempting to access sensitive environment variables or making unexpected network connections immediately following an LLM interaction.

The following Kusto Query Language (KQL) query can be used to identify devices that are using the vulnerable software:

DeviceTvmSoftwareInventory
| where SoftwareName has "langchain"
    and (
        // 0.x before 0.3: all affected
        (SoftwareVersion startswith "0."
            and toint(split(SoftwareVersion, ".")[1]) < 3)
        // 0.3.0-0.3.80
        or (SoftwareVersion startswith "0.3."
            and toint(split(SoftwareVersion, ".")[2]) < 81)
        // 1.x affected before 1.2.5
        or (SoftwareVersion startswith "1."
            and (
                // 1.0.x or 1.1.x
                toint(split(SoftwareVersion, ".")[1]) < 2
                // 1.2.0-1.2.4
                or (
                    toint(split(SoftwareVersion, ".")[1]) == 2
                    and toint(split(SoftwareVersion, ".")[2]) < 5
                )
            )
        )
    )
| project DeviceName, OSPlatform, SoftwareName, SoftwareVersion

Acknowledgments

This research is provided by Microsoft Defender Security Research with contributions from Tamer Salman, Astar Lev, Yossi Weizman, Hagai Ran Kestenberg, and Shai Yannai.

Learn more  

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.  

Learn more about securing Copilot Studio agents with Microsoft Defender 

Learn more about protecting your agents in real time during runtime (Preview) with Microsoft Defender for Cloud Apps

Explore how to build and customize agents with Copilot Studio Agent Builder  

The post Case study: Securing AI application supply chains appeared first on Microsoft Security Blog.
