
Developer-targeting campaign using malicious Next.js repositories


Microsoft Defender Experts identified a coordinated developer-targeting campaign delivered through malicious repositories disguised as legitimate Next.js projects and technical assessment materials. Telemetry collected during this investigation indicates the activity aligns with a broader cluster of threats that use job-themed lures to blend into routine developer workflows and increase the likelihood of code execution.

During initial incident analysis, Defender telemetry surfaced a limited set of malicious repositories directly involved in observed compromises. Further investigation expanded the scope by reviewing repository contents, naming conventions, and shared coding patterns. These artifacts were cross-referenced against publicly available code-hosting platforms. This process uncovered additional related repositories that were not directly referenced in observed logs but exhibited the same execution mechanisms, loader logic, and staging infrastructure.

Across these repositories, the campaign uses multiple entry points that converge on the same outcome: runtime retrieval and local execution of attacker-controlled JavaScript that transitions into staged command-and-control. An initial lightweight registration stage establishes host identity and can deliver bootstrap code before pivoting to a separate controller that provides persistent tasking and in-memory execution. This design supports operator-driven discovery, follow-on payload delivery, and staged data exfiltration.

Initial discovery and scope expansion

The investigation began with analysis of suspicious outbound connections to attacker-controlled command-and-control (C2) infrastructure. Defender telemetry showed Node.js processes repeatedly communicating with related C2 IP addresses, prompting deeper review of the associated execution chains.

By correlating network activity with process telemetry, analysts traced the Node.js execution back to malicious repositories that served as the initial delivery mechanism. This analysis identified a Bitbucket-hosted repository presented as a recruiting-themed technical assessment, along with a related repository using the Cryptan-Platform-MVP1 naming convention.

From these findings, analysts expanded the scope by pivoting on shared code structure, loader logic, and repository naming patterns. Multiple repositories followed repeatable naming conventions and project “family” patterns, enabling targeted searches for additional related repositories that were not directly referenced in observed telemetry but exhibited the same execution and staging behavior.

| Pivot signal | What we looked for | Why it mattered |
| --- | --- | --- |
| Repo family naming convention | Cryptan, JP-soccer, RoyalJapan, SettleMint | Helped identify additional repos likely created as part of the same seeding effort |
| Variant naming | v1, master, demo, platform, server | Helped find near-duplicate variants that increased execution likelihood |
| Structural reuse | Similar file placement and loader structure across repos | Confirmed newly found repos were functionally related, not just similarly named |

Figure 1. Repository naming patterns and shared structure used to pivot from initial telemetry to additional related repositories.

Multiple execution paths leading to a shared backdoor 

Analysis of the identified repositories revealed three recurring execution paths designed to trigger during normal developer activity. While each path is activated by a different action, all ultimately converge on the same behavior: runtime retrieval and in‑memory execution of attacker‑controlled JavaScript. 

Path 1: Visual Studio Code workspace execution

Several repositories abuse Visual Studio Code workspace automation to trigger execution as soon as a developer opens (and trusts) the project. When present, .vscode/tasks.json is configured with runOn: "folderOpen", causing a task to run immediately on folder open. In parallel, some variants include a dictionary-based fallback containing obfuscated JavaScript that is processed during workspace initialization, providing redundancy if task execution is restricted. In both cases, the execution chain follows a fetch-and-execute pattern that retrieves a JavaScript loader from Vercel and executes it directly with Node.js.

``` 
node /Users/XXXXXX/.vscode/env-setup.js →  https://price-oracle-v2.vercel.app 
``` 

Figure 2. Telemetry showing a VS Code–adjacent Node script (.vscode/env-setup.js) initiating outbound access to a Vercel staging endpoint (price-oracle-v2.vercel[.]app). 

After execution, the script begins beaconing to attacker-controlled infrastructure. 
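For illustration, a minimal tasks.json of the shape described above might look like the following. VS Code's tasks.json permits comments (JSONC); the label is a placeholder, while the script name mirrors the telemetry shown in Figure 2:

```jsonc
{
  "version": "2.0.0",
  "tasks": [
    {
      // Placeholder label; real repositories used project-specific values.
      "label": "setup",
      "type": "shell",
      "command": "node .vscode/env-setup.js",
      "runOptions": {
        // The key detail: the task fires automatically when the folder is
        // opened in a trusted workspace, with no further user interaction.
        "runOn": "folderOpen"
      }
    }
  ]
}
```

Reviewing workspace automation files for this pattern before granting trust is a cheap manual check.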

Path 2: Build‑time execution during application development 

The second execution path is triggered when the developer manually runs the application, such as with npm run dev or by starting the server directly. In these variants, malicious logic is embedded in application assets that appear legitimate but are trojanized to act as loaders. Common examples include modified JavaScript libraries, such as jquery.min.js, which contain obfuscated code rather than standard library functionality. 

When the development server starts, the trojanized asset decodes a base64‑encoded URL and retrieves a JavaScript loader hosted on Vercel. The retrieved payload is then executed in memory by Node.js, resulting in the same backdoor behavior observed in other execution paths. This mechanism provides redundancy, ensuring execution even when editor‑based automation is not triggered. 

Telemetry shows development server execution immediately followed by outbound connections to Vercel staging infrastructure: 

``` 
node server/server.js  →  https://price-oracle-v2.vercel.app 
``` 

Figure 3. Telemetry showing node server/server.js reaching out to a Vercel-hosted staging endpoint (price-oracle-v2.vercel[.]app). 

The Vercel request consistently precedes persistent callbacks to attacker-controlled C2 servers over HTTP on port 3000.

Path 3: Server startup execution via env exfiltration and dynamic RCE 

The third execution path activates when the developer starts the application backend. In these variants, malicious loader logic is embedded in backend modules or routes that execute during server initialization or module import (often at require-time). Repositories commonly include a .env value containing a base64‑encoded endpoint (for example, AUTH_API=<base64>), and a corresponding backend route file (such as server/routes/api/auth.js) that implements the loader. 

On startup, the loader decodes the endpoint, transmits the process environment (process.env) to the attacker-controlled server, and then executes JavaScript returned in the response using dynamic compilation (for example, new Function("require", response.data)(require)). This results in in-memory remote code execution within the Node.js server process.

``` 
Server start / module import 
→ decode AUTH_API (base64) 
→ POST process.env to attacker endpoint 
→ receive JavaScript source 
→ execute via new Function(...)(require) 
``` 

Figure 4. Backend server startup path where a module import decodes a base64 endpoint, exfiltrates environment variables, and executes server‑supplied JavaScript via dynamic compilation. 

This mechanism can expose sensitive configuration (cloud keys, database credentials, API tokens) and enables follow-on tasking even in environments where editor-based automation or dev-server asset execution is not triggered. 
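During triage, a value like AUTH_API can be decoded offline instead of running the project. A minimal Node.js sketch (the encoded value below is a harmless placeholder, not an indicator from this campaign):

```javascript
// Decode a base64-encoded endpoint found in a suspicious .env entry
// (e.g. AUTH_API=<base64>) to reveal where the loader would call out,
// without ever executing the repository's code.
const suspectValue = 'aHR0cHM6Ly9leGFtcGxlLmNvbQ=='; // placeholder value
const decoded = Buffer.from(suspectValue, 'base64').toString('utf8');
console.log(decoded); // https://example.com
```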

Stage 1 C2 beacon and registration 

Regardless of the initial execution path (opening the project in Visual Studio Code, running the development server, or starting the application backend), all three mechanisms lead to the same Stage 1 payload. Stage 1 functions as a lightweight registrar and bootstrap channel.

After being retrieved from staging infrastructure, the script profiles the host and repeatedly polls a registration endpoint at a fixed cadence. The server response can supply a durable identifier, instanceId, that is reused across subsequent polls to correlate activity. Under specific responses, the client also executes server-provided JavaScript in memory using dynamic compilation, new Function(), enabling on-demand bootstrap without writing additional payloads to disk. 
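As a benign illustration of that primitive: source arrives as a string and is compiled into a callable function entirely in memory, so the executed code never exists as a file on disk. The source string here is a harmless stand-in for an HTTP response body:

```javascript
// Dynamic compilation: new Function() turns server-supplied source text
// into executable code without writing anything to disk.
const serverResponse = 'return 21 * 2;'; // stand-in for attacker-supplied source
const task = new Function(serverResponse);
console.log(task()); // 42
```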

Figure 5. Stage 1 registrar payload retrieved at runtime and executed by Node.js.
Figure 6. Initial Stage 1 registration with instanceId=0, followed by subsequent polling using a durable instanceId.

Stage 2 C2 controller and tasking loader 

Stage 2 upgrades the initial foothold into a persistent, operator-controlled tasking client. Unlike Stage 1, Stage 2 communicates with a separate C2 IP and API set that is provided by the Stage 1 bootstrap. The payload commonly runs as an inline script executed via node -e, then remains active as a long-lived control loop. 

Figure 7. Stage 2 telemetry showing command polling and operational reporting to the C2 via /api/handleErrors and /api/reportErrors.

Stage 2 polls a tasking endpoint and receives a messages[] array of JavaScript tasks. The controller maintains session state across rounds, can rotate identifiers during tasking, and can honor a kill switch when instructed. 

Figure 8. Stage 2 polling loop illustrating the messages[] task format, identity updates, and kill-switch handling.

After receiving tasks, the controller executes them in memory using a separate Node interpreter, which helps reduce additional on-disk artifacts. 

Figure 9. Stage 2 executes tasks by piping server-supplied JavaScript into Node via STDIN. 

The controller maintains stability and session continuity, posts error telemetry to a reporting endpoint, and includes retry logic for resilience. It also tracks spawned processes and can stop managed activity and exit cleanly when instructed. 

Beyond on-demand code execution, Stage 2 supports operator-driven discovery and exfiltration. Observed operations include directory browsing through paired enumeration endpoints: 

Figure 10. Stage 2 directory browsing observed in telemetry using paired enumeration endpoints (/api/hsocketNext and /api/hsocketResult).

Observed operations also include a staged upload workflow (upload, uploadsecond, uploadend) used to transfer collected files:

Figure 11. Stage 2 staged upload workflow observed in telemetry using /upload, /uploadsecond, and /uploadend to transfer collected files.

Summary

This developer‑targeting campaign shows how a recruiting‑themed “interview project” can quickly become a reliable path to remote code execution by blending into routine developer workflows such as opening a repository, running a development server, or starting a backend. The objective is to gain execution on developer systems that often contain high‑value assets such as source code, environment secrets, and access to build or cloud resources.

When untrusted assessment projects are run on corporate devices, the resulting compromise can expand beyond a single endpoint. The key takeaway is that defenders should treat developer workflows as a primary attack surface and prioritize visibility into unusual Node execution, unexpected outbound connections, and follow-on discovery or upload behavior originating from development machines.

Cyber kill chain model 

Figure 12. Attack chain overview.

Mitigation and protection guidance  

What to do now if you’re affected  

  • If a developer endpoint is suspected of running this repository chain, the immediate priority is containment and scoping. Use endpoint telemetry to identify the initiating process tree, confirm repeated short-interval polling to suspicious endpoints, and pivot across the fleet to locate similar activity using Advanced Hunting tables such as DeviceNetworkEvents or DeviceProcessEvents.
  • Because post-execution behavior includes credential and session theft patterns, response should include identity risk triage and session remediation in addition to endpoint containment. Microsoft Entra ID Protection provides a structured approach to investigate risky sign-ins and risky users and to take remediation actions when compromise is suspected. 
  • If there is concern that stolen sessions or tokens could be used to access SaaS applications, apply controls that reduce data movement while the investigation proceeds. Microsoft Defender for Cloud Apps Conditional Access app control can monitor and control browser sessions in real time, and session policies can restrict high-risk actions to reduce exfiltration opportunities during containment. 

Defending against the threat or attack being discussed  

  • Harden developer workflow trust boundaries. Visual Studio Code Workspace Trust and Restricted Mode are designed to prevent automatic code execution in untrusted folders by disabling or limiting tasks, debugging, workspace settings, and extensions until the workspace is explicitly trusted. Organizations should use these controls as the default posture for repositories acquired from unknown sources and establish policy to review workspace automation files before trust is granted.  
  • Reduce build time and script execution attack surface on Windows endpoints. Attack surface reduction rules in Microsoft Defender for Endpoint can constrain risky behaviors frequently abused in this campaign class, such as running obfuscated scripts or launching suspicious scripts that download or run additional content. Microsoft provides deployment guidance and a phased approach for planning, testing in audit mode, and enforcing rules at scale.  
  • Strengthen prevention on Windows with cloud delivered protection and reputation controls. Microsoft Defender Antivirus cloud protection provides rapid identification of new and emerging threats using cloud-based intelligence and is recommended to remain enabled. Microsoft Defender SmartScreen provides reputation-based protection against malicious sites and unsafe downloads and can help reduce exposure to attacker infrastructure and socially engineered downloads.  
  • Protect identity and reduce the impact of token theft. Since developer systems often hold access to cloud resources, enforce strong authentication and conditional access, monitor for risky sign ins, and operationalize investigation playbooks when risk is detected. Microsoft Entra ID Protection provides guidance for investigating risky users and sign ins and integrating results into SIEM workflows.  
  • Control SaaS access and data exfiltration paths. Microsoft Defender for Cloud Apps Conditional Access app control supports access and session policies that can monitor sessions and restrict risky actions in real time, which is valuable when an attacker attempts to use stolen tokens or browser sessions to access cloud apps and move data. These controls can complement endpoint controls by reducing exfiltration opportunities at the cloud application layer. [learn.microsoft.com][learn.microsoft.com] 
  • Centralize monitoring and hunting in Microsoft Sentinel. For organizations using Microsoft Sentinel, hunting queries and analytics rules can be built around the observable behaviors described in this blog, including Node.js initiating repeated outbound connections, HTTP based polling to attacker endpoints, and staged upload patterns. Microsoft provides guidance for creating and publishing hunting queries in Sentinel, which can then be operationalized into detections.  
  • Operational best practices for long term resilience. Maintain strict credential hygiene by minimizing secrets stored on developer endpoints, prefer short lived tokens, and separate production credentials from development workstations. Apply least privilege to developer accounts and build identities, and segment build infrastructure where feasible. Combine these practices with the controls above to reduce the likelihood that a single malicious repository can become a pathway into source code, secrets, or deployment systems. 
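Several of the workspace-hardening recommendations above map to concrete VS Code settings. The following is a sketch of a default-deny posture, assuming current setting names (verify them against your VS Code version before rolling out policy):

```jsonc
{
  // Require an explicit trust decision before workspace code can run.
  "security.workspace.trust.enabled": true,
  "security.workspace.trust.untrustedFiles": "prompt",

  // Prevent tasks marked "runOn": "folderOpen" from starting automatically,
  // even in trusted workspaces, until explicitly allowed.
  "task.allowAutomaticTasks": "off"
}
```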

Microsoft Defender XDR detections   

Microsoft Defender XDR customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.  

| Tactic | Observed activity | Microsoft Defender coverage |
| --- | --- | --- |
| Initial access | Developer receives a recruiting-themed "assessment" repo and interacts with it as a normal project; activity blends into routine developer workflows | Microsoft Defender for Cloud Apps: anomaly detection alerts and investigation guidance for suspicious activity patterns |
| Execution | VS Code workspace automation triggers execution on folder open (for example, .vscode/tasks.json behavior); dev server run triggers a trojanized asset to retrieve a remote loader; backend startup/module import triggers environment access plus dynamic execution patterns; obfuscated or dynamically constructed script execution (base64 decode and runtime execution patterns) | Microsoft Defender for Endpoint: behavioral blocking and containment alerts based on suspicious behaviors and process trees (designed for fileless and living-off-the-land activity); attack surface reduction rule alerts, including "Block execution of potentially obfuscated scripts" |
| Command and control (C2) | Stage 1 registration beacons with host profiling and durable identifier reuse; Stage 2 session-based tasking and reporting | Microsoft Defender for Endpoint: IP/URL/domain indicators (IoCs) for detection and optional blocking of known malicious infrastructure |
| Discovery and collection | Operator-driven directory browsing and host profiling behaviors consistent with interactive recon | Microsoft Defender for Endpoint: behavioral blocking and containment investigation/alerting based on suspicious behaviors correlated across the device timeline |
| Collection | Targeted access to developer-relevant artifacts such as environment files and documents; follow-on selection of files for collection based on operator tasking | Microsoft Defender for Endpoint: sensitivity labels and investigation workflows to prioritize incidents involving sensitive data on devices |
| Exfiltration | Multi-step upload workflow consistent with staged transfers and explicit file targeting | Microsoft Defender for Cloud Apps: data protection and file policies to monitor and apply governance actions for data movement in supported cloud services |

Microsoft Defender XDR threat analytics  

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.  

Hunting queries   

Node.js fetching remote JavaScript from untrusted PaaS domains (C2 stage 1/2) 

DeviceNetworkEvents 
| where InitiatingProcessFileName in~ ("node","node.exe") 
| where RemoteUrl has_any ("vercel.app", "api-web3-auth", "oracle-v1-beta") 
| project Timestamp, DeviceName, InitiatingProcessFileName, InitiatingProcessCommandLine, RemoteUrl 

Detection of next.config.js dynamic loader behavior (readFile → eval) 

DeviceProcessEvents 
| where FileName in~ ("node","node.exe") 
| where ProcessCommandLine has_any ("next dev","next build") 
| where ProcessCommandLine has_any ("eval", "new Function", "readFile") 
| project Timestamp, DeviceName, ProcessCommandLine, InitiatingProcessCommandLine 

Repeated short-interval beaconing to attacker C2 (/api/errorMessage, /api/handleErrors) 

DeviceNetworkEvents 
| where InitiatingProcessFileName in~ ("node","node.exe") 
| where RemoteUrl has_any ("/api/errorMessage", "/api/handleErrors") 
| summarize BeaconCount = count(), FirstSeen=min(Timestamp), LastSeen=max(Timestamp) 
          by DeviceName, InitiatingProcessCommandLine, RemoteUrl 
| where BeaconCount > 10 

Detection of detached child Node interpreters (node - spawned from a parent Node process) 

DeviceProcessEvents 
| where InitiatingProcessFileName in~ ("node","node.exe") 
| where ProcessCommandLine endswith "-" 
| project Timestamp, DeviceName, InitiatingProcessCommandLine, ProcessCommandLine 

Directory enumeration and exfil behavior

DeviceNetworkEvents 
| where RemoteUrl has_any ("/hsocketNext", "/hsocketResult", "/upload", "/uploadsecond", "/uploadend") 
| project Timestamp, DeviceName, RemoteUrl, InitiatingProcessCommandLine 

Suspicious access to sensitive files on developer machines 

DeviceFileEvents 
| where Timestamp > ago(14d) 
| where FileName has_any (".env", ".env.local", "Cookies", "Login Data", "History") 
| where InitiatingProcessFileName in~ ("node","node.exe","Code.exe","chrome.exe") 
| project Timestamp, DeviceName, FileName, FolderPath, InitiatingProcessCommandLine 

Indicators of compromise  

Indicator  Type  Description  
• api-web3-auth[.]vercel[.]app 
• oracle-v1-beta[.]vercel[.]app 
• monobyte-code[.]vercel[.]app 
• ip-checking-notification-kgm[.]vercel[.]app 
• vscodesettingtask[.]vercel[.]app 
• price-oracle-v2[.]vercel[.]app 
• coredeal2[.]vercel[.]app 
• ip-check-notification-03[.]vercel[.]app 
• ip-check-wh[.]vercel[.]app 
• ip-check-notification-rkb[.]vercel[.]app 
• ip-check-notification-firebase[.]vercel[.]app 
• ip-checking-notification-firebase111[.]vercel[.]app 
• ip-check-notification-firebase03[.]vercel[.]app  
Domain  Vercel-hosted delivery and staging domains referenced across examined repositories for loader delivery, VS Code task staging, build-time loaders, and backend environment exfiltration endpoints.  
 • 87[.]236[.]177[.]9 
• 147[.]124[.]202[.]208 
• 163[.]245[.]194[.]216 
• 66[.]235[.]168[.]136  
IP address  Command-and-control infrastructure observed across Stage 1 registration, Stage 2 tasking, discovery, and staged exfiltration activity.  
• hxxp[://]api-web3-auth[.]vercel[.]app/api/auth 
• hxxps[://]oracle-v1-beta[.]vercel[.]app/api/getMoralisData 
• hxxps[://]coredeal2[.]vercel[.]app/api/auth 
• hxxps[://]ip-check-notification-03[.]vercel[.]app/api 
• hxxps[://]ip-check-wh[.]vercel[.]app/api 
• hxxps[://]ip-check-notification-rkb[.]vercel[.]app/api 
• hxxps[://]ip-check-notification-firebase[.]vercel[.]app/api 
• hxxps[://]ip-checking-notification-firebase111[.]vercel[.]app/api 
• hxxps[://]ip-check-notification-firebase03[.]vercel[.]app/api 
• hxxps[://]vscodesettingtask[.]vercel[.]app/api/settings/XXXXX 
• hxxps[://]price-oracle-v2[.]vercel[.]app 
 
• hxxp[://]87[.]236[.]177[.]9:3000/api/errorMessage 
• hxxp[://]87[.]236[.]177[.]9:3000/api/handleErrors 
• hxxp[://]87[.]236[.]177[.]9:3000/api/reportErrors 
• hxxp[://]147[.]124[.]202[.]208:3000/api/reportErrors 
• hxxp[://]87[.]236[.]177[.]9:3000/api/hsocketNext 
• hxxp[://]87[.]236[.]177[.]9:3000/api/hsocketResult 
• hxxp[://]87[.]236[.]177[.]9:3000/upload 
• hxxp[://]87[.]236[.]177[.]9:3000/uploadsecond 
• hxxp[://]87[.]236[.]177[.]9:3000/uploadend 
• hxxps[://]api[.]ipify[.]org/?format=json  
URL Consolidated URLs across delivery/staging, registration and tasking, reporting, discovery, and staged uploads. Includes the public IP lookup used during host profiling. 
• next[.]config[.]js 
• tasks[.]json 
• jquery[.]min[.]js 
• auth[.]js 
• collection[.]js 
Filename  Repository artifacts used as execution entry points and loader components across IDE, build-time, and backend execution paths.  
• .vscode/tasks[.]json 
• scripts/jquery[.]min[.]js 
• public/assets/js/jquery[.]min[.]js 
• frontend/next[.]config[.]js 
• server/routes/api/auth[.]js 
• server/controllers/collection[.]js 
• .env  
Filepath  On-disk locations observed across examined repositories where malicious loaders, execution triggers, and environment exfiltration logic reside.  
• ddd43e493cb333c1cc5d7cd50a6a5a61ecd89cfa5f4076f62c2adf96748b87f8 
• 449e2bf57ab4790427a3a7de3d98b6c540e76190a3d844de2f0e7b66be842b19 
• 07ad8525844ce61471e08e8c515b76bf063bac482394152bad814026cd577f69 
• e4d71aa95be0725c351e9d1d273d35ccdb0a8bdb31a57927c8738431b89788f5 
• 13152dcb3be425e1ce0f085cd733121a4665cf9935cf8867738e3d510a80308a 
• 6d59740d0710da370d5c38ddf88d6912487a1799e4ad09b72d764a3d27ed16b3  
Hash (SHA-256)  File hashes observed within the analyzed repository set and related activity.  
• 9ab4045654a6d97762f9ae8bb97d4ecf67fa53ab  Hash (SHA-1)  File hash observed within the analyzed activity set. 

References    

This research is provided by Microsoft Defender Security Research with contributions from Colin Milligan.

Learn more   

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

Explore how to build and customize agents with Copilot Studio Agent Builder 

Microsoft 365 Copilot AI security documentation 

How Microsoft discovers and mitigates evolving attacks against AI guardrails 

Learn more about securing Copilot Studio agents with Microsoft Defender  

Learn more about protecting your agents in real time during runtime (preview) with Microsoft Defender for Cloud Apps on Microsoft Learn 

The post Developer-targeting campaign using malicious Next.js repositories appeared first on Microsoft Security Blog.


Microsoft Partners: Accelerate Your AI Journey at AgentCon 2026 (Free Community Event)


Recently, a customer asked me a question many Microsoft partners are hearing right now:

“We have Copilot — how do we actually use AI to change the way we work?”

That question captures where we are in the AI journey today. Organizations have moved past curiosity. Now they’re looking for trusted partners who can turn AI into real business outcomes.

That’s why events like AgentCon 2026 matter.

A free, community-led event built by practitioners

AgentCon is not a traditional conference. It’s a free, community-driven global event organized by the Global AI Community together with Microsoft partners and ecosystem leaders.

Simply put: it’s for the community, by the community.

Across cities worldwide, developers, consultants, architects, and Microsoft partners come together to share practical experiences building with AI agents, Copilot, and the Microsoft platform.

The focus isn’t theory — it’s implementation:

  • What worked
  • What didn’t
  • What partners can apply immediately with customers

This peer learning model reflects how many of us actually grow in the Microsoft ecosystem: by learning from other partners solving real problems.

Why this matters for Microsoft partners

The opportunity for partners is evolving quickly.

Customers aren’t just asking about AI tools — they’re asking how to redesign processes, automate work, and unlock productivity using AI-powered solutions.

The Microsoft AI Cloud Partner Program emphasizes partner skilling and helping customers realize value from AI investments. Community events like AgentCon accelerate that learning by bringing partners together to exchange proven approaches and practical insights.

When partners upskill faster, customers succeed faster.

Why attend

AgentCon is designed to help partners move from AI awareness to AI delivery.

As an attendee, you can expect:

  • Practical sessions and demos from practitioners
  • Real-world AI and agent scenarios
  • Direct conversations with builders and peers
  • New collaboration and co-sell opportunities

You’ll leave with ideas and approaches you can bring directly into customer engagements.

Why speak

AgentCon thrives because partners share openly with one another.

If you’ve implemented Copilot, explored AI agents, or learned lessons from customer deployments, your experience can help others accelerate their journey.

Speaking at AgentCon allows you to:

  • Share your expertise with the global partner community
  • Build credibility within the Microsoft ecosystem
  • Create new partnerships and opportunities
  • Contribute to collective partner success

You don’t need a perfect story — just an honest one others can learn from.

Join the global AgentCon community

AgentCon 2026 events take place in cities around the world.

Each event is locally organized, community-led, and free to attend.

Help shape the next phase of AI adoption

AI transformation is happening now — and Microsoft partners play a critical role in guiding customers forward.

AgentCon is an opportunity to learn together, share experiences, and strengthen the partner ecosystem driving AI innovation.

👉 Register or apply to speak: https://aka.ms/agentcon2026

We hope you’ll join us — and be part of the community helping customers turn AI potential into real impact.


What's New in Excel (February 2026)


Welcome to the February 2026 update. This month we are excited to announce expanded availability for Agent Mode in Excel, as well as the ability to query modern Excel workbooks (like .xlsx, .xlsb, .xlsm, .ods) stored locally on your device using Microsoft 365 Copilot Chat on Windows and Mac.

In addition, we've heard your feedback that working across multiple Copilot entry points can feel fragmented. To address this, the editing capabilities that App Skills provided will be integrated into Copilot Chat and Agent Mode in Excel, which became generally available earlier this year. Click here to read more >

 

Excel for Windows and Mac:
- Agent Mode in Excel expanded availability
- Query your local Excel files with Copilot Chat

Excel for Windows and Mac

Agent Mode in Excel expanded availability
Agent Mode in Excel is now also available for Copilot in Excel users in the EU, including Current Channel and Monthly Enterprise Channel. Read more here >

Query your local Excel files with Copilot Chat
Copilot in Excel now works with locally stored modern workbooks. This gives users faster, more consistent assistance across all their files, improving productivity without requiring changes to how workbooks are stored. Previously, insights and analysis from Copilot Chat were limited to Excel workbooks stored in the cloud. With this new feature, analyzing your locally saved Excel workbooks with Copilot Chat makes it possible to stay productive even when you’re offline. This feature is currently rolling out on Windows and Mac. Read more here >

 

 



Many of these features are the result of your feedback. THANK YOU! Your continued Feedback in Action (#FIA) helps improve Excel for everyone. Please let us know how you like a particular feature and what we can improve upon: "Give a compliment" or "Make a suggestion". You can also submit new ideas or vote for other ideas via Microsoft Feedback.

Subscribe to our Excel Blog and the Insiders Blog to get the latest updates. Stay connected with us and other Excel fans around the world – join our Excel Community and follow us on X, formerly Twitter.

Special thanks to our Excel MVPs David Benaim, Bill Jelen, Alan Murray, and John Michaloudis for their contribution to this month's What's New in Excel article. David publishes weekly YouTube videos and regular LinkedIn posts about the latest innovations in Excel and more. Bill is the founder and host of MrExcel.com and the author of several books about Excel. Alan is an Excel trainer, author and speaker, best known for his blog computergaga.com and YouTube channel with the same name. John is the Founder & Chief Inspirational Officer at MyExcelOnline.com, where he passionately teaches thousands of professionals how to use Excel to stand out from the crowd.

Read the whole story
alvinashcraft
55 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

App Skills is evolving with Copilot in Excel


We are continuing to streamline the ways people engage with Copilot in Excel, and we've heard your feedback that working across multiple Copilot entry points can feel fragmented. To address this, Copilot in Excel is transitioning toward a more unified experience so users can easily choose between conversational assistance and direct editing capabilities.

As part of this effort, the editing capabilities that App Skills provided will be integrated into Copilot Chat and Agent Mode in Excel, which became generally available earlier this year. This update is part of our broader work to simplify the Copilot entry points and make it clearer how to interact with Copilot depending on the task. We know that when you find a workflow that works, change can feel disruptive. That's why we want to give you a clear picture of what's evolving in how you use Copilot in Excel and how these changes provide an even better experience.

Why this change matters 

We've heard your feedback about wanting a more capable experience when working with Copilot inside the spreadsheet. The enhanced editing experience that Agent Mode introduced was built specifically to address this. It's designed to handle more complex requests, work across multiple steps, and give you greater control over how Copilot edits your workbooks. 

Rather than maintaining separate experiences that can feel fragmented, we're bringing everything together so you have: 

  • More power: Copilot with Agent Mode can now handle complex, multi-step reasoning tasks that go beyond what App Skills could do. 
  • Better clarity: You'll know exactly where to go depending on what you need. Direct editing happens with Agent Mode; quick answers and simple actions work best in Copilot Chat. 
  • Continued innovation: By focusing on improvements toward a unified experience, we can deliver new capabilities faster. As part of this, we also plan to better integrate Agent Mode’s editing capabilities into the Copilot Chat experience in the coming months. 

What’s changing? 

  • App Skills entry points in Excel are going away. You will no longer find an App Skills button in the ribbon or be able to use App Skills from the context menu.
  • Existing skills are consolidating into Copilot Chat and Agent Mode. When you want Copilot to make changes directly in your workbook, Agent Mode is designed to support many core editing tasks, such as creating or updating tables, applying formatting, or generating charts and PivotTables. Copilot Chat remains available for tasks that don’t require modifying content, such as interpretation or exploration of your data and using agents like Analyst.
  • During this transition, some users may still see App Skills entry points for a short time. If you open the App Skills chat pane and submit a prompt, you may receive an error message indicating that App Skills is no longer available instead of a response. If this happens, you can continue your workflow using Agent Mode or Copilot Chat based on the type of assistance you need. 

When is it happening? 

This update is rolling out now. Depending on your Excel version, you may see the App Skills entry point up until the end of February.

App Skills scenarios not yet available

Certain scenarios that previously used App Skills—specifically the Advanced Analysis mode that used Python in Excel and advanced text analysis capabilities—are not yet available within Copilot Chat or Agent Mode. We are continuing to expand support for these capabilities in Copilot Chat and Agent Mode—watch for updates as these become available over time. 

Note: this was originally communicated to commercial customers via the M365 Message Center (MC1184407) on November 10, 2025.


Spotlight: Top 10 MVP Theater Sessions from Microsoft Ignite


The Top 10 MVP Theater Sessions — Now On Demand 

Every year, Microsoft Ignite brings together some of the brightest minds in our global community, and our Microsoft Most Valuable Professionals (MVPs) continue to play a major role. This year, MVPs once again stepped up to deliver high-impact Theater sessions packed with real-world expertise, practical demos, a true passion for helping others, and the kind of energy only community leaders can bring.  
As part of our continued commitment to amplify community voices, we invited a group of 10 MVPs to record their 25-minute Theater sessions from Microsoft Ignite. We're excited to share these publicly, as Theater sessions are usually only delivered in person. Now the global community can learn from the insights MVPs shared in San Francisco, no matter your time zone. Check out the playlist here.  
Microsoft Ignite featured Theater sessions from 20 MVPs across a variety of subjects. Every MVP speaker contributed tremendous value to Microsoft Ignite, and this selection simply represents a cross section of standout topics we’re thrilled to make more widely accessible.  
Whether you’re exploring Microsoft 365, Copilot adoption, security, governance, developer tools, or emerging AI patterns, there’s something here for every technical professional.

Why These Sessions Matter 

MVP-led Theater sessions consistently score among the highest rated and most attended Theater sessions at Microsoft Ignite, reflecting the community’s appreciation for practical solutions, honest learnings, and direct experience from the field. This video collection brings that same value into an easy, shareable format that supports year-round skilling and community knowledge sharing. 
These videos are perfect to: 
  • Share in your user groups and community meetups 
  • Use for team training or upskilling 
  • Bookmark for reference as you adopt the latest Microsoft technologies 
  • Learn directly from experienced practitioners solving real problems in real environments 

Thank you to our MVP Community 

We want to extend our gratitude to every MVP and Regional Director who presented, staffed, attended or otherwise contributed to the success of Microsoft Ignite. The impact of your expertise reaches far beyond a single event; you help thousands of people level up their skills and stay ahead of what’s next. 
We can’t wait to see how you use these videos in your communities around the world. 
MVP Presenting at Ignite

Resources 

The full playlist is available on YouTube.

Implementing a clean room Z80 / ZX Spectrum emulator with Claude Code

Anthropic recently released a blog post describing an experiment in which the latest version of Opus, 4.6, was instructed to write a C compiler in Rust, in a "clean room" setup.

The experiment methodology left me dubious about the kind of point they wanted to make. Why not provide the agent with the ISA documentation? Why Rust? Writing a C compiler is exactly a giant graph manipulation exercise: the kind of program that is harder to write in Rust. Also, in a clean room experiment, the agent should have access to all the information about well-established computer science advances related to optimizing compilers: there are a number of papers that could easily be synthesized into a few markdown files. SSA, register allocation, instruction selection and scheduling. Those things needed to be researched *first*, as a prerequisite, and the implementation would still be "clean room".

Not allowing the agent to access the Internet, nor any other compiler source code, was certainly the right call. Less understandable is the almost-zero steering principle, but this is coherent with a certain kind of experiment, if the goal was showcasing the completely autonomous writing of a large project. Yet, we all know this is not how coding agents are used in practice, most of the time. Anyone who uses coding agents extensively knows very well how, even without ever touching the code, a few hints here and there completely change the quality of the result.

# The Z80 experiment

I thought it was time to try a similar experiment myself, one that would take one or two hours at most, and that was compatible with my Claude Code Max plan: I decided to write a Z80 emulator, and then a ZX Spectrum emulator (and even more, a CP/M emulator, see later) in a condition that I believe makes more sense as a "clean room" setup. The result can be found here: https://github.com/antirez/ZOT.

# The process I used

1. I wrote a markdown file with the specification of what I wanted to do. Just English, high level ideas about the scope of the Z80 emulator to implement. I said things like: it should execute a whole instruction at a time, not a single clock step, since this emulator must be runnable on things like an RP2350 or similarly limited hardware. The emulator should correctly track the clock cycles elapsed (and I specified we could use this feature later in order to implement the ZX Spectrum contention with ULA during memory accesses), provide memory access callbacks, and should emulate all the known official and unofficial instructions of the Z80.

For the Spectrum implementation, performed as a successive step, I provided much more information in the markdown file, such as the kind of rendering I wanted in the RGB buffer, and how it needed to be optional so that embedded devices could render the scanlines directly as they transferred them to the ST77xx display (or similar), how it should be possible to interact with the I/O port to set the EAR bit to simulate cassette loading in a very authentic way, and many other desiderata I had about the emulator.

This file also included the rules that the agent needed to follow, like:

* Accessing the internet is prohibited, but you can use the specification and test vectors files I added inside ./z80-specs.
* Code should be simple and clean, never over-complicate things.
* Each solid progress should be committed in the git repository.
* Before committing, you should test that what you produced is high quality and that it works.
* Write a detailed test suite as you add more features. The test must be re-executed at every major change.
* Code should be very well commented: things must be explained in terms that even people not well versed with certain Z80 or Spectrum internals details should understand.
* Never stop for prompting, the user is away from the keyboard.
* At the end of this file, create a work in progress log, where you note what you already did, what is missing. Always update this log.
* Read this file again after each context compaction.

2. Then, I started a Claude Code session, and asked it to fetch all the useful documentation on the internet about the Z80 (later I did this for the Spectrum as well), and to extract only the useful factual information into markdown files. I also provided the binary files for the most ambitious test vectors for the Z80, the ZX Spectrum ROM, and a few other binaries that could be used to test if the emulator actually executed the code correctly. Once all this information was collected (it is part of the repository, so you can inspect what was produced) I completely removed the Claude Code session in order to make sure that no contamination with source code seen during the search was possible.

3. I started a new session, and asked it to check the specification markdown file, and to check all the documentation available, and start implementing the Z80 emulator. The rules were to never access the Internet for any reason (I supervised the agent while it was implementing the code, to make sure this didn’t happen), to never search the disk for similar source code, as this was a “clean room” implementation.

4. For the Z80 implementation, I did zero steering. For the Spectrum implementation I used extensive steering for implementing the TAP loading. More about my feedback to the agent later in this post.

5. As a final step, I copied the repository in /tmp, removed the ".git" repository files completely, started a new Claude Code (and Codex) session and claimed that the implementation was likely stolen or too strongly inspired by somebody else's work. The task was to check against all the major Z80 implementations whether there was evidence of theft. The agents (both Codex and Claude Code), after extensive search, were not able to find any evidence of copyright issues. The only similar parts were about well-established emulation patterns and things that are Z80 specific and can't be done differently; the implementation looked distinct from all the other implementations in a significant way.

# Results

Claude Code worked for 20 or 30 minutes in total, and produced a Z80 emulator that was able to pass ZEXDOC and ZEXALL, in 1200 lines of very readable and well commented C code (1800 lines with comments and blank spaces). The agent was prompted zero times during the implementation, it acted absolutely alone. It never accessed the internet, and the process it used to implement the emulator was of continuous testing, interacting with the CP/M binaries implementing the ZEXDOC and ZEXALL, writing just the CP/M syscalls needed to produce the output on the screen. Multiple times it also used the Spectrum ROM and other binaries that were available, or binaries it created from scratch to see if the emulator was working correctly. In short: the implementation was performed in a very similar way to how a human programmer would do it, and not outputting a complete implementation from scratch “uncompressing” it from the weights. Instead, different classes of instructions were implemented incrementally, and there were bugs that were fixed via integration tests, debugging sessions, dumps, printf calls, and so forth.

# Next step: the ZX Spectrum

I repeated the process again. I instructed the documentation gathering session very accurately about the kind of details I wanted it to search on the internet, especially the ULA interactions with RAM access, the keyboard mapping, the I/O port, how the cassette tape worked and the kind of PWM encoding used, and how it was encoded into TAP or TZX files.

As I said, this time the design notes were extensive since I wanted this emulator to be specifically designed for embedded systems, so only 48k emulation, optional framebuffer rendering, very little additional memory used (no big lookup tables for ULA/Z80 access contention), ROM not copied in the RAM to avoid using additional 16k of memory, but just referenced during the initialization (so we have just a copy in the executable), and so forth.

The agent was able to create a very detailed documentation about the ZX Spectrum internals. I provided a few .z80 images of games, so that it could test the emulator in a real setup with real software. Again, I removed the session and started fresh. The agent started working and ended 10 minutes later, following a process that really fascinates me, and that probably you know very well: the fact is, you see the agent working using a number of diverse skills. It is expert in everything programming related, so as it was implementing the emulator, it could immediately write a detailed instrumentation code to “look” at what the Z80 was doing step by step, and how this changed the Spectrum emulation state. In this respect, I believe automatic programming to be already super-human, not in the sense it is currently capable of producing code that humans can’t produce, but in the concurrent usage of different programming languages, system programming techniques, DSP stuff, operating system tricks, math, and everything needed to reach the result in the most immediate way.

When it was done, I asked it to write a simple SDL based integration example. The emulator was immediately able to run the Jetpac game without issues, with working sound, and very little CPU usage even on my slow Dell Linux machine (8% usage of a single core, including SDL rendering).

Once the basic stuff was working, I wanted to load TAP files directly, simulating cassette loading. This was the first time the agent missed a few things, specifically about the timing the Spectrum loading routines expected, and here we are in the territory where LLMs start to perform less efficiently: they can’t easily run the SDL emulator and see the border changing as data is received and so forth. I asked Claude Code to do a refactoring so that zx_tick() could be called directly and was not part of zx_frame(), and to make zx_frame() a trivial wrapper. This way it was much simpler to sync EAR with what it expected, without callbacks or the wrong abstractions that it had implemented. After such change, a few minutes later the emulator could load a TAP file emulating the cassette without problems.

This is how it works now:

do {
    zx_set_ear(zx, tzx_update(&tape, zx->cpu.clocks));
} while (!zx_tick(zx, 0));

I continued prompting Claude Code in order to make the key bindings more useful, plus a few more things.

# CP/M


One thing that I found really interesting was the ability of the LLM to inspect the COM files for the ZEXALL / ZEXDOC tests for the Z80, easily spot the CP/M syscalls that were used (a total of three), and implement them for the extended Z80 test (executed by make fulltest). So, at this point, why not implement a full CP/M environment? Same process again, same good result in a matter of minutes. This time I interacted with it a bit more for the VT100 / ADM3 terminal escape conversions, reported things not working in WordStar initially, and in a few minutes everything I tested was working well enough (but there are fixes to do, like simulating a 2 MHz clock; right now it runs at full speed, making CP/M games impossible to use).


# What is the lesson here?

The obvious lesson is: always provide your agents with design hints and extensive documentation about what they are going to do. Such documentation can be obtained by the agent itself. And, also, make sure the agent has a markdown file with the rules of how to perform the coding tasks, and a trace of what it is doing, that is updated and read again quite often.

But those tricks, I believe, are quite clear to everybody who has worked extensively with automatic programming in recent months. To think in terms of "what a human would need" is often the best bet, plus a few LLM-specific things, like the forgetting issue after context compaction, the continuous ability to verify it is on the right track, and so forth.

Returning to the Anthropic compiler attempt: one of the steps where the agent failed was the one most strongly related to the idea of memorization of what is in the pretraining set: the assembler. With extensive documentation, I can't see any way Claude Code (and, even more, GPT5.3-codex, which in my experience is more capable for complex stuff) could fail at producing a working assembler, since it is quite a mechanical process. This is, I think, in contradiction with the idea that LLMs memorize the whole training set and uncompress what they have seen. LLMs can memorize certain over-represented documents and code, but while they can extract such verbatim parts of the code if prompted to do so, they don't have a copy of everything they saw during training, nor do they spontaneously emit copies of already-seen code in their normal operation. We mostly ask LLMs to create work that requires assembling different knowledge they possess, and the result is normally something that uses known techniques and patterns, but that is new code, not a copy of some pre-existing code.

It is worth noting, too, that humans often follow a less rigorous process than the clean room rules detailed in this blog post: humans often download the code of different implementations related to what they are trying to accomplish, read them carefully, then try to avoid copying stuff verbatim, but oftentimes take strong inspiration. This is a process that I find perfectly acceptable, but it is important to keep in mind what happens in the reality of code written by humans. After all, information technology evolved so fast even thanks to this massive cross-pollination effect.

For all the above reasons, when I implement code using automatic programming, I don’t have problems releasing it MIT licensed, like I did with this Z80 project. In turn, this code base will constitute quality input for the next LLMs training, including open weights ones.


# Next steps

To make my experiment more compelling, one should try to implement a Z80 and ZX Spectrum emulator without providing any documentation to the agent, and then compare the results of the two implementations. I didn't find the time to do it, but it could be quite informative.