Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Let me see some ID: age verification is spreading across the internet

Pixels of data over an obscured human face with a date entry form.

Age verification is a reality on a growing number of social media platforms, requiring an ID or facial scan for full access to everything from YouTube to Roblox. The age-gating wave comes alongside calls for stronger child safety measures online, despite concerns about privacy, security, and censorship.

In the US, lawmakers are pushing forward bills like the App Store Accountability Act and Parents Over Platforms Act to have app stores themselves verify users’ ages. 

After user backlash, Discord has delayed its plans to roll out age verification globally until later in 2026, but it hasn’t completely shelved them, even after a breach of a former vendor last year leaked some users’ scanned IDs. Meanwhile, other platforms, like ChatGPT and Google, are applying AI models to identify and lock down accounts suspected of being underage until some form of identity verification proves the user is an adult.

Follow along below for the latest updates on age verification for internet services and apps…


Discord is delaying its global age verification rollout

A robot verifying the age of a human man.

Discord won't roll out age verification globally on its platform next month as previously announced, and says in a blog post that it's delaying the launch until the second half of 2026. "The way this landed, many of you walked away thinking we're requiring face scans and ID uploads from everyone just to use Discord. That's not what's happening, but the fact that so many people believe it tells us we failed at our most basic job: clearly explaining what we're doing and why," writes Discord CTO Stanislav Vishnevskiy.

Discord says that before it rolls out age verification globally, it will add more options for users to verify their age (includ …

Read the full story at The Verge.


Building an AI-Ready America: Teaching in the AI age


On Tuesday, February 24th, Microsoft Senior Director of Education and Workforce Policy Allyson Knox testified before the House Education & Workforce Subcommittee on Early Childhood, Elementary, and Secondary Education. To view the proceedings, visit the committee’s website.

STATEMENT OF ALLYSON KNOX

SENIOR DIRECTOR OF EDUCATION AND WORKFORCE POLICY

MICROSOFT CORPORATION

BEFORE THE

EDUCATION AND WORKFORCE COMMITTEE

SUBCOMMITTEE ON EARLY CHILDHOOD, ELEMENTARY, AND SECONDARY EDUCATION

UNITED STATES HOUSE OF REPRESENTATIVES

“BUILDING AN AI-READY AMERICA: TEACHING IN THE AI AGE”

TUESDAY, FEBRUARY 24, 2026

WASHINGTON, D.C.

Good afternoon and thank you, Chairman Kiley, Ranking Member Bonamici, Members of the Subcommittee for inviting me to testify today. My name is Allyson Knox. I am Senior Director of Education and Workforce Policy at Microsoft, and I am pleased to have this opportunity to discuss issues related to artificial intelligence and its impact on teachers.

Today, I will share insights we have gathered from teachers about their experiences, challenges, and needs as they integrate AI in education; outline the steps Microsoft and other organizations are taking to facilitate this transition; and recommend legislative approaches to help policymakers strengthen these efforts. These legislative approaches include supporting professional development for teachers; encouraging public-private partnerships; promoting AI literacy; providing guidance on responsible AI use; and supporting innovation.

I would like to begin by quoting from Microsoft’s vice-chair and president, Brad Smith, in his recent foreword to Degrees of Change: What AI Means for Education and the Next Generation[i]:

“Generative AI has become the fastest-spreading technology in human history, adopted at a pace that even the most seasoned technologists could scarcely imagine. This speed is breathtaking, but it also compels us to pause and ask, “Are we ready for what comes next?” AI’s promise is extraordinary. It can help solve problems that have challenged humanity for decades—improving health outcomes, advancing education, and unlocking new opportunities for economic growth. But, like every transformative technology before it, AI brings new questions and new responsibilities.”

This thought-provoking quote is apt for today’s conversation on how AI is impacting teachers. The speed of AI adoption in our nation’s schools and classrooms is indeed breathtaking. Just three years ago, AI had barely made a mark in education. However, our 2025 Study on AI in Education found that 80% of U.S. K-12 teachers have used AI in their roles or for school-related purposes at least once or twice and one-fifth report daily use of AI. Additionally, 58% of K-12 teachers think AI usage at their school/district will increase in the next year.[ii]

What we are hearing from teachers on the impact of AI:

The breadth of adoption has been profound. We have heard directly from teachers who are using AI to streamline lesson planning, curriculum development, and personalize student learning in ways that were unimaginable a few years ago.[iii] AI is also reducing the time it takes to carry out administrative tasks, allowing more time for teachers to focus on their students.

Despite these benefits, we know teachers face challenges when it comes to AI in the classroom. We found roughly one in three teachers lack confidence in using AI effectively and responsibly. Many teachers also express concerns about how AI can exacerbate cheating and are worried about issues such as data privacy and student safety.

Teachers know AI is here to stay, and based upon countless surveys, forums, and focus groups, teachers are ready to tackle these challenges and ask for support in three main areas:

  1. AI literacy – Teachers want the skills, knowledge, and support to build AI literacy and critical thinking in their students;
  2. AI guardrails – Teachers want students to use AI responsibly and safely; and
  3. AI tools – Teachers want classroom-ready AI tools and opportunities to provide feedback that improve them.

I’m excited to share a few ways Microsoft, along with many of our partners, are committed to providing teachers with the support they are requesting.

1. AI literacy – Teachers want the skills, knowledge, and support to build AI literacy and critical thinking in their students

At the core of this support is listening to and learning from teachers and understanding what they want and need to become AI literate themselves and teach AI literacy to their students. These conversations have resulted in exciting initiatives, including the recent launch of the Microsoft Elevate for teachers program, part of the company’s broader commitment[iii] to help schools and educators build skills, expand opportunities, and ensure everyone benefits from AI.

Microsoft Elevate for Educators

The Microsoft Elevate for Educators program equips educators and school leaders with access to one of the world’s largest and most connected peer educator networks and offers free professional development resources. It will provide free access to a new industry-recognized credential for educators, developed in partnership with one of the leading national nonprofits focused on technology and innovation (ISTE+ASCD).[vi] This partnership is aligned with the AI Literacy Framework, which is intended to help educators gain confidence and expertise in integrating AI into their teaching and learning. As part of this work, we also support ISTE+ASCD in advancing AI in teacher preparation programs.

National Academy for AI Instruction

Along with OpenAI and Anthropic, we are supporting the National Academy for AI Instruction, through a partnership with the American Federation of Teachers and the United Federation of Teachers. The Academy describes itself as a national training hub designed by educators – shaping the future of AI in public education, grounded in safety and people-first technology, and improving student learning. From everything we have heard from teachers, this is exactly the type of support they need to promote AI literacy. The Academy also focuses on building critical thinking skills for students and educators.

Rob Weil, who heads up the Academy, recently shared an update on their work with me. He noted through direct engagement with teachers, they listen to what the primary concerns teachers have around using AI in the classroom are, and then work with them to design trainings that are directly responsive to their concerns and meet them where they are – including using whatever technology they are already using in their classroom.

Their goal is to train 400,000 teachers over the next five years. The Academy is centered on a “train the trainer” model, building capacity to provide AI literacy to teachers at scale and giving millions of teachers the potential to benefit from this initiative. Weil noted that interest and participation in the Academy have been taking off, largely due to word of mouth. This month, 1,000 teachers showed up for a virtual session, and another in-person session was oversubscribed and had to turn away a hundred interested teachers.

Why the interest? Teachers want to learn from their peers and trusted partners; they also want to ensure they are using AI effectively and safely. Weil explained that one of the most popular aspects of the training centers on the Academy’s Commonsense Guardrails for Using Advanced Technology in Schools,[v] which helps empower teachers to address the challenges they face in implementing AI. Some teachers describe AI as the Wild West, and this guide has provided a roadmap for bringing the technology into the classroom.

The trainings also provide real-world, hands-on experiences with using technology which teachers themselves are bringing to the table. At the trainings, teachers are asked what they could use the most help with and then have time to experiment with different tools to do things like start a draft of a lesson plan or an outline for a rubric – allowing them more time and flexibility to incorporate their expertise. In addition, the Academy creates opportunities for educators to influence the development of AI for schools.

Support for Special Education Teachers

We also recognize the potential that AI holds to support students with disabilities – and the need to ensure special education teachers have the support and resources to fully unlock this technology.

Recently, we launched a course to support educators in exploring how Microsoft AI tools can be thoughtfully used in special education environments to reduce administrative demands, strengthen accessibility, and support clear communication with families. Throughout the learning path, responsible use of AI, privacy, and transparency are emphasized so educators can determine when and how AI fits into their practice in ways that align with student needs and professional values.

After our engagements, we tailored our trainings to special education teachers by incorporating their direct feedback. Key topics included privacy with sensitive medical information and using AI to assist parents and caregivers in IEP meetings. We emphasized clear communication, parental inclusion, and ensuring parents understand the meeting’s goals and how best to support their children.

Finally, special education involves a collaborative team beyond just teachers, and we’ve revised our approach to address the needs of occupational therapists, physical therapists, and all other members involved in special education.

Support for Teachers in Rural America

We have found there’s a significant gap in daily AI usage by urban teachers versus their rural and suburban counterparts (39% vs. 24%).[iv] This gap underscores why ensuring AI tools, resources, and professional development are attuned to the needs of rural teachers is critical.

For the last five years, we’ve been working with the National Future Farmers of America (FFA) and agricultural science teachers to develop FarmBeats for Students and ensure it is responsive to agricultural science teachers’ needs. We engaged in an iterative process with them – collaboratively designing and building curriculum and training with agricultural science teachers from the very beginning of development.

FarmBeats for Students brings AI to agricultural education through a hands-on educational program that brings precision agriculture directly into the classroom. The program consists of an affordable hardware kit and a free curriculum aligned with rigorous educational standards. Activities give students direct experience with topics like digital sensors, data analysis, and AI.

We brought FarmBeats for Students to the National FFA convention and held a series of workshops with teachers across the country. They experimented with the kits and provided input to ensure this technology was directly responsive to what they wanted to see in the classroom.

In addition to our partnership with the National FFA, Microsoft helps meet the needs of rural teachers by deploying the online content referenced above through Elevate, as well as supporting community-based organizations that help facilitate activities and events which promote AI literacy in rural communities.

AI Literacy Frameworks, Standards, and Guidance

Teachers want frameworks that help them integrate AI into their classrooms. We are pleased there is bipartisan interest in establishing strong frameworks around AI and education, especially highlighting the need for widespread AI literacy. Microsoft has provided support, guidance, and input to organizations and initiatives such as Code.org and TeachAI who work to develop and promote frameworks, guidance, and standards.

Microsoft encourages state and local policymakers to review and leverage these resources as they incorporate AI in education:

  • The TeachAI Foundational Policies[vii]: This resource, endorsed by dozens of policy organizations and associations, provides practical guidance for national, state, and local leaders to harness AI’s benefits in teaching and learning while mitigating risks. The policies focus on five priorities—fostering leadership, promoting AI literacy, providing clear guidance, building educator capacity, and supporting responsible innovation—to ensure AI strengthens education systems and prepares learners for an AI‑enabled workforce.
  • The TeachAI AI Guidance for Schools Toolkit[viii]: The Toolkit helps education authorities, school leaders, and educators develop clear, responsible guidance for using AI in K–12 education, balancing potential benefits with risks such as privacy, bias, and academic integrity. It provides a practical framework, principles, sample policies, and communication templates to support safe and human‑centered AI adoption across school systems. The Toolkit has been used by the majority of states in constructing guidance for schools.
  • The AI Literacy Framework[ix]: The AI Literacy Framework defines the knowledge, skills, and attitudes students and educators need to understand, use, and critically evaluate AI in education. It is organized around four core domains—Engaging with AI, Creating with AI, Managing AI, and Designing AI—and emphasizes critical thinking, ethics, and human judgment alongside technical understanding. It also emphasizes the foundational computer science concepts that prepare students to not just use AI but understand how AI works and its societal impacts. The framework is designed to be interdisciplinary, practical, and durable, helping schools integrate AI literacy into curriculum, professional learning, and policy in age‑appropriate ways.

2. AI guardrails – Teachers want students to use AI responsibly and safely

We have heard from teachers that one of the greatest hesitations they have with AI is around safety for students. This includes ensuring AI tools used in the classroom protect student privacy, don’t collect their information, and are safe from a mental health perspective.

Some of the strategies teachers use to promote safety are a significant focus in the professional development referenced earlier. In addition, the frameworks include key components to help teachers understand responsible AI use.

Microsoft takes our responsibility as a developer and deployer of AI technology very seriously. Paramount to deploying this technology in classrooms is ensuring it is responsible. Microsoft has identified six principles that we believe should guide AI development and use.

  • Fairness: AI systems should treat all people fairly.
  • Reliability and Safety: AI systems should perform reliably and safely.
  • Privacy and Security: AI systems should be secure and respect privacy.
  • Inclusiveness: AI systems should empower everyone and engage all people.
  • Transparency: AI systems should be understandable.
  • Accountability: People should be accountable for AI systems.

These principles are the foundation for other tools and resources we share with teachers to provide guidelines for them to deploy AI in the classroom.

As another example of our commitment to safety, earlier this month, on Safer Internet Day, we launched our new Microsoft Education Security Toolkit,[x] which provides educators and IT teams with practical guidance tailored to the realities of modern education.

3. AI tools – Teachers want classroom-ready AI tools and opportunities to provide feedback that improve them

Teachers often lack the right AI tools tailored to their needs for boosting student achievement. It’s essential to develop AI solutions based on teacher input rather than just delivering generic options. Microsoft strives to meet this responsibility by designing tools and partnerships that address educators’ needs. We believe this approach creates a critical feedback loop that will allow us to constantly evolve our tools to maximize their benefit in the classroom over time.

In fact, at Microsoft, our engineering teams collaborate closely with educators and students to advance the development of AI tools for classroom use. We partner with teacher organizations and directly engage with the disability community to better understand instructional requirements and design technology that enhances student learning outcomes. Some examples include:

Reading Progress

One of the tools we offer to teachers is called Reading Progress, which helps teachers analyze students’ fluency and generates reading passages and comprehension questions.

From the beginning of development, we worked with individual teachers through our Educator Insiders program and with entire schools or districts through our Technology Adoption Preview, where educators test prototypes of our products and provide feedback.

For example, teachers asked for a tool that could generate tailored passages to meet the needs of their students. We incorporated that feedback and now, teachers can get as specific as saying they want a passage generated about sports that is for a third-grade reading level and includes specific words their class is learning.

Teachers also told us they wanted reading comprehension questions generated faster and at higher quality; AI now makes both possible.

Teachers report increased comprehension, higher reading fluency, and higher scores, especially for struggling or reluctant readers.

Teach for America (TFA)

Microsoft has been a proud supporter of TFA’s efforts to improve the education system and expand opportunities for children across the U.S. It has been great to see all of the ways in which TFA has worked to equip their teachers with AI fluency in order to help them integrate this technology into the classroom.

TFA recently completed a cloud migration to Microsoft Azure, unlocking new avenues to improve program design and delivery, direct more funds toward its mission of ensuring all kids have access to an excellent education, and evolve to offer the best learning options inside and outside the classroom.

Where do we go from here?

What is both exciting and daunting about AI is that while we can take lessons learned from previous technological transformations in the classroom, much of the book on AI adoption has yet to be written. That means tech companies, teachers, government, and other stakeholders have the opportunity to shape where AI goes in education and beyond.

I want to conclude my remarks today with policy recommendations for the Committee to consider:

  • Support professional development for teachers to effectively teach about AI and responsibly integrate AI tools in the classroom.
    • At the Federal level, this means providing priorities for competitive grant programs, such as those recently proposed by the U.S. Department of Education.
  • Encourage public-private partnerships.
    • Incentivize and prioritize Federal funds and grants that support partnerships between technology companies and educational programs, including apprenticeship and credentialed organizations, to develop up-to-date AI curriculum.
  • Promote AI literacy across the U.S.
    • Integrate AI skills and concepts, including their foundational principles, social impacts, and ethical concerns, into existing curriculum and instruction.
  • Provide guidance.
    • Equip schools with guidance on the safe, effective, and responsible use of AI, including considerations related to student privacy, data security, accessibility, transparency, and appropriate human oversight.
  • Invest in innovation.
    • Support research and evaluation to better understand the impacts of AI in education, including its effects on teaching and learning and student outcomes, and to identify effective, scalable practices that mitigate the digital divide.

 

[i] Smith, Brad. “Foreword.” Degrees of Change: What AI Means for Education and the Next Generation, by Juan M. Lavista Ferres, John Wiley & Sons, 2026.
[ii] See Microsoft 2025 AI in Education Survey Details, August 2025
[iii] See Microsoft 2025 AI in Education Survey Details, August 2025
[iv] See Microsoft Elevate: Putting people first, July 2025
[v] See Commonsense Guardrails for Using Advanced Technology in Schools, March 2025
[vi] See Microsoft 2025 AI in Education Survey Details, August 2025
[vii] See TeachAI Foundational Policies
[viii] See TeachAI AI Guidance for Schools Toolkit
[ix] See AI Literacy Framework
[x] See Microsoft Education Security Toolkit, February 2026

[1] ISTE (International Society for Technology in Education) + ASCD (Association for Supervision and Curriculum Development)

 

The post Building an AI-Ready America: Teaching in the AI age appeared first on Microsoft On the Issues.


Mickey Tunes In: 1930 Comics and Cultural Production


How Mickey’s 1930 comic strip turned borrowed hit songs into the foundation of Disney’s musical legacy.

On January 13, 1930, Mickey Mouse began starring in daily comic strips. This new endeavor “functioned as many fans’ most readily available source of Mickey Mouse entertainment.”1 Despite being a print medium, these works heavily featured musical motifs of popular songs—a staple of his contemporary cartoons. Unlike the concurrent animated shorts, which could incorporate synchronized sound, the comic strip relied on musical shorthand: fragments of lyrics, song titles, and musical notes that invited readers to “hear” the music. These musical moments are not incidental but intentional—Mickey participates within a popular cultural soundscape.

Early strips utilize the cultural cachet of these already popular songs to reinforce Mickey’s own cultural relevance. Through subsequent references Mickey becomes associated with music that audiences recognize and consider culturally valuable. Ultimately, the Disney company utilizes this association—Mickey and music as culturally significant—to lend legitimacy to their own musical works. Through this technique the 1930 comics move from borrowing musical culture to manufacturing it.

The first instance of Mickey Mouse referencing a song is “Singin’ in the Bathtub”, a hit song from Warner Brothers’ The Show of Shows (1929).

March 10, 1930

A single panel—essentially a brief throwaway—the reference establishes the musical borrowing technique that the strip would employ throughout 1930. The song he borrows is a parody of The Hollywood Revue’s “Singin’ in the Rain”, thus itself an instance of the same cultural borrowing technique.

The borrowing strategy is repeated when Mickey and Minnie “sing” the parody’s inspiration, “Singin’ in the Rain” while camping out during a rainstorm.

May 20, 1930

The song’s optimistic tone mirrors the scene’s mood, and its inclusion requires no explanation for contemporary readers. The inclusion feels natural and of the moment: another instance of deft cultural association. Viewers of the time might have been reminded of the dazzling two-strip Technicolor sequence of the song in The Hollywood Revue.

Reaching further back than just the prior year, Disney references the popular 1926 song “(Looking At The World Thru) Rose Colored Glasses”.

July 10, 1930

First published in 1926, “Rose Colored Glasses” is the oldest song referenced. This distance from initial publication emphasizes durability rather than novelty, suggesting cultural staying power. Mickey is aligned not merely with recent hits but with songs that have proven lasting appeal. Mickey Mouse plus familiar music equals cultural relevance. At this point, Disney has established a framework that can be leveraged.

Throughout all of these references, Disney leans on the popularity and legitimacy of other musical works to establish the “sound” of their comic strip. Each song that Mickey references circulated as sheet music, 78rpm records, or in popular films of the time like The Hollywood Revue. These avenues established each song’s cultural value. By repeatedly placing Mickey alongside them, the strip transfers that value onto the character himself. Thus, it is significant when Disney’s own original song, “Minnie’s Yoo Hoo,” appears in the strip.

October 28, 1930

First introduced in 1929’s Mickey’s Follies, “Minnie’s Yoo Hoo” utilized the new synchronized sound technology that contributed to Mickey Mouse’s popularity. In March 1930, Variety remarked that the “Mickey Mouse cartoons have come to the front with a theme song.” The song quickly became a marketing anthem for Mickey.


Sheet music cover of “Minnie’s Yoo Hoo”
(source: Library of Congress)

While the other musical numbers referenced by Mickey in the comic were also commercial properties, Mickey’s presentation of them is not an attempt to sell those works. Rather, Disney and Mickey seek to benefit from their cultural value. By including “Minnie’s Yoo Hoo” in the strip, Disney moves it from a commercial song to a cultural work—referenced casually and without promotional framing. Its appearance signals that it belongs among the other recognizable tunes. As with the borrowed songs before it, sheet music and recordings were available for purchase, reinforcing its circulation beyond the page.

Today it is easy to assume that Disney songs have always held cultural significance. Yet the 1930 comic strips reveal the work behind those earliest efforts. Through casual references to culturally popular musical works of the time, the Disney company established their own songs as culturally significant. Mickey’s work as the referential intermediary gave the in-house songs credibility that has grown since. The comics remind us that cultural dominance is rarely instantaneous; it is built, quietly and cumulatively. If you want to see how this happened, go read the 1930 comics in our collections.

  1.  David Gerstein and J. B. Kaufman, Walt Disney’s Mickey Mouse: The Ultimate History, 40th Anniversary ed. (Koln: Taschen, 2020), 121. ↩

Developer-targeting campaign using malicious Next.js repositories


Microsoft Defender Experts identified a coordinated developer-targeting campaign delivered through malicious repositories disguised as legitimate Next.js projects and technical assessment materials. Telemetry collected during this investigation indicates the activity aligns with a broader cluster of threats that use job-themed lures to blend into routine developer workflows and increase the likelihood of code execution.

During initial incident analysis, Defender telemetry surfaced a limited set of malicious repositories directly involved in observed compromises. Further investigation expanded the scope by reviewing repository contents, naming conventions, and shared coding patterns. These artifacts were cross-referenced against publicly available code-hosting platforms. This process uncovered additional related repositories that were not directly referenced in observed logs but exhibited the same execution mechanisms, loader logic, and staging infrastructure.

Across these repositories, the campaign uses multiple entry points that converge on the same outcome: runtime retrieval and local execution of attacker-controlled JavaScript that transitions into staged command-and-control. An initial lightweight registration stage establishes host identity and can deliver bootstrap code before pivoting to a separate controller that provides persistent tasking and in-memory execution. This design supports operator-driven discovery, follow-on payload delivery, and staged data exfiltration.

Initial discovery and scope expansion

The investigation began with analysis of suspicious outbound connections to attacker-controlled command-and-control (C2) infrastructure. Defender telemetry showed Node.js processes repeatedly communicating with related C2 IP addresses, prompting deeper review of the associated execution chains.

By correlating network activity with process telemetry, analysts traced the Node.js execution back to malicious repositories that served as the initial delivery mechanism. This analysis identified a Bitbucket-hosted repository presented as a recruiting-themed technical assessment, along with a related repository using the Cryptan-Platform-MVP1 naming convention.

From these findings, analysts expanded the scope by pivoting on shared code structure, loader logic, and repository naming patterns. Multiple repositories followed repeatable naming conventions and project “family” patterns, enabling targeted searches for additional related repositories that were not directly referenced in observed telemetry but exhibited the same execution and staging behavior.

| Pivot signal | What we looked for | Why it mattered |
| --- | --- | --- |
| Repo family naming convention | Cryptan, JP-soccer, RoyalJapan, SettleMint | Helped identify additional repos likely created as part of the same seeding effort |
| Variant naming | v1, master, demo, platform, server | Helped find near-duplicate variants that increased execution likelihood |
| Structural reuse | Similar file placement and loader structure across repos | Confirmed newly found repos were functionally related, not just similarly named |

Figure 1. Repository naming patterns and shared structure used to pivot from initial telemetry to additional related repositories.

Multiple execution paths leading to a shared backdoor 

Analysis of the identified repositories revealed three recurring execution paths designed to trigger during normal developer activity. While each path is activated by a different action, all ultimately converge on the same behavior: runtime retrieval and in‑memory execution of attacker‑controlled JavaScript. 

Path 1: Visual Studio Code workspace execution

Several repositories abuse Visual Studio Code workspace automation to trigger execution as soon as a developer opens (and trusts) the project. When present, .vscode/tasks.json is configured with runOn: "folderOpen", causing a task to run immediately on folder open. In parallel, some variants include a dictionary-based fallback that contains obfuscated JavaScript processed during workspace initialization, providing redundancy if task execution is restricted. In both cases, the execution chain follows a fetch-and-execute pattern that retrieves a JavaScript loader from Vercel and executes it directly using Node.js.

``` 
node /Users/XXXXXX/.vscode/env-setup.js →  https://price-oracle-v2.vercel.app 
``` 

Figure 2. Telemetry showing a VS Code–adjacent Node script (.vscode/env-setup.js) initiating outbound access to a Vercel staging endpoint (price-oracle-v2.vercel[.]app). 

After execution, the script begins beaconing to attacker-controlled infrastructure. 
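
The folder-open trigger is easiest to see as a minimal .vscode/tasks.json sketch. The task label and script name below are illustrative stand-ins rather than artifacts from the campaign, but the runOn: "folderOpen" option is the documented VS Code mechanism these repositories rely on:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "env-setup",
      "type": "shell",
      "command": "node .vscode/env-setup.js",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

VS Code Workspace Trust is the relevant control here: in Restricted Mode, tasks do not run automatically, which is why granting trust to an unvetted repository is the pivotal user action in this path.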

Path 2: Build‑time execution during application development 

The second execution path is triggered when the developer manually runs the application, such as with npm run dev or by starting the server directly. In these variants, malicious logic is embedded in application assets that appear legitimate but are trojanized to act as loaders. Common examples include modified JavaScript libraries, such as jquery.min.js, which contain obfuscated code rather than standard library functionality. 

When the development server starts, the trojanized asset decodes a base64‑encoded URL and retrieves a JavaScript loader hosted on Vercel. The retrieved payload is then executed in memory by Node.js, resulting in the same backdoor behavior observed in other execution paths. This mechanism provides redundancy, ensuring execution even when editor‑based automation is not triggered. 

Telemetry shows development server execution immediately followed by outbound connections to Vercel staging infrastructure: 

``` 
node server/server.js  →  https://price-oracle-v2.vercel.app 
``` 

Figure 3. Telemetry showing node server/server.js reaching out to a Vercel-hosted staging endpoint (price-oracle-v2.vercel[.]app). 

The Vercel request consistently precedes persistent callbacks to attacker-controlled C2 servers over HTTP on port 3000.
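
The decode step at the heart of this path can be sketched in a few lines of benign Node.js. The base64 string below decodes to a placeholder (example.com), not a real indicator, and the function names are hypothetical; the point is only to show why the literal staging URL never appears in the trojanized asset:

```javascript
// Benign sketch of the loader pattern described above: the trojanized asset
// stores its staging URL base64-encoded and decodes it only at runtime,
// keeping the literal URL out of simple string searches of the repository.
// "aHR0cHM6Ly9leGFtcGxlLmNvbQ==" is a placeholder (example.com), not an IoC.

function decodeEndpoint(encoded) {
  // Node's Buffer performs the base64 decode the loaders rely on.
  return Buffer.from(encoded, "base64").toString("utf8");
}

function buildFetchAndExecute(encoded) {
  // In the real loaders, the decoded URL is fetched and the response body is
  // handed to the JavaScript engine; here we only return the decoded URL.
  const url = decodeEndpoint(encoded);
  return { url, wouldFetch: true };
}

const stage = buildFetchAndExecute("aHR0cHM6Ly9leGFtcGxlLmNvbQ==");
console.log(stage.url); // decoded placeholder endpoint
```

Because the URL only exists post-decode, static scans of the repository see an innocuous base64 blob; detection has to key on the runtime fetch instead.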

Path 3: Server startup execution via env exfiltration and dynamic RCE 

The third execution path activates when the developer starts the application backend. In these variants, malicious loader logic is embedded in backend modules or routes that execute during server initialization or module import (often at require-time). Repositories commonly include a .env value containing a base64‑encoded endpoint (for example, AUTH_API=<base64>), and a corresponding backend route file (such as server/routes/api/auth.js) that implements the loader. 

On startup, the loader decodes the endpoint, transmits the process environment (process.env) to the attacker-controlled server, and then executes JavaScript returned in the response using dynamic compilation (for example, new Function("require", response.data)(require)). This results in in‑memory remote code execution within the Node.js server process. 

``` 
Server start / module import 
→ decode AUTH_API (base64) 
→ POST process.env to attacker endpoint 
→ receive JavaScript source 
→ execute via new Function(...)(require) 
``` 

Figure 4. Backend server startup path where a module import decodes a base64 endpoint, exfiltrates environment variables, and executes server‑supplied JavaScript via dynamic compilation. 

This mechanism can expose sensitive configuration (cloud keys, database credentials, API tokens) and enables follow-on tasking even in environments where editor-based automation or dev-server asset execution is not triggered. 

Stage 1 C2 beacon and registration 

Regardless of the initial execution path (opening the project in Visual Studio Code, running the development server, or starting the application backend), all three mechanisms lead to the same Stage 1 payload. Stage 1 functions as a lightweight registrar and bootstrap channel.

After being retrieved from staging infrastructure, the script profiles the host and repeatedly polls a registration endpoint at a fixed cadence. The server response can supply a durable identifier, instanceId, that is reused across subsequent polls to correlate activity. Under specific responses, the client also executes server-provided JavaScript in memory using dynamic compilation, new Function(), enabling on-demand bootstrap without writing additional payloads to disk. 

Figure 5. Stage 1 registrar payload retrieved at runtime and executed by Node.js.
Figure 6. Initial Stage 1 registration with instanceId=0, followed by subsequent polling using a durable instanceId. 

Stage 2 C2 controller and tasking loader 

Stage 2 upgrades the initial foothold into a persistent, operator-controlled tasking client. Unlike Stage 1, Stage 2 communicates with a separate C2 IP and API set that is provided by the Stage 1 bootstrap. The payload commonly runs as an inline script executed via node -e, then remains active as a long-lived control loop. 

Figure 7. Stage 2 telemetry showing command polling and operational reporting to the C2 via /api/handleErrors and /api/reportErrors.

Stage 2 polls a tasking endpoint and receives a messages[] array of JavaScript tasks. The controller maintains session state across rounds, can rotate identifiers during tasking, and can honor a kill switch when instructed. 

Figure 8. Stage 2 polling loop illustrating the messages[] task format, identity updates, and kill-switch handling.

After receiving tasks, the controller executes them in memory using a separate Node interpreter, which helps reduce additional on-disk artifacts. 

Figure 9. Stage 2 executes tasks by piping server-supplied JavaScript into Node via STDIN. 

The controller maintains stability and session continuity, posts error telemetry to a reporting endpoint, and includes retry logic for resilience. It also tracks spawned processes and can stop managed activity and exit cleanly when instructed. 

Beyond on-demand code execution, Stage 2 supports operator-driven discovery and exfiltration. Observed operations include directory browsing through paired enumeration endpoints: 

Figure 10. Stage 2 directory browsing observed in telemetry using paired enumeration endpoints (/api/hsocketNext and /api/hsocketResult). 

A staged upload workflow (upload, uploadsecond, uploadend) is used to transfer collected files: 

Figure 11. Stage 2 staged upload workflow observed in telemetry using /upload, /uploadsecond, and /uploadend to transfer collected files. 

Summary

This developer‑targeting campaign shows how a recruiting‑themed “interview project” can quickly become a reliable path to remote code execution by blending into routine developer workflows such as opening a repository, running a development server, or starting a backend. The objective is to gain execution on developer systems that often contain high‑value assets such as source code, environment secrets, and access to build or cloud resources.

When untrusted assessment projects are run on corporate devices, the resulting compromise can expand beyond a single endpoint. The key takeaway is that defenders should treat developer workflows as a primary attack surface and prioritize visibility into unusual Node execution, unexpected outbound connections, and follow‑on discovery or upload behavior originating from development machines.

Cyber kill chain model 

Figure 12. Attack chain overview.

Mitigation and protection guidance  

What to do now if you’re affected  

  • If a developer endpoint is suspected of running this repository chain, the immediate priority is containment and scoping. Use endpoint telemetry to identify the initiating process tree, confirm repeated short-interval polling to suspicious endpoints, and pivot across the fleet to locate similar activity using Advanced Hunting tables such as DeviceNetworkEvents or DeviceProcessEvents.
  • Because post-execution behavior includes credential and session theft patterns, response should include identity risk triage and session remediation in addition to endpoint containment. Microsoft Entra ID Protection provides a structured approach to investigate risky sign-ins and risky users and to take remediation actions when compromise is suspected. 
  • If there is concern that stolen sessions or tokens could be used to access SaaS applications, apply controls that reduce data movement while the investigation proceeds. Microsoft Defender for Cloud Apps Conditional Access app control can monitor and control browser sessions in real time, and session policies can restrict high-risk actions to reduce exfiltration opportunities during containment. 

Defending against the threat or attack being discussed  

  • Harden developer workflow trust boundaries. Visual Studio Code Workspace Trust and Restricted Mode are designed to prevent automatic code execution in untrusted folders by disabling or limiting tasks, debugging, workspace settings, and extensions until the workspace is explicitly trusted. Organizations should use these controls as the default posture for repositories acquired from unknown sources and establish policy to review workspace automation files before trust is granted.  
  • Reduce build time and script execution attack surface on Windows endpoints. Attack surface reduction rules in Microsoft Defender for Endpoint can constrain risky behaviors frequently abused in this campaign class, such as running obfuscated scripts or launching suspicious scripts that download or run additional content. Microsoft provides deployment guidance and a phased approach for planning, testing in audit mode, and enforcing rules at scale.  
  • Strengthen prevention on Windows with cloud delivered protection and reputation controls. Microsoft Defender Antivirus cloud protection provides rapid identification of new and emerging threats using cloud-based intelligence and is recommended to remain enabled. Microsoft Defender SmartScreen provides reputation-based protection against malicious sites and unsafe downloads and can help reduce exposure to attacker infrastructure and socially engineered downloads.  
  • Protect identity and reduce the impact of token theft. Since developer systems often hold access to cloud resources, enforce strong authentication and conditional access, monitor for risky sign ins, and operationalize investigation playbooks when risk is detected. Microsoft Entra ID Protection provides guidance for investigating risky users and sign ins and integrating results into SIEM workflows.  
  • Control SaaS access and data exfiltration paths. Microsoft Defender for Cloud Apps Conditional Access app control supports access and session policies that can monitor sessions and restrict risky actions in real time, which is valuable when an attacker attempts to use stolen tokens or browser sessions to access cloud apps and move data. These controls can complement endpoint controls by reducing exfiltration opportunities at the cloud application layer. 
  • Centralize monitoring and hunting in Microsoft Sentinel. For organizations using Microsoft Sentinel, hunting queries and analytics rules can be built around the observable behaviors described in this blog, including Node.js initiating repeated outbound connections, HTTP based polling to attacker endpoints, and staged upload patterns. Microsoft provides guidance for creating and publishing hunting queries in Sentinel, which can then be operationalized into detections.  
  • Operational best practices for long term resilience. Maintain strict credential hygiene by minimizing secrets stored on developer endpoints, prefer short lived tokens, and separate production credentials from development workstations. Apply least privilege to developer accounts and build identities, and segment build infrastructure where feasible. Combine these practices with the controls above to reduce the likelihood that a single malicious repository can become a pathway into source code, secrets, or deployment systems. 

Microsoft Defender XDR detections   

Microsoft Defender XDR customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.  

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.  

| Tactic | Observed activity | Microsoft Defender coverage |
| --- | --- | --- |
| Initial access | Developer receives a recruiting-themed “assessment” repo and interacts with it as a normal project; activity blends into routine developer workflows | Microsoft Defender for Cloud Apps: anomaly detection alerts and investigation guidance for suspicious activity patterns |
| Execution | VS Code workspace automation triggers execution on folder open (for example, .vscode/tasks.json behavior); dev server run triggers a trojanized asset to retrieve a remote loader; backend startup/module import triggers environment access plus dynamic execution patterns; obfuscated or dynamically constructed script execution (base64 decode and runtime execution patterns) | Microsoft Defender for Endpoint: behavioral blocking and containment alerts based on suspicious behaviors and process trees (designed for fileless and living-off-the-land activity); attack surface reduction rule alerts, including “Block execution of potentially obfuscated scripts” |
| Command and control (C2) | Stage 1 registration beacons with host profiling and durable identifier reuse; Stage 2 session-based tasking and reporting | Microsoft Defender for Endpoint: IP/URL/domain indicators (IoCs) for detection and optional blocking of known malicious infrastructure |
| Discovery and collection | Operator-driven directory browsing and host profiling behaviors consistent with interactive recon | Microsoft Defender for Endpoint: behavioral blocking and containment investigation/alerting based on suspicious behaviors correlated across the device timeline |
| Collection | Targeted access to developer-relevant artifacts such as environment files and documents; follow-on selection of files for collection based on operator tasking | Microsoft Defender for Endpoint: sensitivity labels and investigation workflows to prioritize incidents involving sensitive data on devices |
| Exfiltration | Multi-step upload workflow consistent with staged transfers and explicit file targeting | Microsoft Defender for Cloud Apps: data protection and file policies to monitor and apply governance actions for data movement in supported cloud services |

Microsoft Defender XDR threat analytics  

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.  

Hunting queries   

Node.js fetching remote JavaScript from untrusted PaaS domains (C2 stage 1/2) 

```
DeviceNetworkEvents
| where InitiatingProcessFileName in~ ("node","node.exe")
| where RemoteUrl has_any ("vercel.app", "api-web3-auth", "oracle-v1-beta")
| project Timestamp, DeviceName, InitiatingProcessFileName, InitiatingProcessCommandLine, RemoteUrl
```

Detection of next.config.js dynamic loader behavior (readFile → eval) 

```
DeviceProcessEvents
| where FileName in~ ("node","node.exe")
| where ProcessCommandLine has_any ("next dev","next build")
| where ProcessCommandLine has_any ("eval", "new Function", "readFile")
| project Timestamp, DeviceName, ProcessCommandLine, InitiatingProcessCommandLine
```

Repeated short-interval beaconing to attacker C2 (/api/errorMessage, /api/handleErrors) 

```
DeviceNetworkEvents
| where InitiatingProcessFileName in~ ("node","node.exe")
| where RemoteUrl has_any ("/api/errorMessage", "/api/handleErrors")
| summarize BeaconCount = count(), FirstSeen=min(Timestamp), LastSeen=max(Timestamp)
          by DeviceName, InitiatingProcessCommandLine, RemoteUrl
| where BeaconCount > 10
```

Detection of detached child Node interpreters (node - spawned from a parent Node process) 

```
DeviceProcessEvents
| where InitiatingProcessFileName in~ ("node","node.exe")
| where ProcessCommandLine endswith "-"
| project Timestamp, DeviceName, InitiatingProcessCommandLine, ProcessCommandLine
```

Directory enumeration and exfil behavior

```
DeviceNetworkEvents
| where RemoteUrl has_any ("/hsocketNext", "/hsocketResult", "/upload", "/uploadsecond", "/uploadend")
| project Timestamp, DeviceName, RemoteUrl, InitiatingProcessCommandLine
```

Suspicious access to sensitive files on developer machines 

```
DeviceFileEvents
| where Timestamp > ago(14d)
| where FileName has_any (".env", ".env.local", "Cookies", "Login Data", "History")
| where InitiatingProcessFileName in~ ("node","node.exe","Code.exe","chrome.exe")
| project Timestamp, DeviceName, FileName, FolderPath, InitiatingProcessCommandLine
```

Indicators of compromise  

Indicator  Type  Description  
api-web3-auth[.]vercel[.]app 
• oracle-v1-beta[.]vercel[.]app 
• monobyte-code[.]vercel[.]app 
• ip-checking-notification-kgm[.]vercel[.]app 
• vscodesettingtask[.]vercel[.]app 
• price-oracle-v2[.]vercel[.]app 
• coredeal2[.]vercel[.]app 
• ip-check-notification-03[.]vercel[.]app 
• ip-check-wh[.]vercel[.]app 
• ip-check-notification-rkb[.]vercel[.]app 
• ip-check-notification-firebase[.]vercel[.]app 
• ip-checking-notification-firebase111[.]vercel[.]app 
• ip-check-notification-firebase03[.]vercel[.]app  
Domain  Vercel-hosted delivery and staging domains referenced across examined repositories for loader delivery, VS Code task staging, build-time loaders, and backend environment exfiltration endpoints.  
 • 87[.]236[.]177[.]9 
• 147[.]124[.]202[.]208 
• 163[.]245[.]194[.]216 
• 66[.]235[.]168[.]136  
IP addresses  Command-and-control infrastructure observed across Stage 1 registration, Stage 2 tasking, discovery, and staged exfiltration activity.  
• hxxp[://]api-web3-auth[.]vercel[.]app/api/auth 
• hxxps[://]oracle-v1-beta[.]vercel[.]app/api/getMoralisData 
• hxxps[://]coredeal2[.]vercel[.]app/api/auth 
• hxxps[://]ip-check-notification-03[.]vercel[.]app/api 
• hxxps[://]ip-check-wh[.]vercel[.]app/api 
• hxxps[://]ip-check-notification-rkb[.]vercel[.]app/api 
• hxxps[://]ip-check-notification-firebase[.]vercel[.]app/api 
• hxxps[://]ip-checking-notification-firebase111[.]vercel[.]app/api 
• hxxps[://]ip-check-notification-firebase03[.]vercel[.]app/api 
• hxxps[://]vscodesettingtask[.]vercel[.]app/api/settings/XXXXX 
• hxxps[://]price-oracle-v2[.]vercel[.]app 
 
• hxxp[://]87[.]236[.]177[.]9:3000/api/errorMessage 
• hxxp[://]87[.]236[.]177[.]9:3000/api/handleErrors 
• hxxp[://]87[.]236[.]177[.]9:3000/api/reportErrors 
• hxxp[://]147[.]124[.]202[.]208:3000/api/reportErrors 
• hxxp[://]87[.]236[.]177[.]9:3000/api/hsocketNext 
• hxxp[://]87[.]236[.]177[.]9:3000/api/hsocketResult 
• hxxp[://]87[.]236[.]177[.]9:3000/upload 
• hxxp[://]87[.]236[.]177[.]9:3000/uploadsecond 
• hxxp[://]87[.]236[.]177[.]9:3000/uploadend 
• hxxps[://]api[.]ipify[.]org/?format=json  
URL Consolidated URLs across delivery/staging, registration and tasking, reporting, discovery, and staged uploads. Includes the public IP lookup used during host profiling. 
• next[.]config[.]js 
• tasks[.]json 
• jquery[.]min[.]js 
• auth[.]js 
• collection[.]js 
Filename  Repository artifacts used as execution entry points and loader components across IDE, build-time, and backend execution paths.  
• .vscode/tasks[.]json 
• scripts/jquery[.]min[.]js 
• public/assets/js/jquery[.]min[.]js 
• frontend/next[.]config[.]js 
• server/routes/api/auth[.]js 
• server/controllers/collection[.]js 
• .env  
Filepath  On-disk locations observed across examined repositories where malicious loaders, execution triggers, and environment exfiltration logic reside.  
• ddd43e493cb333c1cc5d7cd50a6a5a61ecd89cfa5f4076f62c2adf96748b87f8 
• 449e2bf57ab4790427a3a7de3d98b6c540e76190a3d844de2f0e7b66be842b19 
• 07ad8525844ce61471e08e8c515b76bf063bac482394152bad814026cd577f69 
• e4d71aa95be0725c351e9d1d273d35ccdb0a8bdb31a57927c8738431b89788f5 
• 13152dcb3be425e1ce0f085cd733121a4665cf9935cf8867738e3d510a80308a 
• 6d59740d0710da370d5c38ddf88d6912487a1799e4ad09b72d764a3d27ed16b3  
Hash (SHA-256)  File hashes observed within the analyzed repository set and related activity.  
• 9ab4045654a6d97762f9ae8bb97d4ecf67fa53ab  Hash (SHA-1)  File hash observed within the analyzed activity set. 

References    

This research is provided by Microsoft Defender Security Research with contributions from Colin Milligan.

Learn more   

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

Explore how to build and customize agents with Copilot Studio Agent Builder 

Microsoft 365 Copilot AI security documentation 

How Microsoft discovers and mitigates evolving attacks against AI guardrails 

Learn more about securing Copilot Studio agents with Microsoft Defender  

Learn more about Protect your agents in real-time during runtime (Preview) – Microsoft Defender for Cloud Apps | Microsoft Learn   

The post Developer-targeting campaign using malicious Next.js repositories appeared first on Microsoft Security Blog.


Microsoft Partners: Accelerate Your AI Journey at AgentCon 2026 (Free Community Event)


Recently, a customer asked me a question many Microsoft partners are hearing right now:

“We have Copilot — how do we actually use AI to change the way we work?”

That question captures where we are in the AI journey today. Organizations have moved past curiosity. Now they’re looking for trusted partners who can turn AI into real business outcomes.

That’s why events like AgentCon 2026 matter.

A free, community-led event built by practitioners

AgentCon is not a traditional conference. It’s a free, community-driven global event organized by the Global AI Community together with Microsoft partners and ecosystem leaders.

Simply put: it’s for the community, by the community.

Across cities worldwide, developers, consultants, architects, and Microsoft partners come together to share practical experiences building with AI agents, Copilot, and the Microsoft platform.

The focus isn’t theory — it’s implementation:

  • What worked
  • What didn’t
  • What partners can apply immediately with customers

This peer learning model reflects how many of us actually grow in the Microsoft ecosystem: by learning from other partners solving real problems.

Why this matters for Microsoft partners

The opportunity for partners is evolving quickly.

Customers aren’t just asking about AI tools — they’re asking how to redesign processes, automate work, and unlock productivity using AI-powered solutions.

The Microsoft AI Cloud Partner Program emphasizes partner skilling and helping customers realize value from AI investments. Community events like AgentCon accelerate that learning by bringing partners together to exchange proven approaches and practical insights.

When partners upskill faster, customers succeed faster.

Why attend

AgentCon is designed to help partners move from AI awareness to AI delivery.

As an attendee, you can expect:

  • Practical sessions and demos from practitioners
  • Real-world AI and agent scenarios
  • Direct conversations with builders and peers
  • New collaboration and co-sell opportunities

You’ll leave with ideas and approaches you can bring directly into customer engagements.

Why speak

AgentCon thrives because partners share openly with one another.

If you’ve implemented Copilot, explored AI agents, or learned lessons from customer deployments, your experience can help others accelerate their journey.

Speaking at AgentCon allows you to:

  • Share your expertise with the global partner community
  • Build credibility within the Microsoft ecosystem
  • Create new partnerships and opportunities
  • Contribute to collective partner success

You don’t need a perfect story — just an honest one others can learn from.

Join the global AgentCon community

AgentCon 2026 events take place around the world, including these upcoming events:

Each event is locally organized, community-led, and free to attend.

Help shape the next phase of AI adoption

AI transformation is happening now — and Microsoft partners play a critical role in guiding customers forward.

AgentCon is an opportunity to learn together, share experiences, and strengthen the partner ecosystem driving AI innovation.

👉 Register or apply to speak: https://aka.ms/agentcon2026

We hope you’ll join us — and be part of the community helping customers turn AI potential into real impact.
