Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Opus 4.6 and Codex 5.3

1 Share

Two major new model releases today, within about 15 minutes of each other.

Anthropic released Opus 4.6. Here's its pelican:

Slightly wonky bicycle frame but an excellent pelican, very clear beak and pouch, nice feathers.

OpenAI released GPT-5.3-Codex, albeit only via their Codex app, not yet in their API. Here's its pelican:

Not nearly as good - the bicycle is a bit mangled, the pelican not nearly as well rendered - it's more of a line drawing.

I've had a bit of preview access to both of these models and to be honest I'm finding it hard to find a good angle to write about them - they're both really good, but so were their predecessors Codex 5.2 and Opus 4.5. I've been having trouble finding tasks that those previous models couldn't handle but the new ones are able to ace.

The most convincing story about capabilities of the new model so far is Nicholas Carlini from Anthropic talking about Opus 4.6 and Building a C compiler with a team of parallel Claudes - Anthropic's version of Cursor's FastRender project.

Tags: llm-release, anthropic, generative-ai, openai, pelican-riding-a-bicycle, ai, llms, parallel-agents, c, nicholas-carlini

Read the whole story
alvinashcraft
31 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Here’s what Xbox is working on for 2026


Microsoft has a big year ahead for Xbox as it marks its 25-year milestone. After the tough decision to release more Xbox games on rival consoles two years ago, 2026 is a chance to refocus on the platform and celebrate some of Xbox's biggest franchises. It's also an opportunity for Microsoft to define its vision for the future of Xbox, after months of confusion from fans and plummeting Xbox hardware sales.

Xbox kicked off 2026 with its annual Developer Direct last month, a preview of some of the games it's publishing this year. Microsoft is lining up its "four horsemen" for 2026: Forza, Halo, Fable, and Gears of War. These franchises have be …

Read the full story at The Verge.


The security implementation gap: Why Microsoft is supporting Operation Winter SHIELD


Every conversation I have with information security leaders tends to land in the same place. People understand what matters. They know the frameworks, the controls, and the guidance. They can explain why identity security, patching, and access control are critical. And yet incidents keep happening for the same reasons.

Successful cyberattacks rarely depend on something novel. They succeed when basic controls are missing or inconsistently applied. Stolen credentials still work. Legacy authentication is still enabled. End-of-life systems remain connected and operational, though of course not well patched.

This is not a knowledge problem. It is an execution and follow-through problem. We know what we’re supposed to do, but we need to get on with doing it. The gap between knowing what matters and enforcing it completely is where most real-world incidents occur.

If the basics were that easy to implement, everyone would have them in place already.

That gap is where cyberattackers operate most effectively, and it is the gap that Operation Winter SHIELD is designed to address as a collaborative effort across the public and private sectors.

Why Operation Winter SHIELD matters

Operation Winter SHIELD is a nine-week cybersecurity initiative led by the FBI Cyber Division beginning February 2, 2026. The focus is not awareness or education for its own sake. The focus is on implementation. Specifically, how organizations operationalize the real security guidance that reduces risk in real environments.

This effort reflects a necessary shift in how we approach security at scale. Most organizations do not fail because they chose the wrong security product or the wrong framework. They fail because controls that look straightforward on paper are difficult to deploy consistently across complex, expanding environments.

Microsoft is providing implementation resources to help organizations focus on what actually changes outcomes. To do this, we’re sharing guidance on controls like Baseline Security Mode that hold up under real-world pressure from real-world threat actors.

What the FBI Cyber Division sees in real incidents

The FBI Cyber Division brings a perspective that is grounded in investigations. Their teams respond to incidents, support victim organizations through recovery, and build cases against the cybercriminal networks we defend against every day. This investigative perspective reveals which missing controls turn manageable events into prolonged crises.

That perspective aligns with what we see through Microsoft Threat Intelligence and Microsoft Incident Response. The patterns repeat across industries, geographies, and organization sizes.

Nation-state threat actors exploit end-of-life infrastructure that no longer receives security updates. Ransomware operations move laterally using overprivileged accounts and weak authentication. Criminal groups capitalize on misconfigurations that were understood but never fully addressed.

These are not edge cases. They are repeatable failures that cyberattackers rely on because they continue to work.

When incidents arise, it is rarely because defenders lacked guidance. It is because controls were incomplete, inconsistently enforced, or bypassed through legacy paths that remained open.

The reality of the execution challenge

Defenders are not indifferent to these risks. They are certainly not unaware. They operate in environments defined by complexity, competing priorities, and limited resources. Controls that seem simple in isolation become difficult when they must be deployed across identities, devices, applications, and cloud services that were not designed at the same time.

In parallel, the cyberthreat landscape has matured. Initial access brokers sell credentials at scale. Ransomware operations function like businesses. Attack chains move quickly and often complete before the defenders can meaningfully intervene.

Detection windows shrink. Dwell time is no longer an actionable metric. The margin for error is smaller than it has ever been before.

Operation Winter SHIELD exists to narrow that margin by focusing attention on high impact control areas and showing how they can help defenders succeed when they are enforced.

Each week, we’ll focus on a high-impact control area informed by investigative insights drawn from active cases and long-term trends. This is not about introducing yet another security framework or hammering on the basics yet again. It is about reinforcing what already works and confronting, honestly, why it is so often not fully implemented.

Moving from guidance to guardrails

Microsoft’s role in Operation Winter SHIELD is to help organizations move from insight to action. That means providing practical guidance, technical resources, and examples of how built-in platform capabilities can reduce the operational friction that slows deployment.

A central theme throughout the initiative is secure by default and by design. The fastest way to close implementation gaps is to reduce the number of decisions defenders must make under pressure. Controls that are enforced by default remove reliance on error-prone configurations and constant human vigilance.

Baseline Security Mode reflects this approach in practice. It enforces protections that harden identity and access across the environment. It blocks legacy authentication paths. It requires phishing-resistant multifactor authentication for administrators. It surfaces legacy systems that are no longer supported. And it enforces least-privilege access patterns. These protections apply immediately when enabled and are informed by threat intelligence from Microsoft’s global visibility and lessons learned from thousands of incident response engagements.

The same guardrail model applies to the software supply chain. Build and deployment systems are frequent intrusion points because they are implicitly trusted and rarely governed with the same rigor as production environments. Enforcing identity isolation, signed artifacts, and least-privilege access for build pipelines reduces the risk that a single compromised developer account or token becomes a pathway into production.
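As a concrete illustration of this guardrail idea, a build pipeline can declare an explicit, minimal permission set rather than inheriting broad defaults. The sketch below uses GitHub Actions syntax; the workflow itself is a hypothetical minimal example, not a configuration prescribed by the initiative:

```yaml
# Least-privilege sketch for a build pipeline (GitHub Actions syntax).
# The workflow token gets only what the build needs; unlisted scopes
# default to no access, so a compromised step cannot write back.
name: build
on: [push]
permissions:
  contents: read   # fetch source only; the build job cannot push or tag
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build
```

The point is the default posture: the pipeline must opt in to each capability, which makes risk decisions deliberate and traceable in version control.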

These risks are not limited to technical pipelines alone. They are compounded when ownership, accountability, and enforcement mechanisms are unclear or inconsistently applied across the organization.

Governance controls only matter when they translate into enforceable technical outcomes. Requiring centralized ownership of security configuration, explicit exception handling, and continuous validation ensures that risk decisions are deliberate and traceable.

The objective is straightforward. Reduce the distance between guidance and guardrails. We must look to turn recommendations into protections that are consistently applied and continuously maintained.

What you can expect from Operation Winter SHIELD

Starting the week of February 2, 2026, you can expect focused guidance on the controls that have the greatest impact on reducing exposure to cybercrime. The initiative is not about creating new requirements. It is about improving execution of what already works.

Security maturity is not measured by what exists in policy documents or architecture diagrams. It is measured by what is enforced in production. It is measured by whether controls hold under real world conditions and whether they remain effective as environments change.

The cybercrime problem does not improve through awareness. It improves through execution, shared responsibility, and continued focus on closing the gaps threat actors exploit most reliably. You can expect to hear this guidance materialize on the FBI Cyber Division’s podcast, Ahead of the Threat, and a future episode of the Microsoft Threat Intelligence Podcast.

Building real resilience

Operation Winter SHIELD represents a focused effort to help organizations strengthen operational resilience. Microsoft’s contribution reflects a long-standing commitment to making security controls easier to deploy and more resilient over time.

Over the coming weeks and extending beyond this initiative, we will continue to share practical content designed to support organizations at every stage of their security maturity. Security is a process, not a product. The goal is not perfection; the goal is progress that threat actors feel. We will impose cost.

The gap between knowing what matters and doing it consistently is where threat actors have learned to operate. Closing that gap requires coordination, shared learning, and a willingness to prioritize enforcement over intention.

Operation Winter SHIELD offers an opportunity to drive systematic improvement to one control area at a time. Investigative experience explains why each control matters. Secure defaults and automation provide the path to implementation.

This work extends beyond any single awareness effort. The tactics threat actors use change quickly. The controls that reduce risk largely remain stable. What determines outcomes is how quickly and reliably those controls are put in place.

That is the work ahead. Moving from abstract ideas to real world security. Join me in going from knowing to doing.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post The security implementation gap: Why Microsoft is supporting Operation Winter SHIELD appeared first on Microsoft Security Blog.


Continuous AI in practice: What developers can automate today with agentic CI


Software engineering has always included work that’s repetitive, necessary, and historically difficult to automate. This isn’t because it lacks value, but because it resists deterministic rules. 

Continuous integration (CI) solved part of this by handling tests, builds, formatting, and static analysis—anything that can be described with deterministic rules. CI excels when correctness can be expressed unambiguously: a test passes or fails, a build succeeds or doesn’t, a rule is violated or isn’t. 

But CI is intentionally limited to problems that can be reduced to heuristics and rules. 

For most teams, the hardest work isn’t writing code. It’s everything that requires judgment around that code: reviewing changes, keeping documentation accurate, managing dependencies, tracking regressions, maintaining tests, monitoring quality, and responding to issues that only surface after code ships. 

A lot of engineering effort goes into work that requires interpretation, synthesis, and context rather than deterministic validation. And an increasing share of engineering tasks fall into a category CI was never designed to handle: work that depends on understanding intent. 

“Any task that requires judgment goes beyond heuristics,” says Idan Gazit, head of GitHub Next, which works on research and development initiatives.

Any time something can’t be expressed as a rule or a flow chart is a place where AI becomes incredibly helpful.

Idan Gazit, head of GitHub Next

This is why GitHub Next has been exploring a new pattern: Continuous AI, or background agents that operate in your repository the way CI jobs do, but only for tasks that require reasoning instead of rules.

Why CI isn’t enough anymore

CI isn’t failing. It’s doing exactly what it was designed to do. 

CI is designed for binary outcomes. Tests pass or fail. Builds succeed or don’t. Linters flag well-defined violations. That works well for rule-based automation.

But many of the hardest and most time-consuming parts of engineering are judgment-heavy and context-dependent. 

Consider these scenarios: 

  • A docstring says one thing, but the implementation says another.
  • Text passes accessibility linting but is still confusing to users.
  • A dependency adds a new flag, altering behavior without a major version bump.
  • A regex is compiled inside a loop, tanking performance in subtle ways.
  • UI behavior changes are only visible when interacting with the product.

These problems are about whether intent still holds. 

“The first era of AI for code was about code generation,” Idan explains. “The second era involves cognition and tackling the cognitively heavy chores off of developers.”

This is the gap Continuous AI fills: not more automation, but a different class of automation. CI handles deterministic work. Continuous AI applies where correctness depends on reasoning, interpretation, and intent. 

What Continuous AI actually means

Continuous AI is not a new product or CI replacement. Traditional CI remains essential. 

Continuous AI is a pattern:

Continuous AI = natural-language rules + agentic reasoning, executed continuously inside your repository.

In practice, Continuous AI means expressing in plain language what should be true about your code, especially when that expectation cannot be reduced to rules or heuristics. An agent then evaluates the repository and produces artifacts a developer can review: suggested patches, issues, discussions, or insights.

Developers rarely author agentic workflows in a single pass. In practice, they collaborate with an agent to refine intent, add constraints, and define acceptable outputs. The workflow emerges through iteration, not a single sentence. 

For example: 

  • “Check whether documented behavior matches implementation, explain any mismatches, and propose a concrete fix.”
  • “Generate a weekly report summarizing project activity, emerging bug trends, and areas of increased churn.”
  • “Flag performance regressions in critical paths.”
  • “Detect semantic regressions in user flows.”

These workflows are not defined by brevity. They combine intent, constraints, and permitted outputs to express expectations that would be awkward or impossible to encode as deterministic rules. 

“In the future, it’s not about agents running in your repositories,” Idan says. “It’s about being able to presume you can cheaply define agents for anything you want off your plate permanently.”

Think about what your work looks like when you can delegate more of it to AI, and what parts of your work you want to retain: your judgment, your taste.

Idan Gazit, head of GitHub Next

Guardrails by design: Permissions and Safe Outputs

In our work, we define agentic workflows with safety as a first principle. By default, agents operate with read-only access to repositories. They cannot create issues, open pull requests, or modify content unless explicitly permitted. 

We call this Safe Outputs, which provides a deterministic contract for what an agent is allowed to do. When defining a workflow, developers specify exactly which artifacts an agent may produce, such as opening a pull request or filing an issue, and under what constraints. 

Anything outside those boundaries is forbidden. 

This model assumes agents can fail or behave unexpectedly. Outputs are sanitized, permissions are explicit, and all activity is logged and auditable. The blast radius is deterministic. 

This isn’t “AI taking over software development.” It’s AI operating within guardrails developers explicitly define. 

Why natural language complements YAML

As we’ve developed this, we’ve heard a common question: why not just extend CI with more rules? 

When a problem can be expressed deterministically, extending CI is exactly the right approach. YAML, schemas, and heuristics remain the correct tools for those jobs. 

But many expectations cannot be reduced to rules without losing meaning. 

Idan puts it simply: “There’s a larger class of chores and tasks we can’t express in heuristics.”

A rule like “whenever documentation and code diverge, identify and fix it” cannot be expressed in a regex or schema. It requires understanding semantics and intent. A natural-language instruction can express that expectation clearly enough for an agent to reason over it. 

Natural language doesn’t replace YAML, but instead complements it. CI remains the foundation. Continuous AI expands automation into work CI was never designed to cover. 

Developers stay in the loop, by design

Agentic workflows don’t make autonomous commits. Instead, they can create the same kinds of artifacts developers would (pull requests, issues, comments, or discussions) depending on what the workflow is permitted to do.

Pull requests remain the most common outputs because they align with how developers already review and reason about change. 

“The PR is the existing noun where developers expect to review work,” Idan says. “It’s the checkpoint everyone rallies around.”

That means:

  • Agents don’t merge code
  • Developers retain full control
  • Everything is visible and reviewable

Developer judgment remains the final authority. Continuous AI helps scale that judgment across a codebase. 

How GitHub Next is experimenting with these ideas

The GitHub Next prototype (you can find the repository at gh aw) uses a deliberately simple pattern:

  1. Write an agentic workflow
  2. Compile it into a GitHub Action
  3. Push it
  4. Let an agent run on any GitHub Actions trigger (pull requests, pushes, issues, comments, or schedules) 

Nothing is hidden; everything is transparent and visible.

“You want an action to look for style violations like misplaced brackets, that’s heuristics,” Idan explains. “But when you want deeper intent checks, you need AI.” 

What Continuous AI can automate today

These aren’t theoretical examples. GitHub Next has tested these patterns in real repositories.

1. Fix mismatches between documentation and behavior

This is one of the hardest problems for CI because it requires understanding intent.

An agentic workflow can:

  • Read a function’s docstring
  • Compare it to the implementation
  • Detect mismatches
  • Suggest updates to either the code or the docs
  • Open a pull request
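
A sketch of how such a check might be expressed in the agentic-workflow format shown later in this post. Note that the `create-pull-request` safe output and its fields are assumptions by analogy with the post's `create-issue` example, not confirmed gh aw syntax:

```
---
on: daily
permissions: read
safe-outputs:
  create-pull-request:
    title-prefix: "[doc-drift] "
---
For each public function changed since the last run, compare its docstring to
its implementation. Where they disagree, decide which one reflects the intended
behavior, update the other to match, and open a pull request explaining each
mismatch.
```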

Idan calls this one of the most meaningful categories of work Continuous AI can address: “You don’t want to worry every time you ship code if the documentation is still right. That wasn’t possible to automate before AI.”

2. Generate ongoing project reports with reasoning

Maintainers and managers spend significant time answering the same questions repeatedly: What changed yesterday? Are bugs trending up or down? Which parts of the codebase are most active? 

Agentic workflows can generate recurring reports that pull from multiple data sources (issues, pull requests, commits, and CI results), and apply reasoning on top. 

For example, an agent can: 

  • Summarize daily or weekly activity 
  • Highlight emerging bug trends
  • Correlate recent changes with test failures
  • Surface areas of increased churn

The value isn’t the report itself. It’s the synthesis across multiple data sources that would otherwise require manual analysis. 

3. Keep translations up to date automatically

Anyone who has worked with localized applications knows the pattern: Content changes in English, translations fall behind, and teams batch work late in the cycle (often right before a release).

An agent can:

  • Detect when English text changes
  • Re-generate translations for all languages
  • Open a single pull request containing the updates
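
In the same spirit, a translation workflow might be sketched like this (again treating the trigger and `create-pull-request` safe output as illustrative assumptions, and `content/en/` as a hypothetical repository layout):

```
---
on: push
permissions: read
safe-outputs:
  create-pull-request:
    title-prefix: "[i18n] "
---
When files under content/en/ change, regenerate the corresponding files for
each supported locale and open a single pull request with all updated
translations, flagging any strings that need human review.
```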

The workflow becomes continuous, not episodic. Machine translations might not be perfect out of the box, but having a draft translation ready for review in a pull request makes it that much easier to engage help from professional translators or community contributors.

4. Detect dependency drift and undocumented changes

Dependencies often change behavior without changing major versions. New flags appear. Defaults shift. Help output evolves.

In one demo, an agent:

  • Installed dependencies
  • Inspected CLI help text
  • Diffed it against previous days
  • Found an undocumented flag
  • Filed an issue before maintainers even noticed

This requires semantic interpretation, not just diffs, which is why classical CI cannot handle it. 

“This is the first harbinger of the new phase of AI,” Idan says. “We’re moving from generation to reasoning.”

5. Automated test-coverage burn down

In one experiment:

  • Test coverage went from ~5% to near 100%
  • 1,400+ tests were written
  • Across 45 days
  • For ~$80 worth of tokens

And because the agent produced small pull requests daily, developers reviewed changes incrementally.

6. Background performance improvements

Linters and analyzers don’t always catch performance pitfalls that depend on understanding the code’s intent.

Example: compiling a regex inside a function call so it compiles on every invocation.

An agent can:

  • Recognize the inefficiency
  • Rewrite the code to pre-compile the regex
  • Open a pull request with an explanation

Small things add up, especially in frequently called code paths.
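
The regex pattern above can be made concrete in Python. One nuance worth hedging: Python's `re` module caches compiled patterns internally, so the gain there is mostly avoiding repeated cache lookups; in languages without such a cache, the inline version pays full recompilation on every call. The function names here are illustrative:

```python
import re

def count_words_slow(lines):
    # Anti-pattern: the pattern literal is passed to re.findall inside the
    # loop, forcing a cache lookup (or a full compile, in languages without
    # a pattern cache) on every iteration.
    total = 0
    for line in lines:
        total += len(re.findall(r"\w+", line))
    return total

# Fix: compile once, up front, and reuse the compiled object.
WORD_RE = re.compile(r"\w+")

def count_words_fast(lines):
    total = 0
    for line in lines:
        total += len(WORD_RE.findall(line))
    return total
```

An agent proposing this change would open a pull request moving the compile out of the hot path, with an explanation, exactly as described above.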

7. Automated interaction testing (using agents as deterministic play-testers)

This was one of the more creative demos from Universe: using agents to play a simple platformer game thousands of times to detect UX regressions.

Strip away the game, and the pattern is widely useful:

  • Onboarding flows
  • Multi-step forms
  • Retry loops
  • Input validation
  • Accessibility patterns under interaction

Agents can simulate user behavior at scale and compare variants.

How to build your first agentic workflow

Developers don’t need a new CI system or separate infrastructure to try this. The GitHub Next prototype (gh aw) uses a simple pattern:

1. Write a natural-language rule in a Markdown file

For example:

---
on: daily
permissions: read
safe-outputs:
  create-issue:
    title-prefix: "[news] "
---
Analyze the recent activity in the repository and:
- create an upbeat daily status report about the activity
- provide an agentic task description to improve the project based on the activity.
Create an issue with the report.

2. Compile it into an action

gh aw compile daily-team-status

This generates a GitHub Actions workflow.

3. Review the YAML

Nothing is hidden. You can see exactly what the agent will do.

4. Push to your repository

The agentic workflow begins executing in response to repository events or on a schedule you define, just like any other action.

5. Review the issue it creates

Patterns to watch next

While still early, several trends are already emerging in developer workflows:

Pattern 1: Natural-language rules will become a part of automation

Developers will write short English rules that express intent:

  • “Keep translations current”
  • “Flag performance regressions”
  • “Warn on auth patterns that look unsafe”

Pattern 2: Repositories will begin hosting a fleet of small agents

Not one general agent, but many small ones, each responsible for one chore, one check, or one rule of thumb.

Pattern 3: Tests, docs, localization, and cleanup will shift into “continuous” mode

This mirrors the early CI movement: Not replacing developers, but changing when chores happen from “when someone remembers” to “every day.”

Pattern 4: Debuggability will win over complexity

Developers will adopt agentic patterns that are transparent, auditable, and diff-based—not opaque systems that act without visibility.

What developers should take away

“Custom agents for offline tasks, that’s what Continuous AI is,” Idan says. “Anything you couldn’t outsource before, you now can.”

More precisely: many judgment-heavy chores that were previously manual can now be made continuous.

This requires a mental shift, like moving from owning files to streaming music.

“You already had all the music,” Idan says. “But suddenly the player is helping you discover more.”

Start with one small workflow

Continuous AI is not an all-or-nothing paradigm. You don’t need to overhaul your pipeline. Start with something small:

  • Translate strings
  • Add missing tests
  • Check for docstring drift
  • Detect dependency changes
  • Flag subtle performance issues

Each of these is something agents can meaningfully assist with today.

Identify the recurring judgment-heavy tasks that quietly drain attention, and make those tasks continuous instead of episodic.

If CI automated rule-based work over the past decade, Continuous AI may do the same for select categories of judgment-based work, when applied deliberately and safely.

Explore Continuous AI Actions and frameworks >

The post Continuous AI in practice: What developers can automate today with agentic CI appeared first on The GitHub Blog.


Forget the Model, It’s Workflows That Make LLM Products Run


From his experience leading AI product teams, Andrew Mende (Senior Product Manager, Machine Learning at Booking.com) explained what it truly takes to ship LLM-based products in production.

Making AI products reliable requires new workflows

For Mende, the buzz around AI is a rare shift, like the rise of smartphones. But what does it mean for product teams?

This moment unlocks new ways of solving customer problems that were previously impossible due to technical constraints.

He was clear: traditional product management approaches often fail with AI-driven products.

LLM-based systems behave differently, demand new workflows, and bring new types of risk.

Unlike deterministic software, LLMs are probabilistic (identical inputs can produce different outputs), making experimentation easy but production readiness challenging, and forcing teams to rethink how they test, evaluate, and monitor features.

One of the biggest traps, Mende explained, is confusing a successful prototype with a scalable solution:

It’s easy to paste a prompt into ChatGPT and see results; much harder to make it reliable across thousands of real customer inputs.

Teams need structured datasets, big tables of real customer examples, to track accuracy, spot regressions, and see if changes actually work. Without them, it’s all guesswork.
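
A minimal sketch of what such a dataset-driven check might look like. Everything here is illustrative: the eval rows, the intent labels, the keyword-based `classify` stand-in (which you would replace with a real LLM call), and the accuracy threshold are assumptions for demonstration, not Booking.com's actual setup:

```python
# Hypothetical evaluation harness: score a classifier against a table of
# real customer examples and fail loudly if accuracy regresses.
EVAL_SET = [
    {"input": "Where is my order #1234?", "expected": "order_status"},
    {"input": "Cancel my booking for tomorrow", "expected": "cancellation"},
    {"input": "Do rooms have wifi?", "expected": "amenities"},
]

def classify(text):
    # Stand-in for an LLM call; swap in your model client here.
    lowered = text.lower()
    if "order" in lowered:
        return "order_status"
    if "cancel" in lowered:
        return "cancellation"
    return "amenities"

def accuracy(dataset, predict):
    hits = sum(1 for row in dataset if predict(row["input"]) == row["expected"])
    return hits / len(dataset)

if __name__ == "__main__":
    score = accuracy(EVAL_SET, classify)
    assert score >= 0.9, f"Accuracy regressed: {score:.2f}"
```

Running this on every prompt or model change turns "does it still work?" from guesswork into a tracked number, which is the point Mende is making.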

Focus on accuracy, cost, and speed

Mende’s practical approach to model selection focuses on accuracy, cost, and latency: start with the most capable model to see if the problem can be solved, then move to smaller or faster models to optimize performance.

This requires testing multiple configurations (context size, prompts, and parameters) since even small changes affect results. Beyond the model, context selection, prompt instructions, and external tools are critical:

For example, when a customer asks about a specific order, the system should fetch real-time data instead of relying on static knowledge. This combination of LLMs and tools turns simple prompts into full systems, but also increases complexity and maintenance costs.

LLMs can transform how users interact – if teams build the right infrastructure

Mende concluded his How to Web lecture by saying LLMs shine by transforming user interaction: for the first time, digital products can understand plain language, turning customer requests directly into actions.

This shift brings digital experiences closer to human conversations and enables new product patterns that were out of reach just a few years ago.

The challenge now, Mende explained, is not whether LLMs work, but whether teams are willing to build the evaluation, monitoring, and infrastructure required to make them truly useful.

The post Forget the Model, It’s Workflows That Make LLM Products Run appeared first on ShiftMag.


‘Depths of Wikipedia’ Creator Annie Rauwerda on ‘Fragile’ Internet Citations

Image credit: Annie Rauwerda, photographed by Ian Shiff, smiling in February 2023

Annie Rauwerda can’t remember a world without Wikipedia. Born in 1999, just two years before the platform launched, she says it has been omnipresent in her life and a source of endless fascination.

In the early days of the COVID-19 pandemic when she was a neuroscience student at the University of Michigan, Rauwerda said she spent a lot of time on Wikipedia and started posting quirky stories she found.

“As I clicked around, there were so many things with goofy titles,” said the now 26-year-old. “I thought to myself: ‘This could be big.’”

Making as many as five videos a day, Rauwerda indeed gained an audience with her off-beat discoveries — from stolen and missing moon rocks to the back story of people demonstrating “high fives.” She created Depths of Wikipedia, a group of social media accounts, and has more than 1.5 million followers on Instagram, 200,000 on TikTok, and 130,000 on BlueSky.

In 2022, Rauwerda was named the Media Contributor of the Year by the Wikimedia Foundation, the nonprofit that hosts Wikipedia.

In October, Rauwerda was invited to present at the Internet Archive event in San Francisco celebrating the milestone of 1 trillion webpages saved. She brought a burst of energy and humor to the stage as she shared screenshots of some of her favorite Wikipedia articles.

Watch Annie at Internet Archive’s 1 Trillion Web Page Celebration:

Rauwerda calls herself an Internet Archive “super fan” and acknowledges its value in providing links to original sources.

“If Wikipedia is worth anything at all, it’s because of the citations, and those citations are increasingly hard to access,” she said, noting that more than half of the community articles contain a dead link. “That’s not a concern, though, for us, because we have partnerships with the Internet Archive to make sure that those links are archived and can be clicked by anyone.”

Professionally and personally, Rauwerda said she uses the Archive constantly as she looks for material, seeks out old blogs or edits Wikipedia pages.

“It’s really hard for me to think of an organization that I’m more enthusiastic about,” Rauwerda said of the Internet Archive. “I just love everything about it.”

What will matter most to future generations is hard to predict, Rauwerda said, so it’s crucial to save as much of the digital landscape as possible. “I’m thankful the Internet Archive exists,” she said, “especially given how fragile everything is online.”

Rauwerda said she’s had a “simultaneous love affair with the Internet Archive and Wikipedia” — often toggling back and forth as she dives into topics. She said she embraces the spirit of the open web and the community of people who support this work.

Beyond her social media presence, Rauwerda is writing a book about Wikipedia for Little Brown. The series of light-hearted essays about the off-beat people behind Wikipedia is slated for publication in the fall of 2026.

Rauwerda also turned her discoveries into a comedy show, which she first performed at small clubs in New York. After landing an agent, she went on a multi-city tour of the U.S., customizing the material for each region. She has another round of shows booked for 2026.

“It’s been so fun,” she said. “I’m gonna ride this while it lasts.”
