Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

One year on: Progress on our European digital commitments


Europe is moving fast to capture the benefits of artificial intelligence, recognizing its potential to raise productivity, strengthen competitiveness, and help modernize public services. At the same time, organizations across Europe are focused on digital sovereignty and resilience: retaining control over their data and critical operations in a period of geopolitical volatility.

These priorities go together. That is why one year ago, we announced a set of European digital commitments to respond to these expectations. They focused on five areas:

  1. Help build a broad AI and cloud ecosystem across Europe
  2. Uphold Europe’s digital resilience even when there is geopolitical volatility
  3. Continue to protect the privacy of European data
  4. Help protect and defend Europe’s cybersecurity
  5. Help strengthen Europe’s economic competitiveness, including for open source

Together, they reflect a simple principle: Europe should be able to use global technology at scale, under European rules, with confidence that it will remain available, secure, and under customer control.

One year on, we take stock of how we’ve put those commitments into practice.

1. Building a broad AI and cloud ecosystem across Europe

A year ago, we detailed plans to increase our European datacenter capacity by 40%, expand cloud operations across 16 European countries, and reach more than 200 datacenters on the continent by 2027. Since then, we have announced new multi-billion euro investments in Portugal, Norway, and the UK, adding to existing investments, including in Switzerland. We also launched new cloud regions in Austria, Denmark, and Belgium. Together, this growing capacity is helping European organizations access cloud and AI capabilities closer to home while supporting sustainable growth through investments such as matching 100% of our annual global electricity consumption with renewable energy.

We emphasize now, as we did when first announcing our digital commitments, that European laws apply to our business practices in Europe, just as local laws govern local practices elsewhere in the world. We remain committed not only to building digital infrastructure for Europe, but also to respecting the role that laws across Europe play in regulating our products and services.

2. Upholding Europe’s digital resilience in a volatile geopolitical environment

For many customers, digital sovereignty is now about more than where data is stored. Institutions and businesses across Europe also want to know whether they can rely on critical digital services when geopolitical pressures rise, and whether they can adopt advanced AI capabilities without losing control.

We have made these commitments directly to European national governments and the European Commission, including a commitment to promptly and vigorously contest in court any order by any government to suspend or cease cloud operations in Europe.

We also committed to continuity measures, including expanded partnerships with European cloud partners that can support our customers’ operational continuity in extreme scenarios. Reinforcing this approach, we launched a European resiliency partnership with Delos Cloud to safeguard business continuity in Europe in times of crisis. This work also supports closer cooperation among Europe’s sovereign cloud providers, including crisis response coordination and continuity options designed to help customers maintain operations even in the event of geopolitical disruptions.

We also expanded our strategic partnership with Capgemini to offer fully integrated, managed sovereign cloud services. In addition, we are deepening our collaboration with Accenture to help organizations design and implement sovereign cloud and AI solutions, supporting customers in highly regulated sectors as they balance innovation with control, compliance, and resilience.

To further strengthen governance and operational oversight in Europe, Microsoft’s European activities are now overseen by a board of directors composed exclusively of European nationals, reinforcing regional accountability and our commitments to cybersecurity, resilience, and compliance under European law.

3. Protecting the privacy of European data

Privacy, transparency, and customer control remain central to Europe’s expectations for cloud and AI. That’s why over the past year we have built a portfolio of sovereign cloud options, spanning public cloud, private cloud, and national partner solutions, so that customers can choose the level of control and oversight that best fits their legal, operational, and risk requirements. This portfolio spans infrastructure, productivity, and AI workloads across cloud, hybrid, or fully local deployments.

We have continued to implement our Defending Your Data Initiative, including our commitment to challenge government data requests for EU public‑sector or commercial customers where we have a lawful basis to do so.

We also completed the EU Data Boundary, enabling European customer data to be stored and processed within the EU and EFTA regions.

In order to further reinforce transparency and oversight, we announced Data Guardian, which ensures that all remote access by Microsoft engineers to systems that store and process customer data in Europe is approved and monitored by personnel residing in Europe and logged in a tamper-evident ledger.

Over the past year, we have strengthened our sovereign solutions through new contractual assurances, closer partnerships with European providers, and expanded customer support.

The Microsoft Sovereign Cloud has been enhanced to help customers meet Europe’s growing expectations for control, resilience, and compliance without slowing down innovation. Recent updates add new governance and operational controls, expand productivity options for regulated environments, and strengthen encryption, while making it easier to use advanced AI capabilities that are fully customer-controlled. This includes solutions where AI models can run on customer-owned infrastructure with limited connectivity or even in fully disconnected environments. Earlier this week, we added new capabilities to our private cloud offering allowing organizations to run much larger workloads locally.

Sovereign Landing Zone provides a cloud architecture that embeds governance, compliance, and sovereign controls, helping European organizations deploy cloud environments that align with European regulatory requirements, with less complexity.

External validation of this approach continues to grow. Microsoft was named a leader in Forrester’s latest assessment of sovereign cloud platforms, recognizing the strength of our public cloud, private cloud, and partner-operated approach.

To help customers put this into practice, we opened our first three European Sovereignty and Resilience Studios in Munich, Brussels, and Amsterdam, where governments and enterprises work side by side with Microsoft’s engineers, policy experts, and security teams to capture the full promise of cloud and AI. Additional studios are planned to open in Microsoft’s nine other Innovation Hubs across Europe.

4. Helping protect and defend Europe’s cybersecurity

Cyber threats don’t stop at national borders, and Europe’s security depends on strong public‑private cooperation. During the last year, we have rolled out our European Security Program (ESP), an offering available at no cost to governments across the UK, EU, EFTA, and EU accession countries. It expands threat intelligence sharing and prioritizes new partnerships and investments to help protect critical infrastructure, disrupt cybercrime, and strengthen Europe’s collective ability to respond to attacks.

This program is live in 27 countries across Europe, providing support at no cost within a clear scope through structured briefings, early warnings, and tailored information sharing relevant to each country’s environment.

We have provided cybersecurity support to NATO, Ukraine, and other European governments, including threat intelligence, election protection, and disrupting attacks targeting European governments, companies, and citizens.

Since the start of Russia’s full-scale invasion of Ukraine in 2022, when we helped move critical data and services to secure datacenters across Europe and defend against sustained cyberattacks and eventual kinetic attacks, Microsoft has continued to support the country without interruption, providing more than $600 million in free technology, security, and financial assistance.

We have also expanded collaboration by embedding investigators with Europol’s European Cybercrime Centre (EC3). Together, we are translating technical threat intelligence into coordinated operational action, linking visibility into cybercriminal infrastructure with law enforcement’s ability to investigate, coordinate, and disrupt. This model underpinned recent cybercrime takedowns, including Tycoon 2FA, Lumma Stealer, and RedVDS. And, through our partnership with CyberPeace Institute, more than 300 European nonprofits are receiving cybersecurity support.

All of this work was reinforced in July with the appointment of Freddy Dezeure as Deputy Chief Information Security Officer, a European national based in Europe, who is coordinating Microsoft’s compliance with European cybersecurity regulations. Our European executive cybersecurity presence and oversight are closely aligned with Microsoft’s broader cybersecurity governance, combining European guidelines with globally consistent security practices.

5. Strengthening Europe’s economic competitiveness, including for open source

We continue to support open ecosystems, including open source, to keep our AI and cloud platforms accessible and interoperable, and to give customers deployment options that fit their needs. There are almost 25 million European software developers active on GitHub, making more than 155 million contributions to public projects in the last year alone. Through Microsoft Foundry, customers can choose from more than 11,000 AI models, both open source and commercial, and run them in sovereign public or private clouds, from cloud to edge. This enables customers to deploy the same Microsoft Foundry model catalog within sovereignty‑aligned infrastructure.

But it is also vital that we support AI solutions that are more multilingual and attuned to cultural context. As part of our commitment to advance European commerce and culture, we launched LINGUA in September 2025 to support projects that collect high‑quality speech and text datasets for Europe’s underrepresented languages. Following an open call, we selected 12 projects spanning 16 languages and dialects across 10 countries, bringing together universities, nonprofits, a government language center, and a public broadcaster to create and digitize open datasets, preserve heritage languages, and develop new evaluation resources for multilingual AI.

We have new AI for Culture projects to digitally preserve iconic European sites and artifacts, including a digital replica of Notre Dame with the French Institut du Patrimoine and Iconem, and we are working with leading institutions to digitize historic cinematic model opera sets and enable access to metadata associated with millions of artifacts. We are also working with the Vatican Library on digitization and AI analysis of historic documents. All of this builds on preservation efforts underway since 2019 for landmarks such as St. Peter’s Basilica in Rome, Mont Saint Michel in France, and Ancient Olympia in Greece.

Relatedly, CĂŠline Geissmann was chosen to lead our Microsoft Open Innovation Center in Strasbourg to work at the intersection of AI, languages, culture, open data, and innovation.

Staying accountable as Europe’s digital landscape evolves

These commitments are our North Star for how we engage in Europe, grounded in European law and values, shaped by European priorities, and designed to progress over time.

As Europe’s digital and geopolitical context continues to evolve, we will keep engaging with policymakers, regulators, customers, and partners to test whether what we are delivering matches what Europe needs. Where it does not, we will adapt.

Trust cannot be claimed. It needs to be earned through our actions, day by day. We are committed to earning that trust by listening, acting, and delivering for Europe.

The post One year on: Progress on our European digital commitments appeared first on Microsoft On the Issues.

Read the whole story
alvinashcraft
3 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Fixing Broken Markdown in AI Translation: Hardening a Production Pipeline


By Minseok Song and Hiroshi Yoshioka (Microsoft MVPs)

 

TL;DR

Recent community feedback, especially from Japanese translations, revealed that many translation failures were not semantic, but structural.

Through detailed issue reports and discussions, we identified recurring patterns such as broken links, malformed code fences, inconsistent list structures, and CJK-specific formatting issues.

 

In response, Co-op Translator has undergone a series of structural improvements across multiple releases, culminating in v0.18.1 with enhancements such as parser-based code fence handling, list-aware chunking, language-specific Markdown templates, safer CJK emphasis normalization, more robust image migration, and improved internal anchor consistency.

 

These changes were directly informed by real-world community feedback. We would like to especially thank Hiroshi Yoshioka (Microsoft MVP), whose many detailed reports not only uncovered several of these systemic issues but also made this community report possible.

The result is not just improved Japanese translations, but a more reliable and resilient translation pipeline for any repository that depends on Markdown fidelity.

Introduction

Most translation bugs are not actually translation bugs.

They are structural failures.

 

They show up as broken links, missing bold markers, unclosed code fences, skipped content, or images that quietly point to the wrong place. To a learner reading translated technical documentation, those issues can make a page feel untrustworthy. To a maintainer localizing documentation at scale, they reveal something deeper: the translation pipeline is not preserving structure as carefully as it preserves meaning.

That insight became much clearer over the past several months through community feedback on Co-op Translator.

 

Co-op Translator helps maintain educational GitHub content across many languages while keeping Markdown, images, and notebooks synchronized as the source evolves. As Hiroshi Yoshioka reported a series of Japanese translation issues across real Microsoft learning repositories, each issue looked narrow on the surface: a broken link here, a skipped line there, bold markers not surviving around linked text, HTML image tags not being rewritten, or code fences breaking after chunking.

 

Example of a real community-reported issue where a code block was broken during translation, causing structural corruption in the output.

 

But taken together, those reports exposed a broader pattern:

The hardest problem was not “translate this sentence.”

The hardest problem was “translate this document without damaging its structure.”

This post is a community report on the hardening work that followed, especially in the recent run-up to v0.18.1, and what we learned from those real-world cases.

Why these reports mattered

One of the most useful things about community feedback is that it reveals failure modes that synthetic tests often miss.

These were not edge cases found in toy Markdown samples. The reports came from real translated content in active educational repositories. That meant we were dealing with the kinds of files maintainers actually have to ship:

  • nested lists
  • fenced code blocks
  • inline HTML
  • relative links
  • translated headings
  • migrated image assets
  • CJK punctuation and emphasis edge cases

In other words, we were seeing the kinds of Markdown that break when a translation system is only mostly correct.

1) We stopped treating code fences like a regex problem

Code fences are not a regex problem—they are a structural one.

Left: Regex-based handling breaks code fences and list structure across chunks.
Right: Parser-based processing preserves code blocks and their surrounding context as atomic units.

One of the earliest recurring themes was code fence integrity.

A report on incorrectly handled triple backticks highlighted a classic failure mode: if fenced blocks are detected or split incorrectly, placeholders can fall out of sync, chunk boundaries can be corrupted, and the translated file can come back structurally damaged. A later report showed a closely related issue: list items and indented code placeholders could be split into separate chunks, which then caused broken fences downstream.

The right fix was not another regex patch.

Instead, Co-op Translator moved to a parser-based approach using markdown-it-py for fenced code block detection. This made code block handling spec-aware and more resilient to cases like unmatched fences, variable fence lengths, and info strings. More importantly, it ensured code sections were treated as atomic units during chunking and placeholder restoration.
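The post names markdown-it-py as the parser behind this change. As a rough sketch of what spec-aware fence detection looks like (the helper name and token handling below are our illustration, not Co-op Translator's actual code):

```python
# Minimal sketch: locate fenced code blocks via a CommonMark parser instead
# of regex. Assumes markdown-it-py is installed (pip install markdown-it-py).
from markdown_it import MarkdownIt

def find_fenced_blocks(text: str):
    """Yield (start_line, end_line_exclusive, info_string) per fenced block."""
    md = MarkdownIt("commonmark")
    for token in md.parse(text):
        # Fence tokens carry their 0-based source line range in token.map,
        # so the whole block (including the closing fence) can be treated
        # as one atomic unit during chunking.
        if token.type == "fence" and token.map:
            start, end = token.map
            yield (start, end, token.info.strip())

fence = "`" * 3  # built programmatically so this sample stays self-contained
sample = f"intro\n\n{fence}python\nprint('hi')\n{fence}\n"
blocks = list(find_fenced_blocks(sample))
print(blocks)
```

Because the parser follows the CommonMark spec, unmatched fences, longer fence runs, and info strings fall out of the grammar rather than needing ad hoc patterns.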

This same principle was extended to list-aware chunking.

Rather than splitting Markdown line by line and hoping the model would preserve structure, the pipeline now groups list items together with their continuation lines and indented placeholders such as @@CODE_BLOCK_X@@. This prevents bullets and their associated code content from being separated into different translation chunks.

This was not just a better heuristic. It changed the unit of chunking itself.

In practice, this required modifying the chunking pipeline to detect and preserve list-item blocks before token-based splitting. Instead of treating each line independently, we introduced a grouping step that keeps the entire list context intact, including nested indentation and code placeholders.

The change was implemented directly in the chunking logic: 

lines = _group_lines_preserving_list_items(part_text)

This helper ensures that list items and their associated code blocks are processed as a single unit, preventing structural corruption during translation.
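To make the grouping idea concrete, here is a simplified, hypothetical version of such a helper. The placeholder format comes from the post; the logic is an approximation for illustration, not the project's implementation:

```python
import re

# Hypothetical sketch of list-aware grouping: a list item, its deeper-indented
# continuation lines, and any code placeholders stay together as one unit so
# chunking can never separate a bullet from its code.
LIST_ITEM = re.compile(r"^(\s*)(?:[-*+]|\d+[.)])\s+")    # bullet or numbered item
PLACEHOLDER = re.compile(r"^\s*@@CODE_BLOCK_\d+@@\s*$")  # protected code block

def group_lines_preserving_list_items(text: str) -> list:
    lines = text.splitlines()
    units, i = [], 0
    while i < len(lines):
        m = LIST_ITEM.match(lines[i])
        if m is None:                 # not a list item: pass through unchanged
            units.append(lines[i])
            i += 1
            continue
        indent = len(m.group(1))
        block = [lines[i]]
        i += 1
        while i < len(lines):         # absorb the item's continuation lines
            nxt = lines[i]
            nxt_indent = len(nxt) - len(nxt.lstrip())
            if nxt.strip() and (nxt_indent > indent or PLACEHOLDER.match(nxt)):
                block.append(nxt)
                i += 1
            else:
                break
        units.append("\n".join(block))
    return units

src = "- step one\n    @@CODE_BLOCK_0@@\n- step two\nplain text"
units = group_lines_preserving_list_items(src)
print(units)
```

Token-based splitting then operates on these grouped units, so a chunk boundary can fall between list items but never inside one.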

Why this mattered

Technical documentation frequently embeds code examples directly under list items or step-by-step instructions. When these relationships are broken during translation, the issue is not just cosmetic. It results in structurally invalid Markdown and misplaced code blocks that can confuse readers and make examples unusable.

These were not edge cases. They appeared in real production documentation where:

  • Fenced code blocks became malformed after chunking
  • List items and their associated code placeholders were separated into different segments
  • Placeholder ordering drifted, breaking reconstruction of the original structure

In practice, this meant that even when the translated text was correct, the document itself could no longer be trusted as a working technical resource.

What changed in practice

Before:

  • Code samples could leak out of their list context
  • List items and code blocks were split across chunks
  • Placeholder ordering could drift, breaking reconstruction

After:

  • Code blocks are preserved as atomic units during chunking
  • List-bound code samples remain intact
  • Placeholder ordering is stable across the pipeline

2) We restored internal link consistency across translation chunks

Even when each chunk appears locally correct, internal links can break at the document level.

Left: Anchor links drift out of sync because headings and links are translated independently across chunks.
Right: After document-level normalization, links correctly resolve to their corresponding translated headings.

Another cluster of issues surfaced when translating longer Markdown documents: internal links would silently break once the content was processed in chunks.

Co-op Translator splits large documents into multiple chunks to fit within model constraints. While this works well for translation itself, it introduces a structural problem. Internal links such as [Go to section](#section-name) depend on heading-derived anchor slugs, and those slugs can change during translation. When each chunk is translated independently, links and headings can drift out of sync.

In practice, this meant that even when translated headings and links looked correct locally within a chunk, they no longer matched at the document level. Tables of contents, section jump links, and cross-references inside the same file could silently break.

The right fix was not to rely on chunk-level correctness.

Instead, Co-op Translator introduced a document-level normalization step for internal anchor links.

The pipeline now parses both the source and translated Markdown using markdown-it, extracts headings, generates GitHub-style slugs from the translated headings, and then realigns internal anchor links so they correctly point to their corresponding translated sections. Rather than trusting fragment identifiers produced during chunk-level translation, links are reconciled against the final translated document structure.

This was not just a small post-processing tweak. It changed where consistency is enforced.

In practice, this required introducing a normalization step that runs after all chunks are merged back into a single document. Instead of assuming each chunk is self-consistent, the system now treats the entire document as the source of truth and rebinds internal links accordingly.

The change was implemented as a dedicated normalization pass:

normalize_internal_anchor_links(source_markdown, translated_markdown)

This function aligns fragment identifiers with translated heading slugs, ensuring that internal navigation remains valid even when content has been translated in multiple independent chunks.
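As a toy version of such a pass (the simplified GitHub-style slugging and the positional heading pairing below are our assumptions for illustration, not the project's code):

```python
import re

HEADING = re.compile(r"^#{1,6}\s+(.*)$", re.MULTILINE)
LINK = re.compile(r"\[([^\]]+)\]\(#([^)]+)\)")  # internal anchor links only

def github_slug(heading: str) -> str:
    """Approximate GitHub's heading-to-anchor slug algorithm."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\s-]", "", slug)  # drop punctuation
    return re.sub(r"\s+", "-", slug)      # spaces become hyphens

def realign_anchor_links(source_md: str, translated_md: str) -> str:
    """Rebind #fragment links against translated heading slugs, assuming the
    source and translation contain the same headings in the same order."""
    src_slugs = [github_slug(h) for h in HEADING.findall(source_md)]
    dst_slugs = [github_slug(h) for h in HEADING.findall(translated_md)]
    mapping = dict(zip(src_slugs, dst_slugs))  # positional pairing
    def fix(m):
        label, frag = m.group(1), m.group(2)
        return f"[{label}](#{mapping.get(frag, frag)})"
    return LINK.sub(fix, translated_md)

src = "# Setup\n\nSee [setup](#setup)."
dst = "# セットアップ\n\nSee [setup](#setup)."
print(realign_anchor_links(src, dst))
```

The key design point is that the rebinding runs over the fully merged translated document, so a link in chunk one can still resolve to a heading translated in chunk five.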

Why this mattered

Technical documentation relies heavily on internal navigation such as tables of contents, section links, and cross-references within the same file.

When anchor links drift out of sync with translated headings, the document becomes difficult to navigate even if the translation itself is accurate. Readers may click on links that lead to incorrect sections or nowhere at all, which significantly reduces trust in the content.

These issues surfaced in real-world usage where:

  • Internal links no longer matched translated heading slugs
  • Tables of contents pointed to incorrect or missing sections
  • Cross-references silently broke across chunk boundaries

This highlighted that correctness at the chunk level was not enough. Consistency had to be enforced at the document level.

What changed in practice

Before:

  • Internal links could drift out of sync with translated headings
  • Tables of contents pointed to incorrect or missing sections
  • Cross-references silently broke across chunk boundaries
  • Long documents behaved like fragmented outputs rather than a single unit

After:

  • Internal links are realigned with translated heading slugs at the document level
  • Tables of contents correctly resolve to translated sections
  • Cross-references remain consistent across the entire document
  • Long Markdown documents behave as a single coherent unit

3) We fixed CJK emphasis the safe way

Bold and italic rendering around CJK text was a recurring and subtle failure point.

Issues like “Markdown bold not handled correctly” may look minor, but they reveal a deeper compatibility problem: many Markdown renderers do not consistently apply emphasis when markers sit directly next to CJK characters.

To address this, we introduced a dedicated normalization step for emphasis markers.

Instead of relying on each renderer to interpret `*`, `**`, and `***` correctly in CJK-adjacent cases, Co-op Translator converts them into equivalent HTML tags such as `<em>` and `<strong>` when the target language is Japanese, Korean, or Chinese.

This shifts emphasis rendering from renderer-dependent behavior to deterministic output.

What mattered was not just fixing it, but fixing it safely.

The normalization is strictly scoped to CJK languages and carefully designed to avoid overmatching. It does not mutate inline code spans or unrelated fragments. This is critical, because overly aggressive formatting fixes can easily break code, identifiers, or underscore-heavy technical text.

Unlike whitespace-delimited languages, Japanese, Korean, and Chinese often place characters directly adjacent to emphasis markers without clear boundaries.

For example, a phrase like:

**example** is ...

may be translated into Japanese as:

**例**は ...

Here, the particle は is attached directly to the emphasized word. In some Markdown renderers, this breaks the expected boundary around **例**, causing the emphasis to render incorrectly or not at all.

This pattern is not limited to Japanese. Similar boundary issues can appear across CJK languages due to the absence of whitespace between words.
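To make the scoping concrete, here is a hedged sketch of such a normalization. It is deliberately simplified (the real pass handles more cases), but it shows the core idea: protect inline code first, then rewrite emphasis markers into deterministic HTML tags only for CJK targets:

```python
import re

# Illustrative sketch, not Co-op Translator's actual implementation.
CODE_SPAN = re.compile(r"`[^`\n]*`")

def normalize_cjk_emphasis(text: str, lang: str) -> str:
    if lang not in {"ja", "ko", "zh"}:
        return text  # strictly scoped to CJK target languages
    stash = []
    def protect(m):  # shield `inline code` so *args etc. are never rewritten
        stash.append(m.group(0))
        return f"\x00{len(stash) - 1}\x00"
    text = CODE_SPAN.sub(protect, text)
    # Bold first, then italic, so ** is consumed before * is considered.
    text = re.sub(r"\*\*([^*\n]+)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\*([^*\n]+)\*", r"<em>\1</em>", text)
    # Restore the protected code spans untouched.
    return re.sub(r"\x00(\d+)\x00", lambda m: stash[int(m.group(1))], text)

print(normalize_cjk_emphasis("**例**は `*args*` を渡す", "ja"))
```

Because the output uses `<strong>` and `<em>` directly, rendering no longer depends on how a particular renderer resolves emphasis boundaries next to CJK characters.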

Why this mattered

Formatting bugs around emphasis may look cosmetic, but they affect readability, hierarchy, and trust, especially in instructional documentation where emphasis often signals warnings, key concepts, or required steps.

What changed in practice

Before:

  • Emphasis markers could render inconsistently when adjacent to CJK characters
  • Bold and italic formatting could break depending on the Markdown renderer
  • Fixes risked overmatching and corrupting code or inline technical content

After:

  • Emphasis rendering is deterministic across CJK languages using HTML tags
  • Bold and italic formatting remains consistent regardless of renderer behavior
  • Normalization is safely scoped, avoiding unintended mutations in code and inline content

Next steps

With the recent release, Co-op Translator now exposes a programmatic API that allows the translation pipeline to be executed directly from Python, not only through the CLI.

This is an important step, but it is not the end state.

The immediate focus is improving adoption. Documentation and usage patterns are being developed so that the API can be reliably integrated across different environments and workflows.

More fundamentally, the direction is shifting.

Co-op Translator is evolving from a repository-specific tool into a reusable translation engine that can operate as part of larger content pipelines.

This enables broader use cases, including:

  • Long-form content such as eBooks and technical blogs
  • Developer documentation and static site projects (for example, Docusaurus or Astro)
  • Continuous documentation pipelines that track and update translations as source content evolves
  • Multilingual SDK, API documentation, and knowledge base systems

The long-term goal is to treat translation as infrastructure rather than a one-time task.

Instead of generating static outputs, the system is being designed to support continuous updates, structural guarantees, and seamless integration into real-world documentation workflows.

Why community feedback mattered so much here

One of the most encouraging parts of this work is that the most useful reports were not always long reports.

Sometimes a single repository link, a screenshot, and one concrete example of broken output were enough to reveal a structural weakness in the translation engine. That feedback created a valuable loop between people reading translated docs and people maintaining the translation tooling.

Hiroshi's reports did not just identify isolated defects. They helped surface recurring categories of failure:

  • code fence integrity
  • chunk boundary safety
  • link preservation
  • CJK emphasis compatibility
  • image path migration
  • anchor normalization

Once those patterns became visible, the fixes could be implemented in the core and covered with tests so that the broader ecosystem, not just one file or one repo, would benefit.

Why this matters for learners worldwide

Co-op Translator is used in educational repositories where translated documentation can lower the barrier to learning for people around the world. That raises the quality bar.

A learner should not have to wonder whether a missing bold marker changed the meaning of a sentence.
A learner should not hit a broken anchor halfway through a tutorial.
A learner should not lose trust in a translated page because a code block or image path was corrupted during processing.

Improving those details is not cosmetic. It is part of making global technical education more reliable.

Closing thoughts

This community report comes down to a simple truth:

Translation quality depends on structural quality.

Community feedback helped Co-op Translator get better at preserving the things technical documents depend on most: code fences, lists, links, emphasis, images, and anchors. The result is a more dependable foundation for multilingual documentation, not only for Japanese but for any repository that needs translated content to behave like a maintained technical artifact rather than a plain text dump.

To everyone who has opened an issue, shared a screenshot, submitted a PR, or stress-tested translated docs in the real world: thank you. That feedback is helping Co-op Translator become a stronger tool for maintainers and a more trustworthy experience for learners.

If you are maintaining multilingual Markdown content, we hope these lessons are useful beyond this project too: use parsers where you can, make structure a first-class concern, and treat community bug reports as design input, not just support tickets.

 

If you are working on multilingual documentation, you can explore Co-op Translator here:

https://github.com/Azure/co-op-translator


About the authors

Minseok Song (Microsoft MVP) is an OSS maintainer of Co-op Translator focusing on GitHub-native multilingual automation.

Hiroshi Yoshioka (Microsoft MVP) is a community contributor who has played a key role in improving translation quality through detailed real-world feedback.


🖼️Streamline Image Generation Workflow in Foundry Toolkit


Integrating image generation into a production AI application has historically meant juggling multiple surfaces — browsing models in the Foundry portal, setting up deployments via the Azure CLI, testing prompts in a separate tool, then stitching together API credentials before writing a single line of app code. That context-switching adds friction at exactly the moment you want to be experimenting.

With this release, the full image generation workflow — discover, deploy, prompt, iterate, export code — lives inside your editor. A few things this unlocks for developers:

🎨GPT-Image-2 in the Model Catalog

GPT-Image-2 via Microsoft Foundry is now listed in the Foundry Toolkit Model Catalog. You can browse its capabilities, review inference parameters, and deploy it to any Azure AI Foundry project directly from the sidebar — no portal tab-switching required.

To get started:

  1. Open FOUNDRY TOOLKIT → My Resources → Model Catalog
  2. Search for gpt-image-2 and select it to view model details and inference parameters.
  3. Click Deploy to add it to your active Foundry project.

✨Image Playground

With GPT-Image-2 deployed, the Playground automatically surfaces an Image Playground mode. Describe what you want, hit generate, and see results side by side — no REST client, no extra tooling. Use the View Code shortcut to copy the API call directly into your project.

To generate your first image:

  1. Click + New Playground in the Playground tab — the mode auto-selects Image Playground when gpt-image-2 is the active model.
  2. Type a prompt and send — generated images appear in the canvas with download controls.
  3. Click View Code (top right) to get a ready-to-paste code snippet for your application.
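As a rough illustration of the kind of snippet View Code hands you, here is a minimal Python sketch that assembles an image-generation request for a deployed model. The endpoint URL shape, API version, and parameter names here are assumptions for illustration only, not the toolkit's actual output; use the snippet the Playground generates for your deployment.

```python
import json

def build_image_request(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Build the JSON body for an image-generation call to a deployed model."""
    return {"prompt": prompt, "size": size, "n": n}

def build_url(endpoint: str, deployment: str, api_version: str = "2024-02-01") -> str:
    """Compose the request URL for a deployment (URL shape is an assumption)."""
    return (f"{endpoint}/openai/deployments/{deployment}"
            f"/images/generations?api-version={api_version}")

# Example: a request targeting a hypothetical resource and the gpt-image-2 deployment.
body = build_image_request("a watercolor fox in a forest")
url = build_url("https://my-resource.openai.azure.com", "gpt-image-2")
print(url)
print(json.dumps(body, indent=2))
```

The point is that everything the Playground lets you tweak interactively (prompt, size, count) maps onto a small, POSTable payload, so moving from experimentation to app code is mostly a copy-paste step.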

Image generation is one of the fastest-growing use cases in production AI applications — from dynamic content creation to data augmentation to UI asset generation. This update ensures developers building on Microsoft Foundry have a first-class path to ship those capabilities faster.

🚀 Get Started Today

Ready to experience the future of AI development? Here's how to get started:

We'd love to hear from you! Whether it's a feature request, bug report, or feedback on your experience, join the conversation and contribute directly on our GitHub repository.

Happy Coding!


Episode 2000!

Recorded live at the Tavern Hall in Bellevue during the Party with Palermo for the MVP Summit, it's episode 2000! Carl and Richard take questions from the audience and play clips from past guests and listeners about their experiences with .NET, and the role that .NET Rocks has played in their careers. After two thousand shows, there are lots of stories, and plenty to celebrate. Thanks for listening!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/71767106/dotnetrocks_2000_episode_2000.mp3

WW 981: Semi-Sophisticated - Microsoft Releases Source Code For 86-DOS 1.00

1 Share

This episode dives into early MS-DOS/PC-DOS source code, Snapdragon X2 gaming, and the "We are Xbox" messaging. Plus, Paul details 2 big changes in the Windows Insider Program. The latest PowerToys adds 2 useful new utilities and many improvements. And a TWiT listener asked about buying Windows 11 on Arm to virtualize it on a Mac. You're not going to believe what happened next.

Windows

  • Microsoft announces big changes to Windows Update
  • Microsoft announces Windows Insider Program changes, with Experimental channel
  • Hands-on with both predictably shows some nice improvements
  • Hands-on with Snapdragon X2 gaming - more wins, but still some losses
  • Microsoft open sources some of the earliest MS-DOS source code and related materials
  • Intel loses $3.7 billion and Wall Street could not be happier. WTF is happening - Paul has a theory and it's called collusion

AI

  • Microsoft and OpenAI revise partnership again and wait for it...
  • OpenAI immediately signs on with AWS
  • Microsoft 365 Copilot subscribers get Word, Excel, and PowerPoint agentic AI features
  • Copilot in Outlook can manage your inbox and calendar, but seriously stop using Outlook
  • GitHub Copilot's switch to usage-based billing starts June 1
  • OpenAI is reportedly working on a phone because of course it is
  • Adobe Firefly AI Assistant is now available in preview
  • And then Anthropic goes deeper into the creator market

Xbox and gaming

  • Microsoft Gaming is being (re)rebranded to Xbox!
  • New Xbox leadership can't stop explaining its plans and it's glorious
  • Microsoft still plans a mobile game store, waiting on Apple to stop being so f'ing terrible
  • Valve's Steam Controller will cost $99 and launches next week

Tips and picks

  • Tip of the week: Windows licenses, $, and you
  • App pick of the week: PowerToys 0.99
  • RunAs Radio this week: M365 Copilot vs Claude Cowork with Sharon Weaver
  • Brown liquor pick of the week: Reifel Rye

Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell

Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly

Check out Paul's blog at thurrott.com

The Windows Weekly theme music is courtesy of Carl Franklin.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Download audio: https://pdst.fm/e/pscrb.fm/rss/p/mgln.ai/e/294/cdn.twit.tv/megaphone/ww_981/ARML3814957756.mp3

Microsoft Sovereign Private Cloud scales to thousands of nodes with Azure Local


Today, I am pleased to announce that Azure Local now scales to support deployments of up to thousands of servers within a single sovereign environment, allowing organizations to run much larger workloads locally across large-footprint datacenters, industrial environments and edge locations while maintaining control within their sovereign boundary.

Organizations operating national infrastructure, regulated workloads or mission-critical services are navigating a fundamental shift in how cloud infrastructure must be deployed and managed. As digital sovereignty postures evolve and regulatory requirements tighten across regions, infrastructure strategies are increasingly shaped by the need to maintain jurisdictional control over data, operations and dependencies. At the same time, AI and data-intensive applications are moving closer to where data is generated, requiring infrastructure that can scale to support larger deployment footprints while maintaining operational control, compliance and data residency requirements within sovereign environments.

Azure Local is the foundation for Microsoft’s Sovereign Private Cloud, allowing organizations to run cloud-consistent infrastructure on hardware they own and operate within their sovereign boundary. It supports deployments across connected, intermittently connected or fully disconnected environments. With Azure Local disconnected operations, customers retain the ability to apply policy enforcement, role-based access control, auditing and compliance configuration locally, allowing them to control how infrastructure is configured, secured and updated regardless of public cloud connectivity.

Scaling Sovereign Private Cloud
Sovereign Private Cloud deployments must scale to support not only larger workloads, but also the operational requirements of national infrastructure and regulated industries. Azure Local allows organizations to grow deployments from hundreds up to thousands of servers within a single sovereign boundary, allowing infrastructure to expand alongside demand without requiring architectural redesign.

As deployment footprints grow, resiliency becomes essential to maintaining continuous operations for mission-critical services. Expanded fault domains and infrastructure pools help prevent hardware failures from resulting in service outages, ensuring critical workloads remain operational across environments with varying levels of cloud connectivity.

At these larger scale points, organizations can run data-intensive AI inference and analytics workloads entirely within their own environment. With support for high-performance graphics processing unit (GPU) infrastructure, sensitive models and operational data remain within customer-controlled infrastructure, while access management, auditing and compliance controls are maintained within the sovereign deployment.

Built for challenging workloads
Increased deployment scale unlocks new workload placement opportunities, from large sovereign private cloud deployments to distributed AI workloads, allowing organizations to run more data-intensive and latency-sensitive applications entirely within their sovereign boundary.

AT&T, one of the world’s largest telecommunications operators, is deploying Azure Local to run mission-critical infrastructure on hardware they own in their environment. The goal: full operational control while running at the scale the business demands.

“Azure Local provides the infrastructure foundation we need to run critical operations at scale, while ensuring control and governance across our environment. The consistency of the Azure operating model, delivered on our own infrastructure, is key as we continue to modernize while delivering reliable services to our customers.”

— Sherry McCaughan, Vice President – Mobility Core Services, AT&T

Kadaster, the Netherlands’ official land registry and mapping agency, is running Azure Local to keep sovereign control over some of the country’s most sensitive public data.

“As a government agency responsible for some of the Netherlands’ most sensitive data, we need infrastructure that gives us full control over where our data lives and how it’s governed. Azure Local has been a consistent foundation for that — and as our workloads grow in scale and complexity, the platform has grown with us.”

— Maarten van der Tol, General Manager, Kadaster

FiberCop, Italy’s most advanced and extensive digital network operator, is deploying Azure Local across its edge locations to bring sovereign cloud and AI services to organizations throughout the country. Fabio Veronese, Chief Information & Technology Officer, commented:

“FiberCop is better positioned than any other player on the Italian market to drive innovation and deliver cloud as well as AI services at national scale. Azure Local supports our mission to drive Italy’s digital future and brings Microsoft’s cloud capabilities to edge workloads across the country while keeping data sovereignty and compliance where they matter most.”

The infrastructure behind Sovereign Private Cloud
Azure Local is available today with validated compute and enterprise storage platforms from partners including DataON, Dell Technologies, Everpure, Hitachi Vantara, HPE, Lenovo and NetApp, allowing organizations to integrate existing Storage Area Networks (SAN) and preserve prior investments while allowing compute and storage resources to scale independently within their sovereign environment.

At the silicon level, Intel® Xeon® 6 processors provide the compute foundation for the platform. Built for the density and performance demands of modern enterprise workloads, Xeon 6 also brings built-in AI acceleration with Intel® AMX, meaning organizations running inference or generative AI workloads within their sovereign environment do not need to introduce separate, specialized infrastructure to do so.

Together, Azure Local, validated compute and enterprise storage platforms, accelerated computing platforms and underlying silicon can provide a datacenter-scale stack that supports sovereign infrastructure deployments while helping ensure data, models and execution remain within customer-controlled environments.

Sovereign infrastructure built for your requirements
Azure Local was built to meet customers where their requirements are, whether that means strict data residency, disconnected operations, regulated workloads or AI running close to where data is generated. As these requirements evolve across regulated industries and governments worldwide, Sovereign Private Cloud deployments can expand from a single node at the edge to large enterprise-scale datacenter environments, running on hardware organizations own and operate, with consistent lifecycle management through Azure.

Resources:

Learn more about Azure Local
Explore Microsoft’s Sovereign Cloud
Read the Tech Community blog
Visit the Azure Local solution catalog
Douglas Phillips leads global engineering efforts for Microsoft’s specialized, sovereign and private clouds. He is responsible for Microsoft’s global strategy, products and operations that bring Microsoft’s industry-leading solutions, including Azure, our adaptive cloud portfolio and Microsoft 365 collaboration suite, to customers with additional sovereignty, security, edge and compliance requirements.

The post Microsoft Sovereign Private Cloud scales to thousands of nodes with Azure Local appeared first on Microsoft Azure Blog.
