Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Jensen Huang Is Training His Own Replacement


The leather jacket has no clothes.

Jensen Huang stands on stage at GTC, basking in the adulation of an audience that treats product launches like religious experiences, while NVIDIA’s market cap hovers north of four trillion dollars. The narrative being pushed is simple: Jensen is a visionary genius, NVIDIA is the essential infrastructure of the AI revolution, and the future belongs to GPU compute.

I think this narrative is wrong. Not because NVIDIA isn’t dominant today (it obviously is), but because the very things driving that dominance are simultaneously building the machinery of its decline. The diversification that made NVIDIA what it is today is being undone, and what’s left is a company eating its own tail, and not just through circular financing: it’s a silicon ouroboros.

Betrayal

Let’s start with where NVIDIA came from: gaming GPUs. And let’s be honest about how they’re treating that market now.

The RTX 50-series launch tells you everything. The 5080 is a 4080 Super whose headline exclusive feature is Multi Frame Generation (the insertion of AI-generated fake frames that add latency and visual artefacts) at a higher price. GamersNexus found it outperforms the 4080 Super by as little as 2.5% in some titles, with Blender benchmarks showing just 8% over the 4080 Super. The 5060 Ti ships with 8GB of VRAM in 2026, which is genuinely insulting; so much so that German retailers report the 16GB model outselling the 8GB version 16:1, and NVIDIA didn’t even send the 8GB card to reviewers.

I play on a 5080, so I’ve had personal experience of this, and I’ve spent significant time experimenting with these settings. The introduction of things like frame generation is noticeable; things just feel off.

Each generation now delivers less actual hardware improvement and more software gimmicks: DLSS, Frame Generation, Neural Shaders. These are all presented as features, but they’re really admissions of failure. They exist because the raw silicon isn’t advancing fast enough to brute-force the problems they’re designed to mask, and meanwhile prices go up.

But with no competitive alternative offering equivalent ray tracing performance, gamers have nowhere to go. They’re not convinced — they’re captive. And the gaming press, dependent on NVIDIA access and ad revenue, does the convincing for them, selling upscaled 1080p internal resolution at 4K output as though it were the same as native 4K. Follow independent gamers on YouTube and you’ll quickly come across people bemoaning that things now look messy, that we’ve lost the sharpness we used to have, and that modern games are running worse for little to no improved visuals.

Access Media

Why does this narrative go largely unchallenged? Because the gaming media covering GPUs is structurally compromised, as is the press covering technology in general. It all depends on access.

Digital Foundry produces technically excellent analysis, but the editorial framing is selectively applied in ways that are hard to ignore. In 2023, for example, DF was publicly called out for an apparent double standard in how they covered NVIDIA’s frame generation versus AMD’s FSR3, framing the same fundamental technology in markedly different terms depending on who made it. Their RTX 5080 DLSS 4 coverage arrived as an exclusive early preview on NVIDIA’s engineering sample hardware, before any independent testing was possible: effectively a first-look marketing vehicle presented as independent analysis. Then you’ve got the elephant in the room: DF has published multiple videos explicitly sponsored by NVIDIA. That’s when unconscious bias starts to creep in.

To be fair, it’s not just Digital Foundry; this is rampant across the entire games press, and it has long been a problem. I, for one, have always refused to use the word journalism with respect to games media for this reason.

There are exceptions, and today they can often be found on YouTube. GamersNexus, for example: Steve Burke seems genuinely happy to torch a relationship if the product deserves it. Both GamersNexus and Hardware Unboxed released videos detailing how NVIDIA was selectively granting access to produce favourable coverage built on inflated Multi Frame Generation benchmarks. NVIDIA didn’t send reviewers the 8GB card at all; Hardware Unboxed had to buy one themselves to reveal the performance problems. Outlets like GN are the minority, and NVIDIA knows it.

The Data Centre Gold Rush

So NVIDIA has been neglecting consumers in favour of the data centre AI gold rush, and that’s fair enough; there’s money to be made in them there warehouses. But it’s worth looking at how the money is flowing.

A significant portion of NVIDIA’s data centre revenue comes from companies that NVIDIA itself has invested in, who then use that capital to buy NVIDIA hardware. NVIDIA has poured money into CoreWeave, Lambda, Nebius, xAI, and even OpenAI — all of whom are major GPU customers. In January 2026 it invested another $2 billion in CoreWeave on top of a $3.3 billion existing stake, while also committing to be the buyer of last resort for any unsold CoreWeave capacity through 2032. CoreWeave, for its part, had $18.8 billion in debt obligations as of September 2025 with much of it collateralised against NVIDIA GPUs.

This is all entirely legal and unremarkable as a business practice. But it has a structural consequence: when your investors and your customers overlap this heavily, revenue growth and capital deployment become hard to distinguish.

As Ed Zitron has extensively documented, NVIDIA isn’t Enron — but the deals it’s doing with neoclouds are, in his words, “dodgy and weird and unsustainable.” NVIDIA felt compelled to leak a seven-page internal memo insisting it was nothing like Enron — which is the kind of thing you do when you’re definitely not worried about people thinking you’re like Enron. Short sellers Jim Chanos and Michael Burry weren’t convinced. Chanos warned that layering “arcane financial structures on top of these money-losing entities” is the real vulnerability and Burry flagged what he called “suspicious revenue recognition” across multiple AI companies. Bloomberg described the CoreWeave deal plainly as “the latest example of the circular financing deals that have lifted valuations of AI companies and fueled concerns about a bubble.”

Gary Marcus, the NYU cognitive scientist and long-standing AI sceptic (at least of the claims of the LLM makers), has been tracking these dynamics on his Substack since mid-2025. He called the Oracle-OpenAI deal “peak bubble” and warned that the industry had entered “peak musical chairs.” In a more recent piece, he argued that NVIDIA’s stock plateau — up 1200% over five years but essentially flat for six months — marked the point at which Wall Street began losing confidence, driven in part by concerns about circular financing and the profitability of LLM companies. Marcus and Zitron have been two of the most persistent voices making this case while much of the financial press and many analysts were still writing breathless coverage.

This doesn’t make NVIDIA’s revenue fraudulent; it makes it fragile. Analyst models that project these growth rates forward are treating circular capital flows as though they represent independent, end-user-driven demand. And it’s worth remembering that many of the businesses analysts work for generate revenue from fees: they’re not inclined to be too critical. When the AI investment cycle corrects (and it will, because cycles always do), the revenue that was never anchored to external demand will be the first to evaporate.

An Emerging Threat

Meanwhile, the real threat to NVIDIA’s data centre dominance is emerging from below. Purpose-built ASICs for AI inference are starting to compete on price-performance — and in some cases, they’re not just competing, they’re winning.

According to TrendForce, custom ASIC shipments from cloud providers are projected to grow 44% in 2026, while GPU shipments grow at just 16%, and the share of ASICs in AI servers is expected to jump from around 21% in 2025 to nearly 28% this year. Some predict NVIDIA could fall from over 90% inference market share to 20-30% by 2028 as ASICs take over production inference workloads. That’s a heck of a fall, but even a far less severe drop would cause NVIDIA real problems.

And there’s recent precedent for this. During the cryptocurrency boom, miners bought GPUs in bulk for proof-of-work hashing, until purpose-built ASICs arrived that were orders of magnitude more efficient at the same task. GPUs became uncompetitive almost overnight and NVIDIA was left with excess inventory. The stock fell over 50% from its October 2018 peak while gaming revenue nearly halved in a single quarter. Jensen called it a “crypto hangover.” The pattern is straightforward: when a workload becomes well-defined enough to justify custom silicon, general-purpose hardware loses. AI inference is reaching exactly that threshold now. An “ASIC hangover” could be the stuff of nightmares.

The specific challengers tell the story. Google’s TPU Ironwood (7th generation) is considered technically on par with or superior to NVIDIA’s GPUs by some experts, including Chris Miller, author of Chip War. Anthropic trains its most advanced models on up to one million Google TPUs — not NVIDIA GPUs. Amazon is filling data centres with its own Trainium2 chips. OpenAI has committed to deploying 10 gigawatts of custom Broadcom ASICs starting in 2026 while Cerebras’ wafer-scale engine delivers inference at over 6× the speed of Groq’s LPU, which itself was already dramatically outperforming NVIDIA hardware on key benchmarks. SambaNova claims 16 of its chips can replace 320 GPUs for serving a 671-billion-parameter model.

Perhaps the most telling data point is what NVIDIA did in response. In late 2025, it paid $20 billion to acqui-hire Groq, the inference startup founded by one of the original architects of Google’s TPU. Groq’s Language Processing Units were delivering 2-3× speedups over NVIDIA hardware on inference benchmarks, and both AMD and Intel were reportedly bidding aggressively for the company. NVIDIA’s move was widely characterised as defensive: neutralising an emerging threat rather than buying growth. When you spend $20 billion to absorb a competitor whose entire value proposition is that they’re faster than your products, that’s not a position of strength; it’s weakness.

The standard counter-argument to this is CUDA lock-in: every ML team thinks in CUDA, every codebase is coupled to it, and the institutional cost of switching is enormous. This is a genuinely strong defence. Or at least it was, until AI began collapsing technical moats.

Oh The Irony

The technology driving NVIDIA’s revenue boom — large language models — is also the thing that neutralises NVIDIA’s deepest competitive moat.

The CUDA lock-in argument rests on the assumption that migrating millions of lines of GPU-optimised code is prohibitively expensive and time-consuming, because porting has always meant humans manually rewriting and retesting everything. That’s a huge job.

But if you can point an LLM at a CUDA codebase and say “port this to ROCm” or “retarget this for our custom ASIC instruction set” — and get the vast majority of the way there in days rather than months — the switching cost argument collapses. The economic calculation changes dramatically when migration drops from “eighteen months” to “a few weeks of validation and tuning.”

And the companies best positioned to do this are exactly the hyperscalers who are also building or commissioning ASICs: Google, Amazon, and Microsoft all have both the AI capability to automate the migration and the strategic incentive to break free from NVIDIA dependency.

NVIDIA is selling the tools that will be used to escape its own ecosystem.

Why CUDA Is the Perfect Target

This isn’t hand-wavy speculation about AI maybe being able to port code someday: CUDA is almost comically well-suited to automated translation.

Every CUDA kernel is a pure function with explicit inputs, explicit outputs, and no hidden state. There’s no spooky action at a distance: no global mutable state leaking between calls, no side effects you need to trace through a dependency graph. The contract is right there in the function signature, so an agent can look at a single kernel in isolation, understand exactly what it does, rewrite it for a different target, and verify the output without needing to comprehend the entire codebase.
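
To make that concrete, here’s a minimal sketch of the kind of self-contained kernel an agent would encounter. It’s an illustrative toy, not taken from any real codebase, but it shows the property that matters: everything the kernel reads or writes arrives through its parameter list.

#include <cstdio>
#include <cuda_runtime.h>

// A toy kernel with the shape the argument relies on: explicit inputs
// (n, a, x), an explicit output (y), and no hidden state anywhere.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]); // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

The ROCm port of a kernel like this is close to a transliteration, since HIP deliberately mirrors the CUDA syntax, and that mechanical quality is exactly what makes the translation step so amenable to automation.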

And the verification story is perfect for agentic iteration loops: you have deterministic numerical inputs and outputs, so you can generate test cases from the CUDA version, run them on the ported version, and diff the results automatically. An agent doesn’t need to understand the mathematics; it just needs to confirm that the same inputs produce the same outputs within tolerance. Heck, you can even firewall the agent writing the new code from the agent writing its tests. That’s a tight, automatable feedback loop with no human judgement required.
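
Here’s a sketch of what the inner check in that loop might look like, again illustrative rather than drawn from any real pipeline. The relative tolerance matters: reordered floating-point arithmetic on a new target rarely matches bit-for-bit, so exact equality would produce false failures.

#include <cmath>
#include <cstdio>
#include <vector>

// Host-side check an agent loop could call after running the reference
// kernel and the ported kernel on identical inputs. The agent treats this
// as a black box: pass means done, fail means iterate again.
bool outputs_match(const std::vector<float>& reference,
                   const std::vector<float>& candidate,
                   float rel_tol = 1e-5f)
{
    if (reference.size() != candidate.size())
        return false;
    for (size_t i = 0; i < reference.size(); ++i) {
        float diff  = std::fabs(reference[i] - candidate[i]);
        float scale = std::fmax(std::fabs(reference[i]), 1.0f);
        if (diff > rel_tol * scale) {
            std::printf("mismatch at %zu: %f vs %f\n",
                        i, reference[i], candidate[i]);
            return false;
        }
    }
    return true;
}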

But the real killer is the parallelisation. A CUDA codebase might contain thousands of kernels, but they’re largely independent units. So you spin up an orchestrator agent that inventories the codebase and builds a dependency graph, and it fans out to N worker agents, each handling a kernel or module. Each worker rewrites its target, generates tests, and iterates until the output matches. A validation agent runs integration tests on the assembled result. The whole pipeline is embarrassingly parallel; the same property that made the code suitable for GPUs in the first place makes it suitable for parallel agentic translation, and the hyperscalers are literally built for this.
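
The shape of that pipeline is simple enough to sketch in host code. Everything here is hypothetical, with translate_kernel standing in for a worker agent’s rewrite-and-test loop, but the structure (fan out, gather, validate) is the whole idea.

#include <cstdio>
#include <future>
#include <string>
#include <vector>

struct PortResult { std::string kernel; bool tests_passed; };

// Hypothetical stand-in for a worker agent: rewrite one kernel for the
// new target and iterate until its generated tests pass.
PortResult translate_kernel(const std::string& kernel_name)
{
    return { kernel_name, /*tests_passed=*/true }; // stubbed
}

int main()
{
    // The orchestrator's inventory: independent translation units.
    const std::vector<std::string> kernels = { "saxpy", "softmax", "layernorm" };

    // Fan out: one asynchronous worker per kernel.
    std::vector<std::future<PortResult>> workers;
    for (const auto& k : kernels)
        workers.push_back(std::async(std::launch::async, translate_kernel, k));

    // Gather, then hand the assembled result to a validation stage.
    bool all_passed = true;
    for (auto& w : workers) {
        const PortResult r = w.get();
        std::printf("%s: %s\n", r.kernel.c_str(), r.tests_passed ? "ok" : "retry");
        all_passed = all_passed && r.tests_passed;
    }
    return all_passed ? 0 : 1;
}

None of this removes the genuinely hard parts, such as performance tuning on the new target, but it turns a serial human slog into a parallel batch job.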

Now, the obvious objection is that CUDA isn’t just kernels: there’s cuDNN, TensorRT, Nsight, NCCL, Thrust, an entire ecosystem of libraries, profiling tools, and multi-GPU communication primitives that teams have built years of workflow around. All of that is true, but it’s a dependency graph, not magic. These libraries are themselves composed of well-documented APIs with known input-output contracts. The migration challenge is real, but it’s an engineering problem with a finite surface area, not an open research question.

And the hyperscalers aren’t starting from scratch — Google’s JAX ecosystem, AMD’s ROCm stack, and Intel’s oneAPI are all mature enough that the target platforms already have equivalents for most of this tooling. The gap isn’t “does an alternative exist” anymore, it’s “is the switching cost worth it” — and that cost is falling off a cliff precisely because the models NVIDIA’s hardware trained are now capable enough to automate the tedious parts of the migration.

NVIDIA is just as vulnerable to the automation of software development as the rest of us, and with every quarter that passes, the moat gets shallower. And the hyperscalers have very, very big pumps.

The Narrative Arc

NVIDIA’s story is pretty simple if you sum it up. Jensen built a great gaming GPU company: while 3dfx was flailing around, he delivered great products that gamers wanted and consolidated his position through acquisition. Recognising that the lack of diversity was a risk, and looking for ways to make GPUs more broadly useful, he diversified into data centre compute. A smart and necessary move, but not the stroke of genius the press portrays. Solid execution; business school 101. And then the transformer revolution landed in his lap, LLMs became the new hot thing, and his interesting GPU company became the behemoth we know today.

Being in the right place at the right time with the right product isn’t the same as having engineered the entire outcome, but the leather jacket mythology requires a visionary, and the tech press love a messianic story, so that’s what we got.

Now trace the arc forward. Gaming company becomes compute company becomes AI company becomes victim of AI.

The diversification that saved NVIDIA from being just a gaming company is collapsing back into a single dependency — data centre AI revenue — that is simultaneously propped up by a form of circular financing and threatened by the very technology it enables. The customers buying the GPUs are using those GPUs to train the models that will make it trivially cheap to migrate away from NVIDIA’s ecosystem onto cheaper, faster, custom silicon.

It’s the Ouroboros business model. The snake is going to eat its own tail, except the tail is a four-trillion-dollar market cap.

The Intel Parallel

The historical parallel is Intel in the mid-2010s: absolute market dominance and no real competition. AMD had been written off, so Intel got lazy and extractive, delivering incremental improvements at premium pricing, because where else were you going to go? Everybody bought Intel. Then AMD came back with Zen and the whole thing unravelled faster than anyone expected. Look at Intel today.

NVIDIA is arguably more entrenched, but the dynamics are in many ways similar, and the arrogance that comes from unchallenged dominance eventually creates the opening for someone else. Whether that’s ASICs eating the data centre business, AMD getting serious about ray tracing, or Intel maturing its architecture, something will crack. And however entrenched, NVIDIA’s dediversification (is that even a word?) makes it extremely vulnerable: a single product and a handful of customers.

It’s not a question of whether NVIDIA’s position is vulnerable; it is. It has a single product line, a handful of customers with enormous leverage, and cheaper, more performant alternatives emerging. The question is whether Jensen recognises it before the correction arrives, and the GTC keynotes suggest a man who has started to believe his own mythology. That’s usually when the fall comes.

You can see it in how he handles pressure. Just recently, the $100 billion OpenAI infrastructure deal that was announced with great fanfare in September 2025 quietly collapsed to $30 billion. The deal had been in trouble for months: NVIDIA’s own quarterly filings warned there was “no assurance” it would be completed, and Jensen himself fell back on this when challenged. The Wall Street Journal reported that Jensen had been privately criticising OpenAI’s business approach while the deal was supposedly “on track.” When the WSJ first reported the deal was stalling, Jensen called it “nonsense.” And yet weeks later, it was confirmed. Meanwhile, reports emerged that OpenAI was unhappy with NVIDIA’s inference capabilities and had been blaming weaknesses in its Codex product on NVIDIA hardware.

MIT Sloan professor Michael Cusumano described the original $100 billion arrangement to the Financial Times as “kind of a wash” — NVIDIA invests $100 billion in OpenAI stock, OpenAI spends $100 billion on NVIDIA chips. As TechCrunch noted, Jensen’s stated reason for pulling back — that OpenAI’s upcoming IPO closes the window — doesn’t square with how late-stage private investing actually works.

This is not the behaviour of someone operating from a position of strength. Dismissing credible reporting as nonsense, then being proven wrong. Blaming the other party when a deal falls through. Offering explanations that don’t withstand scrutiny. These are the tells of someone who feels the ground shifting and doesn’t like it.

NVIDIA’s pricing confidence tells you everything about how they see the competitive landscape. They believe there’s nowhere else to go, and they’ve gotten comfortable in a market where that has recently been the case. History suggests that this kind of belief is the beginning of the end, and while NVIDIA’s rise to its present height has been meteoric, it’s possible its fall will be just as swift. And for gamers like me, not unwelcome.


The Hidden Power of Frameworks



“The most practical way to simplify complexity is through models.”
Peter Drucker

I’ve created frameworks at Microsoft for leadership advantage and to scale impact around the world.

I’ve also used frameworks and mental models to think and communicate better through the tough stuff.

Most people communicate in streams of thoughts.

High-impact leaders communicate in structured patterns.

Frameworks turn your thinking into portable insight.

Without frameworks:

  • Ideas feel rambling

  • Advice feels generic

  • People forget what you said

With frameworks:

  • Ideas feel clear

  • Advice feels authoritative

  • People remember and repeat it

In other words:

Frameworks convert knowledge into influence.

Key Takeaways

  • Compress complex ideas into simple patterns
  • Make insight memorable and repeatable
  • Scale knowledge across teams and organizations
  • Signal expertise and pattern recognition
  • Turn experience into practical decision tools

The Hidden Mechanism: Why Frameworks Work

The uncommon insight is that frameworks solve three cognitive problems simultaneously.

1. Cognitive Load

Humans struggle to process unstructured information.

Frameworks compress complexity.

Example:

Instead of explaining leadership for 20 minutes:

“Leadership is clarity, energy, and results.”

Instant understanding.


2. Memory Encoding

People remember patterns, not paragraphs.

That’s why great frameworks often use:

  • threes (the rule of three)

  • contrasts

  • before/after

  • acronyms

  • named models

Examples everyone remembers:

  • Start With Why

  • The 7 Habits

  • Jobs To Be Done

  • Circle of Influence

The insight:

Frameworks create mental handles.


3. Authority Signaling

Frameworks signal original thinking.

When you say:

“Here are three ways leaders fail.”

You sound like someone with experience.

Even if the insight is simple.

Frameworks create the perception of:

  • expertise

  • pattern recognition

  • strategic thinking


Turning Raw Ideas into Repeatable Models

One of the most important things I learned at Microsoft is to turn raw ideas into repeatable models.

Instead of saying:

“Good leaders communicate clearly.”

Say:

The CLEAR Model

  • Context

  • Logic

  • Expectation

  • Action

  • Result

Now your idea has a name and structure.

That makes it:

  • teachable

  • shareable

  • memorable


A Quick Story from Microsoft

One of the biggest lessons I learned during my 25 years at Microsoft was the power of frameworks to scale thinking.

Early in my role supporting innovation teams, I was sending long strategy emails filled with insights, trends, and analysis. Leaders appreciated the ideas, but the impact was limited.

So I started turning insights into simple frameworks leaders could apply immediately.

Instead of long explanations, I would share models like:

Vision → Value → Velocity

Leaders across teams could instantly use the framework to evaluate initiatives:

Is the vision clear?
Does it create real value?
Can we execute with speed?

The result was powerful.

Instead of one conversation at a time, the framework spread across teams and organizations.

That’s when I realized something important:

Frameworks don’t just clarify thinking.
They scale impact.


The Creator Advantage

Another non-obvious point worth emphasizing:

Frameworks make content scalable.

One framework can generate:

  • posts

  • talks

  • training

  • courses

  • consulting

Example pattern:

Framework → Examples → Stories → Applications

This is why many thought leaders build entire brands around frameworks.


The Advanced Insight Most People Miss

Frameworks don’t just explain ideas.

They create new ways to see problems.

A great framework is really a lens.

For example:

When I created Vision → Value → Velocity, I didn’t just describe productivity.

I created a decision lens for evaluating work.

That’s when frameworks become powerful.

They move from description → decision tool.


The Real Test of a Great Framework

I found that a great framework should pass three tests:

  1. Memorable

  2. Repeatable

  3. Actionable

If people can’t repeat it, it won’t spread.

If they can’t apply it, it won’t matter.


Frameworks are Compressed Experience

Frameworks are essentially compressed experience.

They turn years of observation into something someone can use in 30 seconds.

That’s why the best thinkers — Drucker, Covey, Porter, Jobs — all spoke in frameworks.

You Might Also Like

Frameworks Library

Charlie Munger’s Mental Models for Leadership Judgment

JD’s Frameworks

The post The Hidden Power of Frameworks appeared first on JD Meier.


IoT Coffee Talk: Episode 302 - Netscape (A Neuromorphic Throwback)

From: IoT Coffee Talk
Duration: 38:51

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Rob and Pete jump on Web3 for a discussion about:

🎶 🎙️ BAD KARAOKE! 🎸 🥁 "Catfish Blues", Jimi Hendrix
🐣 Why Edge AI is not going to MWC when mobile is largely about edge AI?
🐣 Why aren't we using more neuromorphic compute for AI stuff?
🐣 We want more for less, but what went wrong? Is IoT and edge AI the way to go?
🐣 Rob and Pete talk about the legacy of Texas Tech!
🐣 Why are Apple's Mac Mini selling out? Did they have AI right this whole time?
🐣 AI needs guardrails. Do humans need guardrails with AI?
🐣 How the old school mini computer creators create our whole new world.
🐣 The origins of the Internet and how its wizards apparently didn't sleep.
🐣 From Netscape to Clubhouse.
🐣 How "Plug and Pray" change the world and propelled Microsoft Windows.
🐣 It's never too late to innovate. Being first likely means you will be first.
🐣 Pete shows off his Oscar Goldman look!

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62


GeekWire Podcast on location at OpenAI in Bellevue, with CTO of Applications Vijaye Raji

Vijaye Raji, OpenAI’s CTO of Applications, speaks at the company’s Bellevue office opening. (GeekWire Photo / Todd Bishop)

OpenAI just opened its largest office outside San Francisco, in downtown Bellevue, Wash., and we were there for the grand opening to tour the space, check out the vibe, and record this week’s GeekWire Podcast.

Chatting inside the OpenAI game room, we share our observations about the Mad Men-meets-Pacific Northwest aesthetic — which features open floor plans and a wide variety of common areas — and try to figure out what it all says about OpenAI’s culture. 

Plus, a conversation with Vijaye Raji, former Statsig CEO and now OpenAI’s CTO of applications, about Codex, infrastructure, hiring, and the evolution and growth of Silicon Valley tech giants in the region. 

In our final segment, it’s the return of the GeekWire trivia challenge, with a question focusing on one of the earliest tech giants to establish an outpost in the Seattle area. 

‘Hard to imagine going back’

One of the most interesting moments in the conversation with Raji came when he described how OpenAI’s own Codex tool has changed his day-to-day work, to the point where he’s personally making software again, or at least he’s prompting the software to make software.

“Codex has made coding a lot more delightful,” Raji said. “I’m back coding.”

He described a new daily rhythm: “Before you hop into a meeting, you ask it to go do a set of tasks, and then you jump into a meeting, and then when you come back, it’s done, and then you review it,” he said. “It’s so cool.”

OpenAI CTO of Applications Vijaye Raji (left) and Bellevue Mayor Mo Malakoutian just after cutting the ribbon at the opening of OpenAI’s new Bellevue office. (GeekWire Photo / Todd Bishop)

Internally, Raji said teams using Codex are seeing 2-3x productivity gains in terms of code output. Beyond engineering, the tool has found its way into marketing, sales, and operations.

“It’s very hard for me to imagine going back to the way we used to write code anymore,” he said. “It’s fundamentally changed.”

OpenAI’s Codex, which got a Windows app this week, is part of an explosion of AI coding tools including GitHub Copilot (Microsoft), Amazon Q Developer, Google’s Gemini Code Assist, Anthropic’s Claude Code and others, all promising significant developer productivity gains.

A template for other OpenAI offices

As for the Bellevue office, Raji sees it as a potential model for OpenAI’s expansion elsewhere. The proximity to San Francisco headquarters, the shared time zone, the short distance from OpenAI partners Microsoft and Amazon, and the depth of local infrastructure talent make it an ideal test case.

A seating area at OpenAI’s new Bellevue office, featuring the retro-modern aesthetic that runs throughout the space. (Trevor Tondro Photo for OpenAI)

“If we can make Seattle very, very successful, we can take that formula and apply it to more offices,” he said.

OpenAI currently has 250 employees in Bellevue, with room to grow to 1,400. The office houses teams working on infrastructure, ChatGPT, research, advertising, and partnerships.

Raji will be speaking at GeekWire’s AI event, Agents of Transformation, March 24. More info and tickets.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Related coverage: Inside OpenAI’s new Bellevue office: A swanky statement about AI’s impact on the Seattle region

Audio editing by Curt Milton.


OpenAI delays ChatGPT’s ‘adult mode’ again

The feature, which will give verified adult users access to erotica and other adult content, had already been delayed from December.

How to Center Any Element in CSS: 7 Methods That Always Work


Centering elements in CSS often seems straightforward at first, but it quickly becomes confusing once you start building real layouts. A property like text-align: center; works perfectly for text, yet it fails when you try to center an image or a block element.

Then you experiment with margin: auto;, which centers a div horizontally but doesn’t help with vertical alignment. Before long, you find yourself searching through solutions involving Flexbox, Grid, transforms, and other techniques that appear complicated and inconsistent.

The reality is that CSS does not provide a single universal property that can center everything. Instead, each layout scenario requires the right method, and understanding when to use each technique is the key to mastering CSS centering.

Table of Contents

First: Understand the Two Types of Centering

Before diving into centering techniques, it’s important to understand the two types of centering in CSS, because different methods work along different axes. Knowing which axis you want to center along helps you choose the right approach.

There are two axes in CSS layout:

Axis         Direction
Horizontal   Left to Right
Vertical     Top to Bottom

When someone says “center this element”, they usually mean one of four things:

  • Center text inside a container

  • Center a block horizontally

  • Center vertically

  • Center both horizontally and vertically

Each requires a different solution.

Method 1: Center Inline Content (text, inline elements)

This method centers inline content such as text, links, inline images, and sometimes buttons, using the text-align: center; property. This is the simplest centering method in CSS, but it is often misunderstood because it only affects the content inside a block container, not the container itself.

Example

<div class="box">
  <h2>Hello World</h2>
  <p>This text is centered.</p>
</div>
.box {
  text-align: center;
  border: 2px solid #444;
  padding: 20px;
}

Output

[Output screenshot: the heading and paragraph centered inside the bordered box]

Why It Works

When you apply text-align: center; to a parent element, the browser horizontally aligns all inline and inline-block children within that container. This makes it perfect for centering headings, paragraphs, navigation links, or small inline elements, but it won’t work for block-level elements like divs unless their display is changed to inline-block.

So this will NOT work:

.box {
  width: 300px;
  text-align: center; /* does NOT center the box */
}

Real-world use cases

  • Center headings

  • Center button labels

  • Navigation menus

  • Card content alignment

Beginner Mistake

Most people try to center a <div> with text-align: center;. This won’t move the div, only its contents.

Method 2: Center a Block Horizontally

This method centers a block element horizontally using margin: 0 auto;, which is one of the oldest and most reliable CSS techniques. It works by automatically distributing the available horizontal space equally on the left and right sides of the element. When you set the left and right margins to auto, the browser calculates the remaining space in the container and splits it evenly, pushing the element into the center.

Works when:

  • Element has a width

  • Element is block-level

Example: Center a Card

<div class="card">
  I am centered!
</div>
.card {
  width: 300px;
  margin: 0 auto;
  padding: 20px;
  background: #eee;
}

Output

[Output screenshot: the card centered horizontally on the page]

Why It Works

When you set the element's left and right margins to auto, the browser calculates the remaining horizontal space in the container after accounting for the element’s width. It then distributes this extra space equally between the left and right margins, which pushes the element into the center. This happens automatically, making margin: 0 auto; a simple and reliable way to horizontally center block elements with a fixed width.

|----auto----|---element---|----auto----|

Browser calculates: left margin = right margin. So the element sits in the middle.

Important Rule

If width is not defined, it won't work:

.card {
  margin: auto; /* won't center; the card takes the full width */
}

Because block elements default to width: auto, which stretches to fill the container, leaving no spare horizontal space to distribute.

Real-world use cases

  • Center website layout container

  • Center forms

  • Center blog content area

Method 3: Perfect Center (Horizontal + Vertical) with Flexbox

This method uses Flexbox to center an element both horizontally and vertically, making it one of the most reliable modern CSS solutions. When you set a container to display: flex;, you activate the Flexbox layout system, which gives you powerful alignment controls. The property justify-content: center; centers the content along the main axis (usually horizontal), while align-items: center; centers it along the cross axis (usually vertical).

Example: Center Login Box

<div class="page">
  <div class="login">
    Login Form
  </div>
</div>

.page {
  height: 100vh;
  display: flex;
  justify-content: center;
  align-items: center;
}

.login {
  padding: 40px;
  background: white;
  border: 2px solid #333;
}

Output

[Output screenshot: the login box centered in the middle of the viewport]

Why It Works

Flexbox treats the container and its children as a flexible layout system, automatically distributing space along the main and cross axes. This allows any element, regardless of its size, to sit perfectly in the middle of the container, making it ideal for centering modals, hero sections, and other dynamic content.

Property          Controls
justify-content   horizontal (main axis)
align-items       vertical (cross axis)

This Works Regardless Of:

  • Unknown height

  • Unknown width

  • Responsive layouts

  • Dynamic content

That’s why it’s widely used today.

Real-world use cases

Developers commonly use Flexbox centering to place important interface elements directly in the middle of the screen. For example, it helps center modal dialogs, loading spinners, hero section content, and other full-screen UI components. Hence, they remain visually balanced and easy for users to notice, regardless of screen size.

Method 4: Center Using CSS Grid (The Easiest Method Ever)

CSS Grid offers one of the simplest ways to center elements both horizontally and vertically. By setting a container to display: grid; and applying place-items: center;, you can position any child element perfectly in the middle with just a few lines of code. This method works because Grid provides built-in alignment controls that automatically handle positioning along both axes.

Example

<div class="wrapper">
  <div class="box">Centered!</div>
</div>

.wrapper {
  height: 100vh;
  display: grid;
  place-items: center;
}
.box {
  width: 200px;
  padding: 30px;
  text-align: center;
  background: white;
  border: 2px solid #333;
}

Output

[Output screenshot: the box centered both horizontally and vertically]

In this example, the .wrapper acts as the grid container, and the .box element becomes a grid item. The property place-items: center; automatically aligns the box in the middle of the container, both horizontally and vertically.

100vh means 100% of the viewport height, which is the full height of the visible browser window. When you set height: 100vh; on a container, it expands to fill the entire screen from top to bottom.

Why It Works

The property place-items: center is actually shorthand for two grid alignment properties:

align-items: center;
justify-items: center;
  • align-items controls vertical alignment inside the grid.

  • justify-items controls horizontal alignment.

By combining both in one line, Grid centers elements in both directions automatically without needing additional layout rules.

When to Prefer Grid Over Flexbox

CSS Grid is ideal when you only need simple centering and don’t require complex layout control. It keeps your code short and easy to read.

Use Grid when:

  • You only need to center a single element

  • You are not building complex layouts

  • You want the simplest and cleanest code

Use Flexbox when:

  • You are aligning multiple items

  • Layout direction matters (row vs column)

  • You need spacing control between elements

Method 5: Center with Absolute Position + Transform

This method centers an element using absolute positioning combined with CSS transforms, and it works even when you are not using Flexbox or Grid. In this approach, you position the element with position: absolute;, move it to the middle of its parent using top: 50%; and left: 50%;, and then pull it back by half its own size with transform: translate(-50%, -50%);.

Example, Center Popup

<div class="container">
  <div class="popup">I'm centered</div>
</div>
.container {
  position: relative;
  height: 400px;
}

.popup {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
}

Output

[Output screenshot: the popup centered inside its container]

Why It Works

  1. top: 50% moves the top edge to the middle

  2. left: 50% moves the left edge to the middle

  3. translate(-50%, -50%) shifts the element back by half its size

So the center becomes the element’s midpoint, not the corner.

Explanation

Without the transform, the element’s top-left corner sits at the center point, so the element itself appears offset down and to the right:

[Diagram: the element’s top-left corner pinned to the container’s center, the element offset down and to the right]

To fix that, you apply transform: translate(-50%, -50%);, which shifts the element back by half of its own width and height. This adjustment ensures the actual center of the element aligns with the center of the container. Developers often use this technique for overlays, modals, tooltips, and floating UI components.

Real-world use cases

  • Modals

  • Tooltips

  • Floating labels

  • Overlays

Method 6: Vertical Center Single Line Text

This method vertically centers single-line text inside a container by using the line-height property. When you set the line-height to the same value as the container’s height, the browser places the text in the vertical middle of that space because the line box expands to fill the container evenly.

Example: Center Text in Button

<button class="btn">Click Me</button>
.btn {
  height: 60px;
  line-height: 60px;
  text-align: center;
}

Output

[Output screenshot: the button label vertically centered]

Why It Works

This technique works best for elements with a fixed height, such as buttons, badges, or navigation items. However, it only works reliably when the text stays on one line, because multiple lines will break the vertical alignment.

Limitations

The main limitation of using line-height to vertically center text is that it only works for single-line text. If the text wraps onto multiple lines, the line-height no longer matches the container height for each line, causing the vertical centering to break.

This makes the method unsuitable for paragraphs, headings, or any content that can expand beyond one line, so it’s best reserved for buttons, labels, or other fixed-height, single-line elements.

Method 7: The Table-Cell Method (Old but Useful)

This method uses the table-cell technique to center content vertically and horizontally, a reliable approach for older CSS layouts and email templates. When you set a container to display: table; and its child to display: table-cell; with vertical-align: middle; and text-align: center;, the browser treats the child like a table cell and automatically centers its content.

Example

<div class="outer">
  <div class="inner">Centered</div>
</div>
.outer {
  display: table;
  width: 100%;
  height: 300px;
}

.inner {
  display: table-cell;
  vertical-align: middle;
  text-align: center;
}

Output

[Output screenshot: the inner content centered via table-cell layout]

How It Works

  • The .outer container acts as a table.

  • The .inner element behaves like a table cell.

  • Table cells automatically respect vertical alignment rules.

  • Combining vertical-align: middle; and text-align: center; perfectly centers the content both vertically and horizontally.

Why Use This Method

  • It works in older browsers that don’t fully support Flexbox or Grid.

  • It’s especially useful in email templates or legacy layouts.

  • No knowledge of height calculation or transforms is required.

Quick Decision Guide

Situation                    Best Method
Center text                  text-align
Center block horizontally    margin auto
Center anything modern       flexbox
Simplest full center         grid
Overlay/modal                absolute + transform
Single-line text vertical    line-height
Legacy/email support         table-cell

Common Beginner Problems (And Fixes)

Problem 1: “margin auto not working.”

You forgot the width.

width: 300px;
margin: auto;

Problem 2: “align-items center not working.”

Parent needs height.

height: 100vh;

Problem 3: “absolute centering weird position.”

Parent missing positioning.

.parent { position: relative; }

Problem 4: Flexbox vertical centering fails

Check direction:

flex-direction: column;

With flex-direction: column; the main and cross axes swap, so justify-content now controls vertical alignment and align-items controls horizontal alignment.

Pro Tips You’ll Use Forever

1. Flexbox = alignment tool

2. Grid = placement tool

3. Margin auto = layout tool

Different tools, different jobs.

Remember This Rule

  • If you are centering one thing, use Grid

  • If centering many things, use Flexbox

Summary

CSS centering often feels difficult because beginners expect a single magic property that works in every situation, but no such property exists. Instead, CSS provides multiple layout systems, each designed to solve specific alignment problems.

These include inline alignment for text and inline elements, flow layout for standard block elements, Flexbox for flexible row or column arrangements, Grid for two-dimensional layouts, and positioned layouts for absolute or fixed elements. Once you understand which system applies to your scenario, centering becomes predictable and much easier to implement.

The 7 Methods You Should Memorize

  1. text-align: center

  2. margin: 0 auto

  3. Flexbox centering

  4. Grid place-items: center

  5. Absolute + transform

  6. Line-height trick

  7. Table-cell fallback


