Microsoft CEO Satya Nadella speaks at OpenAI DevDay on Nov. 6, 2023 as Sam Altman looks on. (GeekWire File Photo / Todd Bishop)
The news: OpenAI and Microsoft are reworking the terms of their multibillion-dollar partnership, according to a report from the Financial Times.
Background: OpenAI announced last week that it is moving forward with a restructuring plan to create a for-profit public benefit corporation, controlled by its nonprofit parent. OpenAI needs sign-off from Microsoft — which has invested more than $13 billion in the ChatGPT maker since 2019 — to proceed with the plan.
What’s new: The Financial Times reported that Microsoft could take a smaller equity stake in the new entity in exchange for extended access to OpenAI’s technology beyond the previously agreed-upon 2030 cutoff.
Microsoft provides computing capacity for OpenAI’s services — and uses OpenAI technology in its own products, including Microsoft Copilot.
The Information reported last week that OpenAI plans to reduce the percentage of revenue it shares with Microsoft as part of the restructuring.
Also on the table: The restructuring could enable a future OpenAI IPO, according to the Financial Times.
The Trump administration has reportedly fired Register of Copyrights Shira Perlmutter, who leads the US Copyright Office, following the office’s release of a pre-publication version of its report on whether training AI on copyrighted works qualifies as fair use.
Representative Joe Morelle, the ranking Democrat of the Committee on House Administration, called her firing an “unprecedented power grab with no legal basis,” linking the firing directly to her report, which he says amounted to her refusing “to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”
Among the report’s conclusions: the fair use status of AI training “will depend on what works were used, from what source, for what purpose, and with what controls on the outputs—all of which can affect the market.” The report says research and scholarship might be fair use, but that many other AI uses might not be:
But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.
University of Colorado law professor Blake Reid called the report a “straight-ticket loss for the AI companies” in a post published before reports emerged that Perlmutter had been fired, writing that he wondered “if a purge at the Copyright Office is incoming and they felt the need to rush this out.” Reid wrote that although the Copyright Office generally can’t “issue binding interpretations of copyright law,” courts turn to its expertise when drafting their opinions.
Whether the Copyright Office’s release of its findings is the reason Perlmutter was cut loose or is just very curious timing isn’t clear, as the White House doesn’t seem to have commented on it. Copyright law expert Meredith Rose questioned the link, calling the report “113 pages of ‘well, it depends!’” and adding that “people who find that offensive enough to call for her ouster would have to be utter lunatics—on EITHER side of this fight.”
Confusing the issue further, President Trump reposted (or ReTruthed, if you like) commentary on news of Perlmutter’s firing by a Truth Social account attributed to Mike Davis, a former legal clerk for Neil Gorsuch who was rumored last year to be Trump’s pick for Attorney General.
“Now tech bros are going to attempt to steal creators’ copyrights for AI profits,” Davis wrote while linking to a CBS News story, “This is 100% unacceptable.”
The day the Copyright Office’s report was released, President Trump also fired Librarian of Congress Carla Hayden, whose department the Copyright Office is part of. As NPR reports, White House press secretary Karoline Leavitt claimed, without specifics, that Hayden had done “concerning things … in the pursuit of DEI and putting inappropriate books in the library for children.” Every book published in the United States goes into the Library of Congress.
In March, GitHub CEO Thomas Dohmke joined Microsoft CEO Satya Nadella in Seoul, South Korea, for a stop on the “Microsoft AI Tour.” The event promised in-depth skilling sessions to help attendees learn the Copilot AI Stack.
That initiative is just one way Dohmke has been promoting GitHub’s commitment to our AI-powered future — while still affirming GitHub’s deep allegiance to programmers.
It’s not just that everyone wants to hear what GitHub’s CEO has to say. In the interview, and in comments shared with The New Stack, Dohmke took a stand, explaining why AI will revolutionize the way code gets written, finally “democratizing” access to the power of programming while bringing extra speed and productivity to developers everywhere.
But he also explained why — even in the age of AI — those people saying we’ll no longer need to learn how to code are wrong.
Beyond ‘Read-Only Mode’
“I’ve been developing software since the early 1990s,” Dohmke said at the start of the video. He introduced himself modestly, before adding: “Today my role is mostly being the GitHub CEO, leading the largest developer platform on this planet.”
But underneath it all, Dohmke loves programming. Asked for advice for the next generation, the first thing he said was “you’ve got to learn coding.” Since we’re carrying hardware and software with us every day — and it’s in the world around us — “I think as humans it is crucial to not only be in ‘read-only mode,’ but also be able to create things ourself… At least understand how creation is done on these devices.”
But then he added, “I think #2 is you’ve got to use AI to do that.”
As Dohmke sees it, AI “democratizes access to technology” (as well as access to many other things).
One specific reason: “In Germany most kids — and in fact most people — don’t speak fluent English, which is the primary language of software development. And so having an agent available that answers any question but also lets you realize your dream of building your dream is incredibly exciting.”
Dohmke expanded on his vision in an email interview with The New Stack. “With AI we will soon realize a world where anyone can create software just as easily as uploading a video to TikTok,” he said, adding that “the starting point will often be a prompt, written in natural language.”
It’s a long way from his programming start. In the YouTube interview, Dohmke remembers being a teenage programmer in East Germany, at a time when “there wasn’t even the internet — or I certainly had no internet access. So I had to figure it out all by myself, with books, with magazines, going to a computer club in the community center kind of hoping that somebody will be there.”
So five years into the age of AI, there’s one thing that’s absolutely clear to him: “AI makes software development so much more accessible for anyone who wants to learn coding.”
Speed and Productivity
Where does GitHub Copilot fit into this future? A GitHub spokesperson sees the tool as already evolving “into a true peer-programming agent,” with updates this year even allowing the coding assistant to suggest the next logical change to your code.
And GitHub Copilot has now surpassed 15 million users, more than four times as many as a year ago.
From his own experiences, Dohmke knows Copilot can bring many other benefits. The truth is many programmers he knows have also felt the pain of abandoning half-finished projects, he said in the YouTube interview, “because you ultimately realize it’s much more complex than you thought, and it’s not worth spending the time on it. So I think AI helps us to realize the dream of taking an idea and implementing it much faster.”
Later Dohmke said he sees AI “completely changing how software developers work,” making them “10%, 20%, maybe even 50% more productive.” And using AI also gives him glimpses of the answer to the ultimate question: “How much more do we still have to do as an industry to actually get to that dream of having an orchestra of agents that we’re controlling during our personal and our professional lives?”
“I think actually that’s one of the true superpowers of AI, whether it’s learning to code or exploring the world. You have an assistant available to you that has infinite patience.”
In short, “I’m daily excited about what we’re building there.”
The Need for Programmers
Dohmke stressed in our email interview that AI won’t replace the need for human programmers. “What happens when bugs and vulnerabilities are introduced to the source code — or the software breaks?”
“Every person who builds software will need to be able to maintain their own software as well. And we will continue to need professional developers to fix big problems that the everyday person can’t, more than ever.”
It’s for that reason that he has a clear understanding of what’s required for the future. “Instead of encouraging kids not to code, I am convinced that every country and school system should introduce universal coding classes beginning at an early age.”
“Coding should be a core part of our global educational curriculum, just like literacy, math, history, physics and arts.”
As AI assumes larger roles in our society, coding literacy becomes even more important. “As we advance towards a future with AGI, it’s critical that we understand how to program and reprogram machines that are thinking and delivering on our behalf.
“AI must be autonomous only under our direction.”
AI for Code Verification?
In our email interview Dohmke also shared that he’s received positive reactions to his video appearance — and that it’s been a breath of fresh air. “Saying we no longer need coding education because of AI is like saying math became obsolete when calculators were invented. It doesn’t add up.”
Students — and all of us — are trying to “develop and evolve our critical thinking skills” to be in a position to “use the right tools at the right time, and verify their output.”
AI also has a role there. In a post on LinkedIn, Dohmke notes that Copilot “can now iterate on code, recognize errors and fix them automatically. This comes in addition to other Copilot agents like Autofix, which helps developers remediate vulnerabilities, and our code review agent, which has already reviewed over 8 million pull requests.”
So it’s not just for creating code stubs. A GitHub spokesperson pointed out this week that companies like Twilio, Cisco, HPE, SkyScanner and Target “continue to choose GitHub Copilot to equip their developers with AI throughout the entire dev life cycle.”
Dohmke underscores the point in the YouTube interview, stating point blank that GitHub “wants to be on the forefront of AI code generation.
“We want to provide tools to developers to be more productive and more happy when writing code.”
Evolving Rapidly
Maybe it’s all what you’d expect from a man whose profile on X, formerly known as Twitter, says he’s “building GitHub Copilot for the sake of developer happiness.”
But Dohmke seems to believe it may truly change the way we code — and GitHub is ready. Dohmke’s LinkedIn post applauds the teams at GitHub as being “committed to rapidly evolving our product with sustained velocity.” But it goes on to say that “what started as the first AI pair programmer is soon evolving into a software engineering agent, embedded right where your code lives — and with it, GitHub itself will become not only the home of your repos, but also for your agents.”
Yet even here, Dohmke still seems uniquely aware of the value of human programmers. In the YouTube video, he said, “I don’t think we’re anywhere close to a world where you can just write a single prompt and say ‘Build GitHub,’ and then an AI agent builds all of the features of GitHub, or even just the very basic primitives like repository storage, you know, Git storage and issue tracking.”
There are thousands of complex decisions that go into architecting a system, decisions made by everyone from developers and engineers to product managers, about frameworks, languages, operating systems, and whether or not to use the cloud.
“Getting to a point where agents can make all these decisions and write an app that actually is a viable business — you know, finds market fit, has a great user experience, and ultimately generates both revenue and profit. That I think we are quite far away from, and so we need engineers to do engineering stuff. They need to exercise their craft and apply systems thinking and design, and build really great applications.”
‘You’re Never Done’
This led Dohmke to one last piece of advice: to always be learning. “You’re never done.” And it seems to be a lesson that’s drawn from his own life.
“If I look back 30 years — what development looked like then, and what it looks like now — I would’ve been very behind if I hadn’t constantly read blog posts, literature and tried out things myself.”
The difference now? “We just have so much more access to information.”
In this post, I argue that technical writers should actively challenge ideas they find problematic, drawing inspiration from Jonathan Rauch's The Constitution of Knowledge. Rauch argues that truth arises from social debate and critique. Taking it a step further, I encourage trusting internal red flags or intuition when something feels amiss, even if the reasons aren't immediately clear. When direct confrontation is challenging, use open-ended, clarifying questions to investigate concerns and collaboratively explore issues.
Nabeel Qureshi is an entrepreneur, writer, researcher, and visiting scholar of AI policy at the Mercatus Center (alongside Tyler Cowen). Previously, he spent nearly eight years at Palantir, working as a forward-deployed engineer. His work at Palantir ranged from accelerating the Covid-19 response to applying AI to drug discovery to optimizing aircraft manufacturing at Airbus. Nabeel was also a founding employee and VP of business development at GoCardless, a leading European fintech unicorn.
What you’ll learn:
• Why almost a third of all Palantir’s PMs go on to start companies
• How the “forward-deployed engineer” model works and why it creates exceptional product leaders
• How Palantir transformed from a “sparkling Accenture” into a $200 billion data/software platform company with more than 80% margins
• The unconventional hiring approach that screens for independent-minded, intellectually curious, and highly competitive people
• Why the company intentionally avoids traditional titles and career ladders—and what they do instead
• Why they built an ontology-first data platform that LLMs love
• How Palantir’s controversial “bat signal” recruiting strategy filtered for specific talent types
• The moral case for working at a company like Palantir
—
Brought to you by:
• WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs
• Attio—The powerful, flexible CRM for fast-growing startups
Lenny may be an investor in the companies discussed.