Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The CAP Theorem Is Why Your Cloud App Sometimes Feels Off

There is a moment every cloud engineer seemingly has, whether they admit it or not. You open an application and something feels strange. A record you just saved is not there yet, a dashboard shows two different answers depending on where you look, or a system insists an action never happened even though you just performed it. At some point, a smart-sounding person says “eventual consistency,” everyone nods, and the conversation moves on without anyone actually feeling satisfied by the...


OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe

Conceptual 3D render of a row of dark protective shields with one shield glowing in bright gold, symbolizing advanced cybersecurity, data protection, and secure sandboxing.

In a blog earlier this February, Snyk engineers said they scanned the entire ClawHub (the OpenClaw marketplace) and found that over 7 percent of the skills contained flaws that expose sensitive credentials. “They are functional, popular agent skills that instruct AI agents to mishandle secrets, forcing them to pass API keys, passwords, and even credit card numbers through the LLM’s context window and output logs in plaintext,” they reported.

OK, so we know OpenClaw is a security “Dumpster fire” right now, as we have reported.

I looked at Deno some time ago; it treats TypeScript as a first-class citizen. I couldn’t help noticing this detail in their recent Sandbox update:

You don’t want to run untrusted code (generated by your LLMs, your users’ LLMs, or even handwritten by users) directly on your server. It will compromise your system, steal your API keys, and call out to evil dot com. You need isolation.

Deno Sandbox gives you lightweight Linux microVMs (running in the Deno Deploy cloud) to run untrusted code with defense-in-depth security.

OK, sandboxes aren’t new, but Deno’s deployment environment caught my attention.

Deno and Deno Deploy

Well, it’s been a while since my last article about Deno and TypeScript, so I’ll speed through my example just to make sure I still remember everything before we check out the new sandbox stuff.

So let’s install Deno on my Mac. Fortunately, this looks the same as before:
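That is the usual one-liner from the Deno docs (assuming a typical macOS shell; the installer detects your setup):

curl -fsSL https://deno.land/install.sh | sh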

As before, Deno correctly detected my shell. After restarting it, I checked everything was hunky-dory:
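A version check is enough for that (your version numbers will differ):

deno --version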

So I’m not a TypeScript guy, and yet in that article I wrote a bit of code to persuade myself that TypeScript is just looking at contents for equivalence. (Check out the post for more on how an OOP developer can grok TypeScript.)

class Car {
  drive() {
    // hit the pedal to the floor
  }
}
class Golfer {
  drive() {
    // hit the ball far
  }
}
// No error?
let w: Car = new Golfer();


So let’s do what we did last time and use a project initializer to run a TypeScript test.

I replace the main.ts with my drive method example from above and run it:
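For reference, the relevant commands (the project name is my own; note that deno run no longer type-checks by default, so deno check is what actually exercises the type system):

deno init structural-test
cd structural-test
deno check main.ts
deno run main.ts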

So Deno handles my TypeScript as a first-class citizen, and the result proves that TypeScript is a structural type system. But let’s get to the good stuff and sign into Deno itself:

Before we can use a sandbox, we need to jump through a small verification hoop:

Don’t worry: it just checks that your credit card exists, using the handy Stripe Link prompt that appears on your phone looking rather like a phishing request. Now we can set up; I’ll be following the right-hand column, the code integration route:

Now we have the typical problem of connecting our identity to our requests. You can create a sandbox directly in code, which is neat, but first we need a token.

So I’ll create an organisation token to connect my identity to Deno. I installed the SDK as the panel above suggested and created a token using the nice blue button. One small gripe here: the terms “access token”, “organisation token” and “deploy token” seem to be used interchangeably.

OK. After setting the DENO_DEPLOY_TOKEN environment variable in my shell, we should be ready to run some code and create our very own sandbox in Deno’s cloud.
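In other words (with the actual token value elided):

export DENO_DEPLOY_TOKEN="<your organisation token>"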

I save the following code as main.ts. I’m going to assume await is some sort of promise, as this is clearly asynchronous code. (The term “await” is also familiar enough in Victorian prose.)

import { Sandbox } from "@deno/sandbox";
// "await using" disposes of the sandbox automatically when it goes out of scope.
await using sandbox = await Sandbox.create();
// Run a shell command inside the remote microVM.
await sandbox.sh`echo "Hello, world!"`;


Remember, to prove this happened, Deno will have to retain a record of the sandbox even after it has expired. And as we are dealing with a security solution, we do need to tell Deno that we are happy to use networking, with the right flags:
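In my case that meant something like this (--allow-net for the outbound connection to Deno Deploy, --allow-env so the SDK can read the token):

deno run --allow-net --allow-env main.ts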

OK, depending on how the statements are called, that appeared to work. Better proof comes from the sandbox appearing in my records:

We can see a little more detail in the instance from a nice filterable event log on the dashboard:

Well, that was just fine. I wrote some code on my laptop and ran it in a sandbox on Deno’s cloud. But we need to do a bit more to avoid the horrors of exfiltration.

Exfiltration shooter

What exactly is exfiltration? I could give the example of popular multiplayer games (you know them, or you don’t) whose very purpose is to drop an avatar into the game server, steal things, then escape. This can happen accidentally in real life, too; you have seen it when the press manages to read notes a politician made in a private meeting, because the politician walked confidently outside with the notes exposed in hand. In this case, the politician has misunderstood their safe boundaries, or has never reckoned with a photographer’s zoom lens.

This isn’t a security article, and I’m not Bruce Schneier, but you get the idea. You don’t want to run code in your cosy sandbox that captures secrets and escapes with them. One way to combat this is to restrict exit points; another is to obfuscate your private data while it resides within the sandbox. This is what Deno refers to as secret redaction and substitution.

Configured secrets never enter the sandbox’s environment variables. Instead, Deno Deploy substitutes placeholders for them, revealing the real values only when the sandbox makes outbound requests to an approved host.

I’ll walk partway through this process. We can set up a secret simply enough, along with the approved host to which it will be revealed:

await using sandbox = await Sandbox.create({
  secrets: {
    // The real key never enters the sandbox; Deno Deploy substitutes a
    // placeholder and swaps the true value back in only on outbound
    // requests to the approved host below.
    ANTHROPIC_API_KEY: {
      hosts: ["api.anthropic.com"],
      value: process.env.ANTHROPIC_API_KEY,
    },
  },
});


So this means that Deno will obfuscate the environment key it finds on my laptop, sending it to Anthropic with the real value revealed only after it leaves the sandbox:

I won’t make a real call to the LLM in the sandbox (I certainly could, as I can access the sandbox via the CLI and have it last as long as I need), but I’ll set up a secret in my laptop environment as if I were:
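That is, a placeholder key in my shell (the value here is obviously fake):

export ANTHROPIC_API_KEY="sk-ant-dummy-value-for-testing"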

And with my code altered:
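Something along these lines (a sketch; the echo is my stand-in for whatever inspection the original screenshot showed):

import { Sandbox } from "@deno/sandbox";

await using sandbox = await Sandbox.create({
  secrets: {
    ANTHROPIC_API_KEY: {
      hosts: ["api.anthropic.com"],
      value: process.env.ANTHROPIC_API_KEY,
    },
  },
});

// Per the substitution behaviour described above, this should print an
// opaque placeholder rather than the real key, since api.anthropic.com
// is the only approved host.
await sandbox.sh`echo $ANTHROPIC_API_KEY`;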

I’ll run the code and see what the value of the secret is in the Sandbox:

As I said, to fully prove this I’d have to contact Anthropic with my key and complete the round trip, but I’ll leave that to you.

(Diagram from a Deno tutorial video; it appears under the hosts as they demonstrate sandboxes.)

Conclusion

I focused on just one aspect, obfuscation, but you can also control the allowed outgoing addresses just as easily. And we’ve already looked at other aspects of the Deno Deploy service.

Obviously, the timing couldn’t be better. With the exponential increase in generated and untrusted code (that people nevertheless wish to trust), this type of service is gold dust. I’m sure it will be appearing in other services pretty soon.

The post OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe appeared first on The New Stack.


Fake Job Recruiters Hid Malware In Developer Coding Challenges

"A new variation of the fake recruiter campaign from North Korean threat actors is targeting JavaScript and Python developers with cryptocurrency-related tasks," reports The Register. Researchers at software supply-chain security company ReversingLabs say that the threat actor creates fake companies in the blockchain and crypto-trading sectors and publishes job offerings on various platforms, like LinkedIn, Facebook, and Reddit. Developers applying for the job are required to show their skills by running, debugging, and improving a given project. However, the attacker's purpose is to make the applicant run the code... [The campaign involves 192 malicious packages published in the npm and PyPI registries. The packages download a remote access trojan that can exfiltrate files, drop additional payloads, or execute arbitrary commands sent from a command-and-control server.] In one case highlighted in the ReversingLabs report, a package named 'bigmathutils,' with 10,000 downloads, was benign until it reached version 1.1.0, which introduced malicious payloads. Shortly after, the threat actor removed the package, marking it as deprecated, likely to conceal the activity... The RAT checks whether the MetaMask cryptocurrency extension is installed on the victim's browser, a clear indication of its money-stealing goals... ReversingLabs has found multiple variants written in JavaScript, Python, and VBS, showing an intention to cover all possible targets. The campaign has been ongoing since at least May 2025...

Read more of this story at Slashdot.


Sequoia CEO coach: Why it’s never been easier to start a company, and never been harder to scale one | Brian Halligan (co-founder, HubSpot)


Brian Halligan co-founded HubSpot, ran it as CEO for about 15 years, and now coaches Sequoia’s fastest-growing founders as their in-house CEO coach.

We discuss:

1. His LOCKS framework for evaluating founders

2. Why you should build your team like the 2004 Red Sox

3. Why hiring “spicy” candidates beats consensus picks

4. Why enterprise sales will be the last white-collar job AI replaces

5. Some of my favorite “Halliganisms”

Brought to you by:

Sentry—Code breaks, fix it faster: http://sentry.io/lenny

Datadog—Now home to Eppo, the leading experimentation and feature flagging platform: https://www.datadoghq.com/lenny

WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs: https://workos.com/lenny

Episode transcript: https://www.lennysnewsletter.com/p/sequoia-ceo-coach-why-its-never-been

Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Brian Halligan

• X: https://x.com/bhalligan

• LinkedIn: https://www.linkedin.com/in/brianhalligan

• Delphi: https://www.delphi.ai/bhalligan

• Podcast: https://sequoiacap.com/series/long-strange-trip

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Introduction to Brian Halligan

(03:56) The perpetual state of constructive dissatisfaction

(05:25) Coaching CEOs

(07:49) The art of interviewing and hiring

(11:21) Getting the most out of reference calls

(13:10) Homegrown talent vs. big company hires

(16:31) Traits of successful CEOs

(19:40) Brian’s LOCKS framework for evaluating founders

(21:34) Are great CEO’s born or made?

(23:41) Giving effective feedback

(25:54) The future of go-to-market strategies

(31:56) Understanding forward deployed engineers

(34:17) How the CEO role has evolved over the last 20 years

(38:10) Halliganisms

(01:01:18) The CEO’s role in scaling a company

(01:02:41) Lightning round and final thoughts

Referenced:

• Dev Ittycheria on LinkedIn: https://www.linkedin.com/in/dittycheria

• HubSpot: https://www.hubspot.com

• Parker Conrad on LinkedIn: https://www.linkedin.com/in/parkerconrad

• McKinsey & Company: https://www.mckinsey.com

• Brian Chesky’s new playbook: https://www.lennysnewsletter.com/p/brian-cheskys-contrarian-approach

• Jensen Huang on LinkedIn: https://www.linkedin.com/in/jenhsunhuang

• Winston Weinberg on LinkedIn: https://www.linkedin.com/in/winston-weinberg

• James Cadwallader on LinkedIn: https://www.linkedin.com/in/jsca

• Gabriel Stengel on LinkedIn: https://www.linkedin.com/in/gabestengel

• He saved OpenAI, invented the “Like” button, and built Google Maps: Bret Taylor on the future of careers, coding, agents, and more: https://www.lennysnewsletter.com/p/he-saved-openai-bret-taylor

• Scaling Entrepreneurial Ventures: https://orbit.mit.edu/classes/scaling-entrepreneurial-ventures-15.392

• OpenClaw: https://openclaw.ai

• Ruth Porat on LinkedIn: https://www.linkedin.com/in/ruth-porat

• Mike Krzyzewski: https://goduke.com/sports/mens-basketball/roster/coaches/mike-krzyzewski/4159

• Dalai Lama’s 18 Rules for Living: https://www.prm.nau.edu/prm205/Dalai-Lama-18-rules-for-living.htm

• Zigging vs. zagging: How HubSpot built a $30B company | Dharmesh Shah (co-founder/CTO): https://www.lennysnewsletter.com/p/lessons-from-30-years-of-building

• Kareem Amin on LinkedIn: https://www.linkedin.com/in/kareemamin

• Glassdoor: https://www.glassdoor.com

• Tobi Lütke’s leadership playbook: Playing infinite games, operating from first principles, and maximizing human potential (founder and CEO of Shopify): https://www.lennysnewsletter.com/p/tobi-lutkes-leadership-playbook

• Katie Burke on LinkedIn: https://www.linkedin.com/in/katie-burke-965767a

• Jerry Garcia: https://en.wikipedia.org/wiki/Jerry_Garcia

• Bob Weir: https://en.wikipedia.org/wiki/Bob_Weir

• Phil Lesh: https://en.wikipedia.org/wiki/Phil_Lesh

• Ron “Pigpen” McKernan: https://en.wikipedia.org/wiki/Ron_%22Pigpen%22_McKernan

• Marc Andreessen: The real AI boom hasn’t even started yet: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom

• The American Revolution: https://www.pbs.org/kenburns/the-american-revolution

• Delphi: https://www.delphi.ai

• Sonos: https://www.sonos.com

• Yamini Rangan on LinkedIn: https://www.linkedin.com/in/yaminirangan

• The Boston Red Sox: https://www.mlb.com/redsox

Recommended book:

• Marketing Lessons from the Grateful Dead: What Every Business Can Learn from the Most Iconic Band in History: https://www.amazon.com/Marketing-Lessons-Grateful-Dead-Business/dp/0470900520

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.



To hear more, visit www.lennysnewsletter.com



Download audio: https://api.substack.com/feed/podcast/187154837/0c611c6487a4ace2157de90893760367.mp3

Evolving AI Plans

Originally posted on my blog at: https://darkgenesis.zenithmoon.com
AI Tools evolving to make better AI Tools, inception at its finest

The world of AI is moving faster than most can keep up with; by the time you have mastered one pattern, several others have cropped up. If you do not keep a check on yourself, you could be left behind. If you thought things evolved fast in the ’80s and ’90s (I am old, I get it), then buckle up: we are now reaching lightspeed.

Note
Even my older AI blog posts are starting to feel old already, but I am enjoying the challenge!

New patterns for a new age

The Bueller quote heading this article has never been more true. It seems almost daily that I am reviewing the news and the latest posts from ALL of the major players involved with agents and LLMs, looking for new patterns, ideas and suggestions. Thankfully, everyone is riffing off each other in a good way: someone suggests or posts something, then someone else builds on it and goes that bit further, whether it is:

  • A new LLM logic.
  • Improved MCP interfaces and tools.
  • Patterns, plans and guidance.
  • Token reuse or conservation.

The list goes on. In some ways it is like the best parts of collaborative science (in others, a dark dystopian nightmare), ever creeping forward.

This post was sparked after I followed up on some of these reports and started to see REAL IMPROVEMENT in several key factors in my own research, namely:

  • Reduced hallucinations (not gone completely, but significantly reduced).
  • Improved memory (yes, you can have memory if you do it right).
  • Improved outcomes.

It almost sounds too good to be true, but my workflow has improved, especially on some dedicated research projects I continually use to test new patterns.

Warning
This is NOT a silver bullet; to get more out, you MUST put more in, and thus the cost, depending on what you are using, will INCREASE.
My recommendation, in line with where today’s market is heading, is to HOST your own LLM infrastructure (people have it running on Raspberry Pis now). It is possible with a little effort, and you can reduce your downstream costs to all but the critical path.
Sadly, this is out of scope for this article, but I will try to add more later.

So, what are these patterns I tease?

Evolving patterns

The updated patterns I have started employing recently fall into two categories with a single shared theme:

Shared theme — The living document

The core of the recent updates is to create a LIVING DOCUMENT at the heart of any process. This is a SINGULAR document (and you MUST be explicit about that, as some agents, looking at you Claude, will randomly go off and create masses of documents you did not ask for) which ALL agent calls must update with their state, or which you instruct the agent to update at critical junctures.

I have found this works far better than using the Memory MCP or constantly updating the agent’s instruction guide, and, as a separate document, you can reference it from all other material, including the instructions. In testing, it has been followed far more efficiently and rigorously than instructions or memory alone.

  • You start the session with the instructions pointing to the living document.
  • The agent reads the current state with suggested next steps.
  • It validates any failures or bad paths from previous attempts.
  • It then plans ahead!

Your mileage may vary, but this singular change has VASTLY improved my outcomes, granted at the cost of the additional tokens needed to read the document. You “can” summarise the document in Agent format if you wish, but I have found this actually degrades its performance and use.
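For illustration, a minimal sketch of what one of my living documents looks like (the section names are my own habit, not a prescribed format):

# Living Document: <project>
## Current state
- Asset pipeline complete; texture converter still fails on mipmaps.
## Known bad paths
- Do NOT retry parsing format X with library Y; it corrupts palettes (see earlier session notes).
## Next steps
1. Fix mipmap handling in the converter.
2. Then wire the converter into the build.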

Patterns

Now to the other meat on the bone: the revised patterns that consume the living document. I have long stated that LLMs are best tuned to small tasks, a quick fix or a diagnosis; where you need to dig deeper, you have to keep a better handle on the outcome. These are the methods I have now turned to:

It all starts with a defined and reviewed plan

By far my biggest improvement in LLM use has come from focusing and narrowing its path, making it think and constantly challenging its assumptions. It is NOT foolproof, but it greatly improves your chances of a positive outcome. The whole approach (which is token-heavy) is an almost constant cycle of review, approval, implementation and review again, which, when implemented, looks something like this:

  • First Agent session: create the PRD (Product Requirements Document) from a detailed set of requirements; this ensures a list of preferences and outcomes.
  • Build an instruction document using the PRD as a reference, guiding the steps for planning, with KEY steps to avoid creating documents unless requested (this stops the multi-document scenario). Having both might sound counterintuitive; however, they serve very distinct purposes. One directs how to think, the other tells the LLM what to think about.
Important
Make sure to leave off with a note about the aforementioned Living Document: instruct the agent to check the living document at the start of a prompt, update it with its intent, and finish the query with a statement in the living document as to the outcome. This trains and helps the Agent learn, and gives it something to pick up on if you start a completely fresh session.
  • Next, start a completely new session (even with another agent if you wish), define the session as a reviewer/architect actor, and have it review and improve the plan. Take note of what has changed and follow its thinking. Make corrections where YOU disagree. It is essential that you are part of this review process, as key decisions that were perhaps not clear in the design become apparent (as often happens in life, poor design leads to poor outcomes).
Note
You can repeat the previous step a few times with different defined actors, different personas, just as you would in real life. The best plans involve a team, and in this case, it is a team of agents AND YOU!
  • At the end, ensure the plan begins with a Living Document record that is updated as the implementation is delivered.

Once you are happy with the plan, the next step begins; either of the following two patterns can be taken:

Plan big but follow your own path

With the plan in hand, instruct the Agent to document and detail the plan for implementation. ALWAYS finish with “If you have questions, ask them before beginning” (if you do not, it will not, and it ABSOLUTELY SHOULD). The outcome choice is up to you:

  • A singular implementation document.
  • Several documents, one for each stage or component, ordered for implementation.
  • A mix, depending on your style: backend first, then frontend and UX.

Ultimately, this is a plan YOU will follow. It is my preferred mode, as it gives more opportunity to question as you implement, or even to change your choices. All the agent has effectively done is ratify the architectural state based on your inputs; it is arguably the most human way to use these tools to achieve your goals.

If anything is unclear, ask the LLM to clarify, make changes or explain something, “BUT ONLY TO THE PLAN”. The Agent should NOT touch the code; that is your domain.

Note
In my experience, when instructing the planning phase, I also interject to ask the Agent to explain each section or block of code: its intention, what it is meant to achieve, or what it is supposed to do. This helps judge whether the implementation is the best thing to do. If you disagree, get it to update its plan, or make your own implementation, have the Agent update the spec, and then review for ancillary impacts.

This approach feels more like the Agent working cooperatively than running the show while you wait to see what it comes out with.

Important
At each stage, with each change, instruct the Agent to update the living document. It might already be recorded in the Instructions, but I feel safer double-checking at critical points.

At all points, question everything; it is simply good for the soul and makes the challenge more fun. All that is left is to continue to the end and test everything; the results, I have found, are vastly improved.

Tip
Another fun step, at the cost of tokens, is to ask the Agent from time to time to review what you have actually implemented against the plan and give a status report (this helps reduce human error), marking the document with AI’s favourite things, icons and emojis, and providing a summary that gives you a visible checkpoint of your progress.

Automated plan, automated deployment

The automated plan follows a similar track to the manual implementation path, except that you break the plan into repeatable and testable sections. It is not as efficient as the manual path, but for shorter, more throwaway experiments it can be beneficial.

Note
In some cases, I am actually running the two approaches in parallel, using the fast route to test theories in advance before incorporating them into plans, similar to prototyping.

Rather than the human doing the implementation, the agent is running the show, but CRITICALLY, not all at once. To avoid hallucinations and dreams of code, each section must be complete, testable and ultimately HUMAN VERIFIABLE.

An example of this was a total conversion of an old C++ game sample that used some REALLY legacy assets. The plan broke down as follows (granted, after many failed attempts in the past):

  • Review the project and break up / document the systems and content of the sample.
  • Research old asset formats and create detailed documentation of their makeup (critical for migration).
  • Build out sample sets of each content type and define a migration strategy.
  • For each asset type, build individual pipelines, with human verification at each step.
  • Then implement the migration in individual sessions or phases, each phase does NOT complete until the human in the process signs off on it.
  • Then plan the implementation, in a stackable way, each implementation building on the last, again with human signoff.

Taking this phased approach with signoff largely avoids mass delusion and lots of wasted tokens on something that can never ultimately work.

Important
The human as part of the process is essential, not just for the quality of the output, but also for efficiency in costs and tokens. Yes, it is more effort than vibe-coding a website or app, but the results are FAR superior.

Conclusion

Thus ends this page in our journey, but the destination is still far from sight.

Learning should never end, and we should always strive for improvement; in my humble view, that means cooperation with an agent, not just handing over the keys on a prompt, no matter how detailed, as that approach is ultimately flawed. We learn more along the journey, take those understandings, and evolve a better plan. It is strange: in my many years as a Project Manager, designer, QA and even analyst, this is ultimately how we humans work better. We plan, we question, we revise, and ultimately we deliver a better outcome. It might not be perfect, but from it, we continue to build better.

Enough musing; back to the code, which is where I feel most at home, with my new pal sitting on my shoulder. (Still not sure whether it is a little angel or a devil, but let us see where this leads.)


AI: Igniting the Spark to End Stagnation


Much of the West has been economically stagnant. Countries like Canada have failed to improve their productivity and standard of living of late. In Canada, there has been no progress in living standards, as measured by per-person GDP, over the past five years. It is hard to overstate how anomalous this is: the USSR collapsed in part because it could only sustain a growth rate of about 1%, far below what the West was capable of. Canada today is more stagnant than the USSR was.

Late in 2022, some of us got access to a technical breakthrough: AI. In three years, it has become part of our lives. Nearly all students now use AI to do research or write essays.

Dallas Fed economists have projected the most credible effect AI might have on our economies: it should help reverse the post-2008 slowdown and deliver higher living standards in line with historical technological progress.

This implies a profound and rapid, yet gradual, transformation of our economy. There will still be teachers, accountants, and even translators in the future, but their work will change as it has changed in the past. Accountants do far less arithmetic today; that part of their work has been replaced by software, and even more of it is about to be, improving their productivity further. We will still have teachers, but all our kids, including the poorest, will have dedicated always-on tutors, and not just in Canada or the USA, but everywhere. It is up to us to decide who is allowed to build this technology.

AI empowers the individual. An entrepreneur with a small team can get faster access to quality advice, copywriting, and so forth. Artists with an imagination can create more with fewer constraints.

I don’t have to prove these facts: they are fast becoming obvious to the whole world.

New jobs are created. Students of mine work as AI specialists. One of them helps build software providing AI assistance to pharmacists. One of my sons is an AI engineer. These are great jobs.

We often hear claims that artificial intelligence will consume vast amounts of energy and water in the coming years. It is true that data centers, which host AI workloads along with many other computing tasks, rely on water for cooling.

But let’s look at the actual water numbers. In 2023, U.S. data centers directly consumed roughly 17.4 billion gallons of water—a figure that could potentially double or quadruple by 2028 as demand grows. By comparison, American golf courses use more than 500 billion gallons every year for irrigation, often in arid regions where this usage is widely criticized as wasteful. Even if data-center water demand were to grow exponentially, it would take decades to reach the scale of golf-course irrigation.
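A rough back-of-the-envelope using the figures above makes the point (my arithmetic; the five-year horizon comes from the 2023-to-2028 projection). At the aggressive end, quadrupling every five years, data centers would need

$t = 5 \log_4 (500 / 17.4) \approx 12$ years

to match today's golf-course irrigation, and at mere doubling, $t = 5 \log_2 (500 / 17.4) \approx 24$ years.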

On the energy side, data centers are indeed taking a larger share of electricity demand. According to the International Energy Agency’s latest analysis, they consumed approximately 415 TWh in 2024—about 1.5% of global electricity consumption. This is projected to more than double to around 945 TWh by 2030 (just under 3% of global electricity). However, even this rapid growth accounts for less than 10% (roughly 8%) of the total expected increase in worldwide electricity demand through 2030. Data centers are therefore not the main driver of the much larger rise in overall energy use.
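The same figures make the scale concrete (again, my arithmetic): data centers add $945 - 415 = 530$ TWh of demand by 2030, and if that is roughly 8% of the total growth, the world as a whole adds about $530 / 0.08 \approx 6{,}600$ TWh, most of which has nothing to do with AI.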

If we leave engineers in Australia, Canada, or Argentina free to innovate, we will surely see fantastic developments.

You might also have heard about the possibility that ChatGPT might decide to kill us all. Nobody can predict the future, but you are surely more likely to be killed by cancer than by a rogue AI. And AI might help you with your cancer.

We always have a choice. Nations can try to regulate AI out of existence. We can set up new government bodies to prevent the application of AI. This would surely dampen the productivity gains and marginalize some nations economically.

The European Union has shown it can be done. By some reports, Europeans make more money by fining American software companies than by building their own innovative enterprises. Countries like Canada have economies dominated by finance, mining, and oil (with a side of Shopify).

If you are already well off, stopping innovation sounds good. It does not if you are trying to get a start.

AI is likely to help young people, who need it most. They, more than any other group, will find it easier to occupy the new jobs and start the new businesses.

If you are a politician and you want to lose the vote of young people, make it difficult to use AI. It will crater your credibility.

It is time to renew our prosperity. It is time to create new exciting jobs.


References:

Wynne, M. A., & Derr, L. (2025, June 24). Advances in AI will boost productivity, living standards over time. Federal Reserve Bank of Dallas.

Fraser Institute. (2025, December 16). Canada’s recent economic growth performance has been awful.

DemandSage. (2026, January 9). 75 AI in education statistics 2026 (Global trends & facts).

MIT Technology Review. (2026, January 21). Rethinking AI’s future in an augmented workplace.

Davis, J. H. (2025). Coming into view: How AI and other megatrends will shape your investments. Wiley.

Choi, J. H., & Xie, C. (2025, June 26). AI is reshaping accounting jobs by doing the boring stuff. Stanford Graduate School of Business.

International Energy Agency. (n.d.). Energy demand from AI.

University of Colorado Anschutz Medical Campus. (2025, May 19). Real talk about AI and advancing cancer treatments.

International Energy Agency. (2025). Global energy review 2025.
