I read Edwin van Wijk’s excellent five-part series on The Dainosaur the other day and it hit close to home. Edwin and I sat in the same lecture halls for our IT bachelor’s (remember FCO-IM?). We graduated in the same cohort and started our first real jobs on exactly the same day, July 1st 1999, at the same company. We spent sixteen years there together before I moved on to Xpirit, now Xebia. Our careers ran on parallel tracks from the start. Same training, same company culture, same instincts about what good software looks like. So when he writes about moving from long-earned skepticism to serious, eyes-open use of AI, I recognize the whole trajectory because I lived a version of it myself.
Edwin’s series is not a hype piece. He starts from the same place most senior people did: decades of surviving hype cycles and a healthy distrust of anything that sounds too shiny. He treats AI less like a magic wand and more like a capable but unreliable colleague. Useful for angles, prompts, and acceleration, but never exempt from review. The real differentiator, he says, is not chasing better models but clarity: context, constraints, quality criteria, and disciplined review. Context engineering is the actual craft. That maps straight onto the architecture habits we were both trained in. Document assumptions, list rejected options, draw the boundaries.
The skepticism was earned
My resistance to AI was never about nostalgia or fear of change. Our industry changes all the time. It always has. If you want to stay relevant, you learn, adapt, and keep moving. That was never the issue. I’m also not blind to the novelty and impressive technology surrounding Gen AI.
Until the end of 2025, LLMs were mostly fancy autocomplete to me. Helpful for finding bugs or explaining code I did not work with daily, but serious coding efforts often produced slop. I have strong views on how architecture should feel and what good code should look like. My experience (mostly with GitHub Copilot inside Visual Studio Code) was that I spent more time cleaning up cruft and over-engineering than I gained from it. Coding is still a direct expression of the solutions I design in my head. Code (C# in my case) is sometimes the clearest way to capture domain rules or algorithms. Typing natural language to get there felt like working through a filter. I still feel that sometimes.
The same goes for aesthetics. I see a lot of AI-generated text, images, music, and video as soulless middle-of-the-road slop. There is a kitschy shine to it that I can smell from a mile away. Code, being more neutral and machine-oriented, suffers less, but early outputs were repetitive spaghetti all the same. I refuse to waste time on lazy, ugly, bland content. Einar W. Høst’s recent post “I Refuse to Play the Imitation Game” captures how I feel about this perfectly: stop trying to sort bad human text from bad AI text and just move on to something worth your time.
Useful does not mean magical
What changed for me was the alignment of better models, better tools, and better workflows. Around the end of 2025, Claude Opus 4.5/4.6 and Claude Code gained momentum and many in the industry had their “Claude Code moment”. The IDE-first approach in Copilot always felt clunky: the conversation with the AI stayed secondary, bolted on. Tools like Claude Code flip this: the dialog becomes primary, with the codebase, tools, and skills attached. OpenCode became a prominent driver in my toolbelt because it lets me use every model in my Copilot subscription and keeps experimenting easy. I initially used OpenSpec to shape design constraints in green-field work or new features, but that “lightweight spec-driven framework” grew heavy and bloated after a few iterations. Later I moved to Compound Engineering, or simply alternating between Plan and Build modes, and started to get acceptable results. Adding context-aware MCPs like context7 brought in up-to-date framework knowledge, so the model stopped hallucinating obsolete NuGet packages or outdated .NET habits.
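For anyone curious what that last step looks like in practice: wiring an MCP server such as context7 into a coding agent is mostly configuration. As a minimal sketch, a project-level `.mcp.json` for Claude Code could register the server like this (the exact file location and package name depend on your tool and version, so treat this as an illustration rather than a recipe):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

With a server like this registered, the agent can pull current library documentation at prompt time instead of relying on whatever snapshot ended up in its training data.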
The generated code became better structured, and I saw it pick up on patterns and conventions in existing codebases. I also started to figure out the right scope of changes that I could still review and judge properly. AI stopped feeling like an over-eager intern I had to babysit and started feeling like leverage. I could go back to being the engineer instead of the childcare supervisor.
Yes, I have embraced using AI as a main driver for engineering work. I apply the same principles Edwin describes: context first, disciplined review, senior judgment in the loop. But I do not buy the hype that English is the new programming language. People claiming that have probably not touched serious production code in at least ten years. And to the people exclaiming “I can finally unleash my creativity without having to tell those pesky developers what I want”: congratulations, and good luck with that. I do not trust anyone selling that stuff near production code. Building a SaaS replacement in an afternoon on http://localhost:3000 is not the same as bringing it to production, hosting it, operating it, and maintaining it for years. I think the industry is heading for a FAFO moment. A lot of programs will ship that “developers” without judgment cannot properly evaluate. The fool-with-a-tool adage still applies here.
Accessibility is a real gain. AI does lower the barrier to making things. Prototypes become feasible faster. Sloppy code in a true prototype is fine. What makes me angry is when a vibe-coded demo gets mistaken for the real thing. I have seen PowerPoint mockups sold as finished products before. Now the same thing happens with AI prototypes. The cost of weak architecture, security holes, operational fragility, and dependency chaos just shifts downstream. Initial creation looks cheaper but later cleanup gets more expensive.
I embraced the utility, not the culture
The cultural side of AI still turns me off. The grifting, the gloating about entire job categories disappearing, the sneering “adapt or die” rhetoric, the reduction of human worth to productivity math. Sam Altman comparing the energy cost of training a model to raising and training a human is a perfect example of the tone-deaf attitude that nearly pushed me out of the industry.
The vendor dependency bothers me too. We are outsourcing actual software development to machines owned by a handful of parties. Lock-in was always something we tried to avoid. Now it is sold as progress. Recent outrage about pricing changes and rate limiting from Anthropic already shows where this is heading.
Add the ethical stains around training data, the low-wage human reviewers sifting through terrible content, and the environmental cost of those massive data centers, and the picture gets darker. Karen Hao’s “Empire of AI” is worth reading as a counterweight. Or read Patrick Galey’s “101 reasons to not use Gen AI” to curb your enthusiasm a bit.
What it cost me
On a personal level I had to let go of something real. I genuinely enjoy writing code. Once I have the direction clear, the physical craft of typing, refactoring, discovering flaws, and seeing it work puts me in a flow state. That direct connection between brain and hands is how I get ideas into the world. It is the same feeling I get playing guitar, cooking, baking bread, or drawing. AI helps, but it inserts a layer between me and part of that craft. I still touch code when I need to fine-tune or debug, but the tactile pleasure of molding every important line myself is less frequent now. That loss is real even though the productivity gain is real too.
Why Edwin’s series matters
I appreciate Edwin’s honesty and openness in publishing the series. He took the risk of saying he changed his mind at a moment when some people will sneer “I already knew that years ago” or “anyone who thinks they will still type a letter of code in six months is a lost cause”. I endure these mocking and dismissive comments in my own environment every day. But I saw the same bullshit he saw. I was not blind, but I was unwilling to pretend the early tools and the surrounding noise were better than they were. My bullshit sensor is still working fine and I intend to keep using it.
Edwin’s journey is very recognizable. AI is now useful enough in my daily work that I use it as a primary engineering tool. It fits, it accelerates, and it lets me focus on the parts that matter. But I embrace the utility, not the ideology. I keep my standards, my taste, and my right to call out nonsense when I see it. The tools changed but the need for judgment, architecture, and craft did not.
Human after all
I want to leave you with something that popped up in my timeline around the same time people were losing their minds over Claude Code and agents galore. It worked for me as a perfect counterbalance and showed me that human creativity still greatly outweighs AI slop. They look weird in their handmade costumes, the music is gnarly and funny, but it breathes and grooves like hell. And most importantly: they are extremely creative, virtuoso players: Angine de Poitrine. I got my tickets and look forward to experiencing their live show in August.


