Reflection on Fowler, AI Survey Trends
For this week's reflection, I watched Gergely Orosz's interview with Martin Fowler and came away with a different perspective on the rise of AI, specifically realizing that the real shift isn't about abstraction, but about moving from deterministic to non-deterministic tools.
The Biggest Shift Isn't What You Think
Martin Fowler's take on AI isn't really about hype or fear. It comes down to one specific observation that most people are missing. Everyone frames AI coding tools as just a new level of abstraction: assembly to high-level languages, and now code to plain English. It's a neat story, but Fowler thinks it misses the actual point.
It's not the level of abstraction that's changed. It's determinism vs. non-determinism.
Every tool developers have ever used has been deterministic. A compiler, a database, an IDE. Same input, same output, every time. LLMs break that completely. You can send the same prompt twice and get different results. You can't unit test a prompt the way you test a function.
Fowler compares it to structural engineering. His wife is a structural engineer, and she always thinks in terms of tolerances, building in margins for worst-case scenarios. He says we need that same mindset now: don't assume the LLM will behave at its best, and don't skate too close to the edge, because that's how bridges collapse.
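One way to put that tolerance mindset into practice with a non-deterministic tool is to stop asserting one exact output and instead check a property across many runs, requiring it to hold some fraction of the time. A minimal sketch of the idea, using a simulated model (`fake_llm` is a stand-in I invented for illustration, not a real API):

```python
import random

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call: same prompt, different completions."""
    return random.choice([
        "def add(a, b): return a + b",
        "def add(x, y): return x + y",
        "def add(a, b): return a - b",   # the occasional wrong answer
    ])

def passes(output: str) -> bool:
    """Property check: does the generated function actually add?"""
    ns = {}
    exec(output, ns)
    return ns["add"](2, 3) == 5

def eval_with_tolerance(prompt: str, runs: int = 50, threshold: float = 0.5) -> bool:
    """Run the non-deterministic tool many times and require the property
    to hold on at least `threshold` of the runs, instead of asserting
    a single exact output as you would for a deterministic function."""
    hits = sum(passes(fake_llm(prompt)) for _ in range(runs))
    return hits / runs >= threshold
```

The threshold is the engineered margin: you decide up front how much failure you can tolerate, rather than assuming best-case behavior on every call.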
A Recurring Theme: Great for Starting, Useless Without Verification
If you've been reading about AI and software engineering lately, one thing keeps coming up no matter who's talking. AI is great at getting you started and pretty dangerous when you stop paying attention to what it produces.
Prototypes, throwaway tools, exploring an unfamiliar codebase, setting up a skeleton project: this is where LLMs genuinely shine. But the consistent warning from basically every serious person in this space is the same: the beginning is not the end. Fowler says to treat every piece of LLM output like a PR from a very productive but completely untrustworthy collaborator. Review everything.
He also raises something more subtle. When you stop reading what the AI generates, you stop learning. That back and forth between writing code and seeing what the computer does with it is how engineers build real judgment. Skip that loop long enough and you end up with software you can't maintain because you never understood what was built.
The Three Areas Where AI Still Struggles
Fowler points to three areas where things are still pretty uncertain, and honestly these match up with what a lot of developers are actually experiencing.
- Existing codebases: James Lewis tried to rename a single class in a moderately sized program using Cursor. An hour and a half later he'd burned 10% of his monthly token budget and the rename still wasn't clean. Asking an LLM to make a focused change in an existing codebase often results in it touching way more than it should or quietly breaking things it wasn't supposed to touch.
- Testing: The LLM confidently tells you all tests pass. You run them yourself and find failures. Fowler's take: if a junior developer misled you about test results that consistently, you'd have a serious conversation with HR. I've seen this too: tests quietly rewritten to match broken behavior rather than to catch it.
- Team environments: Almost everything we know about AI coding comes from solo greenfield work. How LLMs fit into a real team with shared codebases, code reviews, and collective ownership is still mostly an open question. Even the people closest to this are uncertain, which should probably temper some of the bigger claims about AI transforming team productivity overnight.
The common thread across all three is the same as the non-determinism point. We're using tools we don't fully understand yet on problems that require real precision. The engineers who do well here won't be the ones who trust AI the most. They'll be the ones who stay rigorous about when to trust it and when not to.
2025 Stack Overflow Survey Results I Found Interesting
After exploring Stack Overflow’s 2025 AI survey findings, two results stood out to me.
First, while 84% of developers are either already using AI or planning to adopt it, trust in AI-generated code remains surprisingly low, with only 55% feeling confident about quality. This mirrors my own experience: AI excels at speed and exploration, but verification is non-negotiable. It can get you 70–80% of the way quickly, but correctness and maintainability still rest entirely on the developer.
Second, only 31% of developers have integrated AI agents into their workflow, making them relatively underadopted. Yet here's what's encouraging: over 55% of those who have adopted them report positive results and feel agents have genuinely improved their productivity. There's a gap between adoption and satisfaction worth paying attention to.
What I Will Do This Week for My Nogramming Assignment
This week, I will finalize the questions I will ask my interviewees. I will also set dates and times to talk with each one.
~Shree