Is AI The Silver Bullet?
In 1986, Frederick Brooks, Jr. proposed the following hypothesis:
"There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity."
His 1986 article is reprinted in the 1995 20th anniversary edition of The Mythical Man-Month as the chapter titled "No Silver Bullet - Essence and Accident in Software Engineering." The edition also includes a follow-up chapter, written nearly ten years after the original article, that revisits his hypothesis.
Previously, I explored whether the thesis of The Mythical Man-Month holds true today. We're now 40 years past the silver bullet article, and I'll dive in to answer the question: does agentic coding deliver a 10x improvement in productivity, reliability, or simplicity?
Essential vs Accidental Complexity
First, it's worth revisiting this distinction, a concept from Aristotle's metaphysics adapted for software engineering.
For Aristotle, a thing's properties are either essential or accidental. Essential properties are what make a thing what it is. Accidental properties are incidental; they are properties that a thing merely happens to have.
For Brooks, software engineering follows a similar pattern. Essential complexity is the irreducible difficulty that exists in the problem itself. Accidental complexity is the result of supporting systems, technical debt, interface layers, technical limitations, programming languages, and processes. Accidental complexity isn't complexity incurred by chance; the better word, as Brooks points out, is incidental.
The upshot: better tooling reduces incidental complexity but leaves essential complexity untouched.
Brooks states that essential complexity will always be more than 50% of the work, which means that even eliminating all incidental complexity would, at most, yield a 2x improvement. A tool delivering a 10x improvement is mathematically impossible unless it can address essential complexity.
Brooks' argument mirrors Amdahl's Law, the formula for how much a task can be sped up when only part of it can be improved. If a task has two parts, one that can be improved and one that cannot, the overall speedup is bounded by the part that cannot be improved. Put another way, performance gains are always limited by the parts of a system that are inherently unimprovable.
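Brooks' bound can be checked with Amdahl's Law directly. A minimal sketch (the function name `max_speedup` is mine, not Brooks' or Amdahl's), assuming incidental complexity is the improvable fraction of the work:

```python
def max_speedup(improvable_fraction, factor=float("inf")):
    """Amdahl's Law: overall speedup of a task when only
    `improvable_fraction` of it can be sped up by `factor`."""
    unimprovable = 1.0 - improvable_fraction
    if factor == float("inf"):
        # Best case: the improvable part shrinks to zero time.
        return 1.0 / unimprovable
    return 1.0 / (unimprovable + improvable_fraction / factor)

# Brooks' scenario: incidental complexity is at most half the work.
print(max_speedup(0.5))                  # -> 2.0, even with a perfect tool
print(round(max_speedup(0.5, 10.0), 2))  # -> 1.82, from a "10x" tool
```

Even a tool that makes the incidental half of the work ten times faster yields less than a 2x overall gain, which is the arithmetic behind Brooks' claim.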
This reframes the question, then: can agentic coding affect essential complexity?
AI in 1986
One fascinating part of this article is the conversation on AI. Nothing is actually new. Even in 1986, Brooks states, "Many people expect advances in artificial intelligence to provide the revolutionary breakthrough that will give order-of-magnitude gains in software productivity and quality."
Brooks broke this down into two different types of AI:
AI-1: Computers able to solve problems that previously could only be solved by human intelligence.
AI-2: Expert systems that use heuristic or rule-based programming techniques to solve problems.
This is worth noting: where we are today builds on the computer science of the past, even the distant past of 40 years ago. That raises the question: do the agentic coding systems of today fall into the AI-1 or AI-2 category?
LLMs are a form of AI-1, and with them we can begin to see effects on incidental complexity: by simulating human understanding through impressive pattern matching, they have the opportunity to simplify the abstractions and interfaces that cause much of it.
The problem with software, though, as Brooks says, is deciding what to write, not writing it. Extending Brooks's definition, a true AI-2 system would require a fully encoded world model of domain expertise. Such a system might actually affect essential complexity. Given a problem to define and refine, a system that applies the techniques of problem solving could begin to propose solutions to it.
AI in 2026
So, are we at AI-1 or AI-2 today? What category do agentic coding tools, built on the latest LLMs, fall under?
My assertion is that we are still at AI-1. This is a great place to be! Using these tools, I've been able to build complex web and native applications, set up boilerplate and scaffolding that I never want to think about, handle CSS and front-end code that is tedious, generate tests and testing frameworks, and utilize documentation to follow best practices. Really, any dumb idea I've had for a website over the past 30 years, I can just make.
These are real advances, especially for new codebases, that address the incidental complexity of a system. However, they introduce another level of incidental complexity, especially in legacy systems. LLMs are still probabilistic systems. They will rarely return the same result twice, which means your agentic PR reviewer won't be 100% reliable, your test-driven agentic coding workflow may decide to change tests so they pass, and your coding agent may make up a dependency that doesn't actually exist.
These are solvable problems in agentic coding workflows, but it is still incidental complexity.
If it can't solve all of the incidental complexity, does it address the essential complexity of the problems? Can it?
Let's look at Brooks' four inherent properties of software that make it essentially difficult.
1. Complexity - any software system or entity is more complex for its size than perhaps any other human construct. And no two systems, entities, or parts of a system are completely alike.
2. Conformity - software has external interfaces, which are arbitrary to the problem. These include the hardware itself, legal regulations, other software and system interfaces, and simply human conventions.
3. Changeability - software is not immutable; it is built to be changed and adapted over its lifetime.
4. Invisibility - software is close to an idea. It has no physical form, which makes it challenging to visualize or communicate.
So, we can ask: does the current generation of coding agents address these problems?
1. Can it understand the complexity of a problem domain?
2. Can it resolve ambiguity with external interfaces and requirements?
3. Can it make the trade-offs necessary to address immediate business needs while remaining adaptable?
4. Can it know when the idea or spec is wrong?
While we are still in the early stages of LLMs, today the answer to all four of these questions is no.
In his 1995 follow-up, Brooks declared victory on his assertion. Advancements like high-level languages, object-oriented programming, hardware improvements, and others still only addressed incidental complexity. AI, in its current form, repeats this pattern: promising transformational change while only moving the bar and adding another layer.
The tool shapes the user
AI agents, like all software advancements (or really most modern advancements in technique), optimize the how of a problem without engaging the why. "The central question of how to improve the software art centers, as it always has, on people." This was true in 1995, and it remains true today.
The evergreen challenge in building software has been the pressure to focus on feature delivery or the newest technology without addressing the essential work a system requires. Is there a tool or advancement that solves this problem? No. As long as there are investors and boards and CEOs reading LinkedIn hype articles and CTOs being demoed "automation" tools, the pressure will remain to skip the essential work.
"Great designs come from great designers. Software construction is a creative process. Sound methodology can empower and liberate the creative mind; it cannot inflame or inspire the drudge."
AI, even if it reaches the expert-system level, inherently cannot follow the actual creative process, because these systems are themselves created processes. Probabilistic results are not based on the flow and exchange of ephemeral ideas. That said, software itself is the closest humans have come to a construct that encapsulates what an idea is.
The Accidental Complexity Reduction
This is not a complete dismissal. Current agentic coding tools and workflows have the opportunity to significantly reduce the accidental (incidental) complexity of building software. The barrier to entry is also lowering, putting tools in the hands of non-programmers so they can be involved in the software development process.
However, we have still not found Brooks' silver bullet: a 10x improvement in productivity, reliability, or simplicity.
The conclusion stands: the essential work of software remains distinctly human.
I'll wrap with this quote from the article:
"Einstein repeatedly argued that there must be simplified explanations of nature, because God is not capricious or arbitrary. No such faith comforts the software engineer. Much of the complexity he must master is arbitrary complexity, forced without rhyme or reason by the many human institutions and systems to which his interfaces must conform. These differ from interface to interface, and from time to time, not because of necessity, but only because they were designed by different people, rather than God."
