The Mythical Man-Month

Introduction

During my Software Engineering program in college, one of the first books I read, and one that has stuck with me throughout my career, was The Mythical Man-Month. Just over 50 years since it was published, I still find myself referencing its concepts.

Today, AI coding assistance and autonomous coding agents have become actual, useful tools for producing code. I use them, my teams use them, my product partners use them. At times, they can handle complex tasks in a way that seemed like a dream just a few years ago.

With all these advancements, the question stands: does The Mythical Man-Month still hold true? Can we add more people to a project to deliver it sooner? Are people and time more interchangeable now? Do AI coding agents change the person-month equation?

The Continued Failure of Software Projects

There are many ways a software project can fail, and just as many definitions of what failure means for a software project. This is as true today as it was in 1975.

The Mythical Man-Month takes aim at the assumption that people and time are fungible. Companies look at a problem, see the estimate from the technology team, and inevitably ask, "What would the estimate be if we added more people?"

People and time are not interchangeable. The classic example from the book: "nine women can't make a baby in one month."

As a project team grows, more collaboration and communication is needed. And each new person added means existing team members must step away from the project itself to train them.

And finally, the tasks of a project can't be fully parallelized. Many have sequential dependencies: either significant planning time is needed to carve out parallel work, or a task simply cannot start until the previous one is complete.
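Brooks quantifies the communication burden: n people have n(n-1)/2 potential communication channels. A toy model sketches the diminishing returns; the channel count is from the book, but the per-channel overhead figure is an invented illustrative assumption.

```python
# Toy model of Brooks's communication overhead. The n*(n-1)/2 channel count
# is from The Mythical Man-Month; the 1% per-channel cost is an invented
# assumption for illustration only.

def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

def effective_capacity(n: int, overhead_per_channel: float = 0.01) -> float:
    """Person-months of productive work per month after communication overhead."""
    return max(0.0, n - overhead_per_channel * channels(n))

for n in (3, 10, 20, 50, 100, 150):
    print(f"team of {n:3}: {channels(n):5} channels, "
          f"~{effective_capacity(n):.1f} effective person-months/month")
```

Under these assumptions, a 100-person team delivers only about half its headcount in effective work, and beyond that size adding people actually reduces total output. The specific numbers are made up; the shape of the curve is the point.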

Today, the question a business leader or stakeholder might ask is, "What would the estimate be if we added more AI Agents?"

From an engineering leader's perspective, and probably from the individual engineer's as well, the gut reaction is that the constraint is never the hands-on-keyboard work. Whether we have AI tools, scaffolding libraries, code generators, or auto-complete, the complexity of a software project is rarely the code itself.

Conceptual Integrity

In the age of probabilistically written code, the problem of conceptual integrity grows significantly. There is an urgency among business leadership to introduce AI software tooling, as if it were a silver bullet. This runs into the same problem all software projects run into when pressed with urgency - the time for planning and configuration is skipped in the hope that we can come back to it later.

Architecture control is essential at all times. One aspect that has evolved since the publishing of The Mythical Man-Month is the split career path of staff engineer and people manager. This maps onto Brooks' Chief Programmer and surgical team concepts.

The Chief Programmer of today is the Staff Engineer within an organization. As we get into a more agentic programming world, this role remains valuable, if not even more so.

At some point, every software engineer may be supported by a surgical team of agent specialists. Even then, a person will still be needed to hold all of the concepts together while managing their own group of specialized agents.

The issue with LLMs at this moment is their long-term coherence. Even with the best prompt, guidance, supporting data, and rules, an LLM will still return different results nearly every time. Can it be trusted?
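That variability is structural, not something a better prompt eliminates. A minimal sketch of why, assuming the standard sampling picture of generation (the vocabulary, scores, and temperature values below are invented, not any real model's API):

```python
import math
import random

# Toy illustration of why an LLM can return different results for the same
# prompt: generation samples each next token from a probability distribution,
# and any temperature above zero makes that sampling stochastic.

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and model scores.
vocab = ["refactor", "rewrite", "patch", "ignore"]
logits = [2.0, 1.5, 1.0, -1.0]

# At temperature 0.8, repeated "runs" of the same prompt pick different tokens.
probs = softmax(logits, temperature=0.8)
print(random.choices(vocab, weights=probs, k=5))

# As temperature approaches zero, the distribution collapses onto the top
# token and the output becomes effectively deterministic.
print(softmax(logits, temperature=0.01))
```

Even at temperature zero, real systems retain other sources of variation, so the coherence problem does not fully disappear; the sketch only shows where the baseline nondeterminism comes from.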

The Chief Programmer's role, and frankly every engineer's role, becomes maintaining conceptual integrity. Small teams of small teams become possible only if this remains intact.

Discipline and Restraint

Brooks discusses the second system - the system that follows the first, but becomes bloated and over-engineered. We see something similar today with unfinished projects, retired features that are never removed, and engineering teams that churn every 2-3 years.

Additionally, software iterates constantly - new language features, new compilers, new package managers, new libraries, and so on. There are many solutions for every problem. As people join teams and others leave, the newcomers bring their own ideas to the system. Now the project has two systems, without any significant improvement to the first or a completed migration to the second.

This idea is slightly different than the "second-system effect," but the results are the same.

With the rise of agentic coding assistants, this problem can get even worse. How often have we seen a proof of concept (POC) make its way to production while still in its POC state? What happens when a POC becomes instant and the business wants to ship?

The responsibility remains with the people in the engineering organization to exercise discipline and restraint - first in the introduction of these AI tools, and second in their usage. The system design and its completeness will matter more and more. Can the AI write this? Maintain it? Hold it as truth?

The individual systems within the whole will need the top-down guidance they have always had, while staying disciplined and restrained within that system.

Perhaps, because agentic tooling increases the speed at which code is written, it will free us to spend more time coherently defining the system design - setting up its constraints and defining its policies. But here we land again: people and time are not interchangeable. The time a project takes lies mostly outside of writing the code itself.

This is exacerbated when introducing agentic coding tools into legacy and existing systems. Existing systems have tech debt, multiple patterns for building the same thing, and dead code. The system design isn't robustly documented or followed precisely. Documentation and communication have always been required for discipline and restraint within an organization's system design - do we think we're going to change that now?

New systems have the advantage of starting this way, but it is too early to tell how long that can be maintained, or whether we will fall back into our existing patterns.

The Surgical Team

"A small sharp team is best — as few minds as possible."

Small teams deliver projects more effectively. This was true in 1975 and is true today. Small teams require less inter-team communication. A small team's lead can maintain adherence to the system design. Decisions happen faster. Execution can be corrected quickly if it goes down the wrong path.

Now, we're nearly at the point where I think a software engineer could have their own surgical team of coding agents. Is this enough for a project team? A single person, the project lead, maintaining the system design and feature requirements while a team of coding agents executes?

It does refine the decision process down to a single person, but communication continues to be the bottleneck. The constraints need to be tightened and monitored while the agents execute. And while Brooks would oppose too many cooks in the decision-making kitchen, a single person making every decision alone doesn't seem wise either.

Brooks' concept of the surgical team remains true and effective, even in the agentic coding world.

Essential vs Accidental Complexity

The codebase I work on today is only 10 years old. There is some essential complexity and there is some accidental complexity. The essential complexity exists because the business domain - data, machine learning, marketing features, and third-party integrations - is genuinely complex. The accidental complexity is simply due to the number of people who have taken lead of a system and a lack of discipline in defining and following the system design.

This is all the accrued tech debt mentioned before.

As software engineers become more comfortable and efficient at producing code with AI, the business will expect an increase in feature-release velocity. With increased velocity, AI-driven or not, comes decreased adherence to the system design. This doesn't even have to result in bugs; it may just be a divergent design pattern.

This accidental complexity is complicated enough for people to deal with. Junior engineers and new hires coming into a system should be able to identify the patterns in how software is written and how it adheres to the overall design. This goes back to the challenge of introducing agentic tooling into a legacy system. The AI on its own cannot tell the difference between essential and accidental complexity. It doesn't know what is tech debt, what the correct patterns are, or even what is dead code.

This is where the Mythical Man-Month continues to hold. While engineering efficiency may increase, so does the time needed to manage complexity within the system. Again, even with the best guidance, an LLM will often return different answers, which inherently leads to accidental complexity. The software engineer will need to intervene and manage the agents' compliance.

There is No Silver Bullet

"There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity."

Brooks wrote this in 1986 as a separate paper. It was then included in the 1995 20th-anniversary edition of The Mythical Man-Month, along with "No Silver Bullet Refired," a 10-year follow-up examining whether his assertions remained true. This topic is worth its own exploration.

The question for us here is, do AI, LLMs, and agentic coding provide the silver bullet for software development and technological advancement?

They do not. We are still in a position where AIs cannot understand the problem domain, reliably maintain discipline within system design constraints, make informed design tradeoffs, or test conceptual correctness. This is what binds us still to the relevancy of The Mythical Man-Month. An AI can simulate understanding, which may be enough for some use cases, but with current technological capabilities it cannot actually understand the essential complexity of a system. This keeps these responsibilities in the hands of the person building software, even when supported by these tools.

Conclusion

"The essence of a software entity is a construct of interlocking concepts... I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it."

The hard part of software engineering is the engineering. Specifications and documentation are essential for a shared understanding of the system as a whole, as well as of its specific features. Features do not stand on their own; they integrate, lightly or wholly, within the system in which they are created. The system may be missing some frills or lack flexibility, but those constraints make good software. For a feature to integrate, communication structures must be in place - not just in the software, but among the teams building it. Coordinated communication is just as much a part of the system design as the integration between components.

Communication and decisions between people are where understanding exists. That is how concepts are tested and proven, and how we succeed in building software.

And this is where the time goes - not in the writing of code, but in the specification of the system and its advancement. The problem with software remains a human problem, and therefore the Mythical Man-Month remains as relevant today as it was 51 years ago.