The End of Human-Readable Code: It’s Time to Write for AI

Ken Erwin
7 min read · Feb 23, 2025


Passing the torch

I’ve spent a significant portion of my career learning how to make code better, following the time-tested best practices that have defined this industry, from Grace Hopper to Uncle Bob, Joel Spolsky, Steve McConnell, and many others. I’ve spent countless hours writing and rewriting my own code (and others’) to make it more readable, maintainable, and, in a word, clean. We all know the old saying, “code is read more than it is written,” and anyone who has worked on a legacy codebase would probably agree, while also noting how much time is wasted just weeding through and deciphering the source code.

I have thought a lot about this topic, especially over the last year, as the use of AI has invaded almost every part of what traditional programmers spend most of their days doing: coding. The realization I have come to is best explained with a very basic metaphor: we’ve all spent our entire coding careers trying to start a fire with flint, and now, with so many AI coding tools readily available, we’ve just been handed a torch-everything-in-one-second flamethrower. In short, we’re undergoing a paradigm shift that calls us to completely rethink the framework that has shaped our understanding of coding, and more importantly, to redefine our relationship with it. This next year especially, we need to stop optimizing our code for humans and instead optimize it for AI.

Here’s the evidence:

I put together a testing suite to compare how different code implementation styles affect AI model performance. I used the latest Anthropic Sonnet 3.5 model to perform various software engineering tasks across four different implementation styles. (You can find it here: https://github.com/kenerwin88/write-for-ai)

I gave it four different implementation styles to test (in order from least optimized to most):

  1. Poor (minimal documentation, poor structure, simulating a codebase that is a mess)
  2. Standard (Typical clean code, no type information, no documentation, used as basis for comparison)
  3. Human-Readable (Very clean, similar to LLM-Optimized, but no LLM specific optimizations. Very readable for humans.)
  4. LLM-Optimized (structured for AI consumption, with type information, documentation, and relationships between files)

Each of these implementation styles was then fed to Sonnet and given the exact same tasks.

The results were very interesting (I also used an LLM to grade the results for accuracy and quality):

  • Accuracy: The LLM-Optimized, Standard, and Human-Readable implementations all achieved 91.6% accuracy, while the Poor implementation only reached 75%. In other words, the LLM understood the code and performed the tasks equally well across every implementation except the Poor one.
  • Completion Time: Standard: 84 seconds, LLM-Optimized: 90 seconds, Poor: 91 seconds, Human-Readable: 94 seconds
  • Token Usage: Standard: 13.5k tokens, Poor: 14.6k tokens, LLM-Optimized: 19.4k tokens, Human-Readable: 20.8k tokens

While the Standard implementation was marginally faster, the LLM-Optimized version consistently showed the best balance of features, security, and reliability. It was also faster and more efficient than the Human-Readable implementation.

You can run it yourself, add your own implementations, and try various prompts to see how different implementations perform. One thing I found very interesting is that I tried various forms of the LLM-Optimized implementation, and without a proper prompt the LLM would add extra comments and wasteful tokens that actually made it perform worse overall. With some variations, the Human-Readable implementation would occasionally perform slightly better as well, but the LLM-Optimized version was by far the most consistent performer.

For this example, I used a single file to test the performance of each implementation. In a real-world scenario, however, you would have many files, and I believe the LLM-Optimized implementation would perform even better because the relationships between files would be available to the LLM.

While not in the testing suite, at larger scales I’ve found that domain-driven design implementations also tend to perform better than MVC or other traditional design patterns. I’m hopeful I’ll soon have an example and data for that as well. All of my examples are also in Python; it would be very interesting to see the results in other languages.

So, what makes code AI-Friendly?

The differences between old-school, traditional code and AI-optimized code go well beyond documentation. Here are the main reasons why the LLM-Optimized version was much more effective:

1. Context Headers: Clear file-level documentation that explains system context, business rules, and technical dependencies:
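
Here is a minimal sketch of what such a header might look like (the module name, business rules, and dependencies are all invented for illustration):

    """
    MODULE: payment_processor.py

    SYSTEM CONTEXT:
        Part of the checkout service. Invoked by the order API after cart
        validation and before fulfillment is scheduled.

    BUSINESS RULES:
        - Charges over $10,000 require manual review before capture.
        - Refunds are only permitted within 30 days of the original charge.

    TECHNICAL DEPENDENCIES:
        - stripe (payment gateway client)
        - orders.models.Order (order state machine)
    """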

As a human, I also like these.

2. Semantic Grouping: Explicit section markers that help AI models understand code organization:
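
Section markers can be as simple as labeled banner comments. This is a sketch with made-up helpers, not code from the test suite:

    # ============================================================
    # SECTION: Validation helpers
    # Pure functions with no I/O; used by the handlers below.
    # ============================================================

    def is_valid_amount(amount_cents: int) -> bool:
        """Return True if the amount is positive and below the review threshold."""
        return 0 < amount_cents <= 1_000_000


    # ============================================================
    # SECTION: Request handlers
    # Each handler validates input, then delegates to the service layer.
    # ============================================================

    def handle_charge(request: dict) -> dict:
        """Validate a charge request before it reaches the payment service."""
        if not is_valid_amount(request["amount_cents"]):
            raise ValueError("amount out of range")
        return {"status": "accepted"}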

3. Relationship Markers: Clear indicators of code relationships:
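
Something along these lines; the file names and call sites are hypothetical, and the point is simply that the cross-file relationships are stated explicitly where the LLM can see them:

    # RELATED FILES:
    #   models/order.py            -> defines Order, consumed here
    #   services/refund_service.py -> calls process_refund() below
    # CALLED BY: api/checkout.py::create_order
    # CALLS:     gateways/stripe_gateway.py::capture_payment

    def process_refund(order_id: str) -> bool:
        """Issue a refund for the given order; returns True on success."""
        ...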

I believe these will be the key to unlocking AI in huge codebases.

4. Type Information: Explicit type hints and schemas that help AI understand data structures:
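
A quick sketch of the idea, using a dataclass as the schema (the fields are invented for the example):

    from dataclasses import dataclass
    from typing import Literal

    @dataclass
    class Charge:
        """Schema for a single payment charge."""
        order_id: str
        amount_cents: int  # always a positive integer
        currency: Literal["USD", "EUR", "GBP"]
        status: Literal["pending", "captured", "refunded"] = "pending"

    def capture(charge: Charge) -> Charge:
        """Return a copy of the charge with its status set to 'captured'."""
        return Charge(charge.order_id, charge.amount_cents, charge.currency, "captured")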

The Cost-Benefit Analysis

The data shows that LLM-Optimized code uses around 44% more tokens than the Standard version, but about 7% fewer than the wordier Human-Readable version. However, this overhead is justified by:

  1. Higher reliability (comprehensive error handling)
  2. Better security features
  3. Clearer code structure
  4. More maintainable systems

We’re accepting a modest increase in initial processing cost in exchange for significantly improved clarity and manipulability of the code by AI systems. This approach is similar to our previous decision to use human-readable variable names and comments — despite the added overhead — to enhance maintainability for human developers. For large codebases, I believe this method may ultimately lead to reduced overall token usage, as the AI will require fewer iterations to understand the code.

How do I optimize my code for AI?

After quite a bit of trial and error, I’ve found this prompt to work very well:

Please rewrite the following code to be optimally parseable by an LLM while maintaining identical functionality and method signatures. Follow these guidelines:

1. Structure:
- Use clear, logical code organization
- Add descriptive but concise docstrings that explain purpose and behavior
- Break complex operations into well-named helper functions
- Maintain consistent indentation and formatting

2. Naming and Documentation:
- Keep all existing method names unchanged
- Use self-documenting variable names
- Include type hints for parameters and return values
- Add brief inline comments only for non-obvious logic

3. Optimization Guidelines:
- Remove redundant or unnecessary comments
- Eliminate dead code and unused imports
- Simplify complex conditionals
- Use pythonic patterns where they improve readability
- Avoid unnecessary abstraction layers

4. LLM-Specific Considerations:
- Structure code to make dependencies and data flow clear
- Use standard library functions where possible
- Keep line lengths reasonable (≤100 characters)
- Group related functionality together
- Minimize nested code blocks
- Add clear structure markers
- Add comments that benefit AI models; omit comments that only help humans

5. Performance Balance:
- Maintain O(n) complexity where possible
- Don't sacrifice significant performance for readability
- Keep memory usage efficient

Please provide the optimized code while preserving all existing functionality and external interfaces.
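
If you want to run this prompt over files programmatically rather than pasting it into a chat window, here is a rough sketch using the Anthropic Python SDK. The model string and file name are placeholders, and OPTIMIZE_PROMPT stands in for the full prompt above:

    import anthropic

    # Stand-in for the full optimization prompt shown above.
    OPTIMIZE_PROMPT = "Please rewrite the following code to be optimally parseable by an LLM..."

    def optimize_for_llm(source_code: str) -> str:
        """Send the code plus the optimization prompt to Claude and return the rewrite."""
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",  # placeholder; use whichever Sonnet you have access to
            max_tokens=4096,
            messages=[{"role": "user", "content": f"{OPTIMIZE_PROMPT}\n\n{source_code}"}],
        )
        return response.content[0].text

    if __name__ == "__main__":
        with open("my_module.py") as f:  # hypothetical input file
            print(optimize_for_llm(f.read()))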

The Paradigm Shift

Just as a flamethrower radically transforms the way we start a fire, AI is revolutionizing our approach to writing code and shifting our role as its writers. We’re no longer writing code solely for human developers to read and understand; we’re crafting it for AI systems that can:

  • Understand system architecture instantly
  • Detect potential security vulnerabilities
  • Suggest optimizations
  • Generate new features
  • Fix bugs automatically

Looking Forward

This isn't solely about tweaking our current code to be more AI-compatible or friendly—it's about acknowledging that we're in the midst of a fundamental transformation in software development. Today's AI coding assistants offer just a glimpse of what’s to come. As these systems evolve and become more sophisticated, the ability to craft code optimized for AI processing will become increasingly essential.

The data from my own testing shows that while traditional "clean code" practices serve us well, they're not optimized for our new AI-powered development environment. By adapting our coding practices now, we can better position ourselves for a future where AI plays a central role in the development process.

I am not suggesting we discard clean code principles; rather, we need to evolve them. Just as we transitioned from coding for machines to coding for humans, we’re now entering an era of coding for AI.

The real question isn’t whether we’ll make the shift; it’s how quickly we can change our mindset and see ourselves as coders who now work with AI.

As we move into this new era of AI-driven development, it’s clear we’re standing at a crossroads. We’ve spent years perfecting the art of writing code for human interpretation, but now it’s time to take a step back and rethink what "good code" truly means. Optimizing for humans, while still vital, can no longer be our primary focus. Our attention must shift to creating code that not only flows smoothly for us but more importantly, can be efficiently understood and leveraged by AI.

The tools have changed, and I for one am ready to adapt.
