The atrophy of thinking with Large Language Models

This is the raw version of the article. For the rich version, visit my Remanso space.

In Thinking in Systems, Donella Meadows describes a system trap called shifting the burden to the intervenor: when an outside intervention relieves a symptom, the system's own capacity to cope with the problem atrophies, deepening its dependence on the intervenor.

In 2012, neuroscientist Manfred Spitzer published Digital Dementia, arguing that when we outsource mental tasks to digital devices, the brain pathways responsible for those tasks atrophy. Use it or lose it. Not all of his claims are scientifically established, but neuroplasticity research shows the brain strengthens pathways that get used and weakens ones that don't. The book's core principle is that cognitive skills you stop practicing will decline.

Margaret-Anne Storey, a software engineering researcher, recently gave this a more precise name: cognitive debt. Technical debt lives in the code. Cognitive debt lives in developers' heads. It's the accumulated loss of understanding that happens when you build fast without comprehending what you built. She grounds it in Peter Naur's 1985 theory that a program is a theory existing in developers' minds, capturing what it does, how intentions map to implementation, and how it can evolve. When that theory fragments, the system becomes a black box.

Apply this directly to fully agentic coding. If you stop writing code and only review AI output, your ability to reason about code atrophies. Slowly, invisibly, but inevitably. You can't deeply review what you can no longer deeply understand.


References

  • Finding the Right Amount of AI | Tom Wojcik
  • Stop Generating, Start Thinking
  • Good Brain | Cassidy Williams
  • Cognitive Debt: When Velocity Exceeds Comprehension | Ganesh Pagade