The Debt You Don't See

AI work generates two kinds of debt: technical and cognitive. Both accrue silently. Both compound. And the interest rate is accelerating.

Bert Carroll

Software has always generated technical debt. You ship something fast, cut a corner, skip a test, and the interest starts compounding. That's well understood. Every engineering team has a backlog of it. The concept is 30 years old.

AI-accelerated work generates a second kind of debt. Researchers at MIT's Media Lab named it: cognitive debt. In a 2025 study using EEG to track brain activity across writing sessions, they found that participants who relied on an LLM from the start showed systematically reduced neural connectivity. Their brains disengaged. But participants who did the work manually first and then switched to AI showed the opposite: strong multi-band connectivity across prefrontal and occipito-parietal regions. [1] The people who built understanding before automating retained that understanding. The people who skipped straight to automation never built it.

That finding is about individual cognition, but the principle scales. In practice, cognitive debt is the growing gap between what you've built and what you understand about what you've built.

When you ship 87 story points in a week across six repositories, four clients, and three deployment platforms, you are producing at a rate that was physically impossible two years ago. The output is real. The code works. The clients are happy. But somewhere in that week, you made forty decisions you didn't document, solved twelve problems you won't remember next month, and built on three patterns that might already be deprecated by the time you revisit them.

That's the debt. Not in the code. In you.


Two Balance Sheets

Traditional technical debt lives in the codebase. It's measurable, if you're disciplined. Test coverage, linting, dependency freshness, architectural coherence. You can see it in the code and you can see it in the velocity charts when it starts dragging.

Cognitive debt lives in the space between sessions. It's the context that evaporates when a conversation ends. The decision you made at 11 PM about why this API returns data in this shape. The guardrail you added to your AI configuration because a hallucination burned you on a client deliverable. The reason you chose Azure over Supabase for this specific project, even though Supabase is faster for everything else.

If you don't capture those, you'll encounter the same problem again and pay to solve it twice. Or worse, you'll make a different decision the second time because you forgot the reasoning behind the first one, and now you have two systems that do the same thing in incompatible ways.

Every builder working with AI has this problem. Most don't realize it because the output velocity feels like progress. It is progress. But progress without alignment is just motion.


The Interest Rate Is Accelerating

Here's what makes cognitive debt different from the traditional kind: the interest rate is not fixed. It's tied to the rate of change in the tools you use.

A pattern I codified in February might be partially obsolete by April because the model improved, a new MCP server shipped, a framework released a breaking change, or my own workflow evolved in a way that makes the old pattern counterproductive. The guardrail I wrote to prevent a specific failure mode might now prevent a better approach from working.

This is not hypothetical. I track this. In 90 sessions over two months, I identified 49 instances of "wrong approach" friction, 35 "misunderstood request" errors, and 33 bugs that made it into output. After codifying the patterns and adding guardrails to my AI configuration, the next 120 sessions cut those numbers roughly in half. The guardrails themselves were debt payments. Without them, every session was paying interest on mistakes I'd already solved but hadn't recorded.

The speed of innovation means that the shelf life of any given pattern is shorter than it used to be. A year ago, you could learn a tool and use it for months. Now, the tool you learned last month has a new paradigm this month. The debt isn't just "I haven't documented this." It's "the thing I documented is no longer accurate, and I don't know which parts."


What Cognitive Debt Looks Like in Practice

It's not dramatic. It's mundane. That's why it's dangerous.

  • Invisible output. You ship work across multiple repositories and contexts in a single session. The daily note captures what you focused on. It doesn't capture the 30% of work that happened in other repos. If you price by complexity delivered, that's real revenue-equivalent capacity that doesn't show up in your tracking, your velocity metrics, or your capacity planning.
  • Repeated mistakes. You solve a deployment problem on Tuesday. On Friday, in a different project, you hit the same class of problem and spend another hour because the solution lives in a conversation that's already gone. The fix was never extracted into a pattern.
  • Stale documentation. Your architecture doc says the system uses one authentication method. Three sessions ago, you migrated to another. Nobody updated the doc because the migration was part of a larger feature push and the doc wasn't in scope.
  • Decision amnesia. You chose a specific approach for good reasons that you discussed with a colleague or an AI assistant. The conversation ended. The reasons evaporated. Six weeks later, someone asks why the system works this way, and you can't fully reconstruct the reasoning.
  • Context fragmentation. You work on four client projects and two internal tools in the same week. Each has its own conventions, deployment targets, and constraints. Without active alignment, the mental model for Project A starts leaking into Project B. You write Azure patterns in a Supabase project. You apply a client's constraint to an internal tool where it doesn't apply.

None of these are crises. All of them are friction. And friction at scale is the difference between sustainable output and burnout.


Debt Payments

The solution is not to slow down. If the work needs to be done, it needs to be done. The solution is to build debt payments into the workflow itself so they happen automatically rather than as a separate discipline you have to remember.

Here's what I've found actually works:

Codify decisions at the point of making them

Not after the session. Not in a weekly review. At the moment you make a non-obvious decision, capture why. The format matters less than the timing. A one-line comment in a configuration file. A guardrail in your AI instructions. A note in the daily journal. If it leaves your working memory before it's written down, it's gone.
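One possible shape for that capture is a helper that appends a timestamped, one-line decision record to a daily journal. This is a minimal sketch; the file name, format, and "DECISION:" convention are illustrative assumptions, not a prescribed setup:

```python
from datetime import date
from pathlib import Path

def log_decision(journal: Path, decision: str, why: str) -> str:
    """Append a one-line, timestamped decision record to a journal file.

    Called at the moment the decision is made, not in a weekly review.
    """
    line = f"{date.today().isoformat()} DECISION: {decision} -- because {why}"
    with journal.open("a", encoding="utf-8") as f:
        f.write(line + "\n")
    return line

# Hypothetical usage:
# log_decision(Path("journal.md"), "return API dates as epoch millis",
#              "client dashboards parse them without a tz library")
```

The point of returning the line is that the same call can feed both the journal and whatever session log is already running, so the capture costs one function call instead of a context switch.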

Track all output, not just the primary context

If you're working in one repository but making changes in three, all three need to show up in the record. Cross-repository work is where the largest cognitive debt accumulates because it's the easiest to forget. You were "working on the client project" but you also fixed an infrastructure issue, updated a deployment pipeline, and created a new internal tool. Those are real story points. They represent real capacity. If they don't show up in your metrics, your capacity model is wrong.
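One way to make cross-repository work show up in the record is an end-of-day sweep over every repository, not just the one the session started in. A sketch, assuming repos are plain git checkouts under a single workspace directory (the layout is an assumption):

```python
import subprocess
from pathlib import Path

def commits_today(repo: Path) -> list[str]:
    """Return today's commit subjects for one git repository."""
    out = subprocess.run(
        ["git", "-C", str(repo), "log", "--since=midnight", "--format=%s"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def daily_sweep(workspace: Path) -> dict[str, list[str]]:
    """Scan every repo under `workspace` so no touched repo goes unrecorded."""
    return {
        git_dir.parent.name: commits_today(git_dir.parent)
        for git_dir in workspace.glob("*/.git")
    }
```

A sweep like this catches the "I was working on the client project but also fixed the pipeline" commits that never make it into the daily note.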

Automate the capture

Manual documentation disciplines fail under load. When you're shipping 15 story points in a day, you will not stop to write a retrospective. The capture has to be embedded in the workflow: commit messages that describe why, not just what. Session logs that scan all touched repos, not just the one you started in. End-of-day routines that check for undocumented output.
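The "why, not just what" convention can itself be checked automatically. A sketch, assuming the hypothetical convention that the "why" lives in the commit body and the subject carries only the "what":

```python
def missing_why(commits: list[tuple[str, str]]) -> list[str]:
    """Flag commit subjects whose body is empty: a 'what' with no 'why'.

    `commits` is a list of (subject, body) pairs, e.g. parsed from
    `git log --format=%s%x1f%b%x1e` split on the separator bytes.
    """
    return [subject for subject, body in commits if not body.strip()]
```

Wired into an end-of-day routine, a non-empty result is the prompt to fill in reasoning while it still exists in working memory.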

Expire stale patterns

A pattern that was true two months ago might not be true today. Review your documented patterns on a cadence. If a guardrail references a tool version that's been superseded, update or remove it. If a decision was made under constraints that no longer apply, revisit it. Stale documentation is worse than no documentation because it creates false confidence.
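One way to put that review on a cadence is to stamp every pattern with a last-reviewed date and flag anything past a threshold. A sketch; the 60-day window and the name-to-date mapping are assumptions chosen for illustration:

```python
from datetime import date, timedelta

def stale_patterns(patterns: dict[str, date], max_age_days: int = 60) -> list[str]:
    """Return names of patterns whose last review is older than the cadence."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return sorted(name for name, reviewed in patterns.items() if reviewed < cutoff)
```

The useful property is that the check is cheap enough to run on every session start, so a stale guardrail surfaces before it silently blocks a better approach.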

Separate the narrative from the operations

Commits tell you what changed. State files tell you what's current. Neither tells you why. The narrative layer, the "what happened and why it matters" record, is a distinct artifact that needs its own step in the workflow. Without it, you have a changelog and a status board but no institutional memory.
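The narrative layer can be as simple as a distinct record type that forces the "why it matters" field to exist. A minimal sketch; the field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class NarrativeEntry:
    """The 'what happened and why it matters' layer, distinct from
    commits (what changed) and state files (what's current)."""
    date: str
    what_happened: str
    why_it_matters: str
    repos_touched: list[str] = field(default_factory=list)

    def render(self) -> str:
        repos = ", ".join(self.repos_touched) or "none recorded"
        return (f"{self.date} [{repos}]\n"
                f"  What: {self.what_happened}\n"
                f"  Why:  {self.why_it_matters}")
```

Making `why_it_matters` a required field is the design choice: the entry cannot be written without paying down the exact debt the section describes.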


The Uncomfortable Math

I recently reconstructed a week of work from commit history. The daily notes had captured 56 story points. The actual output, once cross-repository work was included, was 87. That's 31 story points of real delivered value that was completely invisible. Over a third of the week's actual output didn't exist in any record.
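A reconstruction like that can be mechanized if commits carry a point estimate, for example a `Points: N` trailer in the commit message. The trailer convention is an assumption for illustration, not a description of the author's actual workflow:

```python
import re

# Matches a "Points: N" trailer line anywhere in a commit message.
POINTS = re.compile(r"^Points:\s*(\d+)\s*$", re.MULTILINE)

def total_points(commit_messages: list[str]) -> int:
    """Sum 'Points: N' trailers across commit messages from every repo."""
    return sum(int(m.group(1))
               for msg in commit_messages
               for m in POINTS.finditer(msg))
```

Run over the combined history of all repositories, a tally like this is what turns "the daily notes say 56" into "the commits say 87."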

The missing work wasn't wasted. It was infrastructure, internal tooling, cross-project fixes. The kind of work that makes everything else faster. But because it wasn't tracked, it couldn't inform capacity planning, pricing decisions, or the very practical question of whether this pace is sustainable.

An 87-story-point week is what a four-person team delivers in a sprint. For one person, even with AI acceleration, that's a pace that demands honest accounting. Not to justify it to anyone else, but to make an informed decision about whether to keep doing it.

You can't manage what you can't see. And the fastest way to burn out isn't overwork. It's overwork you can't measure, can't attribute, and can't use to make better decisions about what to take on next.


Staying Aligned

The word I keep coming back to is alignment. Not in the AI safety sense. In the mechanical sense. When a machine is aligned, all the parts are moving in the same direction with minimal friction. When it drifts out of alignment, the same energy input produces less output and more heat.

Cognitive debt is misalignment between your output and your understanding of your output. Technical debt is misalignment between your code and your intentions for your code. Both accumulate naturally. Both require active maintenance. And in AI-accelerated work, both accumulate faster than they ever have before, because the output rate is higher and the tools are changing underneath you while you use them.

The discipline is not "slow down." The discipline is "stay aligned while going fast." That means building the alignment checks into the process itself, not bolting them on after the fact. It means treating documentation, pattern capture, and honest capacity tracking not as overhead but as structural components of the work. It means accepting that some fraction of your output each week will be spent on the machine that produces the output, not on the output itself.

That fraction is not waste. It's maintenance. And the alternative, skipping it, is how you end up six months into a practice running on decisions you can't explain, patterns you can't verify, and a pace you can't sustain.


Sources

  1. Kosmyna, N., Hauptmann, E., Yuan, Y.T., Situ, J., Liao, X-H., Beresnitzky, A.V., Braunstein, I., & Maes, P. (2025). "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task." MIT Media Lab. arXiv:2506.08872. The study found that brain connectivity systematically scaled down with the amount of external AI support, while participants who built manual proficiency first retained strong neural engagement when switching to AI-assisted work.