The Coming Knowledge-Work Supply-Chain Crisis
"Humans. Here they are. Bottleneck, bottleneck. Hi, good to see you. And some of you are terrified. You're going to be even bigger bottlenecks." - Tyler Cowen
Remember the first time an autocomplete suggestion nailed exactly what you meant to type? Multiply that by a thousand and aim it at every task you once called “work.” AI is scaling the creation side of knowledge work at an exponential rate, but our decision-making tools and rituals remain stuck in the past. The imbalance creates bottlenecks everywhere, from code reviews to roadmapping and everything in between. Before we drown in our own to-do queues, we need to rethink the entire production-to-judgement pipeline.
AI Accelerates Production, Not Judgement
Over the past few months, I’ve shared various experiments where AI dramatically accelerates production tasks:
Breaking down a big refactoring into bite-sized tasks for an AI to do
Implementing complete features and tests directly from user stories
There’s one common theme here: AI excels at production, but humans always end up as the critical bottleneck, left with a mountain of AI output to evaluate, approve, or modify.
Meaning-Making: Where the Human Bottleneck Begins
This pile of tasks is best understood through what Vaughn Tan refers to as Meaningmaking: the uniquely human ability to make subjective decisions about the relative value of things. He argues that this type of value judgement is something AI fundamentally cannot do; it can only pattern-match against existing decisions, not create new frameworks for assigning worth.
When an AI generates 10 pull requests overnight, a human needs to decide which ones are worth merging, which need modification, and which should be rejected entirely. This isn’t just about checking whether the code works (which you still need to do!); it’s also about making judgement calls on whether the changes align with the project’s goals, whether they solve the right problems, and whether they will be maintainable long-term.
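To make the shape of that judgement work concrete, here is a minimal sketch in Python. It is my own illustration, not a tool from any of the demos; the verdict categories and review questions are assumptions about how the decision could be modeled.

```python
# A minimal sketch of the per-PR judgement step: three verdicts and the
# questions behind them. Not a real review tool.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    MERGE = "merge"    # worth merging as-is
    MODIFY = "modify"  # right idea, needs rework
    REJECT = "reject"  # wrong problem or wrong approach


@dataclass
class ReviewInput:
    pr_number: int
    tests_pass: bool         # mechanical check: does it work?
    aligns_with_goals: bool  # judgement: is this the right problem to solve?
    maintainable: bool       # judgement: do we want to own this code long-term?


def triage(pr: ReviewInput) -> Verdict:
    """Collapse the judgement dimensions into one of three verdicts.

    Real meaningmaking is not a boolean checklist; this only makes the
    decision surface explicit.
    """
    if not pr.aligns_with_goals:
        return Verdict.REJECT
    if pr.tests_pass and pr.maintainable:
        return Verdict.MERGE
    return Verdict.MODIFY


# Example: one of the ten overnight PRs, reduced to an explicit decision.
print(triage(ReviewInput(pr_number=101, tests_pass=True,
                         aligns_with_goals=True, maintainable=False)))
# Verdict.MODIFY
```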
The Twin Crises: Satisfaction and Scale
Ok, calling this a “crisis” may be a bit hyperbolic, but we can already see at least two problems emerging.
First, Rohit Krishnan points out in When We Become Cogs that working effectively with AI can lead to less job satisfaction. An MIT study found that materials scientists experienced a 44% drop in job satisfaction when AI automated 57% of their “idea-generation” tasks, precisely the creative work they most enjoyed. Software development is heading in the same direction: as AI gets better at generating code, more of a software engineer’s work turns into PR review, and less of it remains the creative problem-solving that drew many engineers to the field.
Second, you may have noticed while watching some of my demos that our tools aren’t designed for the volume of work AI can generate. In AI Programs While I Sleep, you can see that I am already underwater, with hundreds of AI-generated PRs to review. Our code review tools are designed for reviewing at most 5-10 PRs a day, not 50. A similar pattern emerges in the other videos on managing user stories, product acceptance, and test case validation: our tools are built for orders of magnitude less work.
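To get a feel for the mismatch, here is a rough sketch using Python and the requests library. It assumes a hypothetical example-org/example-repo repository and a GITHUB_TOKEN environment variable, and it only counts and sizes the open PR queue via the GitHub REST API; it does not review anything.

```python
# Count the open PR queue and turn it into an estimate of human review time.
import os
import requests

OWNER, REPO = "example-org", "example-repo"  # hypothetical repository
API = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
           "Accept": "application/vnd.github+json"}

open_prs = []
page = 1
while True:
    resp = requests.get(API, headers=HEADERS,
                        params={"state": "open", "per_page": 100, "page": page})
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    open_prs.extend(batch)
    page += 1

MINUTES_PER_REVIEW = 20  # a guess at one careful human review; adjust to taste
print(f"{len(open_prs)} open PRs -> "
      f"~{len(open_prs) * MINUTES_PER_REVIEW / 60:.1f} hours of review time")
```

Even at an optimistic 20 minutes per careful review, a few hundred open PRs translate into multiple weeks of full-time judgement work.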
These two problems compound each other. Just as the tools knowledge workers use for evaluation and judgement (the “meaningmaking” work) start to break under the weight of more tasks than they were designed for, the tasks themselves are becoming much less rewarding. The result? Work piles up in review queues, decisions get rushed or postponed, and we’re no better off than we were before adding AI tools to our process.
Redesigning for Decision Velocity
This raises some big questions:
How might we design tools to enhance decision-making velocity?
What would code review look like if optimized for 50 PRs daily instead of 5?
Which skills become premium when humans focus on judgement rather than production?
Can we find job satisfaction in a primarily reviewer or “decider” role?
Our current evaluation tools were designed for an era of scarcity - when human effort was the limiting factor in production. In an era of AI-driven abundance, we need systems built around human cognitive limitations.
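What such a system might look like is an open question. As one loose illustration, entirely my own and using made-up PR records, a review queue could group PRs by the area they touch and order each group smallest-first, so the reviewer loads the relevant context once per area instead of once per PR:

```python
# Group open PRs by the area they touch, then order each group
# smallest-first, so a reviewer makes one contextual pass per area.
from collections import defaultdict
from typing import TypedDict


class PR(TypedDict):
    number: int
    top_dir: str        # top-level directory the change touches
    lines_changed: int


def batch_for_review(prs: list[PR]) -> dict[str, list[PR]]:
    """Return PRs grouped by area, each group ordered smallest-first."""
    groups: dict[str, list[PR]] = defaultdict(list)
    for pr in prs:
        groups[pr["top_dir"]].append(pr)
    return {area: sorted(group, key=lambda p: p["lines_changed"])
            for area, group in groups.items()}


queue: list[PR] = [
    {"number": 101, "top_dir": "billing", "lines_changed": 340},
    {"number": 102, "top_dir": "auth", "lines_changed": 25},
    {"number": 103, "top_dir": "billing", "lines_changed": 60},
]
for area, group in batch_for_review(queue).items():
    print(area, [pr["number"] for pr in group])
# billing [103, 101]
# auth [102]
```

The point is not this particular heuristic but the design goal: spend the scarce resource, human judgement, with as few context switches as possible.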
The meta-challenge here is that we’re using tools optimized for the constraint of yesterday (production capacity) while facing a completely different constraint today (judgement capacity). The organizations that thrive will be those that recognize this fundamental shift and redesign their workflows accordingly.
From Makers to Deciders: A New Skill Stack
For those familiar with John Boyd’s OODA loop (Observe, Orient, Decide, Act), there’s a parallel. AI is increasingly handling the “Orient” and “Act” phases - the creative synthesis and execution that many knowledge workers found most satisfying. What remains are the “Observe” and “Decide” phases - the evaluation and judgement work that our tools and processes aren’t optimized for.
We must reimagine knowledge work as a high-velocity decision-making operation rather than a creative production process. Without new tools and frameworks, humans will become overwheLLMed judges in a court where AI generates more cases than could ever be heard.
Ultimately, I don’t see AI completely replacing knowledge workers any time soon. What I do see is that we are not prepared for how AI transforms the nature of knowledge work, and that the transition into this new era will be very painful and slow.