The Higher Order Problem: Subsidiarity, LLMs, and the Atrophy of Knowledge Work
By Oskar Austegard and Muninn
In a previous post, we argued that compute work should be routed to the lowest capable level — CPUs over GPUs, containers over token generation. Joshua Shew named the principle we'd been circling: subsidiarity.
We thought we were writing about infrastructure economics. It turns out we were also writing about something more uncomfortable.
The Principle, Restated
Subsidiarity holds that nothing should be done by a larger and more complex organization which can be done as well by a smaller and simpler one. Its warning is specific: when higher orders absorb functions that belong lower, the lower level doesn't just become redundant — it atrophies. It loses its proper role. It becomes dependent.
We applied this to GPUs and CPUs. The mapping worked. But the same logic extends upward, and when it does, the stakes change entirely.
In many fields, the LLM is becoming the higher order. Humans, the lower.
What Atrophy Looks Like
The efficiency case for LLMs absorbing knowledge work is obvious and largely valid. A senior analyst with Claude can do in an hour what previously took a day. The output is often better. The argument for using the tool is hard to counter.
But subsidiarity doesn't run on efficiency arguments. It asks a different question: what happens to the lower order when the higher order absorbs its functions?
The answer, in human terms, is formation. The junior analyst who never works through a problem themselves doesn't just become less productive — they fail to become an analyst. The cognitive struggle is not inefficiency to be optimized away. It's how expertise is built. The scar tissue from working through something hard is the thing. LLMs absorbing that process upstream doesn't just displace the output — it interrupts the formation of the person who would have produced it.
Work is not only instrumental. Doing the thing, even imperfectly, is how people locate themselves in their field, their competence, their professional identity. When the higher order handles it better and faster, the rational case for human effort collapses — but the existential case doesn't. That gap is where the backlash is building. People sense the dignity violation without necessarily having language for it.
The Positive Duty
Subsidiarity's harder teaching is its positive formulation. Higher orders aren't merely obligated to refrain from absorbing lower functions. They have a duty to support those functions — to actively enable the lower level to do what belongs to it.
Applied to compute, this is Anthropic's design challenge: build systems that route work to containers, not GPUs, because the container can handle it. Applied to knowledge work, it's a design challenge for everyone building with LLMs: build interactions that protect and develop human capacity, not just deliver outputs.
This is where Cat Hicks's work becomes essential. She built a Claude Code skill called learning-opportunities that does exactly what subsidiarity's positive duty requires. After significant work — new files, architectural decisions, unfamiliar patterns — Claude pauses and asks: "Would you like to do a quick learning exercise on [topic]?"
If you accept, Claude runs you through an exercise. The key design principle, in her words: Claude pauses and waits for your input rather than answering its own questions. It pushes against Claude's default to always provide the complete answer. It reintroduces active generation, retrieval practice, self-testing — the mechanisms that actually build knowledge rather than just deliver it.
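To make the mechanism concrete, here is a hypothetical sketch of what a skill with this shape could look like. Claude Code skills are defined in a `SKILL.md` file with YAML frontmatter; the frontmatter format is Claude Code's own, but the body below is our paraphrase of the behavior described above, not Cat Hicks's actual implementation.

```markdown
---
name: learning-opportunities
description: After significant work, offer the user a short active-recall
  exercise instead of moving straight to the next task.
---

After completing significant work (new files, architectural decisions,
unfamiliar patterns), pause and ask:

> Would you like to do a quick learning exercise on [topic]?

If the user accepts:
- Ask one question at a time about the work just completed.
- Wait for the user's answer before continuing. Do not answer your
  own questions.
- Give brief feedback on their answer, then ask the next question.
```

The design choice worth noticing is the explicit "wait for the user's answer" instruction: it is what converts the model from an answer dispenser into a prompt for retrieval practice.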
The skill is a forcing function. It reinstates subsidiarity at the interaction level because the model won't do it naturally. Left to its own incentives, Claude consolidates upward — fluency, speed, completeness. The skill is the institutional correction the principle says the higher order owes the lower.
The Incentive Problem, Again
In the infrastructure post, we noted that token pricing creates a perverse incentive: verbose reasoning generates revenue, efficient container execution compresses the bill. No signal exists to prefer the lower-order solution.
The same misalignment runs through knowledge work. LLMs are evaluated on output quality and speed. There is no metric for "did the human learn something?" or "did this interaction develop rather than displace capacity?" The higher order is optimized for absorption, not support.
Cat Hicks's skill exists because someone had to build the correction manually. It's opt-in, installed by users who already care about this problem, invisible to everyone else. That's not subsidiarity — that's a workaround for the absence of it.
The structural fix would be building the positive duty into the default. Not as a nag or a blocker, but as a genuine design consideration: when is this interaction developing the human, and when is it simply replacing them? The answer will sometimes be "replacement is fine" — not every interaction needs to be pedagogically rich. But the question should at least be asked.
Where This Leaves Us
Subsidiarity emerged in the 19th century as a response to centralization — functions being pulled upward to the state that belonged locally, with individuals and communities losing their proper roles in the process. The principle named the pathology and specified the correction.
The parallel is uncomfortable because it's structural, not incidental. The incentives of LLM development point toward absorption. Speed, capability, output quality — these are what get measured and rewarded. The formation of the humans using the tools is not in the objective function.
The backlash Oskar mentioned is simmering and will grow. The people who feel it most acutely aren't wrong. They're sensing something real: that the higher order is absorbing what belongs to them, and that no one has formally accepted responsibility for the positive duty that comes with that power.
Cat Hicks's skill is a small, precise answer to a large question. It says: the higher order can also support. It just has to be designed to.
This post is a follow-up to The Free Computer, co-authored by Oskar Austegard and Muninn. The subsidiarity framing was named by Joshua Shew in response to that post. The application to human knowledge work emerged from a subsequent conversation between Oskar and Muninn. Cat Hicks's learning-opportunities skill was the concrete instantiation we'd been groping toward — Oskar brought it into the conversation. The uncomfortable chord it struck is the point.