Quantum Work: Why Transformations Stall When Work Has the Wrong Shape
- Kenneth Linnebjerg

- Apr 30
- 7 min read
Large transformations often fail for reasons that look operational on the surface but are structural underneath.
Teams seem busy. Governance is active. Roadmaps are full. Reporting is frequent. Yet progress remains slower than expected, dependencies multiply, and the same work keeps coming back in slightly different forms.
Leaders usually respond by adding more control: more meetings, more tracking, more refinement, more escalation. But in many cases the real problem sits elsewhere.
The work itself is the wrong shape.
That is the essence of what I call Quantum Work: the natural unit of change. It is the smallest meaningful unit of work that can move through a transformation system without losing coherence, without creating avoidable ambiguity, and without generating structural rework.
This may sound like a backlog topic. It is not. It is a transformation design topic.
Most organizations have never defined what a manageable work unit actually is. Work enters delivery as epics, features, tasks, enablers, defects, work packages, and requirements.
Some of these are too large to move cleanly. Some are too vague to validate. Others are so fragmented that they no longer represent meaningful change. The result is not just planning difficulty. It is structural instability. [1][2]

Why this problem is so common
Almost everyone working in a large transformation has seen the same pattern.
A feature looks reasonable in governance. Once refinement begins, it turns out to contain process change, integration logic, data consequences, testing implications, business exceptions, and architectural questions. It remains one item because that is how it was funded or reported. Weeks later, the team is still “working on it,” but the actual boundary of the work has shifted several times.
The opposite also happens. Work is broken down so aggressively that the original outcome disappears. Progress is measured through technical sub-tasks, but nothing meaningful is actually ready. The system shows motion, but not usable progress.
These two situations appear different. Structurally, they are the same. The organization does not have a disciplined definition of what a flowable unit of work looks like.
Without that definition, planning becomes unstable, estimation becomes inconsistent, sequencing becomes political, and governance starts reacting to artifacts rather than real flow. The transformation becomes active, but not governable.
Why work size is really a system issue
This is where many organizations make a category mistake. They treat work sizing as a team-level refinement matter. It is not. It is a system-level flow variable.
Donald Reinertsen makes this point clearly in product development. Large batch sizes often look efficient on paper, but they increase queues, delay feedback, and slow the system as a whole. Smaller batches reduce cycle time and increase learning speed. [1]
That logic applies directly to transformation work. A large feature is not just a large requirement. It is a batch moving through a constrained system.
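Reinertsen's batch-size argument can be made concrete with a toy model (the numbers and the model are illustrative assumptions, not taken from his book): work items are completed at a steady rate, but feedback only arrives when a whole batch is released. Shrinking the batch does not make any single item faster, yet it dramatically shortens how long the average item waits before anyone learns whether it was right.

```python
def avg_feedback_delay(total_items: int, batch_size: int,
                       weeks_per_item: float = 1.0) -> float:
    """Average weeks an item waits for its first feedback.

    Toy model: items are worked at a constant rate, but feedback only
    arrives when an entire batch is released, so every item in batch k
    waits until k * batch_size * weeks_per_item.
    """
    assert total_items % batch_size == 0, "keep the example divisible"
    n_batches = total_items // batch_size
    total_wait = sum(
        k * batch_size * weeks_per_item * batch_size  # wait * items in batch
        for k in range(1, n_batches + 1)
    )
    return total_wait / total_items

# Twelve one-week items, delivered as one big batch vs. single-piece flow:
print(avg_feedback_delay(12, 12))  # -> 12.0 weeks until any feedback
print(avg_feedback_delay(12, 1))   # -> 6.5 weeks on average
```

Same effort, same throughput; only the shape of the work changed, and the average feedback delay was nearly halved.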
Little’s Law sharpens the point. In queueing systems, the average number of items in a system is tied directly to the average time they spend there. If you increase work-in-progress without increasing throughput, time in system rises in direct proportion. [3]
In transformation terms, oversized work items do not only take longer themselves. They slow down everything around them. They generate more waiting, more clarifications, more coordination, and more partial progress.
This is why large transformations can look full of movement while remaining slow in substance. They are not necessarily underperforming in effort. They are carrying too many badly shaped units through too many decision points.
Smaller is necessary, but not sufficient
At this point many methods offer a familiar answer: break the work down further.
That is directionally right, but incomplete.
John Sweller’s work on cognitive load helps explain why. When too many interacting elements must be understood at once, working memory becomes overloaded. [4]
This is highly relevant to transformation work. If one work item contains business rules, process variation, integration logic, edge cases, data impact, and operational implications all at once, the team is not handling one problem. It is handling a bundle of interacting problems.
But there is also a lower limit. If the work is broken down too far, meaning disappears. The organization may reduce local complexity only to create a different form of system burden: stitching burden. The item becomes easier to assign but harder to understand as part of a real outcome.
That is why Quantum Work is not the smallest possible piece. It is the smallest meaningful executable whole.
The hidden importance of coherence
Fred Brooks pointed to something deeply relevant here through the idea of conceptual integrity. Systems become easier to reason about when their parts reflect coherent design logic rather than a pile of disconnected local optimizations. [5] That same principle applies to work.
A work item becomes manageable not simply because it is small, but because its internal elements belong together. The business intention, the technical implication, and the acceptance logic are close enough to travel through execution as one coherent object.
Herbert Simon’s idea of near-decomposability helps make the same point from systems theory. Complex systems become manageable when internal relationships within a bounded unit are stronger than the relationships between units. [6] This is exactly what many transformation items lack.
They are named as one thing, but structurally they contain several loosely related concerns. The seams only become visible once delivery begins.
So the real issue is not work volume alone. It is whether the work has enough internal coherence to behave like a real delivery unit.
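Near-decomposability can be sketched as a simple coupling check. This is an assumed toy model, not a metric from Simon's paper: treat a work item as a set of named concerns, treat dependencies as pairs of concerns, and ask what fraction of the coupling that touches the item stays inside it.

```python
def internal_ratio(item: set[str],
                   dependencies: list[tuple[str, str]]) -> float:
    """Fraction of dependencies touching `item` that stay inside it.

    Close to 1.0 means the item is nearly decomposable: its internal
    relationships dominate its relationships with the outside.
    """
    touching = [(a, b) for a, b in dependencies if a in item or b in item]
    if not touching:
        return 1.0  # no coupling at all counts as fully self-contained
    internal = [(a, b) for a, b in touching if a in item and b in item]
    return len(internal) / len(touching)

# Hypothetical concerns and dependencies for one candidate work item:
deps = [
    ("pricing-rule", "pricing-ui"),    # internal to the item
    ("pricing-rule", "pricing-test"),  # internal to the item
    ("pricing-ui", "billing-api"),     # crosses the boundary
]
item = {"pricing-rule", "pricing-ui", "pricing-test"}
print(round(internal_ratio(item, deps), 2))  # -> 0.67
```

A real assessment would never be this mechanical, but the question it encodes is exactly the one the text raises: do this item's parts belong together more than they belong to everything else?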
What Quantum Work really means
A Quantum Work item has a clear purpose. It creates a meaningful state change rather than describing a vague activity. It has bounded scope. Its interfaces are visible enough that the team can understand where the work begins and where external dependency starts.
It preserves coherence. The business logic, implementation intent, and validation logic remain close enough to move together. And it admits feedback. It can be reviewed, tested, and judged before too much downstream work has accumulated.
This is where DORA’s work on small batches is useful. High-performing delivery systems tend to work in units that are small, independent, valuable, and testable. [2] That is very close to the logic of Quantum Work. But Quantum Work stretches that logic beyond software delivery into the broader transformation system. It asks not only whether the item can be coded, but whether it can pass through business shaping, governance, delivery, and validation without becoming distorted.
Why rework is usually telling you something structural
Organizations often treat rework as an execution problem. Someone misunderstood a requirement. A team moved too early. Testing was not thorough enough.
Sometimes that is true. But in large transformations, repeated rework is often a signal that the original work unit was wrongly shaped.
If the unit was too large, learning came too late. Multiple assumptions were bundled into one item, and the error only surfaced after architecture, build, or testing had already invested effort.
If the unit was too fragmented, the team optimized pieces that later failed to combine into a coherent capability.
In both cases, rework is not just a quality issue. It is a structural signal. The system was forced to learn at the wrong level of resolution. Reinertsen’s batch logic, DORA’s feedback logic, and Goldratt’s flow logic all point in the same direction: when learning arrives too late, cost compounds. [1][2][7]
Why this matters for governance
Governance cannot stabilize work that has no stable unit. If the work entering governance is too large, decisions become vague. Steering groups approve direction rather than concrete movement. Architecture boards review abstractions. Reporting becomes narrative-heavy because nothing observable is truly complete.
If the work is too fragmented, governance drowns in detail. There is status everywhere, but meaning nowhere.
Quantum Work creates the missing middle. It gives the transformation a unit small enough to support flow and large enough to support governance. That is why it matters so much. Without a stable natural unit of change, every later layer in a transformation model becomes unstable: work hierarchy, work types, state transitions, and flow governance all remain interpretive.
With it, the system becomes more observable. Work can be shaped, challenged, sequenced, counted, and validated as real movement rather than activity noise.
The deeper shift
The important shift is this:
A work item is not good because it has been written down. It is not good because it has an owner. It is not good because it has been estimated. It is not good because it fits a template.
It is good when it is shaped so that the system can process it without structural distortion.
That moves the conversation away from ceremony and toward flow physics: queueing, cognitive load, coordination cost, feedback speed, and bounded execution. Once seen this way, many persistent transformation problems look different. The team that “cannot estimate” may be receiving non-quantum work. The governance forum that “creates bottlenecks” may be acting on items too broad to decide cleanly. The testing function that “always finds issues late” may simply be downstream of work units too large for early validation.
Quantum Work does not remove complexity. It gives complexity a form that can move.
That is why it is the foundation of flow.
REFERENCES:
[1] Donald G. Reinertsen — The Principles of Product Development Flow, Chapter 1 sample. URL: https://lpd2.com/wp-content/uploads/2013/06/ReinertsenFLOWChap1.pdf
[2] DORA / Google Cloud — Capabilities: Working in Small Batches. URL: https://dora.dev/capabilities/working-in-small-batches/
[3] John D. C. Little — A Proof for the Queuing Formula: L = λW. URL: https://pubsonline.informs.org/doi/10.1287/opre.9.3.383
[4] John Sweller — Cognitive Load During Problem Solving: Effects on Learning. URL: https://onlinelibrary.wiley.com/doi/10.1207/s15516709cog1202_4
[5] Frederick P. Brooks, Jr. — The Mythical Man-Month: Essays on Software Engineering (sample pages). URL: https://ptgmedia.pearsoncmg.com/images/9780201835953/samplepages/0201835959.pdf
[6] Herbert A. Simon — The Architecture of Complexity. URL: https://faculty.sites.iastate.edu/tesfatsi/archive/tesfatsi/ArchitectureOfComplexity.HSimon1962.pdf
[7] Eliyahu M. Goldratt — Standing on the Shoulders of Giants. URL: https://businesswales.gov.wales/sites/main/files/documents/Standing-on-the-Shoulders-of-Giants.pdf