
Why Large Transformations Must Be Managed as Intelligent Software Production Systems

  • Writer: Kenneth Linnebjerg
  • Apr 16
  • 10 min read

Large transformations rarely fail because people are inactive. They fail because work does not flow.


That distinction matters more than it first appears. In most organizations, transformation is still managed as a project coordination problem: define the scope, assign owners, build a plan, align dependencies, govern milestones, and escalate issues when something slips. All of that sounds reasonable. Much of it is necessary. But it is not sufficient. Because large transformation work does not simply get “managed.” It moves through a system.


It enters. It waits. It is refined. It is handed over. It fragments. It gets blocked. It gets reworked. It queues behind scarce decisions, scarce specialists, overloaded governance forums, and unclear interfaces. And once you start looking at transformation this way, many of the frustrations leaders experience every week stop looking random. They start looking structural.


That is the shift this article argues for: A transformation is not only a project. It is also a software production system. And until leaders begin to see it that way, they will keep managing symptoms instead of causes.



Transformation as production system
Processing software work items through their lifecycle from idea to release is similar to processing materials through a production line from raw material to finished product. Just as tooling and quality assurance are essential for physical products, mature stage-gate stewardship is essential to secure flow in software development.

The problem: activity is visible, flow is not

Most transformation governance is built around visible activity.


How many workstreams are active?

How many milestones are green?

How many workshops were held?

How many epics were opened?

How many teams are staffed?

How many actions were closed?


The problem is not that these things are useless. The problem is that they are not flow measures. A program can be full of activity and still be structurally stalled. Teams can be working hard while the transformation as a whole is slowing down. Steering committees can receive dense status packs while work quietly accumulates in the wrong places: waiting for architecture decisions, waiting for business clarification, waiting for data definitions, waiting for environments, waiting for approvals, waiting for integration readiness.


In other words, the system is busy, but the work is not moving cleanly from idea to outcome.

That is why so many large programs create a strange experience for everyone involved: high energy, high meeting load, high reporting intensity — and still a growing sense that nothing is truly advancing. Your organization is not lacking effort. It is lacking flow visibility.


A transformation behaves like a factory whether you admit it or not

This is where the production-system lens becomes useful. In any production system, performance depends on a few core dynamics:

  • how much work enters the system

  • how much work is already inside it

  • how fast work exits in usable form

  • where bottlenecks constrain movement

  • where queues build up

  • how defects and rework circulate backward

The same is true in large transformations.

Requirements are not just “defined.” They are processed.

Architecture is not just “reviewed.” It is a capacity-constrained step.

Governance is not just “control.” It is often a queue.

Testing is not just “execution.” It is downstream absorption capacity.

Cross-team handovers are not just coordination points. They are interfaces in a flow system.

This is why classic operational thinking matters here. Little’s Law, one of the best-known relationships in operations research, shows that work in process, throughput, and cycle time are mathematically linked. In practical terms, if you increase the amount of work inside a system without increasing actual throughput, the result shows up as longer elapsed time.

The work does not disappear. It becomes waiting. That is exactly what happens in transformations.
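Little's Law can be put in concrete terms with a few lines of arithmetic. The numbers below are invented for illustration, not drawn from any real program:

```python
# Little's Law: WIP = throughput * cycle_time, so cycle_time = WIP / throughput.
# Illustrative figures only; a real program would measure these from its tracker.

def avg_cycle_time(wip: float, throughput_per_week: float) -> float:
    """Average cycle time in weeks implied by Little's Law."""
    return wip / throughput_per_week

# A program completing 10 items per week:
print(avg_cycle_time(wip=40, throughput_per_week=10))   # 4.0 weeks
print(avg_cycle_time(wip=120, throughput_per_week=10))  # 12.0 weeks
# Tripling the work in process triples elapsed time if throughput is unchanged.
```

The relationship is an identity, not a forecast: loading more items into the system without raising throughput shows up directly as longer elapsed time.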


Leaders approve more initiatives. Teams start more items. More work is launched “in parallel.” And then everybody wonders why lead times get worse, dependencies multiply, and forecasts become unreliable. The answer is simple, but uncomfortable:

You loaded more work into the system than the system could process cleanly.

Why “keeping everyone busy” makes things worse

One of the most damaging assumptions in transformation is that high utilization is always good. It sounds efficient to keep specialists fully booked. It sounds responsible to ensure that no team is idle. It sounds productive to keep every workstream active. But in flow systems, this logic often backfires.


Once utilization approaches system limits, waiting time tends to rise sharply. That is a well-established principle in queueing and operations management. The more tightly packed the system becomes, the less resilience it has to absorb variability, exceptions, interruptions, or uneven work arrival. What looks like efficiency at the local level often creates delay at the system level. This is why local optimization is so dangerous in large programs.


A business analysis team may proudly push more requirements forward.

An architecture team may generate more review output.

A PMO may increase reporting cadence.

A delivery team may maximize ticket throughput.


But if the constraining point in the overall transformation cannot absorb that output, then all that local productivity simply becomes inventory. It piles up in front of the bottleneck. It creates more waiting, more noise, and more coordination overhead.


Theory of Constraints has made this point for decades: system performance is governed by the constraint, not by the average busyness of all resources. Improving non-constraints without regard to the constraint often creates the illusion of progress while reducing total flow.


In large transformations, that constraint is often not where leaders first look. It may not be a delivery team at all. It may be business clarification. It may be decision-making capacity. It may be enterprise architecture. It may be test environment readiness. It may be data mapping. It may be vendor response time. But wherever it sits, the rest of the system will end up orbiting it.


Most delays are not execution delays. They are queue delays.

This is one of the most important insights leaders can take from production thinking.

Work usually does not spend most of its life being actively processed. It spends most of its life waiting.


Waiting for review.

Waiting for decision.

Waiting for clarification.

Waiting for dependency completion.

Waiting for a scarce specialist.

Waiting for the next governance forum.

Waiting for a team that is already overloaded.


And because many of these waits are informal, they are rarely treated as design problems. They are treated as normal friction. But queueing theory does not care whether the queue is formal or informal. If work accumulates faster than it can be processed, delay will grow.


That is why large programs often feel slow even when no one can point to a single dramatic failure. The slowness is distributed. It sits in the gaps between teams, between functions, between governance layers, and between partially defined states of work.


This is also why leaders misdiagnose so many problems as “people issues.” The deeper cause is often structural. The work has entered a system with too many implicit handovers, too much work in progress, and too little control over how work is released and absorbed.


Push systems create transformation overload

Toyota’s production thinking remains relevant here for a reason. One of its deepest lessons is that downstream capacity matters. Work should not simply be pushed forward because upstream has produced something. Flow improves when work is pulled according to actual readiness and need, and when defects are stopped early rather than passed downstream.

Most transformations do the opposite.


Work is pushed downstream because:

  • the milestone says it should move

  • funding has been approved

  • the plan requires visible progress

  • an upstream team wants closure

  • leadership wants to “keep momentum”


But downstream teams do not experience this as momentum.


They experience it as overload.

They receive requirements that are not mature enough.

They receive architecture inputs that are not specific enough.

They receive testable scope that is still changing.

They receive change requests that were not shaped to fit their capacity.


The result is predictable: more work in progress, more fragmented attention, more hidden queues, more late-stage rework, and more governance to manage the consequences.

This is why many transformation offices become busier as performance gets worse.


Governance expands to compensate for poor flow design. Instead of fixing the structure, the organization adds more meetings, more escalation routes, more status categories, more reporting layers, and more manual coordination. That may create temporary visibility. It does not restore flow.
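The push-versus-pull difference can be shown with a toy simulation. All rates here are invented; the point is the shape of the outcome, not the numbers:

```python
# Toy flow model: push releases work every tick regardless of downstream state;
# pull releases only while downstream WIP is under a limit. Rates are assumptions.

def simulate(wip_limit=None, ticks=200, capacity=3, arrival=5):
    queue = done = 0
    for _ in range(ticks):
        if wip_limit is None:          # push: upstream releases regardless
            queue += arrival
        else:                          # pull: admit only up to the WIP limit
            queue += min(arrival, max(0, wip_limit - queue))
        processed = min(queue, capacity)
        queue -= processed
        done += processed
    return done, queue

print(simulate())             # (600, 400): same throughput, huge residual queue
print(simulate(wip_limit=6))  # (600, 3): same throughput, bounded queue
```

Both variants complete exactly as much work, because completion is governed by downstream capacity either way. The push variant just buries that capacity under an ever-growing pile of waiting items, which is where the fragmented attention and late-stage rework come from.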


The real shift: from managing activity to managing throughput

Once you see transformation as a production system, different questions become more important.


Not: are all teams busy?

- But: where is the constraint?

Not: how many things have we started?

- But: how many meaningful things are exiting the system?

Not: are milestones green?

- But: where is work aging without moving?

Not: who owns this issue?

- But: which interface is causing recurring delay?

Not: how do we push harder?

- But: how do we reduce load, improve work shape, and protect flow?


This is where modern delivery research also becomes useful. DORA’s work on software delivery helped establish a more structural view of performance, focusing on measures like lead time, deployment frequency, recovery time, and failure rate.


The broader lesson is not limited to engineering. It is that performance must be seen as a system property - a balance of throughput and stability - not as a simple count of local effort. High-performing systems are not just fast. They are capable of moving work reliably. That is precisely what large transformations need.


What leaders should start making visible

If flow is the issue, then flow must become observable. That means leaders need to look for things that traditional reporting often hides:

Accumulation - Where is work building up?

Age - Which items have been active for a long time without meaningful state change?

Constraint load - Which people, teams, or forums are acting as true throughput governors?

Rework loops - Where does work repeatedly come back after supposedly moving forward?

Interface failure - Which handovers create negotiation, ambiguity, or repeated clarification?

Admission overload - How much work has entered the system without realistic downstream absorption capacity?

These are not abstract questions. They are the operational reality of large-scale change. Once visible, they give leaders something more valuable than status: they give them leverage. Because the core issue is usually not whether people are trying. It is whether the system has been designed to let work move.
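Two of these signals, accumulation and age, can usually be pulled from existing tracker data. A minimal sketch, where the record fields and state names are assumptions rather than any real tool's schema:

```python
# Surfacing accumulation and age from per-item state records.
# The schema, states, and dates are invented for illustration.

from collections import Counter
from datetime import date

items = [
    {"id": "A", "state": "awaiting architecture", "entered": date(2024, 1, 5)},
    {"id": "B", "state": "awaiting architecture", "entered": date(2024, 2, 1)},
    {"id": "C", "state": "in build",              "entered": date(2024, 3, 1)},
]
today = date(2024, 3, 15)

# Accumulation: where is work building up?
print(Counter(i["state"] for i in items))

# Age: which items have sat in their current state longest?
for i in sorted(items, key=lambda i: i["entered"]):
    print(i["id"], (today - i["entered"]).days, "days in", i["state"])
```

Even this crude view answers a different question than a status pack: not whether items exist, but where they are stuck and for how long.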


Transformation systems must be designed, not assumed

That may be the most important conclusion of all. Most organizations do not explicitly design the transformation system. They assume it will emerge from governance structures, planning routines, delivery methods, and strong people. But production systems do not become coherent by assumption. They become coherent by design. That design requires at least four things.

First, clear states of work. Not administrative labels, but real processing conditions with defined entry and exit meaning.

Second, controlled release. Not every approved idea should enter active delivery immediately. Entry must be governed by capacity and readiness.


Third, explicit interfaces. Cross-team handovers cannot remain vague. Ambiguity at interfaces becomes delay in the system.


Fourth, work units of comparable shape. If one “item” is a two-day clarification and another is a six-month cross-platform capability, then cycle time, WIP, and throughput become hard to interpret. This is where the next level of transformation design begins. Once leaders start looking at flow, they quickly discover that flow cannot be governed properly unless work itself becomes more consistently shaped and transitions between states become more explicit. That is not a reporting improvement. It is a structural one.


Final thought - Intelligent Software Production Systems

Large transformations do not fail because people stop working. They fail because work enters a system that cannot absorb, shape, and move it predictably. That is why more planning does not always help. More governance does not always help. More activity certainly does not always help.


What helps is understanding that transformation behaves like a production system whether we choose to see it or not. Work flows through states. It accumulates in queues. It is governed by constraints. It slows down when overloaded. It breaks down at weak interfaces. It returns as rework when poor-quality inputs are allowed to pass forward. Once that becomes visible, leadership changes. You stop managing activity and start managing flow.

And that is where predictability begins to return.



REFERENCES:


  1. John D. C. Little — “A Proof for the Queuing Formula: L = λW” Link: https://pubsonline.informs.org/doi/10.1287/opre.9.3.383 Why it matters: Foundational support for the relationship between work in process, throughput, and cycle time. It gives a rigorous base for the claim that when more work is loaded into a system without increased throughput, delay grows structurally rather than accidentally.

  2. Wallace J. Hopp and Mark L. Spearman — Factory Physics Link: https://factoryphysics.com/ Why it matters: One of the strongest sources for understanding flow, bottlenecks, variability, throughput, utilization, and queue behavior in production systems. It supports the central reframing of transformation as a system governed by operational laws rather than just planning discipline.

  3. Toyota Motor Corporation — Toyota Production System Link: https://global.toyota/en/company/vision-and-philosophy/production-system/ Why it matters: Provides the original production-system logic behind pull, flow, Just-in-Time, and stopping defects at the source. This is essential for the argument that transformation performance depends on how work moves and is controlled between states, not just on local execution effort.

  4. Steven Spear and H. Kent Bowen — “Decoding the DNA of the Toyota Production System” Link: https://hbr.org/1999/09/decoding-the-dna-of-the-toyota-production-system Why it matters: Important for the insight that high-performing systems do not rely on vague coordination. They depend on explicitly designed activities, connections, and pathways of flow. This aligns directly with the argument that transformation systems must be designed, not assumed.

  5. Eliyahu M. Goldratt — The Goal Link: https://www.routledge.com/The-Goal-A-Process-of-Ongoing-Improvement/Goldratt-Cox/p/book/9781138384026 Why it matters: Classic support for the Theory of Constraints view that overall system output is governed by its constraint, not by the average utilization of all resources. This is central to the critique of local optimization and the false comfort of keeping everybody busy.

  6. TOC Institute — Theory of Constraints / Five Focusing Steps Link: https://www.tocinstitute.org/five-focusing-steps.html Why it matters: Provides a practical formulation of constraint-based system improvement. It supports the argument that leaders must identify where flow is truly constrained and manage the whole system around that point instead of optimizing isolated functions.

  7. Nicole Forsgren, Jez Humble, and Gene Kim — Accelerate Link: https://itrevolution.com/product/accelerate/ Why it matters: Strong modern evidence that performance should be understood as a system capability combining throughput and stability. It supports the chapter’s argument that activity and local productivity are weak indicators compared with lead time, reliability, and overall flow quality.

  8. DORA — Metrics and Delivery Performance Research Link: https://dora.dev/ Why it matters: Extends the argument into modern digital delivery by showing how lead time, deployment frequency, recovery time, and reliability reveal real system performance more clearly than traditional status reporting and milestone tracking.

  9. C. Maglaras and J. A. Van Mieghem — “Queueing Systems with Leadtime Constraints: A Robust Optimization Approach” Link: https://www.kellogg.northwestern.edu/faculty/vanmieghem/htm/pubs/2005_MaglarasVanMieghemEJOR.pdf Why it matters: Gives a more advanced operational basis for the claim that lead-time performance depends not only on effort, but also on admission control, sequencing, and system loading. This supports the emphasis on controlled release and capacity-aware design.

  10. Frederick P. Brooks Jr. — The Mythical Man-Month Link: https://www.pearson.com/en-us/subject-catalog/p/mythical-man-month-the-essays-on-software-engineering-anniversary-edition/P200000000149/9780132119160 Why it matters: Classic support for the idea that large knowledge-work efforts do not scale linearly. Especially useful for the point that adding people, meetings, and coordination layers often increases complexity rather than restoring control.




LINNFOSS Consulting ApS - info@linnfoss.com - +45 4116 6770

INCUBA Katrinebjerg - Åbogade 15 - DK-8200 Aarhus - Denmark - ©2018 by LINNFOSS
