Why Agile, SAFe and PM Frameworks Don't Fix Structural Failure in Digital Transformation
Kenneth Linnebjerg
When the real bottleneck is missing or low-quality “delivery content”
Most organizations don’t adopt Agile, SAFe, or stronger project governance because they enjoy changing methods. They adopt frameworks because something feels unsafe: delivery is unpredictable, dependencies are opaque, quality becomes negotiable late, and management loses the ability to steer with confidence. A framework promises relief. It offers cadence, roles, visibility, and a shared language for coordination.
And yet a familiar disappointment follows in many enterprise transformations: the organization implements the framework, teams work hard, ceremonies run on time, dashboards look more disciplined - while the program still stalls, still surprises itself, still accumulates rework.
This is not because frameworks are useless. It is because many programs are constrained by something frameworks do not create: the content of delivery - the refined, testable, implementable definition of what must be built.

A real-life modernization stall: capability exists, but knowledge is missing - the digital transformation of a product catalog application
Consider a situation that is increasingly common in large enterprises. You have a long-lived, home-built catalog system that has served the company well for many years. It may not be “modern” in the architectural sense, but it works - because it embodies years of decisions: product structures, accessory relationships, special rules, edge cases, and operational behavior that downstream systems and users have learned to rely on. Over time, the organization stopped thinking of this behavior as “rules” and started treating it as “how reality works.”
Now the enterprise decides to replace that catalog system with a commercial platform - such as SAP Commerce Cloud PCM - to standardize product content management, simplify maintenance, enable modern integrations, and support a future architecture. The strategic direction is sensible. The high-level intent is clear: the new platform must represent all products and accessories, and it must function operationally as the old system did, because downstream processes depend on it.
Then a quiet discontinuity occurs: the people who built, designed, and orchestrated the catalog system retire or relocate. With them disappears the cohesive mental model of what the system really does at a requirement level - what the rules are, which exceptions are intentional, which “weirdness” is business-critical, and how the behavior is encoded. Documentation exists, perhaps, but not in a form that can be handed to a delivery vendor or a new internal team as an implementable specification. The truth is mostly in code and scattered artifacts.
At this point, the program often enters a distinctive state: activity continues, but certainty does not grow.
Teams can prepare environments, configure baseline platform capabilities, build integration scaffolding, and plan increments. Architects can draw high-level target architecture. Product Owners can prioritize themes. Governance can run. But when the organization cannot express legacy behavior as explicit, testable requirements, the delivery engine becomes structurally constrained. The work shifts into one of three modes:
- Guessing, which creates rework when legacy behavior is rediscovered later
- Waiting, which collapses throughput and turns governance into repeated deferral
- Reconstructing, which is necessary (reverse engineering and discovery), but often under-acknowledged, under-funded, and under-structured
This is the heart of structural failure: the knowledge supply chain is broken. It is also why the catalog example belongs at the start of any discussion of Agile/SAFe/PM frameworks:
Frameworks can organize delivery work, but they cannot substitute for missing operational knowledge. When the “content” of delivery - requirements, rules, behavioral equivalence, testable acceptance - is absent or trapped in code, the framework becomes a coordination mechanism around uncertainty rather than a system for producing outcomes.
In other words: the program doesn’t fail because it lacks method. It fails because it lacks material. The ceremonies can be perfect, the tooling can be modern, the teams can be skilled - and still the system cannot move predictably because it does not have the refined and validated definition of what it is meant to build.
What frameworks are good at - and what they quietly assume
It helps to be precise about what frameworks actually provide.
Agile methods and SAFe are, at their core, coordination technologies. They define cadences (sprints, increments), roles (PO/PM, teams, architects), and artifacts (backlogs, plans, demos). Traditional project frameworks add governance constructs: scope control, milestones, reporting, and risk structures.
Those are valuable tools - when the operating conditions are viable. Most frameworks quietly assume:
- There is a definable “thing” to build (even if it evolves)
- The organization can decide fast enough to support the cadence
- Work can be shaped into units small enough to implement and validate
- Ownership is coherent enough that decisions and acceptance are not permanently contested
- Truth is safe enough that risks can surface early
In a legacy replacement program with missing system understanding, the first assumption is violated. Not because the business is unclear about its goals, but because the legacy behavior is not explicit. “Make the new system behave like the old one” is directionally clear but operationally vague. It is not implementable until it is decomposed into testable slices.
That is the moment where a framework can organize meetings and plans - but it cannot manufacture the missing definitional asset.
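To make “decomposed into testable slices” concrete, here is a minimal sketch of one such slice expressed as a characterization test. Everything in it - the product IDs, the accessory rule, and both lookup functions - is a hypothetical stand-in, not a real catalog API; the point is the shape: one observed legacy behavior pinned down as an executable pass/fail assertion.

```python
# Hypothetical stand-ins: neither function is a real API. In practice the
# "legacy" side would query the old catalog (or replay a captured data
# trace) and the "new" side would query the replacement platform.

def legacy_compatible_accessories(product_id: str) -> set[str]:
    fixture = {"PUMP-1042": {"SEAL-KIT-7", "HOSE-3M", "SRV-PLAN-A"}}
    return fixture[product_id]

def new_compatible_accessories(product_id: str) -> set[str]:
    fixture = {"PUMP-1042": {"SEAL-KIT-7", "HOSE-3M", "SRV-PLAN-A"}}
    return fixture[product_id]

def test_discontinued_product_keeps_service_accessories() -> None:
    # Behavior statement: discontinued products must still expose their
    # service accessories, because downstream ordering depends on them.
    assert (new_compatible_accessories("PUMP-1042")
            == legacy_compatible_accessories("PUMP-1042"))

test_discontinued_product_keeps_service_accessories()
print("slice verified: behavioral equivalence holds for PUMP-1042")
```

A slice like this is small enough to implement, cheap to validate, and hard to argue with in a steering meeting - which is exactly what a framework needs as input.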
The underlying forces: why the digital transformation stall repeats even in “mature” organizations
This is where it becomes useful to talk about structural forces rather than team performance. Structural forces are the conditions that make outcomes predictable regardless of effort.
In legacy replacement programs like the catalog modernization, several forces tend to reinforce each other:
1) Knowledge trapped in code (and in people who are gone)
The truth exists but is inaccessible. If a system has been built over many years, a large portion of its “requirements” is emergent behavior. When the person holding the integrated model leaves, the organization loses the ability to describe the system coherently.
2) Decision latency becomes infinite
A standard delivery blocker is “we are waiting for a decision.” In this scenario, the deeper problem is “nobody can decide because nobody can confidently define what must remain true.” Decisions become deferrals. Deferrals become assumptions. Assumptions become rework.
3) Work cannot be shaped into governable units
Backlog items remain too large or too ambiguous: “Set up the catalog,” “Model all accessories,” “Make it like before.” Those are not deliverable units. They are discovery programs disguised as backlog items.
4) Governance demands predictability while the definition is incomplete
This produces a common distortion: planning becomes a performance artifact rather than a learning instrument. Teams create partial progress signals - configurations, placeholders, interface stubs - to demonstrate movement, while the core uncertainty remains unresolved.
5) Interfaces multiply
Legacy replacement is rarely just “install a new tool.” It is often entangled with downstream integrations, data consumption patterns, and operational workflows. If ownership boundaries do not match the system boundary, the coordination tax becomes a structural drag.
None of these forces are “fixed” by running more ceremonies or producing more reports. A report can describe missing knowledge, but it cannot generate it.
The key reframing: modernization in digital transformation is often a refinement problem, not a delivery problem
When a program stalls, the default reaction is to “strengthen delivery”: tighten Scrum discipline, strengthen governance, demand better estimation, improve reporting, accelerate PI planning, enforce firmer sprint commitments.
Sometimes that helps. But in the catalog scenario, it is often misapplied. The actual bottleneck is upstream: refinement and definition.
The program needs a systematic mechanism to convert legacy behavior into explicit, testable, governable work. This is not optional overhead. It is the product of the modernization effort. If it is not treated as first-class work - with ownership, capacity, artifacts, and acceptance - the program will either guess (creating rework) or freeze (creating delay).
This is also where a deeper point becomes visible: “Agile” is not automatically adaptive if the system lacks a reliable path from uncertainty to clarity. Adaptation requires structured discovery, not just flexible planning.
What fixes structural failure: patterns that create delivery content
This is the practical turn in my Transformation Patterns approach. Frameworks alone are rarely enough because they organize execution, not knowledge creation. What is needed is a set of repeatable stabilizing patterns that change the system conditions and produce inspectable evidence.
Here are three patterns that directly address the “missing content” problem in legacy catalog replacement.
Pattern 1: Legacy Behavior Externalization
Treat requirements extraction as a planned, governed work product.
The stabilizing move is simple but non-trivial: stop treating legacy behavior as background context and start treating it as deliverable content.
Each “behavior slice” should produce a compact evidence pack that answers:
- What must remain true (behavior statement)?
- What inputs and boundaries apply (data, channels, segments)?
- What examples illustrate it (including edge cases)?
- How will equivalence be proven (acceptance tests or validation rules)?
- What legacy evidence supports it (code path, data trace, observed behavior)?
This turns “the truth is in the code” into “the truth is in an evidence-backed library that teams can implement against.”
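As an illustration, the evidence pack can be as lightweight as a typed record mirroring the five questions above. The field names and sample values below are assumptions for the sketch, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class BehaviorSlice:
    behavior_statement: str           # what must remain true
    inputs_and_boundaries: list[str]  # data, channels, segments
    examples: list[str]               # illustrative cases, incl. edge cases
    equivalence_proof: list[str]      # acceptance tests / validation rules
    legacy_evidence: list[str]        # code paths, data traces, observations

# Hypothetical sample entry; names and references are illustrative only.
pack = BehaviorSlice(
    behavior_statement="Discontinued products still expose service accessories.",
    inputs_and_boundaries=["product master data", "B2B channel only"],
    examples=["PUMP-1042 keeps SEAL-KIT-7 after discontinuation"],
    equivalence_proof=["test_discontinued_product_keeps_service_accessories"],
    legacy_evidence=["legacy rule-engine code path", "order logs, Q4 sample"],
)
print(pack.behavior_statement)
```

The value is not the format; it is that every slice in the library carries its own proof obligations and its own trail back to the legacy evidence.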
Pattern 2: Governance-Compatible Quanta
Only allow work into delivery when it is shaped into a unit that can be built and validated.
When the program tries to fund and plan large uncertainty blobs, it creates inevitability: either delay or rework. A stabilizing move is to enforce a quantum definition that is small enough to implement and validate within a realistic cycle.
This prevents the common trap where leadership demands predictability while approving work that is structurally unpredictable.
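A minimal sketch of such a gate follows, assuming the program tracks a few definitional attributes on each backlog item. The fields and the ten-day threshold are illustrative choices, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    title: str
    behavior_slices: int    # validated behavior slices the item implements
    acceptance_tests: int   # executable equivalence checks attached
    estimated_days: float   # expected effort to implement and validate

def is_governable_quantum(item: WorkItem, max_days: float = 10.0) -> bool:
    """Admit an item to delivery only if it is defined and small enough
    to build and validate within one realistic cycle."""
    return (item.behavior_slices >= 1
            and item.acceptance_tests >= 1
            and item.estimated_days <= max_days)

blob = WorkItem("Model all accessories", 0, 0, 90.0)
sliced = WorkItem("Accessory rules for discontinued products", 1, 1, 4.0)
print(is_governable_quantum(blob))    # False: a discovery program in disguise
print(is_governable_quantum(sliced))  # True: implementable and provable
```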
Pattern 3: Structural Health Dashboard
Stop reporting activity; report constraint removal.
If the program’s real work is discovery and definition, then dashboards must reflect definition progress and constraint removal - not just sprint velocity and ticket throughput.
Examples of useful structural indicators in this scenario are:
- number of validated behavior slices produced per cycle
- average age of “undefined” high-impact items
- decision latency for key modeling questions
- rework ratio caused by legacy mismatch discoveries
- integration defects tied to unclear behavior
The point is not to generate more metrics. The point is to make structural constraints visible enough that leadership action can remove them.
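To show how lightweight such indicators can be, here is an illustrative computation of two of them - decision latency and the legacy-mismatch rework ratio - from hand-rolled records. In a real program the inputs would come from the backlog and decision logs; the record format here is purely an assumption:

```python
from datetime import date

# Hypothetical records: when each key modeling question was asked and
# (if at all) decided, and which completed items needed legacy rework.
decisions = [
    {"asked": date(2024, 3, 1), "decided": date(2024, 4, 15)},
    {"asked": date(2024, 3, 10), "decided": None},  # still open
]
completed_items = [
    {"reworked_due_to_legacy_mismatch": True},
    {"reworked_due_to_legacy_mismatch": False},
    {"reworked_due_to_legacy_mismatch": False},
]

today = date(2024, 5, 1)
# Open decisions accrue latency until today, so deferral stays visible.
latencies = [((d["decided"] or today) - d["asked"]).days for d in decisions]
avg_decision_latency = sum(latencies) / len(latencies)

rework_ratio = (sum(i["reworked_due_to_legacy_mismatch"] for i in completed_items)
                / len(completed_items))

print(f"avg decision latency: {avg_decision_latency:.0f} days")
print(f"rework ratio from legacy mismatches: {rework_ratio:.0%}")
```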
The takeaway: frameworks don’t fail in digital transformation - structures do
The catalog modernization example is not an argument against Agile or SAFe. It is a reminder of what frameworks can and cannot do.
Frameworks are valuable scaffolding for coordination. They help teams plan, synchronize, and inspect progress. But when a program’s limiting factor is missing or trapped knowledge, a framework does not generate the missing asset. It will organize work around uncertainty - but it will not convert uncertainty into implementable truth unless the organization deliberately builds that conversion mechanism.
That is why structural failure repeats: the organization keeps optimizing execution while the upstream knowledge supply chain remains broken.
If you want the framework to work, you must first make the system workable: externalize legacy behavior, shape it into governable quanta, and create evidence packs that allow teams and vendors to implement with confidence.



