
AI needs Human Intelligence - Stop Governing Everything Like a “Project”

  • Writer: Kenneth Linnebjerg
  • Mar 7
  • 9 min read


A pattern language for enterprise application transformation — and why it mainly improves transparency, not magically better delivery


In my last post on Agentic AI and the PMO, the key point was slightly uncomfortable: Agentic AI doesn’t automatically make a PMO better. It makes it more transparent. It exposes what is already true - good, bad, and messy - faster and with less room for storytelling. The same is true for enterprise application transformation.


Most large transformation initiatives don’t stall because people are lazy or incompetent. They stall because the organization is trying to govern fundamentally different kinds of change through one generic mental model: “a project”, “a feature backlog”, “a release plan”, “a delivery train”.


That single model acts like a fog machine. It blurs the nature of the work, it mixes evidence types, and it makes it easier to hide behind status reporting. You can keep the narrative coherent for months - sometimes for years - because the organization never agreed on what “real progress” should look like in the first place.


A pattern language for transformation does not solve the work for you. It does something more modest and more powerful:

It makes your governance truthful. It forces you to measure the right things for the right kind of change. It makes hidden mismatches visible early - before the program pays for them at scale.

This post is intended as a five-minute read: a condensed but deliberately richer reduction of Chapter 3 of my Transformation Patterns work, Enterprise Application Transformation Patterns.



Transformation Patterns - Not all projects are created equal, and each needs its own attention. Some projects are work-intensive during refinement, others during cutover. Transformation patterns explain the differences and provide valuable planning and execution insights for project and program managers.


The structural failure: one intake funnel, one backlog, one governance story


Most enterprises have invested heavily in standardizing how demand enters the organization. That’s sensible. A single funnel reduces duplication and makes prioritization explicit.

The problem begins when that funnel produces one downstream object - one backlog narrative - and everything is forced into that shape.

In practice, enterprise application change includes categories that behave very differently:

  • Some work is about structural integrity (and its value is future delivery speed and stability).

  • Some work is about moving to new ground (and the hard part is cutover and operational readiness).

  • Some work is about creating a new domain model (and the hard part is discovery + migration, not coding).

  • Some work is about adoption (and the hard part is behavior change, not deployment).

  • Some work is about subtraction (and the hard part is dependencies and political fear, not design).

When these are all governed as “feature delivery”, predictable dysfunction appears:

You ask for demos where demos are weak evidence. You demand business case ROI where value is latent and structural. You staff product owners where architecture is the bottleneck.


You use “agile” as an explanation when the deeper issue is category confusion.

The key point is not that backlogs are bad. The key point is that a backlog is a representation - and representations can lie if they compress reality too aggressively.

Transformation patterns: a classification system beneath methods and tools


A transformation pattern is an archetype of enterprise application change with stable characteristics:

  • Intent: what the initiative is fundamentally trying to change

  • Value mechanism: how benefit materializes (direct vs. latent)

  • Dominant risks: where failure tends to concentrate

  • Evidence of progress: what “truthful progress” looks like

  • Governance logic: how you should steer, fund, and staff it

  • Role gravity: which roles must carry real accountability

This is deliberately pre-method. You can deliver any of these patterns with SAFe, Scrum, PRINCE2, or hybrids. The point is that the pattern determines what good governance looks like. And like Agentic AI in the PMO, pattern classification doesn’t fix the underlying problems by itself. It makes them harder to hide.
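To make the archetype concrete, the six characteristics above can be sketched as a small data structure. This is purely illustrative - the field names and the example values for Refactor and Implement are my own shorthand, not canonical definitions from the pattern language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransformationPattern:
    """One archetype of enterprise application change."""
    name: str
    intent: str               # what the initiative fundamentally changes
    value_mechanism: str      # "direct" or "latent"
    dominant_risks: tuple     # where failure tends to concentrate
    progress_evidence: tuple  # what truthful progress looks like
    governance_logic: str     # how to steer, fund, and staff
    role_gravity: tuple       # roles that must carry real accountability

# Illustrative examples - the wording is shorthand, not canonical.
REFACTOR = TransformationPattern(
    name="Refactor",
    intent="improve internal structure without changing external behavior",
    value_mechanism="latent",
    dominant_risks=("theatre", "starvation", "hiding inside feature scope"),
    progress_evidence=("decoupled boundaries", "lead time for change",
                       "change failure rate"),
    governance_logic="fund as structural investment, not feature delivery",
    role_gravity=("architecture",),
)

IMPLEMENT = TransformationPattern(
    name="Implement",
    intent="adoption and operationalization of a delivered solution",
    value_mechanism="direct",
    dominant_risks=("parallel processes remain", "underfunded support"),
    progress_evidence=("adoption metrics", "retired parallel processes"),
    governance_logic="shared IT/business accountability",
    role_gravity=("business leadership", "operations"),
)
```

Writing the characteristics down this explicitly is the point: two initiatives with the same backlog tooling can have opposite value mechanisms and role gravity.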


The eight patterns (and what they reveal)


I use eight patterns as a practical working set for enterprise applications:

  1. Refine

  2. Refactor

  3. Replatform

  4. Rebuild

  5. Replace

  6. Retire

  7. Enable

  8. Implement

An enterprise often has multiple patterns in one initiative. That is normal. What is not normal - but extremely common - is pretending that secondary patterns don’t exist. That is how you end up “just replatforming” while actually rebuilding and implementing and retiring, without staffing or governance for it.


Below is a compact description of each pattern, in a more flowing format than a checklist - while still making the differences crisp.


1) Refine - measurable business capability evolution

Refine is incremental business capability improvement inside an existing solution: workflow changes, UX improvements, pricing/bundling logic, reporting, small automation, and customer-facing enhancements.


Refine tends to be the most familiar pattern. It lends itself to product backlogs and iterative delivery. But it fails in predictable ways when the upstream system collapses: everything becomes urgent, prioritization becomes political, and teams turn into feature factories while quality and integration cost quietly rise.


Progress evidence is not just “features shipped”. It is movement in relevant business outcomes plus sustained delivery health (lead time, change failure rate, operational stability). When refinement is healthy, you typically see both.
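As a concrete illustration of the delivery-health half of that evidence, lead time and change failure rate can be computed directly from deployment records rather than reported by narrative. The deployment log below is hypothetical; only the two metric definitions (commit-to-production lead time, share of deployments causing incidents) follow the DORA-style measures named above.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment log: (commit time, deploy time, caused_incident)
deployments = [
    (datetime(2025, 3, 1, 9),  datetime(2025, 3, 2, 14), False),
    (datetime(2025, 3, 3, 10), datetime(2025, 3, 3, 16), False),
    (datetime(2025, 3, 4, 11), datetime(2025, 3, 6, 9),  True),
    (datetime(2025, 3, 7, 8),  datetime(2025, 3, 7, 12), False),
]

# Median lead time for change, in hours (commit -> running in production).
lead_times_h = [(deploy - commit).total_seconds() / 3600
                for commit, deploy, _ in deployments]
median_lead_time_h = median(lead_times_h)

# Change failure rate: share of deployments that caused an incident.
change_failure_rate = (sum(1 for *_, failed in deployments if failed)
                       / len(deployments))

print(f"median lead time: {median_lead_time_h:.1f} h")    # 17.5 h
print(f"change failure rate: {change_failure_rate:.0%}")  # 25%
```

Numbers like these are hard to argue with in a steering meeting, which is exactly why they belong next to the business-outcome evidence.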


2) Refactor - same outcomes, better internal structure

Refactor is structural improvement without changing externally observed behavior (or changing it only incidentally). Its intent is to reduce hidden coupling, improve testability, increase change safety, and reduce the cost of future evolution.

Refactor is where portfolio governance often becomes dishonest. Not intentionally - structurally.


If steering expects user-visible output each month, refactoring must either:

  • hide inside feature scope (and become ungovernable), or

  • produce theatre (cosmetic work presented as progress), or

  • be starved entirely.


Refactor needs structural evidence: decoupling boundaries, automated test coverage where it matters, deployment independence where relevant, reduced lead time for change, reduced change failure rate. If you cannot show structural progress, you do not have refactoring - you have activity.


3) Replatform - same application, new ground

Replatform is moving an application to a new runtime/platform (cloud, container platform, database engine, integration architecture) while keeping core business behavior broadly intact.


This pattern is often governed as “technical delivery” when it is really operational and risk management. The hard part is not writing code; it is understanding what must remain stable (NFRs), what must be proven (baselines), and what must be rehearsed (cutover, rollback, observability).


Replatform fails when cutover becomes a single event managed by hope. It succeeds when the organization treats migration readiness as a product with evidence: performance baselines, cost baselines, security baselines, and rehearsed operational acceptance.


4) Rebuild - new domain model, new truths

Rebuild is creating a new solution because the old one cannot be economically evolved - or because the desired business capability requires a fundamentally different domain model and architecture.


Rebuild is where organizations most easily deceive themselves. It feels like engineering, but the deepest risk is not engineering - it's discovery and migration realism.

A rebuild fails when it becomes a modern copy of the legacy, when migration is postponed, and when the program assumes that the business domain is “known” simply because it exists in the current system.


Rebuild progress must be demonstrated through validated slices: thin end-to-end increments that include real data, real operational constraints, and a credible path to migrate real users or transactions. If you cannot prove migration viability early, you are not rebuilding - you are writing a long technical thesis.


5) Retire - value by subtraction

Retire removes applications, modules, integrations, reports, and process variants to reduce cost, risk, and complexity.


Retirement has an unusual trait: it is often among the highest ROI work in the portfolio, yet it is chronically postponed because it lacks glamour and creates anxiety. It is easy to argue that “we still need it”, especially when evidence of usage and dependencies is weak.


Retire succeeds when dependency and usage evidence becomes explicit, and when shutdown is treated as a real delivery with owners, dates, and rollback planning. Retire fails when it remains a promise - something that will happen “after” the transformation, which usually means never.


6) Replace - buy and conform (or you are rebuilding)

Replace is substituting a system with a commercial product. The intended value is faster access to standard capabilities and reduced bespoke maintenance burden.


Replacement succeeds only if the enterprise is willing to standardize processes and accept the product’s constraints. If the organization insists on preserving every legacy exception through customization, replacement becomes “rebuild by procurement”.


Evidence of progress in replacement is therefore not a Gantt chart. It is explicit fit-gap decisions, controlled customization, and real process adoption. Replacement is business transformation wearing an IT suit.


7) Enable - prerequisites that unlock flow

Enable creates shared foundations that unlock future delivery: API layers, identity patterns, data foundations, eventing, observability, test automation, shared services, reusable patterns.


Enablement is the pattern most likely to drift into abstraction. If it is framed as “platform building”, it can become broad, elegant, and unconsumed - what I call platform theatre.


Enablement creates value when it is linked to a pipeline of consuming initiatives. The strongest evidence is not architecture documents; it is consumption: product teams using the enabled capability in production, with measurable reduction in friction and duplication.

Enable is not optional in large transformation. But it must be governed with adoption as a KPI, otherwise it becomes a beautiful side project.


8) Implement - where benefits become real

Implement is adoption and operationalization: training, role changes, process redesign, comms, support readiness, stabilization, and measured adoption.


In many enterprises, implementation is treated as “go-live activities”. That framing is one of the quiet killers of benefit realization.


Implementation is where the organization’s behavior actually changes. If adoption metrics are missing, if parallel processes remain, if support is underfunded, benefits will not materialize even if the solution is technically correct.


This pattern requires shared accountability: IT cannot “implement” behavior change alone, and business leadership cannot delegate it and expect success. Implementation is governance of the social system, not just the technical system.



Two distinctions that remove years of confusion


Enable vs. Refactor - Refactor improves internals of existing components. Enable creates shared prerequisites across components and teams. Confusing them leads to predictable failure modes: enablement with no consumers, or refactoring that never unlocks cross-team flow.

Replace is not an IT upgrade - Replacement is fundamentally a governance decision about standardization. If that decision is avoided, the program will drift into hidden rebuild work without the correct staffing and evidence model.

These distinctions matter because they change the questions your steering committee should ask. Pattern clarity upgrades governance quality - not by being “smarter”, but by being harder to fool.

From status theatre to evidence: how to use patterns in portfolio governance

If you want patterns to be more than vocabulary, use them as a lightweight control system:

  1. Classify each initiative by a primary pattern (and explicitly name secondary patterns).

  2. Choose evidence models by pattern (structural evidence vs. adoption evidence vs. migration evidence vs. outcome evidence).

  3. Staff deliberately: architecture gravity for refactor/enable, ops gravity for replatform/implement, domain gravity for rebuild/replace.

  4. Measure honestly: don’t force one dashboard across categories.

  5. Sequence intentionally: enablement before large-scale refinement; implementation planned early; retirement enforced as closure.
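Steps 1, 2, and 4 above can be sketched as a lightweight check: each initiative declares a primary pattern, and governance compares the evidence it actually presents against the evidence model that pattern demands. The mapping and the portfolio below are illustrative assumptions, not a prescribed tool.

```python
# Evidence model each pattern demands (illustrative mapping, step 2).
EVIDENCE_MODEL = {
    "Refine":     "outcome",     # business outcomes + delivery health
    "Refactor":   "structural",  # decoupling, lead time, failure rate
    "Replatform": "migration",   # baselines + rehearsed cutover
    "Rebuild":    "migration",   # validated end-to-end slices
    "Replace":    "adoption",    # fit-gap decisions, process adoption
    "Retire":     "structural",  # dependency/usage evidence, shutdowns
    "Enable":     "adoption",    # consumption by product teams
    "Implement":  "adoption",    # measured behavior change
}

def evidence_mismatches(portfolio):
    """Flag initiatives whose reported evidence doesn't match
    what their primary pattern demands (steps 1-2 and 4)."""
    return [(name, pattern, reported, EVIDENCE_MODEL[pattern])
            for name, pattern, reported in portfolio
            if reported != EVIDENCE_MODEL[pattern]]

# Hypothetical portfolio: (initiative, primary pattern, evidence reported).
portfolio = [
    ("Pricing revamp",  "Refine",     "outcome"),
    ("Core decoupling", "Refactor",   "outcome"),  # demos as weak evidence
    ("ERP migration",   "Replatform", "migration"),
]

for name, pattern, got, want in evidence_mismatches(portfolio):
    print(f"{name}: governed as {pattern} but steering on "
          f"{got!r} evidence; expect {want!r}")
```

Here "Core decoupling" is flagged: it is refactoring work being steered on outcome demos, which is exactly the category confusion the patterns are meant to expose.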


This is the same underlying philosophy as my Agentic AI/PMO argument: the system becomes transparent. Transparency can feel harsh, because it exposes underinvestment in the “unsexy” work - tests, observability, cutover rehearsals, adoption reinforcement, dependency mapping. But that exposure is precisely what allows an enterprise to correct course early.


A final note on illustrations: stage-gates + “quant gravity”


If you are using a stage-gate baseline (gates left-to-right), patterns become visually intuitive when you overlay quant gravity: which standardized work units dominate in which stage.

You keep the same footprint for every pattern, and you vary the dominance of a small set of quant families (Discovery, Enablement, Development, Migration/Cutover, Implementation/Adoption, Stabilization/Ops, Decommission).


The result is a consistent visual language: readers learn the map once, then compare patterns quickly. It’s not just a prettier diagram. It’s a governance tool. It shows why one initiative needs heavy migration readiness near the end, while another needs heavy adoption and stabilization, while another needs structural evidence mid-stream.

Again: not magic improvement - clearer truth.




References


  • D. L. Parnas (1972) - On the Criteria To Be Used in Decomposing Systems into Modules. Foundational reasoning for why decomposition matters; directly supports the “one backlog hides different work types” argument.

  • Melvin E. Conway (1968) - How Do Committees Invent? Explains why systems mirror communication structures; relevant to why one governance model tends to create coupled architectures.

  • Frederick P. Brooks Jr. (1986) - No Silver Bullet: Essence and Accidents of Software Engineering. Grounding for why “new methods” don’t remove essential complexity; supports the need for structuring work via patterns.

  • Carliss Y. Baldwin & Kim B. Clark (2000) - Design Rules, Vol. 1: The Power of Modularity. Strong conceptual basis for latent value and option value; particularly useful for the Enable and Refactor patterns.

  • Forsgren, Humble, Kim (2018) - Accelerate: State of DevOps / DORA Report. Empirical anchors for delivery health evidence (lead time, change failure rate, etc.); supports the “truthful evidence” model for Refactor/Refine.

  • John P. Kotter (1995) - Leading Change: Why Transformation Efforts Fail. Classic explanation for why change fails at adoption and reinforcement; supports why “Implement” is not a go-live checklist.

  • Martin Fowler - Continuous Delivery (book overview). Practical bridge between governance and technical practices; useful for readers who want to operationalize change safety and release discipline.




LINNFOSS Consulting ApS - info@linnfoss.com - +45 4116 6770

INCUBA Katrinebjerg - Åbogade 15 - DK-8200 Aarhus - Denmark - ©2018 by LINNFOSS
