Data governance has a survival problem. Programmes start with executive sponsorship, a council, a roadmap, and genuine momentum. Two years later, sometimes after an acquisition, sometimes after a CIO change, sometimes for no clear reason, the council stops meeting, the data catalogue falls out of date, and the programme quietly dies. The next leader rebuilds something similar from scratch. This pattern is so common it is almost the default. The programmes that escape it share an operating model designed for continuity rather than charisma.
Federate the Work, Centralise the Standards
A data governance team that tries to manage every data asset centrally fails because the work is too distributed and the team is too small. A federated model puts data stewards inside the business (the people who actually know what the data means) and gives the central team responsibility for standards, tooling, and arbitration. The central team does not own the data quality work; it owns the framework that makes the federated work consistent.
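As a minimal sketch of that split, with every name invented for illustration: the central team publishes one fixed set of standard checks, stewards in each domain register and maintain their own datasets, and conformance is judged the same way everywhere.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical types for illustration; real platforms structure this
# differently, but the ownership split is the point.

@dataclass
class Dataset:
    name: str
    domain: str
    steward: str          # federated: a named person inside the business
    definitions: dict = field(default_factory=dict)

# Central team owns the framework: one fixed set of standard checks
# that every domain must pass, regardless of who stewards the data.
STANDARD_CHECKS: list[Callable[[Dataset], bool]] = [
    lambda d: bool(d.steward),                 # every dataset has a steward
    lambda d: "description" in d.definitions,  # every dataset is defined
]

def conforms(dataset: Dataset) -> bool:
    """Arbitration is central; the work of fixing failures is federated."""
    return all(check(dataset) for check in STANDARD_CHECKS)
```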
Tie It to Decisions, Not Just Documentation
Programmes that produce a data catalogue and call it governance tend to be the ones that decay. The catalogue is necessary, but documentation alone does not change behaviour. The programmes that endure tie governance to decisions: data quality scores influence which datasets are recommended for analysis; lineage information is required for any production model; ownership records gate access provisioning. When governance metadata changes what people can or cannot do, it stays accurate. When it just describes what is, it goes stale.
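A sketch of what gating might look like in practice, assuming a hypothetical `ownership_registry` and `grant_access` function rather than any specific tool: provisioning refuses to proceed without an ownership record, so the record has to stay accurate.

```python
# Hypothetical gating logic; the registry and function names are
# invented for illustration, not taken from any particular product.

ownership_registry = {
    "sales.revenue": {"owner": "head_of_finance", "steward": "rev_ops_lead"},
}

def grant_access(user: str, dataset: str) -> None:
    record = ownership_registry.get(dataset)
    if record is None or not record.get("owner"):
        # No owner on file: provisioning is blocked, not just flagged.
        raise PermissionError(
            f"{dataset} has no registered owner; access cannot be provisioned."
        )
    print(f"Access to {dataset} granted to {user} under {record['owner']}.")
```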
Pick Five Critical Data Elements, Not Five Hundred
A common failure pattern is trying to govern everything at once. The programme produces standards, definitions, and quality requirements for thousands of data elements. Nobody can keep up. Compliance is inconsistent. The programme becomes too heavy to operate. The opposite pattern works: identify the data elements that actually drive material decisions — customer ID, revenue recognition, regulated data fields, model training data — and govern those rigorously. Expand from there once the operating rhythm is established.
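A registry for that narrow starting scope could be as small as the sketch below; the element names follow the examples above, and the owners and thresholds are placeholders.

```python
# A deliberately small registry of critical data elements (CDEs).
# Owners and completeness thresholds are illustrative placeholders.
CRITICAL_DATA_ELEMENTS = {
    "customer_id":         {"owner": "crm_domain",     "completeness_min": 0.999},
    "revenue_recognition": {"owner": "finance_domain", "completeness_min": 0.999},
    "consent_flags":       {"owner": "privacy_domain", "completeness_min": 1.0},
    "training_labels":     {"owner": "ml_platform",    "completeness_min": 0.98},
}

def in_scope(element: str) -> bool:
    """Everything else is explicitly out of scope until the rhythm is set."""
    return element in CRITICAL_DATA_ELEMENTS
```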
Make the Programme Load-Bearing
Data governance programmes that survive reorganisations have one structural feature in common: they are tied to capabilities the organisation cannot stop using. The data catalogue is integrated with the analytics platform. The data quality monitoring is part of the production data pipeline. The lineage information is visible in the BI tools the executives actually open. When the sponsor leaves, the programme persists because removing it would break things people depend on.
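One way to picture this, as a sketch against a hypothetical pipeline rather than any particular orchestrator: the quality check is a step the run cannot complete without, so deleting governance means breaking the dataset downstream teams depend on.

```python
# Illustrative only: check_completeness and publish are stand-ins for
# whatever your orchestrator or scripts actually provide.

def check_completeness(rows: list[dict], column: str, minimum: float) -> None:
    filled = sum(1 for r in rows if r.get(column) is not None)
    ratio = filled / len(rows) if rows else 0.0
    if ratio < minimum:
        # Failing the check fails the pipeline run; downstream consumers
        # never see the bad data, and nobody can delete this step without
        # breaking the dataset they depend on.
        raise ValueError(f"{column} completeness {ratio:.1%} below {minimum:.1%}")

def publish(rows: list[dict]) -> None:
    print(f"published {len(rows)} rows")  # hypothetical downstream step

def run_pipeline(rows: list[dict]) -> None:
    check_completeness(rows, "customer_id", minimum=0.999)
    publish(rows)
```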
Roles That Are Not Optional
- Data owner — the business leader accountable for a domain of data
- Data steward — the operational expert who maintains definitions and quality
- Data custodian — the technical role that operates the platforms storing the data
- Data governance lead — the central role responsible for standards, tooling, and arbitration
- Privacy and security partners — embedded liaisons, not separate workflows
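One way to make "not optional" literal is to have domain registration fail validation when a role is unfilled; the sketch below assumes invented names and treats the governance lead as central rather than per-domain.

```python
from dataclasses import dataclass

# Per-domain roles; the central governance lead sits outside this list.
REQUIRED_ROLES = ("owner", "steward", "custodian", "privacy_partner")

@dataclass
class DomainRegistration:
    domain: str
    assignments: dict  # role name -> person

    def validate(self) -> None:
        missing = [r for r in REQUIRED_ROLES if not self.assignments.get(r)]
        if missing:
            # Registration is rejected outright; the roles are structural,
            # not aspirational.
            raise ValueError(f"{self.domain} is missing roles: {missing}")
```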
Metrics That Survive Leadership Changes
Vanity metrics — number of catalogued assets, number of stewards trained, number of policy documents — do not survive because they do not link to business outcomes. Outcome metrics survive: incidents traced to data quality issues quarter over quarter, time to onboard new analysts to a dataset, regulatory findings related to data, model retraining efficiency. These metrics show whether governance is doing useful work, and the organisation notices when they degrade.
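As an illustration of how little machinery an outcome metric needs, the sketch below counts incidents traced to data quality by quarter from a plain incident log; the field names are hypothetical.

```python
from collections import Counter

# Hypothetical incident records; 'cause' and 'quarter' fields are invented.
incidents = [
    {"quarter": "2024Q1", "cause": "data_quality"},
    {"quarter": "2024Q1", "cause": "infrastructure"},
    {"quarter": "2024Q2", "cause": "data_quality"},
]

def dq_incidents_by_quarter(incidents: list[dict]) -> Counter:
    """Quarter-over-quarter trend of incidents traced to data quality."""
    return Counter(i["quarter"] for i in incidents if i["cause"] == "data_quality")

print(dq_incidents_by_quarter(incidents))  # Counter({'2024Q1': 1, '2024Q2': 1})
```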
A programme designed for continuity does not require any particular leader to keep going. It is built into the operating fabric of how data is used. When that is true, the programme outlasts the people who started it — which is the only definition of success that actually matters.