Government transformation can succeed, if we stop setting projects up to fail

COMMENTARY | What three state projects can tell us about why major checkpoints in government transformation rarely work as intended.
Government services are designed to be in a steady state: deliver what’s expected, keep the lights on and ensure the costs stay stable. For the most part, these function well enough without disruption.
The problem is when we try to transform those services, because government transformation is, by definition, a disruption of the steady state.
In my experience advising public sector leaders — and as a former chief audit executive — I’ve found that two issues surface in nearly every troubled transformation:
- Ineffective governance that creates false confidence
- User testing that amounts to theater, not reality
Three state examples show exactly how — and why — these failures happen.
Case Study 1: Idaho (Luma)
The Luma system was designed and implemented to move Idaho’s enterprise resource planning, or ERP, system for accounting and payroll services into the cloud by July 1, 2021.
The first governance, or directional, issue appears to have been a change in scope as the project’s deadlines slipped. Originally, the core system was meant to stay on government servers, an approach most agencies had already written off as obsolete at least a decade earlier.
Next, rather than piloting the implementation with smaller departments, Idaho’s State Comptroller’s Office decided to implement it everywhere, all at once, in what is often referred to as the “big bang” approach.
Compounding this decision was a reduction in end-user testing. Not only was the implementation exposed to getting it wrong everywhere at the same time; the project team also decided not to double-check how the system would actually work in the real world.
And so, according to the state’s auditors at the Idaho Legislative Services Office, much of the “automated” work has been diverted to spreadsheets: staff massage data before entering it and rebuild financial reports that cannot be customized in the system.
Oversight approved speed without demanding safeguards. Testing was too thin to catch flaws before they surfaced in production, and the half-finished system went live two years late.
Case Study 2: Maine (Workday)
In 2016, the state of Maine embarked on an ERP replacement of its own, this time on the Workday platform. With the same goal of modernizing its 40-year-old human resources and financial systems, Maine opted for a platform already used by multiple cities across the U.S., as well as a number of departments in the United Kingdom’s government.
The oversight failure became visible only afterwards, when Maine sued Workday and shifted blame for the project’s failure onto the vendor’s project support. But accountability for the project rested with the state, not its contractors.
While it is entirely possible that the vendor did not provide the level of support the state felt it needed, the oversight body should have demanded that the project team take control of the situation.
Effective oversight means managing risks internally, not outsourcing them to vendors.
And, as in Idaho, despite concerns about the information being provided to the oversight body, testing was scaled back and several stages were ended before completion.
Senior consultants from the vendor were allegedly never provided to the team, which further impeded testing.
In the end, a project with an initial budget of $17 million closed out at a spend of almost $55 million, and the platform was abandoned before it was ever implemented.
Oversight isn't about monitoring vendor behavior — it’s about owning project risk.
Case Study 3: Michigan (MiDAS)
In 2011, reacting to a report suggesting that the state’s unemployment benefits system had allowed $143 million in overpayments, the Michigan legislature pushed through an overhaul. A brand-new system was to be designed from the ground up, at a cost that eventually reached about $44 million.
As with so many transformations before it, the decision-makers wanted to reap the savings before the transformation had even taken place. Because the goal was to fully automate unemployment claims processing, the state laid off many of the people who had previously made those decisions.
That decision proved catastrophic: testing was cut back, failed to simulate real-world conditions, and the errors soon started popping up.
Simply put, the MiDAS system started flagging fraud everywhere, with data that could not be traced to any actual dates or amounts.
MiDAS flagged 40,000 citizens for fraud — with a 90% error rate — forcing many to repay tens of thousands of dollars plus 400% penalties before compensation lawsuits reversed the findings.
Real-world testing isn’t a luxury — it’s the only shield against cascading failure.
Pattern, Not Accident
The unfortunate reality is that the story of these examples has played out countless times in governments around the world. Governance and oversight bodies are set up to fulfill formal requirements, but often let projects unfold without any real questioning.
When I work with public sector clients, I always emphasize: governance and oversight must illuminate risks, not just check boxes. Testing must simulate reality, not showcase aspirations.
But many oversight bodies and project teams believe that the point of no return is the project launch.
In reality, by the time flaws surface, inertia and political pressure have often pushed the project toward release rather than a pause for correction.
And the consequences are often far worse than the political embarrassment of a delay.
Government projects don't fail because they try to transform.
They fail because transformation demands a different kind of leadership — one that’s willing to see the risks before they become headlines.
Matthew Oleniuk is a public sector transformation and leadership advisor and former Chief Audit Executive who has provided oversight of billions of dollars in government transformation projects. Through his practice at TheRiskInsider.com, he helps senior leaders protect ambitious project outcomes and navigate high-stakes delivery environments.