Software quality: the true cost of “technical issues”

Taken from Agenda Digitale, Apr 2026

Digital service disruptions cost companies hundreds of thousands of dollars per hour. In most cases, the cause is not a hardware failure but software that has not been adequately tested. Testing debt and the role of QA explain why quality is a matter of governance.

When a bank’s app or an e-tailer’s platform crashes, or when a healthcare services portal becomes unresponsive, people commonly speak of “technical issues.” It is a generic expression that often hides a very different reality: in most cases, these disruptions originate not from a hardware failure or a congested network, but from software that has not been adequately tested.

When the app crashes: the hidden cost of “technical issues”

As evidence of the economic impact of these disruptions, annual data from ITIC (Information Technology Intelligence Consulting) show that for more than 60% of enterprises, a single hour of downtime of critical systems results in costs exceeding $300,000, and can surpass $1 million in the most severe cases.

The quality of an application is something invisible, often underestimated despite being central to any organization delivering digital services. It is, in fact, the element that determines, on the one hand, service availability—its ability to withstand load and avoid disruptions at critical moments—and, on the other, its functional correctness, meaning the assurance that the system does exactly what it was designed to do. But that’s not all: software quality is also an indispensable link in the chain of security, as it attests to the absence of vulnerabilities that could compromise data, transactions, and user identities. Ignoring software testing, or relegating it to a merely residual and compressed phase of an IT project, means exposing organizations to tangible risks across all three of these dimensions.

Testing debt: the invisible risk that erodes digital services

In the IT world, much is said about technical debt, which refers to the accumulating costs caused by rushed software development or technical compromises, such as poorly written functionality or a non-scalable architecture. But there is a parallel form of debt, equally dangerous and far less scrutinized: testing debt.

The impact of this neglect is massive: according to reports from CISQ (Consortium for Information & Software Quality), the Cost of Poor Software Quality (CPSQ) exceeds $2 trillion per year in the United States alone, and the largest share of this spending is precisely due to the resolution of operational problems generated by software defects that were not detected before release.

This negative impact surfaces every time a development cycle is closed without adequate test coverage: for example, when testing is rushed to meet a deadline, or when regression tests are skipped because an update appears to be “routine.” The consequences are never immediate: they emerge months later, often in production, often under load, and very often in front of the end user.
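To make the idea of a regression test concrete, here is a minimal sketch. The function and its values are hypothetical, invented purely for illustration: the point is that the test records behavior the system already gets right, so that a “routine” change which silently alters it fails before release rather than in production.

```python
# Minimal regression-test sketch. apply_discount is a hypothetical
# business rule; the assertions pin down known-good outputs from the
# current release so a later change cannot alter them unnoticed.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical rule: percentage discount, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_regression_known_cases():
    # Outputs recorded from the version currently in production.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    # Edge case that once caused a defect: full discount.
    assert apply_discount(50.0, 100) == 0.0

test_regression_known_cases()
```

In practice such tests would live in a suite run automatically on every change (e.g. with pytest), so that skipping them is an explicit, visible decision rather than an omission.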

The most severe digital service disruptions—the ones that make headlines, trigger audit procedures, and damage an organization’s reputation—rarely stem from a single bug. More often, they are the result of a silent erosion of application quality: systems that were never properly tested and can therefore fail at any time, under any condition.

Testing debt does not only manifest as service outages—often at the most critical moments, such as sales peaks, tax deadlines, or the delivery of services—with a consequent loss of user trust. It also translates into security vulnerabilities that surface only after they have already been exploited, and into remediation costs that are exponentially higher compared to a preventive approach.

QA as a governance tool

Reducing testing debt requires a shift in how organizations treat software quality: the adoption of a structured Quality Assurance (QA) approach. QA is not a simple final testing phase, but a set of practices, processes, and responsibilities that accompany software development from start to finish. It defines in advance the criteria by which a system can be considered ready, plans systematic testing of functionality, load, security, and regression, and documents both the issues found and the decisions made.
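The idea of defining readiness criteria in advance can be sketched as a “quality gate”: a check, run before release, that turns the go/no-go decision into an explicit, documented outcome. The metrics and thresholds below are illustrative assumptions, not a prescribed standard.

```python
# Sketch of a release quality gate. The thresholds (100% pass rate,
# 80% coverage, zero open critical defects) are illustrative
# assumptions; each organization defines its own criteria in advance.

from dataclasses import dataclass

@dataclass
class BuildMetrics:
    test_pass_rate: float       # fraction of tests passing, 0.0-1.0
    line_coverage: float        # fraction of code exercised by tests
    open_critical_defects: int  # unresolved blocking issues

def release_ready(m: BuildMetrics) -> tuple[bool, list[str]]:
    """Return the go/no-go decision plus the reasons, for the audit trail."""
    reasons = []
    if m.test_pass_rate < 1.0:
        reasons.append(f"failing tests: pass rate {m.test_pass_rate:.0%}")
    if m.line_coverage < 0.80:
        reasons.append(f"coverage below 80%: {m.line_coverage:.0%}")
    if m.open_critical_defects > 0:
        reasons.append(f"{m.open_critical_defects} open critical defect(s)")
    return (not reasons, reasons)

ok, reasons = release_ready(BuildMetrics(1.0, 0.85, 0))
print("release:", "GO" if ok else "NO-GO", reasons)
```

The value of such a gate is less in the code than in what it produces: a recorded decision with recorded reasons, which is exactly the kind of documentation release decisions and reporting can later be based on.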

In organizations that deliver digital services, this approach is not just a matter of operational efficiency; it is a governance tool. The documentation produced by QA processes—test plans, defect reports, acceptance criteria—is the foundation on which release decisions, service contracts, and reporting are based. In other words, it is the raw material through which an organization demonstrates that it has control over what it puts into production.

A mature QA approach makes it possible to answer questions that organizations almost always ask only after an incident has occurred: was the system ready for production? Who certified what? Which risks had been identified, and which had been consciously accepted?

It is therefore no surprise that in large organizations QA is evolving from a final control function into one of continuous monitoring across the entire software lifecycle. The goal is no longer merely to find bugs before release, but rather to build software quality into every phase in order to reduce risks and ensure the continuity of services that people rely on every day.

Software quality is much more than testing

Ultimately, delivering efficient digital services is a bet on the reliability of the technology adopted. But reliable technology does not emerge by chance; it is the result of adequate processes and tools, and of a culture that places software quality at the center.

Many organizations are still tied to traditional development models in which testing is perceived as a final phase, almost a bureaucratic ritual to be completed as quickly as possible. Yet analyses consistently show that every defect found in production costs on average ten times more than one identified during development.

The future of digital services depends not only on the adoption of innovative technologies, but on a paradigm shift: quality cannot be a by-product of the process. It must become a governance lever, integrated into project dashboards and release cycles, and supported by continuous maintenance, because even after go-live, every update requires rigorous testing to ensure reliability, security, and service continuity.

All of this comes at a time when the credibility of a company, a brand, or a public institution is increasingly measured by the continuity and quality of its digital services. It should not be underestimated that citizens’ and users’ expectations have risen, driven by the service levels of large digital-native players, and therefore their judgments—and their choices—depend more and more on the software quality that underpins every one of their online experiences.

By Antonio Burinato, General Manager of Innovaway
