The Weekly Reflektion 12/2026
We really do want to believe that the safety of our operation is good, and that risks are being managed to an acceptable level. Sometimes we manage to convince ourselves by selecting the information that confirms the belief and ignoring contradictory information. Once we have established our view, it takes a lot of effort to change it. Unfortunately, the game changer may be a Major Accident, and it is with this bitter experience that the comforting picture fades and reality rushes in.

Are you too easily convinced that safety is OK?
The Piper Alpha disaster remains one of the most powerful case studies in industrial safety, not only because of the scale of the tragedy but also because of what the investigation revealed about management attitudes toward risk. The explosion and fires that destroyed the platform on 6 July 1988 killed 167 people and exposed deep weaknesses in safety culture within offshore oil operations.
The public inquiry led by Lord Cullen concluded that the technical failures that caused the accident were closely tied to organizational and managerial shortcomings. One of the inquiry’s key observations was that senior management had been too easily convinced that safety was being maintained satisfactorily. Safety systems appeared robust on paper, but in practice they were fragile and poorly enforced. Management accepted assurances that procedures were working, without sufficiently challenging those assurances or verifying that the procedures were effective.
A central issue involved the permit-to-work system, which was intended to ensure that equipment under maintenance was not operated. On the day of the accident, a condensate pump had its pressure safety valve removed for maintenance and a temporary blind flange installed in its place. Documentation indicating that the pump should not be used was not clearly communicated during the shift handover. Later that evening, operators started the pump, leading to a gas leak that quickly ignited. The initial explosion and fire escalated as gas risers ruptured, and flames engulfed the platform. While the immediate cause was a breakdown in communication, the deeper problem was that the system relied on procedures that management assumed were functioning correctly.
This kind of misplaced confidence illustrates a broader phenomenon often described by Brandolini’s law, sometimes called the bullshit asymmetry principle. The principle states that it takes far more effort to refute an incorrect claim than it does to produce one. In the context of safety management, overly optimistic assurances (for example, “the procedures are adequate” or “the system is working as designed”) can spread easily within an organization. Challenging those assurances requires time-consuming investigation, detailed audits, and sometimes confrontation with established beliefs.
At Piper Alpha, management appears to have accepted simplified narratives about safety performance. Reports and documentation suggested compliance with procedures, and the absence of major incidents reinforced the belief that the system was effective. However, the effort required to test and question those assurances thoroughly, through inspections, independent verification, and critical review, was much greater than the effort of simply accepting them. Over time, this asymmetry allowed weak safety practices to persist.
Brandolini’s law helps explain why complacency can take root in organizations. Once a reassuring narrative becomes established, it is cognitively and organizationally expensive to dismantle. Engineers or operators who raise concerns must assemble detailed evidence and overcome institutional inertia, while the default position of “everything is fine” remains relatively easy to maintain. Without deliberate mechanisms that encourage skepticism and verification, management may unintentionally reinforce this imbalance.
The lessons from Piper Alpha therefore extend beyond technical design or procedural failures. The disaster highlighted the importance of active safety leadership, where management does not merely accept reports of compliance but continually seeks evidence that systems are actually working. Effective safety culture requires persistent questioning, independent auditing, and openness to uncomfortable information.