The Weekly Reflektion 11/2026

Computer simulations are used extensively in industry, and numerical analysis is an inherent part of the simulation process. Finite element analysis in structural design and computational fluid dynamics in aerodynamics are two examples. The more we use these simulations, the more familiar we become with them, and the less critical we are of their flaws and limitations. We take it for granted that the results of the simulation will lead to an acceptable design. Aesop, when composing his fables, and Chaucer, when composing his tales, both highlighted the danger of familiarity breeding contempt. And they didn’t have a computer.

Does your familiarity breed contempt?

Mathematician Manil Suri, in his Scientific American discussion of crack modeling, raised a concern about the application of computer simulations: they often produce results that appear precise and authoritative, even when the underlying assumptions are flawed. According to Suri, the mathematics of cracking and structural failure is extremely sensitive to modeling choices. Small inaccuracies in geometry, material behavior, or boundary conditions can lead to dramatically different predictions about when and where cracks will form. Without critical review by experienced and competent engineers, the simulation results could lead to a flawed design.
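Suri's point about sensitivity can be illustrated with a minimal sketch. This is not his analysis, and the material values below are illustrative, not design data: the classical Griffith criterion predicts a failure stress of sqrt(2·E·γs/(π·a)) for a brittle plate with a through crack, so the prediction hinges on the assumed flaw half-length a, a quantity that is hard to know precisely.

```python
import math

def griffith_failure_stress(E, gamma_s, a):
    """Griffith criterion for a through crack in a brittle plate:
    sigma_f = sqrt(2 * E * gamma_s / (pi * a)).
    E: Young's modulus [Pa], gamma_s: surface energy [J/m^2],
    a: crack half-length [m]."""
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

# Illustrative values for a glass-like material (not design data).
E = 70e9        # Pa
gamma_s = 1.0   # J/m^2
for a in (1e-6, 2e-6, 4e-6):  # assumed flaw half-lengths
    sigma_f = griffith_failure_stress(E, gamma_s, a)
    print(f"a = {a:.0e} m -> failure stress = {sigma_f / 1e6:.0f} MPa")
```

Because the failure stress scales as 1/sqrt(a), quadrupling the assumed flaw size halves the predicted strength: a modeling assumption the analyst may barely notice moves the answer by a factor of two.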

On 23 August 1991, the Sleipner A concrete gravity base structure (GBS) sank during a controlled ballast test in the Gandsfjord near Stavanger, Norway. The structure collapsed suddenly as cracks opened in the concrete cells of the platform. Within minutes, seawater flooded the structure and the platform sank to the seabed. Fortunately, the fourteen workers aboard escaped safely. The accident resulted in an economic loss of about USD 700 million and became one of the best-known examples of a failure involving computer simulation in engineering.

The Sleipner platform was a Condeep structure and consisted of concrete chambers whose walls had to withstand large hydrostatic pressures during construction and operation. To predict these stresses, engineers relied heavily on finite element analysis (FEA) using the software NASTRAN. Investigations following the accident found that the computer model contained a critical error in the finite element mesh and stress calculations. Specifically, the analysis underestimated the shear stress in the concrete walls by roughly 47 percent. The design therefore failed to meet the safety margins required for the loading conditions. In effect, the structure was built according to a simulation that gave engineers an overly optimistic estimate of its strength.

The Sleipner incident did not mean that the software itself was fundamentally flawed. Instead, the failure highlighted a deeper issue: trust in simulations without sufficient verification and engineering judgment. Computer models depend on many assumptions, including material properties, boundary conditions, mesh density, and load cases. Even small modeling errors can propagate into large design mistakes if results are accepted uncritically. One key lesson was the need for independent validation of simulation results.
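Sleipner's NASTRAN model cannot be reproduced here, but the role of mesh density can be sketched with a toy example. The stress field, the peak location, and all numbers below are invented for illustration and have nothing to do with the actual accident: the point is simply that a coarse grid can sail straight past a sharp stress concentration and report a reassuringly low peak.

```python
def peak_stress_estimate(n_elements):
    """Crude stand-in for a finite element mesh: estimate the peak of a
    sharply peaked 'stress' field sigma(x) = 1 / (0.001 + (x - 0.3)**2)
    on [0, 1] by sampling at n_elements midpoints. The field, the peak
    location, and its sharpness are invented purely for illustration."""
    h = 1.0 / n_elements
    return max(1.0 / (0.001 + (h * (i + 0.5) - 0.3) ** 2)
               for i in range(n_elements))

TRUE_PEAK = 1.0 / 0.001  # 1000.0, attained at x = 0.3

for n in (4, 16, 64, 256):
    est = peak_stress_estimate(n)
    print(f"{n:4d} elements: peak stress ~ {est:7.1f} "
          f"({est / TRUE_PEAK:.0%} of true peak)")
```

With these invented numbers, four elements recover only about 15 percent of the true peak, and the estimate climbs toward the correct value only as the mesh is refined. A real verification campaign does exactly this kind of convergence study, refining the mesh until the results stabilize, rather than accepting the first answer the software produces.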

After the accident, engineers recalculated the stresses using improved models and discovered that the wall thickness was inadequate. The revised design for the replacement platform increased reinforcement and modified the structure to withstand higher shear forces. Greater emphasis was also placed on peer review and cross-checking between analytical methods.

The sinking of Sleipner A occurred at a pivotal moment when digital tools were transforming engineering practice. The event demonstrated that simulations, while powerful, are not substitutes for engineering skepticism. Reliable design requires a combination of computational analysis, physical understanding, and rigorous verification. The disaster thus became a landmark case study in the responsible use of computer modeling in large-scale infrastructure projects.

Reflekt AS