The Weekly Reflektion 06/2025
Artificial Intelligence can change the way we work and the way society works. The impression being created is that the potential is almost unlimited, and increased use of AI seems inevitable. From the use of ChatGPT on a mobile phone to satisfy our curiosity, to the efficient automatic control of complex systems, AI has arrived and is here to stay. The question of the risks associated with the application of AI is also being considered, and concerns are being raised. Unfortunately, our experience with new technology shows we may be in for a rough ride.
How will you manage the risks associated with AI?

The Norwegian Ocean Industry Authority's (Havtil) main theme for 2025 is 'Artificial intelligence is also a risk factor'. It is worth quoting a few points from the Havtil web site.
Where the energy sector is concerned, AI continues to be integrated into ever more technologies – including those used in safety-related operations. AI-based systems represent a key resource and can help to reduce risk. But they may also do the opposite. Industries exposed to major accident risk are particularly vulnerable.
The challenge is to take a broad view and consider AI in an integrated perspective. To ensure the safe and secure use and maintenance of such systems, their development must rest on an interplay between people, technology and organisations. We must also ensure that AI does not make us more vulnerable to external threats and malicious actions.
Three key points illustrated here are the rewards, the risks and the vulnerabilities associated with AI. We are trying to balance these, but like a child with a new toy, we want to try it out and see what it can do. We have plenty of experience of the rewards of new technology getting the better of the risks, with hindsight telling us that we messed up. The above picture is from the Havtil film and visualises the concern of 'black boxes' steering our lives.
On 1st January 2025, the Norwegian government published statistics on road traffic accidents in 2024. There were 90 fatalities, the lowest number since 1947, excluding the corona years when traffic was significantly lower. The reduction in 2024 was particularly welcome since 2022 and 2023 had shown an increase. The government has set an objective to achieve zero fatalities by 2040.
Would the number of fatalities have been lower if vehicles were self-driving and everything controlled by AI? At what point will the technology be so good and reliable that it can be demonstrated that the number of fatalities would be significantly reduced? When we reach this point, does the government have a moral duty to protect its citizens by mandating the application of self-driving vehicle technology? How does our society then look, and how vulnerable do we become to failures?
When Boeing modified their 737 aircraft to create the 737 Max, they introduced the MCAS (Manoeuvring Characteristics Augmentation System). 346 people died in two air disasters as a reminder of the consequences of getting the black box wrong. Reflekt covered the 737 Max story in three Reflektions in weeks 29, 30 and 31 in 2022.
Havtil includes the following message in the introduction to its main theme. We will leave it there for now, although we will return to the application of AI later.
Responsible use of AI is in the interest of everyone in the industry. Ultimately, responsibility for ensuring this rests with management.