New EU rules on artificial intelligence (AI):

Norwegian companies must take control

A new law could result in multimillion-euro fines, which is why DNV is focusing on helping other companies use artificial intelligence safely.

"Artificial intelligence is being used in more and more areas. It is my impression that many Norwegian companies have not yet realized that these regulations will be adopted by the EU as early as next year. So this is something they have to deal with very shortly,” says Frank Børre Pedersen in DNV. 

He is programme director and head of DNV's artificial intelligence section. The safety and consulting company has been one of the consultative bodies for the new EU legislation.


Rapid developments

Artificial intelligence and machine learning currently help companies to carry out tasks both faster and better. The availability of massive amounts of data and cheap computing power has led to rapid developments in artificial intelligence in recent years. 

We have all encountered artificial intelligence through chatbots, marketing, and consumer technology. Some innovations, such as autonomous cars and ships, rely entirely on artificial intelligence. 

In the future, more and more safety-critical systems will be supported by artificial intelligence, including mobile networks, energy systems, and health technologies, to name just a few. If the technology used by these complex systems does not work as intended, or even fails, the consequences can be severe.

"That's why it's important to ensure the safe use of artificial intelligence. A lot of the technology is already available. The next major step is to build trust in the systems and the results they deliver – not only for the owner of the system, but also for all those affected," Pedersen says, referring to the new EU legislation. 

"DNV's public service role is to ensure that things are safe enough and work in industry and business, as well as in society as a whole.  This is the core of what we do at DNV," he adds.


Before an accident happens

A highly skilled team at DNV is working in areas that will have a major impact on how artificial intelligence can be used safely.

Carla Ferreira, a risk management expert in DNV's artificial intelligence section, works at the intersection of physical reality and artificial intelligence. She has a master's degree in engineering and a PhD from Brazil and has pursued further studies in England. Carla is researching how to build trust in artificial intelligence and machine learning used in systems that are critical to engineering work.

Her team works in the area where natural and artificial intelligence meet. They use data from computer-simulated models that mimic a physical environment, such as a wind turbine, to build trust in how artificial intelligence will respond when the system encounters an unknown event.

In this way, the team can generate large amounts of data about handling a serious accident – without the accident actually having happened.

"We also want artificial intelligence to take uncertainties into account and be able to tell us how certain or uncertain it is – and when we can't trust it," she continues 

"It’s only if artificial intelligence can be trusted and proven to work in accordance with the new rules that it can be used and provide value," Pedersen emphasises. 

"This is precisely why the work carried out by Carla and her team is so important," concludes Frank Børre Pedersen.