Why does Norway need its own law for artificial intelligence?
Norway is working on establishing its own law for artificial intelligence. What will it mean for Norwegian citizens? AI expert Eirik Agnalt Østmo explains.
Imagine living in a society where artificial intelligence (AI) is used to spy on you in class or at work. What would you think about that?
Or what if it were used to rank your value as a human being based on how you behave at school, at work, or in your free time?
In 2024, the EU adopted the world's first comprehensive AI law. It sets out how AI is to be developed and used responsibly in the EU and EEA countries – and is meant to prevent horror scenarios like the ones above. The legislation will be fully in effect in 2026.
The Norwegian government is now working on its own AI law. It will be similar to the EU's, but adapted to Norwegian conditions and regulations.
Meant to keep us safe
Eirik Agnalt Østmo is a researcher at the AI centre SFI Visual Intelligence at UiT The Arctic University of Norway. He says the law is important for ensuring that we feel safe in a society where AI is used in more and more areas.

It is about taking control of a technology that is developing at a rapid pace – and preventing it from being abused.
"AI is developing at an extremely fast rate, and we still haven't seen the technology's full potential. That's why we need rules for how AI systems should be developed and used," Østmo explains.
Holding AI developers accountable
While AI technology can help us in many different ways, some systems may pose a risk to people's rights, health, and well-being. For example, they can be used to manipulate people into doing things or exploit individuals in vulnerable situations.
Fortunately, not all AI systems are equally risky – the kind that suggests which movie to watch on Netflix, for example, poses little danger. That is why the AI law divides systems into four risk categories: minimal, limited, high, and unacceptable risk.
The greater the risk an AI system poses, the stricter the requirements developers must follow.
"This helps to hold those who provide AI solutions accountable, while also determining how the systems should be used depending on their risk category," Østmo explains.
Unwanted AI systems
Unacceptable AI systems – such as those that can be used to rank individuals as good or bad citizens based on their behavior – will be banned in Norway. The ban ensures that the technology is not used to violate our privacy.
"These are systems that have such a high potential for harm that we do not want them in our society," Østmo says.
The ban also prevents the technology from being used to limit our freedom of speech – for example by monitoring people who attend political rallies or demonstrations. Earlier in 2025, the Hungarian government planned to use AI to identify and fine individuals who took part in Pride celebrations.
"People must have the right to say and believe what they want without the fear of being monitored or recorded. That is why AI should not be used this way," he says.

Prevent discrimination
Today, AI is used to decide who should get a mortgage or to find the best job candidate. These are examples of high-risk AI systems.
While these systems can make such tasks quicker and simpler, they have been known to discriminate against individuals based on their gender, skin color, or sexual orientation. This happens when the AI is trained on historical data – that is, text, images, or videos that may contain outdated attitudes or stereotypes.
For example, a recruitment AI system from Amazon began to favor applications from male candidates, because it is most often men who apply for and work in ICT-related jobs. An AI-based credit card service from Apple was investigated in 2019 for giving women lower credit limits.
The AI law will be important to ensure that the systems do not discriminate against anyone.
"The AI systems must treat everyone equally. The law serves as a tool to comply with this principle" Østmo says.

Limit fake news
AI chatbots imitate human intelligence in a very convincing way, and social media platforms like TikTok can show fake videos that look surprisingly real. That is why it is not always easy to tell whether you are talking to a machine or a human, or to separate false information from reality.
"Transparency is a central keyword in the AI law. It means that developers must ensure that we always know when we are interacting with AI. That is why clear labeling of AI-generated content is important.
“If we know that we are talking to a machine, or that an image was created by AI, it can help us be more critical of the information. This can help combat fake news and misleading content on social media and the internet," Østmo explains.
Important to ensure safe AI development
While the AI law aims to ensure that AI technology is safe for everyone, some worry that the regulations are too strict and may slow down AI development – for example, of systems that can help doctors detect diseases in our bodies.
But it is very important that AI development is safe, Østmo says. The most essential thing is that the technology benefits people – and that we do not release unfinished or harmful AI systems into society.
"If we do not have clear rules for what is acceptable use of AI and what is not, technology companies will decide that for themselves. That is why it is important to have a law that ensures safe AI development," Østmo concludes.
About the risk categories
The Norwegian AI law sets different requirements for AI systems based on their risk level.
Minimal risk: AI systems that involve little to no risk. The AI law does not set any specific requirements for these systems. Most AI systems fall into this category.
Limited risk: AI systems that must comply with transparency requirements, meaning the user must be made aware that they are interacting with AI.
High risk: AI systems that may interfere with people’s fundamental rights and must follow stricter requirements.
Unacceptable risk: AI systems that violate fundamental values and basic human rights, for example by manipulating or exploiting people’s vulnerabilities, or categorizing individuals in ways that have negative consequences for them.
Source: regjeringen.no