Liability for Errors

What is it?

AI systems — whether predictive, generative, or decision-supporting — can and will make mistakes. These may include factual inaccuracies, incorrect decisions, discriminatory outputs, or harmful recommendations. The key question is: Who is responsible when this happens?

Legally, responsibility is often unclear, especially when AI is used in complex ecosystems involving multiple actors: developers, data providers, system integrators, and users. And even though AI may appear to act and "think" independently, it cannot be held accountable as a legal entity.

Ethically, it also raises difficult questions: Does the organization implementing the AI have a special responsibility to anticipate and prevent errors? Or is it the developer’s responsibility? Or the user’s, who interprets the output? Without clear frameworks, there is a risk that responsibility will be shifted around — and ultimately lost.

Examples:
  • Both Amazon and Google have had to withdraw AI-based hiring or advertising tools because they discriminated against certain groups. These cases made clear that the company behind the technology cannot disclaim responsibility, even when it was the system itself that made the mistake.


  • In 2024, Google launched its AI Overviews feature, designed to give users quick, AI-generated answers to their search queries. Shortly after launch, the feature was criticized for giving erroneous and sometimes dangerous advice, such as suggesting that people eat rocks or put glue on pizza.

What to consider?
  • If you use AI in your organization, clarify who is responsible for what — technically, legally, and ethically. Establish clear procedures for how errors are handled and how users can appeal a decision or have their case reviewed by a human.


  • Never use AI for critical decisions without the possibility of human intervention. And be aware that responsibility is not just about assigning blame — it is also about taking ownership of the consequences of the technology you introduce into the world.


A responsible AI strategy requires more than just technology. It demands leadership, ethics, and transparency.

Peter Svarre - petersvarre.dk - Nørrebrogade 110A, 1.tv, 2200 København N, tel: 40409492, mail: peter@petersvarre.dk