Humans in the loop

What is it?

AI systems are powerful but not infallible. They can make both random errors and more systematic mistakes such as bias and misinterpretations. Therefore, it is often necessary to have a human "in the loop" — meaning a person who monitors, approves, or can intervene if something goes wrong. This is especially important in contexts where errors could have serious consequences for people.

However, it is important to understand that human involvement is not always necessary. In low-risk systems — such as spell checkers, image sorting, or simple recommendation engines — it can be perfectly responsible to let the system operate without human intervention. It all depends on assessing the context and the level of risk.
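The context-and-risk assessment described above can be thought of as a simple mapping from risk level to oversight requirement. The sketch below is purely illustrative — the risk tiers, example systems, and oversight modes are assumptions for the sake of the example, not a prescribed framework.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # e.g. spell checkers, image sorting
    MEDIUM = "medium"  # e.g. recommendation engines with business impact
    HIGH = "high"      # e.g. hiring, benefits allocation, medical decisions

def oversight_mode(risk: Risk) -> str:
    """Map an assessed risk level to a human-oversight requirement."""
    if risk is Risk.HIGH:
        return "a human approves every decision before it takes effect"
    if risk is Risk.MEDIUM:
        return "a human spot-checks samples and handles escalations"
    return "fully automated, with periodic audits only"
```

The point of the mapping is the one made above: low-risk systems can responsibly run without a human, while high-risk systems should not act without one.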

Examples:
  • In the healthcare sector, AI is used to analyze medical images and patient data to assist doctors in diagnosing diseases and developing treatment plans. This includes applications in radiology, pathology, neurology, and more. Although AI can improve diagnostic speed and accuracy, it is crucial that doctors and healthcare professionals remain involved to ensure correct interpretation and decision-making.


  • There are documented cases where AI algorithms used in recruitment processes have exhibited bias, leading to discrimination against certain groups based on factors such as names or postal codes. As a result, qualified candidates have been excluded from job opportunities based on their ethnicity or gender. To prevent such outcomes, human involvement is essential in the recruitment process to monitor and correct AI decisions.

What to consider?
  • If you use AI for search purposes, it is important to verify the sources AI refers to — or to find sources yourself, as AI often hallucinates or guesses. When using AI to generate text, you should double-check facts and names before publishing anything.


  • If you deploy AI chatbots for customer service or citizen interaction, users should always have the option to contact a human — especially if the AI’s response is unsatisfactory or confusing. There must be a clear path to human assistance to ensure that no one gets "trapped" in automation.


  • When automating decisions — such as selecting job candidates, allocating benefits, or conducting risk assessments — it is crucial that human oversight and intervention are possible. Without it, there is a risk of reinforcing errors and injustices, as the systems continue to operate without anyone noticing or stopping them.
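One common way to make the oversight and intervention described above concrete is a review queue: automated decisions the system is unsure about are held back for a person instead of being applied. The sketch below is a minimal illustration — the class names, the confidence field, and the 0.9 threshold are assumptions chosen for the example, not a recommended production design.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

@dataclass
class HumanInTheLoop:
    threshold: float = 0.9          # illustrative cut-off, tune per context
    review_queue: list = field(default_factory=list)

    def process(self, decision: Decision) -> str:
        # Low-confidence outcomes wait for a person instead of taking effect.
        if decision.confidence < self.threshold:
            self.review_queue.append(decision)
            return "queued for human review"
        return f"auto-applied: {decision.outcome}"
```

For example, a candidate rejection the model is only 62% sure about would land in `review_queue` for a human to inspect — no one is excluded without the possibility of a person noticing and stopping it.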


Use AI with respect for both its capabilities and its limitations. Sometimes, the system can run independently — but often, human judgment is the best guarantee of quality, fairness, and responsibility.

Peter Svarre - petersvarre.dk - Nørrebrogade 110A, 1.tv, 2200 København N, tlf: 40409492, mail: peter@petersvarre.dk