Bias and Discrimination
What is it?
Bias or discrimination is arguably the single greatest challenge in using modern artificial intelligence, which relies on statistics and data. The problem lies in the fact that AI is trained on data, and data never represents reality one-to-one. Sometimes this is because the data is incomplete, and other times because the data reflects human biases that also exist in the real world.
When developing or implementing AI systems, you can reduce bias by being thorough in collecting and preparing data, but it is impossible to eliminate all bias from artificial intelligence. People, societies, and viewpoints are diverse, which makes it both philosophically and practically impossible to create a global AI that represents all people and perspectives in a bias-free manner.
Examples:
Try asking your favorite generative AI model to create images of different professions. You'll quickly notice historical and cultural biases in how various roles are portrayed. For instance, doctors are often shown as older white men, while nurses tend to appear as young women.
A few years ago, Amazon attempted to build a recruitment AI, which turned out to be biased against women. It was reportedly trained on a decade of résumés submitted to the company, most of which came from men, so the model learned to favor male candidates and downgrade applications that mentioned women.
In the Netherlands, the government was forced to resign after it was revealed that the tax authority had used an AI system to detect welfare fraud. The system disproportionately penalized people of non-Dutch ethnic backgrounds, because the historical data it was built on associated this group more frequently with such cases.
What to consider?
Always consider, and if possible test, whether there might be biases in the AI systems you work with (a minimal testing sketch follows this list).
If you're training your own AI, always reflect on how the data is selected.
If you're working with generative AI, consider counteracting bias in your prompts. For example, write “female doctor” instead of just “doctor” when generating images.
Be cautious about automating important decisions if there's a risk of systematic bias—especially if the system could end up violating fundamental human rights.
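To make the first point concrete: one simple way to test for bias is to compare how often a system produces a favourable outcome for people from different groups. The sketch below is a minimal, hypothetical Python example; the records, group labels, and warning threshold are invented for illustration, and a real audit would use your own data and a fairness measure appropriate to your context.

```python
# Minimal, hypothetical sketch: compare how often a model selects people
# from different groups. All records, field names, and the threshold are
# invented for illustration only.

from collections import defaultdict

# Hypothetical model outputs: each record holds a demographic group label
# and the model's decision (True = selected / approved).
predictions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]

# Count selections and totals per group.
totals = defaultdict(int)
selected = defaultdict(int)
for record in predictions:
    totals[record["group"]] += 1
    if record["selected"]:
        selected[record["group"]] += 1

# Selection rate per group, and the gap between the highest and lowest rate
# (a simple "demographic parity difference").
rates = {group: selected[group] / totals[group] for group in totals}
for group, rate in sorted(rates.items()):
    print(f"Group {group}: selection rate {rate:.2f}")

gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.2:  # threshold chosen arbitrarily for illustration
    print("Warning: large gap between groups - investigate before relying on this system.")
```

The same idea scales up: run the system on a representative test set, break the results down by the groups you are concerned about, and treat large gaps as a signal to investigate further rather than as proof of discrimination on its own.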