Transparency
What is it?
Many AI systems—especially those using machine learning or generative AI—make rapid and complex decisions without explaining how they arrive at their results. But as AI becomes more influential in our lives, it is more important than ever for people to understand how and why these systems act the way they do.
AI can make mistakes, reinforce biases, or provide misleading information—especially if trained on incomplete or skewed datasets. And if the system operates as a “black box,” where no one—not even the developers—understands the exact decision-making logic, it becomes impossible to monitor, correct errors, or hold anyone accountable.
Transparency doesn’t necessarily mean that every citizen should be able to read code or understand complex statistical models. It means that experts, authorities, and responsible organizations must have access to examine how systems function—and that users should be informed about what the AI’s responses or assessments are based on.
Examples:
Several generative AI models, such as ChatGPT and Google’s Gemini, have faced criticism for lacking transparency about how they were trained—and what data their answers are based on. When you don’t know whether a text is inspired by scientific articles, social media, or advertisements, it’s difficult to assess its credibility.
In the EU, work on the AI Act has led to requirements for greater openness around so-called “foundation models.” Developers of large language models are now required to document their training data and conduct risk assessments—partly to ensure that the models do not spread misinformation or reinforce bias.
At the same time, OpenAI has made it possible in some professional versions of ChatGPT to see which plugins and add-ons the model uses, a step toward greater transparency for advanced users.
What to consider?
Think about how you can explain what your AI systems are based on. If you use generative AI to write texts, translate, or analyze data, it is often important to make it clear that the output is automatically generated—and what limitations it has.
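One practical way to do this is to bundle generated text with a visible disclosure and basic provenance information. The sketch below is a minimal illustration, assuming a hypothetical `AIOutput` wrapper and example wording; the exact fields and disclosure text would depend on your own system and context.

```python
# Minimal sketch: attaching a disclosure notice and provenance metadata to
# AI-generated text. The names (AIOutput, with_disclosure) are illustrative,
# not part of any specific library.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIOutput:
    """Generated text bundled with the information a reader needs to judge it."""
    text: str
    model_name: str  # which model produced the text
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    limitations: tuple = (
        "Automatically generated; may contain factual errors.",
        "Not reviewed by a human expert.",
    )

    def with_disclosure(self) -> str:
        """Return the text with a visible AI-generated label appended."""
        notice = (
            f"\n\n[AI-generated by {self.model_name} on {self.generated_at}. "
            + " ".join(self.limitations) + "]"
        )
        return self.text + notice


# Example: label a machine-generated draft before sharing or publishing it.
draft = AIOutput(text="Quarterly sales rose 4 percent...", model_name="example-llm-1")
print(draft.with_disclosure())
```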
Allow experts, ethics boards, or authorities to review your models if they are used for critical purposes. And if your AI systems make decisions about people (e.g. hiring, finance, surveillance), you should ensure that both the logic and the data foundation can be reviewed and validated.
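For decisions about people, reviewability in practice often means keeping an audit trail that links each decision to the model version, the input, and the data foundation it rests on. The sketch below assumes a hypothetical `log_decision` helper writing JSON-lines records; a real system would also need access control, redaction of sensitive fields, and retention rules.

```python
# Minimal sketch: an append-only audit record for automated decisions, so that
# reviewers (ethics boards, authorities) can trace what each decision was based on.
# The field names and the JSON-lines file are assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone


def log_decision(path: str, *, model_version: str, input_features: dict,
                 decision: str, data_sources: list[str]) -> None:
    """Append one decision record, with a hash of the inputs for later validation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(input_features, sort_keys=True).encode()
        ).hexdigest(),
        "input_features": input_features,   # or a redacted subset, per privacy rules
        "decision": decision,
        "data_sources": data_sources,       # which datasets the model relies on
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record an automated screening decision so it can be audited later.
log_decision(
    "decision_audit.jsonl",
    model_version="screening-model-2.1",
    input_features={"years_experience": 6, "education": "MSc"},
    decision="invite_to_interview",
    data_sources=["hr_applications_2020_2023"],
)
```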