Truth and Credibility

What is it?

As generative AI becomes more advanced and widespread, it is increasingly difficult for ordinary people to distinguish what is real from what is artificially created. AI can generate text, images, video, and voices that are nearly indistinguishable from reality, and this raises serious questions about truth, trust, and responsibility.

This isn't just a societal issue; it's also something that companies, organizations, and individuals using AI must take seriously. On the one hand, there is an ethical responsibility not to contribute to misinformation and public confusion. On the other hand, there is a risk of damaging one's own credibility if AI is used in ways that seem manipulative or dishonest.

In some cases, it may be entirely legitimate to use AI-generated content, but only if it is clearly disclosed and does not mislead.

Examples:
  • In 2023, the Danish child welfare organization Børns Vilkår ran a campaign about child abuse using AI-generated images of children with visible signs of abuse. Here, AI made it possible to avoid placing real children in vulnerable situations, and because the use of AI was clearly declared, the campaign was generally perceived as responsible and respectful.

  • A different outcome befell Amnesty International, which launched an Instagram campaign featuring AI-generated images of police violence in Colombia. When it became known that the images were not authentic, the campaign lost credibility, and Amnesty was criticized for undermining its own authority as a documenting organization. The campaign was withdrawn, a clear example of how things can go wrong when AI is used without considering the context.

  • A third example is a video from the Danish People's Party (Dansk Folkeparti), featuring a fake version of Prime Minister Mette Frederiksen announcing the cancellation of the Pentecost holiday. Some interpreted it as satirical political communication, while others saw it as manipulation and a violation of personal integrity. The debate highlights how blurred the lines have become when politicians use AI—and how important it is to consider the consequences for both individuals and the public.

What to consider?
  • Always consider how AI-generated content might be perceived by your audience. Is there a risk that someone could be misled, and how can you minimize that risk? Think about the context, and be especially careful in political, journalistic, and human rights-related situations.

  • Clearly disclose when content is AI-generated, both for ethical reasons and to protect your own credibility. Be ready to explain why you are using AI and how the content was created. And ask yourself: What truth am I contributing to, and what trust might I be undermining?


In a time when anything can be fabricated, honesty and transparency are the most important building blocks of digital trust.

Peter Svarre - petersvarre.dk - Nørrebrogade 110A, 1.tv, 2200 København N, tlf: 40409492, mail: peter@petersvarre.dk