Hacking and Data Hacking

What is it?

Artificial intelligence opens up new possibilities — but also new forms of hacking. Whereas traditional hacking means breaking into a system, hacking AI often means tricking it from the outside. What makes machine learning distinctive is that its systems react to their environment and their data — and can therefore be manipulated without any direct access to the underlying technology.

For example, it has been demonstrated that self-driving cars can be hacked simply by altering their surroundings. By placing a small piece of tape on a stop sign, the car’s AI can be confused into misreading the sign — even though a human driver would have no doubt. This isn’t necessarily malicious; often, it's people testing the limits of technology — or just playing with it. But regardless of intent, it is a form of behavior-based hacking that anyone working with or using AI should be aware of.
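The stop-sign trick above is an instance of what researchers call an adversarial example: a small, targeted change to the input that flips a model's decision. The toy sketch below (hypothetical weights and inputs, chosen purely for illustration) shows the core idea on a minimal linear classifier — each input feature is nudged by a tiny amount in the direction that most increases the model's score, mirroring the well-known fast gradient sign method.

```python
import numpy as np

def predict(w, b, x):
    """Toy linear classifier: class 1 if w.x + b > 0, else class 0."""
    return int(np.dot(w, x) + b > 0)

def adversarial_perturbation(w, x, epsilon):
    """Nudge every feature by +/- epsilon toward a higher score.

    For a linear model, the gradient of the score with respect to the
    input x is simply the weight vector w, so the sign of w tells us
    which direction flips the decision fastest (the FGSM idea).
    """
    return x + epsilon * np.sign(w)

# Hypothetical model and input, for illustration only.
w = np.array([1.0, -1.0, 0.5])
b = 0.0
x = np.array([-0.1, 0.2, -0.2])   # score = -0.4  -> classified as 0

x_adv = adversarial_perturbation(w, x, epsilon=0.2)
# Each feature moved by at most 0.2 (a "piece of tape"),
# yet the score becomes +0.1 and the predicted class flips to 1.
```

The point is not the arithmetic but the asymmetry: a change too small for a human to notice can be precisely the change the model is most sensitive to.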

Examples:
  • A Vietnamese security firm demonstrated how Apple's facial recognition on the iPhone X could be bypassed using a 3D-printed mask created from a photo of the phone’s owner. It required neither extensive technical knowledge nor system access — just a creative exploitation of how the AI recognizes faces.


  • Another example comes from Google's ongoing battle against black hat SEO — attempts to manipulate the search engine's algorithms by creating fake links, excessive keyword usage, or networks of websites referencing each other. While not hacking in the classic sense, it still involves tricking AI into making incorrect decisions.

What to consider?
  • If you work with AI, you should think beyond traditional IT security. You should also consider how your AI systems might be manipulated from the outside — and how people might react to them. This is especially critical for systems that interact with the physical world or make decisions based on inputs from users, images, sensors, or text.


  • Think in worst-case scenarios: What happens if someone deliberately tries to deceive your system? Also consider how you can test and harden your systems so they withstand the creativity and adversarial behavior of the real world.


  • Finally: Unethical or opaque systems often invite hacking — whether as a form of protest or experimentation. The more responsible and transparent an AI system is, the less temptation there is to "poke at it."

Peter Svarre - petersvarre.dk - Nørrebrogade 110A, 1.tv, 2200 København N, tlf: 40409492, mail: peter@petersvarre.dk
