'Vulnerable' AI tricked into thinking a turtle is a rifle and a cat is guacamole

Researchers believe their findings pose a 'practical concern'

Aatif Sulleyman
Thursday 02 November 2017 18:20 GMT
AI can be tricked into misidentifying things


Researchers have found a way to trick Google’s AI into completely misidentifying real-world objects.

They managed to make InceptionV3, an image classifier, think a 3D-printed turtle was actually a rifle.

Crucially, it kept making the same mistake even when it viewed the turtle from “a variety of angles, viewpoints, and lighting conditions”.

The researchers, a team of MIT students called labsix, managed to trick the system using “adversarial examples”, which are like optical illusions for neural networks.

However, adversarial examples were previously considered a “theoretical concern” rather than a practical one, as they only worked successfully on 2D images. Even then, they weren’t particularly robust.

For instance, the researchers tricked the system into thinking a picture of a cat was actually guacamole. However, it correctly identified the cat when the same picture was rotated slightly.

“While minute, carefully-crafted perturbations can cause targeted misclassification in a neural network, adversarial examples produced using standard techniques lose adversariality when directly translated to the physical world as they are captured over varying viewpoints and affected by natural phenomena such as lighting and camera noise,” they wrote.

“This phenomenon suggests that practical systems may not be at risk because adversarial examples generated using standard techniques are not robust in the physical world.”
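The “standard techniques” the quote refers to typically nudge a single, fixed image using the network’s own gradients. The sketch below is not labsix’s viewpoint-robust method, but an assumed illustration of one such standard technique, the fast gradient sign method, run against a pretrained Inception v3 model with PyTorch; the input file “cat.jpg” and the target class index are placeholders chosen purely for this example.

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained Inception v3, the image classifier discussed in the article.
model = models.inception_v3(pretrained=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "cat.jpg" is a placeholder input; 924 is assumed here to be ImageNet's
# "guacamole" class index and is used purely for illustration.
image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)
target = torch.tensor([924])

# One gradient step towards the target class: a minute, carefully-crafted
# perturbation that can change the prediction while the picture looks
# unchanged to a human eye.
loss = F.cross_entropy(model(image), target)
loss.backward()

epsilon = 0.01  # size of the perturbation
adversarial = (image - epsilon * image.grad.sign()).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())

Because the perturbation is computed for one fixed view of the image, re-photographing or rotating it typically destroys the effect, which is exactly the fragility the researchers describe.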

However, the researchers believe their new findings show that adversarial examples make AI “vulnerable” and actually do pose a “practical concern”.

As well as tricking InceptionV3 into thinking the 3D-printed turtle was a rifle, they managed to fool it into misidentifying a real baseball as an espresso, once again from multiple angles.

“The examples still fool the neural network when we put them in front of semantically relevant backgrounds; for example, you’d never see a rifle underwater, or an espresso in a baseball mitt,” they added.

It’s a worrying discovery that raises fears that AI systems could be fooled into seeing things that don’t exist, or into recognising objects as something completely different, both of which could lead to dangerous outcomes.
