Open to hack attacks? Asahi Shimbun via Getty Images
When it comes to AI, seeing isn't always believing. It's possible to trick machine learning systems into hearing and seeing things that aren't really there.
We already know that wearing a pair of snazzy glasses can fool face recognition software into thinking you're someone else, but research from Facebook now shows that the same approach can fool other algorithms too.
The technique, known as an adversarial example, could be used by hackers to trick driverless cars into ignoring stop signs or prevent a CCTV camera from spotting a suspect in a crowd.
Show an algorithm a photo of a cat that's been manipulated in a subtle way and it will think it's looking at a dog. The alterations, however, may be so slight that a human would never be able to tell the image had been tampered with.
Moustapha Cissé, an AI researcher at Facebook, and his colleagues figured out that a similar technique – which they have called Houdini – can be used to fool both voice recognition and machine vision systems, by adding such small amounts of digital noise to images and sounds that humans would not notice.
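The core idea behind such attacks can be illustrated with a toy example. The sketch below is not Houdini itself (which targets more complex, structured systems) but the simpler fast gradient sign method: for a hypothetical linear "cat vs dog" classifier, each input value is nudged by a small amount in the direction that most increases the wrong class's score. The classifier's weights and the epsilon value here are made up for illustration.

```python
import numpy as np

# Hypothetical linear classifier: score = w . x
# Positive score -> "dog", negative score -> "cat".
w = np.array([0.5, -1.0, 0.8, -0.3])

def predict(v):
    return "dog" if w @ v > 0 else "cat"

# An input the model correctly labels "cat" (w . x < 0).
x = np.array([0.1, 0.6, 0.2, 0.9])

# Fast gradient sign method: shift every component by epsilon
# in the direction that raises the "dog" score. For a linear
# model, the gradient of the score with respect to x is just w.
epsilon = 0.4
x_adv = x + epsilon * np.sign(w)

print(predict(x))                  # cat
print(predict(x_adv))              # dog
print(np.max(np.abs(x_adv - x)))   # change per value never exceeds epsilon
```

Every pixel moves by at most epsilon, yet the prediction flips – the same principle that lets imperceptible noise fool a deep network, where the gradient is computed through the whole model rather than read off a weight vector.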
To find a way of deceiving such systems, a hacker would just need to know what an algorithm is seeing or hearing when faced with a particular situation.
While there's no evidence that these kinds of attack have been used in the real world, a researcher at the University of Oxford says it's only a matter of time before we see them being used. "At the moment we have no protection," she says – yet systems vulnerable to these attacks are already deployed, for instance in CCTV cameras.
A researcher at Pennsylvania State University agrees. "It is likely that machine learning will be exploited to attack systems. More research is needed to invent new machine learning techniques so we can use them as a defence," he says.
Sleight of hand
Cissé found that the types of image classification algorithm used in driverless cars could be made to ignore pedestrians or parked cars. "I think we should worry about how we can ensure that the neural networks we put in cars are safe," he says.
The Facebook study showed that this approach can also be extended to voice recognition systems. Cissé's team inserted a small amount of digital noise into a voice recording of a person speaking a phrase, and played that recording to the Google Voice speech recognition app.
Presented with this adversarial example, the app thought it was hearing a completely different sentence to the one that was actually spoken.
But not everyone is sure that such attacks will work in the real world. David Forsyth at the University of Illinois at Urbana-Champaign built a fake stop sign that was digitally altered to try to fool such algorithms.
He found that when the signs were viewed by a moving camera – as they would be from a driverless car – they didn't actually fool the algorithm. Adversarial examples might work under perfect conditions, he says, but in the real world factors such as lighting and viewing angles might make them much less successful. "The attacks may be more difficult to deliver than they seem," he says.
AI research lab OpenAI responded to Forsyth's paper by showing that it is possible to trick image recognition algorithms even if the image is viewed from different distances and angles.
The main problem is that we still don't know why algorithms are so responsive to minute changes that humans would never even notice, says Forsyth. "We basically don't understand what's going on inside them."