Is it a horse? Weegee (Arthur Fellig)/International Center of Photography/Getty
Oi, AI – what do you think you're looking at? Understanding why machine learning algorithms can be tricked into seeing things that aren't there is becoming more important with the advent of technologies like driverless cars. Now we can glimpse inside the mind of a machine, thanks to a test that reveals which parts of an image an AI is looking at.
Artificial intelligences don't make decisions in the same way that humans do. Even the best image recognition algorithms can be fooled into seeing objects in images that are just white noise, for example.
It's a big problem, says Chris Grimm at Brown University in Providence, Rhode Island. If we don't understand why these systems make silly mistakes, we should think twice about trusting them with our lives in things like driverless cars, he says.
So Grimm and his colleagues created a system that highlights which parts of an image an AI is looking at when it decides what the image is depicting. Similarly, for a document-sorting algorithm, the system highlights which words the algorithm used to decide which category a particular document belongs to.
Peek inside
It's really useful to be able to look at an AI and find out how it's learning, says one researcher at Google. Grimm's tool provides a handy way for a human to double-check that an algorithm is coming up with the right answer for the right reasons, he says.
To create his attention-mapping tool, Grimm wrapped a second AI around the one he wanted to test. This "wrapper AI" replaced part of an image with white noise to see if that made a difference to the original software's decision.
If replacing part of an image changed the decision, then that area of the image was likely to be important for decision-making. The same applied to words: if changing a word in a document made the AI classify it differently, that word was likely key to the AI's decision.
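The idea of masking regions with noise and watching the classifier's confidence change can be sketched in a few lines. This is not Grimm's actual code – `classify`, the patch size, and the grid layout are all illustrative assumptions – but it shows the occlusion principle the article describes:

```python
import numpy as np

def occlusion_map(image, classify, patch=8, rng=None):
    """Slide a white-noise patch over a grayscale image and record how much
    the classifier's confidence in its original prediction drops.

    `classify` is a stand-in for the AI under test (hypothetical): any
    function mapping a 2-D array to a vector of class probabilities.
    """
    rng = rng or np.random.default_rng(0)
    base = classify(image)
    label = int(np.argmax(base))          # the AI's original decision
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            # replace this region with white noise, as the wrapper AI does
            occluded[i:i + patch, j:j + patch] = rng.random((patch, patch))
            # large drop in confidence => the decision depended on this region
            heat[i // patch, j // patch] = base[label] - classify(occluded)[label]
    return heat
```

Regions with high values in the returned map are the ones the decision hinged on; regions the AI ignores score near zero, which is how the analysis can show, say, a horse classifier fixating on legs while ignoring the background.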
Grimm tested his technique on an AI trained to sort images into one of 10 categories, including planes, birds, deer and horses. His system mapped where the AI was looking when it made its categorisation. The results suggested that the AI had taught itself to break down objects into different elements and then search for each of those elements in an image to confirm its decision.
Horse's head
For example, when looking at images of horses, Grimm's analysis showed that the AI first paid close attention to the legs and then searched the image for where it thought a head might be, anticipating that the horse could be facing in different directions. The AI took a similar approach with images containing deer, but in those cases it specifically searched for antlers. The AI almost completely ignored parts of an image that it decided didn't contain information that would help with categorisation.
Grimm and his colleagues also analysed an AI trained to play the video game Pong. They found that it ignored almost all of the screen and instead paid close attention to the two narrow columns along which the paddles moved. The AI paid so little attention to some areas that moving the paddle away from its expected location fooled it into thinking it was looking at the ball and not the paddle.
Grimm thinks that his tool could help people work out how AIs make their decisions. For example, it could be used to examine other image-recognition systems, making sure that they don't accidentally come up with the right answers by looking at the wrong bit of the image. "You could see if it's not paying attention to the right things," he says.
But first Grimm wants to use his tool to help AIs learn. By indicating when an AI is not paying attention to the right things, it would let AI trainers direct their software towards the relevant bits of information.