When Machines Hallucinate

December 5, 2018

The passenger registers the stop sign and feels a sudden surge of panic as the car he’s sitting in speeds up. He opens his mouth to shout to the driver in the front, remembering – as he spots the train tearing towards them on the tracks ahead – that there is none. The train hits at 125mph, crushing the autonomous vehicle and instantly killing its occupant.

This scenario is fictitious, but it highlights a very real flaw in current artificial intelligence frameworks. Over the past few years, there have been mounting examples of machines that can be made to see or hear things that aren’t there. By introducing ‘noise’ that scrambles their recognition systems, these machines can be made to hallucinate. In a worst-case scenario, they could ‘hallucinate’ a situation as dangerous as the one above: despite the stop sign being clearly visible to human eyes, the machine fails to recognise it.

Those working in AI describe such glitches as ‘adversarial examples’ or sometimes, more simply, as ‘weird events’.

“We can think of them as inputs that we expect the network to process in one way, but the machine does something unexpected upon seeing that input,” says Anish Athalye, a computer scientist at Massachusetts Institute of Technology in Cambridge.

So far, most of the attention has been on visual recognition systems. Athalye himself has shown it is possible to tamper with an image of a cat so that it looks normal to our eyes but is misinterpreted as guacamole by so-called neural networks – the machine-learning algorithms that are driving much of modern AI technology. These sorts of visual recognition systems already underpin your smartphone’s ability to tag photos of your friends without being told who they are, and to identify other objects in the images on your phone.
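
Athalye’s attacks rely on a more elaborate optimisation, but the core trick behind many adversarial examples can be sketched in a few lines. The snippet below is a rough illustration only, assuming a pretrained PyTorch image classifier rather than any specific system Athalye targeted: it nudges every pixel a tiny amount in whichever direction most increases the network’s error, producing a picture that looks unchanged to us but can be read by the machine as something else entirely.

```python
# A minimal sketch of the "fast gradient sign" idea behind many adversarial
# examples. An illustration only, not Athalye's actual method. Assumes a
# pretrained classifier and an image tensor scaled to the range [0, 1].
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """image: (1, 3, H, W) tensor; true_label: (1,) tensor of the correct class."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

To a person, the perturbed image is indistinguishable from the original; to the network, the tiny coordinated changes can be enough to flip its answer.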

More recently, Athalye and his colleagues turned their attention to physical objects. By slightly tweaking the texture and colouring of these, the team could fool the AI into thinking they were something else. In one case, a baseball was misclassified as an espresso; in another, a 3D-printed turtle was mistaken for a rifle. They were able to produce some 200 other examples of 3D-printed objects that tricked the computer in similar ways. As we begin to put robots in our homes, autonomous drones in our skies and self-driving vehicles on our streets, this starts to throw up some worrying possibilities.

“At first this started off as a curiosity,” says Athalye. “Now, however, people are looking at it as a potential security issue as these systems are increasingly being deployed in the real world.”

Take driverless cars, which are currently undergoing field trials: these often rely on sophisticated deep learning neural networks to navigate and tell them what to do.

But last year, researchers demonstrated that neural networks could be tricked into misreading road ‘Stop’ signs as speed limit signs, simply through the placement of small stickers on the sign.

Hearing voices

Neural networks aren’t the only machine learning frameworks in use, but the others also appear vulnerable to these weird events. And they aren’t limited to visual recognition systems.

“On every domain I’ve seen, from image classification to automatic speech recognition to translation, neural networks can be attacked to mis-classify inputs,” says Nicholas Carlini, a research scientist at Google Brain, which is developing intelligent machines. Carlini has shown how – with the addition of what sounds like a bit of scratchy background noise – a voice reading “without the dataset the article is useless” can be mistranscribed as “Ok Google browse to evil dot com”. And it is not just speech that can be hijacked: in another example, an excerpt from Bach’s Cello Suite 1 was transcribed as “speech can be embedded in music”.

To Carlini, such adversarial examples “conclusively prove that machine learning has not yet reached human ability even on very simple tasks”.

Under the skin

Neural networks are loosely based on how the brain processes visual information and learns from it. Imagine a young child learning what a cat is: as they encounter more and more of these creatures, they will start noticing patterns – that this blob called a cat has four legs, soft fur, two pointy ears, almond-shaped eyes and a long fluffy tail. Inside the child’s visual cortex (the section of the brain that processes visual information), there are successive layers of neurons that fire in response to visual details, such as horizontal and vertical lines, enabling the child to construct a neural ‘picture’ of the world and learn from it.

Neural networks work in a similar way. Data flows through successive layers of artificial neurons and, once the network has been trained on hundreds or thousands of examples of the same thing (usually labelled by a human), it starts to spot patterns that enable it to predict what it is viewing. The most sophisticated of these systems employ ‘deep learning’, meaning they possess more of these layers.
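
For readers curious what those ‘layers’ look like in code, here is a toy sketch, purely illustrative and not drawn from any system mentioned in this article, of a small layered classifier and the kind of training step described above.

```python
# A toy layered ("deep") classifier, for illustration only. Each Linear layer
# is a bank of artificial neurons; stacking more of them makes a network deeper.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # first layer of artificial neurons
    nn.Linear(128, 64), nn.ReLU(),    # a further layer picking up higher-level patterns
    nn.Linear(64, 10),                # output: one score per possible class
)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """images: (batch, 784) pixel values; labels: (batch,) human-provided classes."""
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong the network currently is
    loss.backward()                         # work out how each weight contributed
    optimiser.step()                        # nudge the weights to do a little better
    return loss.item()
```

Repeated over hundreds or thousands of labelled examples, steps like this gradually tune the network’s weights until its predictions become reliable on the sort of data it was trained on.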

However, although computer scientists understand the nuts and bolts of how neural networks work, they don’t necessarily know the fine details of what’s happening when they crunch data. “We don’t currently understand them well enough to, for example, explain exactly why the phenomenon of adversarial examples exists and know how to fix it,” says Athalye.

Part of the problem may relate to the nature of the tasks that existing technologies have been engineered to solve: distinguishing between images of cats and dogs, say. To do this, the technology will process numerous examples of cats and dogs, until it has enough data points to distinguish between them.

“The dominant goal of our machine learning frameworks was to achieve a good performance ‘on average’,” says Aleksander Madry, another computer scientist at MIT, who studies the reliability and security of machine learning frameworks. “When you just optimise for being good on most dog images, there will always be some dog images that will confuse you.”

One solution might be to train neural networks with more challenging examples of the thing you’re trying to teach them. This can help immunise them against outliers.

“Definitely it is a step in the right direction,” says Madry. While this approach does seem to make frameworks more robust, it probably has limits, as there are numerous ways you could tweak the appearance of an image or object to generate confusion.
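
One common way of putting this idea into practice is known as adversarial training: alongside each batch of ordinary training images, the network is also shown deliberately perturbed copies and taught to classify those correctly too. The sketch below outlines that general recipe, reusing the toy model, optimiser and loss function from the earlier training example; it is an assumption about how such defences are typically built, not a description of Madry’s or anyone else’s specific method.

```python
# A rough outline of adversarial training, reusing the toy model, optimiser and
# loss_fn defined in the earlier sketch. Illustrative only.
import torch

def adversarial_training_step(images, labels, epsilon=0.01):
    # Craft perturbed copies by nudging pixels against the current model.
    images_adv = images.clone().requires_grad_(True)
    torch.nn.functional.cross_entropy(model(images_adv), labels).backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

    # Train on the clean and the perturbed images together.
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels) + loss_fn(model(images_adv), labels)
    loss.backward()
    optimiser.step()
    return loss.item()
```

Training on the perturbed copies makes the network harder to fool with that particular style of tweak, though, as noted above, attackers can always look for others.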

A truly robust image classifier would replicate what ‘similarity’ means to a human: it would understand that a child’s doodle of a cat represents the same thing as a photo of a cat and a real-life moving cat. Impressive as deep learning neural networks are, they are still no match for the human brain when it comes to classifying objects, making sense of their environment or dealing with the unexpected.

If we want to develop truly intelligent machines that can function in real world scenarios, perhaps we should go back to the human brain to better understand how it solves these issues.
