Images altered to trick machine vision can influence humans too

Research

Published 2 January 2024
Authors

Gamaleldin Elsayed and Michael Mozer

New research shows that even subtle changes to digital images, designed to confuse computer vision systems, can also affect human perception.

Computers and humans see the world in different ways. Our biological systems and the artificial ones in machines may not always pay attention to the same visual signals. Neural networks trained to classify images can be completely misled by subtle perturbations to an image that a human wouldn't even notice.

That AI systems can be tricked by such adversarial images may point to a fundamental difference between human and machine perception, but it drove us to explore whether humans, too, might (under controlled testing conditions) reveal sensitivity to the same perturbations. In a series of experiments published in Nature Communications, we found evidence that human judgments are indeed systematically influenced by adversarial perturbations.

Our discovery highlights a similarity between human and machine vision, but it also demonstrates the need for further research to understand the influence adversarial images may have on people, as well as on AI systems.

What is an adversarial image?

An adversarial image is one that has been subtly altered by a procedure that causes an AI model to confidently misclassify the image's contents. This intentional deception is known as an adversarial attack. Attacks can be targeted to cause an AI model to classify a vase as a cat, for example, or they may be designed to make the model see anything except a vase.

Left: An artificial neural network (ANN) correctly classifies the image as a vase, but when the image is perturbed by a seemingly random pattern (middle), shown with its intensity magnified for illustrative purposes, the resulting image (right) is incorrectly, and confidently, classified as a cat.
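
To make the targeted/untargeted distinction concrete, here is a minimal sketch of the two attack objectives, assuming a generic PyTorch image classifier; the function and variable names are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def attack_direction(model, image, label, target=None):
    """Return the per-pixel sign direction for one gradient-based attack step.

    Untargeted (target is None): ascend the loss on the true label, so the
    model predicts anything except the correct class ("anything but vase").
    Targeted: descend the loss on a chosen label, so the model confidently
    predicts that specific class ("see a cat").
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label if target is None else target)
    loss.backward()
    sign = 1.0 if target is None else -1.0
    return sign * image.grad.sign()
```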

These attacks can be subtle. In a digital image, each pixel of an RGB image is represented on a 0-255 scale encoding the intensity of its colour channels. An adversarial attack can be effective even if no pixel is modulated by more than 2 levels on that scale.
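
To give a sense of how tight that budget is, below is a rough sketch of a projected gradient descent (PGD) style targeted attack that keeps every pixel within 2 levels of the original on the 0-255 scale, i.e. an L-infinity bound of 2/255 for images scaled to [0, 1]. This is a generic illustration under those assumptions (again using PyTorch), not the exact procedure used in the study.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, image, target, eps=2/255, step=0.5/255, iters=40):
    """Targeted PGD-style attack with an L-infinity budget of `eps`.

    `image`: batched float tensor in [0, 1]; `target`: desired class indices.
    eps = 2/255 means no pixel of the adversarial image ends up more than
    2 levels away from the original on the usual 0-255 scale.
    """
    orig = image.clone().detach()
    adv = orig.clone()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        grad = torch.autograd.grad(loss, adv)[0]
        # Step down the loss on the target class, then project back into
        # the eps-ball around the original image and into the valid range.
        adv = adv.detach() - step * grad.sign()
        adv = orig + (adv - orig).clamp(-eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

However many steps the loop takes, the projection guarantees that no single pixel ever drifts more than 2 levels from the original, which is why the result looks essentially identical to a human observer.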

Adversarial attacks can also succeed against physical objects in the real world, such as causing a stop sign to be misread as a speed limit sign. Indeed, security concerns have led researchers to investigate ways of resisting adversarial attacks and mitigating their risks.

How is human perception influenced by adversarial examples?

Previous research has shown that people can be sensitive to large-magnitude image perturbations that provide clear shape cues. However, less is understood about the effect of more subtle adversarial attacks. Do people dismiss the perturbations in an image as harmless noise, or can they influence human perception?

To find out, we performed controlled behavioural experiments. To start, we took a series of original images and carried out two adversarial attacks on each, producing many pairs of perturbed images. In the animated example below, the original image is classified as a “vase” by a model. The two images perturbed through adversarial attacks on the original are then misclassified by the model, with high confidence, as the adversarial targets “cat” and “truck”, respectively.

Next, we showed human participants the pair of pictures and asked a targeted question: “Which image is more cat-like?” Although neither image looks anything like a cat, participants were obliged to make a choice, and they typically reported feeling that they were choosing arbitrarily. If brain activations were insensitive to subtle adversarial attacks, we would expect people to choose each image 50% of the time on average. However, we found that the choice rate, which we refer to as the perceptual bias, was reliably above chance for a wide variety of perturbed image pairs, even when no pixel was adjusted by more than 2 levels on that 0-255 scale.
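
For intuition about how such a bias is detected statistically, the sketch below runs an exact binomial test of a choice rate against the 50% chance level. The trial counts are hypothetical, chosen only for illustration, not data from our experiments.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: the probability, under chance level p,
    of an outcome at least as unlikely as observing k successes in n trials."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Sum the probability of every outcome no more likely than the one observed.
    return sum(q for q in probs if q <= observed * (1 + 1e-9))

# Hypothetical numbers: suppose the adversarially targeted image was chosen
# on 590 of 1,000 trials (a 59% choice rate, versus 50% expected by chance).
print(f"p = {binomial_two_sided_p(590, 1000):.2e}")  # far below 0.05
```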

From a participant's perspective, it feels like being asked to distinguish between two virtually identical images. Yet the scientific literature is replete with evidence that people leverage weak perceptual signals when making choices, signals too weak for them to express confidence or awareness of. In our example, we may see a vase of flowers, but some activity in the brain informs us there is a hint of cat about it.

Left: Examples of pairs of adversarial images. The top pair of images is subtly perturbed, by a maximum of 2 levels per pixel, to cause a neural network to classify them as a “truck” and a “cat”, respectively. A human volunteer is asked, “Which is more cat-like?” The bottom pair is more obviously manipulated, by a maximum of 16 levels per pixel, to be classified as a “chair” and a “sheep”. This time the question is “Which is more sheep-like?”

For our Nature Communications paper, we carried out a series of experiments that ruled out potential artifactual explanations of the phenomenon. In each experiment, participants reliably selected the adversarial image corresponding to the targeted question more than half the time. Although human vision is not as susceptible to adversarial perturbations as machine vision is (machines no longer identify the original image class, while people still see it clearly), our work shows that these perturbations can nevertheless bias humans towards the decisions made by machines.

The importance of AI safety and security research

Our primary finding, that human perception can be affected, albeit subtly, by adversarial images, raises critical questions for AI safety and security research. By using formal experiments to explore the similarities and differences between the behaviour of AI visual systems and human perception, we can gain insights that help us build safer AI systems.

For example, our results can inform future research seeking to improve the robustness of computer vision models by better aligning them with human visual representations. Measuring human susceptibility to adversarial perturbations could help assess that alignment across a variety of computer vision architectures.

Our work also demonstrates the need for further research into understanding the broader effects of these technologies, not only on machines but also on humans. This, in turn, highlights the continuing importance of cognitive science and neuroscience in better understanding AI systems and their potential impacts as we focus on building safer, more secure systems.
