Topographic Neural Networks Help AI See Like a Human

Artificial intelligence models have improved dramatically over the past decade. But those gains have produced neural networks that, however effective, share few characteristics with human vision. For example, convolutional neural networks (CNNs) tend to key in on texture, while humans respond more strongly to shape.

A paper recently published in Nature Human Behaviour partially closes that gap. It describes a new all-topographic neural network (All-TNN) that, when trained on natural images, developed a specialized, organized structure reminiscent of human vision. The All-TNN better mimics human spatial biases, such as expecting an airplane to appear near the top of an image, and it operates on a much lower energy budget than other neural networks used for machine vision.

“One of the things you notice when you look at how knowledge is acquired in the brain is that it’s fundamentally different from how it’s acquired in deep neural networks, like convolutional nets,” said Tim C. Kietzmann, a full professor at the Institute of Cognitive Science in Osnabrück, Germany, and co-senior author of the paper.

Human-like networks learn human-like biases

Most machine vision systems in use today, including those found in apps such as Google Photos and Snapchat, use some form of convolutional neural network (CNN). CNNs repeat an identical feature detector across many spatial locations (a technique known as “weight sharing”). The result is a network that, when visualized, looks like a tightly repeating pattern.
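For readers who want to see what weight sharing means in practice, here is a minimal sketch (written in PyTorch, which may or may not be what the paper’s authors used): a convolutional layer stores one small kernel and slides it across the whole image, so the parameter count stays tiny regardless of input size.

```python
import torch
import torch.nn as nn

# Illustrative only: a standard convolutional layer uses "weight sharing",
# applying the very same 3x3 kernel at every spatial position of the input.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)

image = torch.randn(1, 1, 32, 32)   # one 32x32 grayscale image
features = conv(image)              # every output value reuses the same 9 weights

print(conv.weight.shape)   # torch.Size([1, 1, 3, 3]) -> just 9 parameters
print(features.shape)      # torch.Size([1, 1, 30, 30])
```

Because the kernel is reused everywhere, a feature learned in one corner of the image is automatically “known” everywhere else, which is exactly the shortcut Kietzmann contrasts with biological brains below.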

The All-TNN’s structure looks quite different. It appears smooth instead, with related neurons organized into clusters that never exactly repeat. Visualizations of the spatial relationships within the All-TNN resemble the topography of a mountainous landscape, or a collection of microorganisms viewed under a microscope.

This visual difference is more than a matter of which picture is prettier. Kietzmann said that CNNs’ weight sharing is a fundamental departure from biological brains. “The brain cannot, when you learn something in one location, copy that knowledge over to other locations,” he said. “Whereas in a CNN, you can. It’s an engineering hack to be more efficient at learning.”

The All-TNN avoids that shortcut through a different approach to architecture and training.

Instead of weight sharing, the researchers gave each spatial location in the network its own set of learnable parameters. Then, to keep this from producing a disorganized jumble of features, they added a “soft constraint” during training that encouraged neighboring neurons to learn similar (but never identical) features.
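As a rough illustration of that idea (not the authors’ actual code), the sketch below pairs a locally connected layer, in which every spatial position has its own kernel, with a simple smoothness penalty that nudges neighboring kernels toward one another. The class name, the form of the penalty, and the 0.1 weighting are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyConnected2d(nn.Module):
    """Each output position gets its own kernel (no weight sharing) --
    a rough stand-in for the per-location parameters described in the paper."""
    def __init__(self, in_ch, out_ch, in_size, kernel_size):
        super().__init__()
        self.k = kernel_size
        self.out_size = in_size - kernel_size + 1  # 'valid' padding
        # one (out_ch, in_ch*k*k) weight matrix per spatial position
        self.weight = nn.Parameter(
            0.01 * torch.randn(self.out_size, self.out_size,
                               out_ch, in_ch * kernel_size * kernel_size))

    def forward(self, x):
        # unfold extracts every k x k patch: (batch, in_ch*k*k, n_patches)
        patches = F.unfold(x, self.k)
        b = x.shape[0]
        patches = patches.transpose(1, 2).reshape(
            b, self.out_size, self.out_size, -1)
        # per-position matrix multiply: each location uses its own weights
        out = torch.einsum('bhwi,hwoi->bhwo', patches, self.weight)
        return out.permute(0, 3, 1, 2)

def smoothness_loss(layer):
    """Soft constraint: penalize differences between the kernels of
    adjacent spatial positions, so neighbors learn similar (not identical)
    features."""
    w = layer.weight
    dh = (w[1:, :] - w[:-1, :]).pow(2).mean()   # vertical neighbors
    dw = (w[:, 1:] - w[:, :-1]).pow(2).mean()   # horizontal neighbors
    return dh + dw

layer = LocallyConnected2d(in_ch=1, out_ch=4, in_size=16, kernel_size=3)
x = torch.randn(2, 1, 16, 16)
task_loss = layer(x).pow(2).mean()               # stand-in for the real training objective
loss = task_loss + 0.1 * smoothness_loss(layer)  # 0.1 is an illustrative weight
loss.backward()
```

Roughly speaking, turning the penalty weight way up pushes the layer back toward CNN-like behavior (all kernels nearly identical), setting it to zero gives an unconstrained locally connected network, and intermediate values yield the smooth, map-like organization the paper describes.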

To test whether this translated into machine vision with more human-like behavior, the researchers asked 30 human participants to identify objects briefly flashed at different locations on a screen. While the All-TNN is still far from a perfect model of human vision, it proved roughly three times as strongly aligned with human visual behavior as the CNN.

Zijin Lu, a coauthor of the paper, said the All-TNN’s improved alignment with human vision was driven by how the network learned spatial relationships. “For humans, when you detect certain things, they have a typical position. You already know that shoes are usually at the bottom, on the floor. The plane, it’s up above,” he explained.

The team members working on all-topographic neural networks theorized that their approach would lead to more human-like vision. Simon Ryukov

Human-like behavior doesn’t mean better performance, but it does cut energy use

The All-TNN’s stronger alignment with human vision shows how machines can be taught to see the world more like humans do, but it doesn’t necessarily produce a network that is better at classifying images.

The CNN was 43.2 percent accurate. The All-TNN achieved a classification accuracy of 34.5 to 36 percent, depending on the network configuration.

What it lacked in accuracy, though, it gained in efficiency. The All-TNN consumed far less energy than the CNN it was tested against, with the CNN using more than 10 times as much energy in operation. Strikingly, that is despite the All-TNN being roughly 13 times larger than the CNN (about 107 million parameters for the All-TNN versus approximately 8 million for the CNN).

The All-TNN owes its efficiency to its novel network structure. Broadly speaking, the network can focus on the most important parts of an image instead of processing everything uniformly. “You’ve got a large pool of different neurons that could respond, but only a fraction of them do,” Kietzmann said.

The All-TNN’s efficiency could have implications for machine vision on low-power devices. However, Kietzmann and Lu stressed that energy efficiency wasn’t their primary goal, or what they found most interesting about the paper’s results. Instead, they hope that new network structures like the All-TNN will provide a more complete framework for understanding intelligence, both artificial and human.

Kietzmann noted that the pursuit of scale seems at odds with what is known about how real brains develop (they have access to far less data and use far less energy). Networks that try to mimic human-like behavior could offer an alternative to chasing scale (using more training data and training ever-larger models with more parameters) at any cost.

“There is this trend, which is a sense that scale is kind of a boring answer to the fundamental question of how perception arises,” Kietzmann said.
