Real-Time Speech from Brain Signals Achieved

By analyzing neural signals, a brain-computer interface (BCI) can now almost instantly synthesize the speech of a man who lost his voice to a neurodegenerative disease.
The researchers caution that it will be a long time before such a device, which could restore speech to paralyzed patients, is ready for everyday communication. Still, the hope is that this work "will lead to a pathway for improving these systems further, for example, by transferring the technology to industry," says Maitreyee Wairagkar, a project scientist at the University of California, Davis.
One of the main potential applications of brain-computer interfaces is restoring the ability to communicate to people who can no longer speak due to illness or injury. For example, scientists have developed a number of BCIs that can help translate neural signals into text.
However, text alone fails to capture many key aspects of human speech, such as intonation, which helps convey meaning. In addition, text-based communication is slow, Wairagkar says.
Now researchers have developed what they call a brain-to-voice neuroprosthesis that can decode neural activity into sounds in real time. They detailed their findings on June 11 in the journal Nature.
"Losing the ability to speak due to neurological disease is devastating," Wairagkar says. "Developing a technology that can bypass the damaged pathways of the nervous system to restore speech can have a major impact on the lives of people living with speech loss."
Mapping the Brain to Restore Speech
The new BCI records neural activity using four microelectrode arrays. In total, the scientists placed 256 small electrodes in three areas of the brain, the most important being the ventral precentral gyrus, a region that plays a major role in controlling the muscles behind speech.
"This technology does not 'read minds' or 'decode inner thoughts,'" Wairagkar says. "We record from the area of the brain that controls the speech muscles. The system therefore produces sound only when the participant voluntarily attempts to speak."
The researchers implanted the BCI in a 45-year-old volunteer with amyotrophic lateral sclerosis (ALS), the neurodegenerative condition also known as Lou Gehrig's disease. Although the volunteer could still produce vocal sounds, he had been unable to produce intelligible speech on his own for years before receiving the BCI.
The electrodes recorded the neural activity that resulted when the patient attempted to read sentences on a screen aloud. The scientists then trained a deep-learning AI model on these data to produce his intended speech.
The researchers also trained the AI model on recordings of the patient's voice made before his condition developed, so that the BCI could synthesize speech in his own pre-ALS voice. The patient reported that listening to the synthesized voice "made me feel happy, and it felt like my real voice," the study notes.
https://www.youtube.com/watch?
Video: The neuroprosthesis reproduces the man's speech. UC Davis
In experiments, the scientists found that the BCI could detect key aspects of intended vocal intonation. They had the patient attempt to speak sets of sentences either as statements, which involve no change in pitch, or as questions, which involve a rise in pitch at the end of the sentence. The patient also emphasized each of the seven words in the sentence "I never said she stole my money" in turn by changing its pitch. (The sentence has seven different meanings depending on which word is stressed.) These tests revealed increases in neural activity toward the ends of questions and before emphasized words. This in turn let the patient control the BCI's voice well enough to ask a question, emphasize specific words in a sentence, or sing a melody of three pitches.
"Not only what we say, but also how we say it, is equally important," Wairagkar says. "Intonation in our speech helps us communicate effectively."
Overall, the new BCI could capture neural signals and produce sounds with a delay of just 25 milliseconds, allowing nearly instantaneous speech synthesis, Wairagkar says. The BCI also proved flexible enough to voice made-up pseudo-words, as well as interjections such as "ahh," "eww," "ohh," and "hmm."
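A 25-millisecond delay implies a streaming design: neural features are decoded chunk by chunk and turned into audio as they arrive, rather than after a whole sentence is complete. As a rough illustration only (the study's actual model, frame length, and sample rate are not specified here, so the numbers below are assumptions, and the decoder is a stand-in stub rather than the researchers' network), such a chunked pipeline might look like this:

```python
import numpy as np

CHUNK_MS = 10          # hypothetical neural-feature frame length
SAMPLE_RATE = 16_000   # hypothetical audio sample rate
SAMPLES_PER_CHUNK = SAMPLE_RATE * CHUNK_MS // 1000  # 160 samples

def decode_chunk(features: np.ndarray) -> np.ndarray:
    """Stand-in for a trained decoder: maps one frame of neural
    features to one short chunk of audio samples. The real system
    uses a deep-learning model trained on attempted speech."""
    rng = np.random.default_rng(int(features.sum()) % (2**32))
    return rng.standard_normal(SAMPLES_PER_CHUNK).astype(np.float32)

def stream_synthesize(feature_frames):
    """Decode frame by frame, so audio for frame t is ready while
    frame t+1 is still being recorded -- the key to low latency."""
    for frame in feature_frames:
        yield decode_chunk(frame)

# Simulated run: 50 frames of 256-channel features (one per electrode).
frames = [np.ones(256) * t for t in range(50)]
audio = np.concatenate(list(stream_synthesize(frames)))
print(audio.shape)  # 50 chunks of 160 samples each
```

Because each chunk is emitted independently, the end-to-end delay is bounded by one frame of features plus the decoder's per-chunk compute time, rather than by the length of the utterance.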
The resulting voice was often intelligible, but not consistently so. In tests where human listeners had to transcribe the BCI's words, they understood what the patient said about 56 percent of the time, up from about 3 percent when the BCI was not used.
The neural recordings of the BCI study participant displayed on a screen. UC Davis
"We do not claim that this system is ready for someone who has lost the ability to speak to use for everyday conversation," Wairagkar says. "Rather, we have shown a proof of concept of what is possible with current BCI technology."
In the future, the scientists plan to improve the device's accuracy, for example with more electrodes and better AI models. They also hope that BCI companies will launch clinical trials incorporating this technology. "It is not yet known whether the BCI will work for people who are completely locked in," that is, almost totally paralyzed except for eye movements and blinking, Wairagkar adds.
Another interesting avenue of research is to study whether this kind of speech BCI could help people with language disorders, such as aphasia. "Our current target population cannot speak because of muscle paralysis," Wairagkar says. "However, their ability to produce and comprehend language is still intact." By contrast, she notes, future work may aim to restore speech to people with damage to the speech-producing areas of the brain, or with disabilities that have prevented them from learning to speak since childhood.
2025-06-19 16:00:00