Dr. Ariel Tankus of the School of Medicine at Tel Aviv University and the Tel Aviv Sourasky Medical Center (Ichilov Hospital), and Dr. Ido Strauss, director of the Functional Neurosurgery Unit at Ichilov Hospital and the School of Medicine at Tel Aviv University (Photo: Tel Aviv University's social media page).
Researchers from Tel Aviv University and Tel Aviv Sourasky Medical Center have achieved a scientific breakthrough: a technology that transforms thoughts into words, enabling speech without the use of the mouth.
The technology could enable people with paralysis from brain injuries, brainstem strokes, or amyotrophic lateral sclerosis (ALS) to articulate words artificially using only the power of thought.
During the experiment, a patient with depth electrodes implanted in his brain imagined saying one of two syllables. The electrodes transmitted the electrical signals to a computer, which then vocalized the imagined syllables.
The results were recently published in “Neurosurgery,” the official journal of the Congress of Neurological Surgeons, under the title “A speech neuroprosthesis in the frontal lobe and hippocampus: decoding high-frequency activity into phonemes.”
The study was led by Dr. Ariel Tankus of Tel Aviv University (TAU) School of Medical and Health Sciences and the medical center, along with Dr. Ido Strauss, director of the hospital’s functional neurosurgery unit.
Dr. Tankus explained that in the experiment, the artificial speech was produced with 85% accuracy.
“The 37-year-old patient in the study is an epilepsy patient who was hospitalized to undergo resection of the epileptic focus in his brain,” Tankus and his colleagues wrote. “He has intact speech and was implanted with depth electrodes for clinical reasons only. During the first set of trials, the participant made the neuroprosthesis produce the different vowel sounds artificially with 85% accuracy. In the following trials, performance improved consistently. We show that a neuroprosthesis trained on overt speech data can be controlled silently.”
After implanting depth electrodes in the patient’s brain, he was asked to articulate two syllables, “a” and “e,” out loud. The researchers recorded the patient’s brain activity while he spoke the syllables and then trained artificial intelligence models to identify the brain cells that indicated the desire to say “a” or “e” based on their electrical activity.
As the computer learned to recognize the pattern of electrical activity in the patient’s brain associated with the two syllables, the patient was instructed to imagine saying “a” and “e.” The electrical signals in his brain from that imagining were translated by the computer, which then played the pre-recorded sounds of the syllables.
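The two-stage procedure described above can be sketched in code. This is only an illustrative toy, not the researchers' method: synthetic feature vectors stand in for the recorded high-frequency neural activity, and a simple nearest-centroid rule stands in for the artificial intelligence models trained in the study. All names and parameters here are assumptions for illustration.

```python
# Toy sketch of the two-stage decoding pipeline: train on labeled "overt
# speech" trials, then classify new "imagined speech" trials. Synthetic data
# and a nearest-centroid classifier are placeholders for the real recordings
# and models used in the study.
import numpy as np

rng = np.random.default_rng(0)

def synth_trials(center, n_trials=50, n_features=16, noise=1.0):
    """Generate synthetic per-trial feature vectors around a class center."""
    return center + noise * rng.standard_normal((n_trials, n_features))

# Stage 1: training data recorded while the patient says /a/ and /e/ aloud.
center_a = rng.standard_normal(16)  # hypothetical "true" neural pattern for /a/
center_e = rng.standard_normal(16)  # hypothetical "true" neural pattern for /e/
train_a = synth_trials(center_a)
train_e = synth_trials(center_e)

# Fit: one centroid per syllable -- the learned "pattern of electrical
# activity" associated with each sound.
centroids = {"a": train_a.mean(axis=0), "e": train_e.mean(axis=0)}

def decode(trial):
    """Stage 2: classify a new (imagined-speech) trial by nearest centroid."""
    return min(centroids, key=lambda s: np.linalg.norm(trial - centroids[s]))

# Simulate imagined /a/ trials; each decoded label would trigger playback of
# the corresponding pre-recorded syllable.
imagined = synth_trials(center_a, n_trials=20)
preds = [decode(t) for t in imagined]
accuracy = preds.count("a") / len(preds)
```

The key design point the sketch captures is that the classifier is fit only on overt-speech data, yet is later applied to signals produced by imagining speech, mirroring the article's observation that a neuroprosthesis trained on overt speech can be controlled silently.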
The Israeli researchers were the first in history to make such a breakthrough, Tankus said.
“In this experiment, for the first time in history, we were able to connect the parts of speech to the activity of individual cells from the regions of the brain from which we recorded. This allowed us to differentiate among the electrical signals that represent the sounds /a/ and /e/. At the moment, our research involves two building blocks of speech, two syllables. Of course, our ambition is to get to complete speech, but even two different syllables can enable a fully paralyzed person to signal ‘yes’ and ‘no,’” he said.
In the future, the researchers believe it will be possible to train a computer to translate thoughts into speech in the early stages of ALS, while the patient can still speak. The computer would learn to identify and interpret the electrical signals in the patient’s brain and use that knowledge to express the patient’s thoughts in artificial speech, once the patient has lost the ability to speak.
“And that is just one example. Our study is a significant step toward developing a brain-computer interface that can replace the brain’s control pathways for speech production, allowing completely paralyzed individuals to communicate voluntarily with their surroundings once again,” they wrote.