Vital Signs

Paralyzed man 'turns thoughts into sounds'

By Mark Tutton for CNN
A diagram of the system developed by Boston University researchers.
STORY HIGHLIGHTS
  • Researchers say they have developed a system that can turn thoughts into vowel sounds
  • The system is being tested on a paralyzed man who can only communicate by blinking
  • Eventually, he may be able to hold complete conversations in real time

London, England (CNN) -- An experimental system is letting a paralyzed man turn his thoughts into the beginnings of real-time speech, according to researchers.

Erik Ramsey, 26, from Georgia, in the U.S., suffered a stroke after a car accident at the age of 16, leaving him with Locked-in Syndrome.

That's the same condition suffered by Rom Houben, the Belgian man who was last month discovered to have been wrongly diagnosed as being in a persistent vegetative state for 23 years.

Ramsey is completely paralyzed and currently able to communicate only by blinking his eyes. But researchers at Boston University have implanted an electrode into his brain that lets him convert his thoughts into vowel sounds produced by a voice synthesizer, according to a paper published December 9 in the online journal PLoS ONE.

The technology is an example of a "Brain Computer Interface" (BCI) -- systems that let people use their thoughts to communicate with computers.

Other BCIs have been developed that allow people with Locked-in Syndrome to communicate using just their thoughts, but these systems use brain signals to "type out" words on a computer screen, producing only one or two words a minute, according to Boston University's Professor Frank Guenther.

Guenther said his system converts thoughts into vowel sounds in real time, adding that it could one day let those with the condition hold normal conversations.

"What's different about this is that the user directly controls sound output of the computer, rather than typing in words," Guenther told CNN.

"The user learns to control it more like a prosthetic tongue than a typing system. He's directly trying to control the sound output by thinking about making sounds with his mouth."

The Boston researchers first identified the part of Ramsey's brain involved in producing speech by scanning his brain while he attempted to speak.

They then surgically implanted a Neurotrophic electrode -- a glass cone less than one millimeter in length, containing three wires each thinner than a human hair -- into the speech-related motor cortex in Ramsey's brain.


When Ramsey tries to produce vowel sounds the electrode records neural signals from his brain. An FM radio transmitter implanted underneath the scalp then sends them wirelessly across his skull.

The signals are decoded by a computer, finally driving a speech synthesizer, which produces audible vowel sounds.

Guenther said the whole process, from thought to sound production, takes about 50 milliseconds -- around the same speed as normal speech.
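The signal chain the article describes -- neural activity decoded into sound parameters that drive a synthesizer -- can be sketched in miniature. This is an illustrative toy only: the linear decoder, the weights, and the formant values below are hypothetical stand-ins, not the Boston University team's actual algorithm.

```python
# Toy sketch of a vowel-decoding pipeline: simulated neural firing rates
# are mapped to two formant frequencies (F1, F2), which are then matched
# to the nearest vowel. All numbers are rough textbook values chosen for
# illustration, not taken from the published system.

# Nominal (F1, F2) formant targets in Hz for three vowels.
VOWEL_FORMANTS = {
    "i": (270, 2290),   # as in "beet"
    "a": (730, 1090),   # as in "father"
    "u": (300, 870),    # as in "boot"
}

def decode_formants(firing_rates, weights_f1, weights_f2):
    """Linearly decode two formant frequencies from neural firing rates.

    A simple weighted sum stands in for the real decoder; the article
    notes that only 20 to 40 neurons drive the synthesizer through the
    implant's two recording channels.
    """
    f1 = sum(r * w for r, w in zip(firing_rates, weights_f1))
    f2 = sum(r * w for r, w in zip(firing_rates, weights_f2))
    return f1, f2

def nearest_vowel(f1, f2):
    """Pick the vowel whose formant targets are closest to the decoded pair."""
    return min(
        VOWEL_FORMANTS,
        key=lambda v: (VOWEL_FORMANTS[v][0] - f1) ** 2
                      + (VOWEL_FORMANTS[v][1] - f2) ** 2,
    )

# Example: a firing pattern whose decoded formants land near (270, 2290)
# selects the vowel "i".
f1, f2 = decode_formants([27, 229], [10, 0], [0, 10])
print(nearest_vowel(f1, f2))
```

In the real system this decode step would run in a tight loop, with the synthesizer updated roughly every 50 milliseconds so the user hears the result at conversational speed.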

Melody Moore Jackson researches BCIs at Georgia Institute of Technology. In 1998, she was part of the team that was the first to implant recording electrodes in humans. In those cases, locked-in patients had an electrode implanted in the part of the brain responsible for physical movement. The patients would then imagine moving part of their body in order to move a cursor on a computer screen.

But Moore Jackson says the new research represents a major step forward with the technology.

"It's a pretty significant achievement to be able to achieve speech recognition from brain signals, especially from the Neurotrophic electrode, which is attached to a fairly limited number of neurons," Moore Jackson told CNN.

Guenther said the electrode in Ramsey's brain was implanted four years ago. Its two-channel recording system allows only 20 to 40 neurons to control the voice synthesizer, but Guenther said newer 32-channel systems would allow over 100 neurons to be used.

"We're currently working on synthesizers that allow consonants to be produced relatively easily and we are also working on the hardware that would let us implant a 32-channel system in the next patient," he told CNN.

"That should give us a dramatic improvement in the user's capability for producing sound, which will allow consonants to be produced, and then complete words."

He added that it would probably be five to 10 years before the technology would be available to the general public.

Christopher James, of the Signal Processing and Control Group at the University of Southampton, in England, said the research was "quite novel and very promising."

But he cautioned that repeatability might be a problem, with BCI systems often working well for one individual, but not others, and added that invasive brain surgery is always a risk.

 