Mind reading soon?
Scientists Use Brain Waves to Eavesdrop on the Mind
Computer model seeks to decode heard language, but research is preliminary.
WEDNESDAY, Feb. 1 (HealthDay News) - Scientists may one day be able to read the minds of people who have lost the ability to speak, new research suggests.
In their report, published Jan. 31 in the online edition of the journal PLoS Biology, researchers at the University of California, Berkeley describe a way to analyze a person's brain waves and reconstruct words the person heard in normal conversation.
This ability to decode electrical activity in an area of the auditory system called the superior temporal gyrus may one day enable neuroscientists to hear the imagined speech of stroke or other patients who can't speak, or to eavesdrop on the constant, internal monologues that run through people's minds, the researchers explained in a journal news release.
"This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease [amyotrophic lateral sclerosis] and can't speak," Robert Knight, a professor of psychology and neuroscience, said in the news release. "If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit."
However, the study's first author, post-doctoral researcher Brian Pasley, noted that "this research is based on sounds a person actually hears, but to use this for a prosthetic device, these principles would have to apply to someone who is imagining speech."
He explained that "there is some evidence that perception and imagery may be pretty similar in the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device."
For the study, Pasley's team tested two computational models designed to match spoken sounds to the pattern of electrode activity recorded when a patient heard a single word. The better of the two models reproduced a sound close enough to the original that the researchers could correctly guess the word.
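The general idea behind such decoding models can be sketched as a linear mapping from electrode activity back to the sound's frequency content. The sketch below is illustrative only, not the paper's actual method: it uses entirely synthetic data in place of real brain recordings, and the dimensions, noise level, and ridge penalty are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T time bins, E electrodes, F frequency bands.
T, E, F = 500, 16, 8

# Synthetic "spectrogram" of heard speech (stand-in for real audio features).
S = rng.normal(size=(T, F))

# Each electrode responds as a noisy linear mixture of the frequency bands,
# a toy stand-in for recordings from the superior temporal gyrus.
W_true = rng.normal(size=(F, E))
X = S @ W_true + 0.1 * rng.normal(size=(T, E))

# Split into training and held-out portions.
X_tr, X_te = X[:400], X[400:]
S_tr, S_te = S[:400], S[400:]

# Linear decoding model via ridge regression (closed form):
# find B minimizing ||X_tr @ B - S_tr||^2 + lam * ||B||^2.
lam = 1.0
B = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(E), X_tr.T @ S_tr)

# Reconstruct the held-out spectrogram from "brain activity" alone.
S_hat = X_te @ B

# Correlation between reconstruction and original, per frequency band.
r = [np.corrcoef(S_hat[:, f], S_te[:, f])[0, 1] for f in range(F)]
print(round(float(np.mean(r)), 3))
```

With synthetic data this linear, the reconstruction correlates very strongly with the original; the hard part in practice is that real neural responses to speech are far noisier and nonlinear, which is why the study compared competing models of that mapping.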
The aim of the research was to reveal how the human brain encodes speech, and to then pinpoint the aspects of speech that are necessary for understanding.
"At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound," Pasley said. "The big question is, what is the most meaningful unit of speech? A syllable, [or an even smaller unit of language, such as the sound of each letter]? We can test these hypotheses using the data we get from these recordings."
The American Speech-Language-Hearing Association has more about adult speech and language difficulties.
(SOURCE: PLoS Biology, news release, Jan. 31, 2012)