Thursday, June 23, 2011


Translating Nerve Signals into Words

In the
new study, the microelectrodes were used to detect weak electrical signals generated by a few thousand neurons, or nerve cells, in the brain. Each of two grids of 16 microECoGs, spaced 1 millimeter (about one-25th of an inch) apart, was placed over one of two speech areas of the brain: first, the facial motor cortex, which controls movements of the mouth, lips, tongue and face -- basically the muscles involved in speaking; and second, Wernicke's area, a little-understood part of the human brain tied to language comprehension.

The study was conducted during one-hour sessions on four consecutive days. Researchers told the epilepsy patient to repeat one of the 10 words each time they pointed at the patient, and brain signals were recorded via the two grids of microelectrodes. Each of the 10 words was repeated from 31 to 96 times, depending on how tired the patient was. The researchers then looked for patterns in the brain signals corresponding to the different words by analyzing changes in the strength of different frequencies within each nerve signal, says Greger.

The researchers found that each spoken word produced a distinct pattern of brain signals, and thus the set of electrodes that most accurately identified each word varied from word to word. They say that supports the theory that closely spaced microelectrodes
can capture signals from single, column-shaped processing units of neurons in the brain.

One unexpected finding: when the patient repeated words, the facial motor cortex was most active and Wernicke's area was less active. Yet Wernicke's area lit up when the patient was thanked by researchers after repeating words. That shows Wernicke's area is more involved in high-level understanding of language, while the facial motor cortex controls the facial muscles that help produce sounds, Greger says.

The researchers were most accurate -- 85 percent -- in distinguishing brain signals for one word from those for another when they used signals recorded from the facial motor cortex. They were less accurate -- 76 percent -- when using signals from Wernicke's area. Combining data from both areas didn't improve accuracy, showing that brain signals from Wernicke's area add little to those from the facial motor cortex.

When the scientists selected the five microelectrodes on each 16-electrode grid that were most accurate in decoding brain signals from the facial motor cortex, their accuracy in distinguishing one of two words from the
other rose to almost 90 percent.

In the more difficult test of distinguishing brain signals for one word from signals for the other nine words, the researchers initially were accurate 28 percent of the time -- not good, but better than the 10 percent accuracy expected by random chance. However, when they focused on signals from the five most accurate electrodes, they identified the correct word almost half (48 percent) of the time.

"It doesn't mean the problem is completely solved and we can all go home," Greger says. "It means it works, and we now need to refine it so that people with locked-in syndrome could really communicate."

"The obvious next step -- and this is what we are doing right now -- is to do it with bigger microelectrode grids" with 121 microelectrodes in an 11-by-11 grid, he says. "We can make the grid bigger, have more electrodes and get a tremendous amount of data out of the brain, which probably means more words and better accuracy."
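The two key steps described above -- scoring each electrode by the strength of particular frequencies in its signal, then focusing on the handful of most informative electrodes -- can be sketched in a toy simulation. This is a hypothetical illustration, not the study's actual pipeline: the sampling rate, frequency band, and electrode numbering below are invented, and a real decoder would use recorded microECoG data rather than synthetic sine waves.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean power of `signal` within the [low, high) Hz band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

fs = 600  # samples per second (hypothetical rate)
t = np.arange(fs) / fs
rng = np.random.default_rng(0)

# Simulate one second of recording from a 16-electrode grid: a few
# electrodes carry a word-related 80 Hz component, the rest only noise.
informative = [2, 5, 7, 11, 14]
grid = []
for ch in range(16):
    noise = 0.5 * rng.standard_normal(fs)
    tone = np.sin(2 * np.pi * 80 * t) if ch in informative else 0.0
    grid.append(tone + noise)

# Score each electrode by its 60-90 Hz band power and keep the top five,
# mirroring the study's step of focusing on the most accurate electrodes.
scores = [band_power(sig, fs, 60, 90) for sig in grid]
top_five = sorted(int(ch) for ch in np.argsort(scores)[-5:])
print("top electrodes:", top_five)
```

Because the simulated 80 Hz component carries far more band power than the background noise, the ranking recovers the electrodes that were given the word-related signal; in the real study, the analogous ranking raised two-word accuracy to almost 90 percent and ten-word accuracy to 48 percent.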
