(Image credit: JohnnyGreig/E+/Getty Images)
According to The Scientist, scientists can now "decode" people's thoughts without even touching their heads.
Previous mind-reading methods required implanting electrodes deep inside people's brains. The new method, described in a report posted September 29 to the preprint database bioRxiv, instead relies on a noninvasive brain-scanning technique called functional magnetic resonance imaging (fMRI).
fMRI tracks the flow of oxygenated blood through the brain. Because active brain cells require more energy and oxygen, this blood flow serves as an indirect measure of brain activity.
Because the electrical signals released by brain cells move much more quickly than blood moves through the brain, this scanning method cannot capture real-time brain activity.
Surprisingly, the authors of the study discovered that, while they were unable to produce word-for-word translations, they were still able to decode the semantic meaning of people's thoughts using this imperfect proxy measure.
Senior author Alexander Huth, a neuroscientist at the University of Texas at Austin, stated to The Scientist, "If you had asked any cognitive neuroscientist in the world 20 years ago if this was doable, they would have laughed you out of the room."
For the new study, which has not yet been peer reviewed, the team scanned the brains of one woman and two men in their 20s and 30s. Each participant listened to a total of 16 hours of different podcasts and radio shows over several sessions in the scanner.
After that, the team fed these scans to a computer program they called a "decoder," which compared the patterns in the recorded brain activity with the patterns in the audio.
According to Huth, the algorithm could then take an fMRI recording and generate a story from it, and that story matched the original plot of the podcast or radio show "pretty well."
To put it another way, the decoder was able to deduce from the brain activity of each participant what story they had heard.
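The preprint's code isn't reproduced here, but the core idea, comparing the brain-activity patterns predicted for candidate text against the recorded scan, can be sketched in a few lines of Python. In this hypothetical sketch, `encoding_model` stands in for a model trained on the listening data to predict fMRI activity from text, and `propose_continuations` stands in for a language model suggesting likely next words; both names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Hypothetical sketch of the "decoder" idea described above.
# encoding_model(text) -> predicted fMRI response, shape (timepoints, voxels)
# propose_continuations(text) -> candidate next words from a language model

def score(candidate, observed_scan, encoding_model):
    """Score a candidate word sequence: higher when the brain activity
    it is predicted to evoke matches the activity actually recorded."""
    predicted = encoding_model(candidate)
    return -np.linalg.norm(predicted - observed_scan)

def decode(observed_scan, encoding_model, propose_continuations,
           n_words, beam_width=10):
    """Beam search over word sequences, keeping those whose predicted
    brain response best explains the recorded fMRI scan."""
    beams = [""]
    for _ in range(n_words):
        candidates = [
            (beam + " " + word).strip()
            for beam in beams
            for word in propose_continuations(beam)
        ]
        candidates.sort(
            key=lambda c: score(c, observed_scan, encoding_model),
            reverse=True,
        )
        beams = candidates[:beam_width]
    return beams[0]  # best-matching reconstruction of the heard story
```

The key design point is that nothing is translated word for word: the decoder only ever asks which of many plausible sentences would evoke brain activity most like what was observed, which is why it recovers the gist of a story rather than its exact wording.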
The algorithm did make some errors, however, such as swapping characters' pronouns and confusing the first and third person. Huth stated that it "knows what's happening fairly accurately, but not who is doing the things."
In additional tests, the algorithm was able to fairly accurately describe the plot of a silent movie that the participants watched in the scanner. It could even retell a story that the participants had imagined telling.
The long-term goal of the research team is to improve this technology so that it can be used in brain-computer interfaces for people who are unable to speak or type.
