Scientists use AI to decipher words and sentences from brain scans
Researchers used the activity recorded in brain scans to decipher what individuals are thinking.
Emerging neurotechnology could one day help paralysed patients communicate - but it could also raise privacy concerns
//Summary - Level C2//
Scientists have developed an AI technique to translate brain scans into words and sentences, which could aid communication for individuals with paralysis or brain injuries. The method uses functional magnetic resonance imaging (fMRI) to tap into the brain's speech-producing areas and decode imagined speech. Despite privacy concerns and technical limitations, such as the cost and bulk of fMRI, this research represents a significant advance in brain-computer interfaces. Future development may simplify the system and make it more accessible for practical use.
A)
An artificial intelligence (AI)-based technique can translate brain scans into words and sentences, reports a team of computational neuroscientists. Although still in its early stages and far from perfect, the new technology could eventually help people with brain injuries or paralysis regain the ability to communicate, the researchers say.
The study "shows that with the right methods and better models, we can decode what the subject is thinking," says Martin Schrimpf, a computational neuroscientist at the Massachusetts Institute of Technology who was not involved in the work.
B)
Other research teams have developed brain-computer interfaces (BCIs) to translate a paralysed patient's brain activity into words. However, most of these approaches rely on electrodes implanted in the patient's brain.
Less successful have been non-invasive techniques based on methods such as electroencephalography (EEG), which measures brain activity through electrodes placed on the scalp. EEG-based BCIs have only been able to decode phrases and can't reconstruct coherent speech, Schrimpf says.
Previous BCIs also focused on people trying to speak or thinking about speaking, so they relied on areas of the brain involved in producing speech-related movements. As a result, they only worked when a person was moving or trying to move.
C)
Now, Alexander Huth, a computational neuroscientist at the University of Texas at Austin, and colleagues have developed a BCI based on functional magnetic resonance imaging (fMRI) that taps more directly into the speech-producing areas of the brain to decode imagined speech. This non-invasive method, commonly used in neuroscience research, tracks changes in blood flow in the brain to measure neural activity.
D)
As with all BCIs, the aim was to associate each word, phrase or sentence with the particular pattern of brain activity it evoked. To gather the necessary data, the researchers scanned the brains of three participants while each listened to about 16 hours of storytelling podcasts, such as The Moth Radio Hour and The New York Times' Modern Love.
Using this data, the researchers created a series of maps for each subject, showing how the person's brain responded when they heard a particular word, phrase or meaning.
Because fMRI takes a few seconds to record brain activity, it does not capture each specific word, but rather the general idea conveyed by each phrase and sentence, says Huth. So his team used the fMRI data to train the AI to predict how a particular person's brain would respond to language.
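To make that step concrete, here is a minimal sketch of the kind of voxel-wise "encoding model" described above: a ridge regression from language features to fMRI responses, with lagged copies of the features standing in for the slow hemodynamic response. All array names, shapes, and the use of scikit-learn are illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical dimensions: fMRI volumes (TRs), language features, voxels.
n_trs, n_features, n_voxels = 1000, 300, 500
word_features = rng.normal(size=(n_trs, n_features))  # e.g. embeddings of heard words
fmri = rng.normal(size=(n_trs, n_voxels))             # measured brain responses

# fMRI is slow, so stack features from the preceding volumes: the model
# sees the delayed, smeared hemodynamic response rather than single words.
delays = [1, 2, 3]
lagged = np.hstack([np.roll(word_features, d, axis=0) for d in delays])

# One ridge regression per voxel, fit jointly.
encoder = Ridge(alpha=10.0).fit(lagged, fmri)

def predict_response(features: np.ndarray) -> np.ndarray:
    """Predict the brain activity a candidate stretch of language would evoke."""
    stacked = np.hstack([np.roll(features, d, axis=0) for d in delays])
    return encoder.predict(stacked)
```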
E)
At first, the system struggled to turn brain scans into speech. But then the researchers incorporated the GPT natural language model to predict which word might come after another.
Then, using the maps generated from the scans and the language model, they ran through different possible phrases and sentences to see if the predicted brain activity matched the actual brain activity. If it did, they kept that sentence and moved on to the next.
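A hedged sketch of that search loop, under the same assumptions: a language model proposes candidate next words, the encoding model predicts the brain activity each candidate would evoke, and the candidate that best matches the recorded scan is kept. `propose_next_words` and `features_for` are hypothetical stand-ins for a GPT-style model and a text featurizer; the actual decoder keeps a beam of several candidate sentences rather than a single best word.

```python
import numpy as np

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between predicted and observed activity patterns."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def decode_step(sentence_so_far, observed_activity,
                propose_next_words, features_for, predict_response):
    """Extend the decoded sentence by whichever language-model candidate
    best explains the recorded brain activity."""
    best_word, best_score = None, -np.inf
    for word in propose_next_words(sentence_so_far):       # LM suggestions
        candidate = sentence_so_far + [word]
        predicted = predict_response(features_for(candidate))
        score = correlation(predicted, observed_activity)  # match to scan
        if score > best_score:
            best_word, best_score = word, score
    return sentence_so_far + [best_word]
```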
F)
Subjects then listened to podcasts that had not been used in training. Little by little, the system produced words, phrases and sentences, eventually generating ideas that closely matched what the person had heard. The technology was particularly good at getting the gist of the story, even if it didn't always get every word right.
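One way to quantify "getting the gist" rather than exact wording is to compare decoded and actual sentences in a semantic embedding space instead of word by word. The sketch below only illustrates that idea; `embed` is a placeholder for any sentence-embedding model, not the study's actual evaluation metric.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gist_score(decoded: str, actual: str, embed) -> float:
    """Agreement at the level of meaning: high when the decoded text
    paraphrases the actual text, even with different words."""
    return cosine(embed(decoded), embed(actual))
```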
G)
It also worked when a person told a story or watched a video. In one experiment, people watched a film without sound while the system tried to decipher their thoughts.
When one person watched an animated movie of a dragon kicking someone to the ground, the system spat out: "He's knocking me down." All this happened without the participants being asked to speak. "That shows that what we're getting at here is something deeper than just language," says Huth. "The system works on the level of ideas."
The system could one day help people who have lost the ability to communicate due to brain injury, stroke or locked-in syndrome, a form of paralysis in which people are conscious but unable to move or speak.
To get there, however, the technology not only needs further development using more training data; it also needs to be made more accessible. Because it relies on fMRI, the current system is expensive and cumbersome, but Huth says the team aims to adapt the approach to simpler, more portable imaging techniques such as EEG.
H)
Although it's still far from decoding random thoughts in the real world, the advance raises concerns that, as the technology improves, it could enable a form of mind reading.
"Our thought when we got this working was, 'Oh my God, this is kind of scary,'" Huth recalls. To address these concerns, the authors tested whether a decoder trained on one person would work on another - it didn't.
Consent and cooperation also proved critical: when people resisted by doing a task such as counting instead of listening to the podcast, the system could not decode any meaning from their brain activity.
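The cross-subject check can be illustrated with synthetic data: if each subject maps the same story features to brain activity through different weights, an encoder fitted on one subject predicts the other at roughly chance level. Everything below is simulated, reusing the same hypothetical ridge-regression setup as the earlier sketch.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
features = rng.normal(size=(600, 200))          # shared story features

# Each subject's brain maps the same stories through different weights.
fmri_a = features @ rng.normal(size=(200, 400)) + rng.normal(size=(600, 400))
fmri_b = features @ rng.normal(size=(200, 400)) + rng.normal(size=(600, 400))

encoder = Ridge(alpha=10.0).fit(features[:500], fmri_a[:500])  # train on A only

def score(pred: np.ndarray, actual: np.ndarray) -> float:
    return float(np.corrcoef(pred.ravel(), actual.ravel())[0, 1])

held_out = encoder.predict(features[500:])
print(f"same subject:  {score(held_out, fmri_a[500:]):.2f}")   # well above 0
print(f"other subject: {score(held_out, fmri_b[500:]):.2f}")   # near 0
```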
I)
Still, privacy is a primary ethical concern with neurotechnology, says Nita Farahany, a bioethicist at Duke University. Researchers should consider the implications of their work and develop safeguards against misuse early on.
"We need everyone to be involved in ensuring this is done ethically," she says. "The technology could be transformative for people who need the ability to communicate again, but the implications for the rest of us are profound."
Scientists use AI to decipher words and sentences from brain scans
Budding neurotechnology could someday help paralyzed patients communicate—but could also raise privacy concerns
https://www.science.org/content/article/scientists-use-ai-decipher-words-and-sentences-brain-scans
//Add info//
AI Technology Makes Words and Sentences From Brain Scans
https://nativecamp.net/textbook/page-detail/2/19570?v=1684991384
New technology may help those with brain injuries or paralysis regain the ability to communicate. This is thanks to a new artificial intelligence technique, which can translate brain scans into words and sentences.
"Using the right methods and better models, we can decode what the subject is thinking," said computational neuroscientist Martin Schrimpf.
Previous techniques have been less effective. One method required electrodes to be implanted in the patient's brain, which is quite invasive. Another approach measured brain activity via electrodes attached to the scalp, but this could not reconstruct coherent language.
This new brain-computer interface taps more directly into the language-producing areas of the brain. It relies on functional MRI, a non-invasive method commonly used in neuroscience research.
The researchers scanned the brains of three participants, each of whom listened to at least sixteen hours of storytelling podcasts. Then, the researchers produced a set of maps for each subject showing how the person's brain reacted when they heard certain words or phrases.
This data was then used to train AI to predict how an individual's brain would react to language. It took a while for the system to work, but eventually it could produce words, phrases, and sentences that made up a story.
This new technology could benefit many people, but some ethical considerations exist. Nita Farahany, a bioethicist, said that researchers should develop safeguards against misuse.
//Discussion//
1. Would you like to work on developing AI? Why or why not?
-> Yes, I would like to work in AI development.
AI is no longer a technology we can avoid, so we need to shape it into an excellent tool that opens up our future.
If it becomes a tool that is only abused, the future will be much harsher, with more wars and crimes.
We must create a better world.
2. Do you prefer reading books or listening to podcasts? Please support your answer.
-> I prefer listening to podcasts.
I also enjoy watching videos and movies.
These formats feel familiar because you absorb them immediately through sight and hearing.
3. Would you be willing to be a participant in an invasive study? Please explain.
invasive = (of a medical procedure or study) involving the introduction of instruments or other devices into the body.
-> No, I wouldn't participate, because it is dangerous work that carries heavy responsibility.
I think it's a subject that more experts should take the time to study.
4. Is it okay to conduct experiments even if there are some ethical considerations? Please discuss.
-> I wouldn't want to be tested myself, because I don't want others to know what I'm thinking.
However, if I had a disability and the experiment would help others, I might accept it under good conditions, such as my privacy being protected.
5. Would you rather communicate by speaking or writing? Please support your answer.
-> Ideally, we would communicate by both speaking and writing. However, I think that talking to people is fundamentally necessary for humans.
For three years during the coronavirus pandemic, we were restricted from meeting and talking to people in person.
Even though we could talk online, I realized that face-to-face meetings are also important.
6. Do you think AI will one day replace human writers and storytellers? Please share your thoughts.
-> I don't want to think so, but I believe the time will come when AI writers are as active as human ones.
We humans have read and absorbed the great writers of the past and created new stories in our own words, and I think AI can do that too.
However, to be a hit with human readers, AI will need stories that people can understand and that let us look just a little ahead into a new era.
It is easy to imagine writers, composers, directors, and artists all becoming AI.
Still, I hope we can keep meeting humans who create a new world different from the past.