March 31, 2020
Cerebro is coming.
This is a great way to investigate thoughtcrime.
New technology that can read minds and turn thoughts into complete sentences with the help of AI is giving hope to people who can’t speak.
Researchers from the University of California say their technology is able to translate brain activity into English word by word with the help of machine learning.
The technology could revolutionise the way people who can’t speak or move are able to communicate, as it is more natural than existing tools, the team say.
It can revolutionize way more than that.
I’m sure this kind of technology gives hope to people who can’t speak, but it gives much more hope to the people who want to use tools like this for other ends.
Powerful people will become more powerful with this technology, and considering who’s holding the power in the West nowadays, the picture is quite dark.
It has an accuracy rate of 97 per cent – more than twice that of other brain-signal decoding devices – and works by mapping the activity of neurons to words.
Translating neural activity into words lets the system type word sequences on a computer interface in real time, which can then be read out by a synthetic voice.
The current process of giving people without a voice the ability to speak can be incredibly slow as it requires monitoring residual eye movement or muscle twitches.
Patients use these movements to select letters that spell out words, one at a time.
Researchers found that in many cases the information needed to produce fluent speech is still present in the brain, so this technology can help patients express those words.
Corresponding author Dr Joseph Makin, of the University of California, San Francisco, said: ‘Average word error rates are as low as three per cent.’
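For context, a three per cent word error rate means roughly three substitutions, insertions, or deletions per hundred reference words. The standard computation is a word-level Levenshtein distance divided by the reference length; a minimal sketch in Python (the example sentences are invented, not from the study):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                              # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                              # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the ladder was used to rescue the cat",
          "the ladder was used to rescue a cat"))
# one substitution among eight reference words
```

So a decoder would hit 3 per cent by getting about 97 of every 100 words exactly right, with no extra or missing words.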
The study was carried out on four epilepsy patients who had been fitted with brain implants to monitor their seizures.
Their neural activity was turned ‘word by word into an English sentence – in real time,’ said Dr Makin.
Previous techniques have had limited success – with efficiency far below that of natural speech.
In earlier studies using a similar technique, researchers could only decode fragments of spoken words – or less than 40 per cent of the words in spoken phrases.
So Dr Makin and colleagues used artificial intelligence, or machine learning, to link the behaviour of brain cells directly to sentences.
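The actual system trains recurrent encoder–decoder networks on electrocorticography recordings, none of which can be reproduced here. But the core idea – learning a mapping from neural-activity feature vectors to words – can be sketched with a toy softmax classifier on synthetic data. Everything below, from the fake ‘recordings’ to the four-word vocabulary, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["yes", "no", "water", "help"]   # toy vocabulary (invented)
n_features = 16                          # pretend electrode features per word window

# Synthetic "neural activity": each word gets a characteristic pattern plus noise.
prototypes = rng.normal(size=(len(vocab), n_features))
labels = rng.integers(0, len(vocab), size=200)
X = prototypes[labels] + 0.3 * rng.normal(size=(200, n_features))

# Train a softmax (multinomial logistic) classifier by gradient descent.
W = np.zeros((n_features, len(vocab)))
onehot = np.eye(len(vocab))[labels]
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - onehot) / len(X)

# Decode: map each activity window to its most probable word.
pred = (X @ W).argmax(axis=1)
accuracy = (pred == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The real model decodes whole sentences rather than isolated words, which is exactly where the recurrent sequence-to-sequence architecture earns its keep – but the ‘link brain cells directly to words’ step is, at heart, a learned mapping like this one.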
His team also found that the brain regions which contributed most strongly to speech decoding were also involved in speech production and speech perception.
The approach decoded spoken sentences from one patient’s neural activity with an error rate similar to that of professional-level speech transcription, said Dr Makin.
Additionally, when the AI networks were pre-trained on neural activity and speech from one person before training on another participant, decoding results improved.
This suggests the approach may be transferable across people.
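Pre-training on one participant before training on another is ordinary transfer learning: reuse the first model’s weights as the starting point for the second. A hypothetical sketch on synthetic data (in the real study the transferred component was part of a recurrent network, not a linear map; every name and number here is invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(prototypes, n, noise=0.3):
    """Synthetic neural windows for one participant."""
    labels = rng.integers(0, len(prototypes), size=n)
    X = prototypes[labels] + noise * rng.normal(size=(n, prototypes.shape[1]))
    return X, labels

def train(X, labels, n_words, W=None, steps=100, lr=0.1):
    """Softmax classifier; W is an optional warm start (pre-trained weights)."""
    if W is None:
        W = np.zeros((X.shape[1], n_words))
    onehot = np.eye(n_words)[labels]
    for _ in range(steps):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W = W - lr * X.T @ (p - onehot) / len(X)
    return W

n_words, n_features = 4, 16
shared = rng.normal(size=(n_words, n_features))          # structure shared across people
person_a = shared + 0.2 * rng.normal(size=shared.shape)  # each participant varies on it
person_b = shared + 0.2 * rng.normal(size=shared.shape)

Xa, ya = make_data(person_a, 400)
Xb, yb = make_data(person_b, 40)               # much less data for participant B

W_pre = train(Xa, ya, n_words)                 # pre-train on participant A
W_scratch = train(Xb, yb, n_words)             # train B from scratch
W_transfer = train(Xb, yb, n_words, W=W_pre)   # fine-tune A's weights on B

Xt, yt = make_data(person_b, 200)              # held-out test set for participant B
accs = {name: ((Xt @ W).argmax(axis=1) == yt).mean()
        for name, W in [("scratch", W_scratch), ("transfer", W_transfer)]}
print(accs)
```

The warm start only helps because participants share underlying structure – which is the point of the finding: whatever the networks learn about the neural code is partly common across people.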
Further research is needed to fully investigate the potential of the approach and to extend decoding beyond the restricted language used in the study, Dr Makin added.
They could apply a similar approach to things other than words.
Soon they’ll have a nice little tool to browse people’s thoughts that will allow them to immediately convict anyone of thinking the wrong things.