Non-Invasive Brain Interfaces Possible, Not Common as Tech Progresses
Using the mind to move objects has been a magician’s trick for centuries. About 40 years ago, a Russian performer became famous for bending spoons with his mind, though it was a trick. While humans can already use voice commands to interact with their devices, dialing phones and pulling up photos, companies are still working on the ultimate mind-meld: enabling humans to make machines do their bidding with the power of thought.
Wired.com took an in-depth look at the latest progress in melding minds with computers.
The quest to meld mind with machine dates back to at least the 1970s, when scientists began in earnest to drill into people’s skulls and implant the first brain-computer interfaces: electrodes that translate brain cell activity into data. Today, BCIs can regulate tremors from Parkinson’s disease and restore some basic movement in people with paralysis, but they are still surgically implanted and experimental. Even so, the likes of Elon Musk already envision a future where we’ll all have chips in our brains, replacing our need for keyboards, mice, touchscreens, joysticks, steering wheels and more.
With his company NextMind, Sid Kouider has joined Elon Musk and others who dream of making this mind-merger a reality.
A story on venturebeat.com describes how NextMind plans to market the device, which fits on the back of a user’s head, at a consumer price of $399.
Arielle Pardes, a wired.com writer, tried the NextMind device during a demo in December. The device is small, about 60 grams and roughly the size of a kiwi fruit. It works like an electroencephalogram, or EEG, which records electrical activity in the brain, and it resembles the tools Kouider used as a professor of neuroscience; his Paris lab specializes in studies of consciousness. The dry electrodes require contact with the scalp and use a proprietary material that Kouider said is “very sensitive to electrical signals.” To control digital devices through the NextMind headset, users must first create a neural profile, a record of how their visual cortex responds when they focus on certain objects. Building the profile takes only a few minutes, but just as voice recognition needs training, the device works better with pre-training.
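To make the calibration idea concrete, here is a minimal Python sketch of what building and using a per-user “neural profile” could look like. This is not NextMind’s actual software or algorithm; the electrode count, sampling rate, simulated flicker frequencies, and the simple template-matching decoder are all illustrative assumptions about how visual-cortex signals might be mapped to on-screen objects.

```python
# Hypothetical sketch of a "neural profile" calibration and decoding loop.
# NOT NextMind's SDK or algorithm; all numbers below are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 250     # Hz, assumed EEG sampling rate
EPOCH_SECONDS = 1.0   # length of each brain-signal snapshot
N_ELECTRODES = 8      # assumed number of dry electrodes over the visual cortex

def make_epoch(target_freq, noise=1.0):
    """Simulate one epoch of visual-cortex EEG while the user focuses on an
    on-screen object flickering at target_freq (an SSVEP-style response)."""
    t = np.arange(0, EPOCH_SECONDS, 1.0 / SAMPLE_RATE)
    signal = np.sin(2 * np.pi * target_freq * t)
    return signal + noise * np.random.randn(N_ELECTRODES, t.size)

def calibrate(object_freqs, trials_per_object=10):
    """Build a per-user 'neural profile': one averaged template per object."""
    templates = {}
    for name, freq in object_freqs.items():
        epochs = np.stack([make_epoch(freq) for _ in range(trials_per_object)])
        templates[name] = epochs.mean(axis=0)   # averaging reduces noise
    return templates

def decode(epoch, templates):
    """Pick the object whose template best correlates with the incoming epoch."""
    scores = {name: np.corrcoef(epoch.ravel(), tmpl.ravel())[0, 1]
              for name, tmpl in templates.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Hypothetical on-screen objects, each tagged with a distinct flicker rate (Hz).
    objects = {"duck_left": 8.0, "duck_right": 12.0, "tv_corner": 15.0}
    profile = calibrate(objects)                 # the few-minute calibration step
    test = make_epoch(objects["duck_right"])     # user now focuses on one object
    print("decoded focus target:", decode(test, profile))
```

The point of the sketch is the two-phase flow the article describes: a short calibration pass that records how the user’s visual cortex responds to each object, followed by live decoding that matches new brain activity against those stored responses.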
NextMind developed demos and games in hopes that developers will build more advanced applications. One game is similar to Nintendo’s Duck Hunt and requires the user to “shoot” the ducks with their brain.
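For a sense of how such a demo might consume the headset’s output, here is a small self-contained sketch of a Duck Hunt-style game loop. The decode_focus() stub, the duck names, and the half-second dwell threshold are hypothetical stand-ins, not NextMind’s SDK, which the article does not detail.

```python
# Hypothetical Duck Hunt-style loop driven by a focus decoder.
# decode_focus() is a stub standing in for a real headset's classification stream.
import random
import time

DUCKS = ["duck_left", "duck_center", "duck_right"]
DWELL_NEEDED = 0.5   # seconds of sustained focus before the duck "explodes"

def decode_focus(simulated_target="duck_center"):
    """Stub: return the object the decoder thinks the user is focusing on.
    Simulates a user staring at one duck, with occasional misreads."""
    return simulated_target if random.random() < 0.8 else random.choice(DUCKS + [None])

def game_loop(rounds=20, tick=0.1):
    dwell = {duck: 0.0 for duck in DUCKS}
    for _ in range(rounds):
        target = decode_focus()
        for duck in DUCKS:
            # accumulate focus time on the decoded target, reset the others
            dwell[duck] = dwell[duck] + tick if duck == target else 0.0
            if dwell[duck] >= DWELL_NEEDED:
                print(f"{duck} exploded!")
                dwell[duck] = 0.0
        time.sleep(tick)

if __name__ == "__main__":
    game_loop()
```

The dwell-time trigger is one plausible way to explain the sub-second response Pardes describes: the game fires as soon as focus on a single target has been held long enough to be confident it is intentional.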
“I focused my gaze on the ducks and, in less than a second, they exploded,” Pardes wrote. “This little magic trick was repeated through a series of demos. I changed the channel on a mock TV set by glancing at one corner of the screen. I cracked a digital vault by concentrating on the right numbers on a pincode. I changed the colors on a set of smart lightbulbs that Kouider had set up for me.” So it worked as predicted.
CTRL-Labs, acquired by Facebook for $1 billion in September, released a developer kit last year for a similar neural interface that uses dry electrodes. The armband captures signals from nerves, and its demos show off the company’s vision of making users more capable.
read more at wired.com