It brings me comfort to know that such a fallback will eventually exist, should I need one.
Note that specialists are saying that another promising drug from Scholar Rock [3] would probably prevent any further weakening if used in conjunction with my current treatment. Unfortunately, the FDA takes a long time to approve new medications, and I've heard this one is a particularly tricky case because of its potential for abuse by athletes.
[1]: https://en.wikipedia.org/wiki/Spinal_muscular_atrophy
[2]: https://en.wikipedia.org/wiki/Nusinersen
[3]: https://scholarrock.com/our-pipeline/spinal-muscular-atrophy...
Strangely, that simple example was the most powerful part for me. I've done that so many times and it was such a fun experience. Now he gets to re-live that joy (and follow-up shame) again!
- These types of BCI are effectively an array of switches. You typically map a motor thought, e.g. "move your arm up" => moving the cursor up. This may then be how you control a game such as chess, if it has keyboard shortcuts. Eye movement could be handled the same way, but there are easier methods. Interestingly, you don't really need an intracortical BCI to measure these motor commands; you can do it with surface EEG. Sticking it inside your head, closer to the centres where you can measure intentional thought, makes the signal cleaner and more reliable.
- The big breakthrough is really making this intracortical stuff safe and long-term. It's getting there. But this isn't it.
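The "array of switches" idea above can be sketched in a few lines: each decoded motor intent (the labels here are hypothetical, purely for illustration) acts like one key in a keyboard-shortcut layer driving the cursor.

```python
# Toy sketch of a switch-array BCI: each decoded motor "intent" label
# (hypothetical names) maps to a fixed cursor delta, like a keyboard layer.
INTENT_TO_DELTA = {
    "imagine_arm_up":    (0, -1),   # screen y grows downward
    "imagine_arm_down":  (0, 1),
    "imagine_arm_left":  (-1, 0),
    "imagine_arm_right": (1, 0),
}

def step_cursor(pos, intent):
    """Apply one decoded intent to the cursor position; unknown intents are no-ops."""
    dx, dy = INTENT_TO_DELTA.get(intent, (0, 0))
    return (pos[0] + dx, pos[1] + dy)

pos = (10, 10)
for intent in ["imagine_arm_up", "imagine_arm_up", "imagine_arm_right"]:
    pos = step_cursor(pos, intent)
print(pos)  # (11, 8)
```

The same mapping works whether the switch events come from surface EEG or an intracortical array; the implant just makes the classification of each intent cleaner.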
The big wins out there are in speech BCI. That's hardcore. Even in the two main studies doing this, each participant requires a LOT of training time to make the machine-learning model work efficiently.
Apple has done great stuff with eye tracking on Vision Pro, but it required completely rewriting the UI for literally everything. Not something we have the luxury of doing for accessibility for quadriplegics.
Source: built an eye tracker and eye-controlled UI at a startup and got acquired by Google
I'm not sure I can believe this: "This system is likely already working better than an eye tracker would for cursor control" - the training to use this stuff isn't just 'magic'. I do agree, though, with "and it will certainly improve" - yeah, iterate... iterate...
But sure - "the potential" is the key thing in all this. It's just that the cost-to-benefit ratio is pretty dramatic right now.
He appears to just think about where the mouse should go and then be able to click and click-and-hold. Seems like multiple inputs, which an eye tracker wouldn't do. Unless maybe it's just configured to click when the cursor pauses on a spot?
Also, the user experience seems better than attaching electrodes to your head. It seems to just work wirelessly: it's always there, and he occasionally has to recharge it.
Yeah - this is wireless. Better than some systems, which have been, no joke, a box on top of your head with an HDMI cable in it.
> He appears to just think about where the mouse should go and then be able to click and click-and-hold. Seems like multiple inputs, which an eye tracker wouldn't do. Unless maybe it's just configured to click when the cursor pauses on a spot?
This is really the key question. The dwell technique you note is what most eye trackers do (although it's far better to use a binary signal, e.g. a blink, to select, because of the Midas touch effect). It's built into Windows/macOS and iOS now. I have a sneaky feeling the reason it's chess is that you can encode the positions: "X1 to Y2". You could then use a transformer model to decode intentional speech..
If that is the case, then if a person can actually speak, what's the benefit right now for this individual? (Yes - that he doesn't have to say it. BUT sub-vocal speech is already achievable without invasive surgery..)
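For reference, the dwell technique mentioned above is simple enough to sketch: a click fires once the cursor has stayed inside a small radius for a fixed time. The class name and thresholds here are illustrative, not from any shipping implementation.

```python
import math

DWELL_SECONDS = 1.0   # illustrative dwell threshold
RADIUS_PX = 20.0      # how far the cursor may wander and still count as "dwelling"

class DwellClicker:
    """Fire a click when the cursor stays within RADIUS_PX for DWELL_SECONDS."""
    def __init__(self):
        self.anchor = None     # (x, y) where the current dwell started
        self.start_t = None

    def update(self, x, y, t):
        """Feed one cursor sample; returns True exactly when a dwell-click fires."""
        if self.anchor is None or math.dist(self.anchor, (x, y)) > RADIUS_PX:
            self.anchor, self.start_t = (x, y), t   # cursor moved: restart the dwell
            return False
        if t - self.start_t >= DWELL_SECONDS:
            self.anchor, self.start_t = None, None  # reset so we don't re-fire
            return True
        return False

clicker = DwellClicker()
samples = [(100, 100, 0.0), (103, 101, 0.5), (102, 99, 1.1), (400, 300, 1.2)]
print([clicker.update(x, y, t) for x, y, t in samples])  # [False, False, True, False]
```

The Midas touch problem is visible right in the design: anything you look at long enough gets clicked, which is why a separate binary select signal (a blink, or a decoded click intent) is preferable.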
Further clarification: when doing conventional EEG, the signal quality is so fragile that even blinking can produce recording artifacts.
Also, there's the whole "put on a shower cap with conductive gel" thing that makes it very impractical for everyday use.
You are correct that one could add more inputs, but that only works if you can use the inputs. The individual in the video has full control of his head, which many people do not. All I can do, for example, is use like two fingers.
Look at the Starship program for an example of where you can get in 20 iterations.
[1] https://www.washingtonpost.com/video-games/2020/12/16/brain-...
[2015] https://www.smithsonianmag.com/smart-news/paralyzed-woman-op...
[2013] With a non-invasive interface: https://iopscience.iop.org/article/10.1088/1741-2560/10/4/04...
[2011] https://journals.plos.org/plosone/article?id=10.1371/journal...
It will be quite a while before we know if they've managed to mitigate the issues enough to last a lifetime (and support upgrades), but it's better than previous devices.
Personally, I think the most exciting part of Neuralink and other companies working on BCIs is the fact that they're trying to keep these implants in long-term, and scale the deployment significantly. Most academic BCI research thus far has just been trials, without patients getting to keep the implants long term.
Still early days for this tech but it seems impressive.
Civ 6 is drivable completely with the mouse, and other than editing gold amounts in trade offers a little more quickly there’s not much reason to use anything but a mouse for it.
Neuralink has now achieved product market fit
Factorio on the other hand took a while for me to start liking but that’s because it has a huge learning curve. I eventually got into it but it’s not something I crave playing.
I suppose we could just jack in directly, though I really don't want surgery latching onto my optic nerve.
[0] https://www.newscientist.com/article/dn9633-calculating-the-....
Tiny spoiler warning I guess though not really, it’s just background world building that was used as motivation for side character’s growth. In the book, there was a Hitler-esque villain who existed long before the characters were born. The villain killed many billions of people. But through cloning, the societies of Earth punish this villain for their entire life by feeding them torturous scenarios through their brain implants. These were scenarios like being chased and eaten by a tiger, running naked through a frozen tundra, execution, etc.
The clone thought it was entirely real because it was all in their brain implant, even though they were safe in a jail cell. And as an extra Black Mirror-y twist, anyone in that society could tune in with their own implant to watch the clone being tortured.
I’m not really trying to cast doom and gloom on this brain implant tech, I think it’s neat. I was just reminded of the book I read when you mentioned simulating tactile impressions and virtual worlds. Pleasant simulations would be great, but even “benignly” scary ones like a virtual haunted house in your brain could be terrifying. (As someone who hates haunted houses.)
Seems pretty obvious to me that these ideas originated long ago in the scientific world and were (beautifully) expanded upon by science fiction authors (again, many decades ago).
Such online training might be necessary to deal with brain plasticity - i.e. the optimal set of neurons to read to determine X/Y mouse movement right now might not be the same set it was an hour ago.
Such plasticity can be seen in regular humans too when they say 'whoa, I haven't used a pen for months - let me get used to writing again!'.
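One common way to handle that kind of drift (a simplified sketch, not necessarily what Neuralink actually does) is to keep updating a linear decoder online with small gradient steps, so the weights track whichever channels currently carry the movement signal. Here the "true" mapping is simulated and its informative channels shuffle mid-session to mimic plasticity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 32

# Hypothetical "true" mapping from neural features to (dx, dy) cursor velocity.
W_true = rng.normal(size=(2, n_channels))

# Online linear decoder: start from zero and adapt with small LMS updates.
W = np.zeros((2, n_channels))
lr = 0.01

for step in range(2000):
    if step == 1000:
        # Plasticity event: the informative channels shuffle mid-session.
        W_true = W_true[:, rng.permutation(n_channels)]
    x = rng.normal(size=n_channels)          # one frame of neural features
    target = W_true @ x                      # intended cursor velocity
    pred = W @ x
    W += lr * np.outer(target - pred, x)     # nudge weights toward current mapping

# After adapting through the drift, predictions track the new mapping again.
x = rng.normal(size=n_channels)
err = np.linalg.norm(W @ x - W_true @ x)
print(err < 1.0)  # True: the decoder has re-converged after the shuffle
```

In a real system the "target" would come from calibration tasks or from inferring user intent, but the principle is the same: keep the decoder plastic because the brain is.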
But giving it the benefit of the doubt, this looks mind-blowing to me!
It was kind of known for a while already that the research and tech were almost there, but seeing the demonstration live like that - incredible!
But then it takes me back to those Musk companies - maybe it's just already-available research, repackaged and presented in a nice way, making us believe it could be 'deployed' in the real world, while in reality it can only be done in a very controlled environment. And we are led to believe that we are '2 weeks away' from it being widely available. I hope we're wrong here.
https://www.pcmag.com/news/tesla-faked-autopilot-video-engin...
While not a Musk company, it is worth remembering that Theranos demos were convincingly faked as well. I don't know if that is happening here, but it does happen.
The mouse seems to move very nicely and smoothly (60 FPS?), which presumably means the neural net converting raw sensor data into mouse movements runs in ~15 milliseconds.
Most neural nets don't do a forward pass in 15ms unless they're either tiny or the GPU is very powerful.
Let's see, I think after that, the next product should be Magic Missile. Or maybe Sanctuary?
But let's see, we are really at the beginning.
IIRC they are doing that test in pigs right now.
They are solving the easiest things first, of course.
https://www.nature.com/articles/s41586-023-06094-5
https://www.cea.fr/english/Pages/News/brain-computer-interfa...