This Greg Egan short story is a useful intuition pump about the possibilities. Not recommended for children. Or before trying to sleep. https://philosophy.williams.edu/files/Egan-Learning-to-Be-Me...
It would be great if instead of "a clinical trial to demonstrate that the Link is safe and useful" we could have a clinical trial to determine whether or not it is.
> My parents were machines. My parents were gods. It was nothing special. I hated them.
and how the story mixes adolescence and feeling special with philosophical ramifications (hinting that focusing on the philosophical ramifications is just an adolescent attempt at feeling special?)
The narrator falls into the same trap at the end, assuming that he is the 1-in-a-million exception. He doesn't realize that everyone has the same experience, they just process it in a healthier way. AI Catcher in the Rye.
There's a week where the jewel and the brain are still paired, but the jewel is in control. The hospitals monitor that the two are similar to within tolerance, but somehow this jewel slips through the net. What makes you think there's more to it than the 'one in a million' explanation?
If you upvote this comment, people will see this spoiler warning before the spoilers.
Probably that would be beneficial to a substantial fraction of the people reading the thread.
> He is now able to control a cursor with his thoughts to browse the internet, play games, and continue his educational journey with greater independence.
Once reliable and cheap, the tangible difference this tech is going to make to people's lives is pretty wild.
Curious to know how accurate the cursor movements and clicks are. For example, here he is playing polytopia: https://www.youtube.com/watch?v=mgY70ZWCL1g
In Polytopia, a misclick can be about as frustrating/costly as a mouse slip in chess (when you move a piece to the wrong square by mistake).
[1] https://arstechnica.com/science/2024/05/neuralink-to-implant...
0: https://www.vox.com/future-perfect/2022/12/11/23500157/neura...
I also eat meat, it just seems a bit ironic to me.
Eating an animal at least ostensibly has positive value for the people doing so. However, there are plenty of forms of "animal testing" that confer zero positive value. For instance, testing the wrong compound or inserting the wrong implant confers zero benefit. Having improper controls, "testing" nonsensical theories, repeating stale results poorly, inadequate data collection, etc. are just a few ways a test procedure can be totally useless or even actively harmful.
This also ignores another aspect of animal testing: it is a dry run or rehearsal for the actual application. You do it right in animals so you are practiced at doing it right when you need to do it right in humans. "Oh yeah, we royally screwed up in every rehearsal, but we will nail it in production" is not an acceptable approach. You look at the care taken during rehearsals on less critical subjects to determine whether the procedure is adequate for more critical ones. A process that kills far more test subjects than others, or achieves middling results relative to resource expenditure, or treats subjects as disposable for "advancing science", is not a process fit for human subjects. Assuming ingrained cultural process deficiencies will magically disappear when the subjects change is foolish.
These are just some of the reasons why people eating a ridiculous number of animals does not and should not waive or invalidate concerns about animal testing procedure.
This is reductive and lacking any form of nuance. If I eat chicken, should I automatically be okay with heavily industrialized chicken farms, or even setting chickens alight for entertainment? Just because one evolved to be an omnivore doesn't mean one is okay with all forms of killing animals.
(Also of course a lot of the critics don't eat meat, and it's also true that the rest of us should stop, starting from factory farmed meat)
Why would you cease developing BCIs? It’s not ethical to force another sentient being into biological R&D on their own body. OTOH there’s no problem enrolling someone in a dangerous mission if they truly volunteer and get a benefit from it.
“telepathy” gtfo, they’re trying to give their brainchips marketing hype synonyms like how Altman calls ChatGPT AI when really, it’s not artificial intelligence, it’s just ML. But ML sounds a whole lot less exciting in the marketing pitch.
but it's not the AI that Altman is selling, ditto for ML
I mean, we all know helping the disabled is not the end-game objective of Neuralink. And right now, from a very cynical point of view, disabled people constitute a large reservoir of guinea pigs and free marketing for Neuralink.
I don’t know how much has been invested in R&D on Neuralink, but I doubt we have ever invested that much money in any other technology to provide autonomy to the disabled.
And it is not perfectly clear to me that, for the sole prospect of helping paralysed people, Neuralink is the best way to go. It sure is the one that looks the coolest, but it’s going to be very expensive, hard to fix when something goes wrong, and it is also hard to trust. Those issues do not seem to be avoidable.
Don’t get me wrong, I admire the huge QoL gain for the three patients. As individuals, they sure benefited from this. Idk if the same is true of the disabled as a social group.
Can you tell us more what you surmise we all think is the end-game objective?
Musk's original stated end-game objective is to give humans a chance against ASI by removing the biggest impediment humans have to communicating digitally: the keyboard.
This is hard to believe as the truth, as it is extremely short-sighted. If ASI can think 1000x faster than a human brain, and with much more intelligence, then what does giving humans even a 100x improvement in I/O achieve? Also, if ASI is achieved, then it will continue to self-improve. The meat brain is stuck at our current speed.
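The point above can be made with an Amdahl's-law style back-of-envelope. The numbers below (fraction of time spent on I/O, size of the speedup) are purely illustrative assumptions, not measurements: when only the I/O portion of a task is accelerated, the overall gain is capped by however much time is spent on thinking rather than on I/O.

```python
def overall_speedup(io_fraction: float, io_speedup: float) -> float:
    """Amdahl's law: overall task speedup when only the I/O
    fraction of the task is accelerated by io_speedup."""
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# Even under a generous assumption that half of a human's "task time"
# is spent on I/O (typing, reading), a 100x faster interface yields
# less than a 2x overall improvement:
print(overall_speedup(io_fraction=0.5, io_speedup=100))  # ~1.98
```

So even a 100x I/O improvement leaves the meat brain's thinking speed as the bottleneck, which is the crux of the short-sightedness argument.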
Please see my HN profile for a privacy rant about the downsides, which only assumes a read capability. Once a write capability is introduced, I mean you gotta be kidding me. Who should you trust with that power? The answer is no one.
Generally speaking, the demo is always about finding the green ball on top of a red cube, or the person who went missing in a land slide, but what sells it is detecting and aiming at the dissident hiding under a truck.
And isn't it weird how "think of the children" is always ridiculed but "think of the paralyzed etc." is just fine? I've seen it countless times over the last few decades. Just recently, when I said on here that I want "AI" art to be marked as "AI" made, someone claimed I don't care about people who have Parkinson's and can't hold a brush, but wouldn't answer why we can't mark it anyway. It's not the people with Parkinson's who want to pass off their creations as hand-made. They're just getting used.
Sure, paralyzed people would love to be able to control a cursor with their mind etc., but even more than that they don't want cuts to the social programs that enable them a dignified life beyond "making them as functional as a healthy person", cuts made to fund tax breaks for the super rich. They want friends who have time for them instead of working three jobs, that sort of thing. But Musk and his spiritual brethren are gleefully moving in the opposite direction, as fast and as ruthlessly as they can.
So I say this particular doctor is three butchers in a trench coat. I can't prove it, because I can't read minds, but nobody else can either, and this is the "bet" I'm going with. Vulnerable and sick people can only have things that would a.) help super rich people with the same conditions and b.) enable more persecution and exploitation, and an easier discard of undesirable, unproductive or rebellious members of society.
Corporate cyborg parts are an already-predicted nightmare, already taking place, unfolding in slow motion, and soon it will breach the sanctity of human thought.
That's not what is happening here. These tools (Neuralink and others) enable people who are disabled to participate more in society.
* Everyone hates Elon
* For most, this is enough to hate Neuralink.
* 15ish+% think that embedding stuff in your brain from any company is a bad idea(TM)
* 5-ish% think this is not worth working on at all, or not worth the animal / human research costs
* In-the-know folks point out that tech like this has been around for roughly 10 years, but research hasn’t progressed to the point where brain injury is no longer a major risk -> this is too early
I don’t read anything here about human autonomy; each of the guys written about has my utmost respect for not just committing suicide — they must be incredibly tough, persistent and positive humans, full stop. The idea that they can’t or shouldn’t be able to weigh the risks and benefits of tech like this feels infantilizing, in the worst way - infantilizing from people who have full mobility.
At any rate, I applaud a company trying to help people like this, EVEN IF their long term goal is an ad-supported BCI (although TBH Elon’s always had significantly better revenue ideas than ads), and I applaud the first few folks willing to risk their health to get access to a better life, and help people down the line from them.
For example, this article discusses medical implants. Safety of those is very important. When the owner of the company is actively dismantling oversight that ensures safety, this directly impacts whether we can trust this product.
I agree that HN should be mostly politically neutral, and for the most part it is. For topics involving Musk, however, one simply cannot ignore his problematic attitude towards anything that might inconvenience him.
This is a piece of marketing from a private company. It is a good thing that people raise criticism missing from it.
Until we have a solution to the problem that is Elon Musk, and potential future Elon Musks, this type of technology can only be a net negative to society.
I think basically all of the leaders who brought us the technology we are using today were cults of personality like this; we just forget about the ones who aren't contemporary. I have yet to see us grow without them.
This kind of behavior is not befitting of a company that will need to cultivate an incredible amount of trust from customers before they buy into the idea of a brain implant.
Elon is so effective as a leader he seems to break people’s brains. No other person could have started this company and had even half this success. There’s a reason all the most talented flock to his companies, despite “conventional wisdom” saying they shouldn’t. It takes a lot of self deception to ignore the reality that he obviously must be doing something right.
Well, you have hit the nail on the head. My misanthropic view is that most people are a deluded lot.
I'm sure there are. There may also be people with The Com background (https://cyberscoop.com/the-com-764-cybercrime-violent-crime-...) working on it too: https://krebsonsecurity.com/2025/02/teen-on-musks-doge-team-...
Yeah, better known as the DOGE guy, but he worked at Neuralink before that. Imagine the potential for abuse.
One of the great examples of this is the infamous "pedo guy" incident, in which he showed himself to be very unempathetic and petty the moment people dismissed him as he hastily attempted to insert himself into a tragic moment.
He's also regularly sued people exercising their free speech to comment on or criticise his financial interests, knowingly attempting to drown influential people he doesn't like in legal fees and frivolous lawsuits.
In the past he has participated in doxxing government employees who might cause him financial damage, often encouraging his followers to harass bureaucrats and lawyers who are just doing their legal jobs.
There are plenty of examples of Elon regularly bullying others who may not have access to the resources he does; it's not just limited to these few examples.
In my eyes, any measure of success or wealth will never excuse how a person conducts themselves in public. And I think Elon no longer thinks that the rules apply to him as so many are willing to overlook his behavior due to worshiping his money and influence. Elon's nazi salute is the perfect example of this.
So my original statement still holds. Neuralink has a very large mountain to climb when it comes to consumer trust. Products in the healthcare industry can massively impact people's lives, especially when they don't work as intended. Any company that participates in this space is morally and ethically required to be empathetic to the lives it impacts. And that level of empathy is not something I see coming from the man behind Neuralink, which I think should disqualify it as a company with the potential to impact a lot of people.
Those are just two of very many recent examples.
He has access to a lot of money, so maybe the people working on it should continue to work for him. Maybe he wants to charge an outrageous fee for it, but ultimately, at some point down the road, if he can do it, others will too, and it will be commonplace for those who need it, and probably commonplace for those who don't need it but want it.
I'm sure he wants to sell it to those who need it, but I don't think that this means he cares that much whether it's successful as a medical device. He generally cares whether some device appears to work well enough that he can sell it, especially to investors, and far less about whether it actually solves a problem/doesn't introduce worse problems.
Tesla FSD is the best example: something he's been selling for at least 7 years now without it actually working as advertised. The Cybertruck was sold long before it came out, and now they're producing only a trickle. The Roadster has been sold by the tens of thousands and isn't even in a design phase yet. The Solar Roof was presented to investors as a working product when it was a plastic mockup. There are probably others.
https://www.news-medical.net/news/20230824/Brain-computer-in...
https://news.brown.edu/articles/2012/05/braingate2
https://www.ahajournals.org/doi/10.1161/STROKEAHA.123.037719
If you google "BCI brain computer interface paralyzed" you will find a wealth of researchers and organizations working on it which are not Neuralink.
That seems pretty benign compared to what a neural implant could be made to do to someone.
Well of course the device doesn't have to be programmed to be controlled by the host, does it? Torture entirely by manipulating the compute substrate your mind runs on would be effective† and yet very easy to do... so this is in fact just another torture device.
† Effective in the sense that it would inflict needless misery on people, that's what torture is actually for, it's not an effective interrogation strategy and never has been.
Black Mirror: https://en.wikipedia.org/wiki/Men_Against_Fire
just as an example
I don't like Musk and I find Neuralink spooky in terms of their overall goals, but it's hard to deny how much this invention helps people.
The future of non-elites is unknown. But hopefully either the elites will be magnanimous, or non-elites will create new occupations that can both create wealth and not be performed by bots. Not sure what those new occupations will be, but human ingenuity is an incredible thing, especially if the system remains based on market capitalism. Because that will mean your rent and food depend on you coming up with something to do. I think people will think of something.
If not? Well, let's just say the future might not hold societies as pleasant for non-elites as the societies of today.
A Year of Rick Astley (hey it almost rhymes)
It's tragic in a way. If he had stuck to the same playbook as many other early tech billionaires, spending his life on investing, philanthropy, himself and family, the world would probably not have things like commonplace reusable rockets, widespread EV adoption or massive satellite constellations.
His willingness to pour money, and his ability to get others to pour their money, into various extremely risky ventures is what made all of that possible. Eventually it would have happened anyway, but probably much later.
But I suspect that the very same personality traits that enabled him to do this are responsible for his current state. Over the years he has lost his self-control, to the point that he looks almost childish. A handful of years ago, he opposed the people he now works with.
He's now undermining his own companies with his actions. Even people like Murdoch or Thiel look better in comparison. Not because of what they do, but because they are less visible.
Everything he has ever done will now be viewed in a much worse light. His reputation, sabotaged by the only person who could accomplish that feat: himself.
Viewed by whom? By you and a bunch of other neurotics that consumed too much CNN?