C. Elegans: The worm that no computer scientist can crack - https://news.ycombinator.com/item?id=43490290 - March 2025 (130 comments)
On one extreme, we cannot even solve the underlying physics equations for single atoms beyond hydrogen, let alone molecules, let alone complex proteins, etc. etc. all the way up to cells and neuron clusters. So that level of "good" seems enormously far off.
On the other hand, there are lots of useful approximations to be made.
If it looks like a duck and quacks like a duck, is it a duck?
If it squidges like a nematode and squirms like a nematode, is it a [simulation of a] nematode?
(if it talks like a human and makes up answers like a human, is it a human? ;)
ISTM that the answer is "in a way yes, in a way no".
Yes, in that we reasonably conclude something is a duck if it seems like a duck.
No, in that seeming like a duck is not a cause of its being a duck (rather, it's the other way round).
When we want to figure out what something is, we reason from effect to cause. We know this thing is a duck because it waddles, quacks, lays eggs, etc etc. We figure out everything in reality this way. We know what a thing is by means of its behavior.
But ontologically -- ie outside our minds -- the opposite is happening from how we reason. Something waddles, quacks & lays eggs because it is a duck. Our reason goes from effect (the duck's behavior) to cause (the duck), but reality goes in the other direction.
Our reasoning (unlike reality) can be mistaken. We might be mistaking the model of a duck or a robot-duck for a real duck. But it doesn't follow from this that a model duck or a robot-duck is a duck. It just means a different cause is producing [some of] the same effects. This is true no matter how realistic the robot-duck is.
So we may (may!) be able to theoretically simulate a nematode, though the difficulty level must be astronomical, but that doesn't mean we've thereby created a nematode. This seems to be the case for attempting to simulate anything.
At least this is my understanding, I could be mistaken somewhere.
I think this is also one possible answer to the famous 'zombie' question.
> Something waddles, quacks & lays eggs because it is a duck.
Or: something does those things, period. We notice several such somethings doing similar things, and come up with an umbrella term for them, for our own convenience: "duck." I'm not sure how far different that is from "is a duck", but it feels like a nonzero amount.
I guess where I'm going is: our labels for things are different from the "is-ness" of those things. Really, duck A and duck B are distinct from each other in many ways, and to call them by one name is in itself a coarse approximation.
So if "duckness" is a label that is purely derived from our observations, and separate from the true nature of the thing that waddles and quacks, then does some other thing (the robot duck) which also produces the same observations, also win the label?
Luckily, I'm a solipsist, so I don't have to worry about other things actually existing. Phew.
No, if it doesn't do everything else a duck does. You can have a robot dog, but you won't need to take it to the vet, feed it, sweep up its hair, let it go outside to go potty, put up a warning sign for the mailman, or take it for a walk. You can have a simulated dog do all those things, but then how accurate will the biological functions be in trying to model its physiology over time?
Will it give us insights into real dog psychology so we can better interact with our pets? Or does that need to happen with real dogs and real human researchers? Wildlife biologists aren't going to refer to simulated ducks to research their behavior in more depth. They'll go out and observe them, or bring them into the lab.
Not to say I'm fully convinced, but I can see the appeal.
What I'm trying to say is: as long as the simulation fulfils the objectives set out, it's useful, even if it is very far from the real thing.
Then the next question is: what are the objectives here?
I'm pretty sure behavior is simulated all the time in everything from migration to predator prey dynamics, to population dynamics, and so on. If we don't use simulations to understand all the little nuances and idiosyncrasies of behavior right now that's probably just because at present that's extremely difficult to model. But I suspect they absolutely would be used if such things were available. Of course, they would be treated as complementary to other forms of data, but wouldn't be disregarded outright.
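Population-level behavior of this sort really is simulated all the time without modeling any individual animal. A minimal sketch of the classic Lotka-Volterra predator-prey model (the `simulate` helper and all parameter values here are my own illustration, not from any real ecology codebase):

```python
# Minimal Lotka-Volterra predator-prey sketch: forward-Euler integration of
#   dx/dt = a*x - b*x*y   (prey)
#   dy/dt = d*x*y - c*y   (predators)
# No individual animal is modeled; parameters are illustrative only.

def simulate(prey=40.0, pred=9.0, steps=1000, dt=0.01,
             a=1.1, b=0.4, c=0.4, d=0.1):
    history = []
    for _ in range(steps):
        dx = (a * prey - b * prey * pred) * dt
        dy = (d * prey * pred - c * pred) * dt
        # clamp at zero: populations can't go negative
        prey, pred = max(prey + dx, 0.0), max(pred + dy, 0.0)
        history.append((prey, pred))
    return history

traj = simulate()
print(f"final prey={traj[-1][0]:.1f}, predators={traj[-1][1]:.1f}")
```

The point is the same as below with traffic: useful dynamics emerge from a model that ignores nearly everything about the underlying organisms.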
Is there truly so little that makes up the soul of a duck? No mention of laying eggs? Caring for its young? Viciously chasing children across the lawn of the local park? (I know that's usually the purview of geese, but I have seen ducks launch the occasional offensive against too-curious little ones.)
For example you can simulate traffic without simulating the inner workings of every car's engine, or even understanding how the engine works.
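Classic traffic models do exactly this: the Nagel-Schreckenberg cellular automaton reduces each car to a position and a speed on a ring road. A toy sketch in that spirit (the `step` function and all parameters are my own illustration):

```python
import random

# Toy single-lane traffic model in the spirit of Nagel-Schreckenberg:
# cars are (position, speed) pairs on a ring road. No engines, no drivers.

def step(positions, speeds, road_len=100, vmax=5, p_slow=0.3):
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos, new_spd = list(positions), list(speeds)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, vmax, gap)       # accelerate, but don't crash
        if v > 0 and random.random() < p_slow:  # random driver slowdown
            v -= 1
        new_spd[i] = v
        new_pos[i] = (positions[i] + v) % road_len
    return new_pos, new_spd

random.seed(0)
pos = random.sample(range(100), 20)  # 20 cars at distinct random cells
spd = [0] * 20
for _ in range(50):
    pos, spd = step(pos, spd)
print("mean speed:", sum(spd) / len(spd))
```

Even this crude model reproduces phantom traffic jams, with zero knowledge of what happens under any hood.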
Or maybe by "working understanding" you mean "we have a black box that does the thing we wanted."
https://en.wikipedia.org/wiki/Philosophical_zombie
We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe. Maybe we can create a virtual brain that, from the outside, is indistinguishable from a physical brain, and which will argue vociferously that it is a real person, and yet experiences no more conscious qualia than an equation written on a piece of paper.
I don't understand this argument. How is the computer running the computation not part of the "physical substrate of the universe"? _Everything_ is part of the universe almost by definition.
The answer to that would appear to be, no.
Basically, even if it's a simple computation engine, can we put that simulation through the stimuli our brain experiences (not easy), and will the lack of those stimuli turn it into an entirely differently behaving system?
Meanwhile, the most advanced simulations are still rough approximations with little to no realism, rather than "under these specific conditions, and with this specific neural arrangement I made artificially, it behaves similarly to a real nematode". That's a good way to make a self-fulfilling prophecy.
Ah, so this is where 45% of my salary goes.
https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/...
https://www.abc.net.au/news/science/2025-03-05/cortical-labs...
In short - we are a long way from being able to simulate a nervous system. Our knowledge of neuronal biochemistry is not there yet.
So many philosophical, ethical and legal questions. And unsettling possibilities.
We will probably have to deal with this someday.
This is quite an extraordinary claim with no extraordinary evidence.
As said elsewhere in this thread, we cannot at this moment even simulate single atoms.
I see no reason to believe at all that we will ever be able to simulate a human brain.
Unless you want my simulation here:
    if ishungry:
        Eat(FindFood())
    else:
        PracticeFingersnowboarding()

That alone wouldn't be enough to fully clone a person's consciousness. There is information stored in the actively firing synapses. For example, short-term memory seems to be stored by sending signals in a loop, and there might be more such mechanisms. Those signals are obviously lost once the brain is dead. Another issue is hormones: the same brain regulated by a different (simulated) body might behave completely differently. And then there are probably a lot of unknown unknowns. Despite decades of research there are still a lot of open questions, and more will become apparent once we actually start simulating complex brains.
But that doesn't mean that those early methods wouldn't be useful, both for science and for more questionable efforts. For example, accessing the long-term memory of a recently deceased person might be comparatively viable, given enough funding.
1: https://edition.cnn.com/2024/05/15/world/human-brain-map-har...
I suspect this wasn't your intention, but I feel this heavily undersells how much work is involved in "scaling up" to simulating a human brain. I wouldn't even say that it is inevitable, because there are so many unsolved questions and unknown-unknowns.
There are decades of research behind this, and we are still an unknown and large number of years away from doing it. Fusion power is more tractable than this.
It's not even clear whether our current approach to computation will ever be able to do this. We might need completely novel types of computers, maybe organic-machine hybrids.
I'm not even touching on the very real and serious ethical questions of simulating human level consciousnesses.
To wit, no one expects human brains to be capable of arbitrarily complex computation.
I'm actually happy it's a long way off. Feels like the richer humans would live with cheat codes, and the others wouldn't.
Ego death is a brutal suboptimum. It's tragic that any entity brought into and knowing of its own existence has to die and be forever annihilated.
If humanity has only one goal, and that goal was to achieve immortality for all humans henceforth [1], that would be a noble cause for our species.
I hate that those I care about will cease to exist.
Fuck death.
[1] Maybe we get lucky and they master physics, reverse the lightcone, and they pull each of us out of the ether of time with perfect memories to join them. Sign me up. I consent.
Just like we can't really predict weather (as another complex system) too far ahead, we can't really predict how something this significant changes brain development — IMHO at least.
If there's consciousness after death (in whatever form), then it is clearly not the end, just a part of a much longer - possibly infinite - journey. Even better!
In either case: it's better to stop worrying about what may come after and enjoy the journey to the fullest!
I kindly disagree :-). I think I'd rather not be immortal but live in a world with nature and animals than be immortal in a jar. Right now we don't manage to be immortal, and we are driving the animals extinct... the worst of both worlds?
Nearly all mystics (and many if not most neuroscientists) also come to the conclusion that our world of the senses is an illusion. This doesn't mean that the illusion doesn't have rigid laws, but it does challenge the materialistic assumption that the soul, or consciousness, becomes nothing at the time of physical/biological death.
If that is too fuzzy and mystical, I'd also suggest reflecting more deeply on the concept of technologically facilitated immortality of physical life on earth. For me, it is clearly a dead end. It can only lead to a complete annihilation of every human value.
I need to croak so that there's room in the world for my great-grandchildren.
>If humanity has only one goal,
Humanity pursues, best that I can tell, extinction instead of immortality. It has this really weird premature transcendence hangup.
Except that's not the goal and never will be the goal. If some immortality technology is ever created, it won't be for all. The Elon Musks, Sam Altmans, and Donald Trumps of the world will live forever. You will die.
> I hate that those I care about will cease to exist.
> Fuck death.
There's a much simpler and more achievable solution to that problem: change your belief system.