The brain implements some kind of fairly general learning algorithm, clearly. There's too little data in the DNA to wire up 90 billion neurons the way we can just paste 90 billion weights into a GPU over a fiber-optic strand. But there's a lot of innate scaffolding that shapes how the brain learns. Things like the bouba/kiki effect, instincts, all the innate little quirks and biases - they add up to something very important.
For example, we know from neuroscience that humans implement something not unlike curriculum learning - and a more elaborate version of it than what we use for LLMs now. See: sensitive periods. Or don't see sensitive periods - because if you were born blind but somehow regained vision in adulthood, it would never work quite right. You had a window in which to learn to use your eyes well, and you missed it.
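To make the analogy concrete, here's a toy sketch - my own illustration, not anything from neuroscience or from an actual LLM pipeline. A curriculum with hard stage boundaries is a crude model of sensitive periods: easy examples come first, and once a stage closes, its difficulty level never reappears.

```python
import random

def make_example(difficulty, rng):
    # toy task: predict the sum of `difficulty` random digits
    xs = [rng.randint(0, 9) for _ in range(difficulty)]
    return xs, sum(xs)

def curriculum(stages, per_stage, seed=0):
    """Yield examples strictly easy-to-hard. Each stage is a crude
    'sensitive period': once it closes, its difficulty never comes back."""
    rng = random.Random(seed)
    for difficulty in stages:
        for _ in range(per_stage):
            yield make_example(difficulty, rng)

# easy-to-hard stream: 3 examples each at difficulty 1, 2, 4, 8
data = list(curriculum(stages=[1, 2, 4, 8], per_stage=3))
```

The hard cutoffs are the point of the sketch; real sensitive periods close gradually, and real LLM curricula mostly just reorder or reweight data.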
Also, I do think that "clever initialization" is unfortunately quite plausible. Unfortunately - because yes, any such initialization has to be simple enough to be generated by something like a cellular automaton, and yet we still don't have it: the search space of all possible initializations a brain could implement is extremely vast, and we're extremely dumb. Plausible - because of papers like this one: https://arxiv.org/abs/2506.20057
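On "simple enough to be generated by something like a cellular automaton": here's a hedged illustration (mine, not anything from that paper) that a tiny program really can emit a structured, non-random weight matrix. Each row of the init is one step of the Rule 110 automaton, with cells mapped to small positive/negative values:

```python
def rule110_row(row):
    # one step of the Rule 110 cellular automaton (periodic boundary)
    table = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
             (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
    n = len(row)
    return [table[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

def ca_init(rows, cols, seed_pos=0):
    """Generate a rows x cols weight matrix from a tiny program:
    each row is one CA step, cells mapped to +/-0.1."""
    row = [0] * cols
    row[seed_pos] = 1
    out = []
    for _ in range(rows):
        out.append([0.1 if c else -0.1 for c in row])
        row = rule110_row(row)
    return out

W = ca_init(4, 8)
```

The whole "program" is a 16-entry lookup table plus a seed position - vastly less information than the matrix it unfolds into, which is exactly the DNA-to-connectome compression argument in miniature.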
If we can get an LLM to converge faster by "pre-pre-training" it on huge amounts of purely synthetic, algorithmically generated, meaningless data? Then what are the limits of methods like that?
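I haven't reproduced that paper's exact setup, but the flavor of "meaningless but structured" data is easy to sketch. Here's my own illustration, assuming nothing about the paper's actual generator: sequences of nested brackets have hierarchy and long-range dependencies with zero real-world content, and a corpus of them could serve as pre-pre-training material:

```python
import random

def brackets(depth, rng):
    """One synthetic sequence: nested brackets. Pure structure
    (hierarchy, long-range matching), zero real-world meaning."""
    if depth == 0 or rng.random() < 0.3:
        return ""
    return "(" + brackets(depth - 1, rng) + ")" + brackets(depth - 1, rng)

rng = random.Random(0)
corpus = [brackets(6, rng) for _ in range(1000)]
# a real run would tokenize these and do a short pre-training pass
# on them before switching to natural-language data
```

Every string is balanced by construction, so a model trained on this corpus is forced to track nesting depth over long distances - a capability that plausibly transfers to real text, which is the bet such methods make.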