Presumably you're assuming that if the information is there at all - if the necessary data hasn't been scrambled beyond the noise floor of the scrambling process - then there's something for magic (because you're really talking about magic here) to work with.
So, please (a) set out your claim with precision, and (b) back it up:
* What is the information you need to recover?
* To what degree is it scrambled?
* What of it is scrambled below the noise floor of the process?
* How do you know all this? (Wrong answer: "here's a LessWrong/Alcor page." Right answer: "here's something from a relevant neuroscientist.")
For comparison: even a nigh-magical superintelligent AI can't recover an ice sculpture from the bucket of water it's melted into. It is in fact possible to just lose information. So, since you're making this claim, I'd like you to quantify just what you think the damage actually is.
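To put a rough number on "lose information", here's a back-of-envelope in Python (standard textbook constants; the 1 kg sculpture is my own made-up example). The entropy generated just by melting, converted into bits, dwarfs any plausible description of the sculpture's shape:

    import math

    # Entropy generated by melting 1 kg of ice at 0 C, expressed in bits.
    L_FUSION = 334e3    # J/kg, latent heat of fusion of water ice
    T_MELT = 273.15     # K, melting point
    K_B = 1.380649e-23  # J/K, Boltzmann constant

    delta_S = L_FUSION / T_MELT           # dS = Q/T, in J/K
    bits = delta_S / (K_B * math.log(2))  # one bit = k_B * ln(2) of entropy

    print(f"entropy of melting: {delta_S:.0f} J/K")
    print(f"equivalent information: {bits:.2e} bits")
    # ~1.3e26 bits of fresh microstate uncertainty; a fine 3D scan of the
    # sculpture's shape is maybe ~1e10 bits, some 16 orders of magnitude less.

That gap is what "scrambled beyond the noise floor" means in practice.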
In that sense, 'a sufficiently advanced AI' is not magic, because when people say it they do have something definite in mind - at least the people I often discuss this with do.
In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it has collapsed by ray-tracing those molecules back through their collisions with each other and with the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible - if only you can look deep/far/fast enough.
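To make "ray tracing back" concrete, here's a toy sketch (entirely my own illustration, not a brain model): the baker's map is a standard reversible-but-mixing system, so with the exact final state you really can run it backwards, but any finite measurement precision gets amplified on every backward step:

    import math

    # Baker's map: reversible, but it mixes. Run forward, "measure" the
    # final state with finite precision, then run the exact inverse map.
    def forward(x, y):
        # stretch in x, compress in y
        return (2 * x) % 1.0, (y + math.floor(2 * x)) / 2

    def inverse(x, y):
        # exact inverse of forward()
        return (x + math.floor(2 * y)) / 2, (2 * y) % 1.0

    x0, y0 = 0.314159265358979, 0.271828182845904
    steps = 40

    x, y = x0, y0
    for _ in range(steps):
        x, y = forward(x, y)

    # "measure" to 12 decimal places -- absurdly good by lab standards
    x, y = round(x, 12), round(y, 12)
    for _ in range(steps):
        x, y = inverse(x, y)

    print(f"true start:          ({x0:.12f}, {y0:.12f})")
    print(f"reconstructed start: ({x:.12f}, {y:.12f})")
    # The rounding error roughly doubles per backward step (2**40 ~ 1e12),
    # so the reconstruction misses badly despite perfectly reversible laws.

So the IF isn't about the laws being reversible; it's about how many digits of the state you can actually measure.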
If you want to be theoretical about it, then yes: there is probably an upper bound on how smart/big an AI mind can possibly be, and thus a limit on how much information it can extract from arbitrary systems. So I agree with your assertion that there is information that even the smartest of all AIs cannot possibly reconstruct, but I'm not sure that the brain is such a structure.
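For a taste of what such a bound might look like (a hedged back-of-envelope, not a claim about any real design): Bremermann's limit puts the maximum computation rate of matter at roughly c^2/h bits per second per kilogram:

    # Bremermann's limit: ~c^2/h bits per second per kilogram of matter.
    C = 2.998e8    # m/s, speed of light
    H = 6.626e-34  # J*s, Planck constant

    limit_per_kg = C**2 / H  # ~1.36e50 bits/s/kg
    earth_mass = 5.97e24     # kg
    print(f"per kg:             {limit_per_kg:.2e} bits/s")
    print(f"Earth-mass machine: {limit_per_kg * earth_mass:.2e} bits/s")
    # Astronomically large, but finite -- which is all the argument needs.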
Any justification about why/how 'a sufficiently advanced AI' could come about is more questionable.
Many knowledgeable people are making guesses based on our current understanding of intelligence/computation/AI, and then extrapolating. The paradoxical thing is that AI-doomsday speakers tell us, on the one hand, not to anthropomorphise an AI's motives (for good reasons), but on the other hand apply human reasoning/understanding to predict such machines/patterns.
This is probably not quite at the requested standard of backing up a claim, and sounds very like "but you can't prove it isn't true!" But I'm not the one making a claim.
In any case, please back up your claim. What is "the absolute sense"? How does it differ from "in a practical sense", with examples?
> In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it has collapsed by ray-tracing those molecules back through their collisions with each other and with the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible - if only you can look deep/far/fast enough.
Noise floor. In this case, thermal noise.
Also, you literally can't know that much about all the molecules in your puddle of goo. (Heisenberg.) We do not live in a Newtonian universe.
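A rough sense of scale (my own back-of-envelope; the 0.1 nm localization target is my assumption): pin a single water molecule down to atomic precision and the uncertainty principle already forces a velocity fuzz that's a noticeable fraction of its thermal speed:

    import math

    HBAR = 1.054571817e-34       # J*s, reduced Planck constant
    K_B = 1.380649e-23           # J/K, Boltzmann constant
    M_H2O = 18 * 1.66053907e-27  # kg, mass of a water molecule
    DX = 1e-10                   # m, desired position precision (~one atom)
    T = 300.0                    # K, roughly room/body temperature

    dp = HBAR / (2 * DX)  # minimum momentum uncertainty: dx * dp >= hbar/2
    dv = dp / M_H2O       # corresponding velocity uncertainty
    v_thermal = math.sqrt(K_B * T / M_H2O)  # 1D thermal velocity scale

    print(f"velocity uncertainty: {dv:.1f} m/s")       # ~18 m/s
    print(f"thermal velocity:     {v_thermal:.1f} m/s")  # ~370 m/s
    # Per molecule, per measurement -- and back-tracing collisions needs
    # both position and velocity far more precisely than that.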
http://phys.org/news/2014-09-entropy-black-holes.html
http://phys.org/news/2014-09-black-hole-thermodynamics.html
Ben Crowell, PhD in physics:
http://physics.stackexchange.com/questions/83731/entropy-inc...
The reason I didn't/don't back up those claims is that I'm really not knowledgeable about those subjects. I'm not sure what the good sources are or how legitimate they are, but I did read about this at some point, even if I can't interpret the technical jargon or add much nuance to my claim, given my shallow understanding of the subject.
A bit of googling for "can information be lost" or "conservation of information" turns up the articles I linked to above.
But you have dodged my rebuttal of your initial claim, the one I was really responding to: that 'sufficiently advanced AI' is not just a stopgap word for magic. In this case it doesn't stand for "I don't know how or why, but this and this"; it stands for "I don't know why (in the motivational sense), but given a bigger brain one can use and interpret finer instruments, which in turn enables us to extrapolate further back in time".
Quantum information is, in some decently well-regarded theories, a conserved quantity. Classical information is not, in any major theory.
> In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it has collapsed by ray-tracing those molecules back through their collisions with each other and with the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible - if only you can look deep/far/fast enough.
Again: classical information is not a conserved quantity. A puddle of brain-goo will likely tell you more than a puddle of non-brain goo about the person who used to be that brain, but there is a very strong limit to what it can tell you. You cannot, so to speak, extrapolate the universe from one small piece of fairy cake.
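The cheapest possible illustration of non-conservation (my toy, not a physics model): any many-to-one step erases the distinction between inputs, and nothing downstream can recover it:

    # A many-to-one update erases classical information: once two
    # histories land on the same state, no later observation separates them.
    def melt(state):
        # everything above the threshold collapses to the same "goo"
        return min(state, 1)

    history_a = [5, melt(5)]  # sculpture A -> goo
    history_b = [9, melt(9)]  # sculpture B -> goo
    print(history_a[1] == history_b[1])  # True: the distinction is gone
    # Reversible microphysics only saves you if you track *every* degree
    # of freedom, including the heat radiated away -- which is precisely
    # the part that sits below the thermal noise floor.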
(Disclaimer: I have previously donated to the Brain Preservation Foundation precisely because I think the issue deserves investigation by mainstream, non-wishful scientists so that people who want to... whatever it is they're planning on, can do it.)
> Many knowledgeable people are making guesses based on our current understanding of intelligence/computation/AI, and then extrapolating. The paradoxical thing is that AI-doomsday speakers tell us, on the one hand, not to anthropomorphise an AI's motives (for good reasons), but on the other hand apply human reasoning/understanding to predict such machines/patterns.
The thing about "sufficiently advanced AI" is that it dodges the basic issues. A sufficiently advanced AI is just a machine for crunching data into generalized theories. It can only learn theories in the presence of data. Admittedly, the more data it gets from a broader variety of domains, the more it can form abstract theories that give usefully informed prior knowledge about what to expect in new domains. But if it can use detailed knowledge about brains-in-general to reconstruct a puddle of brain-goo into a solid model of a human brain, solid enough to "make it live", that's not because of some ontologically basic "smartness" of the AI; it's because the AI has the right kind of learning machinery for crunching data about specific and general things together, which lets it accumulate and use very large sums of domain knowledge. These sums could possibly be larger than any individual human might obtain in the course of a single 20-year education, from kindergarten to PhD, but the key factor in the AI's understanding of the natural sciences will ultimately be experimental data, and domain knowledge derived from experimental data.
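To put a toy under that last point (my sketch, with a simple Bayesian coin learner standing in for "AI"): with no data the posterior is just the prior, however much compute you spend on the update; only observations narrow it:

    import math

    # Beta(a, b) prior over a coin's bias; each flip updates it.
    def posterior(prior_a, prior_b, heads, tails):
        a, b = prior_a + heads, prior_b + tails
        mean = a / (a + b)
        sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        return mean, sd

    print(posterior(1, 1, 0, 0))      # no data: mean 0.50, sd ~0.29 (the prior)
    print(posterior(1, 1, 7, 3))      # 10 flips: starting to narrow
    print(posterior(1, 1, 700, 300))  # 1000 flips: sharp estimate near 0.7
    # A "smarter" learner computes the same posterior faster or with better
    # priors; what it can actually know is still bounded by the data.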