In the current state, all of the aspects of what we might consider a standard "dream" are present; they just exist within the wrapper of technology. This is the standard "sufficiently advanced technology is indistinguishable from magic" argument. Truly, all technology is magic, just accomplishing magical goals with a symbolic wrapper of tech.
Another way to look at it is that we all live within a light hologram. All of existence is a light hologram. It is like a bright point of light that we are continually looking closer at, and as we do, we discover the complexities of color and form within what was originally just a wash of light. All/No Colors -> White/Black -> RGB, etc. Researchers have already demonstrated the capacity to create subatomic particles using nothing but the confluence of light. It isn't a huge leap to then surmise that all "matter" is actually condensed or bound light.
We are effectively consciousness objects sharing the delusion that we all possess "physical" bodies and interact with a material world, but it's a reflexive argument, because the material objects we interact with are only material to things within the simulation of materialism. It's like we're all on the Star Trek holodeck, but the joke is that we're all actually the Doctor, and all the meat things which think they exist within the "real" world are fundamentally just as immaterial as the Doctor - sharing only the one thing they all possess, which is consciousness, and the ability to observe, interact with, and build knowledge about their external world.
Sure, we could be in a simulation. It's interesting to ponder. You'll begin forming answers when you dissect the different components, relationships, and interactions, and try to create your own version of it: say, AGI.
;)
It's about as plausible and well substantiated as the Christian God. Which isn't to say it's impossible, but just that it's almost entirely an article of faith. There are a few anecdotes which maybe hint at its possibility. But nothing even approaching a rational explanation of how it might be built.
Also, the model of cognition you are describing (information processing, or I/O) is very old, from the 1960s. It was inspired by the invention of the computer. There are other models, like Ecological Psychology, Embodied Cognition, and Distributed Cognition.
It is tempting to draw a box around the brain and posit that it's the most important interface and that all of the important information passes through it. But whenever you break down so-called "cognitive" phenomena, you find that often very little information passes through that barrier. The lion's share of encodings remain outside the brain, and for any given animal task, quite a lot of information processing happens outside the brain.
In a weird way, the information processing model is a vestige of the notion of a soul. If you really accept "physical fundamentalism" as OP describes it, then the interface between the brain and the rest of the world is nothing at all. Just a ribbon of atoms with a name. No more interesting than the interface between your stomach lining and the bacteria inside, or between the vibrations in front of your mouth and all nearby ears.
The only reason to center the brain/environment interface is to try to separate what you consider to be the essential identity of a person from their physical grounding. I.e. to maintain a model which includes a soul.
False dichotomy. As far as I'm remotely up-to-date on theoretical neuroscience both information processing and embodied cognition are correct. The brain processes the interoceptive and exteroceptive information received from the body, computes first- and second-order statistics, and uses those to emit actions that explore-and-exploit the body and environment (including the environment's capacity to process information or convenient structures in the environment that don't bear memorizing).
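The "compute first- and second-order statistics, then emit actions that explore-and-exploit" loop described above is essentially a bandit algorithm. As an illustration only (the commenter gives no implementation; UCB is my choice of a standard technique that fits the description), here is a minimal sketch where an agent tracks per-action reward means (first-order) and a count-based uncertainty bonus, and acts on their sum:

```python
import math
import random


class UCBAgent:
    """Tracks a running mean reward per action (first-order statistic)
    and a count-based uncertainty bonus, then picks the action with the
    best mean-plus-exploration-bonus (explore-and-exploit)."""

    def __init__(self, n_actions):
        self.counts = [0] * n_actions
        self.means = [0.0] * n_actions
        self.t = 0

    def act(self):
        self.t += 1
        # Try every action at least once before trusting the statistics.
        for a, c in enumerate(self.counts):
            if c == 0:
                return a
        return max(
            range(len(self.means)),
            key=lambda a: self.means[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, action, reward):
        # Incremental mean update.
        self.counts[action] += 1
        self.means[action] += (reward - self.means[action]) / self.counts[action]
```

Run against a two-armed bandit, the agent quickly concentrates its pulls on whichever arm pays off more, while still occasionally re-sampling the other - the explore-and-exploit balance the comment refers to.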
>The only reason to center the brain/environment interface is to try to separate what you consider to be the essential identity of a person from their physical grounding. I.e. to maintain a model which includes a soul.
Sure. If you want a complete description of a person, at minimum you need their whole body, including but not limited to their brain.
AGI beliefs don't compare to religious beliefs just because you say so. You simply don't have the same knowledge background as people who think AGI is realistic or even likely. Maybe you have more or better knowledge, or maybe they do... but just saying they're wrong is pointless.
The boundary for systems-level consideration of intelligence is often placed around the brain because that's most convenient, not because nothing interesting happens at larger (sociology, ecology) or smaller (Drosophila, Portia) scales.
Having done the research myself, having the computational models in front of me, and currently developing iterative capability levels, I can assure you it's plausible, and I've developed some pretty complex infrastructures and systems during my time in industry.
I encountered the same issue in industry when I was developing the network infrastructure equipment that ensures your packets traverse the net. "You can't do that". "That isn't possible". "You can't shave off 300ms from that process. No one has touched that code in 10 years"
Why yes you can, and I personally already have a track record of doing what is said can't be done. Everything is implausible until it's made plausible. So you'll never truly know until you try.
The biggest hurdle blocking people's way is that they're choosing to run down the same path. Why should you expect different results when everyone is approaching the problem the same way? That, and instead of the most knowledgeable people getting in the trenches and attempting to write code, they remain in the philosophical camp and their works are tossed across the wall to the applied engineering camp. Rarely do you find someone who wears multiple hats or straddles the fence. I straddled the fence, saw what I saw, and now I'm developing it.
There are few who want to start from scratch and build up models. Many are ripping models from work done in the '60s, '70s, and '80s without a second thought as to the thinking behind them. I chose an alternative path. It's paying off.
The model of consciousness that is being used is purposely not detailed in any way. So, it is not very old. I have a stack of annotated white papers on the pioneers from the '60s/'70s/'80s centered on this inquiry, and present-day papers on global workspace theory (GWT), integrated information theory (IIT), etc. I fail to see any deep connection between my approach and theirs.
I intentionally haven't given any detail about the computational model of consciousness that is at the center of the architecture nor even the slightest detail on how to implement it. Given the climate in this space, I hope you can appreciate why.
You see a box. I see a relationship. There are no broad boxes over anything I am developing. The diagram was made in simple, reduced form to help one conceive of the ties and flows to and from the world. People sing high praise of OpenAI and OpenAI Gym. I experimented with similar open-source packages when attempting to create a virtual environment for testing my code. I settled on different packages and developed my own gym; I needed more access to the core/gut functions. From what I can tell, there are several other groups/companies/individuals that have done the same. No mention of them ever. No praise. Which is fine, but it just goes to show how there are likely numerous groups making headway in this area that no one has ever heard of.
>In a weird way, the information processing model is a vestige of the notion of a soul. If you really accept "physical fundamentalism" as OP describes it, then the interface between the brain and the rest of the world is nothing at all. Just some atoms of many. No more interesting than the interface between your stomach and your brainstem, or between the vibrations in front of your mouth and all nearby ears.
Interface/Relationship. There are no 'boxes' until you create one.
> The only reason to center the brain/environment interface is to try to separate what you consider to be the essential identity of a person from their physical grounding. I.e. to maintain a model which includes a soul.
Objective reality (governed by strict laws like physics). Subjective experience. Pay close attention to the wording I use, as I don't give many details.
Seeing will eventually be believing. Once made manifest, you won't be able to deny its plausibility. Seems one can save a lot of time skipping attempts to try to convince people and just get to the development.
But yeah, consciousness isn't that serious. You just have to think outside the box to begin making progress on it. Whether or not we're in a simulation is immaterial. The word 'simulation' really loses its meaning once you peer deep into the constructs that underlie the universe. What does that even mean, and how, even if you discovered it was a simulation, would you alter it in any meaningful way? Don't you think the person who created it, given how amazing it is, had the wherewithal to implement safeguards/alerts? Or even made universal laws that forever restrict you from certain things? It's better that you focus on how it works than trying to define it. It makes for good storytelling, but I'd rather just dig in, understand it, and make use of that understanding instead. Again, do you want to sit around philosophizing and dreaming about it all day long, or do you want to start converting that understanding into something groundbreaking?
P.S. A component of the research that was conducted centered heavily on physics/quantum physics. It is quite important to understand the 'environment' and its laws when working on AGI.
Doesn't feel like there's ultimately any way out of this line of reasoning. What would it take to prove to you that you are indeed not in a simulation of some kind, when the only methods of providing proof are also parts of the simulation?
Simulations are just as real. Why wouldn't they be? What makes "natural laws enforced by reality" really any more "real" than "natural laws enforced by a simulation"?
Physical systems aren't about anything, and don't represent anything on their own. It's the entire problem of intentionality.
This line of thought is really old and goes back to Plato (see the allegory of the cave), and was formalized by Descartes (see the evil genius: https://en.m.wikipedia.org/wiki/Evil_demon).
In that case, the argument is undermined.
The only way to do it would be to fake it by generating the appearance of a thorough simulation rather than the reality of one. In which case the arguments put forward for wanting to perform a real simulation - to simulate history and so forth - break down because you'd only be emulating the appearance of it not simulating it.
The only way out of this I can see is if the universe containing the simulator were vastly more complex than ours, such that in comparison our universe would be trivial to simulate. But then why would they do it? Our universe would be nothing like theirs. In principle this is possible, but it massively reduces the chances that our world is a simulation, because only a subset, and quite possibly a vanishingly small subset, of possible universes would be capable of hosting the simulation. Possibly fewer universes than there are universes like ours. At which point the odds of ours being a simulation collapse.
Does someone really need to build a computer that carries out the simulation for a universe to be “real”? If there is a set of rules defining a universe, one can say that the universe already exists without having to simulate it.
The same goes for universes that are capable of life that can simulate other universes nested inside them. And, indeed, universes nested three times, four times, all the way to infinity.
The number of nested universes is a much larger infinity than the number of non-nested "root"-level universes. Thus, picking a universe at random (ours), the probability that it is a simulation is 1.
Marvin Minsky made this argument and it's a compelling one. But rather than meaning all universes are simulated, it means it doesn't matter whether they are or not because for all possible universes they will exist as root universes and as simulations within more complex universes and there's no meaningful distinction between those. It's not the argument I'm making though.
A simulation is only dealing with form.
It seems to me a sufficiently precise simulation would necessarily capture meaning. If meaning is critical to decision making and that decision making is precisely simulated, then the simulation must also capture meaning.
That is the very bone of contention here.
> If meaning is critical to decision making.
I don't believe it is in the general case. Our current crop of Go winning machines seem to indicate otherwise.
So we consider the space of meaning, where each point in that space can map to a multiplicity of (possibly similar, possibly entirely distinct) forms. The mapping function is the sentient actor extracting/projecting meaning from[/to] form.
Example: ဖခင်, predak, پدر, 父亲, father.
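The many-forms-to-one-meaning idea above can be sketched as a tiny lookup. This is only an illustration of the mapping's shape (the `FORMS` table and function names are mine, not the commenter's); the "observer" is the function that collapses several surface forms back onto one point in meaning space:

```python
# One point in "meaning space" mapped to several surface forms,
# using the comment's own example words.
FORMS = {
    "FATHER": ["ဖခင်", "predak", "پدر", "父亲", "father"],
}


def extract_meaning(form, lexicon=FORMS):
    """The observer's mapping: recover the meaning a given form points at,
    or None if the form is not in this observer's lexicon."""
    for meaning, forms in lexicon.items():
        if form in forms:
            return meaning
    return None
```

Distinct forms collapse to the same meaning, while a form outside the observer's lexicon maps to nothing - which is the observer-relativity point being argued.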
Simulations can tell us nothing new about the true nature of reality, because any simulator would reflect the current assumptions of its day and be based on an oversimplified model. A weather simulation is not the weather. The map is not the territory.
Why?
There is no need to invoke the spiritual or supernatural to see that consciousness is a problem for science.
Searle's objection still seems to hold some water, though. Consciousness does not seem to be a computation because whether something is a computation is observer-relative; for some observers this set of electrical flickering makes sense as a computation to produce a sunflower-like pattern of points based on emitting branches in directions of (pi * the golden ratio) radians... but for the vast majority of observers probably it doesn't seem like anything until I print out a picture of the result; and even then it might not mean anything to those observers (they might be blind, or they might not associate it with sunflowers, or they might have alien brains so differently wired from mine that they simply cannot appreciate art the way that humans can). We actually have formally defined computation to be observer-relative in precisely the way that the status of what words a book contains and what those words together mean is observer-relative (think that in some other parallel universes the English language was exactly the same but that the words for 'cat' and 'dog' were transposed, and so this same book tells a somewhat different story in those worlds).
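The "sunflower-like pattern of points based on emitting branches in directions of (pi * the golden ratio) radians" described above is concrete enough to write down. A minimal sketch (following the comment's stated angle; Vogel's classic phyllotaxis model uses the related golden angle, but the structure is the same):

```python
import math

PHI = (1 + 5 ** 0.5) / 2   # the golden ratio
ANGLE = math.pi * PHI      # turn per point, as the comment describes


def sunflower_points(n):
    """Place n points: the k-th point sits at angle k*ANGLE and
    radius sqrt(k), producing the familiar sunflower-seed spiral."""
    pts = []
    for k in range(n):
        theta = k * ANGLE
        r = math.sqrt(k)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Whether the electrical flickering that evaluates this loop "is" a sunflower pattern, or only becomes one when an observer plots the points and recognizes it, is exactly the observer-relativity at issue in the comment.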
The problem is that my two bunnies seem to be quite conscious, to say nothing of myself or my girlfriend. It's not just that they're conscious-relative-to-me-but-it-depends-who's-looking... if that's true, then it's a very different perspective which almost nobody takes seriously and practices. My bunnies just seem to be conscious, full stop. They appear to have both interests and the capacity to feel pain (observer-relative consciousness), but it appears to be more than just an appearance! In some sense they are objective observers to whom their own consciousness is relative; therefore they are objectively conscious in a way that computations just don't seem to be objectively anything.
The hope of the functionalist approach to consciousness, with its common-sensical "anything which could replace this airy-fairy consciousness stuff in all of its functional roles would be equally justified to be called conscious," is therefore that as processes with no intrinsic meaning become more complex and more involved, there is some way to say "no, the parts of that don't have much intrinsic meaning by themselves, but you put them together and then this thing is objectively computing X or Y; there is just no other way for an observer to view it; it has passed a complexity threshold beyond which there is only one interpretation of it." Our books, with the cat <-> dog substitution looming in our minds, clearly don't pass this threshold by and large, but perhaps processes more complicated than those books' narratives can?
We can certainly attach semantics to the numbers (e.g. saying this bit pattern represents dollars, or a spaceship's shield %), and that is observer-relative. But that is completely different from categorizing or understanding a process as a computation in general, in terms of logic.
I'll stop short of implying that it's the same for the human brain, because nobody should be pretending to understand the brain at this point in history. However, this does provide a way to see how it could be true for the brain, if it is ever determined that the brain is precisely equivalent to a computer.
Let me put it to you a different way: suppose that I put two bags of some substance on one side of a beam attached to a fulcrum, and one bigger bag containing a substance that looks similar, on the other side of the beam. Suppose that the beam continues to tilt such that the bigger bag is resting on the ground. There are definitely some observer-relative ways to read this situation; but is there an observer-independent way to read it, which goes beyond what I have already said about it in this paragraph?
I mean, I like the idea, but it just seems so implausible.