Hume's Problem of Induction, for instance, is exactly an example of philosophical practice grappling with these unanswerables.
My favorite philosopher, however, remains Heraclitus. Had we chosen to go his way we might have had fewer stupid questions like "is the cat in the box dead or alive?", and instead might have come straight up with the answer "the cat is dead, alive, and every state between dead and alive, and we're fine with that". Unfortunately Aristotle was not fine with accepting the many "states" of the world "happening" all at the same time and went for the binary True-False way, bad-mouthing Heraclitus in the process. We certainly did manage to build a more efficient society by following Aristotle's way, but I think we have reached a local maximum, or it certainly looks that way. Maybe reverting to the pre-Socratics will help us get over this local maximum.
Hume's problem of induction is arguably the last substantial thought on the subject, right up through Popper's bridge problem.
What does it mean to "deal with" unknowable things in this case? If philosophy is claiming this as their purview, what are they going to do with it? I'd posit philosophers have two options:
1. Say, "I don't know." This is the better option, in my opinion, because it's honest, but scientists already said that, so why do we need philosophers to say the same thing? You can speculate beyond this, positing from the beginning "if this thing we don't know to be true is true, then it would have this effect". But in other fields this would generally be a very low-value sort of discussion--respectable institutions would not, for example, give a lot of funding to scientific experiments which presuppose unstudied phenomena. You'd study the unstudied phenomenon first and come to conclusions there before moving on to further experiments which presuppose it. Philosophy isn't hurting anything by taking this approach, but it's not adding anything to what science has already done.
2. The second option is, you do what philosophers do all too often: simply present your speculation as fact, perhaps hiding an "I don't actually know" in a footnote somewhere so you can point to it when criticized. A common variant of this is teaching ridiculous ideas as equally valid, and then saying you're just teaching history of philosophy when criticized. This is how, for example, you get the categorical imperative taught in schools: it's trivial to come up with counterexamples where everyone behaving a certain way would be horrible, but if you point this out, philosophers will often simply say that they're just teaching Kant because he's historically important. Yet Kantian ethics are taught right next to much more realistic ethical ideas, and students often can't differentiate which ones make sense and which ones don't. This would be like teaching flat-earthism in science class, and then saying "it's history of science" when criticized. It's a motte-and-bailey argument[1] and it's dishonest and harmful to rational thought.
It seems to me that science has taken us as far as it's useful to go with regard to determinism, and philosophy has nothing of value to add on the subject.
Hume's Problem of Induction isn't comparable here. In that case, Hume is asking a question which science hasn't/can't ask, which is somewhat useful. I don't think, however, that Hume really answers the question, and I don't think it would be useful to pretend that we know the answer. In the case of Hume's Problem of Induction, philosophy adds the question but not the answer. With superdeterminism, science has already asked the question, and philosophy can't answer it any better, so philosophy has nothing to contribute.
I believe it's impossible to completely isolate any segment of the universe (e.g. to make it smaller and thus predictable within the capability bounds of the larger universe) without literally removing it from that universe; no matter what, every part of an existing universe interacts with every other part, even if very, very indirectly.
As for the question of free will: I believe the biology is largely deterministic. For me, that leaves the main set of questions in the direction of all the elements that might happen between, outside, or otherwise beyond our current understanding of how the universe works. I feel that if there is any actual freedom in free will, that is where it comes from; otherwise it's just an RNG too complex to understand completely, masking the lack of actual choice.
Wolfram proposes an interesting solution to the question of free will, one that does not require any randomness: computational irreducibility. It is the hypothesis that for some computations there is no shortcut: the only way to find the outcome is to actually perform them. That is, if you try to predict what an AI will choose, your only option is to create an exact copy and let that copy make the choice.
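To make that concrete, here is a minimal sketch (my own illustration, not Wolfram's code) using his favorite example, the Rule 30 cellular automaton: no known closed form gives the state at step n, so the only way to get there is to run every step.

    # Computational irreducibility, illustrated with Rule 30: the only
    # known way to learn the state at step n is to run all n updates.
    def rule30_step(cells):
        """One Rule 30 update with fixed zero boundary cells."""
        padded = [0] + cells + [0]
        return [padded[i - 1] ^ (padded[i] | padded[i + 1])
                for i in range(1, len(padded) - 1)]

    cells = [0] * 31 + [1] + [0] * 31  # start from a single "on" cell
    for _ in range(30):                # no shortcut: simulate every step
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)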
The article mentions that Bell's inequality was in a similar position in the past.
Like all interpretations, it's mathematically equivalent to any other. It's just a question of what helps you think about the problem, and I don't think many people find it very edifying. You can replace the box with a random number generator, which is at least small enough to fit in your pocket. The superdeterminism box appears to have been crammed full of untold centillions of answers... none of which are accessible beforehand.
If there were reason to think that the superdeterminism box were somehow smaller -- if it all really came down to just one random bit, say, that had been magnified by chaotic interactions to appear like more -- that would attract some attention. And I suppose it would be conceptually testable, by running Laplace's demon in reverse, except that that's not possible either from inside the universe.
So it doesn't really come as a surprise that superdeterminism falls behind MWI or Copenhagen or even pilot wave, because each of those hands you something that you can use to mentally organize the world. Superdeterminism just seems to hand you a catchphrase: "As it was foretold in the Long Ago -- but which I just found out about".
Superdeterminism also plays nicely with the simulation hypothesis. You seed the virtual machine with some randomness and the physical laws and then you run the simulation.
There's nothing wrong with that. I just don't think people find it very useful as an organizing principle, so it doesn't attract a lot of attention.
So either way, you've got a probability distribution. And at this point people just apply Occam's Razor and get on with their lives. You can theorize an infinite number of systems that work exactly like the real world. The question is whether they're useful.
Like the Many Worlds Interpretation!
To calculate this secret table you must simulate all the interactions and paths in the universe until it ends, because you must know which particles will be entangled, which results the "random" generator in the experiments will produce, and so on.
So the universe is only a movie that follows the random choices made at the beginning of the universe. But the choices are not arbitrary: they have exactly the right values so that when the events really happen they follow the laws of physics. For example, the random choices at the beginning of the universe make it look as though you can't transmit information faster than light.
Physics studies the laws of the real universe, but we can redefine physics as the study of the laws that govern the random number generator. Both real-physics and initial-RNG-physics follow special relativity. Both agree about QM. Both agree about the Bell inequality.
So with superdeterminism we solve the problem of QM in the real world, because everything we see is already determined. Now the problem is how the RNG at the beginning of the universe works to simulate QM and all the other effects. Let's call the study of that RNG "physics". Now the problem is as hard as it was before superdeterminism.
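A toy illustration of that point (purely illustrative, with no physics in it): even with the universe's seed in hand, the only way to read entry n off the table is to replay every draw before it, so studying the seeded generator is just the original problem wearing a different hat.

    import random

    SEED = 42  # the "initial conditions of the universe", fixed once

    def outcome_of_event(n):
        """The nth scripted outcome, knowable only by full replay."""
        rng = random.Random(SEED)
        for _ in range(n):  # no shortcut past the intervening events
            rng.random()
        return rng.random()

    print(outcome_of_event(1_000_000))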
What superdeterminism says is that there exists a local and deterministic evaluation rule that will compute consecutive states of the universe, but simply because of the way the rule works, experimenters far away always end up choosing the experiments that yield the correct results.
Superdeterminism is unpopular because the existence of such an evaluation rule seems very unlikely.
> Where do these correlations ultimately come from? Well, they come from where everything ultimately comes from, that is from the initial state of the universe. And that’s where most people walk off: They think that you need to precisely choose the initial conditions of the universe to arrange quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob left rather than right. Besides sounding entirely nuts, it’s also a useless idea, because how the hell would you ever calculate anything with it? And if it’s unfalsifiable but useless, then indeed it isn’t science. So, frowning at superdeterminism is not entirely unjustified.
If this is the correct meaning of superdeterminism, then it doesn't make sense. Saying that there are some unknown rules that explain something is not a scientific theory.
You can solve the quantum gravity problem saying that there are some unknown rules that explain that. You can solve the renormalization problem saying that there are some unknown rules that explain that. You can solve everything saying that there are some unknown rules that explain that.
Actually I would like to know more about provable violations of Bell's theorem, as I am somewhat attached to local determinism and haven't seen an experiment that I would consider convincing. I mean the theories behind the experiments are sound, but I'm not sure they're actually measuring what they think they are measuring, due to limitations in the experimental setup -- in order to prove a violation of locality your system cannot be in a cyclostationary equilibrium.
In such an equilibrium the system state effectively becomes a standing wave so you risk measuring an effect that was actually a result of a previous cycle and mistakenly interpret it as being a result of the current cycle -- implying a violation in locality because the "cause" was outside of the light cone of the effect. Note that this is analogous to confusing the group and phase velocities of a radio wave (https://www.quora.com/What-is-the-difference-between-phase-v...).
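For reference, the distinction the linked answer draws, in symbols:

    v_phase = ω / k     (speed of individual crests)
    v_group = dω / dk   (speed of the envelope, which carries the information)

Confusing the two can make a signal appear to arrive earlier than it physically could.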
I don't know where you're getting this from, but it doesn't describe quantum systems on which Bell inequality violations have been experimentally confirmed (such as photon pairs from parametric down conversion).
The only "loophole" that has not been completely closed at this point is that we don't have 100% efficient detectors, but we have detectors that are well over 90% efficient so the claim that somehow all the stuff that will "fix" the Bell inequality violations is hiding in the small percentage of photons not being detected isn't very compelling.
In the experiments generating the photon pairs from parametric downconversion, for example, does the entire system start up, send one photon which gets split into an entangled photon pair which then goes to the detectors -- with no other photons generated?
If there is a warm-up period for the equipment or other photons are emitted or absorbed then there is the potential for memory effects that could interfere with the measurements.
For instance, if we treat light as a wave, then the cosine correlation with angle that we see in the basic "two entangled photons with polarizing lenses" experiment is exactly what we would expect to see. The difficulty is simply reconciling this with the particle nature of photons. If the experimental system has memory, then it could easily have the phase of the effective wave, or some other function of the history of photons, encoded in the state of the system.
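For context on what that cosine correlation implies, here is a small numerical check (illustrative only, not a model of any particular experiment): plugging the quantum prediction E(a, b) = cos 2(a - b) for polarization-entangled photons into the CHSH combination at the standard angle settings exceeds the bound of 2 that any local hidden-variable model must satisfy.

    import math

    def E(a, b):
        """Quantum correlation for polarization analysers at angles a, b."""
        return math.cos(2 * (a - b))

    a1, a2 = 0.0, math.pi / 4              # Alice's settings: 0, 45 deg
    b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's settings: 22.5, 67.5 deg

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print("S =", round(S, 3))  # 2.828 > 2, the local hidden-variable bound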
There are probably some ways to compensate for these memory effects and demonstrate their (non)existence, but I am not a physicist.
And there is something very Taoistic about homotopy type theory too.
Also, I feel that both superdeterminism and homotopy type theory have traces of the holographic principle in them in a somewhat conceptual or abstract way.
Perhaps there exists a nice correspondence between superdeterminism and homotopy type theory that can be used to extend (in a purely functional and categorical way) the simulation hypothesis into a full-fledged theory (and perhaps with its own nice little axiomatic system) to make sense of reality.
http://en.wikipedia.org/wiki/Principle_of_explosion
If superdeterminism explains quantum mechanics, why not cosmic inflation? Why not matter asymmetry? Why not abiogenesis? Why not Brexit? Superdeterminism, by construction, can explain everything — and all there’s left to do is pray to God.
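For reference, the linked principle is the formal fact that a contradiction entails every proposition, here as a one-line sketch in Lean 4:

    -- Principle of explosion (ex falso quodlibet): from P and ¬P,
    -- any proposition Q whatsoever follows.
    example (P Q : Prop) (hp : P) (hnp : ¬P) : Q := absurd hp hnp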
Firstly, I don't think anyone has actually formalised superdeterminism in a way that the principle of explosion can be logically introduced to formally undermine it. What you are doing here is akin to stretching the conceptual relevance of Gödel's incompleteness theorems and trying to use them to prove or disprove the existence of God.
Basically I don't see how it makes sense to say that superdeterminism contains a principle of explosion. Perhaps my interpretation of superdeterminism is very different from yours. Or maybe I simply don't see the picture as you do. If that is the case please enlighten me.
Secondly I think you are missing the point of superdeterminism here.
There is something very computational (and perhaps Taoistic) about superdeterminism. Apparently under this framework the whole notion of "explaining things" is nullified and becomes meaningless. It occurs to me that our everyday notion of "explaining things" exists at a lower abstraction level and thus loses relevance in the face of superdeterminism. I believe if you really want to undermine superdeterminism as a theory (or as a philosophy), the more relevant question here to ask is: is there anything useful/meaningful about reality (or the universe) that can be inferred assuming superdeterminism? And then of course if you are a scientist you would then ask: are they experimentally verifiable?
To even ask the question you must deny your own premise. If indeed superdeterminism is true, then any experimental verification is nullified by definition: the results of any and all experimentation are themselves superdetermined, regardless of any scientific framework.
Superdeterminism "solves" the problem by claiming that there is no problem to begin with, and that the results only look non-local because the experimenters always pick the experiments that look non-local.
How a local deterministic theory can create behavior as complex as thinking people, and at the same time constrain it in such a way that the time taken to play a Mario level is correlated with a photon experiment a year later, is left as an exercise for the reader.
It is serene. Empty.
Solitary. Unchanging.
Infinite. Eternally present.
It is the mother of the universe.
For lack of a better name,
I call it the Tao.
It flows through all things,
inside and outside, and returns
to the origin of all things.
The Tao is great.
The universe is great.
Earth is great.
Man is great.
These are the four great powers.
Man follows the earth.
Earth follows the universe.
The universe follows the Tao.
The Tao follows only itself.
It is little better than the presumption that planets move because a prime mover moves them. It is, in essence, to give as the final answer: "Planets move as they do because they cannot do anything else."
Which doesn't make it wrong...