> Of course the state can be written in the form |xx> + |yy>. I never denied that. The point is that it can be written in other forms. So it's equally correct to say we'd "experience" |zz> + |ww> + |zw> + |wz> as to say we'd experience |xx> + |yy> so there's no reason to say we'd "obviously" experience only the latter. Your argument is just "that expansion is always available", but since other expansions are also always available I don't see what the force of this argument is.
If there's a simple description of the wavefunction that's valid then there should be a correspondingly simple description of our experiences that's valid. The fact that there's also a more complicated valid description of the wavefunction is neither here nor there. It's like looking at a basket of 4 apples and asking why your experience doesn't correspond to there being 6 - 2 apples.
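To be concrete about the non-uniqueness in question: the same Bell state really can be expanded in more than one product basis. A minimal numerical sketch (my own toy check with numpy, using the Hadamard-rotated |+>/|-> basis as the second expansion; nothing here is from the paper):

```python
import numpy as np

# Computational basis for one qubit, and the Hadamard-rotated basis
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# The Bell state written as (|00> + |11>)/sqrt(2) ...
bell_01 = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
# ... and the very same vector written as (|++> + |-->)/sqrt(2)
bell_pm = (np.kron(plus, plus) + np.kron(minus, minus)) / np.sqrt(2)

print(np.allclose(bell_01, bell_pm))  # True: one state, two expansions
```

Both expansions describe the same vector; the disagreement is over what, if anything, that implies about which description of experience is privileged.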
> Quantum states without superselection (e.g. the entangled states of the form you are considering) leave Bayesian conditioning undefined. As the paper mentions, this is a direct consequence of the Kochen-Specker theorem via non-unique orthogonal expansion. It's not a red herring but a rigorously proved theorem.
Ok, I take your point: saying that we can just condition was overly flippant. If there are cross terms (i.e. entanglement), then classical conditional probability doesn't always accurately describe the behaviour of a system, and that's true in particular for a system that includes experimenters. But if we treat an experimenter's conditioning as creating entanglement, like any other QM interaction, and treat the subsequent evolution of the system quantum-mechanically, then there's no problem.
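For concreteness, here's the cross-term point in miniature: send a qubit in (|0> + |1>)/sqrt(2) through a Hadamard. Treating the intermediate state as classically either |0> or |1> and averaging over the two conditioned evolutions gives the wrong final probabilities, because it discards the interference between branches. (A toy numpy sketch of my own, not anything from the linked paper:)

```python
import numpy as np

# Hadamard gate and a qubit in the superposition (|0> + |1>)/sqrt(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Quantum evolution of the superposition: the branches interfere,
# so the final distribution is P(0) = 1, P(1) = 0
p_quantum = np.abs(H @ psi) ** 2

# "Classical conditioning": pretend the system was definitely |0> or
# definitely |1> (each with probability 1/2) and average the evolutions
p_from_0 = np.abs(H @ np.array([1.0, 0.0])) ** 2
p_from_1 = np.abs(H @ np.array([0.0, 1.0])) ** 2
p_classical = 0.5 * p_from_0 + 0.5 * p_from_1

print(p_quantum)    # [1, 0]
print(p_classical)  # [0.5, 0.5] -- the cross terms were thrown away
```

Treating the conditioning step itself as a physical interaction (entangling the experimenter with the qubit) and evolving the joint state quantum-mechanically reproduces the right statistics; it's only the classical shortcut that fails.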
> A good example of the issues with pre-measurement alone is here https://arxiv.org/abs/2003.07464. You can't just treat the device as simply entering some CHSH or GHZ style entangled state and think that solves everything about measurement. It doesn't via the theorem I gave in the paper above (and other issues).
That paper amounts to nothing more than redefining "outcome" as something that cannot be in a superposition, and then using that definition to argue that their unfounded notion of decoherence is physically meaningful. If we assume that experimenters are physical systems that can undergo superposition like any other, then of course Bell-style "no hidden variables" results apply when those variables are the outcomes of experiments. Big whoop. (Would you find the following argument convincing: "Pre-measuring the polarisation of the photon might have one of two possible results, so it doesn't have an outcome according to any reasonable notion of "outcome". Therefore, if any observer has measured a photon's polarisation, a physically meaningful process of decoherence must have occurred"? Put like that, it's hopefully obvious that this is nothing more than asserting the primacy of the Copenhagen interpretation).
> Note how this involves hard mathematics, not vague talk about "obvious features of subjective experience". I'll also note that this is a general feature of discussions about this stuff among non-physicists online, especially programming communities like this one: the knowledge is stuck in the late 1970s.
Look, I'm not a big fan of credentialism, but I do have a master's in this from a reputable institution. If working physicists have found a compelling argument that there's something mysterious about measurement or experience, then that knowledge hasn't made its way as far as even taught postgrad courses, let alone the wider public, and the blame for that has to rest with the physicists. (I rather suspect that there's no such argument that has reached any significant consensus among working physicists, and that the "late 1970s" view in the public sphere reflects that).