I also wish I could just know what people a thousand years from now will learn about the universe. It seems like the knowledge we gain (science and technology) is growing exponentially, and it only really got started recently.
It also feels like this contribution is so cool, even more so seeing that Hawking is somehow still sharing things with us after dying.
One person's mundane work is another person's magical wizardry.
Also, unfortunately there's no evidence that I know of that our knowledge is going to continue growing at an exponential pace. It seems an equally plausible hypothesis that we're reaching a plateau that may last for quite a length of time.
(But... I personally have my hopes up that we'll be doing Star Trek style space exploration within the next millennium!)
Absolutely. Sometimes I get excited about a topic and try to learn as much as possible, but once you peek behind the curtain it's just as mundane as anything else.
Most of the time an abstract is hard to understand because the concepts have been abstracted. If you just break it down and pick it apart, it's not hard to understand, though it can be time consuming depending on what you already know.
Aren't these people just theorists? Black holes are a theory, not a proven reality that can be directly observed.
Here's what I got:
- For a long time we thought that any information (matter/light) that goes into a black hole is lost forever and is "corrupted".
- Hawking believed this for a long time and said “God not only plays dice, but he often throws them where they can’t be seen.” No one really knows _how_ the black hole actually "corrupts" the information, but there were some nutty theories.
- 30 years later (in 2004) Hawking changed his mind and said that information can actually be retrieved from a black hole.
- A dude named Andrew Strominger recently discovered that black holes have this "soft hair" property that can be "read" to theoretically "see" what is inside the black hole.
- Hawking's last paper says that he thinks the information inside will be re-emitted when the black hole evaporates.
TL;DR: Hawking for a long time thought matter/information that went into a black hole was lost forever - and then changed his mind about it.
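The evaporation step in the last bullet can be put in rough numbers with the standard Schwarzschild-black-hole formulas. This is a back-of-the-envelope sketch (my own, not from the thread), ignoring greybody factors and the number of particle species:

```python
import math

# Physical constants (SI units)
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
C    = 2.997_924_58e8      # speed of light, m/s
G    = 6.674_30e-11        # gravitational constant, m^3/(kg*s^2)
K_B  = 1.380_649e-23       # Boltzmann constant, J/K

def hawking_temperature(mass_kg):
    """Temperature of a Schwarzschild black hole's Hawking radiation."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time(mass_kg):
    """Rough textbook lifetime until complete evaporation, in seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

M_SUN = 1.989e30  # solar mass, kg
print(hawking_temperature(M_SUN))  # ~6.2e-8 K: far colder than the CMB
print(evaporation_time(M_SUN))     # ~6.6e74 s, i.e. ~2e67 years
```

A solar-mass black hole is colder than the cosmic microwave background, so today it absorbs more than it emits; the "re-emission" in the last bullet only completes on absurdly long timescales.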
Assuming Hawking radiation exists (which seems very likely based on what we know about relativistic and quantum physics), it must carry quantum information in the form of position/momentum, photon polarization, etc, and it's not clear where else that information could possibly come from. (Orthogonal-basis measurements can sort of generate classical information out of nothing, but not in a sense that's useful here.)
chuckle
OpenSSL Changelog.txt:
v5.1.0 released on 2172-03-21:
- ...
- add support for Sagittarius A* message digest
- ...

If only that were true; that'd be no problem at all.
The problem is procedural, and has to do with slicing up spacetime-filling fields into field-values on spacelike hypersurfaces (values-surfaces). I'll focus on one procedure -- there are others that have their place as well.
In a spacetime without any black holes at all, we can take any such values-surface whereupon all the values are specified, and from that we can recover all the values of the spacetime-filling fields everywhere in the spacetime. This is the https://en.wikipedia.org/wiki/Initial_value_formulation_(gen...
The important thing about the initial value formulation is that we can on our chosen values-surface perturb a single field-value, and trace the consequences to neighbouring values-surfaces, and their neighbouring values-surfaces, and eventually recover the whole set of spacetime-filling fields everywhere in the spacetime. Indeed, one family of slicing, https://en.wikipedia.org/wiki/Hamiltonian_constraint#Hamilto... lends itself to https://en.wikipedia.org/wiki/Canonical_quantum_gravity (CQG). CQG works everywhere in the absence of strong gravity, and even provides a clear definition of strong gravity in terms of renormalization: http://www.preposterousuniverse.com/blog/2013/06/20/how-quan... (Below I'll generalize this to the Effective Field Theory (EFT)).
If we have no black holes, and no early singularity, the effective theory is almost certainly correct everywhere in the space-time. (Here I won't even consider the early universe problem; there is a problematical ultradense phase in the Hot Big Bang model that requires beyond-the-standard-model physics that wrecks fields-of-the-standard-model values-surfaces before we get to strong gravity.)
If we add black holes, but without Hawking Radiation (that is, they only ever grow) on each hypersurface we have to "cut out" the field-values at the boundary of any region containing strong gravity. These regions are, crucially, well inside the event-horizons of massive black holes. That is, the EFT does not end at horizons, it ends near gravitational singularities.
While there are some annoyances, for most reasonable slicings, we can still recover the full spacetime-filling fields everywhere in spacetime. The field-values that enter the horizon are trapped within the horizon, and eventually they are trapped within our "cut out" region. As our black holes never evaporate, those field values have no impact on future slices. We have, however, found ourselves with a new constraint that picks out a direction of time: the future is the direction in which the "cut out" has no impact, but the past is one in which the "cut out" emits field-values. That's the main source of annoyance, and stresses the "initial" part of "initial values". Picking out just any surface will only guarantee you recovery of the future successor surfaces; in most cases you cannot even in principle recover the past values-surfaces, with the result that you also cannot recover the whole set of spacetime-filling fields. THIS is the incompatibility between quantum mechanics and general relativity.
(In practice, researchers -- including Hawking in his original Hawking Radiation paper -- choose to study "eternal" black holes that never grow or shrink, so that the field-values are always recoverable everywhere outside the horizon. However, because the black hole doesn't grow, you have to play some tricks to deal with matter that crosses the horizon. Those tricks lead to the negative-energy particles in Hawking's paper and in many popularizations of Hawking Radiation. In a more realistic model, one would let the black hole grow or shrink, and do away with the need for negative energy altogether, although it would not have been tractable for Hawking to take that more realistic approach in the fancifully named "Black hole explosions" paper of 1974, https://www.nature.com/articles/248030a0 ).
Let's condense the point made above: we cannot reconstruct the full past of a black hole that forms by gravitational collapse of matter. (This gives rise to the black hole uniqueness theorems and in particular https://en.wikipedia.org/wiki/No-hair_theorem ). Without black hole evaporation, we can still predict the full future.
If we add black hole evaporation via thermal Hawking Radiation, we have a new problem that breaks the future predictability as well. Black holes at every time in their history from initial collapse to final evaporation emit Hawking quanta fully determined by their no-hair parameters[1]. In a typical black hole, the mass parameter is the driving term. If one starts with an initial values-surface just before strong gravity appears, then the very next (future) values-surface probably has Hawking quanta. The spectrum of the Hawking quanta is statistical: it is, in quantum field theory terms, a mixed state. But the spectrum of all the quanta in the fields just before strong gravity arises is a pure state. In more relaxed terms, we have full knowledge of the pure state, but we can only talk in terms of statistics for the mixed state.
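The pure-vs-mixed distinction above has a simple numerical fingerprint: the purity Tr(ρ²), which is 1 for a pure state and strictly less for a mixed one. A toy sketch (the two-level states are my own illustrative choice):

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): 1 for a pure state, < 1 for a mixed state."""
    return np.trace(rho @ rho).real

# Pure state: an equal superposition with definite phase relations,
# like the full field state just before strong gravity appears.
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(psi, psi.conj())

# Mixed state: a 50/50 statistical mixture of the same basis states,
# like a thermal Hawking spectrum -- probabilities, no phases.
rho_mixed = np.diag([0.5, 0.5])

print(purity(rho_pure))   # 1.0
print(purity(rho_mixed))  # 0.5
```

Both states give identical measurement statistics in the computational basis; only the purity (equivalently, the off-diagonal terms of ρ) records that the first still carries full phase information.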
The problem persists across the whole of the future spacetime: a Hawking quantum can fly off to infinity, and for realistic fields (e.g. the standard model), it may interact with other matter at arbitrarily large distances from the black hole. (Hawking Radiation was initially modelled with all matter represented as a non-interacting scalar field; the field-values of the Hawking scalars propagate to infinity, but don't really matter all that much in the model. But if a small-mass black hole emits an electron-positron pair, the former could fly off and meet a proton some time in the future, and probably we would want to know about a proton gas being neutralized with the result that it may begin to collapse gravitationally, whereas in the absence of Hawking electrons, it likely would not. Although the initial model was very restricted, these sorts of implications were almost immediately clear: large scale effects can be triggered by Hawking radiation, and as Hawking radiation is inherently probabilistic, we have a cosmic Schroedinger's Cat problem.)
So, back to your words:
> cryptographic mixing function
Hawking radiation converts a pure state into a mixed state. A cryptographic mixing function converts a pure state into a pure state in a way which is hard to trace.
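That distinction can be made concrete in a few lines. A toy sketch (my own construction, not from the paper) contrasting a reversible unitary "scramble" with the partial trace that yields a thermal mixed state:

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): 1 for a pure state, strictly less for a mixed one."""
    return np.trace(rho @ rho).real

# (1) A "cryptographic mixing function": a random unitary scramble.
# Hard to invert by inspection, but reversible -- purity is preserved.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                    # QR yields a unitary matrix
psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                              # a pure state
rho = np.outer(psi, psi.conj())
rho_scrambled = U @ rho @ U.conj().T
print(purity(rho_scrambled))              # 1.0: scrambled, but still pure

# (2) A caricature of thermal Hawking emission: discard the half of an
# entangled pair that fell behind the horizon (a partial trace).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_pair = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)
rho_out = np.einsum('ikjk->ij', rho_pair)  # trace out the infalling qubit
print(purity(rho_out))                     # 0.5: the escaping quantum is mixed
```

Case (1) is what a cryptographic mixing function does: pure in, pure out, merely hard to trace. Case (2) is what thermal Hawking radiation does to the exterior observer: pure in, mixed out.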
Now, back to this article. Hawking et al. decided to break the no-hair theorem, and to decorate black holes in such a way that you can still recover the past of a (never-evaporating, always-growing, no Hawking radiation) whole spacetime from a values-surface on which there is already strong gravity. Additionally, the same mechanism allows one to recover the whole future of the spacetime from a values-surface on which there is strong gravity (and thus Hawking radiation). The downside is that one has to have the full set of values on the fields with strong gravity, and those will (under the idea in the OP paper) include extremely low energy "soft hair" particles (the OP paper does not decide whether "soft hair" is just photons, or may be the whole set of standard model particles; as with the original 1976 paper, Hawking and his coauthors consider a restricted representation of all the matter in the spacetime).
So in a way, what they are doing is introducing a "cryptographic mixing function" to avoid producing a mixed state. You get determinism everywhere (instead of determinism before strong gravity, and probability after) in initial-values formalisms, by doing away with the no-hair theorem (which raises questions about the uniqueness of theoretical black hole models like Schwarzschild and Kerr).
It is an interesting idea that deserves further study (and will get it), but it is too early to make bets on whether it will be fully successful at repairing the "damage" that strong gravity does to the EFT.
Moreover, it is not an answer to the question, "what happens in strong gravity", and in particular does not prevent the formation of a gravitational singularity inside a black hole. It also has nothing to say about what happens at extremely high energies (much higher than the electroweak scale) in the early hot, dense universe.
However, just making the EFT work in a wider variety of spacetimes is a fine goal!
--
[1] The Hawking radiation when a black hole initially forms by gravitational collapse of matter is pretty extreme and is relevant to the early black hole and its immediate environment. It's hard enough to take into account that the difficulty gets its own name: the backreaction problem. The gravitational backreaction (much less matter interactions) of hairs produced at young black holes is not mentioned in the OP paper by Hawking et al. :/
It’s funny how we talk of information “not being lost” since it’s emitted as radiation... would we be able to decipher anything? If not, it’s still lost.
What’s news is that it isn’t all lost, that the starting state has any influence at all on what comes out. I am only an enthusiastic amateur at this, but I get the impression this is as surprising as the discovery of Hawking Radiation in the first place.
No, that's emitting random particles (or antiparticles) from pairs created near the horizon, where one falls in and the other escapes.
That's not getting the information that falls in out.
In many ways I would argue that Einstein would be a more apt comparison. Einstein based most of his fundamental musings on identifying issues you see when combining different fields and then using thought-experiments to think about those issues. Hawking similarly operated through thought-experiments, trying to piece together different concepts from different fields (though he focused far more on cosmology).
Einstein was a brilliant scientist (obviously), but people often over-complicate his work. In many ways, the real beauty of Einstein's work was just how simple and fundamental it was -- and how un-intuitive the final conclusions are. Newton is a whole different ballgame (even though he was wrong about absolute space and time -- but that conclusion took Maxwell's equations to discover).
I think GR is at Newton's level. They say most of physics is very iterative, and if X didn't discover Z, then probably another person Y would have 5-10 yrs later. But this is not true for GR. GR came out of the blue; it wasn't strictly required to explain anything important back then. It was just Einstein sitting down, doing thought experiments about elevators in space, then a huge, incomprehensible (to me) mental leap to manifolds and tensors, and the Einstein equation. You can try this yourself: read his popular GR book (it's excellent), then pick up a GR textbook and read the first chapter, and see if you could get from the thought experiments to constructing the math.
It's hard to compare it to Newton, because Newton also had to invent calculus, but it's up there.
It brings a stupid happy smile to my face every time I think about it.
Sure. Hawking's work, which greatly extended our understanding of black holes, is fascinating and important to our understanding of both the origin of the universe and its ultimate demise. For the question "What was the Universe prior to the Big Bang?", the Hartle-Hawking state is, at present, one of our best answers - likely a singularity of both space and time, meaning that the idea of a boundary for the beginning of time isn't something that actually exists.
Newtonian physics are important in that, at the energy and mass levels at which we experience life, they work out to be close enough to how things actually work as to not be meaningfully distinct.
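The "close enough" claim is easy to quantify with the Lorentz factor; a quick sketch (the speeds are my own illustrative choices):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Special-relativistic time-dilation factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# An airliner at ~250 m/s: the relativistic correction is ~3.5e-13,
# utterly negligible next to any everyday measurement error.
print(lorentz_gamma(250.0) - 1.0)

# At 0.9c the correction dominates and Newtonian mechanics fails badly.
print(lorentz_gamma(0.9 * C))   # ~2.29
```

At everyday speeds the correction is parts in 10^13, which is why Newton's framework survived unchallenged for centuries; it only breaks down visibly near light speed (or, for GR, in strong gravity).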
But they're not actually (the most) correct. Special relativity, general relativity, and quantum mechanics show that we have more correct understandings of physics than Newton's, and certain ideas of Newton's are incompatible with our current understanding. Gravity is an interesting and simple one - with Newtonian physics, the apple falls to earth. However, Galileo would have disagreed with Newton, had they been able to discuss the topic. Leibniz did disagree. Newton was a believer in an absolute frame of reference in the universe, whereas the underpinning of general relativity is that all frames of reference are relative - that if you use the apple as your frame of reference, it is the earth falling towards it. And this wasn't something new that came from Einstein and Hilbert - Galileo recognized this, Leibniz recognized this, etc. Even when Newton was using Galileo's principle of relativity to develop Newtonian physics, he diverged on this fairly central part.
In fact, the work of Einstein and others has shown us that gravity is almost certainly not a force at all, that mass is not attracting mass over a distance.
TLDR: Newtonian physics aren't actually "correct", yet we venerate him because of how important of a body of work they are. Hawking's work on black holes and singularities is important to our understanding of where the universe came from and how it will end.