All messages that you choose to transmit follow some probability distribution with each message having finite, nonzero probability. This is what the author gets wrong, on a very fundamental level. If this were not true, your messages would require an infinite number of bits to transmit.
Studies of the security of hash functions are based on comparisons to an "ideal" hash function called a random oracle. The idea behind a random oracle is that you can't reverse it: it answers each new input with an independent, uniformly random output, but remembers what it answered for inputs it has seen before.
This idealized hash function has been extensively studied and still suffers from all of the flaws that the author mentions in this article. And yet people prove rather strong properties of cryptographic systems under the random oracle assumption. So the author's criticisms, fortunately, have no bearing on real cryptography, because the good systems are already designed to be immune to such attacks.
I'm on mobile so I can't really give a play by play but the article is really based on nothing more than bad math.
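A random oracle is simple enough to sketch in a few lines. Here's a toy model (my own illustration, not anything from the article): memoize uniformly random answers per input. There is nothing to "reverse", because the output is unrelated to the input except through the memo table.

```python
import os

class RandomOracle:
    """Toy random oracle: a fresh uniformly random answer for each new
    input, and the remembered answer for repeated inputs."""

    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.memo: dict[bytes, bytes] = {}

    def query(self, msg: bytes) -> bytes:
        if msg not in self.memo:
            # New input: answer with fresh random bytes and remember them.
            self.memo[msg] = os.urandom(self.out_len)
        return self.memo[msg]

oracle = RandomOracle()
a = oracle.query(b"hello")
print(a == oracle.query(b"hello"))  # True: consistent on repeated inputs
```

Distinct inputs get independent random outputs, so (with overwhelming probability) they differ; that independence is exactly what makes the oracle "ideal".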
Hash functions. With a good hash function, the information about the actual distribution of the inputs is erased. That the hashes are uniformly distributed does not imply that the inputs are uniformly distributed, which in turn invalidates any conclusion about the distribution of the inputs drawn after applying the transformation.
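You can see this erasure directly with a real hash. In this sketch (my example, using SHA-256 as a stand-in for "a good hash function") the inputs are deliberately non-uniform, yet the digests look uniform, so nothing about the input distribution can be read back off the outputs:

```python
import hashlib

# Deliberately non-uniform inputs: perfect squares, which cluster toward
# small values.
inputs = [x * x for x in range(10_000)]

# Look at the first byte of each SHA-256 digest.
first_bytes = [hashlib.sha256(str(v).encode()).digest()[0] for v in inputs]

# All 256 possible byte values show up with roughly equal frequency
# (about 10_000 / 256 ~= 39 each), even though the inputs were anything
# but uniform.
counts = [first_bytes.count(b) for b in range(256)]
print(min(counts), max(counts))
```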
Uninformative priors. This seems to be more of a philosophical problem I don't really know much about. But as far as I can tell if one tries to quantify a lack of information in a naive way using probability distributions, then one gains information, for example that the value is uniformly distributed over some range, and this information has consequences like specific distributions of derived values.
So the attempt to quantify a lack of information turns into a self-defeating endeavor, not necessarily because of any inherent information but because of the information injected during the modeling process.
But this is well-known, and because it's well-known, no sane cryptographic system cares that hash outputs leak information that way. For example, look at PBKDF2, HMAC, or various asymmetric key authentication schemes.
As far as I know, there's no publicly known one-way cryptographic function for which there's a provable minimum level of effort to invert.
On the hash function front, where the author seems concerned about something, perhaps the lesson is that filling up hash tables to the 90% level before expansion is pushing too hard.[1] At 70% fill, the hash function doesn't have to be near-perfect, just not awful.
[1] http://accidentallyquadratic.tumblr.com/post/153545455987/ru...
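The load-factor effect is easy to measure with a toy table. This is a sketch of my own (not the hash table from the linked post), using linear probing: the cost per insert is modest at 70% full and much worse at 90%.

```python
import random

def avg_probes_at(load: float) -> float:
    """Average probes per insert over the last 1000 inserts while
    filling a linear-probing table to the given load factor."""
    random.seed(42)
    size = 1 << 16
    table = [None] * size
    target = int(size * load)
    costs = []
    for n in range(target):
        i = random.getrandbits(64) % size  # stand-in for a decent hash
        probes = 1
        while table[i] is not None:        # probe until an empty slot
            i = (i + 1) % size
            probes += 1
        table[i] = True
        if n >= target - 1000:             # sample cost near the target load
            costs.append(probes)
    return sum(costs) / len(costs)

print(avg_probes_at(0.7))  # a handful of probes per insert
print(avg_probes_at(0.9))  # much worse: cost blows up as the table fills
```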
Zero information is more like not knowing what probability distribution a variable comes from. Rather than hiding "what value does x take", we can hide "what probability distribution does x come from". That's one level up. (Then you could ask what the likelihood of x having a given probability distribution is, and so on -- zero information is having none of this information, all the way up.)
I think what you are saying when you use the phrase "zero information" is really "zero knowledge".
The article seems to want to say that "zero information" ought to be a probability distribution such that for all functions f, f(X) is the same zero-information distribution, i.e. from nothing we get nothing. The point is that no such X exists, because every probability distribution encodes some amount of information. What we want is some function F such that for every X in a large class of probability distributions, F(X) is uniform. Which is exactly why x^2 is a terrible hash function.
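Squaring fails that test in an easily checked way: modulo a table size, x^2 can only ever land in the quadratic-residue buckets, so a large fraction of buckets is unreachable for *any* input (my illustration, with a power-of-two table size):

```python
# x^2 mod m is periodic in x with period m, so range(m) covers every
# possible bucket it can produce.
m = 256
reachable = {(x * x) % m for x in range(m)}
print(len(reachable), "of", m, "buckets reachable")
```

Well under half the buckets are reachable, no matter how the inputs are distributed; that's the opposite of mapping a large class of input distributions to uniform.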
Non-linear transform of a random variable:
https://www.youtube.com/watch?v=hQjk4ClpuUk
Linear transform of a random variable:
A good hash function ensures that, whatever information you have about the input, you have no more information upon seeing the hash. If you believe x is equiprobable over some range, seeing H(x) should cause you to continue to believe it's equiprobable over that range. If you believe x is the square of some random variable y which is equiprobable in some range, seeing H(x) won't convince you that x is equiprobable!
If your prior for x is equiprobable over the space of Microsoft Word documents confessing to high treason and 0 probability elsewhere, seeing H(x) won't tell you which Word document it is, but it certainly won't cause you to believe that it might be an MP3, either.
Does he know _why_ c is constant? Has he mastered general relativity well enough to be 100% sure that it can't vary in time or space? Or that our current understanding of physics is complete and there absolutely can't be something we don't know about the speed of light?
https://en.m.wikipedia.org/wiki/Time-variation_of_fundamenta... https://en.m.wikipedia.org/wiki/Variable_speed_of_light
But that is not true: we have -some- information concerning the value of x, namely that it is equiprobable.
In any probability class, you will learn that you can't have a uniform distribution on an infinite set.
Edit: As pointed out, the correct assumption is not unboundedness, but having infinite measure (since then no normalization constant exists). I thought they were equivalent, but a simple counterexample is [0,1] \cup Z.
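The normalization failure can be written out in one line. A uniform density on a set S is a constant c, and it has to integrate to 1:

```latex
\int_S c \,\mathrm{d}x \;=\; c \cdot \mu(S) \;=\; 1
\quad\Longrightarrow\quad c = \frac{1}{\mu(S)}.
% If \mu(S) = \infty, no such c exists: c = 0 gives total mass 0,
% and any c > 0 gives infinite total mass.
```

And the counterexample works because \mu([0,1] \cup \mathbb{Z}) = 1 < \infty: the set is unbounded, yet a uniform distribution on it exists.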
If I multiply x by 0, the distribution is now just 0.
If I take abs(x) now it is positive.
Are these confusing to anyone?
But if we move it out of math, it's easier to understand: if you have some guesses about x, and I tell you nothing, you have the same guesses about x. You don't stop having guesses.
i(x) = x // We know i is uniformly distributed just as x is - preserve "unknowability"
f(x) = x^2 // We know f has higher probability density on [0,1] than on [1,2]
g(x) = 1 // We know g is always 1.
Just because your inputs are random, doesn't mean your output is - the implementation matters.
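The three functions above can be simulated in a few lines, assuming (as the parent seems to) that x is uniform, here on [0, 2]:

```python
import random

random.seed(0)
xs = [random.uniform(0, 2) for _ in range(100_000)]  # x ~ Uniform[0, 2]

i_vals = xs                    # i(x) = x   : still uniform
f_vals = [x * x for x in xs]   # f(x) = x^2 : mass piles up near 0
g_vals = [1 for _ in xs]       # g(x) = 1   : constant, no randomness left

def frac_in(vals, lo, hi):
    """Fraction of samples landing in [lo, hi)."""
    return sum(lo <= v < hi for v in vals) / len(vals)

print(frac_in(i_vals, 0, 1))  # ~0.5, same as frac_in(i_vals, 1, 2)
print(frac_in(f_vals, 0, 1))  # ~0.5
print(frac_in(f_vals, 1, 2))  # ~0.21: x^2 lands in [1,2) only when x is in [1, sqrt(2))
```

Same "random" input, three very different output distributions: the implementation matters.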
An observer X has a prior estimate of the distribution d1 of variable x.
The source material yields a distribution d2 of variable x.
Then for that observer, the information gained from the source is something like the divergence between d1 and d2: |d1 - d2| in some suitable sense.
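One standard way to make that "|d1 - d2|" precise is relative entropy (KL divergence). A minimal sketch for discrete distributions, with made-up numbers:

```python
import math

def kl_bits(d2: dict, d1: dict) -> float:
    """D_KL(d2 || d1) in bits: information gained when the prior d1 is
    updated to d2. Assumes d1 is nonzero wherever d2 is."""
    return sum(p * math.log2(p / d1[x]) for x, p in d2.items() if p > 0)

prior     = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}  # observer's d1
posterior = {"a": 0.70, "b": 0.10, "c": 0.10, "d": 0.10}  # source's d2

print(kl_bits(posterior, prior))  # positive: the source was informative
print(kl_bits(prior, prior))      # 0.0: no update, no information gained
```

If the source only ever confirms the observer's prior (d2 = d1), the divergence is zero, which is exactly the "leaks no information" property the comment below asks of an ideal hash.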
The ideal hash function leaks no information. In practice, we can only make one leak very little over a long time.
Actually, the speed of light in a vacuum is an upper bound; the actual speed of light in a medium can be substantially slower. https://en.wikipedia.org/wiki/Slow_light
Some cosmological theories do not require the speed of light to be constant and fixed, and replace that axiom with requirements on the relationship of the speed of light with other physical parameters.
> … it is simply not possible for light to be still, or even propagate at a different speed in the same medium.
So he's already acknowledged the fact that the speed of light depends on the medium. His point, though, is that light must still travel at this speed (even though the speed itself depends on the medium).