To put some numbers on it: a good cryptographic hash should produce random-looking output, with no way to predictably influence that output and no visible correlation between input and output. Put in A, get out "random number" B, except that every time you put in the same A, you get out the same B.
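You can see both halves of that description with Python's hashlib: the same input always gives the same digest, while a tiny change to the input gives a digest with no visible relationship to the first.

```python
import hashlib

# Same input, same output, every time.
a1 = hashlib.sha256(b"A").hexdigest()
a2 = hashlib.sha256(b"A").hexdigest()
print(a1 == a2)  # True

# A one-character change produces an unrelated-looking digest.
b = hashlib.sha256(b"B").hexdigest()
print(a1)
print(b)
```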
If you have a hash that matches that description, then your only hope is brute force, so you look at the size of the hash. If you're breaking SHA-256, for example, that's 256 "random" bits.
If you have a specific hash that you want to generate, it will take you on average 2^255 attempts. (On average you have to try half the possibilities before you find a match.) If your computer can do a billion hashes per nanosecond then that means you'll spend on average 2^255 / (10^9 * 10^9) seconds, or about 10^41 times the current age of the universe.
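The arithmetic is easy to sanity-check (using the numbers from the text, and ~4.35e17 seconds for the age of the universe):

```python
# Back-of-envelope check of the preimage figure above.
attempts = 2**255                  # average tries for a specific SHA-256 hash
rate = 10**9 * 10**9               # hypothetical: a billion hashes per nanosecond
seconds = attempts / rate
age_of_universe = 4.35e17          # ~13.8 billion years, in seconds
print(seconds / age_of_universe)   # on the order of 10^41 universe-ages
```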
If you just want to find any collision, then the birthday paradox comes into play, and on average you'll need to try roughly the square root of the number of possibilities before you find a collision. At a billion per nanosecond that's 2^128 / (10^9 * 10^9) seconds, or a mere 780 times the current age of the universe. Plus you have to figure out how to store all those intermediate results.
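To make the birthday effect concrete, here's a toy birthday attack on a deliberately crippled hash (my own choice for the example: SHA-256 truncated to 24 bits, so the search finishes instantly). With 2^24 possible outputs, a collision typically shows up after a few thousand attempts, on the order of 2^12, not 2^23:

```python
import hashlib

def truncated_hash(data: bytes, bits: int = 24) -> int:
    """First `bits` bits of SHA-256 -- a deliberately tiny stand-in hash."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - bits)

# Birthday attack: hash distinct inputs until two of them collide.
seen = {}
i = 0
while True:
    h = truncated_hash(str(i).encode())
    if h in seen:
        break
    seen[h] = i
    i += 1

print(f"inputs {seen[h]} and {i} collide, found after {i + 1} hashes")
```

The `seen` dictionary is exactly the "store all those intermediate results" cost: sqrt-of-the-space attempts means sqrt-of-the-space storage.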
If MD5 had this property it would still be good. Not as good as SHA-256, because it's only 128 bits, but plenty sufficient. 128 bits with the hypothetical billion hashes per nanosecond gives you 390 times the current age of the universe to find a specific hash. A collision is easier, at 18 seconds, but a billion hashes per nanosecond is also orders of magnitude faster than you'll realistically be able to do, and you'll need 2^64 * 128 bits = 300 exabytes of storage.
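Same sanity check for the 128-bit numbers:

```python
rate = 10**9 * 10**9             # same hypothetical billion hashes per nanosecond
age_of_universe = 4.35e17        # seconds

# Specific-hash (preimage) search on a 128-bit hash:
print(2**127 / rate / age_of_universe)   # ~390 universe-ages

# Birthday collision search:
print(2**64 / rate)                      # ~18 seconds

# Storage for the intermediate digests: 2^64 entries of 128 bits (16 bytes).
print(2**64 * 16 / 1e18)                 # ~295 exabytes
```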
The problem with MD5 is that it does not have this "random number" property. If you're clever, you can manipulate the input with somewhat predictable effects on the output, which means you can generate a collision far more cheaply than brute force would require.
My understanding is that more modern hashes are believed (or hoped) to have this property, but it's unproven. More than that, it's unknown whether any hash can actually have this property, or whether it's a theoretically impossible goal.
This is from 1998, but the relevant parts still apply - https://www.schneier.com/essays/archives/1998/05/the_crypto_...
>Cryptographic algorithms have a way of degrading over time. It's a situation that most techies aren't used to: Compression algorithms don't compress less as the years go by, and sorting algorithms don't sort slower. But encryption algorithms get easier to break; something that sufficed three years ago might not today.
>Cryptographic algorithms are all vulnerable to brute force--trying every possible encryption key, systematically searching for hash-function collisions, factoring the large composite number, and so forth--and brute force gets easier with time. A 56-bit key was long enough in the mid-1970s; today that can be pitifully small. In 1977, Martin Gardner wrote that 129-digit numbers would never be factored; in 1994, one was.
>Aside from brute force, cryptographic algorithms can be attacked with more subtle (and more powerful) techniques. In the early 1990s, the academic community discovered differential and linear cryptanalysis, and many symmetric encryption algorithms were broken. Similarly, the factoring community discovered the number-field sieve, which affected the security of public-key cryptosystems.
DES was standardized in the 1970s; now it can be brute-forced in a few days (with the right hardware).
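The DES numbers work the same way as the hash numbers above. DES has a 56-bit key; the throughput below is my own rough assumption for dedicated cracking hardware, not a measured figure:

```python
# DES exhaustive key search, assuming ~10^11 keys/second of dedicated
# hardware (an assumed rate for illustration, not a benchmark).
keyspace = 2**56
rate = 10**11
days = keyspace / rate / 86400
print(days)   # ~8 days worst case, so ~4 days on average
```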