I've in the past been annoying about saying I think we should just call all password hashes "KDFs", but here's a really good illustration of why I was definitely wrong about that. A KDF is a generally-useful bit of cryptography joinery; a password hash has exactly one job.
Including the username in the hash input gives you guaranteed domain separation between users that you don't get from salts/nonces. It's a generally good idea if you have a hash function with unlimited input size (all modern cryptographic hash functions except bcrypt have unlimited input size).
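As a sketch of the domain-separation idea (the length-prefixing and function name here are my own illustration, not Okta's scheme, and this is a plain digest, not a password hash):

```python
import hashlib

def per_user_digest(username: str, password: str) -> bytes:
    # Length-prefix each field so ("ab", "c") and ("a", "bc") can't
    # produce the same input; requires a hash with unlimited input size.
    h = hashlib.sha256()
    for field in (username.encode(), password.encode()):
        h.update(len(field).to_bytes(8, "big"))
        h.update(field)
    return h.digest()
```

Without the length prefixes, concatenation alone would let different (username, password) pairs collide on identical input bytes.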
I'm kind of baffled how they came to use bcrypt for this. Bcrypt is not exactly subtle about only supporting 72 bytes of input. And this is at a company that provides auth as a service; I've got to imagine they had multiple engineers who knew this (I guess not working on that code). Hell, I know this, and I've only used bcrypt twice and I'm nowhere near a security/crypto guy.
The naming overlap between the two is bad, so the industry has tried to move towards naming the two differently. Password hashing functions are not ideal KDFs, even though a particular primitive may be secure for use as a KDF. That's a root of some of the confusion.
Also I'm not sure the average developer understands the distinction.
> Also I'm not sure the average developer understands the distinction.
I'm an average developer. I'm not sure that I understand exactly. What should I be reading, or what can you tell me? Thank you!
I remember :-): https://news.ycombinator.com/item?id=42899432
Product - We need to provide service availability even if the AD is down.
Engineer - OK, maybe we can store the credentials in a cache.
Security - Oh, in that case make sure everything in the cache is hashed properly with the recommended Bcrypt algorithm.
Engineer - We got approval from security, we're in a much safer zone, let's deliver and get a win.
In addition to (true) KDFs, people often want an HKDF (HMAC-based Key Derivation Function) or a hierarchical deterministic key derivation function (HDK).
I'm wondering if Okta was inspired by those.
I feel gross calling a function that just blatantly ignores part of its input a hash, much less a password hash. It's like calling a rock a fish, because they're both in water, despite the lack of swimming. In any case, a hash that ignores some of its input is certainly not a cryptographically secure hash, so why is it being used for crypto?
The analogy is something like creating a hash map whose insert function computes the slot for the key and unconditionally puts the value there instead of checking whether the keys are the same during a collision. No amount of tinkering with the hash function fixes this problem. The algorithm is wrong. A hash map should survive and be correct even when given a hash function that always returns 4.
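A minimal sketch of that invariant (a hypothetical toy map, not any real implementation): even with a hash function that always returns 4, comparing keys inside the bucket keeps the map correct, just slow.

```python
class TinyHashMap:
    """Toy separate-chaining map; correctness doesn't depend on the hash."""
    def __init__(self, nbuckets: int = 8, hash_fn=hash):
        self.buckets = [[] for _ in range(nbuckets)]
        self.hash_fn = hash_fn

    def insert(self, key, value):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:  # compare the actual keys on collision
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        return default
```

Drop the `k == key` comparison in `insert` and any two colliding keys silently overwrite each other, which is the analogue of the bcrypt cache-key bug.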
It's valid to assume "it will never happen" for 128 bits or more (if the hash function isn't broken), since the chance of a random collision is astronomically small, but a collision in 64 bits is within the realm of possibility (about a 40% chance of hitting a dupe among 2^32 items).
Think about it, that's what hashing passwords relies on. We don't store a plaintext password for a final check if the password hash matches, we count on a collision being basically impossible.
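The birthday-bound arithmetic behind those numbers can be sketched as follows (a standard approximation, not an exact count):

```python
import math

def collision_probability(bits: int, n_items: int) -> float:
    # Birthday approximation: p ~= 1 - exp(-n^2 / 2^(bits+1))
    # expm1 keeps precision when the probability is tiny.
    return -math.expm1(-(n_items ** 2) / float(2 ** (bits + 1)))

# 64-bit digests: roughly a 40% chance of a duplicate among 2^32 items.
# 128-bit digests: astronomically small for the same 2^32 items.
```

This is why "we never compare the plaintext" is fine at 128+ bits but reckless at 64.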
A hashmap is different, because it's using a much weaker hash function with far fewer security guarantees.
Plus, you're assuming the original values are even kept around for comparison. The cache key likely just mapped to something simple like a boolean or status flag.
Now, the data structure they're using for a cache will use some sort of hash table, likely in memory. So maybe they've got the 192-bit bcrypt "key", and then that's hashed again, perhaps well or perhaps badly (e.g. C++ really likes using the identity function, so hash(12345) = 12345). Then a modulo is applied to find the slot in an index, and we go groping about to look for the key + value pair. That part the API probably took care of, so even if the bucket index is only 6 bits, the full 192-bit key was checked. But the original data (userid:username:password) is not compared, only that 192-bit cache key.
Poor API design can make it easier for other contributing factors (checking cache here, but could also be not running load tests, not fuzzing, human error, etc.) to cause incidents.
I'm glad to see this come out, plus the comparison of which libraries handle out-of-bounds conditions with errors versus which "fix up" the input and fail silently.
I co-maintain a rate-limiting library that had some similar rough edges, where it wouldn't always be obvious that you were doing it wrong. (For example: limiting the IP of your reverse proxy rather than the end user, or the inverse: blindly accepting any X-Forwarded-For header, including those potentially set by a malicious user.) A couple years back, I spent some time adding in runtime checks that detect those kinds of issues and log a warning. Since then, we've had a significant reduction in the amount of not-a-bug reports and, I assume, significantly fewer users with incorrect configurations.
72 is the max length of id, username, and password combined. If that combination is over 72, then failure and the cache key would not have been created. So, no, the attacker would not need to guess only one character of a password.
The API was designed to generate a hash for a password (knowledge factor), and for performance and practical reasons a limit was picked (72). If someone knows the first 72 characters of your password, the probability is a lot higher that they have the remaining characters too.
While the smaller mistake here, in my opinion, was not knowing the full implementation details of a library, the bigger mistake was trying to use the library to generate a hash of publicly available/visible information.
There's nothing wrong with a limit. The problem is that the library silently does the wrong thing when the limit is breached, rather than failing loudly.
As a mistake, it's fine. Everyone writes up things like that. But defending it as an affirmatively good decision is wild.
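A minimal sketch of that silent failure mode (the 72-byte cutoff and the userId + username + password layout are from the incident; the helper itself is hypothetical):

```python
def cache_key_input(user_id: str, username: str, password: str) -> bytes:
    # bcrypt-style behavior: silently keep only the first 72 bytes.
    return f"{user_id}{username}{password}".encode()[:72]

# If user_id + username alone reach 72 bytes, the password never
# contributes to the hash input at all:
a = cache_key_input("00u" + "x" * 20, "a" * 50, "correct horse")
b = cache_key_input("00u" + "x" * 20, "a" * 50, "battery staple")
# a == b, so the resulting bcrypt hashes would match too.
```

Raising an error on over-length input instead of slicing would have made this a loud failure at write time rather than an authentication bypass at read time.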
```
(defmethod derive-key ((kdf bcrypt) passphrase salt iteration-count key-length)
  (declare (type (simple-array (unsigned-byte 8) (*)) passphrase salt))
  (unless (<= (length passphrase) 72)
    (error 'ironclad-error
           :format-control "PASSPHRASE must be at most 72 bytes long."))
  ...)
```
Also I'm not sure what functionality the authentication cache provides, but their use of bcrypt(userId + username + password) implies the password is kept around somewhere, which is not the best practice.
OT. Has Argon2 basically overtaken Bcrypt in password hashing in recent years?
That depends on how exactly it was used. If it was simply used to check if previous authentication was successful (without the value containing information who it was successful for) then single long password could be used to authenticate as anyone.
Only if everyone uses the same long prefix for password.
It's great to hear that Zig covered both cases. However, I'd still prefer the opposite behavior: a safe (without truncation) default `bcrypt()` and the unsafe function with the explicit name `bcryptWithTruncation()`.
My opinion is based on the assumption that the majority of the users will go with the `bcrypt()` option. Having AI "helpers" might make this statistic even worse.
Do you happen to know Zig team's reasoning behind this design choice? I'm really curious.
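For what it's worth, a sketch of that safe-by-default shape in Python (with a stand-in for the actual bcrypt core, so everything here is purely illustrative):

```python
import hashlib

def _bcrypt_core(password: bytes) -> bytes:
    # Stand-in for the real bcrypt primitive, just for illustration.
    return hashlib.sha256(password).digest()

def bcrypt_hash(password: bytes) -> bytes:
    # Safe default: refuse oversized input instead of truncating it.
    if len(password) > 72:
        raise ValueError("password exceeds bcrypt's 72-byte limit")
    return _bcrypt_core(password)

def bcrypt_hash_with_truncation(password: bytes) -> bytes:
    # Explicit opt-in, e.g. for interop with implementations that truncate.
    return _bcrypt_core(password[:72])
```

The point is purely about which behavior gets the short, tempting name: the scary one should carry the longer, self-describing name.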
These choices are documented in the function's docstring, but they're not obvious, nor do they seem to be encoded in a custom version identifier.
`bcryptWithTruncation()` is great for applications entirely written in Zig, but can create hashes that would not verify with other implementations.
The documentation of these functions is very explicit about the difference.
The verification function includes a `silently_truncate_password` option that is also pretty explicit.
Also neither seems to warn about the NUL issue.
[0] I assume for compatibility purposes, but it still seems very dubious.
If you don't see the mistake, Google 'yaml.safe_load'.
Is there a reason I might be missing?
Hash functions themselves are general purpose and don't protect against low entropy inputs (low entropy passwords). They also don't protect against rainbow tables (pre-calculated digests for common or popular passwords). For password hashing you want something slow and something with unique entropy for each user's password to prevent rainbow attacks.
It doesn't solve the problem of weak passwords, but it's the best that can be done with weak passwords. The only improvement is to enforce strong passwords.
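Both properties can be sketched with Python's stdlib (the scrypt parameters are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: bytes) -> tuple[bytes, bytes]:
    # Per-user random salt defeats rainbow tables; scrypt's cost
    # parameters make brute force slow (values here are illustrative).
    salt = os.urandom(16)
    digest = hashlib.scrypt(password, salt=salt, n=2 ** 14, r=8, p=1,
                            maxmem=2 ** 26)
    return salt, digest

def verify_password(password: bytes, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password, salt=salt, n=2 ** 14, r=8, p=1,
                               maxmem=2 ** 26)
    # Constant-time comparison avoids leaking prefix matches via timing.
    return hmac.compare_digest(candidate, digest)
```

Because the salt is random per user, two users with the same weak password still end up with different stored digests.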
> SHA-2 family of hashes was designed to be fast. BCrypt was designed to be slow.
Slow == harder to brute-force == more secure.
The node crypto module is essentially an API that offloads crypto work to OpenSSL. If we dig into OpenSSL, they won't support bcrypt. Bcrypt won't be supported by OpenSSL because of reasons to do with standardisation. https://github.com/openssl/openssl/issues/5323
Since bcrypt is not a "standardised" algorithm, it makes me wonder why Okta used it, at all?
I remember in uni, studying cryptography for application development, that even back in 2013 it was used and recommended, but not standardised. It says a lot that 12 years on it still hasn't been.
Similarly, a developer I worked with once claimed that CRC32 was sufficient verification because CRC32s changed so drastically depending on the data that they were difficult to forge. He was surprised to find out not only is it trivial to update a CRC32, but also to determine the CRC polynomial itself from very few samples.
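The linearity that makes CRC32 forgeable is easy to demonstrate: for equal-length messages, CRC32 distributes over the XOR of any three inputs (the affine constant cancels out):

```python
import zlib

a = b"hello world!"
b = b"goodbye mars"
c = b"x" * len(a)

# Byte-wise XOR of the three equal-length messages.
xored = bytes(x ^ y ^ z for x, y, z in zip(a, b, c))

# crc32(a ^ b ^ c) == crc32(a) ^ crc32(b) ^ crc32(c)
# This affine structure lets an attacker solve linear equations for the
# bytes needed to force any target CRC, so it's useless against forgery.
assert zlib.crc32(xored) == zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
```

CRCs are excellent at catching random bit flips; that is a completely different job from resisting a deliberate adversary.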
I've tried to stay positive and explain that while it will fool nearly everyone, the technology they used is usually recognizable to exactly the type of people who would want to bypass it for their own gain (or knowledge, assuming white-hat types poking around). I usually put a positive spin on it: coming up with something that looks secure was a great idea, but the type of people who will exploit something like what they built recognize these patterns easily (and now that AI is around, you can even make them feel better by noting that there is software built to recognize these patterns).
[I'm assuming the usual definition of salt, where it is known by the attacker... a pepper would be fine]
How does a company whose only job is security screw that up so badly?
One of the points of the article is that documentation isn’t enough: one cannot allow callers to misuse one’s API.
While I don't have any answers to this, I've realized that it's an ideal showcase of why fuzz testing is useful.
On the other hand, I'm not a security developer at Okta.
Silently truncating the data is about the worst way to deal with it from a security standpoint. No idea why that decision was made back in the day.
https://gist.github.com/neontuna/dffd0452d09a0861106c0a46669...
bcrypt::non_truncating_hash()
https://docs.rs/bcrypt/latest/bcrypt/
Funnily, TFA later also suggests that such a function should exist...
amusingly, the python "library" is just a thin wrapper around the same rust library.
protip: a lot of cryptography primitives actually aren't that complicated in terms of the code itself (and often can be quite elegant, compact and pleasing to the eye). if it's important, probably worth just reading it.
it's what people wrap them with or the systems they build that get messy!
About solutions: by default, Django hashes the password with only a salt. I'm not sure why it would be valuable to combine user_id + username + password; I've always assumed that using salt + password was the best practice.
cache_key = sha(sha(id + username) + bcrypt(pass))
with sha256 or something. Are there any security issues with that? I'm a "newb" in this area, so I'm genuinely curious about the flaws of the naive approach.
Weird take. Usernames are often chosen by the user. Less so in the corporate world, but definitely not unheard of.
https://www.boehringer-ingelheim.com/media-stories/press-rel... has a camilla.krogh_lauritzen@boehringer-ingelheim.com at 48 characters, for example.
- MUST be non-reversible, including against tricks like "rainbow tables"
- should be somewhat expensive to discourage just trying all possible passwords against a (leaked) hash
KDF is a key derivation function. The value will be used as a key in, say, AES. The important properties are:
- should distribute entropy as well as possible, across the required width of output bits
- non-reversibility is less important, as the derived key shouldn't be stored anywhere
- may or may not want artificially inflated cost to discourage cracking
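The contrast can be sketched with the stdlib: PBKDF2 is deliberately slow for password storage, while an HKDF-style extract-and-expand (per RFC 5869; this is my own minimal version, not a vetted implementation) is fast and just spreads entropy:

```python
import hashlib
import hmac
import os

# Password hash: deliberately slow, per-user salt (iteration count illustrative).
salt = os.urandom(16)
pw_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 extract-then-expand; fast, distributes entropy across output.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract
    okm, block = b"", b""
    for i in range((length + 31) // 32):                # expand
        block = hmac.new(prk, block + info + bytes([i + 1]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Derive a 32-byte key (e.g. for AES-256) from high-entropy input material.
aes_key = hkdf_sha256(os.urandom(32), os.urandom(16), b"aes-key", 32)
```

Feed a low-entropy password into the fast HKDF and you get none of the brute-force resistance the password-hash bullets above call for; that's the whole distinction.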
There's either (1) nobody competent enough there to know (which is likely not true; I had a pentester friend recently join, and she is very good), or, more likely, (2) management doesn't care and/or doesn't give enough authority to IT security personnel.
As long as clients don't have any better options, Okta will stay this way.
Yes, this is very true.
Also, some companies realize that they could screw up royally because they do not have the proper knowledge, and authentication is not a core business of theirs.
I can understand them. I also use mail systems I am not that happy with, but I have this comforting idea that if they have a problem, 3B people are waiting together with me for it to be solved, and that's the kind of pressure that helps.
(It's still got the truncates-at-72 problem with PASSWORD_BCRYPT, though.)
Don't invent your own cryptographic hacks. Use an existing KDF designed by professionals.