Often, you have load balancers that are SSL endpoints, so the data is decrypted at that point.
You can start to see the problem already. What if there is a different bug, and so a dev starts logging requests somewhere down the line? You accidentally start logging cleartext passwords. Oops. Facebook was fined for this not that long ago.
But if the password is encrypted, then it’s not really an issue, and the black box blob can be forwarded to a login microservice. There, the team decrypting will be on higher alert.
So depending on the structure of various teams, you now have fewer teams that need that kind of security oversight and can move faster.
Smaller blast radius if something goes wrong.
[1] https://twitter.com/sroussey/status/1347688753221931010?s=21
However a better approach to this problem is to not rely on shared secrets. Use public key signature tech to stop worrying about mistakenly logging a secret. If you never had it, you can't lose it.
If you were to log literally every byte of the plaintext traffic when I sign into GitHub (e.g. maybe you're a GitHub ops person), you don't get the ability to sign into GitHub as me. There's a WebAuthn signature step, my signature is authentic, and you can even verify that from your log if you want, but you'd need to make a new signature to sign in, and you can't do that because the key needed to make my signature never left my hands.
Even better, GitHub defuses their liability: as well as a (presumably hashed) password that a hypothetical attacker could break, they hold a public key for me, and learning that public key doesn't help the attacker do anything, at their site or anywhere else. It doesn't even help identify people (unlike the SSH public keys GitHub holds), since WebAuthn public keys are deliberately uncorrelated: you can't match my GitHub key against a Facebook key, for example.
hash(static-pepper, username, password) * 250k
That + tagging the password with "password++" or something means that you're a lot safer against the major issue of leaking a password before it's stored, for example the mistake that definitely happens everywhere of "let me just add request logging, whoops there's everyone's plaintext passwords". You can always search your logs for the 'password++' tag and alert if you find it, and if that does happen at least you know an attacker isn't going to have an easy time extracting a plaintext password - it buys you time.
And if an attacker gets SQLi or whatever and dumps the passwords they're that much harder to crack - you've added hundreds of thousands of iterations of key stretching, and it's totally distributed to clients so you don't even have to worry about it blowing up your db/ auth service CPU.
And it's trivial to implement, which is the really important part. ZKP is a lot more work, but what I described is like 5 extra minutes and pretty trivial.
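The scheme described above is easy to sketch. A minimal version (the pepper value, function names, and use of plain iterated SHA-256 are illustrative; a real client would likely prefer a memory-hard KDF such as Argon2):

```python
import hashlib

STATIC_PEPPER = b"app-wide-pepper"  # illustrative constant baked into the client
ITERATIONS = 250_000                # the key-stretching factor from above

def client_hash(username: str, password: str) -> str:
    """Stretch the password on the client before it ever leaves the device."""
    digest = hashlib.sha256(
        STATIC_PEPPER + username.encode() + password.encode()
    ).digest()
    for _ in range(ITERATIONS - 1):
        digest = hashlib.sha256(digest).digest()
    # Tag the result so it can never be mistaken for a raw password,
    # and so log scanners can grep for and alert on the marker.
    return "password++" + digest.hex()
```

The server then treats this tagged value as the "password" and hashes it again server-side as usual.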
Though this is obviously better from a wire-interception PoV, it means that you can't enforce any password policies, or maintain a list of leaked/bad passwords (e.g. HIBP).
You'd actually end up hashing it twice: once using the salt to go from plaintext to what the server has stored, and then again using the challenge.
It has problems though. The strength of your password hashing would be limited by what the weakest client could do, rather than what the server could do. Asymmetric encryption ends up being simpler.
Making decisions as if you are immune to mistakes is a good way to ensure you prove yourself wrong in the worst way.
Making decisions to prevent things which should be impossible is how you build reliability.
It seemed to work for them, security researchers went to town on it and while they quickly discovered there was a second encryption layer below SSL, they were unable to determine what it was and how to crack it.
IIRC their encryption was never broken, and thanks to that track record, they slowly increased daily spend limits over the mobile app.
Long term, because they were very forward-thinking and they had competent native app developers (as opposed to the competition who struggled for years with mobile web / crossplatform tech), they increased their market share by a lot, now being the largest bank in NL; can't find historical data, but they went from 37% in 2016 to 40% in 2018.
There is a misconception that the responsible disclosure system reflects real security threats, but it unfortunately doesn’t. The areas of expertise in the real world are different, and sticking a bunch of crypto in like that tends to be a case of making your eventual problems more complex, bigger, and harder to find.
I maintained (and still do maintain) it was security through obscurity and a waste of engineering effort that should've been spent on actually hardening the banking API server and migrating it to a modern stack.
I thought it inevitable and indeed - it got cracked twice anyway (despite the use of Arxan, extensive anti-debugging functionality and rewriting the crypto on at least one occasion).
Disagree that hardening the API server is any better. This is the approach common in the US market, and my team has broken everything available there too. Also disagree with insinuations that these banks don't have good, modern stacks. Barclays in particular is great. Way better than any challenger bank.
Lloyds also took a similar approach to Barclays but they did a better job than Barclays did (although Barclays did a great job themselves too) and so we never got around to finishing it before we pivoted to the US market. As far as I know it's still unbroken, although I'm pretty sure my colleagues could easily break it today. We've since developed far more sophisticated reversing techniques.
But indeed as you describe, since 2012 or so, the ING app is formidable and built by amazing people.
Source: I work at ING, in IT, but in a completely different area.
> "The authentication protocol looks well thought out. SSL or TLS is not relied upon; instead, ING uses an extra encryption layer for which the password is negotiated via the SRP protocol. Each mobile device also generates its own profileId and a public/private keypair," Van den Berg notes.
Assuming SRP refers to this https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...
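For intuition, here is a toy sketch of the SRP idea with an absurdly small group so the numbers stay readable (real SRP uses the large safe-prime groups from RFC 5054 and includes safety checks omitted here):

```python
import hashlib
import secrets

N, g = 23, 5  # toy safe prime and generator; real SRP uses 2048-bit+ groups

def H(*parts) -> int:
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

k = H(N, g) % N

# Registration: the server stores (salt, verifier), never the password.
salt = secrets.token_hex(8)
x = H(salt, "correct horse")   # x = H(salt, password), known only client-side
v = pow(g, x, N)               # the verifier

# Login: both sides exchange ephemeral public values A and B.
a, b = secrets.randbelow(10**6) + 1, secrets.randbelow(10**6) + 1
A = pow(g, a, N)
B = (k * v + pow(g, b, N)) % N
u = H(A, B)

# Each side derives the same shared secret; the password never crosses the wire.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
```

Both sides end up with the same secret `S`, which can then key that extra encryption layer.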
- “Remember the password” barely ever works, even on desktop. Since I don't quite log in every day due to being too old for that, I have to redo the process every time—on a machine that I bought with my own money just for myself and intend to protect with both technical means and physical force.
- Somehow copy-pasting passwords from KeepassX/XC doesn't work on Mac, with the shortcut. Not sure if this is a misfeature of Steam, but I have to paste the password to an editor first and then copy out of there into Steam. (Seems though that ‘paste’ in the context menu does work—this might've changed since I first noticed the issue.)
- And of course, the weird variation on 2fa, via email, instead of the good regular TOTP. As is tradition by now, I'm also given the choice of installing yet another app on the phone, which somehow doesn't quite seem to serve my interest.
I also don't have to log in every time I use it; that's not a Steam problem.
My experience is that a lot of this stuff works really well if you're using Steam regularly, and completely falls apart if you use it once a month.
It has to be a bug, or maybe a security feature for accounts of a certain size?
They also strong-arm you into using the app. If you log into a new device (or Steam thinks it's a new device since you cleared cookies) and you don't use their app for 2FA, then the device will not be able to trade or use the market for 7 days. They only waive this restriction if you use their app for 2FA and it has been active for at least 7 days.
It's a bit frustrating since the Community Market/Trading is likely only used by a minority of users, but seemingly a ton of login limitations are imposed because of it.
It's probably because it moves a significant amount of money, between trading cards, CSGO knives, TF2 hats, etc. Of course, nothing comparable to banking systems and general-purpose marketplaces, but I personally think those protections only add to the product.
Fighting with bullshit like this is not what I'm looking for when I want a game, so screw it, if a game needs Steam, I don't need the game.
Source: I used to do customer service for Blizzard and a large part of our work was dealing with accounts compromised by gold sellers.
I reverted to e-mail. I only have free software on my phone, and don't regret that choice.
This has been my experience too. I still check the box every time I log in.
It happens to me only when I keep switching machines (sometimes I play on Linux, sometimes on Windows) => I guess that it's some kind of security check.
If I stick all the time to a single machine then I basically never have to re-login (if I don't stop playing for something like 1 month or longer).
I like Steam, it’s convenient and it works, but I don’t think “being ahead of the curve” is in their dictionary.
This was a LONG time ago when things being secure on the internet wasn't a given to most people.
Is that for browser or client? I had issue with the browser for the past 6 or so years. Every time I bring it up, a few others mention having this issue but not everyone. I think it's an account based issue since it happens on any device I use. It only happens with Steam and no other site.
I also was annoyed by email code thingy until I found this recently.
fun rotateKeys() {
    publishNewKey(key = generateKey(), timestamp = now())
    schedule(::rotateKeys, 1, TimeUnit.HOURS)
}
if `generateKey` and `publishNewKey` take around ~1s then you'll observe exactly this behaviour - the timestamps will start drifting from some original value.

It is (likely) because they use geographically distributed terminating load balancers, perhaps owned by someone else or run in someone else's POP, and are trying to prevent passive collection of passwords.
Edit: oh I see, you think the timeframe of this code goes back to when logging in via http:// was acceptable. Maybe.
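That timestamp drift is easy to simulate with a fake clock (hypothetical names, assuming the generate-and-publish work takes about a second):

```python
WORK_SECONDS = 1  # assumed duration of generateKey() + publishNewKey()
HOUR = 3600

def rotation_timestamps(n: int, start: int = 0) -> list:
    """Timestamps published when the next rotation is scheduled one hour
    after the current one *finishes*, rather than on a fixed cadence."""
    ts, now = [], start
    for _ in range(n):
        ts.append(now)       # publishNewKey(timestamp=now)
        now += WORK_SECONDS  # the rotation work itself
        now += HOUR          # schedule(::rotateKeys, 1 hour)
    return ts

# Drift versus a fixed hourly schedule grows by WORK_SECONDS per rotation:
drift = [t - i * HOUR for i, t in enumerate(rotation_timestamps(5))]
# drift == [0, 1, 2, 3, 4]
```

Scheduling against a fixed epoch (e.g. the top of each hour) would keep the timestamps from drifting.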
Remember also it wasn't so long ago we were talking about things such as POODLE attacks. For all we know, some bad implementation of TLS 1.3 could default to some crappy easy to crack algorithm.
I believe there was a paper (can't find it now) that speculated about the cost to crack a specific TLS setup to be about $10 million USD in processing, going back some years. (I think it was in reference to some half of VPN traffic at the time using the same keys.) If Moore's law still applies in any sense, that cost likely halves every two years and people only really change their passwords if they have to.
Another reason is that it reduces risk server side if you are never handling user passwords - at worst an attacker gets a temporary hash that's valid for a short time, specific to that server. Maybe they can do some harm during that time, but you can ultimately revoke that key and undo the changes to the user's accounts.
This is incomplete. TLS does allow for ciphers that enable Perfect Forward Secrecy (PFS) to prevent this. Those ciphers are not the most commonly used ones, but to describe TLS the way you do implies it's a flaw in TLS.
> Perfect Forward Secrecy (PFS) to prevent this.
Sure, it was simplified. I can't remember exactly what the support was like for PFS. And given it probably requires an additional exchange for DH, I imagine it would have been disabled for resource reasons.
The proposition is that the NSA has a large black budget, and it could plausibly have done the math to unwind DH with the most popular 1024-bit DH primes, and certainly would be able to do this for 512-bit DH.
Nobody does this in 2021. Your browser is using X25519 which is the same concept but with Elliptic curves instead of modular exponentiation of integers.
If Steam were concerned about certain TLS parameters, they could just ensure they never agree the worrying parameters. It wouldn't make any sense to instead bake some other mechanism for login and then trust TLS for everything else.
I trust we all agree that storing cleartext passwords in a database and doing a simple string compare is a problem so I won't rehash that bit.
If a login server is compromised then attackers can harvest cleartext passwords. It's the same class of problem with a reduced attack surface.
There is no good reason to transmit a persistent authentication secret as part of authentication. Just don't do it.
On the backend, it knows the random string it sent to the user, and it has the hashed password in its DB, so it can do the same algorithm and compare the results.
Actually, I've seen this done -worse- elsewhere, where they were actually encrypting the password using a symmetric key. So if you sniffed the traffic and never loaded the website, I guess you'd not know the actual password... but you wouldn't need it; it was as good as the password for the purposes of logging in. If you did load the website, you could still determine what the plaintext password was.
It was really irritating, since I had to figure out what the encryption scheme of a backend app was doing (when I only had access to the frontend code, and the datastore).
On the backend they already have crypto_hash(salt, password), they know the token they sent so they can build the same hash and see if it matches. This way the backend actually never has access to the non-hashed password.
The only inconvenience I can see is that you can't transparently rehash on login on the backend if you decide to migrate to a different, potentially stronger hash algorithm later. But then again, if the worry is that passwords could leak in the backend, using hashes makes that effectively impossible by construction.
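A sketch of that challenge-response flow (names are hypothetical; an HMAC keyed on the one-time token stands in for "build the same hash from the challenge"):

```python
import hashlib
import hmac

def stored_credential(salt: bytes, password: str) -> bytes:
    # What the server keeps in its DB: crypto_hash(salt, password).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def client_response(salt: bytes, password: str, token: bytes) -> bytes:
    # The client re-derives the stored hash locally, then mixes in the
    # one-time token, so neither the password nor the DB hash is sent.
    return hmac.new(token, stored_credential(salt, password), "sha256").digest()

def server_verify(db_hash: bytes, token: bytes, response: bytes) -> bool:
    expected = hmac.new(token, db_hash, "sha256").digest()
    return hmac.compare_digest(expected, response)
```

One caveat: anyone who steals the DB hash itself can compute valid responses without the password, which is part of why schemes like SRP go further.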
I guess nobody gets fired for using RSA. But at the same time doing "serious" crypto in JS always feels icky to me.
Then there is the UX problem that a mechanism like that would have to be implemented at the browser level (and in fact it is: `Authorization: Digest` is mostly what you are proposing), which according to some leads to "ugly and confusing" UI.
This is used e.g. to swap trade offers in realtime, i.e., a trade offer with the actual account is replaced by a trade offer with a bot with a similar looking profile (all set up automatically). All of this is done in the timeframe between the user setting up the trade offer and the actual 2FA mobile confirmation of this trade.
People are being phished like this for years and Valve fails to take the responsibility to implement a simple anti automation measure at the part of API key generation (e.g. email confirmation or captcha).
The monetary damage done to users is probably in the high thousands, if not millions, at this point in time.
I love gaming on Windows & PC and would love to have the PC have a "Big Picture mode" friendly UI, _throughout_ the OS. Some gimmicks I have had to resort to are to set up my PC Sign-In to be _without_ a password and on a _local account_ on my Win10 PC, along with having Steam start in BigPicture on startup. This way I can switch on my PC and have my controller connect to start gaming just like a console; but way better graphics of course :)
It's these tiny affordances that collectively add up to great User Experience features.
What might be some solutions to this? I have yet to see anything standardized for this purpose. Other than, loosely, a 2FA token used for login only, but even then you don't know whether transmission to the endpoint was over secure channels.
But I’m happy to see it got some attention after all!
It's good enough for major financial institutions and available as a service using any number of providers (like Okta, AWS Cognito, Auth0).
I love Valve and Steam but their game launcher client (Steam) lives in the stone age when it comes to use of technologies.
That said, it's far better than the competition so I am still happy, but it is still annoying.
> the login page also sources jQuery version 1.8.3 which was released in November 2012
Wow, that's a prime example of "if it works, don't change it"
Like others have suggested, I get the impression this system is assuming TLS will work and perhaps isn’t trusting the server the password ends up on.
Doesn't that essentially reduce the password's strength? Especially if there are a lot of non-ASCII characters in it...
To avoid admins (or hackers) in enterprise "SSL breaker" boxes from exfiltrating passwords.
Even if it wasn't the main reason, it probably played a role. Some small time admins in education facilities would probably have an easy time with this stuff and wouldn't get caught doing it.
It has a kind of homey feel thanks to the owl and the little + symbols in the background wallpaper
Oh how far we've come. /s
User client requests a public key from Steam's servers (rotated each hour), and sends passwords encrypted with it.
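That flow can be sketched with the `cryptography` package (key size, padding choice, and the in-process "server" are assumptions for the demo; the exact padding Steam uses may differ):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: this hour's keypair; only the public half is handed to clients.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = server_key.public_key()

# Client side: encrypt the password under the rotating public key.
ciphertext = public_key.encrypt(b"hunter2", OAEP)

# Server side: only the holder of this hour's private key can recover it.
recovered = server_key.decrypt(ciphertext, OAEP)
```

Rotating the keypair hourly limits how long a compromised private key stays useful.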