The infuriating thing is that this isn't necessary for CLI tooling. The reason this approach is taken is that you need a way to get the token to a local process even if the user is doing authentication in a browser. This can be avoided by having the process listen on localhost, and then have the login flow redirect to localhost (including the token) on successful completion.
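A minimal sketch of that loopback listener in Python (names are mine; a real CLI would also send and verify a `state` parameter and then exchange the code for tokens):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class RedirectHandler(BaseHTTPRequestHandler):
    """Catches the browser's redirect back to the CLI."""

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        # In the authorization-code flow the redirect carries a one-time
        # "code", which the CLI then exchanges for the actual tokens.
        self.server.auth_code = params.get("code", [None])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"You can close this tab and return to the CLI.")

    def log_message(self, *args):
        pass  # keep the terminal quiet

def listen_for_redirect():
    # Port 0 asks the OS for any free port; the CLI embeds the chosen
    # port in the redirect_uri it sends to the provider.
    server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
    server.auth_code = None
    thread = threading.Thread(target=server.handle_request, daemon=True)
    thread.start()
    return server, thread
```

The CLI would then open the browser at the provider's authorize URL with `redirect_uri=http://127.0.0.1:<port>/callback` and read `server.auth_code` once the redirect lands.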
Unfortunately this doesn't work for CLI tools on a remote machine (like, say, VS Code Remote over SSH). The browser redirect to localhost won't work because the CLI tool isn't on localhost. Haven't seen anything that handles this; I'll try to bring it up with the OpenSSH folks.
Setting up an SSH tunnel would be possible, but a major pain, as every source/destination combination would need its own port, and every sign-in would have to specify that port number.
Compared to the current system, which prints a URL in the terminal that I just need to click, it would be a major usability regression.
For consumers, my suggestion is that federation providers (Auth0, GitHub, Google, etc.) review and human-approve applications that ask users for authorization.
In my experience, tools don't see a difference between a 409 and a disconnect. They just see "error, need to reauth" (Docker, cough).
In the case of CLI app authorization (where you are proving that the refresh + access tokens are being retrieved on the same device that issued the request), the CLI could generate a local key, store it in the TPM/keychain, and then in the browser you could prove that you have access to that same key.
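To make the shape of that concrete, here's a toy challenge-response. Big caveat: I'm using HMAC with a random secret purely so it runs on the standard library; the real thing would be an asymmetric keypair generated inside the TPM/keychain, so the server only ever sees the public half and no shared secret exists:

```python
import hashlib
import hmac
import secrets

# CAVEAT: a real implementation uses an asymmetric key that never leaves
# the TPM/keychain; HMAC over a random secret is a stand-in here, used
# only to show the challenge-response shape on the standard library.

def cli_generate_key() -> bytes:
    """Stand-in for TPM key generation; the CLI stores this locally."""
    return secrets.token_bytes(32)

def server_challenge() -> bytes:
    """Fresh nonce the authorization server hands to the browser page."""
    return secrets.token_bytes(16)

def cli_prove(key: bytes, challenge: bytes) -> str:
    """The CLI proves possession of the key by MACing the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def server_verify(key: bytes, challenge: bytes, proof: str) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof)
```

A proof computed for one challenge is useless for any other, which is what ties the token issuance to the device that actually holds the key.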
For devices, direct attestation could authenticate the device making the request (e.g. as a legitimate MacBook Pro, or something).
Of course, this depends on services choosing to implement such flows, and when you introduce a requirement for a TPM or similar, plus multiple cryptographic steps, implementors are likely to get lazy and just do something that works but is insecure (or they implement the flows badly with home-rolled crypto).
This is, as they say, a "known issue". Bearer tokens were defined in RFC 6750 and the thought was that more types of tokens would follow, including some that bound tokens and clients.
It took a while.
RFC 8705, mentioned elsewhere in the thread, is one approach.
Another is DPoP, which was discussed at Identiverse in 2022. Here's a presentation about the approach: https://www.youtube.com/watch?v=cot40RRoPsc
Here's the current draft: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop-... (not sure how close they are to finishing; haven't seen much activity on the mailing list about it lately, though).
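For a feel of what a DPoP proof looks like, here's a sketch that builds the header and payload the draft specifies (`typ`, `jwk`, `htm`, `htu`, `iat`, `jti`); the actual signing step with the key matching the `jwk` is omitted:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dpop_proof_unsigned(method: str, url: str, public_jwk: dict) -> str:
    """Header and payload of a DPoP proof JWT, per the draft.

    A real proof is then signed (e.g. ES256) with the private key
    matching `public_jwk`; the signature segment is omitted here.
    """
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    payload = {
        "htm": method,             # HTTP method the proof covers
        "htu": url,                # target URI, minus query/fragment
        "iat": int(time.time()),   # issued-at, checked for freshness
        "jti": str(uuid.uuid4()),  # unique ID so proofs can't be replayed
    }
    return (b64url(json.dumps(header).encode()) + "." +
            b64url(json.dumps(payload).encode()))
```

The client sends the signed version as a `DPoP` header on each request, alongside `Authorization: DPoP <access_token>`, so a stolen token alone is useless without the private key.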
The attacker may well be using a MacBook Pro, a real one. With a TPM.
Cross-device WebAuthn is the better solution here, but it's still vulnerable to the OAuth phishing called out here.
Their most interesting suggestion is to use the Hybrid transport of CTAP2.2 (not published yet) to perform cross device authorization in a secure way.
This involves proving proximity over Bluetooth Low Energy plus a key exchange; the WebAuthn flow then happens over an encrypted channel through a TURN server.
Problem is that your CLI tool now needs access to BLE. We're not there yet.
It seems to me that you're on evilsite.com and you get a screen to authorize your AWS account, which evilsite.com then gets and can log in to your AWS account. In that case, however, I'm aware that I'm browsing evilsite.com, so what's the issue?
It's like evilsite.com requesting OAuth permissions to my Twitter account, no? We don't need the RFC for that, it's just what OAuth normally does, and you're supposed to be careful who you give permission to, no?
I think this is what the AWS Client VPN client for Ubuntu does. So AWS does have the method in their tool set somewhere, though I imagine it's owned by an entirely different team than their CLI.
I'm confused, isn't having the device listen on localhost necessary for the device authorization grant flow? What's the alternative (that, apparently, people are doing but shouldn't be)?
https://embracethered.com/blog/posts/2022/device-code-phishi...
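To the question above: the device authorization grant (RFC 8628) doesn't involve a localhost listener at all. The device displays a user code, and then just polls the token endpoint over an outbound connection until the user approves in a browser. A sketch of the polling loop, with `request_token` as a stand-in for the real HTTP POST:

```python
import time

def poll_for_token(request_token, device_code: str,
                   interval: float = 5, max_attempts: int = 12) -> str:
    """RFC 8628 polling loop: the device repeatedly asks the token
    endpoint whether the user has finished approving in a browser.

    `request_token` is a stub for the real HTTP POST to the token
    endpoint; it returns a dict like {"error": "authorization_pending"}
    or {"access_token": "..."}.
    """
    for _ in range(max_attempts):
        resp = request_token(device_code)
        if "access_token" in resp:
            return resp["access_token"]
        if resp.get("error") == "slow_down":
            interval += 5  # server asked the device to back off
        elif resp.get("error") != "authorization_pending":
            raise RuntimeError(resp.get("error", "unknown error"))
        time.sleep(interval)
    raise TimeoutError("user never completed authorization")
```

The only channels are the user typing the code at the verification URL and the device polling outbound, which is exactly why it works behind SSH/NAT and also why the phishing in the linked article works.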