True indeed.
> Are you saying that they're polling all the hijacked accounts at a high frequency to detect trades they could intercept?
Yes.
I have to admit, the "milliseconds before" part was just wrong; in trying to oversimplify for attention, I got it wrong.
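To make the "polling at high frequency" part concrete, here is a rough sketch of what the attacker's side looks like. Everything here is hypothetical: `get_trade_offers` is a stand-in stub for a real Steam Web API call made with the stolen key, and the offer data is invented.

```python
# With a stolen web API key, the attacker can repeatedly fetch the victim's
# incoming trade offers and react within seconds of a new one appearing.

def get_trade_offers(api_key: str) -> list[dict]:
    # Stub: a real bot would query the Steam Web API with the stolen key
    # in a tight loop, once per victim account.
    return [{"offer_id": 101, "state": "active"}]

def watch(api_key: str, seen: set) -> list[dict]:
    # One polling pass: return only offers we have not seen before.
    new = [o for o in get_trade_offers(api_key) if o["offer_id"] not in seen]
    seen.update(o["offer_id"] for o in new)
    # On a hit, a real attack bot would decline the legit offer and
    # immediately re-send a cloned copy from its own account.
    return new

seen: set = set()
print(watch("stolen-key", seen))  # first pass: the new offer
print(watch("stolen-key", seen))  # second pass: nothing new
```

Run across many hijacked accounts in parallel, this is cheap: one API request per account per second is all the "high frequency" it takes.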
> it's "a trade with foo (whom you've had as a friend for 20 days), where you give a xyzzy and receive a quux". Are the users just blindly approving trades worth thousands without even verifying?
Often, the attackers focus on swapping out trade offers initiated by a 3rd party, e.g. a trusted middleman marketplace site that requests the item you want to sell (with nothing in return). 3rd party sites take a lot of the blame for "stolen items" because people don't even understand how this scam works.
Here, the few seconds pass between the 3rd party sending the trade offer and the compromised user accepting it, not between the user accepting the trade in the browser and on their phone. Since the phished user is not aware of the 3rd party site's account in the first place (it is not one of their friends), it is very easy to clone all the observable account details and make a scam bot account look like the 3rd party site's account. There are characteristics that cannot be spoofed, but an ordinary user - not even aware of being phished, or that someone who can do such things has control over the account - will not notice them.
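A minimal sketch of why the clone works on ordinary users: display name and avatar are trivially copied, while the one thing a clone cannot copy is the account's SteamID64. The field names and IDs below are invented for illustration; a client that pinned the real bot's published ID would catch the swap, but users never see this field.

```python
# Hypothetical offer records: the clone copies every visible detail,
# but its SteamID64 necessarily differs from the real bot's.

EXPECTED_BOT_STEAMID = "76561198000000001"  # ID the real 3rd-party site publishes

def offer_is_from_expected_bot(offer: dict) -> bool:
    # Name and avatar are spoofable; the SteamID64 is not.
    return offer.get("partner_steamid") == EXPECTED_BOT_STEAMID

legit = {"partner_name": "TradeSite Bot #4", "partner_steamid": "76561198000000001"}
clone = {"partner_name": "TradeSite Bot #4", "partner_steamid": "76561199123456789"}

print(offer_is_from_expected_bot(legit))  # True
print(offer_is_from_expected_bot(clone))  # False
```

The trade UI shows the spoofable details prominently and buries the unspoofable one, which is exactly what the scam exploits.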
Now, you could argue that preventing 3rd party sites from existing could also solve this issue. However, I see a valid use case in these 3rd party sites. The goal of my suggestion is to counter these attacks with minimal effort without disabling automated trading capabilities completely:
> A captcha would be just be minor irritation for the attacker, and anyone who can be phished into logging in can be phished to approve the key generation.
I agree that it would only make the attack harder, not impossible, but considering the usual workflow I still see this as an improvement - as a first step.
The phishing is usually done by setting up a "legit" website, e.g. for skin trading, skin gambling, or any other purpose that requires authentication via Steam. This "legit" website then spawns a malicious "Login with Steam" OpenID credentials popup rendered inside (!) the web page: the site itself draws (matching your OS and browser) a perfectly fine-looking browser popup window inside the page, spoofing the browser UI itself (a so-called browser-in-the-browser attack). Laypeople get fooled easily by this; they sometimes do not even question why the window cannot be dragged out of the page, if they try at all. These web apps are built to top-tier quality because the profit potential is obviously huge. At this point there is probably even a framework being sold to recreate such pages easily.
What I'm trying to say is: Getting the user to login is easy because it's part of the legit workflow. The API key generation - not so much.
Basically, everything I'm asking for is to make it hard to automatically turn a normal user account into a bot account used to automate trade offers. I know there is a valid use case for automated bot accounts and automated trade offers. But automating the action that enables such functionality for an account should be prevented at all costs, and it should have to be explicitly requested by the user, including a warning.
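The suggested gate could look something like this on the server side. This is purely a sketch of the idea, not Steam's actual implementation; all names and the confirmation mechanism are assumptions.

```python
# Sketch: generating a web API key (which enables automated trade-offer
# handling) should not be possible from a plain logged-in session alone;
# it should require an explicitly confirmed action plus a warning.

class ConfirmationRequired(Exception):
    pass

def generate_api_key(session: dict) -> str:
    # A phished session cookie by itself is not enough: require an
    # out-of-band confirmation (e.g. a mobile authenticator prompt
    # that displays a warning about what the key enables).
    if not session.get("key_generation_confirmed_out_of_band"):
        raise ConfirmationRequired(
            "Warning: a web API key lets third parties act on your trades. "
            "Confirm this request in your mobile authenticator."
        )
    return "NEW-API-KEY"  # placeholder for a freshly issued key

try:
    generate_api_key({"logged_in": True})
except ConfirmationRequired as e:
    print("blocked:", e)
```

The point is that the phishing flow only yields a session, and a session alone can no longer be automatically upgraded into bot-account capabilities.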
You are probably saying something similar with this statement, which I agree with:
> the bigger problem here is that the API keys are unscoped
TL;DR: Considering the effort involved, I think preventing automated Steam web API key generation is the best short-term way to make this attack a lot harder for the scammers.