The real trick is that TwitterBot and you see different pages. For TwitterBot, which always clearly identifies itself (and gives other signals, like coming from Twitter's network infrastructure), the flow is t.co -> attacker.site -> legitimate.site, so the card (the process is technically called unfurling) shows the details of the legitimate site, including the coveted legitimate domain name. For you, attacker.site detects that you're not TwitterBot and does whatever phishing it needs to do. Of course, if you check the domain name in your browser's address bar, it won't work... but let's be honest, only a fraction of people here do that, never mind the general public.
Others ask why TwitterBot follows redirects at all, and it seems everyone here forgot that marketers love their Bit.ly and Sprinklr links so much that Twitter has to make a concession here (and no, you can't just whitelist them, because some companies use their own shortlink domains, like t.co, fb.me, g.co, msft.it, redd.it, and youtu.be).
Why not just directly serve the redirect target as seen by TwitterBot? Because a) marketers and analytics, and b) services like Branch (app.link) and Adjust redirect users differently depending on their specific device (Windows vs macOS vs Linux (or even a specific distro!) vs iOS vs Android).
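The cloaking described above boils down to a User-Agent check. Here's a minimal sketch (hypothetical handler and domain names; a real attacker would probably also check Twitter's IP ranges, which this ignores):

```python
# Minimal sketch of the user-agent cloaking described above.
# The handler and domains are made up for illustration.

LEGIT_SITE = "https://legitimate.site/article"   # what the unfurled card should show
PHISH_SITE = "https://attacker.site/phish"       # what human victims should reach

def choose_redirect(user_agent: str) -> str:
    """Return the redirect target for a request hitting attacker.site."""
    if "Twitterbot" in user_agent:
        # Crawler: bounce to the legitimate site so the card
        # displays the coveted legitimate domain.
        return LEGIT_SITE
    # Human visitor: keep them on the phishing page.
    return PHISH_SITE

# Flow: t.co -> attacker.site -> (bot) legitimate.site / (human) attacker.site
```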
Whoever made this decision at Twitter should have a think about themselves.
It wouldn't be terribly hard to build a top-50 list of URL shorteners etc. that covers the vast majority of the traffic.
As far as I know, users cannot view the metrics for t.co links, or am I mistaken about that?
Marketers/users cannot view t.co metrics, but even if they could, they'd want to use their own URL shorteners anyway, I'm sure... so Twitter has to have the t.co previewer follow arbitrarily many redirects.
Does Twitter only unfurl t.co URLs? If so, why would they write separate code for unfurling t.co with ?amp=1 vs without it? And why would Twitter unfurl a t.co link past the first non-t.co URL? I guess that's the vuln, right: that they don't stop after the first non-t.co URL?
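The fix implied here, stopping at the first non-t.co URL, could be sketched like this (a toy redirect table stands in for real HTTP 301/302 responses, and the hostnames are made up):

```python
from urllib.parse import urlparse

# Toy redirect table standing in for real HTTP 301/302 responses.
REDIRECTS = {
    "https://t.co/abc":   "https://bit.ly/xyz",
    "https://bit.ly/xyz": "https://legitimate.site/",
}

def unfurl_target(url: str, stop_after_first_external: bool) -> str:
    """Follow redirects; optionally stop at the first non-t.co URL."""
    seen = set()
    while url in REDIRECTS and url not in seen:  # seen-set guards redirect loops
        seen.add(url)
        nxt = REDIRECTS[url]
        if stop_after_first_external and urlparse(nxt).hostname != "t.co":
            # Preview the first external hop rather than whatever
            # the chain eventually cloaks its way to.
            return nxt
        url = nxt
    return url
```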
Like how banks have that "only type your password if we show your correct profile picture" scheme. It's almost as if embedded tweets could be "signed" by Twitter in a way my browser would register graphically, in a way not known to the server itself (i.e. putting the Twitter logo next to a tweet is easily faked). But if content appearing to be from Twitter were verified instead, I'd see my custom-chosen avatar next to each tweet, showing that both my key and Twitter's key had signed the text.
Maybe not making sense, if anyone wants to play this back clearer go for it :)
(For example, if you had some sort of "signed iframe", the page would probably find a way to show the part from twitter that says "verified" but cover up the part that it's supposed to be actually verifying with something else).
This is the part where I imagined having a custom client side image. That way the server doesn't know what the "verified" image actually looks like. Could be a picture of my face, for example.
It discusses a fascinating point about browser UI: when the browser displays something in an area where a malicious page could render arbitrary pixels, it must establish a visual bridge back to the "trusted zone" (the chrome), providing proof that it is in fact trusted content.
(The author points out that new APIs allowing writing to the entire screen means it's hopeless and we're all doomed though.)
however, this is part of AMP, and so web developers who prefer the "do whatever the fuck you want" aspect of the internet push back against it. the ability to ensure that a web link contains the same content at a future date as it did when it was initially crawled was one of the things we destroyed along with AMP.
Do they? My banks did that years ago, and they also stopped doing it years ago.
>Of the 63 participants whose responses to prior tasks had been verified, we were able to corroborate 60 participants’ responses to the removal of their site-authentication images. 58 of the 60 participants (97%) entered their passwords, despite the removal of the site-authentication image
See https://security.stackexchange.com/a/19801 which summarises https://sites.google.com/site/ianfischercv/emperor.pdf
On Slack you can implement custom unfurling that does more than just show the title/description/image. See the docs here: https://api.slack.com/reference/messaging/link-unfurling. I'm currently building one such custom integration.
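For a rough idea of the shape of such an integration: on receiving a `link_shared` event, you respond with a `chat.unfurl` call mapping each shared URL to a custom attachment. A sketch of building that payload (the field names follow the linked docs, but paraphrased from memory, so verify against them):

```python
# Sketch of answering a Slack `link_shared` event with custom unfurls.
# Event/payload shapes are paraphrased from the linked docs - verify there.

def build_unfurl_payload(event: dict) -> dict:
    """Build the arguments for a chat.unfurl API call from a link_shared event."""
    unfurls = {}
    for link in event["links"]:
        # Map each shared URL to a custom attachment; anything beyond
        # plain title/description/image (fields, buttons, ...) goes here.
        unfurls[link["url"]] = {
            "title": "Custom preview",
            "text": f"Rich unfurl for {link['url']}",
        }
    return {
        "channel": event["channel"],
        "ts": event["message_ts"],
        "unfurls": unfurls,
    }
```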
I've never heard the term before now, but I interpreted it to mean following redirects to get the end page.
> But if you clicked the link, you could just see the link though right. How is this fooling anyone?
It fools you before you click the link. After you do, you're no longer fooled, as long as you pay attention to the URL bar. The obvious problem is people who don't pay attention to the URL bar.
Another problem might be that you're forbidden from viewing certain sites at work: you see a link that goes to news.ycombinator.com, know that's safe, but then land on a forbidden site instead.
Another problem would be browser 0-days. A link to news.ycombinator.com would be safe assuming it hasn't been compromised itself, but a different website might spring a browser 0-day on you.
In many contexts, curl | sh is an alternative to adding some kind of additional repository to install a third party package — and in most package managers this is done as root anyway, with arbitrary pre-install and post-install scripts.
I’m not really sold on how curl | sh (with https) is any less secure than blindly following steps to add a repo.
I used to strongly dislike curl | sh, and if there’s some looming security risk beyond accidentally trusting bad actors who couldn’t be bothered to go to all the effort of setting up a repo then I’d genuinely like to know.
Then since the bot sees the correct site it redirects to the correct site
It's like writing `[google.com](notgoogle.com)` and making out like it's a significant security flaw or new idea.
That's what it has been for me on Facebook, Twitter, Instagram, etc.: their own mangled URL. And no, I don't trust those either.
>So how can we now trust that twitter's shortening links only go to twitter?
They never did.
Edit: Just checked his example, and yes. It looks like the hover link is still a random garbled `t.co/XYZ` and not `uniswap.org`. I'm still right and it's still pretty dumb.
A better example would be to show "google.com" and somehow redirect to "phishing.com"... but that's not really possible without control of "google.com"
It seems to me that the author does not need control over the second domain, just the first and third. But the user will never see the first URL, only the second.
The attacker doesn't need control over uniswap.org or harrydenly.com to make this work.
They only need control of harrydenly.com if they want to serve a phishing page. But as I said above, this is redundant; they could just use that domain to serve the redirection as well. Example below:
* (Twitter bot) phishing.com -> redirects -> fishtanks.com
* The Twitter bot makes the shortened link t.co/aaa (but the preview shows fishtanks.com)
* (User) t.co/aaa -> phishing.com
An open redirect bug in the phished site should allow this scenario:
A) Set up the offending link, redirecting to the phishing.com site.
B) When the Twitter bot visits, redirect back to a safe page on the original site for the summary. I understand Twitter shows either the original URL or the final URL, but doesn't care about phishing.com in the middle.
C) Don't redirect back for non-Twitter traffic, so users end up on phishing.com.
A complex scenario, but perhaps enough to show that redirect bugs also matter.
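The A/B/C steps above can be sketched as the logic running on the phishing domain (hypothetical handler; matching the `Twitterbot` substring stands in for whatever bot detection the attacker actually uses):

```python
# Sketch of the phishing domain's handler in the open-redirect scenario.
# Domains and return values are illustrative, not a real API.

def phishing_site_handler(user_agent: str) -> str:
    """What phishing.com serves, per steps B and C above."""
    if "Twitterbot" in user_agent:
        # Step B: bounce the crawler back to a safe page on the
        # original site, so the summary shows the legit domain.
        return "redirect: https://original.site/safe-page"
    # Step C: real users stay on the phishing page.
    return "serve: phishing page"

# Step A: the tweeted link abuses the original site's open redirect,
# e.g. https://original.site/redirect?url=https://phishing.com/
```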
And earlier there was a Google redirector that was forgotten about and was being used to redirect to phishing sites.