So, even ignoring the issue with cold caches, how about the two megabytes of code that the browser may need to go through to render the page?
What on earth is twitter doing on a given page that needs two megabytes of code?*
* As of March 2012. I'm sure this will look silly 20 years from now, when there are 2-gigabyte pages.
According to his charts, most of Twitter's page is JS/CSS, presumably set to cache heavily. Very little of it is data. Once you've done the first page load, Twitter's pages will load quite fast and efficiently. While it is quite a lot of JS, this is good design, not bad.
This means that to get the requested information on first load, the user has to wait for a 2MB download, followed by another roundtrip Ajax request to Twitter's tweet-retrieval API. This results in the perception of a slow page load, even though the tweet itself comes down the pipe in only 100ms.
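A rough back-of-envelope shows why the first load feels so slow. The 2 MB payload and the 100 ms tweet fetch come from the discussion above; the bandwidths are assumed for illustration:

```javascript
// Perceived first-load time = time to download the JS/CSS bundle on a
// cold cache, plus the follow-up Ajax roundtrip for the tweet itself.
function firstLoadSeconds(payloadBytes, bitsPerSecond, ajaxRoundtripMs) {
  const downloadSeconds = (payloadBytes * 8) / bitsPerSecond;
  return downloadSeconds + ajaxRoundtripMs / 1000;
}

const payload = 2 * 1024 * 1024; // ~2 MB of JS/CSS, cold cache

console.log(firstLoadSeconds(payload, 10e6, 100).toFixed(1));  // 10 Mbps cable: ~1.8 s
console.log(firstLoadSeconds(payload, 1.5e6, 100).toFixed(1)); // 1.5 Mbps DSL: ~11.3 s
```

Even on a decent connection, the tweet's own 100 ms is a rounding error next to the bundle download.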
In my view, Twitter is an excellent case study in some of the pitfalls of thick-client application design. Mainly that ignoring the first-time-use case will result in widespread perception among users that your app is slow, even when every page view after the first is lightning fast.
Google and Yahoo have been preaching for years that even fractions of a second of page load time have a big impact on how likely users are to give up and go somewhere else. Caching doesn't help anyone if the user doesn't come back. It would really be better (for Twitter) to make that first, uncached page load the fastest one of all.
Well of course they worry about it. I can ditch Google and go to Yahoo, DDG, Bing (hell, Lycos might still be around) if the Google homepage takes too long to load. Obviously the same goes for Yahoo. Neither of them produces or hosts the content (search results, news) that you're going there for.
That's not quite the same situation as Twitter, GMail, Facebook and whatnot. Where else are you going to go? Your mail is in GMail, your friend's tweet is on Twitter, their wall is on Facebook, and they are only there. The "instant load or I'm out of here" doesn't apply so much to them. It really only affects new users who have no other attachment to the service.
I doubt it's much of a problem for Twitter.
EDIT: There are still many devices and locations that do not have high-speed access. Just because a page loads (relatively) quickly when you are sitting on top of an OC-192 link does not mean it isn't dog slow for, say, someone in Rwanda.
In fact, web devs should review their sites through a network simulator as part of the QA process.
It typically takes anywhere from 4-8 seconds before the first tweet appears, and another 2-4 seconds before the page is fully loaded. Twitter is quite definitely the slowest website I visit on a regular basis.
(Never mind that at least for me, tweets are increasingly often turning up as search engine hits)
Today PG posted some HN stats: http://news.ycombinator.com/item?id=3669947 It seems that HN is still running on one server!
Don't fool yourself. Twitter is bad design, not good.
The point of the OP is that a user is loading 2 megs to view a single tweet. A medium that is literally defined by its brevity should not need to send 2 megs to your browser so that you can read 140 characters.
I used Twitter not very long ago (since then I've visited a forum, a piece of docs, left my browser idle for half an hour, and visited HN); now I've just looked in about:cache, and twitter/twimg have only a handful of profile pictures.
Worse than that, he seemed to lack some basic understanding of the modern web. The Twitter webpage is not simply a "page" in the traditional sense; it is one instance of the Twitter client app, which happens to be written in Javascript and runs in a browser.
2MB for a rich client app? Doesn't sound like overkill to me.
I wonder why twitter doesn't simply serve up a static "fake" page for direct links. That could easily weigh in under 100kb and display instantly. Then make all links on that page boot up the "real" twitter.
That way the long wait is at least mitigated until the user actually starts to interact with the page (which in 99% of cases he'll never do because he only wanted to read that one tweet).
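The "fake page" idea above can be sketched in a few lines: serve the tweet itself as static HTML, then pull in the heavy client app only on the user's first interaction. The bundle path and the `TwitterApp` global here are invented for illustration:

```javascript
// Upgrade the lightweight static page to the full client app lazily.
let appBooted = false;

function bootRealApp() {
  if (appBooted) return; // only ever load the bundle once
  appBooted = true;
  const script = document.createElement('script');
  script.src = '/js/full-client-app.js'; // the hypothetical 2 MB bundle
  script.onload = () => window.TwitterApp.hydrate(document.body);
  document.head.appendChild(script);
}

// Any first interaction triggers the upgrade; readers who only wanted
// the one tweet never pay the cost.
if (typeof document !== 'undefined') {
  ['click', 'keydown', 'touchstart'].forEach((type) =>
    document.addEventListener(type, bootRealApp, { once: true })
  );
}
```

The tweet is readable instantly, and the 2 MB download happens (if at all) after the content is already on screen.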
But the problem here (according to the author and I tend to agree with him) is that one needs to load too much junk for too little actual content; whether the junk in question is cached or not.
Now, ideally, a tweet's 140 characters shouldn't weigh 2 MB, but Twitter users need tools to act upon those tweets: re-tweet, follow a link, etc., and those tools come with a cost.
A relatively high cost but one we can afford, with the help of caching.
Granted, 2MB is pretty big, but the only people I see complaining are techies; the complaints are valid but do not affect Twitter's bottom line.
Twitter obviously caches 99% of this stuff. I absolutely agree that 2.2MB on one page seems insane, but that doesn't match up with the experience every time you load the page. And I imagine it's pre-caching code that runs on other pages as well, so you most likely only ever take that download hit once.
Yes, they should bring that amount down. No, it probably isn't going to be a priority.
Twitter pages don't even seem particularly complex functionally or graphically, so why should the payload be so large?
I never follow Twitter links because of exactly this (well and the fact that they trap you with forwards). It is a terribly negative experience. Even if I had all of those files cached the parsing and execution of all of that JavaScript is far from instantaneous.
Same deal with TechCrunch -- stopped visiting because it is such a script-laden monstrosity that it seriously diminishes the experience.
It doesn't have all the features, but easily makes up for that by the fact that you can actually click around as much as you like without your browser getting all slumpy from loading huge pages or doing javascript.
You'd expect caching to help, but there's a lot of truth in the jQuery tax article[1]: even if you got the code cached, executing it all takes a significant amount of time, and the sluggishness is made worse by the fact that during this time your CPU is busy, unlike with data transfers which at most cost some memory.
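A crude way to see that execution tax, independent of caching: time how long it takes just to evaluate a big chunk of (trivial) code, which the browser must do on every page view whether the bytes came from the network or the cache. The source size here is made up:

```javascript
// Caching saves the download, not the parse/execute cost.
const bigSource = 'var x = 0;' + 'x += 1;'.repeat(200000); // ~1.4 MB of JS

const start = Date.now();
eval(bigSource); // parse + execute, cache or no cache
const elapsed = Date.now() - start;

console.log(`evaluated ${bigSource.length} chars in ${elapsed} ms`);
```

And unlike a download, that time is spent with the CPU pegged, so the whole page feels sluggish while it runs.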
I don't use the m.twitter.com site all the time, but I switch often enough whenever I get too annoyed by default Twitter's slowness.
The only real downside (for me) is that you can't click through to a full resolution version of a profile picture. Otherwise all the basic features are in there.
[1] http://samsaffron.com/archive/2012/02/17/stop-paying-your-jq...
Talking about 140 chars is beside the point: a tweet is a 140-(Unicode-)character handle on a (mini social) graph, and that is how we should look at it.
In the particular page he's talking about[1], there is profile info for 10 users (the status owner + 9 retweeters) embedded within the page, so when you click a profile thumbnail you get the profile modal with some basic info, a "Follow" button, etc.
381 KB out of those 450 belong to his own background image [2].
In other words, twitter does a very good job at making their service fast and speedy.
1. https://twitter.com/#!/bos31337/status/172156922491969536
2. https://twimg0-a.akamaihd.net/profile_background_images/9706...
User-defined background images can also be up to 800 KB or so on Twitter, right?
It really adds up for a large thread.
The warm-cache situation only matters to people that are on Twitter all the time, so this possibly has the effect of keeping existing users happy while raising a barrier to new users (or pushing infrequent users to completely abandon the system).
Secondly, you're only looking at download size (which matters mainly because it implies a certain amount of time spent downloading those resources). Since Twitter's JS platform does all of the HTML rendering on the client side, that has to be taken into account too (this wasn't a concern in the past, when rendering was mostly done on the server side). If the page takes 3s ~ 5s to load even with a warm cache and a tiny ajax request, then it sort of defeats the purpose (unless your purpose is to relieve server load rather than in-browser load times).
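To make "rendering on the client side" concrete: the server ships JSON and the browser builds the HTML from it on every view. A toy version of that step (field names are invented, and a real implementation would need to HTML-escape the values):

```javascript
// Client-side render step: JSON in, HTML string out.
function renderTweet(tweet) {
  return (
    '<div class="tweet">' +
    `<strong>@${tweet.user}</strong> ` +
    `<span>${tweet.text}</span>` +
    '</div>'
  );
}

console.log(renderTweet({ user: 'bos31337', text: 'hello world' }));
```

Multiply that work (plus template logic, event binding, etc.) across a timeline of tweets and it adds up, even when every byte came from the cache.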
Their front-end performance situation has sadly never gotten better… and has definitely gotten worse.
We started to build the visualization project anyway, and it got us a little bit of fame and a lot of consulting work: http://twistori.com
And just under a year later, we published a book on front-end performance: http://jsrocks.com
But I still wish we could have fixed their damn front end. Every time somebody tweets a link to a tweet and it opens up as a web page on my iPhone and I have to watch a blank screen for 10 freaking seconds before the tweet actually shows up, I die a little inside.
This story amuses & horrifies people who believe that startups are more flexible, responsive, & sane than big companies. At this time, Twitter the company was definitely smaller than 30 people… around 15 if memory serves, but I'm not sure. It was definitely small, either way. Meanwhile Twitter the site was growing in popularity by leaps & bounds every second. I'm sure the bandwidth saved alone would have paid back our consulting fees in a matter of a few weeks, or less.
This is why people use the twitter app instead of the site.
Remember dialup back in the day? When a T1 would have been overkill for a single household? Look at those numbers from today's perspective: a T1 is only 1.5 Mbps, that's just 192 KBps.
And even if you have large bandwidth, the latency that comes into play is another factor, not to mention folks opening 50 tabs which causes delays in opening and rendering a new one.
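For a sense of scale, here are rough transfer times for the ~2 MB page at those line rates (56k dialup vs a 1.544 Mbps T1), ignoring protocol overhead and the latency mentioned above:

```javascript
// Seconds to move a payload over a given line rate.
const MB = 1024 * 1024;
const secondsToTransfer = (bytes, bitsPerSecond) => (bytes * 8) / bitsPerSecond;

console.log(secondsToTransfer(2 * MB, 56e3).toFixed(0));    // dialup: ~300 s
console.log(secondsToTransfer(2 * MB, 1.544e6).toFixed(0)); // T1: ~11 s
```

Five minutes on dialup, and still over ten seconds on the T1 that used to feel like overkill.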