http://thenextweb.com/twitter/2011/01/06/new-years-eve-set-a...
I don't believe Twitter is trivial, but I think they're perceived as more complex than they are - WordPress + Reddit + Heroku only have like 10% of that number combined. Apples to oranges, but those 3 companies combined would be doing more of everything per second than Twitter.
Twitter's fundamental problem also is a harder one to scale than something like Heroku or Wordpress. For those hosted sites, you can shard easily by host, so that each of the 100,000 Heroku-hosted sites can get its own EC2 instance(s) and behave pretty much independently. You can't do that when the point of your site is that any action might instantly be broadcast to thousands of followers. High-fanout writes are not an easy problem to solve.
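To make the fanout point concrete, here's a minimal sketch (hypothetical names, in-memory only) of fanout-on-write, where a single post turns into one timeline write per follower:

```python
from collections import defaultdict

# Hypothetical in-memory model of fanout-on-write: each tweet is copied
# into every follower's timeline at post time, so a user with N
# followers turns one write into N writes.
followers = defaultdict(set)   # author -> set of follower ids
timelines = defaultdict(list)  # user -> list of tweet ids, oldest first

def follow(follower, author):
    followers[author].add(follower)

def post_tweet(author, tweet_id):
    writes = 0
    for f in followers[author]:
        timelines[f].append(tweet_id)  # one write per follower
        writes += 1
    return writes  # cost grows linearly with follower count

# A "celebrity" with 3 followers costs 3 timeline writes per tweet;
# at Twitter scale the same single post can mean millions of writes.
for fan in ("alice", "bob", "carol"):
    follow(fan, "celebrity")
print(post_tweet("celebrity", "t1"))  # 3
```

The alternative, fanout-on-read (build each timeline at query time), trades that write amplification for expensive reads, which is exactly the tension that makes this hard.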
------------------
Color me unimpressed.
At some point, I was collecting 40GB/day of financial data (and that's after bzip2ing it; probably 200GB/day before compression). This was done on hardware costing $30K (two equivalent machines with 4GB RAM, each with 20*1TB disks in a RAID configuration, set up as a hot backup), and the operation was run (coded, supervised, administered) by 2 people.
I'm extrapolating from your numbers: let's say you have 70GB over 14 days = 5GB/day. Let's assume Twitter has 100GB/day of tweet text (which, incidentally, means ~1 billion tweets per day, which I highly doubt, as it took them a few years to get to the 1 billion mark, and last I heard they were at fewer than 100 million tweets/day).
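For reference, the back-of-envelope behind that ~1 billion figure, assuming ~100 bytes of text per tweet (the per-tweet size is my assumption, not stated above):

```python
# Back-of-envelope check of the parent's numbers: 100 GB/day of raw
# tweet text at an assumed ~100 bytes per tweet works out to ~1 billion
# tweets/day, an order of magnitude above the ~100M/day figure cited.
bytes_per_day = 100 * 10**9   # 100 GB/day (assumed)
bytes_per_tweet = 100         # ~100 bytes of text per tweet (assumed)
tweets_per_day = bytes_per_day // bytes_per_tweet
print(tweets_per_day)         # 1000000000
```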
Then, using prices from 2 years ago (when they did their last infrastructure revision), what you do is buy 20 servers with 8GB of memory each (for, say, $5K each), plus a little redundancy, and store all the latest tweets in memory, along with the most popular users' older tweets; everything else goes on disk. Throw in cheap web front-ends that don't even need a local disk, load balancing, and a gigabit ethernet backplane. You're still under $200K in equipment.
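A rough sketch of that layout, with made-up names and a toy eviction policy, just to show the shape of shard-by-user with a hot set in RAM:

```python
import hashlib

# Toy model of the layout described above: shard tweets across 20
# hypothetical storage servers by hashing the author id, keep the most
# recent tweets of each shard in memory, spill the rest to "disk".
NUM_SHARDS = 20

def shard_for(user_id: str) -> int:
    # Stable hash so the same user always routes to the same server.
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

shards = [{"memory": [], "disk": []} for _ in range(NUM_SHARDS)]
MEM_LIMIT = 1000  # tweets kept hot per shard (illustrative number)

def store(user_id, tweet):
    s = shards[shard_for(user_id)]
    s["memory"].append((user_id, tweet))
    if len(s["memory"]) > MEM_LIMIT:      # evict oldest tweet to disk
        s["disk"].append(s["memory"].pop(0))

store("alice", "hello")
print(shard_for("alice") == shard_for("alice"))  # True: stable routing
```

Note this handles storage, not the fanout-to-followers problem from upthread, which is where the real difficulty lives.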
Yes, the code is not going to be trivial, but for $100K and 3 months you can get a stunt programmer (I know a few who can do it and won't charge as much).
A run-of-the-mill RDBMS is the wrong tool for this job; basically, run-of-the-mill anything is. But that does not make it incredibly hard.
I think for $300K in hardware and software, you can get a Twitter clone that performs as well.
Twitter is successful, but that's not thanks to good engineering.
I've also done the 40 GB/day of NYSE TAQ data financial analysis thing, and the 1000+ trades/second real-time financial analytics thing. And I work on Google Search, and have a passing familiarity with how other Google products scale.
The scaling challenges of batch financial models vs. real-time financial processing vs. information retrieval vs. email vs. social products are very different. Even going from a model of the web where it's static and changes every few months (like Google of 2004) to one where sites get updated every few minutes and users expect to see the updates immediately in search results (like Google of today) requires vastly different technology.
The main thing about scaling that I've learned from working at a couple of places that require it is to go into it with a fresh mind each time, and really pay attention to what the requirements are and where you can cut corners. There are some general principles you should know (e.g. Jeff Dean's "Numbers you should know", memory is much faster than disk, cut out layers of abstraction you don't need), but to apply them effectively, you really need to pay attention to the details of your problem domain.
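For context, the rough figures behind that "Numbers you should know" reference (circa-2010 values from Jeff Dean's talks; approximate and hardware-dependent):

```python
# Approximate latencies in nanoseconds, circa 2010 (from Jeff Dean's
# "Numbers Everyone Should Know"; exact values vary by hardware).
NS = {
    "L1 cache reference":            0.5,
    "main memory reference":         100,
    "read 1 MB sequentially (RAM)":  250_000,
    "disk seek":                     10_000_000,
    "read 1 MB sequentially (disk)": 30_000_000,
}
# The design consequence: a random disk seek costs ~100,000x a memory
# reference, which is why keeping the hot set in RAM matters so much.
print(NS["disk seek"] / NS["main memory reference"])  # 100000.0
```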
If you think you can solve Twitter's scaling problems, they're hiring, they're pre-IPO, and they're probably giving out decent chunks of stock.
TL;DR: It is not the number of vertices in the Twitter graph but the number of edges that is unprecedented.
RSS strikes me as an almost identical process other than the time subscribers wait to check for new content, and there are feeds with millions of subscribers. So now the problem is reduced to doing it in a timely fashion.
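A toy way to see that timeliness gap between RSS-style polling (fanout-on-read) and Twitter-style push (fanout-on-write), with illustrative numbers:

```python
import math

# With polling every `interval` seconds, an item published at time `t`
# is only seen at the next poll tick; with push (fanout-on-write) it is
# delivered as soon as the fanout completes. Numbers are illustrative.
def poll_delivery_time(t: float, interval: float) -> float:
    return math.ceil(t / interval) * interval

print(poll_delivery_time(61, 1800))  # 1800: up to ~30 min late
print(poll_delivery_time(61, 1))     # 61: near-real-time needs 1s polls
```

Polling every second from millions of subscribers turns the read load into the bottleneck instead, which is why "just do RSS faster" doesn't trivially work.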
This is why I wonder if they're really that immensely complicated - isn't Google doing much the same thing, possibly even at a bigger scale at one point, with Google Reader and FeedBurner?
Another company with a ton of employees.
Probably not that many on the FeedBurner team itself, but a lot of people at Google working on the general problem of keeping things unreasonably responsive at massive scale.