My website (saurik.com) is seriously written in JavaScript. I was doing this long before node was popular, and so it is designed "horribly sub-optimally": it is using Rhino, which is not known for speed. I use XSL/T as a templating engine to build the page layouts, which is also not known for speed. Every request is synchronously logged to a database. I get over 50k HTML pageviews a day, most for one recent article which I posted a few weeks ago: when I posted it, I was getting well over 3k pageviews per hour.
I do not do any caching: I generate each page dynamically every time it is accessed. I seriously dynamically generate the CSS every request (there are some variables). Even with 3k HTML pageviews per hour, that's less than one complex request per second. How does one even build a website that can't handle that load? That is what I'd seriously be interested in seeing: not "how do I handle being #1 on Hacker News", but "why is it that so many websites are unable to handle one request per second".
That will get real sideways real fast. I do concur that 287 concurrent requests with 3k page views over a day doesn't even make sense mathematically.
To amplify your comment, processors today process billions of instructions per second. Even if all 3000 pageviews _did_ hit within one minute, that's hundreds of millions of instructions available per pageview. His pages just aren't complex enough to require that many instructions to serve.
The image tahoecoder posted to "prove" his load indicates he had 287 visits within a 45-minute window, which allows hundreds of _billions_ of instructions per page served. Please do give me a break.
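The arithmetic behind both of these claims is easy to check. A sketch below, where the 10-billion-instructions-per-second figure is an assumption standing in for "a modern multi-core CPU" (a few cores at a few GHz):

```python
# Back-of-the-envelope instruction budget per pageview.
# ASSUMPTION: a modern multi-core CPU retiring ~10 billion
# instructions per second in aggregate.
INSTRUCTIONS_PER_SECOND = 10_000_000_000

# Scenario 1: all 3,000 hourly pageviews somehow arrive in one minute.
burst_budget = INSTRUCTIONS_PER_SECOND * 60 // 3_000
print(f"{burst_budget:,} instructions per pageview")   # hundreds of millions

# Scenario 2: 287 visits spread over a 45-minute window.
window_budget = INSTRUCTIONS_PER_SECOND * 45 * 60 // 287
print(f"{window_budget:,} instructions per visit")     # tens of billions
```

Even with pessimistic assumptions, the budget per page is enormous compared to what rendering a blog post should cost.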
At Quantcast we handle 800,000 HTTP requests per second, and process 30 petabytes a day, so it really is possible to handle actual high loads.
In fact, this number (apparently a feature of Chartbeat) claims to be measuring "concurrent people sitting at a computer looking at pages from your site", not concurrent requests or "hits" to your webserver, or even concurrent HTTP connections (which may be idle for long periods of time). This number is almost entirely meaningless for the purposes of discussing your site's load. Imagine an HTML5 JavaScript game that took an entire day to play: with one request per second you may find yourself with tens of thousands of "concurrent visits".
what?
If you're on Wordpress, install WP Supercache. That's 80% of the solution, right there. Install equivalent whole-page caching for any other framework or system and tell your HTTP server how to pick it up; that should leave you prepared for hundreds of RPS.
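The "tell your HTTP server how to pick it up" step usually amounts to one rewrite rule. A minimal nginx sketch for the WP Super Cache case -- the cache path assumes a default install, so adjust for yours:

```nginx
location / {
    # Serve the pre-generated static HTML if WP Supercache has one;
    # otherwise fall through to Wordpress/PHP as usual.
    try_files /wp-content/cache/supercache/$http_host/$request_uri/index.html
              $uri $uri/ /index.php?$args;
}
```

A real config also needs to bypass the cache for logged-in users and POST requests; the plugin's docs cover those conditions.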
We're at the stage where people are posting the equivalent of "how I survived skipping lunch". It's not 1997 any more; tens of thousands of visits is what a link from a moderately popular twitter account or a medium-size metropolitan newspaper brings.
I'm sorry to seem so uncharitable. I'm just not sure what value these posts add.
Should I not try to help out the community with blog posts about my experience? Should we just cater to the experts?
I'm not sure what to tell you, except I fall squarely on the "old" side of that divide, having had to nurse Wordpress installations for nigh on 10 years now.
But every time -- every time -- an uncached Wordpress blog is linked to and dies with the famously unhelpful "Error establishing a database connection", somebody pops up to mention WP Supercache and/or W3 Total Cache.
Actually, if I have a pet peeve, it's that non-terrible caching isn't part of the Wordpress core. Probably breaks on gawdawfulhost.com or something, god forbid that 99.999% of the internet be better off from core architectural improvements when we could be working on the fifteenth new admin redesign!!1!
Edit: I realise now that you weren't talking about Wordpress and thus, my own pet obsession is clearly revealed.
You're doing everything wrong then. I had a website that sometimes saw 6000 concurrent users, and it was hosted on a very cheap shared server! I didn't realize so many people had no idea about simple caching techniques.
When I'm building an app I will use memcached or Redis. But this post was just about getting a host environment/workflow for situations in which you want the ease of development that server-side languages provide (shared code includes, etc.) and not have to deal with things like caching.
The only thing that's changed is the site's migration to S3 from Linode, and the addition of Cloudfront!
There were a ton of comments that were incredibly constructive and valid. I appreciate it's your site and you're not beholden to anybody, but almost all the criticism that was given was ignored.
Also, you can edit comments on HN rather than leaving a second comment as an afterthought.
Of course tweaking web servers and playing with your stack can be fun, but if you just want to build your site and let someone else handle the back-end performance and scaling issues, then there are solutions for that.
[1] http://fennb.com/microcaching-speed-your-app-up-250x-with-no...
EDIT: it's back, one or two refreshes later.