$ pip install dotcloud
$ echo 'frontend: {"type": "nodejs"}' >> dotcloud.yml
$ echo 'db: {"type": "mongodb"}' >> dotcloud.yml
$ dotcloud push $MYAPP
$ dotcloud scale $MYAPP frontend=3 db=3
This will deploy my nodejs app across 3 AZs and set up load-balancing to them, deploy a Mongo replica set across 3 AZs, set up authentication, and inject connection strings into the app's environment. It's also way cheaper than AWS. The only difference from OP's setup is that the Mongo ports are publicly accessible. This means authentication is the only thing standing between you and an attacker (and maybe the need to find your particular TCP port among a couple million others in dotCloud's pool).
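For reference, the two echo lines above append YAML flow-style mappings; the equivalent block-style dotcloud.yml would be:

```yaml
# dotcloud.yml — two services: a Node.js frontend and a MongoDB database
frontend:
  type: nodejs
db:
  type: mongodb
```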
(disclaimer, I work at dotCloud)
3 AWS Small instances cost under $200 / mo and come with 1.7GB of RAM each.
The dotCloud pricing calculator is coming up with $700 / mo for 3 mongodb instances with 1.7GB of RAM.
Obviously this isn't an apples-to-apples comparison. But how are dotCloud instances different from AWS instances?
* For a clean architecture you want to isolate each Mongo and node process on its own system. So you need 6 instances, not 3.
* You'll need load-balancers in front of these node instances. That costs extra on AWS, and is included on dotCloud.
* Did you include the cost of bandwidth and disk IO in your estimate? Those are extra on AWS, but included on dotCloud.
* Monitoring is extra on AWS. It's included on dotCloud.
* I love to have a sandbox version of my entire stack, with the exact same setup but separate from production. That's an extra 2 instances on AWS (+io +bandwidth +load-balancing +monitoring). It's free on dotCloud, and I can create unlimited numbers of sandboxes which is killer for team development: 1 sandbox per developer!
* We only charge for RAM usable by your application and database. AWS charges for server memory - including the overhead of the system and the various daemons you'll need to run.
* For small apps specifically, you can allocate memory in much smaller increments on dotCloud, which means you can start at a lower price-point: the smallest increment is 32MB.
I didn't even get into the real value-add of dotCloud: all the work you won't have to do, including security upgrades, centralized log collection, waking up at 4am to check on broken EBS volumes, and dealing with AWS support (which is truly the most horrible support in the world, and we pay them a lot of money).
+ Our support team is awesome and might even fix a bug in your own code if you're lucky :)
From what I remember when trying to get the copy/paste instructions written for Kandan so others could deploy to dotCloud to get started with it, there was a steeper learning curve for dotCloud than for other services (with the possible exception of CloudFoundry, which was a sunk cost for us at that point anyway...). Maybe the on-boarding is a bit tough for new users? Where do you see the most significant drop-off in your funnel? If you don't mind sharing, of course.
I think there are 2 reasons.
One reason is simply that we're better at building the product than the buzz. As it turns out, "developer buzz" is not organic; it is something that must be engineered, like any other feature of the product. There are people who specialize in crafting and projecting an image of success in a way that appears authentic. It is difficult and highly specialized work, not to mention it involves a fair amount of "fact distortion" that doesn't appeal to us.
The second reason is that we're successful without it. When you and I say "developer buzz" we usually mean "HN-reading bleeding edge developer buzz", but 99.999% of our addressable market doesn't read HN. We crave our peers' appreciation and respect as much as everyone else - but at the end of the day, that's not what pays the bills. In our case, middle-aged developers and IT managers looking to remain competitive while circumventing office politics to get their development server in 6 weeks instead of 8 - that's what pays the bills.
On the other hand, our Cassandra cluster runs on ephemeral drives and it's way better than EBS even with the guaranteed IOPS thing. Everyone should definitely give this option a try.
According to a dev, they haven't even talked about it. Simply hasn't ever come up.
So Reddit's gonna keep going down like this. Don't be like Reddit.
Doesn't everything in their track record indicate that regions are nicely partitioned from each other? Even the biggest region failures they've had have stayed completely isolated to that region.
Given that AWS runs the same software across regions with the same people and processes in place, and further that some software runs across regions (e.g. S3), I'd wager it's not long before we have a multi-region outage.
Finally, some of the multi-AZ problems in the past were compounded because, as one AZ went down, everyone hammered the other AZs, taking out the APIs at least. That was back when everyone believed AZs were isolated. Now that people know that's not the case, those same systems are going to be hammering across multiple regions.