In particular, I think the idea of embedding a Procfile in a Docker image is really clever; it neatly solves the problem of how to distribute the metadata about how to run an image.
This is how I currently use Docker:
1) A custom base image with all the things my company needs, like supervisord, libpq, etc.
2) Custom per-service base images built off that base, e.g. one with Java for our Clojure services and one with Python for our research services.
3) A release consists of pulling the latest version of the relevant base image (e.g. acme-python) and injecting the latest project code into it.
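A release in that flow might be little more than a tiny Dockerfile layered on the per-service base. This is just a sketch; the acme-python tag comes from the example above, and the /app path and supervisord CMD are assumptions:

```dockerfile
# Hypothetical release Dockerfile: inject the latest project code
# into the company's per-service base image.
FROM acme-python:latest

# Copy the project source into the image.
ADD . /app
WORKDIR /app

# The base image ships supervisord; run it in the foreground.
CMD ["supervisord", "-n"]
```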
My concern here essentially boils down to the image repo. GitHub needs to add container storage, because while I admire Docker Hub's efforts, I don't trust it.
1. We build docker images on every commit in CI and tag them with the git commit sha and branch (we don't actually use the branch tag anywhere, but we still apply it). This is essentially our "build" phase in the 12factor build/release/run cycle. Every git commit has an associated docker image.
2. Our tooling for deploying is heavily based around the GitHub Deployments API. We have a project called Tugboat (https://github.com/remind101/tugboat) that receives deployment requests and fulfills them using the "/deploys" API of Empire. Tugboat simply deploys a docker image matching the GitHub repo, tagged with the git commit sha that is being requested for deployment (e.g. "remind101/acme-inc:<git sha>").
We originally started maintaining our own base images based on Alpine, but it ended up not being worth the effort. Now we just use the official base images for each language we use (mostly Go, Ruby, and Node here). We only run a single process inside each container. We treat our docker images much like portable Go binaries.
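As a sketch of that single-process approach (the repo path and binary name are made up, and the Go version is just an example from that era), the whole Dockerfile for one of those services can be about four lines on top of the official base image:

```dockerfile
# Hypothetical Dockerfile for a single-process Go service,
# built directly on the official golang base image.
FROM golang:1.4
ADD . /go/src/github.com/example/acme-inc
RUN go install github.com/example/acme-inc
# One process per container; no supervisor inside the image.
CMD ["acme-inc"]
```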
For example: I've gone searching through the blog posts, github readme, and KONG documentation, but I still have no idea _why_ it needs Cassandra. What does it store in there?
Note: I think the only 3rd-party thing I'd call self-hosted is colocation, where I delivered the server, they plugged it in, and the most they do is reboot it for me.
This is the level of engineering/communication I always shoot for, and which (somewhat disappointingly) is rare where I've worked.
We'd love to see one standard too. Personally, I think it's good to have a lot of competing solutions right now (ECS vs Kubernetes, Docker vs Rocket, etc) and we'll see things settle in the next couple of years as containerization becomes more common.
For anyone looking at a Dokku alternative, Cloud Foundry isn't one.
OpenShift is nice, though.
The reason we went with vulcand is that it natively supports what we wanted to do, i.e. routing to micro-services based on dynamic, etcd-driven configuration. To do the same thing in nginx (at the time), we would have had to use either confd or custom Lua.
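For context, vulcand watches a key tree in etcd, so a deploy only has to write new backend/frontend entries and routing updates live. A rough sketch of that configuration (the exact key schema is from vulcand's docs as I remember them, and the IDs and URLs here are invented):

```
# Register a backend and one of its servers (a running container),
# then route traffic to it -- all by writing etcd keys.
etcdctl set /vulcand/backends/acme/backend '{"Type": "http"}'
etcdctl set /vulcand/backends/acme/servers/srv1 '{"URL": "http://10.0.0.5:5000"}'
etcdctl set /vulcand/frontends/acme/frontend \
  '{"Type": "http", "BackendId": "acme", "Route": "Path(`/`)"}'
```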
That kind of reminds me of https://xkcd.com/927/
Sorry if that's not the case. I've also played briefly with Flynn and Deis, and I haven't found anything so complicated that it would need a whole rewrite and a change of the entire approach. Moreover, with Deis I can easily change providers (DO, AWS, Azure, etc.), while with Empire I'm bound to ECS. At least that was my first impression; I have to read more.
While Empire itself may be tied to AWS, your app is still a portable, 12-factor, Heroku-compatible app. You can run it elsewhere.
Empire doesn't actually lock you into ECS. The scheduling backend is pluggable and could support Kubernetes/Swarm in the future.
So, in theory you could autoscale just like you always would: monitor stats for your hosts, and if a bunch of them start to run low on resources, kick off an autoscaling event.
That said, there's been quite a bit of talk about integrating Empire with Autoscaling, so that when, say, ECS couldn't find any instances with resources free for a task, Empire could kick off the autoscaling events for you. Could be pretty awesome :)
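The host-level approach above boils down to a threshold check. A minimal sketch (the thresholds, the `hosts` shape, and the idea of using memory utilization are all assumptions; in practice you'd read CloudWatch or ECS metrics and then bump the Auto Scaling group's desired count):

```python
def should_scale_out(hosts, mem_threshold=0.8, fraction=0.5):
    """Return True if at least `fraction` of hosts are using more
    than `mem_threshold` of their memory.

    `hosts` is a list of dicts like {"mem_used": 0.9}, where values
    are utilization ratios between 0 and 1.
    """
    if not hosts:
        return False
    busy = sum(1 for h in hosts if h["mem_used"] > mem_threshold)
    return busy / len(hosts) >= fraction

# Example: 2 of 3 hosts are nearly full, so we'd kick off an
# autoscaling event.
hosts = [{"mem_used": 0.92}, {"mem_used": 0.85}, {"mem_used": 0.40}]
print(should_scale_out(hosts))  # True
```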
Just putting this out there in case anyone is looking for an alternate open-source PaaS.
I've never personally used it before (self-hosted), but it may be something that someone out there is looking for.
https://www.vultr.com/pricing/ is 20% cheaper right now at least.
Both VMs are single-core with 1GB RAM. DO gives you 30GB of SSD, while AWS has a freely adjustable disk size. Upsizing from 8 to 30GB is another $2 - but how many single-core, low-RAM instances use double-digit GB anyway?
In the middle, DO has an 8-core, 16GB machine for $160/mo; AWS has a 4-core, 16GB machine for $185/mo + storage.
At the top end of DO's offerings, DO's 20-core, 64GB machine is $640/mo, and AWS's 16-core, 64GB machine is $725/mo + storage (not much). The difference in pricing is not that crazy, and you get a crapload of extra free features on AWS.
Those AWS prices are on-demand. If you're willing to lock in for a year, reduce them by about a third. The argument that DO is "OMG cheaper" than AWS is no longer valid.
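To make the top-end comparison concrete, here's the arithmetic with the numbers quoted above. The one-third reserved discount is the rough figure mentioned, not an exact rate, and the AWS figure excludes storage:

```python
# Rough monthly cost comparison using the numbers quoted above.
do_64gb = 640.0            # DO 20-core / 64GB, $/mo
aws_64gb_ondemand = 725.0  # AWS 16-core / 64GB, $/mo (+ storage)

# Approximate a 1-year reserved instance as ~1/3 off on-demand;
# real discounts vary by instance type and term.
aws_64gb_reserved = aws_64gb_ondemand * (1 - 1 / 3)

print(round(aws_64gb_reserved, 2))  # 483.33
print(aws_64gb_reserved < do_64gb)  # True: reserved undercuts DO here
```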
[1] https://www.digitalocean.com/pricing/ [2] http://aws.amazon.com/ec2/pricing/
http://serverbear.com/compare?Sort=Host&Order=asc&Server+Typ...
DO's 1GB instance at $10/month has a UnixBench score of 1041 [1]; to beat that on AWS you have to spend $374/month [2].
Also, with the t2.micro you get an EBS disk, whose I/O you pay for in addition to the instance cost. You also pay for bandwidth out of the chosen AWS region. Neither is the case on DO.
AWS's complicated pricing makes comparisons like yours very difficult and error-prone: I'd suggest going with AWS only if you need particular features (like ELB, SQS, or VPC) that DO doesn't offer.
[1] http://serverbear.com/1989-1gb-ssd--1-cpu-digitalocean [2] http://serverbear.com/240-extra-large-amazon-web-services
More than one startup has been killed purely by AWS hosting costs in the past 5 years.
Also, OP required Redshift. DO does not offer that.
DO/Linode don't offer the equivalent, which means maintaining your own... which is fine, but if you're relatively small, or a single person, time you dedicate to operations tasks is time you aren't developing features and/or fixing bugs. One's business is paramount... technology is just a tool to serve that.