* http://digitalocean.com (not docker-specific but they have a great docker image)
* http://rackspace.com (not docker-specific but they have a great docker image)
EDIT: sorted alphabetically to keep everyone happy :)
Early members of that "seed" community included engineers from Twilio, Heroku, Soundcloud, Koding, Google, Meteor, RethinkDB, Mailgun, as well as the current members of the Flynn project.
I got to meet the docker team (a lot of French dudes on the team!). Very passionate, technically super sharp, and really fun! They were interested in my point of view and opinions. Plus, their lead dev knows how to party, from what I saw at a meetup!
Things that impressed me:
1. super passionate
2. he was very receptive and quick at squashing bugs I reported (real or not)
3. docker was super portable (the same across all linux distros)
4. they (the docker team) had real solutions for the long application deployment times that were plaguing me
Everyone seemed to know it would succeed, which is rare around here.
Unless you're going to try and trademark them all.
The simplicity of the service is an opportunity to attract people without a lot of webdev chops, so why not make it super simple?
Feel free to innovate - you're a startup, and it's what we love about you.
Other than that, this looks great! I'm excited for you guys.
I hope that saves you some time.
Thanks!
There's a great explanation here: http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...
With the new Links functionality, this is much easier, but are you planning to ever have the ability to use a single Dockerfile to deploy an application which may contain multiple images (with links between them)? I want to be able to do "docker build ." and have my application up and running when it finishes.
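Until something like that exists, the usual stopgap is a short shell script that builds and wires the images by hand. A sketch using the 0.6.5 `-name`/`-link` flags (image and directory names are made up; each image still needs its own Dockerfile):

```shell
# Build each image from its own directory.
docker build -t myapp/postgres ./postgres
docker build -t myapp/django ./django

# Start the database first, then link the app container to it by name.
docker run -d -name db myapp/postgres
docker run -d -name web -link db:db myapp/django
```

Not as nice as a single `docker build .`, but it gets the whole application up in two commands after the builds.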
This is my favourite Docker offering so far. I've been looking for something to replace dotCloud's deprecated sandbox tier for just playing around, and it looks like this fits the bill.
I configured and launched a machine with redis and node in less than 5 minutes. Very cool.
How will you isolate instances from each other? My instance appears to have 24 GB of RAM and 12 cores, and it looks like I can use all of it in my instance.
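(For what it's worth, docker itself can cap per-container resources via LXC/cgroups; flag names are from the 0.6-era CLI and the values here are illustrative, so whether this service applies such limits is a separate question:)

```shell
# -m caps memory (in bytes), -c sets relative CPU shares;
# enforcement happens through the cgroups backing the container.
docker run -d -m 536870912 -c 512 myimage
```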
So say I have a fancy Django image, and a fancy Postgres image.
How do I then have the Django one learn the Postgres one's IP, authenticate (somehow), and then communicate with it separately?
Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self contained), and how does that even work with a DaaS like this? I'm pretty confused. Is there an idiomatic way in which to do this?
Do service registration/discovery things for Docker already exist?
Docker now supports linking containers together:
http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...
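From the container's point of view, a link shows up as environment variables describing the other end. A sketch of what a container started with `docker run -link db:db ...` would see (the variable names follow the `<ALIAS>_PORT_<port>_<proto>` pattern from the 0.6.5 release; the values below are simulated, since docker sets the real ones at run time):

```shell
# Simulated link environment; in a real container docker injects these.
DB_PORT_5432_TCP_ADDR=172.17.0.3
DB_PORT_5432_TCP_PORT=5432

# The app can assemble its connection string from them:
psql_url="postgres://$DB_PORT_5432_TCP_ADDR:$DB_PORT_5432_TCP_PORT/mydb"
echo "$psql_url"   # postgres://172.17.0.3:5432/mydb
```

So the Django container never needs the Postgres IP hard-coded; it reads it from the environment at startup.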
> Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self contained)
The recommended advice for production is to create a persistent volume with 'docker run -v', and to re-use volumes across containers with 'docker run -volumes-from'.
Mounting directories from the host is supported, but it is a workaround for people who already have production data outside of docker and want to use it as-is. It is not recommended if you can avoid it.
Either way, you're right, it is an exception to the self-contained property of containers. But it is limited to certain directories, and docker guarantees that outside of those directories the changes are isolated. This is similar to the "deny by default" pattern in security. It's more reliable to maintain a whitelist than a blacklist.
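A sketch of that recommended pattern, using the `-v` and `-volumes-from` flags mentioned above (container and image names are made up):

```shell
# Create a container whose data directory is a docker-managed volume.
docker run -d -v /var/lib/postgresql -name pgdata myuser/postgres

# A later container (say, after upgrading the image) reuses the same volume,
# so the data outlives any individual container.
docker run -d -volumes-from pgdata myuser/postgres
```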
We give you a standard Docker instance in the cloud - all the tools work exactly the same as they do locally. You can even instantly open a remote bash shell, like the now-famous Docker demo!
If you build the container on a service like this, testing it is hard, or in some cases even impossible; acceptance tests with Selenium, for example.
Gemfile.lock and similar version-pinning tools help, but prebuilt containers bring deployment stability to a whole new level, and that is why I'm excited about Docker and containers in general.
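That workflow in docker terms (image name hypothetical): build and test the image once, then deploy the exact same bytes everywhere:

```shell
# Build locally and run your acceptance tests against this image.
docker build -t myuser/app .

# Ship it to a registry (the public index, in 2013 terms).
docker push myuser/app

# On the production host, pull and run the identical image.
docker pull myuser/app
docker run -d myuser/app
```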
Do they support prebuilt containers?
Sounds like a yes.
but a traceroute points to AWS…
What could possibly go wrong?
If you offer infrastructure services and don't tell people where and how you provide them, you can't be taken seriously either.
It is comforting that Heroku also uses LXC for dynos. It would be interesting to know how many in-house adjustments to the kernel and LXC have been made to ensure hardening.
Are people running Linux VMs on their Macs to build containers?
I like the idea of this service. But both the client side and the server side have to be easy. Unless I'm missing something it seems like they made the server side really easy, but the client side is still annoying.
In short, yes, just run a VM.
You need a recent version of the Linux kernel that supports Linux Containers. It's best if you can run Ubuntu 13 somewhere.
> Are people running Linux VMs on their Macs to build containers?
FreeBSD supports jails, which are similar to Linux containers in a way, but OS X does not. So unfortunately you're going to have to run a VM; check out Vagrant and Docker, though. [1]
[1] https://developer.apple.com/library/mac/documentation/Darwin...
[2] http://www.chromium.org/developers/design-documents/sandbox/...
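The VM route is only a few commands with Vagrant; a minimal sketch using the stock Ubuntu 12.04 box (you'd then install docker inside the VM per the instructions on docker.io):

```shell
# Fetch and boot a small Ubuntu VM.
vagrant init precise64 http://files.vagrantup.com/precise64.box
vagrant up

# Shell into it; docker and your builds run inside this VM.
vagrant ssh
```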
I wanted to spin up an instance of Sphinx Search but had no idea how to go about doing it.
Maybe creating a set of tutorials would help with this. I can think of two advantages. First, customers like myself will love it. Second, as with Linode and their tutorials, it will drive a lot of traffic and establish your reputation as Docker experts. It will probably build a lot of back-links too, as people link to your tutorials.
UPDATE: I'd also be interested to hear about Digital Ocean-style "shared" (but non-private) networking—basically, any network adaptor with a non-Internet routable IP address. ;)
Docker is a simple description of an internet server, including the various services required (mysql, httpd, sshd, etc.), the bundle being called a deck.
It seems then that you can create a server elsewhere (e.g. on your localhost), generate the docker description of it, and use that description to fire up a server (either a VM or dedicated) using the service in the OP.
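Something like this is what I picture the per-image description looking like; docker calls it a Dockerfile (contents here are illustrative, written out via a heredoc):

```shell
# Write a minimal Dockerfile; docker builds an image from this description.
cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y redis-server
EXPOSE 6379
CMD ["redis-server"]
EOF
# then: docker build -t myuser/redis .
```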
Am I close?
Could I use this to do general web hosting?
Edit: and looking at digitalocean.com it appears I can activate and deactivate the "server" at will, so I can have it online for an hour for testing and pay < 1¢?
I don't know for certain whether the containers themselves are hosted by Hetzner, but Hetzner is more of a budget provider than somewhere you host production sites.
I've heard many mixed reviews about their network and especially their support, which isn't up to scratch. We'll see what happens, but from what I see, if someone decides to abuse the service, Hetzner might just take down the whole server without warning, just like OVH does.
http://www.hetzner.de/en/hosting/produkte_rootserver/px120ss... (I'm guessing they are using something similar to this). It's a pretty powerful and cheap server, but if you search hard enough you can find something equivalent in the States for around the same price.
If you need real HA you should perhaps use more than one provider anyway. Or what are your recommendations?
Since Docker is still in beta, it's not production-ready yet anyway. Docker could still go through a lot of changes between now and 1.0.
ETA: Whoops, I got the pricing wrong. It's $5 per instance. I was thinking you would get 1GB of RAM and 20GB of space to run as many containers as you like. That makes it not as cheap as I was originally thinking.
though adding "cards" to a "deck" sounds intuitive.
I'm trying to come up with better terminology. something with ships and containers...
When I created a Deck (default Sinatra Hello World) and converted it to a Drop, it did just that: it removed the Deck and created a Drop.
I guess I thought it would keep the Deck so that I could see the configuration that I had chosen to create it. Is this a Docker thing where, once you've created it, you don't see the config any longer? I don't think it is but I've not honestly played with Docker yet. $5 a month is a low ask for me to try it out.
Also, when it comes time to pay for a Deck/Drop and you don't have credit card info saved, it forwards you to that page... but, after entering the info, you're not put back into the process. You're dumped back into the Deck page. That seemed odd to me... wasn't sure if it had been converted or not.
I wish the word 'manifest' weren't used in so many contexts because, if you're going to stick with the container-shipping analogy, it would have made more sense to have Manifests, Containers and Ships. That's just me though... who knows. ;)
All in all, cool service. Look forward to playing around with it this weekend.
EDIT: I see that you can create a copy of the Deck that created a Drop... still seems odd that the default behavior is to blow it away upon creation of a Drop.
Otherwise, cool!
Also, forgive my ignorance, but what would it take to be able to "add containers" in the same way that you can add dynos on Heroku?
Best of luck guys!
Does anyone know where DO servers are located?
From here: http://www.enterprisenetworkingplanet.com/datacenter/digital...