1) I see Kamal was an inspiration; care to explain what differs from it? I'm still rocking custom Ansible playbooks, but I was planning on checking out Kamal after version 2 is released soon (I think alongside Rails 8).
2) I see databases are in your roadmap, and that's great.
One feature that IMHO would be a game changer for tools like this (and is lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.
Even for side projects, a periodic SQL dump stored in S3 is generally not enough nowadays, and any project that gains traction will need to implement some sort of streaming backup, like Litestream (for SQLite) or Barman with streaming backup (for Postgres).
If I may suggest this feature: having this tool provision a Barman server on a different VPS, and automate the process of having Postgres stream to it, would be a game changer.
One Barman server can actually accommodate multiple database backups, so N projects could stream backups to one single Barman server.
Of course, there would need to be a way to monitor whether the streaming is working correctly, and maybe even help the user with the restoration process. But that effectively brings RPO down to near zero (so almost no data loss) and can even allow point-in-time restoration.
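To make the suggestion concrete, here is a minimal sketch of the kind of Barman configuration such a tool might provision. All names, hosts, and paths here are hypothetical placeholders, not anything from Sidekick:

```ini
; /etc/barman.d/myapp.conf on the Barman VPS (hypothetical names)
[myapp]
description = "Streaming backup for myapp's Postgres"
conninfo = host=db.example.com user=barman dbname=postgres
streaming_conninfo = host=db.example.com user=streaming_barman
backup_method = postgres        ; base backups via pg_basebackup
streaming_archiver = on         ; receive WAL continuously via pg_receivewal
slot_name = barman              ; replication slot on the Postgres side
```

With multiple sections like `[myapp]`, one Barman server backs up several projects, and `barman check myapp` is the usual way to verify that streaming is healthy.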
2) Yes yes yes! I really like Litestream. Also, backup is one of those critical but annoying things that Sidekick is meant to take care of for you. I'll look into Barman. My vision is that we'd have one command for the most popular DB types, and it would use stubs to configure everything the right way. Need to sort out docker-compose support first though...
I'll concede there's probably a little more hands-on work doing things this way, but I do like having a good grip on how things are working rather than leaning on a convenient tool. Maybe you could convince me Sidekick has more advantages?
I'd also not want to have Cloudflare as an extra company to trust, point of failure, and configuration to manage.
But isn't this a little too tied to Cloudflare?
Caddy as a reverse proxy on that VPS would also give us free HTTPS. The downside is less security because no CF tunneling.
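For reference, the Caddy setup really is minimal. A sketch of a Caddyfile that reverse-proxies a domain to a local container (domain and port are placeholders):

```
example.com {
    reverse_proxy localhost:3000
}
```

Caddy obtains and renews the Let's Encrypt certificate for `example.com` automatically, which is where the "free HTTPS" comes from.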
1. nginx + letsencrypt
2. forward based on host + path to the appropriate local docker
3. run each thing in the docker container
4. put Cloudflare in front in proxy DNS mode and with caching enabled
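Steps 1-2 of that setup might look roughly like this as an nginx server block (a hedged sketch; the hostname and port are made up, and certbot would later add the TLS lines):

```nginx
# /etc/nginx/sites-available/app.example.com (hypothetical)
server {
    server_name app.example.com;
    listen 80;

    # forward to the app's local Docker container
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Forwarding by path works the same way, with additional `location /something` blocks pointing at other local ports.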
Your thing is obviously better! Thank you.
How do you run the containers on your VPS tho? You could still use Sidekick for that!
I think your setup is one step up in security from Sidekick nonetheless. A lot more work, it seems, too.
Also, all of these tools have great documentation on getting up and running, but SIGNIFICANTLY LESS INFO ON HOW TO MAINTAIN OVER THE LONG TERM. If I was going to start using a tool like Sidekick, Kamal, or Dokku I would want clear answers to the following:
- How do I keep my VPS host up and running with the latest security updates?
- How do I update to more recent versions of Docker?
- How do I update services that maintain state (e.g. update to a new Postgres version)?
- How do I seamlessly migrate to a new host (perhaps as a way to solve the above)?
- How should I manage and serve static resources & user media? (store on host or use cloud storage?)
- How do I manage database migrations during an update, and how do I control that process to avoid downtime during an update?
I just spent an entire evening transferring a side project to a new VPS because I needed to update Postgres. The ideal self-hosting solution would make that a 20 min task.
One thing I’ve noticed is the prevalence of Docker for this type of tool, or the larger self-managed PaaS tools. I totally get it, and it makes sense. I’m just slow to adapt. I’ve been so used to Go binary deployments for so long. But I also don’t really like tweaking Caddyfiles and futzing with systemd unit files, even though the pattern is familiar to me now. Been waffling on this for quite a while…
If you legitimately need to run your software on multiple OSes in production, by all means, containerize it. But in 15 years I have never had a need to do that. I have a rock-solid bash script that deploys and daemonizes an executable on a Linux box, takes like 2 seconds to run, and saves me hours and hours of Dockery.
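For anyone weighing that approach, the "daemonize" half usually amounts to a small systemd unit that the deploy script restarts. A minimal sketch (the service name and paths are invented, not the commenter's actual setup):

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=myapp Go service
After=network.target

[Service]
ExecStart=/opt/myapp/myapp
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

The deploy script then reduces to copying the new binary over and running `systemctl restart myapp`.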
When we ran it on Kubernetes, it broke itself within 3 years without us touching it.
Docker is a fantastic development tool; I do see real value in it.
But Kubernetes and its whole ecosystem? You must apply updates or your stuff will break one day.
Currently I am using Docker with docker compose and GCR; it makes things very simple and easy to develop, and it's also self-documenting.
I believe fly.io uses that. Not sure if OP’s tool does that
https://news.ycombinator.com/item?id=41358020
I wrote up my own experiences too (https://blog.notmyhostna.me/posts/selfhosting-with-dokku-and...) and I can only recommend it. It is ~3 commands to set up an app, and one push to deploy after that.
It feels much more dangerous to have such a system in place and provide a false sense of security. Users know best what kind of data they need to back up, where they want to back it up, whether it needs to be encrypted or not, whether it needs to be daily or weekly, etc.
Most of these I checked don't, but a recent Ubuntu version is perfectly fine to use as-is.
> Is there any automation for creating networks of instances?
Not that I'm aware of; it would also somewhat defeat the purpose of these tools, which are supposed to be simple. (Dokku is "just" a shell script.)
I also wanted to be able to remove Dokku if needed and everything would continue to run as before. Both of these work very well with Dokku.
Best part is that I can just dump whole docker-compose.yml files in and it just works.
I'll definitely be trying it out, although I do have a pretty nice setup now which will be hard to pull away from. It's ansible driven, lets me dump a compose file in a directory, along with a backup and restore shell script, and deploys it out to my server (hetzner dedicated via server auction).
It's really nice that this handles TLS/SSL; that was a real pain for me, as I've been using nginx and automating certbot wasn't the most fun in the world. This looks a lot easier on that front!
Then it finds the compose file based on the app name. It templates in the domain name wherever needed in the compose file, and if it's meant to be public it'll set up an nginx config (which runs on the host, not in Docker). If the folder with the compose file has a backup.sh and restore.sh, it also copies those over and sets up a cron for the backup schedule. It's less than 70 lines of YAML, plus some more for restart handlers.
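As a rough illustration of that templating-plus-cron pattern, a couple of Ansible tasks might look like this. The task names, paths, variables, and handler are invented for the sketch, not the commenter's actual playbook:

```yaml
# hypothetical Ansible tasks: render the compose file and wire up backups
- name: Template compose file for {{ app_name }}
  template:
    src: "apps/{{ app_name }}/docker-compose.yml.j2"
    dest: "/srv/{{ app_name }}/docker-compose.yml"
  notify: restart app

- name: Install backup cron if the app ships a backup.sh
  cron:
    name: "backup {{ app_name }}"
    special_time: daily
    job: "/srv/{{ app_name }}/backup.sh"
```

The Jinja2 template is where the domain name gets substituted in, so each app's compose file stays generic in the repo.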
The only bit that irks me is the initial tls/ssl setup. Certbot changes the nginx config to insert the various certificates, which then makes my original nginx config out of date. I really like nginx and have used it for a long time so feel comfortable with it, but I've been considering traefik and caddy for a while just to get around this.
Although another option for me is to use a Cloudflare tunnel instead, and then ignore certificate management altogether. This is really attractive because it also means I can close some ports. I'll have to find some time to play around with Traefik and Caddy first though!
I'm somewhat surprised not to see this more often. I'm guessing supporting multiple Linux versions could get unwieldy; I focused on Ubuntu as my target.
Differences that I see:
* I modeled mine on-top of docker-plugins (these get installed during the bootstrapping process)
* I built a custom plugin for deploying which leveraged https://github.com/Wowu/docker-rollout for zero-downtime deployments
Your solution looks much simpler than mine. I started off modeling mine off fly.io CLI, which is much more verbose Go code. I'll likely continue to use mine, but for any future VPS I'll have to give this a try.
Here's a bash script I posted a while back on a different thread that does a similar thing, if of interest to anyone. It's probably less nice than OP's; for example, it only works with DigitalOcean (which is great!). But it's simple, small, and mostly readable. It also assumes Docker, but all via compose, with some samples like nginx with auto-SSL via Let's Encrypt.
Docker != app. Perhaps it'd be more accurate to say, "to host any Docker container"?
I mean, with rootless containers, yes, a lot of apps that need access to the underlying system might not work, but those are usually system stuff, not the kind you want to host on a VPS anyway. When running as root, I can't think of many.
But even ignoring those, if I'm going to spend all the time needed to containerize everything into Docker images myself, why wouldn't I just run the programs directly and not deal with the overhead and extra work?
Does this only support a single app?
Nice project but the claims (production ready? Load balance on a single server?) are a bit ridiculous.
Thank you for pointing this out. When I was looking to install Caddy, I was specifically looking for something that didn't use Docker, since my VPS is 1 GB / 1 CPU, and that's what I based my comment on. Reading the Sidekick docs, it seemed that by running one command it would first install Sidekick and then install the cert/app, all with one Dockerfile, but now I am not even sure about that.
Appreciate you pointing that out; now I am back in analysis paralysis over which one I should use.
I would love for it to support docker-compose, as some of my side projects need a library in Python, but I like having my service be in Go, so I will wrap the Python library in a super simple service.
Overall this is awesome and I love the simplicity, with the world just full of serverless, AI, and a bunch of other "stuff". Paralysis through analysis is really an issue, and when you are just trying to create a service for yourself or an MVP, it can be a real hindrance.
I have been gravitating towards Taskfile to perform similar tasks to this. Godspeed to you and keep up the great work.
So you can do 'docker build -t localhost/whatever .' and then 'docker run localhost/whatever'. It's also worth checking out Podman to more easily run everything rootless.
If all you need is to move images between hosts like you would files, you don't even need a registry (docker save/load).
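The registry-free workflow mentioned here is just two commands plus a copy. A sketch, with the image name and host as placeholders:

```shell
# on the build machine: serialize the image to a tarball
docker save -o whatever.tar localhost/whatever

# copy it over and load it on the VPS
scp whatever.tar deploy@vps:/tmp/
ssh deploy@vps 'docker load -i /tmp/whatever.tar'
```

After `docker load`, the image is available on the VPS under the same tag, ready for `docker run`.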
i’m building https://www.plainweb.dev and i’m looking for the simplest way to deploy a plainweb/plainstack project.
looks like sidekick has the same spirit when it comes to simplicity.
in the plainstack docs i’ve been embracing fly.io, but reliability is an issue. and sqlite web apps (which is the core of plainstack) can’t have real zero downtime deployments, unless you count the proxy holding the pending request for 30 seconds while the fly machine is deployed.
i tried kamal but it felt like non-ruby and non-rails projects are second class citizens.
i was about to document deploying plainstack to dokku, but provisioning isn’t built-in.
my dream deployment tool would be dokku + provisioning & setup, sidekick looks very close to that.
definitely going to try this and maybe even have it in the blessed deploy path for plainstack if it works well!
I'll reach out on twitter
Considering the ease of setup the README purports, a few hours of dealing with this might save me a couple hundred bucks a month in service fees.
As a side note, any reason why you decided against using Docker in swarm mode, as it should have all these features already built in?
- install docker
- run docker swarm init
- create yaml that describes your stack (similar to docker-compose)
- run docker stack deploy
That's basically it. My go-to solution when I need to run some service on a single VPS. If you want to just run a single container, you can also do this with `docker service create image:tag`
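For anyone unfamiliar with step 3, a minimal stack file looks almost identical to a compose file, plus a `deploy` section. A sketch with placeholder names:

```yaml
# stack.yml: deploy with `docker stack deploy -c stack.yml mystack`
version: "3.8"
services:
  web:
    image: ghcr.io/example/myapp:latest   # placeholder image
    ports:
      - "80:8080"
    deploy:
      replicas: 2
      update_config:
        order: start-first   # start the new task before stopping the old one
```

The `deploy` keys (replicas, rolling-update order, restart policy) are exactly the part plain docker-compose ignores and swarm mode honors.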
I wonder, though. Why Ubuntu? Why not Debian?
With all due respect to Canonical, Ubuntu is not really suitable; it is not aimed at developers.
Unless it has changed since I left it in a fury, it takes too much control away from you with the Snap system.
docker-compose with a load balancer (Traefik) is fairly straightforward and awesome. The TLS setup is nice, but I use a wildcard cert and just run certgen myself.
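That Traefik setup tends to boil down to labels on each service, which Traefik's Docker provider picks up at runtime. A hedged compose fragment (the domain and names are invented):

```yaml
# docker-compose.yml fragment: Traefik routes based on container labels
services:
  app:
    image: example/app:latest            # placeholder
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.tls=true"
```

Adding another app is just another service with its own `Host(...)` rule; no central proxy config to edit.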
The main thing I think is missing is some sort of authentication or zero-trust system, maybe a VPN tunnel provisioner. Most services I self-host, I do not want made public, due to security concerns.
I'm going to have to look into this pterm thing.
I now run more than one app on one single VPS.
Very cool stack.
But does anyone have a resource or link that explains how to build a service like the one OP shared here?
Because frankly, I'd feel lost reading the code from one file at a time without knowing where to start.
Plus it's written in Go, which I am not familiar with.