Remove-Item -Path X:\test\ -Recurse -Force
del X:\test -rec -for
https://architecture.lullabot.com/adr/20211006-avoid-command...
Tailscale <https://tailscale.com/> can remove the need to open port 22 to the world, but I wouldn't rely on it unless your VPS provider has a way to access the server console in case of configuration mistakes.
Also, restarting ssh will not boot you out of the session (your session has already been forked as a different process), so leave your terminal window open (to fix any screwups) and then log in on a separate window on the new port and just make sure you can get in.
For backups, don't set up logins from your main server(s) to your backup server; log in from your backup server to your main server. That way, if someone breaks into your main server, they can't get into your backup server.
Additionally, you can tighten control over incoming logins with AllowGroups, restricting which groups may log into the system. This mitigates a scenario where an adversary escalates enough privileges to write an authorized_keys file for a non-privileged user that still has a shell configured.
Finally, unless you're treating this server as a bastion host of sorts, you probably should disable forwarding for agents or X11 etc. We've seen a lot of adversaries move laterally due to this agent forwarding.
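Taken together, those two hardening steps might look like this in sshd_config (the group name is illustrative; run `sshd -t` before reloading):

```
# /etc/ssh/sshd_config -- sketch of the above; "sshusers" is a made-up group
AllowGroups sshusers

# Disable all forwarding in one directive (OpenSSH 7.4+)...
DisableForwarding yes
# ...or individually:
# AllowAgentForwarding no
# X11Forwarding no
# AllowTcpForwarding no
```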
Probably not, as that’s one of the first things they do.
That said, I feel like all this fail2ban stuff is very much cargo culting in the selfhosting community. I’ve had my VPS SSH server on port 22 with no fail2ban for slightly over a decade, exposed to the public internet (home server is behind tailscale, VPS hosts the stuff I always want accessible from everywhere). Bots try it, they fail, the end. Maybe I’m missing something, but I have yet to find a good reason for the added complexity.
https://arstechnica.com/security/2024/07/regresshion-vulnera...
It gives just enough info about the origin and nature of attempted intruders without overwhelming detail.
That got me thinking: how do other self-hosters/homelabbers here go about automating their server setups? None/purely manual? One big shell script? Multiple scripts wrapped in a Makefile (or justfile, or other command runner)? More enterprisey provisioning/automation tools like Ansible, Puppet, etc.?
If I were doing basically the same thing over and over, I'd probably go with a script, an Ansible playbook, or similar, but as of now the manual route is totally fine.
What's nice about it is that it doesn't require any specialized knowledge beyond bash - and that's something which is pretty easy to learn and great to know. It also attracts, IMO, the type of developers who avoid chasing new trends.
This sets up everything I need so I can treat my servers as livestock instead of pets - that is, so I can easily slaughter and replace them whenever I want, instead of being tied to them like a pet.
It’s like Ansible, but you write Python directly instead of a YAML DSL. Code reuse is as simple as writing modules, importing them, and calling whatever functions you’ve written in normal Python.
I find it almost as easy as writing a shell script, but with most of the advantages of Ansible like idempotency and a post-run status output.
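The idempotency idea can be sketched in plain shell too - each step checks current state before acting, so re-running the whole script is safe. (This is not pyinfra itself; the file path and config line are just for illustration.)

```shell
# ensure_line FILE LINE: append LINE to FILE only if it's not already there,
# so running this "deploy" twice changes nothing the second time.
ensure_line() {
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

ensure_line /tmp/demo-sshd.conf "PasswordAuthentication no"
ensure_line /tmp/demo-sshd.conf "PasswordAuthentication no"  # no-op on rerun
wc -l < /tmp/demo-sshd.conf   # still one line
```

Tools like pyinfra and Ansible build their operations around exactly this check-then-act pattern, plus the "changed/unchanged" reporting on top.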
Written in bash also
Most of my actual tools now are running in docker via Nomad.
Wish it included server monitoring as a section.
From personal experience, I would just pay someone else for a SaaS monitoring solution. It will almost universally be cheaper and more reliable.
If you really wanted to run your own, Prometheus is probably the way to go. Local storage should be fine as a data store for self-hosted. Grafana can be used for dashboarding, and either Grafana or AlertManager can do the alerting component.
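For a single-host setup, the Prometheus side can be a very small config - a sketch assuming node_exporter is running on its default port 9100 (the hostname is a placeholder):

```yaml
# prometheus.yml -- minimal self-hosted monitoring sketch
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["myserver:9100"]   # node_exporter, assumed installed
```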
It’s really not all that worth it for self-hosted scale, though. Running all that in the cloud is going to cost basically the same as buying a DataDog license unless you’re at 3-ish hosts, and more than that if you’re doing clustered monitoring so you aren’t blind if your monitoring host is down.
Besides, I fail to see any DevOps tenets in it; quite the opposite: a shell script at the bottom offers little in the way of reliable automation.
To me this post reads more like someone relatively new to server management wanted to share their gathered tips and tricks, i.e. me 10 years ago when I started my self-hosting journey :-D
I'm not sure I understand the distinction?
With incremental it's full backup + inc1 + inc2 + ... forever; each backup depends on the previous.
To restore from an incremental you need the last full backup and all the incrementals in between. If you do, say, a full backup every month, you'd need up to 30 good incremental backup sets to be able to restore.
For the differential you just need the last full backup in addition.
Obviously the differential one might take more and more space, depending on the changes.
Differential backups are:
Full Backup -> Differential Backup
Incremental backups are:
Full Backup -> Incremental Backup [-> Incremental Backup ...]
At least that's how it is with Macrium.
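With GNU tar you can see the difference directly: --listed-incremental records filesystem state in a snapshot file, and whether you keep updating or reset that snapshot is exactly the incremental-vs-differential distinction (paths are illustrative; requires GNU tar):

```shell
mkdir -p /tmp/bk/data && cd /tmp/bk
echo a > data/a.txt

# Full backup; tar records the current state in full.snar
tar --listed-incremental=full.snar -cf full.tar data
cp full.snar base.snar   # pristine copy of the state at full-backup time

echo b > data/b.txt

# Incremental: keeps updating full.snar, so each run depends on the last
tar --listed-incremental=full.snar -cf inc1.tar data

# Differential: always diff against the full-backup snapshot
cp base.snar run.snar
tar --listed-incremental=run.snar -cf diff1.tar data
```

To restore, the incremental chain needs full.tar plus every inc archive in order; the differential needs only full.tar plus the latest diff archive.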
About a year ago I swear everyone was going to podman, but in the last few months I see nothing but docker references.
Podman is supposed to be a drop-in replacement - at least, that's how it was advertised. I haven't touched anything in six months.
I use it and much prefer it. Mostly because of rootless mode (I know Docker has made attempts to improve this in the last year or so), not futzing with my iptables, and better handling of pushing images between hosts (it's been over a year since I touched any of that infra; I just remember it being more of a hassle with Docker, which took an "our way or no way" approach).
The biggest issue I have with Podman is the pace of its improvement against the rate of Debian releases!
I think podman is more secure and simpler, but not as ergonomic to have locally (it's not quite a drop-in for docker - no real docker compose support, for example)
Podman is the default for k8s last I heard
Some distributions (like openSuSE) also enable KbdInteractiveAuthentication by default so just disabling PasswordAuthentication won't work.
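So on such systems you'd want to disable both explicitly (sshd_config sketch; verify the effective settings with `sshd -T` afterwards):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
# On older OpenSSH releases the same option is spelled:
# ChallengeResponseAuthentication no
```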
david@desktop:~$ nmap -p 22 --script ssh-auth-methods becomesovran.com
Starting Nmap 7.92 ( https://nmap.org ) at 2024-08-25 23:31 EDT
Nmap scan report for becomesovran.com (162.213.255.209)
Host is up (0.066s latency).
rDNS record for 162.213.255.209: server1.becomesovran.com
PORT STATE SERVICE
22/tcp open ssh
| ssh-auth-methods:
| Supported authentication methods:
| publickey
| gssapi-keyex
| gssapi-with-mic
| password
|_ keyboard-interactive
Nmap done: 1 IP address (1 host up) scanned in 0.86 seconds
david@desktop:~$
As far as I can tell AuthenticationMethods publickey is the right way to do it these days, but I'd love to know if that's not the case.
ssh -v localhost echo 2>&1 | grep continue
(obviously replacing "localhost" with whatever server you want, and you can put anything you want where "echo" is, but that's the best no-op I've come up with)
Cheers.
[1] I'm DevOps there! ;)
Here is how setting this all up would look in NixOS (modulo some details & machine-specific configuration). It's <100 lines, can be executed/configured with a single CLI command (even from a different machine!), rolled back easily if things go wrong, and can be re-used on any NixOS machine :)
{
  networking = {
    # Server hostname
    hostName = "myserver";
    # Firewall
    firewall = {
      enable = true;
      allowedTCPPorts = [ 80 443 2222 ];
    };
  };

  # Users
  users.users = {
    newuser = {
      isNormalUser = true;
      home = "/home/newuser";
      hashedPassword = "my-hashed-pwd";
      openssh.authorizedKeys.keys = [ "my-pub-key" ];
    };
  };

  # SSH
  services.openssh = {
    enable = true;
    ports = [ 2222 ];
    settings = {
      PermitRootLogin = "no";
      PasswordAuthentication = false;
      AllowUsers = [ "newuser" ];
    };
    # Note: sshd_config does not allow trailing comments on option lines,
    # so comments go on their own lines here.
    extraConfig = ''
      # Limit authentication attempts
      MaxAuthTries 3
      # Client alive interval in seconds
      ClientAliveInterval 300
      # Maximum client alive count
      ClientAliveCountMax 2
    '';
  };
  services.fail2ban.enable = true;

  # Nginx + SSL via LetsEncrypt
  services.nginx = {
    enable = true;
    recommendedOptimisation = true;
    recommendedProxySettings = true;
    recommendedTlsSettings = true;
    virtualHosts = {
      "example.com" = {
        locations."/" = {
          proxyPass = "http://localhost:8080";
          proxyWebsockets = true;
        };
        forceSSL = true;
        enableACME = true;
      };
    };
  };
  security.acme = {
    acceptTerms = true;
    defaults.email = "myemail@gmail.com";
    certs."example.com" = {
      dnsProvider = "cloudflare";
      environmentFile = ./my-env-file;
    };
  };

  # Logrotate
  services.logrotate = {
    enable = true;
    configFile = pkgs.writeText "logrotate.conf" ''
      /var/log/nginx/*.log {
        weekly
        missingok
        rotate 52
        compress
        delaycompress
        notifempty
        create 0640 www-data adm
        sharedscripts
        postrotate
          [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
        endscript
      }
    '';
  };

  # Bonus: auto-upgrade from GH repo
  system.autoUpgrade = {
    enable = true;
    flake = "github:myuser/nixos-config";
    flags = [
      "-L" # print build logs
      "--refresh" # do not use cached Flake
    ];
    dates = "00:00";
    allowReboot = true;
    randomizedDelaySec = "45min";
  };
}

Getting into it has a learning curve, but it's honestly so much easier in a lot of ways, too.
I recently tried to get into NixOS for the sake of learning something new. I'm struggling to find a proper reason to use it as a personal daily driver.
Ansible/Puppet or NixOS would be better, but this is what works in Self Hosting.
Security is an onion: you can add layers. There is no perfect security. You can add hurdles and hope you make yourself too difficult for your adversary. Some hurdles add more than others, and not using well-known ports is on the lesser end of the scale. You might still find it worthwhile, just so you have cleaner logs to sift through.
And then maybe automating all of it with something like Ansible.
When I looked at it, it was like “yeah you can run Docker or k3s,” and I think Hashicorp had their own version, but it seemed like folks didn't really bother? Also like setting up virtual networks among VPSes seemed like it required advanced wizardry.
I have enough things that I'm 100% confident I'd have run into dependency issues by now without containerization, but with Dockerfiles it's trivial to keep them separate. As a bonus, compose.yml files are basically the lingua franca for describing deployments these days, so you can almost always find an example in the official docs for any given service you might want to host and get lots of help.
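As a sketch of that lingua franca, here's a minimal compose.yml (the image, port, and volume are illustrative; the shape is what every service's docs follow):

```yaml
# compose.yml -- minimal example deployment
services:
  web:
    image: nginx:alpine          # any service image from its official docs
    ports:
      - "8080:80"
    volumes:
      - ./site:/usr/share/nginx/html:ro
    restart: unless-stopped
```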
Depends on what you're deploying, really.
If it's one Go service per host, there's no real need. Just a unit file and the binary. Your deployment scheme is scp and a restart.
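That scheme is small enough to sketch in full - a minimal unit file, plus the two-command deploy (host, user, and service name are made up):

```shell
# Write a minimal systemd unit for a single Go binary
cat > /tmp/myservice.service <<'EOF'
[Unit]
Description=My Go service
After=network.target

[Service]
ExecStart=/usr/local/bin/myservice
Restart=on-failure
User=myservice

[Install]
WantedBy=multi-user.target
EOF

# The deploy itself: copy the binary and restart.
# (Commented out here because it needs a real host.)
# scp ./myservice deploy@myhost:/usr/local/bin/myservice
# ssh deploy@myhost 'sudo systemctl restart myservice'
```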
For more complicated setups, I've used docker compose.
> Also like setting up virtual networks among VPSes seemed like it required advanced wizardry.
Another 'it depends'.
If you're running a small SaaS application, you probably don't need multiple servers in the first place.
If you want some for redundancy, most providers offer a 'private network', where bandwidth is unmetered. Each compute provider is slightly different: you'll want to review their docs to see how to do it correctly.
Tailscale is another option for networking, which is super easy to set up.
But I'm also one of those weirdos that does all of their development in a VM. I might be a tiny bit paranoid.
> Also like setting up virtual networks among VPSes seemed like it required advanced wizardry.
Did you try Nebula? Once you get the hang of it, it's pretty simple.