This is what I do in 2021:
* Set up spiped[1] in front of SSH
* Install and set up nftables[2].
* Lock down every service as much as possible in systemd[3]. (If the service ships with the distro, just use drop-in files[4].)
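For example, a hardening drop-in for a packaged service might look like this (the service name and the exact directives are illustrative; [3] walks through the full set of options):

```
# /etc/systemd/system/nginx.service.d/harden.conf
[Service]
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
# ProtectSystem=strict makes the filesystem read-only, so grant
# write access explicitly where the service needs it:
ReadWritePaths=/var/log/nginx
```

After `systemctl daemon-reload` and a restart, `systemd-analyze security nginx.service` will score the result.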
[1] https://www.tarsnap.com/spiped.html
[2] https://wiki.archlinux.org/title/nftables
[3] https://ruderich.org/simon/notes/systemd-service-hardening
[4] https://wiki.archlinux.org/index.php?title=Systemd&oldid=704...
There is a gulf of difference between hardening a Linux server for an independent web shop and running Linux at Google. This article very much feels like it's aimed at sysadmins of the former rather than SREs of the latter (the fact that they're not even running configuration management like Puppet is a dead giveaway).
I agree that there is a difference between running Linux at a FAANG and running Linux for an independent web-shop.
However, my advice was targeted at hobbyists like me who like to run their own webserver (independently of their employer). And I think it is appropriate for an independent web shop as well.
Auditing is not reserved for bigcorps: I personally like to log diffs between "nft list ruleset" and "cat /etc/nftables.conf" on my personal servers. If you run fail2ban this becomes impossible, because fail2ban constantly rewrites the live ruleset.
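A minimal sketch of that audit, assuming the saved config is a plain dump of the ruleset (file names are illustrative):

```shell
#!/bin/sh
# Print a unified diff when the live ruleset has drifted from the
# saved config; exit status is nonzero on drift (diff's convention).
ruleset_drift() {
    diff -u "$1" "$2"   # $1 = saved config, $2 = live ruleset dump
}

# Production usage (assumes root and nft installed):
#   nft list ruleset > /tmp/live.nft
#   ruleset_drift /etc/nftables.conf /tmp/live.nft \
#       || logger -t nft-audit "nftables ruleset drift detected"
```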
Also, IMHO, fail2ban doesn't really solve the problem: a botnet could still brute-force your SSH. All it does is prevent a single source from trying too often, and it can even lock you out during an emergency. spiped is, IMHO, easier to set up and cheaper to maintain, and it provides a higher degree of protection. (As I explained before, it is effectively a 256-bit combination port knocker.)
I personally think that fail2ban is a cargo-cult.
Moving the ssh port (even without port knocking) does a lot more to cut out log messages.
Or make ssh IPv6 only.
spiped might be great, but I found the above on their website. The fact that it is only ~6k lines of code does not mean it lacks security vulnerabilities, nor does it make them particularly unlikely. You still have to audit it; fewer LOC just means the audit takes less time, it is no guarantee that vulnerabilities are less likely.
Plus they could have used better crypto.
Here are my personal reasons why I use spiped:
* spiped is transparent thanks to ProxyCommand[1]. This allows me to just run "ssh host", and thanks to my ssh_config it connects.
* spiped can be run in a very hardened way.[2] It just needs to listen() on a socket, connect to another one, and read a key file. WireGuard needs complex network access: it has to create interfaces and open raw sockets.
* spiped is much simpler to manage: just run a daemon. With WireGuard there are two possibilities:
** Every host runs WireGuard: you might need to connect to multiple hosts at a time, you need to manage internal IP conflicts, etc.
** One central WireGuard server: you have a single point of failure and can't SSH anywhere if that host is down.
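On the client side, the transparency comes from a ProxyCommand entry; something like this (hostname, port, and key path are assumptions):

```
# ~/.ssh/config
Host myserver
    HostName myserver.example.com
    ProxyCommand spipe -t %h:8022 -k ~/.spiped/ssh.key
```

After that, a plain `ssh myserver` tunnels through spiped without any extra typing.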
Don't get me wrong, I love WireGuard and use it all the time as a VPN, but I don't think it's appropriate as a layer of protection in front of my SSH server.
Both WireGuard and spiped are written by very smart people.
[1] https://man.openbsd.org/ssh_config#ProxyCommand
[2] https://ruderich.org/simon/notes/systemd-service-hardening
"You can also use spiped to protect SSH servers from attackers: Since data is authenticated before being forwarded to the target, this can allow you to SSH to a host while protecting you in the event that someone finds an exploitable bug in the SSH daemon -- this serves the same purpose as port knocking or a firewall which restricts source IP addresses which can connect to SSH."
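For reference, the server side is a single decrypting daemon in front of sshd; a sketch as a hardened systemd unit (the port, paths, and hardening directives are assumptions):

```
# /etc/systemd/system/spiped-ssh.service
# Generate the key once with:
#   dd if=/dev/urandom bs=32 count=1 of=/etc/spiped/ssh.key
[Unit]
Description=spiped decryption proxy in front of sshd
After=network.target

[Service]
ExecStart=/usr/bin/spiped -F -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/ssh.key
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes

[Install]
WantedBy=multi-user.target
```

`-d` puts spiped in decrypt mode, `-F` keeps it in the foreground so systemd supervises it directly.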
Also, do I really HAVE to change something so that it is secure? Isn't a Ubuntu server secure out of the box? With a strong, unique root password of course.
* manually (like this guide)
* via CI/CD using tools like Packer
* Cloned (eg CloneZilla, or cloud snapshot)
* via configuration management (eg Puppet, Chef, Ansible, etc)
* via other initialisation methods such as CloudInit
Aside from the manual option, there's no wrong way to do any of these, and some of these approaches complement each other. Many of them have a multitude of different solutions available that differ significantly in setup.
A lot of the time it boils down to preferences as much as it does best practices.
As for why servers aren't locked down more from the outset: some distros are, and there are also images of popular distros that have been pre-hardened for you. Ubuntu isn't the best for secure defaults, but its target audience is more diverse than that of RHEL (Red Hat Enterprise Linux). And as I'm sure you're aware, security is often a trade-off against convenience, so Ubuntu takes the approach of being slightly more convenient for the average user at the cost of being less secure by default.
Even your question of whether it's already secure by default is meaningless if you don't say what you are using the computer for and what kinds of threats you are protecting against.
I think the people working at Red Hat are more competent at moving security forward on Linux than Ubuntu is. Ubuntu hardly innovates at all; its target market seems to be desktop users (or server admins who are only familiar with the desktop version). Personally I wouldn't put Ubuntu (or any other distribution) on a server without an elaborate playbook to tailor it to my needs (and in my experience that playbook is always more complex on Ubuntu). This is where Ubuntu fails for me: it makes some weird assumptions about what I want in terms of security (which are absent in Debian). YMMV.
Although I think a distribution's goal should be accessibility and configurability; in that regard none of them prioritize security features as much as I'd like to see (but knowing myself, I would probably complain the second those features became too opinionated, which they most certainly would; which is why I think Debian does the right thing by not making opinionated assumptions).
Compared to a Debian standard install, Ubuntu is more bloated, its interim releases are much buggier, and Ubuntu LTS is less stable than Debian stable. Ubuntu's root certificate store is constantly outdated (though Debian might have the same issue). Their AppArmor configuration lags behind... whatever is good they usually inherit from Debian.
All distributions could do more to lock down processes with seccomp filters in systemd. It would be interesting to see what lynis⁰ discovers when comparing fresh server installs of Ubuntu and others. In over 20 years I have seen some real shit-shows in production with all distributions except Debian (again, YMMV).
Jason Donenfeld, the creator of Wireguard said about Ubuntu on the latest¹ SCW podcast:
> Ubuntu is always, a horrible distribution to work with, ...
> Well, they [Ubuntu] sort of inherit from Debian, but they're like not super tuned in to what's going on and like not really on top of things. And so it was just always, it's still a pain to like make sure Ubuntu is working well. but I don't know, it's not too much interesting to say about the distro story, just open source politics as usual.
While somewhat anecdotal, I trust that Jason knows what he is talking about, having been on the Linux kernel security team for ages and being familiar with the quirks of various downstream vendors. His development cycle for WG is: implement -> decompile -> formal-verification -> rinse/repeat :-/
All of Linux security is a shit show. This is why grsecurity is charging money for its service.
¹ https://securitycryptographywhatever.buzzsprout.com/1822302/...
Uhh, what? Isn't its largest target the cloud/server deployment market?
> Ubuntu's root certificate store is constantly outdated
Uhh, for me ca-certificates updates what, twice a year? Certainly it's a lot easier for me to keep it updated on Ubuntu than on RHEL/CentOS.
> Their apparmor configuration lags behind, ... whatever is good they usually inherit from Debian.
AppArmor and SELinux are objective failures for the most part. The entire point of snaps/Flatpaks is to hide away the nonsense configuration in favor of an actual permission model. I would say snaps are actually enabling AppArmor to be used and enforced, unlike the generic AppArmor profiles people generate.
> Jason Donenfeld, the creator of Wireguard said about Ubuntu on the latest¹ SCW podcast:
What specific aspects is he referring to here? WireGuard has been baked into the kernel. I can understand packaging updates being a mess, and updating universe/LTS, but that is problematic for every Linux OS out there.
This is precisely why snaps were introduced: you now have an apparmor/seccomp-enforced permission model and an easy way for developers to push directly to multiple Ubuntu versions without having to worry about OS compatibility.
SSH is in all likelihood the most secure server software that you can have on a Linux box. Everything else you put in front of it is likely to be a downgrade.
One advantage is that if your firewall is set up right, it's completely invisible: unauthenticated UDP packets are dropped, just as on any other unused UDP port.
I still configure SSH to best practices just in case a configuration blunder inadvertently causes the firewall to accept connections.
If the IP address doesn't change very often, it's not a bad idea to set up a dynamic DNS script and base your allow list on that subdomain rather than the raw IP address.
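A cron-able sketch of that idea, assuming an existing nftables set named "admins" and a hypothetical dynamic-DNS name home.example.org:

```shell
#!/bin/sh
# Print the first IPv4 address for a hostname; empty output on failure.
resolve_v4() {
    getent ahostsv4 "$1" | awk 'NR==1 {print $1}'
}

# Cron usage (requires root and a set "admins" in table inet filter):
#   ip=$(resolve_v4 home.example.org)
#   [ -n "$ip" ] || exit 1          # keep the old entry if DNS fails
#   nft flush set inet filter admins
#   nft add element inet filter admins "{ $ip }"
```

Keeping the old entry on resolution failure matters: flushing the set on a transient DNS error would lock you out.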
> If SSH turns out to have a massive vulnerability that bypasses keyauth then every service on the net will be torn down
Those seem to contradict each other.
I agree that black-/whitelisting should not be the center of your security architecture, but it sure helps in the scenario of an authentication-bypass vuln.
Assuming you're not travelling ofc.
PS: for reference, I'm from Belgium. Odds are slim that a scanning IP would come from here to my server abroad.
Almost every time I've used a WAF's geo-IP blocking tool I've either personally experienced or had customers complain about being blocked incorrectly.
If you're dynamically getting IP addresses and you're allow-listing based on country of origin, expect to get locked out eventually even if you're sitting in the same place.
PasswordAuthentication no
Of course you should make really sure you actually have a working public key in your user's "~/.ssh/authorized_keys" file and/or in "/root/.ssh/authorized_keys", otherwise you might lock yourself out of the server. But the point here is: given the choice, you should never regularly log in with an SSH password if you can also use a key.
In addition to that I have found that using the setting:
AllowUsers "example_account@aaa.bbb.ccc.ddd"
(if you are connecting from a static IP) will cut down on log spam and connection initializations by a tremendous amount.

My servers only have `root` and my own sudo user (plus the default system users). I also run all apps in Docker. I don't think this would be an issue for me.
https://netflixtechblog.com/linux-performance-analysis-in-60...
Using the root account at all is obsolete, IMHO. Fedora, CentOS and RHEL all allow me to skip setting a root password and just use my admin user.
Say SSH disallows password login but I know the root password: if I SSH onto the box as another user, I can then su to root. If the root account is locked, I can't do this.
Security measures to deploy depend on the risk level involved, e.g., potential costs of being hacked.
Measures like SELinux, grsec, fwknop, snort, IDS (Tripwire or Samhain), HSMs, hardware entropy sources, split SSH/TLS, and microservices compartmentalized into VMs rather than Docker containers all have their place.
I know it's a lot to ask, but maybe there is such a guide available that does not just fall back to talking about provider-specific features (e.g. IAM).
You just need an SSH connection to the target servers, with Python installed on them. Of course, you have to write rules for setting up a server, provisioning users, monitoring (deploying Prometheus, pushing logs to a central server...). There are various plugins for integrating with providers, but the basic features are provider-independent.
Ansible is far from perfect (dependency on Python, inconsistent syntax, abuse of aliases, no strict mode...), but it's rather easy to learn and I've used it successfully (at a small scale).
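For a flavor of what those rules look like, a minimal playbook might be (the host group and handler names are illustrative):

```yaml
# site.yml
- hosts: webservers
  become: true
  tasks:
    - name: Disallow SSH password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Run it with `ansible-playbook -i inventory site.yml`.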
These things show up, but are completely irrelevant to security.
* Use -t ed25519 to generate keys; it's much more efficient than RSA at the same security level.
* Don't use ufw: it easily becomes a big mess and is a pain to manage with Ansible. firewalld is a much better high-level firewall, preferably with the nftables backend.
If you have a bit bigger fleet and manage a CA you could look into using signed SSH certificates instead of public keys. That way you can provision access centrally without adding individual keys to individual servers.
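The certificate flow is short enough to sketch (the names, validity period, and paths are illustrative):

```shell
# One-time: create the CA keypair (keep ca_key offline/locked away)
ssh-keygen -t ed25519 -f ca_key -N '' -q
# A user generates their own key as usual
ssh-keygen -t ed25519 -f id_ed25519 -N '' -q
# The CA signs the user's public key: identity "alice", principal
# "alice", valid for 52 weeks; this writes id_ed25519-cert.pub
ssh-keygen -s ca_key -I alice -n alice -V +52w id_ed25519.pub
# On each server, trust the CA instead of per-user keys by adding
# to sshd_config:
#   TrustedUserCAKeys /etc/ssh/ca_key.pub
```

Revoking access then means letting certificates expire (or maintaining a RevokedKeys file) rather than hunting down authorized_keys entries on every host.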
One thing that's not immediately obvious is that docker does not care about your firewall.
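The reason is that Docker publishes ports by inserting its own NAT rules, which are consulted before your INPUT-chain rules ever see the traffic. Two common mitigations (the image name, port, and interface are illustrative):

```
# 1. Bind published ports to loopback so they are not world-reachable:
docker run -d -p 127.0.0.1:8080:80 nginx

# 2. Filter in the DOCKER-USER chain, which Docker does consult:
iptables -I DOCKER-USER -i eth0 -p tcp --dport 8080 -j DROP
```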
I'd also ditch the use of any shared credential other than the emergency root password, which should be locked away and not actually known by any person. Your mechanism for syncing SSH pubkeys (which, btw, isn't specified in the article, which in my experience means it doesn't really exist :D) should instead populate each user's keys directory, and there should be one login per user.
https://lamda-chops.bearblog.dev/steps-i-take-when-setting-u...
Anyone else found it sus that the OP didn't mention backups?
If you're concerned you will lose 5 minutes of work make use of snapshotting if available.
Of course the first-5-minutes title is hyperbolic, but backups are beside the point when you're first setting up and securing a machine.
I don't think it is. I've managed a server directly connected to the internet with a US government IP, and it was being port scanned from a Chinese IP within minutes of being turned on. If you are a target, then there is an adversary out there that is patiently waiting for the opportunity to exploit an unpatched vulnerability in new installs, as if your security is otherwise good it might be how they get their foot in the door on your network.
(In our case I really did have a "5 minute plan": log in as soon as the fresh install booted, set up a firewall, lock down the SSH server, and install fail2ban ASAP. I'd then check system logs to see if anyone had gotten in before proceeding. Time was of the essence.)
ChallengeResponseAuthentication no

This article is from 2013, after all.