When PerSourcePenalties are enabled, sshd(8) will monitor the exit
status of its child pre-auth session processes. Through the exit
status, it can observe situations where the session did not
authenticate as expected. These conditions include when the client
repeatedly attempted authentication unsuccessfully (possibly indicating
an attack against one or more accounts, e.g. password guessing), or
when client behaviour caused sshd to crash (possibly indicating
attempts to exploit sshd).
When such a condition is observed, sshd will record a penalty of some
duration (e.g. 30 seconds) against the client's address.
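As a sketch of what this looks like in sshd_config: the keyword names below are from the OpenSSH 9.8 release notes, and the penalty values are illustrative only; check sshd_config(5) on your system for the exact spelling and defaults.

```
# Hypothetical example values; verify against your sshd_config(5).
PerSourcePenalties crash:90 authfail:5 grace-exceeded:20 max:600 min:15
# Addresses exempt from penalties, e.g. internal management networks.
PerSourcePenaltyExemptList 10.0.0.0/8,192.168.0.0/16
```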
https://github.com/openssh/openssh-portable/commit/81c1099d2...

It's not really a reversible patch that gives anything away to attackers: it changes the binary architecture in a way that has the side effect of removing the specific vulnerability and also mitigates the whole exploit class, if I understand it correctly. Very clever.
That's a previously-announced feature for dealing with junk connections that also happens to mitigate this vulnerability because it makes it harder to win the race. Discussed previously https://news.ycombinator.com/item?id=40610621
On June 6, 2024, this signal handler race condition was fixed by commit
81c1099 ("Add a facility to sshd(8) to penalise particular problematic
client behaviours"), which moved the async-signal-unsafe code from
sshd's SIGALRM handler to sshd's listener process, where it can be
handled synchronously:
https://github.com/openssh/openssh-portable/commit/81c1099d22b81ebfd20a334ce986c4f753b0db29
Because this fix is part of a large commit (81c1099), on top of an even
larger defense-in-depth commit (03e3de4, "Start the process of splitting
sshd into separate binaries"), it might prove difficult to backport. In
that case, the signal handler race condition itself can be fixed by
removing or commenting out the async-signal-unsafe code from the
sshsigdie() function.
The cleverness here is that this commit is simultaneously "a previously-announced feature for dealing with junk connections", a mitigation for the exploit class against similar but unknown vulnerabilities, and a patch for the specific vulnerability, because it "moved the async-signal-unsafe code from sshd's SIGALRM handler to sshd's listener process, where it can be handled synchronously". It fixes the vulnerability as part of doing something that makes sense on its own, so you wouldn't know it's the patch even looking at it.
- /* Log error and exit. */
- sigdie("Timeout before authentication for %s port %d",
-     ssh_remote_ipaddr(the_active_state),
-     ssh_remote_port(the_active_state));
+ _exit(EXIT_LOGIN_GRACE);

[1] https://security-tracker.debian.org/tracker/source-package/o...
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2024-6387 (tracking task)
https://bugzilla.redhat.com/show_bug.cgi?id=2294905 (Fedora 39 issue)
Amazon Linux 2023 is affected; Amazon Linux 1 & 2 are not. Status updates will be posted to https://explore.alas.aws.amazon.com/CVE-2024-6387.html
GLSA 202407-09: https://glsa.gentoo.org/glsa/202407-09
Package metadata & log: https://packages.gentoo.org/packages/net-misc/openssh
> Successful exploitation has been demonstrated on 32-bit Linux/glibc systems with ASLR. Under lab conditions, the attack requires on average 6-8 hours of continuous connections up to the maximum the server will accept. Exploitation on 64-bit systems is believed to be possible but has not been demonstrated at this time. It's likely that these attacks will be improved upon.
void
sigdie(const char *fmt,...)
{
#ifdef DO_LOG_SAFE_IN_SIGHAND
va_list args;
va_start(args, fmt);
do_log(SYSLOG_LEVEL_FATAL, fmt, args);
va_end(args);
#endif
_exit(1);
}
to this:

void
sshsigdie(const char *file, const char *func, int line, const char *fmt, ...)
{
va_list args;
va_start(args, fmt);
sshlogv(file, func, line, 0, SYSLOG_LEVEL_FATAL, fmt, args);
va_end(args);
_exit(1);
}
which lacks the #ifdef. What could have prevented this? More eyes on the pull request? It's wild that software nearly the entire world relies on for secure access is maintained by seemingly just two people [2].
[1] https://github.com/openssh/openssh-portable/commit/752250caa...
[2] https://github.com/openssh/openssh-portable/graphs/contribut...
#include <sys/resource.h>
#include <unistd.h>

/* Code here must be async-signal-safe! Locks may be in an indeterminate state. */
void CloseAllFromTheHardWay(int firstfd)
{
    struct rlimit lim;
    getrlimit(RLIMIT_NOFILE, &lim);
    int maxfd = (lim.rlim_cur == RLIM_INFINITY) ? 1024 : (int)lim.rlim_cur;
    for (int fd = maxfd; fd >= firstfd; --fd)
        close(fd);
}
Although to be honest, getrlimit isn't actually on the list here: https://man7.org/linux/man-pages/man7/signal-safety.7.html
But I hope that removing the comment, or modifying code that carries an async-signal-safe comment, might have been noticed in review. The code you quoted only has the mention of SAFE_IN_SIGHAND to suggest that this code might need to be async-signal-safe.
It's open source. If you feel you could do a better job, then by all means, go ahead and fork it.
You're not entitled to anything from open source developers. They're allowed to make mistakes, and they're allowed to have as many or as few maintainers/reviewers as they wish.
https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...
obligatory xkcd https://xkcd.com/2347/
1. Using a proper programming language that doesn't allow you to setup arbitrary functions as signal handlers (since that's obviously unsafe on common libcs...) - e.g. you can't do that in safe Rust, or Java, etc.
2. Using a well-implemented libc that doesn't cause memory corruption when calling async-signal-unsafe functions but only deadlocks (this is very easy to achieve by treating code running in signals as a separate thread for thread-local storage access purposes), and preferably also doesn't deadlock (this requires no global mutexes, or the ability to resume interrupted code holding a mutex)
3. Thinking when changing and accepting code, not like the people who committed and accepted [1] which just arbitrarily removes an #ifdef with no justification
4. Using simple well-engineered software written by good programmers instead of OpenSSH
Point (3) seems like a personal attack on the developers/reviewer, who made human errors. Humans do in fact make mistakes, and the best build toolchain/test suite in the world won’t save you 100% of the time.
Point (4) seems to imply that OpenSSH is not well-engineered, simple, or written by good programmers. While all of that is fairly subjective, it is (I feel) needlessly unkind.
I’d invite you to recommend an alternative remote access technology with an equivalent track record of security and stability in this space — I’m not aware of any.
Except for those of us who live in a world where most of their OS and utilities and libraries were originally written decades before Rust existed, and often even before Java existed. And where "legacy" C code pretty much underpins everything running on the public internet and which you need to connect to.
There's a very real risk that reimplementing every piece of code on a modern internet connected server in exciting new "safe" languages and libc-type things - by a bunch of "modern" programmers who do not have the learning experience of decades worth of mistakes - will end up with not just new implementation of old and fixed bugs and security problems, but also with new implementations that are incompatible in strange and hard to debug ways with every existing piece of software that uses SSH protocols as they are deployed in the field.
I, for one, am not going to install and put into production version 1.0 of some new OpenSSH replacement written in Rust or Go or Java, which has a non-zero chance of strange edge-case bugs that differ when connecting to SSH on different Linux/BSD/Windows distros or versions, across different CPU architectures, and probably has subtly different bugs when connecting to different cloud hyperscalers.
This is actually an interesting variant of a signal race bug. The vulnerability report says, “OpenBSD is notably not vulnerable, because its SIGALRM handler calls syslog_r(), an async-signal-safer version of syslog() that was invented by OpenBSD in 2001.” So a signal-safety mitigation encouraged OpenBSD developers to put non-trivial code inside signal handlers, which becomes unsafe when ported to other systems. They would have avoided this bug if they had done one of their refactoring sweeps to minimize the amount of code in signal handlers, according to the usual wisdom and common unix code guidelines.
I notice that nowadays signalfd() looks like a much better solution to the signal problem, but I've never tried using it. I think I'll give it a go in my next project.
https://github.com/bminor/musl/blob/master/src/misc/syslog.c
Everything there is either on stack or in static variables protected from reentrancy by the lock. The {d,sn,vsn}printf() calls there don't allocate in musl, although they might in glibc. Have I missed anything here?
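For comparison, the conservative way to emit a log line from handler context is to avoid formatting and locking entirely: pre-built message, one write(2) call, which signal-safety(7) does list as async-signal-safe. A minimal sketch (writing to stdout here just for demonstration):

```c
#include <unistd.h>

/* Emit a fixed message using only write(2), which signal-safety(7)
 * lists as async-signal-safe. No formatting, no locks, no allocation. */
static void log_from_handler(void)
{
    static const char msg[] = "timeout before authentication\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void)
{
    log_from_handler();   /* called directly here just to show the output */
    return 0;
}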
https://www.freebsd.org/security/advisories/FreeBSD-SA-24:04...
> Finally, if sshd cannot be updated or recompiled, this signal handler race condition can be fixed by simply setting LoginGraceTime to 0 in the configuration file. This makes sshd vulnerable to a denial of service (the exhaustion of all MaxStartups connections), but it makes it safe from the remote code execution presented in this advisory.
Setting 'LoginGraceTime 0' in sshd_config file seems to mitigate the issue.
> If the value is 0, there is no time limit.
Isn't that worse?
(As the advisory notes, you do then have to deal with the DoS which the timeout setting is intended to avoid, where N clients all connect and then never disconnect, and they aren't timed-out and forcibly disconnected on the server end any more.)
> In our experiments, it takes ~10,000 tries on average to win this race condition; i.e., with 10 connections (MaxStartups) accepted per 600 seconds (LoginGraceTime), it takes ~1 week on average to obtain a remote root shell.
You can belt and suspenders with an external tool that watches for sshd in pre-auth for your real timeout and kills it or drops the tcp connection [1] (which will make the sshd exit in a more orderly fashion)
[1] https://security-tracker.debian.org/tracker/CVE-2024-6387
One thing I notice (as an independent person, who isn't doing any of the work!) is that it often feels like, in order to 'win', people are expected to find a full chain which gives them remote access, rather than just finding one issue and getting it fixed / getting paid for it.
It feels to me like finding a single hole should be sufficient -- one memory corruption, one sandbox escape. Maybe at the moment there are just too many little issues, that you need a full end-to-end hack to really convince people to take you seriously, or pay out bounties?
So there is a need to differentiate between "real" security bugs [like this one] and non-security-impacting bugs, and demonstrating how an issue is exploitable is therefore very important.
I don't see the need to demonstrate this going away any time soon, because there will always be no end of non-security-impacting bugs.
Or there is no real consideration of whether that's actually an escalation of context. Like, "Oh, if I can change these Postgres configuration parameters, I can cause a problem", or "Oh, if I can change values in this file I can cause huge trouble". Except modifying that file or that config parameter requires root/superuser access, so there is no escalation, because you have full access already anyhow.
I probably wouldn't have to look at documentation too much to get Postgres to load arbitrary code from disk if I have superuser access to the Postgres already. Some COPY into some preload plugin, some COPY / ALTER SYSTEM, some query to crash the node, and off we probably go.
But yeah, I'm frustrated that we were forced to route our security@ domain to support to filter out this nonsense. I wouldn't be surprised if we miss some actually important issue unless demonstrated like this, but it costs too much time otherwise.
I believe this has happened to curl several times recently.
Hospitals often try to make this argument about insecure MySQL connections inside their network, for example. Then something like Heartbleed happens and lo and behold, all the "never see an adversary" soft targets are exfiltrated.
Let me give you a different perspective.
Imagine I make a serialisation/deserialisation library which would be vulnerable if you fed it untrusted data. This is by design, users can serialise and deserialise anything, including lambda functions. My library is only intended for processing data from trusted sources.
To my knowledge, nobody uses my library to process data from untrusted sources. One popular library does use mine to load configuration files, they consider those a trusted data source. And it's not my job to police other people's use of my library anyway.
Is it correct to file a CVE of the highest priority against my project, saying my code has a Remote Code Execution vulnerability?
However, if you (or anybody else) catch a program passing untrusted data to any library that says "trusted data only", that's definitely CVE-worthy in my books, even if you cannot demonstrate the full attack chain. That CVE should be targeted at the program that passes untrusted data to the trusted interface.
That said, if you're looking for a bounty instead of just some publicity in reward for publishing the vulnerability, you must fulfil the requirements of the bounty, and those typically say that the bounty will be paid for a complete attack chain only.
I guess that's because companies paying bounties are typically interested in real world attacks and are not willing to pay bounties for theoretical vulnerabilities.
I think this is problematic because it causes bounty hunters to keep theoretical vulnerabilities secret and wait for possible future combination of new code that can be used to attack the currently-theoretical vulnerability.
I would argue that it's much better to fix issues while they are still theoretical only. Maybe pay a lesser bounty for theoretical vulnerabilities, and a reduced payment for the full attack chain if it's based on a publicly known theoretical vulnerability. Just make sure that the combination pays at least as well as publishing a full attack chain for a 0-day vulnerability. That way there would be an incentive to publish theoretical vulnerabilities immediately for maximum pay, because otherwise somebody else might catch the theoretical part and publish faster than you can.
No need to imagine: PyYAML has exactly that situation. There have been attempts to make safe deserialization the default, with an attempt to release a new major version (rolled back), and it settled on having a required argument specifying which mode/loader to use. See: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=PyYAML
Yes, it is correct to file a CVE of the highest priority against your project, because "only intended for processing data from trusted sources" is a frankly ridiculous policy for a serialization/deserialization library.
If it's your toy project that you never expected anyone to use anyway, you don't care about CVEs. If you want to be taken seriously, you cannot play pass-the-blame and ignore the fact that your policy turns the entire project into a security footgun.
There are almost always various weaknesses which do not become exploitable until and unless certain conditions are met. This also becomes evident in contests like Pwn2Own where multiple vulnerabilities are often chained to eventually take the device over and remain un-patched for years. Researchers often sit on such weaknesses for a long time to eventually maximize the impact.
Sad but that is how it is.
It should be.
> Maybe at the moment there are just too many little issues...
There are so many.
Minimal patches for those can't/don't want to upgrade: https://marc.info/?l=oss-security&m=171982317624594&w=2
[1] https://www.tarsnap.com/spiped.html
* spiped can be used transparently by just putting a "ProxyCommand" in your ssh_config. This means you can connect to a server just by using "ssh", normally. (as opposed to WireGuard, where you need to always be on your VPN, or otherwise connect to your VPN manually before running ssh)
* As opposed to WireGuard, which runs in the kernel, spiped can easily be set up to run as a user, and be fully hardened by using the correct systemd .service configuration [4]
* The protocol is much more lightweight than TLS (used by stunnel), it's just AES, padded to 1024 bytes with a 32 bit checksum. [5]
* The private key is much easier to set up than stunnel's TLS certificate, "dd if=/dev/urandom count=4 bs=1k of=key" and you're good to go.
[1] https://packages.debian.org/bookworm/spiped
[2] https://www.freshports.org/sysutils/spiped/
[3] https://archlinux.org/packages/extra/x86_64/spiped/
[4] https://ruderich.org/simon/notes/systemd-service-hardening
They run sshd with the -D option already, logging everything to stdout and stderr, as their systemd already catches this output and sends it to journal for logging.
So I don't see anywhere they would be calling syslog, unless sshd does it on its own.
At most maybe add OPTIONS=-e into /etc/sysconfig/sshd.
1. It affects OpenSSH versions 8.5p1 to 9.7p1 on glibc-based Linux systems.
2. The exploit is not 100% reliable - it requires winning a race condition.
3. On a modern system (Debian 12.5.0 from 2024), the researchers estimate it takes:
   - ~3-4 hours on average to win the race condition
   - ~6-8 hours on average to obtain a remote root shell (due to ASLR)
4. It requires certain conditions:
   - The system must be using glibc (not other libc implementations)
   - 100 simultaneous SSH connections must be allowed (MaxStartups setting)
   - LoginGraceTime must be set to a non-zero value (default is 120 seconds)
5. The researchers demonstrated working exploits on i386 systems. They believe it's likely exploitable on amd64 systems as well, but hadn't completed that work yet.
6. It's been patched in OpenSSH 9.8p1 released in June 2024.
Stupid question, perhaps, but if those two lines inside the sshd_config are commented out with '#', does this mean that grace period and max. sessions are technically unlimited and therefore potentially vulnerable?
Found my own answer: If the values are commented out, it means that the default values are being used. If the file hasn't been modified the default values are those you see inside the config file.
In our experiments, it takes ~10,000 tries on average to win this race
condition, so ~3-4 hours with 100 connections (MaxStartups) accepted
per 120 seconds (LoginGraceTime). Ultimately, it takes ~6-8 hours on
average to obtain a remote root shell, because we can only guess the
glibc's address correctly half of the time (because of ASLR).
MaxStartups default is 10.

The exploit tries to interrupt handlers that are being run due to the login grace period timing out, so we are already at a point where the authentication workflow has ended without passing all the credentials.
Plus, in the "Practice" section, they discuss using user name value as a way to manipulate memory at a certain address, so they want/need to control this value.
Alerting is useless, with the volume of automated exploits attempted.
More resourceful attackers could automate attempted exploit using a huge botnet, and it'd likely look similar to the background of ssh brute force bots that we already see 24/7/365.
10:30:60 is mentioned in the man for start:rate:full, so I set mine to that value.
Thanks for the quote
Mitigate by using fail2ban?
Nice to see that Ubuntu isn't affected at all
In theory, this could be used (much quicker than the mentioned days/weeks) to get local privilege escalation to root if you already have some type of shell on the system. I would assume that fail2ban doesn't block localhost.
No mention on 22.04 yet.
AMD to the rescue - fortunately they decided to leave the take-a-way and prefetch-type-3 vulnerability unpatched, and continue to recommend that the KPTI mitigations be disabled by default due to performance costs. This breaks ASLR on all these systems, so these systems can be exploited in a much shorter time ;)
AMD’s handling of these issues is WONTFIX, despite the latter (contrary to their assertion) even providing actual kernel data leakage at a higher rate than Meltdown itself…
(This one they’ve outright pulled down their security bulletin on) https://pcper.com/2020/03/amd-comments-on-take-a-way-vulnera...
(This one remains unpatched in the third variant with prefetch+TLB) https://www.amd.com/en/resources/product-security/bulletin/a...
edit: there is a third now, building on the first one, with unpatched vulnerabilities in all Zen 1/Zen 2 as well… so this one is WONTFIX too it seems, like most of the defects TU Graz has turned up.
https://www.tomshardware.com/news/amd-cachewarp-vulnerabilit...
Seriously I don’t know why the community just tolerates these defenses being known-broken on the most popular brand of CPUs within the enthusiast market, while allowing them to knowingly disable the defense that’s already implemented that would prevent this leakage. Is defense-in-depth not a thing anymore?
Nobody in the world would ever tell you to explicitly turn off ASLR on an intel system that is exposed to untrusted attackers… yet that’s exactly the spec AMD continues to recommend and everyone goes along without a peep. It’s literally a kernel option that is already running and tested and hardens you against ASLR leakage.
The “it’s only metadata” line is so tired. Metadata is more important than regular data, in many cases. We kill people, convict people, and control all our security and access control via metadata. Like, yeah, it’s just your ASLR layouts leaking, what’s the worst that could happen? And real data goes too, in several of these exploits, but that’s not a big deal either… not like those SSH keys are important, right?
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Reg file data sampling: Not affected
Retbleed: Not affected
Spec rstack overflow: Vulnerable: Safe RET, no microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Srbds: Not affected
Tsx async abort: Not affected
Only regular stuff (https://www.openssh.com/txt/release-9.8)
Darn - here I was hoping Alpine was properly immune, but it sounds more like "nobody's checked if it works on musl" at this point.
1. Find things that are 0.0.0.0 port 22, example, https://gist.github.com/james-ransom/97e1c8596e28b9f759bac79...
2. Force them to the local network, gcloud compute firewall-rules update default-allow-ssh --source-ranges=10.0.0.0/8 --project=$i;
Defensive programming tells us to minimize code in signal handlers, and the safest is to avoid using signals at all when possible :).
Compared to ssh, WireGuard configs feel too easy to mess up, risking getting locked out if it's the only way of accessing the device.
An RCE in wireguard would be enough -- no need to compromise both.
Because an $80 black box wireless KVM from a foreign country is way more secure! (Just kidding, though it is not internet-accessible by default.)
https://archlinux.org/packages/core/x86_64/openssh/
edit: be sure to manually restart sshd after upgrading; my systems fail during key exchange after the package upgrade until the sshd service is restarted:
% ssh -v 192.168.1.254
OpenSSH_9.8p1, OpenSSL 3.3.1 4 Jun 2024
... output elided ...
debug1: Local version string SSH-2.0-OpenSSH_9.8
kex_exchange_identification: read: Connection reset by peer
Connection reset by 192.168.1.254 port 22
> NB. if you're updating via source, please restart sshd after installing, otherwise you run the risk of locking yourself out.
https://github.com/openssh/openssh-portable/commit/03e3de416...
Edit: Already reported at https://gitlab.archlinux.org/archlinux/packaging/packages/op...
> Hidden path communication enables the hiding of specific path segments, i.e. certain path segments are only available for authorized ASes. In the common case, path segments are publicly available to any network entity. They are fetched from the control service and used to construct forwarding paths.
---
- OpenSSH < 4.4p1 is vulnerable to this signal handler race condition, if not backport-patched against CVE-2006-5051, or not patched against CVE-2008-4109, which was an incorrect fix for CVE-2006-5051;
- 4.4p1 <= OpenSSH < 8.5p1 is not vulnerable to this signal handler race condition (because the "#ifdef DO_LOG_SAFE_IN_SIGHAND" that was added to sigdie() by the patch for CVE-2006-5051 transformed this unsafe function into a safe _exit(1) call);
- 8.5p1 <= OpenSSH < 9.8p1 is vulnerable again to this signal handler race condition (because the "#ifdef DO_LOG_SAFE_IN_SIGHAND" was accidentally removed from sigdie()).
edit: maybe I should add an iptables rule to only allow ssh from my IP.
What benefits do you see? I mean, you still expose some binary that implements authentication and authorization using cryptography.
I think that even RBAC scenarios described in the link above should be achievable with OpenSSH, right?
I don't use port-knocking but I really just don't get all those saying: "It's security theater".
We had not one but two major OpenSSH "near fiascos" (this RCE and the xz lib thing), both rendered unusable for attackers by port knocking.
To me port-knocking is not "security theater": it adds one layer of defense. It's defense-in-depth. Not theater.
And the port-knocking sequence doesn't have to be always the same: it can, say, change every 30 seconds, using TOTP style secret sequence generation.
How many exploits rendered cold dead in their tracks by port-knocking shall we need before people stop saying port-knocking is security theater?
Other measures do also help… like restrictive firewalling rules, which many criticize as "it only helps keep the logs smaller". No, they don't just help keep the logs smaller. I'm whitelisting the three ISPs' IP blocks anyone can reasonably need to SSH from: now the attacker needs not only the zero-day, they also need to know to come from one of those three ISPs' IPs.
The argument that consists in saying: "sshd is unexploitable, so nothing else must be done to protect the server" is...
Dead.
Unless you are talking about your own personal use-case, in which case, feel free to follow your deepest wishes
Firewall is a joke, too. Who can manage hundreds and thousands of ever-changing IPs? Nobody. Again: I'm not talking about your personal use-case (yet I enjoy connecting to my server through 4G, wherever I am).
Fail2ban, on the other hand, is nice: every system that relies on some known secret benefits from an anti-bruteforce mechanism. Also, and this is important: fail2ban is quick to deploy, and not a PITA for users. Good stuff.
If your connection is crap and there’s packet loss, some of the sequence may be lost.
Avoiding replay attacks is another whole problem - you want the sequence to change based on a shared secret and time or something similar (eg: TOTP to agree the sequence).
Then you have to consider things like NAT…
Port knocking renders SSH unusable: I'm not going to tell my users "do this magic network incantation before running ssh". They want to open a terminal and simply run ssh.
See the A in the CIA triad, as well as U in the Parkerian hexad.
In a world where Tailscale etc. have made quality VPNs trivial to implement, why would I bother with port knocking?
I used port knocking for a while many years ago, but it was just too fiddly and flaky. I would run the port-knocking program and see the port fail to open, or close again.
If I were to use a similar solution today (for whatever reason), I'd probably go for web knocking.
In my case, I didn't see it as a security measure, but just as a way to cut the crap out of sshd logs. Log monitoring and banning does a reasonable job of reducing the crap.
It's not security theater but it's kind of outdated. Single Packet Authentication[0] is a significant improvement.
> How many exploits rendered cold dead in their tracks by port-knocking shall we need before people stop saying port-knocking is security theater?
Port knocking is one layer, but it shouldn't be the only one, or even a heavily relied upon one. Plenty of people might be in a position to see the sequence of ports you knock, for example.
Personally, I think if more people bothered to learn tools like SELinux instead of disabling it due to laziness or fear, that is what would stop most exploits dead. Containers are the middleground everyone attached to instead, though.
OpenBSD is notably not vulnerable, because its
SIGALRM handler calls syslog_r(), an async-signal-safer version of
syslog() that was invented by OpenBSD in 2001.
Saving the day once again.

Just this morning was the first time I read the words MT-Safe, AS-Safe, AC-Safe. But I did not know there were “safer” functions as well.
Is there also a “safest” syslog?
For starters.
And if you want to simply go by vulnerability counts, as though that meant something, let's throw in MenuetOS and TempleOS.
I kid, but really you probably shouldn't on production. You should be exporting your logs and everything else, with the host or VM bootstrapped from golden images with everything as needed.
It is okay to start that way and figure out your internals, but that isn't for production. Production is a locked-down, closed environment.
Reposted comment from another Hacker News post.
Might actually make for a fun side project to build an SSH server using that library and see how well it performs.