So are we all going to jump back to pre-forked, multi-process Apache now, tack on a TLS slave daemon, and ignore the gaping holes in the application layer?
In the short term your user is compromised regardless of whether what leaks is a cookie, an AES key for the TLS session (which will presumably still have to be resident in the process sending you data), a credit card number in a POST request, or your certificate's private key.
Anyone who can intercept my traffic in close to real time, and wishes to target me, is going to know I'm talking to amazon.com, IP x.y.z.f, and that that's where they should target their Heartbleed attack for a good stab at accessing my PHP session cookie or TLS session AES key.
There are some cases, like e-mail phishing, where this isn't the case of course... but then a redirection service would be sufficient to let me script an attack against many sites.
Cookies are remarkably sensitive, but they can be rotated far more easily. I can make sure that every cookie is rotated transparently every day or so and leave that running as a sensible background precaution. If we had infrastructure that let us renew our TLS keys every 24 hours or so, this wouldn't be such a big deal (it would still be a big deal, but not as bad as it is today). But TLS certificates usually have lifetimes measured in years.
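Transparent daily rotation is cheap to sketch. A minimal, illustrative version (not any particular framework's API): sign the cookie with an HMAC, embed the issue time, and re-issue it whenever it crosses the rotation age, with a hard expiry twice that. All names here are made up for illustration.

```python
import base64, hashlib, hmac, json, time

ROTATE_AFTER = 24 * 3600      # transparently re-issue cookies older than this
MAX_AGE = 2 * ROTATE_AFTER    # hard expiry: reject anything older

def _b64(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).decode()

def issue(session_id: str, key: bytes, now=None) -> str:
    """Mint an HMAC-signed cookie carrying the session id and issue time."""
    payload = json.dumps({"sid": session_id, "iat": now or time.time()}).encode()
    mac = hmac.new(key, payload, hashlib.sha256).digest()
    return _b64(payload) + "." + _b64(mac)

def check(cookie: str, key: bytes, now=None):
    """Return (session_id, fresh_cookie_or_None); raise ValueError on forgery/expiry."""
    p64, m64 = cookie.split(".")
    payload = base64.urlsafe_b64decode(p64)
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(m64), expected):
        raise ValueError("bad signature")
    data = json.loads(payload)
    age = (now or time.time()) - data["iat"]
    if age > MAX_AGE:
        raise ValueError("expired")
    if age > ROTATE_AFTER:
        # transparent rotation: same session, fresh timestamp and MAC
        return data["sid"], issue(data["sid"], key, now)
    return data["sid"], None
```

On each request the server sets the fresh cookie if one comes back; a real deployment would also rotate `key` itself (accepting the current and previous key during the overlap), which is exactly the part we lack for TLS keys.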
The sad thing is... we do. 24 hours is a bit much, but why not have a different certificate for each server? The whole point of a certificate chain is to give us the flexibility to issue and revoke certificates from lower down in the tree... of course most of us serfs don't get the privilege of using our own intermediates.
Oh... and we're repeating some of the same mistakes in DNSSEC. Looking at deploying DNSSEC I kept reading that the general idea of the KSK was to function as a long-term key, and the ZSK as a short term key, but I have yet to see a method of managing things with the KSK offline that isn't like pulling teeth. The latest BIND requires that both the KSK and ZSK private keys be resident on your primary nameserver when you switch on the "auto-dnssec" magic.
Still, at least setting up DNSSEC is free.
Think of the problems credit card processors deal with: hiding the keys from their own employees, so that getting a root password is not enough to walk off with all the credit card information. You don't want the key in any filesystem, and you don't want the key in an easy-to-retrieve memory location. You end up with servers that require multiple people to boot, as the keys only materialize when multiple people provide their own piece of the secret.
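The "multiple people each hold a piece" scheme can be sketched with a trivial n-of-n XOR split (this is a sketch, not Shamir's Secret Sharing; for k-of-n thresholds you'd want the real thing; function names are illustrative):

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(key: bytes, n: int) -> list:
    """Split `key` into n shares; ALL n are required to reconstruct it.
    The first n-1 shares are pure randomness; the last absorbs the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(_xor, shares, key))
    return shares

def combine(shares: list) -> bytes:
    """XOR all shares back together to recover the key."""
    return reduce(_xor, shares)
```

Any subset smaller than n is statistically indistinguishable from random, which is what makes "no single employee can boot the box" enforceable.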
Eventually, enough security creates its own risk of data loss, as a single error can make the keys unrecoverable.
This is why we have to add security breach detection, and make recovering from a breach easy and low-consequence. Linus's law says that with enough eyeballs, all bugs are shallow. With enough attackers, all systems are insecure.
However, my personal webserver isn't a bank. Not everyone can justify spending that much money on an HSM to get that level of assurance. What I'm proposing is a simpler solution that isn't robust against sophisticated attacks (e.g. when the attacker manages to get root), but is far more robust against some classes of the common attacks we see today (where the attacker can read any memory/file that the webserver has permission to see).
HSM = Hardware Security Module (http://en.wikipedia.org/wiki/Hardware_security_module)
Of course, this requires using Apple's APIs, which are poorly documented and a pain in the neck even compared to OpenSSL. It's also not suitable for servers.
Be careful that, in our haste to secure the private keys, we don't ignore easier attacks. The article seems to gloss over an attacker hacking the web server itself, when in fact that gives them such power that going on to grab the private key might not even be attempted.
Looking past OpenSSL, C didn't magically become a safe language in a week, either; this approach guards against a real problem in C that is not limited to a single bug in OpenSSL: reading past the end of a valid buffer.
I work at a pretty security-conscious company (that might be an understatement), and even as a developer on the inside I'd have to get pretty creative to get access to our production servers.
So instead of keeping the key for the hard-to-change site certificate on many vulnerable front-line servers, each server generates a key on boot and sends a certificate signing request to a hardened internal system?
X.509 does define a Name Constraints extension (RFC 5280) that can limit a CA cert to signing only names under a given domain, but client enforcement of it has historically been inconsistent, and you have a large industry that prefers the status quo of you having to pay them for each cert you mint.
This ends up with ridiculous things like tying payment to the lifetime of the certificate, which allows for things like "2 year certs", which are obviously less secure than 2×1 year certs.
But having your server roll its cert every 12 hours from a more secure cert elsewhere would be a very nice feature.
• Instead of issuing plain leaf-node certs, CAs could (and would) issue CA-certs by default.
• You'd be able to issue as many plain certs as you like, using your own CA-cert, and revoke them as often as you like. (OCSP would be much more necessary here.)
• The current CAs would be renamed to "global CAs": their power would come from the fact that they have no subject (or their subject is '.') in their CA-certs.
• Anyone owning a domain would become the CA for that domain's subdomains. (foo.tumblr.com would be signed by Tumblr's CA; foo.s3.amazonaws.com would be signed by the Amazon AWS CA; etc.)
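Under the scheme sketched in the bullets above, chain validation would gain exactly one extra check: each certificate's name must fall under its issuer's subject scope, with '.' as the unconstrained global scope. A string-level sketch, ignoring real X.509 encoding entirely (all names illustrative):

```python
def within_scope(name: str, ca_scope: str) -> bool:
    """True if `name` falls under `ca_scope`; '.' is the unconstrained global scope."""
    if ca_scope == ".":
        return True
    return name == ca_scope or name.endswith("." + ca_scope)

def validate_chain(leaf: str, scopes: list) -> bool:
    """`scopes` runs from the leaf's direct issuer up to a global CA.
    Each link's name must sit inside its issuer's scope; the root must be global."""
    names = [leaf] + scopes[:-1]
    return scopes[-1] == "." and all(
        within_scope(n, s) for n, s in zip(names, scopes)
    )
```

So `foo.tumblr.com` signed by a `tumblr.com`-scoped CA validates, while the same CA signing `evil.com` fails, without any CRL round-trip.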
It seems primarily geared at clients rather than servers, but in theory can be used for both (I'm not even sure you can load your openssh server key into ssh-agent, can you?)
Yes, actually, as of OpenSSH 6.3 you can. (I wrote most of the patch that added that feature.) However, even without doing that the OpenSSH server performs crypto operations in a separate process from the network-facing child process (unless you've disabled UsePrivilegeSeparation). The purpose of having the server talk to an ssh-agent was to allow keeping your host keys encrypted on-disk or loading them from a smart card.
No need: for authenticating clients, servers only do signature verification, so they only need the users'/clients' public keys, which are listed in authorized_keys.
Edit: maybe I didn't fully grasp your question. If you were referring to SSH host keys then, to my knowledge, you're right: they cannot be used with ssh-add.
Does an open-spec HSM exist? I can be somewhat sure that Linux and Apache/nginx don't have backdoors, as the source is audited by many people, but I need to be "sure" of my HSM too.
opencryptoki has a softhsm too, but again, it appears to run in process. Same problems.
> You can use it to explore PKCS #11 without having a Hardware Security Module.
The same amount of security can probably be obtained by just launching a process on server startup to do this with sufficient isolation from the parent process. I believe OpenSSH does something along these lines to run most of its code as an unprivileged user. It's probably even possible to do this seamlessly based on the existing SSL config directives in apache/nginx requiring no more intervention from the sysadmin than upgrading to a newer version.
PKCS#11 has a few irritants, but it's a fairly sensible API, and it's already implemented by many things (browsers, gnome-keyring, ssh, ...). OpenSSL and GnuTLS both support it via one mechanism or another; my only real complaint from the webserver side is that the configuration knobs aren't really plumbed through.
PKCS#11 is a little funny looking and has some small rough edges, but it's actually reasonably designed and easy to implement from scratch. That's not something I can say for many of the other PKCS standards.
It's apparently not supported by Apache/nginx nor does a suitable software-HSM exist to use it, so you're basically writing both ends of the communication. But if you do go with a separate daemon PKCS#11 may very well be a good solution. I just think forking off a process yourself is much cleaner for the use case of securing a web server.
Much more likely that they'd just hack the web server and MITM you or something.
Personally I think the web server should do the encryption, as it is the part of the software that holds the truly sensitive information: the content. You can get new keys; you can't get new content.
When you say "you can get new keys", which is true (although StartSSL appears to be the fly in this particular ointment), browsers don't validate CRLs, so the old keys are still just as valid as the new ones, which makes getting new keys potentially worthless.
This is providing similar protections for your TLS keys to what your database server already applies.
Protecting content involves protecting keys. So to prioritize protecting content, you have to prioritize protecting keys.