I'm not talking about the low-level OS applications here... I'm talking about end-user applications and major exposed services.
For that matter, each of those applications needs to be updated, vetted, and packaged... it's a matter of the level and completeness of the packaging.
smtpd? httpd? sqld? sshd? ntpd?
This may be illuminating: it's the list of RPMs that have a requirement containing the string ssl:

# for RPM in `rpm -aq --qf '%{NAME}\n'`; do rpm -qR $RPM | grep -iq ssl && echo -n "$RPM "; done

python-ldap libssh2 mailx Percona-Server-server-55 abrt-addon-ccpp libfprint qpid-cpp-client-ssl perl-Crypt-SSLeay ntp httpd-tools openssh openssl-devel pam_mysql redhat-lsb-core Percona-Server-client-55 sssd-common perl-IO-Socket-SSL qt squid openssl098e elinks ipa-python python-nss compat-openldap Percona-Server-shared-55 systemtap-runtime perl-Net-SSLeay python-libs Percona-Server-shared-compat openssl systemtap-client qpid-cpp-server-ssl certmonger python-urllib3 openssh-server nss_compat_ossl openldap percona-toolkit pyOpenSSL git sssd-common-pac wget ntpdate openssh-clients openssl systemtap-devel postfix nss-tools perl-DBD-MySQL libcurl curl sssd-proxy libesmtp
That's just for SSL, which, while it's used in many applications and services, is generally limited to things that communicate externally. What about when it's a core library that everything uses? tzdata updates often, and we want correct time representations, right? Gzip is used by a lot of applications. What about glibc?
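The same kind of audit works for a core library; here's a rough sketch in the spirit of the rpm loop above, using ldd instead (assumes a Linux host with ldd available -- the exact count will vary by system):

```shell
# Rough check of glibc's reach: how many binaries in /bin link
# against the C library?
count=0
total=0
for bin in /bin/*; do
  total=$((total + 1))
  # ldd fails quietly on scripts and other non-dynamic files
  ldd "$bin" 2>/dev/null | grep -q libc && count=$((count + 1))
done
echo "$count of $total files in /bin link against the C library"
```

On most systems that count covers nearly everything in /bin, which is exactly the scope of a glibc update.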
It's not an easy problem, but that's why I'm interested in how it turns out.
As to end-user applications, if you are developing with Perl, Python, etc., then developing against the host is your choice... that said, deploying the result separately might be a better option, and a container is a valid way to do it.
You bring up a great example... OpenSSL has a minor change, you upgrade that package, and everything runs fine, except a Ruby gem you rely on is broken, and your system is effectively down, even if that Ruby app isn't publicly facing and is only used internally. If the apps were in isolated containers, you could upgrade and test each one separately without affecting the system as a whole. Thank you, you've helped me demonstrate my point.
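A minimal sketch of that isolation, with hypothetical image and app names (a yum-based base image assumed): the internal Ruby app gets its own image, so the OpenSSL update lands there first and gets tested without touching anything else.

```dockerfile
# Hypothetical sketch: each app ships in its own image, so a library
# upgrade is rebuilt and tested per app rather than host-wide.
FROM centos:6

# Upgrade openssl inside this image only; the host and the other
# containers keep their current version until their images are rebuilt.
RUN yum -y update openssl && yum -y install ruby rubygems && yum clean all

# "internal-app" is a placeholder for the Ruby app in question.
ADD internal-app /opt/internal-app
CMD ["ruby", "/opt/internal-app/app.rb"]
```

If the gem breaks under the new OpenSSL, only this one container fails its test, and the rest of the system keeps running on the image it already had.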
That said, in general there aren't dozens of applications running on a single system that matter to people... and most of those that are could well be isolated in separate containers and upgraded separately. Via docker-lxc, your Python apps can even share the same base container and not take as much space... same for Ruby, etc. And when you upgrade one, you don't affect the rest.
I've seen plenty of monolithic apps deployed to a single server where, in practice, upgrading an underlying language or system library breaks part of the app.
Myself, for Node development, I use NVM via my user account (there are alternatives for Perl, Python, and Ruby), which lets me work locally on portions of the app and its tests, but deployment is via a Docker container, with environment variables set so the application can run. It's been much smoother in practice than I'd anticipated.
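The environment-variable part of that setup can be sketched like this (the variable names and the image name in the comment are hypothetical):

```shell
# Hypothetical variable names: the app reads its configuration from
# the environment rather than a baked-in config file.
export APP_PORT=3000
export DB_HOST=db.internal

# Locally (under NVM) you'd run the app directly, e.g.:
#   node server.js
# At deploy time, the same variables are passed to the container:
#   docker run -e APP_PORT=3000 -e DB_HOST=db.internal yourorg/yourapp
# Either way the app sees the same environment:
echo "port=$APP_PORT db=$DB_HOST"
# prints: port=3000 db=db.internal
```

Because the configuration lives in the environment, nothing about the app changes between the local NVM run and the containerized deploy.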