If you need load balancing, you'd rather use a dedicated tool like HAProxy (or DNS-based balancing via Route 53). I don't think Apache or Nginx is really the right choice for that. Plus something like Vert.x is already very usable as a load balancer on its own.
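For comparison, a dedicated balancer is also just less to configure. A minimal HAProxy sketch (hostnames, IPs and ports here are hypothetical):

```
frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
```

That's the whole job: two stanzas, health checks included, no general-purpose web server in the path.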
> SSL Termination
Just about everything handles this already. Current best practice is to use SSL for all traffic between your own servers anyway, so there's no gain from terminating early. And if you do terminate SSL at your load balancer, these days you want to open a fresh SSL connection between the load balancer and the application server anyway, where possible.
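Terminate-then-re-encrypt is a one-liner per backend in HAProxy anyway. A sketch with hypothetical cert paths and addresses:

```
frontend https_in
    bind *:443 ssl crt /etc/haproxy/site.pem
    default_backend app_servers

backend app_servers
    # re-encrypt to the app servers and verify their certs against an internal CA
    server app1 10.0.0.1:8443 ssl verify required ca-file /etc/haproxy/internal-ca.pem
    server app2 10.0.0.2:8443 ssl verify required ca-file /etc/haproxy/internal-ca.pem
```

So "nginx for SSL termination" buys you nothing the load balancer doesn't already do.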
> serving static content, caching, compression
Newer app servers like Vert.x support Linux sendfile and do very well at serving static content. And these days nearly everyone puts Cloudflare in front to handle static assets, caching, and compression anyway. No real reason to duplicate that behind it if Cloudflare is already set up to handle it.
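To see why sendfile matters: the kernel copies file bytes straight to the socket, no round trip through a userspace buffer. A minimal Python sketch of the syscall itself (a `socketpair` stands in for a client connection; this is the mechanism Vert.x and friends use, not Vert.x's API):

```python
import os
import socket
import tempfile

# Write some "static content" to a temp file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello static world\n")
    path = f.name

# A socketpair stands in for a real client connection.
server_side, client_side = socket.socketpair()

with open(path, "rb") as src:
    size = os.fstat(src.fileno()).st_size
    sent = 0
    # os.sendfile copies file -> socket inside the kernel.
    while sent < size:
        sent += os.sendfile(server_side.fileno(), src.fileno(), sent, size - sent)

server_side.close()
received = client_side.recv(1024)
print(received)  # b'hello static world\n'
client_side.close()
os.unlink(path)
```

Any runtime that exposes this gets nginx-class static file performance without adding nginx.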
> centralized logging
Centralized logging is usually done by shipping the logs from each service/server off to be aggregated on a dedicated box running Logstash or whatever. You don't use your reverse proxy for this?
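The point is that each service ships its own logs directly. A small sketch using Python's stdlib `SysLogHandler`, with a local UDP socket standing in for the central collector (in production that address would be your Logstash/syslog box, not localhost):

```python
import logging
import logging.handlers
import socket

# Stand-in for the central collector: a UDP socket on an ephemeral port.
collector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector.bind(("127.0.0.1", 0))
host, port = collector.getsockname()

# Each service points its own log handler at the collector;
# the reverse proxy is never involved.
logger = logging.getLogger("my-service")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=(host, port)))

logger.info("user signup ok")

datagram, _ = collector.recvfrom(4096)
print(datagram)  # syslog-framed message, e.g. b'<14>user signup ok...'
```

Swap the handler for Filebeat, Fluentd, or whatever shipper you like; the topology is the same either way.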
> using different applications on the same URL space

From what I see around the web, this isn't really done anymore. If anything the trend is the opposite: foo1.app.com, foo2.app.com, one subdomain per app, driven by the big move toward microservices. Extra subdomains are about the cheapest resource there is.
> added benefit of another layer of security
Security doesn't work that way in my experience; it's more about minimizing attack surface. If you run Node plus Nginx plus Apache, then an exploit against any of the three hits you. If you run only Node, you can only be hit by Node exploits. So I'd argue it's the opposite: the more layers, the less secure.
> nginx/apache are really good at what they do
Sure, but you should pick the most efficient tool that handles your needs with the least amount of complexity. Only add a component if it solves a problem you can't solve just as well in a simpler way.