Higher complexity = larger attack surface.
For example, suppose they used a firewall with one of Cisco's infamous backdoors:
https://www.zdnet.com/article/cisco-removed-its-seventh-back...
If your network is secure, and a well-configured machine listening only on port 22 is pretty secure, you have to ask how the production machine will interact with the outside world. Well, every update is remote code execution in reverse: you are pulling remote code from an external location and then running it directly in production. So while you might trust FreeBSD's package manager, do you trust Cisco? Do you trust SolarWinds? Even if you do, it's hard to argue that your attack surface hasn't increased.
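To put it bluntly: stripped of the tooling, an update boils down to roughly this sketch (the mirror URL, checksum, and package name are made up; a real package manager adds signing and a lot of ceremony, but the shape is the same):

    # Fetch remote code, (hopefully) verify it, then run it as root in production.
    import hashlib
    import subprocess
    import urllib.request

    MIRROR = "https://mirror.example.com/firewall-fw-1.2.3.pkg"  # hypothetical
    EXPECTED_SHA256 = "0000..."  # placeholder; published by the vendor, if you're lucky

    def update():
        pkg_file, _ = urllib.request.urlretrieve(MIRROR)
        digest = hashlib.sha256(open(pkg_file, "rb").read()).hexdigest()
        if digest != EXPECTED_SHA256:
            raise RuntimeError("checksum mismatch, refusing to install")
        # The install step runs vendor-supplied scripts with full privileges.
        # Whatever the vendor (or whoever compromised their build) shipped, runs.
        subprocess.run(["pkg", "add", pkg_file], check=True)

The checksum only tells you the code is what the vendor built, not that the vendor's build is trustworthy.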
For example, you have your main machines set up securely, and requests go through the firewall. Granted, a compromised firewall doesn't necessarily make it easier to compromise the main machines directly. But because your users are going through the firewall, _they_ might now be vulnerable to some classes of redirection attacks.
Even in a scenario where you're using the firewall as a passthrough, you're still looking at a situation where, for example, your DNS entries now point to a machine you have less control over. That by itself might not mean HTTPS stops protecting you, but combined with some other mistake it might be enough.
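To make the "some other mistake" part concrete, here's a minimal sketch (hostnames are made up): with proper TLS verification, a connection that DNS now routes to someone else's box fails the handshake; with verification disabled somewhere along the way, it quietly succeeds.

    import socket
    import ssl

    def fetch(host, verify=True):
        ctx = ssl.create_default_context()
        if not verify:
            # The "other mistake": verification disabled once to silence a cert
            # warning, so the redirected endpoint now looks perfectly fine.
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(tls.version())

    # fetch("app.example.com")                # handshake fails if DNS points elsewhere
    # fetch("app.example.com", verify=False)  # "works", silently talking to the wrong box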
One potential class of vulnerability is similar to the recent git client issues: the software your clients run might have a bug that only becomes a security issue when connecting to an untrusted server. You would obviously never push a keylogger to your clients' machines, and their software always points at your own domain, etc., so the bug looks moot. But the firewall vulnerability has opened up exactly that angle of attack!
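A contrived sketch of that class of bug (the sync protocol and field names here are invented, not the actual git issue): a client that trusts a server-supplied path is harmless while the server is yours, and becomes an arbitrary file write once something in the middle can answer in its place.

    import json
    import urllib.request

    def sync(base_url, dest_dir):
        with urllib.request.urlopen(f"{base_url}/manifest.json") as resp:
            manifest = json.load(resp)
        for entry in manifest["files"]:
            # Bug: nothing stops entry["name"] from being "../../.ssh/authorized_keys",
            # so a hostile response escapes dest_dir entirely.
            with urllib.request.urlopen(f"{base_url}/{entry['name']}") as body:
                with open(f"{dest_dir}/{entry['name']}", "wb") as out:
                    out.write(body.read())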
It's definitely a balancing act, and it depends on how much you trust each layer of your stack.