I recommend against UFW.
firewalld is a much better pick in current year and will not grow unmaintainable the way UFW rules can.
firewall-cmd --set-default-zone=block
firewall-cmd --permanent --zone=block --add-service=ssh
firewall-cmd --permanent --zone=block --add-service=https
firewall-cmd --permanent --zone=block --add-port=80/tcp
firewall-cmd --reload
Configuration is backed by XML files in /etc/firewalld and /usr/lib/firewalld instead of the brittle pile of sticks that is the ufw rules files. Use the nftables backend unless you have your own reasons for needing legacy iptables.

Specifically for Docker, it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, the firewall rules in the OP may not actually do anything to prevent Docker from opening incoming ports.
Newer versions of firewalld give an easy way to configure this via StrictForwardPorts=yes in /etc/firewalld/firewalld.conf.
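For reference, assuming a recent enough firewalld, that's a one-line config change plus a daemon restart:

# /etc/firewalld/firewalld.conf
StrictForwardPorts=yes

$ sudo systemctl restart firewalld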
In my own setup, the VM that hosts my Docker stuff sits at 10.0.10.11. It doesn't even have its own public IP, meaning I could actually expose to 0.0.0.0 if I wanted to, but things might change in the future, so it's a precaution. That IP is only reachable via WireGuard and by the other machines that share the same subnet, so reverse proxying with Caddy on a public IP is super easy.
> Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway.
Like I said in another comment: drop Docker, install Podman.

The only non-deprecated way of having your Compose services restart automatically is with Quadlet files, which are systemd unit files with extra options specific to containers. You need to manually translate your docker-compose.yml into one or more Quadlet files. Documentation for those leaves a lot to be desired too; it's just one huge itemized man page.
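For anyone curious, a minimal Quadlet sketch (the file name, image, and port are made-up examples), dropped into ~/.config/containers/systemd/ for a rootless setup:

# ~/.config/containers/systemd/myapp.container
[Unit]
Description=My app container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target

After a `systemctl --user daemon-reload`, a generated myapp.service appears that you can start like any other unit; the [Install] section takes care of starting it at boot.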
To stop these attacks, restrict outbound connections from unknown or disallowed binaries.
This kind of malware in particular requires outbound connections to the mining pools. Others download scripts or binaries from remote servers, or try to communicate with their C2 servers.
On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
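For illustration, a sketch of what that can look like in /etc/fstab (the sizes and the bind-mount trick are assumptions; /var/tmp is supposed to survive reboots, so it shouldn't itself be a tmpfs):

# /etc/fstab
tmpfs     /tmp      tmpfs  defaults,noexec,nosuid,nodev,size=1G  0 0
tmpfs     /dev/shm  tmpfs  defaults,noexec,nosuid,nodev          0 0
/var/tmp  /var/tmp  none   bind,noexec,nosuid,nodev              0 0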
Sadly that's more of a duct-tape-or-plaster fix, because any serious malware will launch its scripts with the proper '/bin/bash /path/to/dropped/payload' invocation. A non-exec mount works reasonably well only against actual binaries dropped into those paths, because it's much less common to launch them with the lesser-known '/bin/ld.so /path/to/my/binary' stanza.
At one point I also suggested that the Debian installer should support configuring a no-exec mount for /tmp, but it got rejected. Too many packaging scripts depend on being able to run their various steps from /tmp (or more correctly, $TMPDIR).
If this weren't the case, plenty of containers could probably have a fully read-only filesystem.
On ufw systems, I know what to do: if a port I need open is blocked, I run 'ufw allow'. It's dead simple, even I can remember it. And if I don't, I run 'ufw --help' and it tells me to run 'ufw allow'.
Firewall-cmd though? Any time I need to punch a hole in the firewall on a Fedora system, I spend way too long reading the extremely over-complicated output of 'firewall-cmd --help' and the monstrous man page, before I eventually give up and run 'sudo systemctl disable --now firewalld'. This has happened multiple times.
If firewalld/firewall-cmd works for you, great. But I think it's an absolutely terrible solution for anyone whose needs are, "default deny all incoming traffic, open these specific ports, on this computer". And it's wild that it's the default firewall on Fedora Workstation.
This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.
It’s not like iptables was any better, but it was more intuitive: it spoke about IPs and ports, not high-level arbitrary constructs such as zones and services defined in some XML file. And since firewalld uses iptables/nftables underneath, I wonder why I need a worse, leaky abstraction on top of what I already know.
I truly hate firewalld.
I’d love a Linux firewall configured with a sane config file and I think BSD really nailed it. It’s easy to configure and still human readable, even for more advanced firewall gateway setups with many interfaces/zones.
I have no doubt that Linux can do all the same stuff feature-wise, but oh god, the UX :/
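For the unfamiliar, this is roughly the kind of thing being praised - a minimal pf.conf sketch (the interface name is an assumption):

# /etc/pf.conf
ext_if = "em0"
block in all
pass out all keep state
pass in on $ext_if proto tcp from any to any port { 22 80 443 }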
Imagine container A, which exposes tightly-coupled services X and Y. Container B should be able to access only X; container C should be able to access only Y.
For some reason there just isn't a convenient way to do this with Docker or Podman. Last time I looked into it, it required manually juggling the IP addresses assigned to the container and having the service explicitly bind to them - which is just needlessly complicated. Can firewalls solve this?
And ufw supports nftables, btw. I think the real lesson is: write your own firewall rules and make them non-permissive - then just template that shit with CaC.
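As a starting point for that kind of template, a minimal default-deny inbound ruleset in raw nft (ports are examples):

# /etc/nftables.conf
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        tcp dport { 22, 80, 443 } accept
    }
}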
I mean, there are some payload-over-payload things like GRE VPN/VXLAN/VLAN or IPsec that need to be written in raw nft if you're using Foomuuri, but it works!
But I love the Shorewall approach, and your configuration gracefully encapsulates the Shorewall mechanics.
Disclaimer: I maintain the vim-syntax-nftables syntax highlighter repo on GitHub.
I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating leaving the whole "web app" ecosystem as my main line of work and applying to some places requiring C++. C++ seems much easier to understand than whatever the latest frontend fad is. /rant
Compare that to what we had in the late '00s and early '10s: we went through prototype -> mootools -> jquery -> backbone -> angularjs -> ember -> react, all in about 6 years. That's a new recommended framework every year. If you want to complain about fads and churn, hop on over to AI development; they have plenty.
Pick a solid technology (.NET, Java, Go, etc.) for the backend and use whatever you want for your frontend. Voila, fewer CVEs and less churn!
Unless you're running a static HTML export - e.g. not running the Next.js server, but serving through nginx or similar.
There are way more important things, like actually knowing that you are running software with a widely known RCE that, it seems, doesn't even use established mechanisms to sandbox itself.
The way the author describes it, Docker being the savior appears to be sheer luck.
Good security is layered.
I guess you could have the app server fully firewalled and have another bastion host acting as an HTTP proxy, for both inbound and outbound connections. But it's not trivial to set up, especially for the outbound scenario.
I know port scanners are a thing but the act of using non-default ports seems unreasonably effective at preventing most security problems.
I've done docker pull a few times based on some web post (that looked reasonable) and detected apps/scripts inside the container connecting to some .ru sites, either immediately or a few days later...
I've been doing it for a really long time already, and I'm still not sure whether it has any benefit or whether it's just an umbrella in a sideways storm.
I don't think it's wrong, it's just not the same as e.g. using a YubiKey.
My sshd only listens on the VPN interface
App servers run docker, with images that run a single executable (no os, no shell), strict cpu and memory limits. Most of my apps only require very limited temporary storage so usually no need to mount anything. So good luck executing anything in there.
I used, way back in the day, to run Wordpress sites. Would get hacked monthly every possible way. Learned so much, including the fact that often your app is your threat. With Wordpress, every plugin is a vector. Also the ability to easily hop into an instance and rewrite running code (looking at you scripting languages incl JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.
Is that the case, though? My understanding was that even if I run a Docker container as root and the container is 100% compromised, there would still need to be a vulnerability in Docker for it to “attack” the host - or am I missing something?
The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.
Of course, if you have a kernel exploit you'd be able to break out (this is what gVisor mitigates to some extent), and nothing seems to really protect against rowhammer/memory-timing-style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be too-wide volume bindings (e.g. something that allows access to the Docker control socket from inside the container, or an obviously stupid mount like mounting your root inside the container).
Am I missing something?
Docker is pretty much the same but supposedly more flimsy.
Both have non-obvious configuration weaknesses that can lead to escapes.
Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.
Thirdly, in the stock configuration, processes inside a Docker container can use loads of RAM (causing random things to get swapped to disk or OOM-killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.
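For what it's worth, the stock knobs that rein most of that in (values are arbitrary; disk is the hard one, since quotas depend on the storage driver):

$ docker run -d --memory=512m --memory-swap=512m --cpus=1.5 --pids-limit=256 some-image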
Fourthly, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing and you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.
Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
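Turning them on is a small daemon config change (note that the remapped daemon uses separate storage, so existing images and containers won't be visible under it):

# /etc/docker/daemon.json
{
  "userns-remap": "default"
}

$ sudo systemctl restart docker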
Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story about a guy who got root in a container, and because the container used a bridged adapter he was able to intercept the traffic of account-info updates on GCP.
I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...
Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.
Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.
This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.
Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.
Honestly, citation needed. Very rare, unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, the ability to reconfigure your entire /etc, access to sockets like Docker's, or some other insane level of overreach I doubt even the least educated Docker user would commit.
While of course they should be scoped properly, people act like some elusive 0-day container escape will get used on their Minecraft server or personal blog that otherwise has sane mounts, non-admin capabilities, etc. You aren't that special.
"CVE-2025-66478 - Next.js/Puppeteer RCE)"
That said, do you have an image of the box or a container image? I'm curious about it.
I was lucky in that my DB backups were working, so all my persistence was backed up to S3. I think I could stand up another one in an hour.
Unfortunately I didn't keep an image no. I almost didn't have the foresight to investigate before yeeting the whole box into the sun!
From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.
OTOH, if I had written such a script for Linux, I'd be looking to grab the contents of $(history), $(env), $(cat /etc/{group,passwd})... then enumerate /usr/bin/, /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually there. Then the $HOME/.{aws,docker,claude,ssh} dirs. Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root-access container.
Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.
Now, JS sites everywhere are getting hacked.
JS has turned into the very thing it strived to replace.
It's good to see developers haven't changed. ;)
If you have decent network- or process-level monitoring, you're likely to find it, while you might not notice the vulnerable software itself, or some stealthier, more dangerous malware that might exploit it.
And maybe updating container images with a mechanism similar to Renovate with "minimumReleaseAge=7 days" or something similar!?
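A sketch of that in renovate.json (minimumReleaseAge is the current name of the option, if I'm reading the Renovate docs right):

{
  "extends": ["config:recommended"],
  "minimumReleaseAge": "7 days"
}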
Then you need to run things with the least privilege possible. Sadly, Docker and containers in general are an anti-pattern here because they’re about convenience first, security second. So the OP should have run the containers as read-only, with tight resource limits and ideally IP restrictions on access if it’s not a public service.
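Something like this, roughly (flags and values illustrative; many apps still need a small writable scratch dir, hence the tmpfs):

$ docker run -d --read-only --tmpfs /tmp:rw,noexec,nosuid,size=64m --memory=256m --cpus=0.5 your-image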
Another thing you can do is use Tailscale, or something like it, to keep things being a zero trust, encrypted, access model. Not suitable for public services of course.
And a whole host of other things.
Easier said than done, I know.
Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self hosted app.
Re: the Internet.
Re: Peer-to-peer.
Re: Video streaming.
Re: AI.
So... crypto is illegal, so anyone using it is de facto a criminal by definition.
Also, for this particular instance, this is the best bug bounty program I've ever seen. Running a Monero node that hits your daily budget cap is not that bad... It could be way worse, like stealing your DB credentials and selling them to the highest bidder... So crypto actually made this better.
Luckily for me, the software I had installed[1] was in an LXC container running under Incus, so the intrusion never escaped the application environment, and the container itself was configured with low CPU priority so I didn't even notice it until I tried to visit the page and it didn't load.
I looked around a bit and it seemed like an SSH key had been added under the root user, and there were some kind of remote management agents installed. This container was running Alpine so it was pretty easy to identify what processes didn't belong from a simple ps output of the remaining processes after shutting down the actual web application.
In the end, I just scrapped the container, but I did save it in case I ever feel like digging around (probably not). I did learn some useful things:
- It's a good idea to assume your system will get taken over, so ensure it's isolated and suitably resource constrained (looking at you, pay-as-you-go cloud users).
- Make sure you have snapshots and backups, in my case I do daily ZFS snapshots in Incus which makes rolling back to before the intrusion a breeze.
- While ideally anything compromised should be scrapped, rolling back, locking it down and upgrading might be OK depending on the threat.
Regarding the miner itself:
- From what I could see in its configuration, it hadn't actually been set up correctly, so it's possible they run some kind of benchmark and just leave the system silently compromised if it's not "worth it" - they still have a way in to use it for other purposes.
- No attempt had been made at filesystem obfuscation, which is probably the only reason I discovered it at all. There were literally folders in /root lying around with the word "monero" in them; this could have been easily hidden.
- If they hadn't installed a miner and had just silently compromised the system, leaving whatever was running on it alone (or even doing a better job with CPU priority), I probably never would have noticed this.
- Hetzner firewall (because ufw doesn't work well with docker) to only allow public access to 443.
- Self-hosted OpenVPN to access all private ports. I also self-host an additional Wireguard instance as a backup VPN.
- Cloudflare Access to protect `*.coolifydomain.com` by default. This would have helped protect the OP's Umami setup, since only the OP could access the Umami dashboard. Bypass rules can be created in Cloudflare Access to allow other systems that need access, by IP or domain.
- Cloudflare Access rules to only allow access to internal admin paths such as /wp-admin/ through my VPN IP (or via email OTP to specified email IDs).
- Traefik labels on docker-compose files in Coolify to add basic auth to internal services which can't be behind Cloudflare Access, such as self-hosted Prefect (see the sketch after this list). This would have also put a login screen in front of an attacker before they could see Umami's app.
- I host frontends only on Vercel or Cloudflare workers and host the backend API on the Coolify server. Both of these are confirmed to never have been affected, due to decoupling of application routing.
- Finally, a bash cron script running on the server every 5 minutes that monitors resources and sends me an alert via Pushover when usage is above the defined thresholds. We need monitoring and alerting as well; security measures alone are not enough.
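The basic-auth labels from the third point look roughly like this in a compose file (router/middleware names and the htpasswd hash are placeholders; note the doubled $$ to escape compose variable interpolation):

labels:
  - "traefik.http.routers.prefect.middlewares=prefect-auth"
  - "traefik.http.middlewares.prefect-auth.basicauth.users=admin:$$apr1$$abcdefgh$$ReplaceWithRealHash"

The user:hash pair can be generated with htpasswd (from apache2-utils).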
Even with all these steps, there will always be edge cases. That's the caveat of self-hosting but at the same time it's very liberating.
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing
$ sudo ufw allow ssh
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw enable
As a user of iptables, this order makes me anxious. I used to cut myself off from the server many times by first blocking and then adding exceptions. I can see that this is different here, as the last command commits the rules...

Meanwhile, companies think the clouds are looking after it... anyhow, it is a real problem.
AWS explicitly spells this out in their Shared Responsibility Model page [0]
It is not your cloud provider's responsibility to protect you if you run outdated and vulnerable software. It's not their responsibility to prevent crypto-miners from running on your instances. It's not even their responsibility to run a firewall, though the major players at least offer one in some form (e.g., AWS Security Groups and ACLs).
All of that is on the customer. The provider should guarantee the security of the cloud. The customer is responsible for security in the cloud.
[0] https://aws.amazon.com/compliance/shared-responsibility-mode...
Examples like this are why I don’t run a VPS.
I could definitely do it (I have, in the past), but I’m a mediocre admin, at best.
I prefer paying folks that know their shit to run my hosting.
Search engines try to fight slop results, with collateral damage mostly in small or even personal websites. Restaurants are happy to be on one platform only: Google Maps. Who needs an expensive website if you're on there and someone posts your menu as one of the pictures? (Ideally an old version, so the prices seem cheaper and you can't be pinned down for false advertising.)

Open source communities use GitHub, sometimes GitLab or Codeberg, instead of setting up a Forgejo (I host a ton of things myself but notice that the community effect is real, and I also moved away from self-hosting a forge). The cherry on top is when projects use Discord chats as documentation and bug-reporting "form". Privacy people use Signal en masse, while Matrix is still as niche as it was when I first heard of it. The binaries referred to as open source just because they're downloadable can be found on Hugging Face; even the big players use that exclusively, AFAIK. Some smaller projects may be hosted on GitHub, but I have yet to see a self-hosted one. Static websites go on (e.g. GitHub) Pages and back-ends are put on Firebase.

Instead of a NAS, individuals as well as small businesses use a storage service like OneDrive or iCloud. Some more advanced users may put their files on Backblaze B2. Those who newly dip their toes into self-hosting increasingly use a relay server to reach their own network, not because they need it but to avoid dealing with port forwarding or setting up a way to privately reach internal services. Security cameras are another good example of this: you used to install one, set a password, and forward the port so you could watch it outside the home. Nowadays people expect that "it just works" on their phone when they plug it in, no matter where they are. That this relies on Google/Amazon, and that those companies can watch all the feeds, is acceptable for the convenience.

And that's all not even mentioning the death of the web: people don't use websites anymore the way they were meant (as hyperlinked pages) but work with an LLM as their one-stop shop.
Not that the increased convenience, usability, and thus universal accessibility of e.g. storage and private chats is necessarily bad, but the trend doesn't seem to be going the way you seem to think it is.
I can't think of any example of something that became self-hosted more often, rather than less, over the last 1, 5, or 10 years.
If you see a glimmer of hope for the distributed internet, do share because I feel increasingly as the last person among my friends who hosts their own stuff
I've been on the receiving end of attacks that were reported to be more than 10 Tbps. I couldn't imagine how I would deal with that if I didn't have a third party providing such protection - it would require millions of $$ a year just in transit contracts.
There is an increasing amount of software that attempts to reverse this, but as someone from https://thingino.com/ said: open source is riddled with developers who starved to death (nobody donates to open source projects).
Yikes. I would still recommend a server rebuild. That is _not_ a safe configuration in 2025, whatsoever. You are very likely to have a much better engineered persistent infection on that system.
The right thing to do is to roll out a new server (you have a declarative configuration, right?), migrate pure data (or better, restore it from the latest backup), and take the attacked machine off the internet to do a full audit - both to learn what compromises there are for the future and to inform the users of the IoT platform if their data has been breached. In some countries you are even required by law to report breaches. IANAL, of course.
It has since been fixed. Lesson learned.
This should not enable the compromise of your session, i.e. either there was some additional vulnerability (and not logging out widened the window during which it could be exploited), or the attack was totally unrelated. Either way, you should find out the real cause.
> Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.
1. Files of running programs can be deleted while the program is running. If the program were trying to hide itself, it would have deleted /tmp/.XIN-unix/javae after it started. The nonexistence of the file is not a reliable source of information for confirming that the container was not escaped.
2. ps shows program-controlled command lines. Any program can change what gets displayed here, including the program name and arguments. If the program were trying to hide itself, it would change this to display `login -fp ubuntu` instead. This is not a reliable source of information for diagnosing problems.
It is good to verify the systemd units and crontab, and since this malware is so obvious, it probably isn't doing these two hiding methods, but information-stealing malware might not be detected by these methods alone.
Later, the article says "Write your own Dockerfiles" and gives one piece of useless advice (using USER root does not affect your container's security posture) and two pieces of good advice that don't have anything to do with writing your own Dockerfiles. "Write your own Dockerfiles" is not useful security advice.
I actually think it is. It makes you more intimate with the application and how it runs, and can mitigate one particular supply-chain security vector.
Agreeing that the reasoning is confused but that particular advice is still good I think.
/tmp/.XIN-unix/javae &      # launch the dropped payload in the background
rm /tmp/.XIN-unix/javae     # then unlink the file so it no longer shows up on disk
This article’s LLM writing style is painful, and it’s full of misinformation (is Puppeteer even involved in the vulnerability?).

I noticed this because Hetzner forwarded me an email from the German government agency for IT security (BSI), which must have scanned all German-based IP addresses for this Next.js vulnerability. It was a table of IP addresses and associated CVEs.
Great service on their side, and many thanks to the German taxpayers who fund my free vulnerability scans ;)
Assume that the malware has replaced system commands, and possibly used a kernel vulnerability to lie to you and hide its presence - so do not do anything in the infected system directly?
It could also prevent something from exfiltrating sensitive data.
I know we aren't supposed to rely on containers as a security boundary, but it sure is great hearing stories like this where the hack doesn't escape the container. The more obstacles the better I guess.
If the human involved can’t escalate, the hack can’t.
Even if you are an OWASP member who reads daily vulnerability reports, it's so easy to think you are unaffected.
At least that's what I think happened because I never found out exactly how it was compromised.
The miner was running as root, and its file was even hidden when I ran ls! So I didn't understand what was happening; it was only after restarting my VPS with a rescue image and mounting the root filesystem that I found out the file I was seeing in the process list did indeed exist.
My intuition is that since the SSH server reports what auth methods are available, once a bot sees that password auth is disabled, they will disconnect and not try again.
But I also know that bots can be dumb.
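Either way, turning password auth off is only a couple of lines in /etc/ssh/sshd_config, so it's cheap insurance:

# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password

$ sudo systemctl reload sshd    # the unit is "ssh" on some distros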
And for basic web sites, it's much better if it requires no back-end.
Every service exposed increases risk and requires additional vigilance to maintain. Which means more effort.
It highlighted the domain: 'jakesaunders.dev' in the address bar in red text.
You have to define a firewall policy and attach it to the VM.
But I am interested in the monero aspect here.
Should I treat this as some datapoint on monero’s security having held up well so far?
No. The reason attackers mine Monero and not some other cryptocurrency isn't the anonymity. Most cryptocurrencies aren't (meaningfully) mineable with CPUs; Monero (apparently) is. There may be others, but I suspect they either deliver less $ per CPU-second (due to not being valuable), or are sufficiently unknown and/or painful to set up that the attackers just go with the known default.
Trying to mine Bitcoin directly would be pointless, you'd get no money because you're competing with ASIC miners. Some coins were designed to be ASIC resistant, these are mostly mined on GPUs. Monero (and some other coins) were designed to also be GPU resistant (I think). You could see it as a sign that that property has held up (well enough), but nothing else.
If you used the server to mine Bitcoin, you would make approximately zero (0) profit, even if somebody else pays for the server.
But also yes, Monero has technically held up very well.
This became enough of a hassle that I stopped using them.
But yeah it is massively overspecced. Makes me feel cool load testing my go backend at 8000 requests per second though!
But if there is a vulnerability and they manage to escape the sandbox, then they will be root on your host.
Running your processes as an unprivileged user inside your containers reduces the possibility of escaping the sandbox; running the containers themselves as an unprivileged user (rootless Podman or Docker, for example) reduces the attack surface if they do manage to escape.
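A sketch combining the two (image, IDs, and flags are arbitrary examples):

# rootless podman, run as a normal user; non-root inside, no capabilities
$ podman run -d --user 1000:1000 --cap-drop=ALL --read-only --tmpfs /tmp docker.io/library/alpine:latest sleep infinity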
And isn’t it a design flaw if you can see all processes from inside a container? This could provide useful information for escaping it.
> RandomX utilizes a virtual machine that executes programs in a special instruction set that consists of integer math, floating point math and branches.
> These programs can be translated into the CPU's native machine code on the fly (example: program.asm).
> At the end, the outputs of the executed programs are consolidated into a 256-bit result using a cryptographic hashing function (Blake2b).
I doubt that anyone managed to create an ASIC that does this more efficiently and cost-effectively than a basic CPU. So no, probably no one is mining Monero using an ASIC.
If they can enslave 100s or even 1000s of machines mining XMR for them, it's easy money, if you set aside the legality of it.
Docker will overwrite your rules when you publish ports.
Do not publish ports with docker. Do not run internal services on the publicly accessible system.
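Concretely: if something must be published, bind it to loopback so Docker's iptables rules don't expose it to the world (service and port are examples):

services:
  app:
    image: your-image
    ports:
      - "127.0.0.1:3000:3000"   # reachable only via a local reverse proxy or VPN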
b) if you want to limit your hosting environment to only the language/program you expect to run, you should provision with unikernels, which enforce it
Except it seems to have done so in this case?
Is there ever a reason someone should run a Docker container as root?
An inbound firewall can only help protect services that aren't meant to be reachable on the public internet. This service was exposed to the internet intentionally so a firewall wouldn't have helped avoid the breach.
The lesson to me is that keeping up with security updates helps prevent publicly exposed services from getting hacked.
js scripts running on frameworks running inside containers
PS: so I see the host ended up staying uncompromised.
Unless run as root, this could return "file not found" because of missing permissions, and not just because the file doesn't actually exist, right?
> “I don’t use X” doesn’t mean your dependencies don’t use X
That is beyond obvious, and I don't understand how anyone would feel safe after reading about a CVE in a widely used technology when they run dozens of containers on their server. I have Docker containers, and as soon as I read the article I went and checked, because I have no idea what technology most of them are built with.
> No more Umami. I’m salty. The CVE was disclosed, they patched it, but I’m not running Next.js-based analytics anymore.
Nonsensical reaction.
Nothing is immune. What analytics are you going to run? If you roll your own you'll probably leave a hole somewhere.
But kudos for the word play!
But that alone would not solve the problem, the RCE being reachable over HTTP; that is why edge proxy providers like Cloudflare[0] and Fastly[1] proactively added protections to their WAF products.
Even Cloudflare had an outage trying to protect its customers[2].
[0] https://blog.cloudflare.com/waf-rules-react-vulnerability/
[1] https://www.fastly.com/blog/fastlys-proactive-protection-cri...
[2] https://blog.cloudflare.com/5-december-2025-outage/