Now the tables have turned, and legitimate software has to become polymorphic, in a sense, to thwart attacks by malware.
I'm curious because years ago academia strongly pushed the FG-ASLR story, then OpenBSD did kernel relinking, but I haven't heard any industry account of how effective it actually is.
Not a rhetorical question.
(FG-)ASLR is more of a "target the exploit rather than the vulnerability" style of mitigation.
FG-ASLR helps because, even when you know where .text is, there are now N possible randomized locations for the piece of code your exploit leverages. So if you pick one and exploit M machines that way, only about M/N of the exploits will succeed (the machines where you got lucky).
Ultimately it is obfuscation, but with enough entropy it is very effective. It can't remove or prevent the underlying vulnerability, but it makes it much more work to turn an exploit into code execution consistently enough for it to be useful.
I wonder if it is possible to make a relinker which only requires the final binary as input -- then it could be easily incorporated into existing systems.
One way I can think of is to keep relocation/original-object information in the debug sections, so that one could reconstruct the original object files and re-link them. I am guessing this would not work with LTO, though... Or we could just emit a bunch of debug-style sections and store the input object/library files verbatim -- that would at least double the binary size, but would make relinking much easier.
https://research.facebook.com/publications/bolt-a-practical-...
https://groups.google.com/g/llvm-dev/c/ef3mKzAdJ7U/m/1shV64B...
At last, someone understands that static, fully predictable, reproducible builds are also a convenience feature for the attacker.
Fully reproducible builds provide great assurance against supply-chain attacks. But 100% bit-for-bit reproducibility is in some cases more than you need: what matters is whether the artifact can be easily proven to be functionally identical to the canonical one.
So I am 100% for a fully predictable sshd random-relink kit producing unpredictable sshd binaries, but only as long as there are instructions for checking that an sshd binary which allegedly came from it indeed could have come from it, and was not quietly replaced by some malicious entity.
You can easily verify the integrity of the object files used in the random relinking: they are included in the binary distribution and are required to perform the relinking.
The static-versus-dynamic-linking debate is still going on, and a very strong argument against static linking has always been that it makes upgrading vulnerable libraries difficult. But think about it: package managers already hold the metadata about what links to what; object files can be distributed just as easily as shared objects; the last necessary step is to move the actual linking from the run-time loader to the package manager.
I guess the holy grail would be to combine this with hot patching (https://en.wikipedia.org/wiki/Patch_(computing)#HOT-PATCHING), and relink the kernel every now and then while it is running (currently, a system under attack would have to be rebooted every now and then, and that’s undesirable). That would face ‘a few’ technical hurdles, though.
So it looks like they are going to move on from CVS eventually.
AFAIK, they still interface with CVS directly, but I assume the expectation is to eventually transition to got.
[0]: https://marc.info/?l=openbsd-tech&m=167388832715992&w=2
Makefile.relink:

    cc -o sshd `echo ${OBJS} | tr ' ' '\n' | sort -R` ${LDADD}
    ./sshd -V && install -o root -g wheel -m ${BINMODE} sshd /usr/sbin/sshd
https://github.com/openbsd/src/commit/898412097f87ba70d4012f...
Sometimes it can be beneficial to optimize the link order so that most of what the main thread executes stays in cache. Obviously this only really matters for CPU-intensive programs.