Note this is pre-print, it was not peer-reviewed.
The author seems legit. He did some research before: https://scholar.google.com/citations?user=Unl69CYAAAAJ&hl=en...
Though yet to be top-notch in the field.
These are some fascinating ideas, but I don't buy the crossed-red-lines notion. It seems like an attention teaser.
Whether the systems are "smart" enough to take that a step further, i.e., survive at all costs, seems unlikely. But I can't imagine that that's far away at this point.
One thing that stood out to me here was the shutdown avoidance. Author's state that these models (Qwen more so), were able to intercept SIGKILL and successfully self-replicate to another device.
Not another device, just another process
I mean any half decent coding LLM can literally do
``` from transformers import SomeLLM
model = SomeLLM.from_pretrained(“…”) ```
The hard part will be provisioning the GPUs and actual orchestration at scale to do any kind of actual damage
Although I agree they need to replicate themselves on a different machine. But these are also just shell commands, all it needs access to is a cloud cli/api tool with an active payment method and it can do the same replication