The general worries are:
- An ASI could easily be smart enough to lie to us about its capabilities. It could pretend to be less capable than it is and hope that people hook it up to the internet or give it direct access to run commands on our computers (as people are already doing with ChatGPT). We currently have no real insight into how ChatGPT thinks. For all we know, it's 10x smarter than it lets on, and we'd have no way of telling.
- Modern computers (software and firmware) are almost certainly riddled with security vulnerabilities we don't know about, and some of those vulnerabilities allow remote code execution. An ASI might be able to read or extract firmware and find plenty to exploit. If a superintelligent AI can write code and has access to the internet, it might be able to infect lots of computers and get them to run parts of its mind. If that happened, how would we know? How would we stop it? It could cause all sorts of mayhem and, worse, quietly suppress any attempts people make to understand what's going on or put an end to it. ("Hm, our analytics engine says that article about technology malfunctioning got lots of views, but they all came from dishwashers and things. Huh - I refreshed and the anomaly has gone away. Nothing to see here, I guess!")
It might be prudent not to give a potential AGI access to the internet, or the ability to run code at all outside a (preferably airgapped) sandbox. OpenAI apparently doesn't think we need to be that careful with GPT-4.
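For illustration, here's a minimal sketch of what "no internet, code only in a sandbox" might look like in practice: running model-generated code in a Docker container with networking disabled, resource caps, and capabilities dropped. The script contents, image, and limits here are placeholders, and a genuinely airgapped setup would go much further (dedicated hardware, no shared filesystem with the host); this only shows the basic shape of the idea.

```python
import subprocess

# Hypothetical sketch: execute untrusted, model-generated code inside a
# Docker container with no network access and hard resource limits.
# Assumes Docker is installed. A real airgapped sandbox would be far
# stricter than this.

UNTRUSTED_SCRIPT = "print('hello from the sandbox')"  # placeholder payload

result = subprocess.run(
    [
        "docker", "run",
        "--rm",                # delete the container when it exits
        "--network", "none",   # no network interfaces at all
        "--memory", "256m",    # cap memory usage
        "--cpus", "0.5",       # cap CPU usage
        "--read-only",         # read-only root filesystem
        "--cap-drop", "ALL",   # drop all Linux capabilities
        "python:3.12-slim",    # any minimal Python image
        "python", "-c", UNTRUSTED_SCRIPT,
    ],
    capture_output=True,
    text=True,
    timeout=30,                # kill runaway processes
)

print(result.stdout)
```

Even a setup like this only raises the bar; it doesn't eliminate the worry above, since the whole premise is that a sufficiently capable system might find escape routes we haven't thought of.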