Hello there, my point($) is that, as far as I understand the technology (I did an ML PhD 20 years ago and have worked in "applications of AI" since), we don't have a problem with machine autonomy, for two reasons. First, the machines have no autonomy and probably never will. Second, there are already 7 billion autonomous agents: perhaps 0.1% of them are genuinely dangerous to other humans, and while many of those are under some form of social control (prison, family, hospital), some aren't.
We do have a problem with dangerous actuators, and nuclear submarines are very dangerous actuators: they are badly run and not well managed by any social system.
Rather than worrying about dangerous AI being invented and misused, I think we should worry about nuclear submarines being used.
Very few people acknowledge or give a fig about this; instead they sit in bars and talk about fictional scenarios involving what is likely impossible technology (strong AI). And yet today, tomorrow, or on any day in the foreseeable future, hundreds(+) of millions of people may die because of the stupid and careless setup of 60-year-old technology.
($) Many other people have made this point before me, most effectively a chap I think is called Jaron Lanier, who made it in a talk.
(+) I've made this point before on HN, and when I do I get taken to task by two groups of people. Some think the estimate is too high because "nuclear weapons aren't that destructive and/or not many of them would actually be used". I invite everyone to do their own research on this topic; a good starting point is an application called NUKEMAP. Give yourself a budget of 500 100KT nukes and go after the cities of a continent of your choice. The second criticism is that the estimate is far too low, which I agree with, but I am thinking of the people who die that day, not the billions who starve and die as societies collapse (note: these may or may not include the societies of western Europe or the USA).
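For what it's worth, the arithmetic behind "hundreds of millions" is trivial. Here is a minimal back-of-envelope sketch; the per-strike fatality figures are illustrative guesses of mine, not NUKEMAP output, so substitute numbers from your own NUKEMAP runs:

    # Back-of-envelope for the "hundreds of millions" figure above.
    # ASSUMPTION: deaths-per-strike values are illustrative placeholders,
    # not sourced data; replace them with your own NUKEMAP results.
    warheads = 500  # the budget suggested above, 100KT each
    for label, deaths_per_strike in [("low", 200_000),
                                     ("mid", 400_000),
                                     ("high", 800_000)]:
        total = warheads * deaths_per_strike
        print(f"{label}: {warheads} x {deaths_per_strike:,} = {total:,} deaths")
    # Even the low guess lands at 100,000,000; the mid guess at 200,000,000.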