Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.
I've spent enough time in the shady parts of the internet to realize that people who spend significant time learning about niche/dangerous hobbies _tend_ to realize the seriousness of it.
My fear with bio-weapons would be some 13-year-old being given step-by-step instructions with almost 0 effort to create something truly dangerous. It lowers the bar quite a bit for things that tended to be pretty niche and extreme.
even if this 13-year old somehow found herself alone in a fully-equipped BSL-3 laboratory, it's still a fuck-ton of work. far from "almost 0 effort."
not knowing what to do is not the bottleneck.
Conversational interfaces definitely increase the accessibility of knowledge.
And critically, SaaS AI platforms increase the availability of AI. E.g. the person who wouldn't be able to set up and run a local model, but can click a button on a website.
It seems reasonable to preclude SaaS platforms from making it trivial to produce the worse societal harms. E.g. prevent stable diffusion services from returning celebrities or politicians, or LLMs from producing political content.
Sure, it's still possible. But a knee high barrier at least keeps out those who aren't smart enough to step over it.
For instance there's a pretty huge culture around building your own nuclear fusion device at home. And there are tremendous resources available as well as step-by-step guides on how to do it. It's still exceptionally difficult (as well as quite dangerous), because it's not like you just get the pieces, put everything together like legos, flick on the switch, and boom you have nuclear fusion. There's a million things that not only can but will go wrong. So in spite of the absolutely immense amount of information out there, it's still a huge achievement for any individual or group to achieve fusion.
And now somebody trying to do any of these sorts of things with the guidance of... chatbots? It just seems like the most probable outcome is that you end up getting yourself killed.
Why do I bring up David Hahn* if he never achieved fusion and wasn't a terrorist? Because of how far he got as a seventeen-year-old. A forty-year-old with a FAANG salary and the ideological bent of Theodore Kaczynski could do stupid amounts of damage. The first step would be to not try to build a nuclear fusion device. The difficulty of building one doesn't seem so important when every sociopath who wants to be a terrorist can just go out and buy a gun and head to the local mall. There were two such major incidents in the past weeks, with 12 more mass shootings from Friday to Sunday over this past Halloween weekend**. Instead of worrying about the far-fetched, we would do better addressing something that killed 18 people in Maine and 19 in Texas, and 11 more across the country.
* https://www.pbs.org/newshour/science/building-a-better-breed...
** https://www.npr.org/2023/10/29/1209340362/mass-shootings-hal...
so when are we going to start regulating and restricting the sale of education/textbooks?
a knowledge portal isn't a new concept.
See: https://en.wikipedia.org/wiki/Chemical_Weapons_Convention
Moreover, current AI can be turned into an agent using basic programming knowledge. Such an agent is not very capable yet, but it's getting better by the month.
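To make the "basic programming knowledge" point concrete, here's a minimal sketch of a generic tool-using agent loop. The `llm()` function is a stub standing in for any chat-completion API call, and the tool names and `ACTION`/`FINAL` protocol are illustrative assumptions, not any specific vendor's interface:

```python
def llm(history):
    # Stub standing in for a real model API call; scripted replies for demo.
    replies = ['ACTION search "topic"', "FINAL done"]
    return replies[sum(1 for role, _ in history if role == "assistant")]

# Toy tool registry; a real agent would register web search, code exec, etc.
TOOLS = {"search": lambda q: f"3 results for {q}"}

def run_agent(goal, max_steps=5):
    history = [("user", goal)]
    for _ in range(max_steps):
        reply = llm(history)
        history.append(("assistant", reply))
        if reply.startswith("FINAL"):
            return reply[len("FINAL "):]
        if reply.startswith("ACTION"):
            _, name, arg = reply.split(" ", 2)
            observation = TOOLS[name](arg.strip('"'))
            history.append(("tool", observation))  # feed result back in
    return None

print(run_agent("look something up"))  # -> done
```

The whole scaffold is a while-loop, a string parser, and a dictionary of functions, which is the point: the hard part lives in the model, not in the wrapper code.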
That doesn't seem right. Surely making it easier for non-state actors to do things that state actors refrain from only because they agreed to treaties banning them can only increase the risk that non-state actors will do those things?
Laser blinding weapons are banned by treaty, yet widespread access to lasers led to scenes like this a decade ago during the Arab Spring: https://www.bbc.com/news/av/world-middle-east-23182254
This is splitting hairs for no real purpose. Additional risk is new risk.
> By the mid 1920s there was already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.
Those global stockpiles continue to be controlled by state actors though, not aggrieved civilians.
Once we lost that advantage, by the 1990s we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer.
We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.
---
"The Satyan-7 facility was declared ready for occupancy by September 1993 with the capacity to produce about 40–50 litres (11–13 US gal) of sarin, being equipped with 30-litre (7.9 US gal) capacity mixing flasks within protective hoods, and eventually employing 100 Aum members; the UN would later estimate the value of the building and its contents at $30 million.[23]
Despite the safety features and often state-of-the-art equipment and practices, the operation of the facility was very unsafe – one analyst would later describe the cult as having a "high degree of book learning, but virtually nothing in the way of technical skill."[24]"
---
All of those workers, countless experts working for who knows how many man-hours, and just massive-scale development culminated in a subway attack carried out on 3 lines, during rush hour. It killed a total of 13 people. Imagine if they just bought a bunch of cars and started running people over.
Many of these things sound absolutely terrifying, but in practice they are not such a threat except when carried out at a military level of scale and development.
[1] - https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
I mean, you can make chlorine gas by mixing bleach and vinegar.
How does actual and potential harm from these incidents compare to harm from common traffic accidents / common health issues / etc? Perhaps legislation / government intervention should be based on harm / benefit? Extreme harm for example might be caused by a large asteroid impact etc so preparing for that could be worthwhile...
They’d probably end up killing fewer people with a lot more effort. Chemical weapons are not really all that effective.
But if your target is an unsuspecting small population in an enclosed space who's spending a lot of time there, the calculus changes a bit. Sarin, for example, is odorless and colorless; mustard gas can also be colorless, doesn't hit you immediately, and is unlikely to be detected by smell.
It actually happened in Iran and it's lucky the people responsible either didn't know what they were doing or were actively trying to not kill people because they easily could have.
How much death and destruction has been brought by state actors vs aggrieved civilians?
Moreover, once an AI with such capability is open source, there's practically no way to put it back into Pandora's box. Implementing proper and judicious regulations will reduce the risks to everyone.
This is incredibly naive. These models unlock capabilities for previously unsophisticated actors to do extremely dangerous things in almost undetectable ways.