AI Safety Dance (aisafety.dance)

> (Catastrophic AI Risk doesn't even require "super-human general intelligence"! For example, an AI that's "only" good at designing viruses could help a bio-terrorist organization (like Aum Shinrikyo) kill millions of people.)

> Bio-engineered pandemics: A bio-terrorist cult (like Aum Shinrikyo) uses AI (like AlphaFold) and DNA-printing (which is getting cheaper fast) to design multiple new super-viruses, and release them simultaneously in major airports around the globe. (Proof of concept: scientists have already re-built polio from mail-order DNA... two decades ago.)

Since this conjecture has been repeated so often, I'll leave links to some of the responses:

https://datainnovation.org/2024/04/the-shift-in-rhetoric-on-ai-and-biothreats-is-a-lesson-on-the-risks-of-premature-regulation/
https://news.kiwistand.com/stories?index=0x65468b695fcb66d7e5e624d8e7848aac3cfabfc0ce6ef8b294bb6b8562597b2f65792a85
https://www.fast.ai/posts/2023-11-07-dislightenment.html

The TL;DR is that bioterrorism is not an AI-activated risk: there is no credible empirical evidence that knowledge on its own greatly augments the process. Setting aside the zoonotic spillover events we're already exposed to all the time, the major concern with biological threats is that data collection and in vivo/in vitro research are concentrated in facilities with varying degrees of oversight (or neglect of it). Furthermore, we have clear evidence that incentives within such facilities can be perverse enough to lead to the deliberate release of pathogens (e.g., the 2001 anthrax attacks). Ultimately, biology requires controlled, sterile environments, and the equipment in this field is already heavily controlled and heavily surveilled. The chief risk lies well downstream of the in silico theory. Criminal activity is conduct, distinct from the speech that AI occupies. Compared to Aum Shinrikyo, giving everyone an encyclopedia of molecular biology is a public good.

Thanks for commenting. I always find it incredibly effortful to dispel the easily written arguments of AI doomers.