Large Language Models and Biorisk

American Journal of Bioethics 23 (10):115-118 (2023)
We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access controls on biomedical AI. We conclude with a suggestion about future research directions in bioethics.

Author Profiles

Nathaniel Sharadin
University of Hong Kong
William D'Alessandro
Oxford University
Harry R. Lloyd
Yale University

