Large Language Models and Biorisk

American Journal of Bioethics 23 (10):115-118 (2023)

Abstract

We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access controls on biomedical AI. We conclude with a suggestion about future research directions in bioethics.

Author Profiles

Nathaniel Sharadin
University of Hong Kong
William D'Alessandro
University of Oxford
Harry R. Lloyd
Yale University
