Blurring the Line Between Human and Machine Minds: Is U.S. Law Ready for Artificial Intelligence?

Abstract

This Essay discusses whether U.S. law is ready for artificial intelligence (“AI”), a technology that is blurring the line between human and machine minds. Perhaps the most recent and high-profile examples of AI are Large Language Models (“LLMs”) such as ChatGPT and Google Gemini, which can generate written text, reason, and analyze in a manner that seems to mimic human capabilities. U.S. law is based on English common law, which in turn incorporates Christian principles that assume the dominance and uniqueness of humankind. U.S. law assumes that human communication skills are accompanied by attributes such as consciousness and Free Will (“FW”) that, in turn, underpin critical legal concepts such as mens rea (i.e., intent). Philosophers and others generally agree that consciousness is necessary for FW. On the assumption that human beings possess consciousness that supports FW, the law deems human beings capable of acting with legal consequences, from entering into contracts to committing crimes. With a focus on LLMs, the Essay suggests that U.S. law may struggle to respond to AI because the technology disrupts the law’s assumptions regarding the uniqueness of human traits and abilities such as consciousness, FW, written communication, and reasoning.

Author's Profile

Kipp Coddington
University of Wyoming
