Abstract
This Essay asks whether U.S. law is ready for artificial intelligence (“AI”) that is blurring the line between human and machine minds. Perhaps the most high-profile recent examples of AI are Large Language Models (“LLMs”) such as ChatGPT and Google Gemini, which can generate written text and reason and analyze in ways that seem to mimic human capabilities. U.S. law is based on English common law, which in turn incorporates Christian principles that assume the dominance and uniqueness of humankind. U.S. law assumes that human communication skills are accompanied by attributes such as consciousness and Free Will (“FW”) that, in turn, underpin critical legal concepts such as mens rea, i.e., intent. Philosophers and others generally agree that consciousness is necessary for FW. On the assumption that human beings possess consciousness that supports FW, the law deems them capable of acts with legal consequences, from entering into contracts to committing crimes. With a focus on LLMs, the Essay suggests that U.S. law may struggle to respond to AI because the technology disrupts the law’s assumptions about the uniqueness of human traits and abilities such as consciousness, FW, written communication, and reasoning.