AI, alignment, and the categorical imperative

AI and Ethics 3:337-344 (2023)

Abstract

In recent articles, Tae Wan Kim, John Hooker, and Thomas Donaldson attempt to solve the alignment problem. As they define it, the alignment problem is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if they obey principles of universalization and autonomy, as well as a deontic utilitarian principle. Programming machines to do so might be useful, in their view, for applications such as future autonomous vehicles. Their proposal draws on both traditional logic-based and contemporary connectionist approaches to fuse factual information with normative principles. I argue that this approach makes demands of machines that go beyond what is currently feasible and may extend past the limits of the possible for AI. I also argue that a deontological ethics for machines should place greater stress on the formula of humanity of the Kantian categorical imperative. On this principle, one ought never to treat a person as a mere means. Recognizing what makes a person a person requires ethical insight, and similar insight is needed to distinguish treatment as a means from treatment as a mere means. For this reason, the resources in Kim, Hooker, and Donaldson’s approach are insufficient. Hesitation regarding the deployment of autonomous machines is warranted in light of these alignment concerns.

Author's Profile

Fritz J. McDonald
Oakland University
