Maxim Consequentialism for Bounded Agents

Abstract

Normative moral theories are frequently invoked for one of two distinct purposes: (1) to explicate a criterion of rightness, or (2) to provide an ethical decision-making procedure. Although a criterion of rightness provides a valuable theoretical ideal, proposed criteria can rarely be (nor are they intended to be) directly translated into a feasible decision-making procedure. This paper applies the computational framework of bounded rationality to moral decision-making to ask: how ought a bounded human agent make ethical decisions? We suggest agents ought to follow moral maxims: principles that approximate rightness in many situations, but that can be overridden in specific, precisely describable circumstances. While this intuitive idea has been proposed many times before, we provide a precise model of how maxim consequentialism functions as an approximation to an act-consequentialist criterion of rightness, while maintaining the flexibility and defeasibility that have eluded most forms of rule consequentialism. Furthermore, while our overarching aim is to propose a new normative standard of moral decision-making, we demonstrate how maxim consequentialism can also function as a descriptive account of human behavior. We conclude by noting that different criteria of rightness may lead to different maxim-based ethics.
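
To make the core idea concrete, here is a minimal illustrative sketch in Python. It is not from the paper: the maxim, the actions, the feature names, and the toy ideal_value function are all hypothetical assumptions introduced only to show the structure the abstract describes, namely a cheap default-plus-exception decision procedure approximating a costlier criterion of rightness.

from dataclasses import dataclass
from typing import Callable, Dict

# Toy stand-in for the criterion of rightness: the value of each action in a
# situation. In the abstract's terms this plays the role of the act-consequentialist
# ideal, which a bounded agent cannot evaluate exhaustively in real time.
def ideal_value(action: str, situation: Dict[str, bool]) -> float:
    harm = situation.get("keeping_promise_causes_serious_harm", False)
    if action == "keep_promise":
        return -10.0 if harm else 5.0
    return 1.0 if harm else -5.0  # "break_promise"

@dataclass
class Maxim:
    """A default prescription plus a precisely describable override condition."""
    default_action: str
    exception: Callable[[Dict[str, bool]], bool]
    exception_action: str

    def recommend(self, situation: Dict[str, bool]) -> str:
        # Bounded decision procedure: one cheap check instead of full evaluation.
        return self.exception_action if self.exception(situation) else self.default_action

# Hypothetical maxim: "keep promises, unless doing so would cause serious harm."
keep_promises = Maxim(
    default_action="keep_promise",
    exception=lambda s: s.get("keeping_promise_causes_serious_harm", False),
    exception_action="break_promise",
)

for situation in ({"keeping_promise_causes_serious_harm": False},
                  {"keeping_promise_causes_serious_harm": True}):
    by_maxim = keep_promises.recommend(situation)
    by_ideal = max(["keep_promise", "break_promise"],
                   key=lambda a: ideal_value(a, situation))
    print(situation, "maxim:", by_maxim, "ideal:", by_ideal)

In this toy case the maxim's verdict matches the ideal verdict in both situations while requiring only a single feature check, which is the sense in which a maxim can approximate a criterion of rightness yet remain defeasible.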

Author Profiles

David Danks
University of California, San Diego
