Abstract
Let us imagine an ideal ethical agent, i.e., an agent who (i) holds a certain ethical theory, (ii) has all the factual knowledge needed to determine which of the actions open to her are right and which are wrong according to her theory, and who (iii) is ideally motivated to do whatever her ethical theory demands of her. If we grant that the notions of omniscience and ideal motivation both make sense, we may ask: Could there possibly be an ideal utilitarian, that is, an ideal ethical agent whose ethical theory says that our only moral obligation is to maximize utility? I claim that an ideal agent cannot be a utilitarian. My reasoning against ideal utilitarianism parallels Putnam's famous argument against the brain-in-a-vat hypothesis. Putnam argues that an envatted brain cannot describe its own situation because its words do not refer to brains and vats; I argue that an ideal utilitarian cannot entertain or communicate the beliefs necessary for being a utilitarian.