The Problem of Evil in Virtual Worlds

In Mark Silcox (ed.), Experience Machines: The Philosophy of Virtual Worlds. Lanham, MD: Rowman & Littlefield. pp. 137-155 (2017)
Abstract
In its original form, Nozick’s experience machine serves as a potent counterexample to a simplistic form of hedonism. The pleasurable life offered by the experience machine, it seems safe to say, lacks the requisite depth that many of us find necessary to lead a genuinely worthwhile life. Among other things, the experience machine offers no opportunities to establish meaningful relationships, or to engage in long-term artistic, intellectual, or political projects that survive one’s death. This intuitive objection finds some support in recent research regarding the psychological effects of phenomena such as video game play or social media use. After a brief discussion of these problems, I will consider a variation of the experience machine in which many of these deficits are remedied. In particular, I’ll explore the consequences of creating a virtual world populated with strongly intelligent AIs with whom users could interact, and that could be engineered to survive the user’s death. The presence of these agents would allow for the cultivation of morally significant relationships, and the world’s long-term persistence would help ground possibilities for a meaningful, purposeful life in a way that Nozick’s original experience machine could not. While the creation of such a world is obviously beyond the scope of current technology, it represents a natural extension of the existing virtual worlds provided by current video games, and it provides a plausible “ideal case” toward which future virtual worlds will move. While this improved experience machine would seem to represent progress over Nozick’s original, I will argue that it raises a number of new problems stemming from the fact that the world was created to provide a maximally satisfying and meaningful life for the intended user. This, in turn, raises problems analogous in some ways to the problem(s) of evil faced by theists.
In particular, I will suggest that it is precisely those features that would make a world most attractive to potential users—the fact that the AIs are genuinely moral agents whose well-being the user can significantly impact—that render its creation morally problematic, since they require that the AIs inhabiting the world be subject to unnecessary suffering. I will survey the main lines of response to the traditional problem of evil, and will argue that they are irrelevant to this modified case. I will close by considering what constraints on the future creation of virtual worlds, if any, might serve to allay the concerns identified in the previous discussion. I will argue that, insofar as the creation of such worlds would allow us to serve morally valuable purposes that could not easily be served otherwise, we would be unwise to prohibit it altogether. However, if our processes of creation are to be justified, they must take account of the interests of the moral agents that would come to exist as the result of our world creation.
PhilPapers/Archive ID
SHETPO-66
Revision history
Archival date: 2017-10-13

