Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven’t done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of ‘procreative beneficence’ towards robots.
One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions. It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.
Human obsolescence is imminent. We are living through an era in which our activity is becoming less and less relevant to our well-being and to the fate of our planet. This trend toward increased obsolescence is likely to continue in the future, and we must do our best to prepare ourselves and our societies for this reality. Far from being a cause for despair, this is in fact an opportunity for optimism. Harnessed in the right way, the technology that hastens our obsolescence can open us up to new utopian possibilities and enable heightened forms of human flourishing.
Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: the literature on technological unemployment and workplace automation; the antiwork critique—which I argue gives reasons to embrace technological unemployment; and the philosophical debate about the conditions for meaning in life—which I argue gives reasons for concern.
Soon there will be sex robots. The creation of such devices raises a host of social, legal and ethical questions. In this article, I focus in on one of them. What if these sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse? Should the creation and use of such robots be criminalised, even if no person is harmed by the acts performed? I offer an argument for thinking that they should be. The argument consists of two premises. The first claims that it can be a proper object of the criminal law to regulate wrongful conduct with no extrinsically harmful effects on others. The second claims that the use of robots that replicate acts of rape and child sexual abuse would be wrongful, even if such usage had no extrinsically harmful effects on others. I defend both premises of this argument and consider its implications for the criminal law. I do not offer a conclusive argument for criminalisation, nor would I wish to be interpreted as doing so; instead, I offer a tentative argument and a framework for future debate. This framework may also lead one to question the proposed rationales for criminalisation.
This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.
If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.
We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.
The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship. We identify eight core objections to the QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness toward this technology and advocate the development of a research agenda for the positive use of QR technologies.
Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered our virtue friends - that to do so is philosophically reasonable. Furthermore, I argue that even if you do not think that robots can be our virtue friends, they can fulfil other important friendship roles, and can complement and enhance the virtue friendships between human beings.
This chapter examines a common objection to sex robots: the symbolic-consequences argument. According to this argument, sex robots are problematic because they symbolise something disturbing about our attitude to sex-related norms such as consent and the status of our sex partners, and because of the potential consequences of this symbolism. After formalising this objection and considering several real-world uses of it, the chapter subjects it to critical scrutiny. It argues that while there are grounds for thinking that sex robots could symbolically represent a troubling attitude toward women (and maybe children) and the norms of interpersonal sexual relationships, the troubling symbolism is going to be removable in many instances and reformable in others. What will ultimately matter are the consequences of the symbolism, but these consequences are likely to be difficult to ascertain. This may warrant an explicitly experimental approach to the development of this technology.
Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.
This article argues that access to meaningful sexual experience should be included within the set of goods that are subject to principles of distributive justice. It argues that some people are currently unjustly excluded from meaningful sexual experience and it is not implausible to suggest that they might thereby have certain claim rights to sexual inclusion. This does not entail that anyone has a right to sex with another person, but it does entail that duties may be imposed on society to foster greater sexual inclusion. This is a controversial thesis and this article addresses this controversy by engaging with four major objections to it: the misogyny objection; the impossibility objection; the stigmatisation objection; and the unjust social engineering objection.
Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an important complement to other future-oriented inquiries. Second, it sets out a methodology and a set of methods for undertaking this study. Third, it gives a practical illustration of what this ‘axiological futurism’ might look like by developing a model of the axiological possibility space that humans are likely to navigate over the coming decades.
Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.
In July 2014, the roboticist Ronald Arkin suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that methadone is used to treat heroin addicts. Taking this on board, it would seem that there is reason to experiment with the regulation of this technology. But most people seem to disagree with this idea, with legal authorities in both the UK and US taking steps to outlaw such devices. In this paper, I subject these different regulatory attitudes to critical scrutiny. In doing so, I make three main contributions to the debate. First, I present a framework for thinking about the regulatory options that we confront when dealing with child sex robots. Second, I argue that there is a prima facie case for restrictive regulation, but that this is contingent on whether Arkin’s hypothesis has a reasonable prospect of being successfully tested. Third, I argue that Arkin’s hypothesis probably does not have a reasonable prospect of being successfully tested. Consequently, we should proceed with utmost caution when it comes to this technology.
According to several authors, the enhancement project incorporates a quest for hyperagency - i.e. a state of affairs in which virtually every constitutive aspect of agency (beliefs, desires, moods, dispositions and so forth) is subject to our control and manipulation. This quest, it is claimed, undermines the conditions for a meaningful and worthwhile life. Thus, the enhancement project ought to be forestalled or rejected. How credible is this objection? In this article, I argue: “not very”. I do so by evaluating four different versions of the “hyperagency” objection from four different authors. In each case I argue that the objection either fails outright or, at best, provides weak and defeasible grounds for avoiding enhancement. In addition to this, I argue that there are plausible grounds for thinking that enhancement helps, rather than hinders, us in living the good life.
There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace certain kinds of responsibility gap. The argument is based on the idea that human morality is often tragic. We frequently confront situations in which competing moral considerations pull in different directions and it is impossible to perfectly balance these considerations. This heightens the burden of responsibility associated with our choices. We cope with the tragedy of moral choice in different ways. Sometimes we delude ourselves into thinking the choices we make were not tragic; sometimes we delegate the tragic choice to others; sometimes we make the choice ourselves and bear the psychological consequences. Each of these strategies has its benefits and costs. One potential advantage of autonomous machines is that they enable a reduced cost form of delegation. However, we only gain the advantage of this reduced cost if we accept that some techno-responsibility gaps are virtuous.
Henry Shevlin’s paper – “How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. Unfortunately—and I guess this is hardly surprising—I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I will explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.
In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.
Humans have long wondered whether they can survive the death of their physical bodies. Some people now look to technology as a means by which this might occur, using terms such as 'whole brain emulation', 'mind uploading', and 'substrate independent minds' to describe a set of hypothetical procedures for transferring or emulating the functioning of a human mind on a synthetic substrate. There has been much debate about the philosophical implications of such procedures for personal survival. Most participants in that debate assume that the continuation of identity is an objective fact that can be revealed by scientific enquiry or rational debate. We bring into this debate a perspective that has so far been neglected: that personal identities are in large part social constructs. Consequently, to enable a particular identity to survive the transference process, it is not sufficient to settle age-old philosophical questions about the nature of identity. It is also necessary to maintain certain networks of interaction between the synthetic person and its social environment, and sustain a collective belief in the persistence of identity. We defend this position by using the example of the Dalai Lama in Tibetan Buddhist tradition and identify technological procedures that could increase the credibility of personal continuity between biological and artificial substrates.
Divine command theories come in several different forms, but at their core all of these theories claim that certain moral statuses exist in virtue of the fact that God has commanded them to exist. Several authors argue that this core version of the divine command theory (DCT) is vulnerable to an epistemological objection. According to this objection, DCT is deficient because certain groups of moral agents lack epistemic access to God’s commands. But there is confusion as to the precise nature and significance of this objection, and there are critiques of its key premises. In this article, I try to clear up this confusion and address these critiques. I do so in three ways. First, I offer a simplified general version of the objection. Second, I address the leading criticisms of the premises of this objection, focusing in particular on the role of moral risk/uncertainty in our understanding of God’s commands. And third, I outline four possible interpretations of the argument, each with a differing degree of significance for the proponent of the DCT.
The idea that humans should abandon their individuality and use technology to bind themselves together into hivemind societies seems both farfetched and frightening – something that is redolent of the worst dystopias from science fiction. In this article, we argue that these common reactions to the ideal of a hivemind society are mistaken. The idea that humans could form hiveminds is sufficiently plausible for its axiological consequences to be taken seriously. Furthermore, far from being a dystopian nightmare, the hivemind society could be desirable and could enable a form of sentient flourishing. Consequently, we should not be so quick to deny it. We provide two arguments in support of this claim – the axiological openness argument and the desirability argument – and then defend it against three major objections.
The chapter introduces the edited collection Robot Sex: Social and Ethical Implications. It proposes a definition of the term 'sex robot' and examines some current prototype models. It also considers the three main ethical questions one can ask about sex robots: (i) do they benefit/harm the user? (ii) do they benefit/harm society? or (iii) do they benefit/harm the robot?
It is widely believed that a conservative moral outlook is opposed to biomedical forms of human enhancement. In this paper, I argue that this widespread belief is incorrect. Using Cohen’s evaluative conservatism as my starting point, I argue that there are strong conservative reasons to prioritise the development of biomedical enhancements. In particular, I suggest that biomedical enhancement may be essential if we are to maintain our current evaluative equilibrium (i.e. the set of values that undergird and permeate our current political, economic, and personal lives) against the threats to that equilibrium posed by external, non-biomedical forms of enhancement. I defend this view against modest conservatives who insist that biomedical enhancements pose a greater risk to our current evaluative equilibrium, and against those who see no principled distinction between the forms of human enhancement.
What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each shares the view that technology plays a key role in ensuring that the good prevails over the bad. Whatever its strength, to defend this stance, one must flesh out an argument with four key premises. Each of these premises is highly controversial and can be subjected to a number of critiques. The paper discusses five such critiques in detail (the values critique, the treadmill critique, the sustainability critique, the irrationality critique and the insufficiency critique). The paper also considers possible responses from the techno-optimist. Finally, it is concluded that although strong forms of techno-optimism are not intellectually defensible, a modest, agency-based version of techno-optimism may be defensible.
Technology could be used to improve morality but it could do so in different ways. Some technologies could augment and enhance moral behaviour externally by using external cues and signals to push and pull us towards morally appropriate behaviours. Other technologies could enhance moral behaviour internally by directly altering the way in which the brain captures and processes morally salient information or initiates moral action. The question is whether there is any reason to prefer one method over the other. In this article, I argue that there is. Specifically, I argue that internal moral enhancement is likely to be preferable to external moral enhancement, when it comes to the legitimacy of political decision-making processes. In fact, I go further than this and argue that the increasingly dominant forms of external moral enhancement may already be posing a significant threat to political legitimacy, one that we should try to address. Consequently, research and development of internal moral enhancements should be prioritised as a political project.
How should we react to the development of sexbot technology? Taking their cue from anti-porn feminism, several academic critics lament the development of sexbot technology, arguing that it objectifies and subordinates women, is likely to promote misogynistic attitudes toward sex, and may need to be banned or restricted. In this chapter I argue for an alternative response. Taking my cue from the sex positive ‘feminist porn’ movement, I argue that the best response to the development of ‘bad’ sexbots is to make better ones. This will require changes to the content, process and context of sexbot development. Doing so will acknowledge the valuable role that technology can play in human sexuality, and allow us to challenge gendered norms and assumptions about male and female sexual desire. This will not be a panacea to the social problems that could arise from sexbot development, but it offers a more realistic and hopeful vision for the future of this technology in a pluralistic and progressive society.
According to the common view, conscientious objection is grounded in autonomy or in ‘freedom of conscience’ and is tolerated out of respect for the objector's autonomy. Emphasising freedom of conscience or autonomy as a central concept within the issue of conscientious objection implies that the conscientious objector should have an independent choice among alternative beliefs, positions or values. In this paper it is argued that: (a) it is not true that the typical conscientious objector has such a choice when they decide to act upon their conscience and (b) it is not true that the typical conscientious objector exercises autonomy when developing or acquiring their conscience. Therefore, with regard to tolerating conscientious objection, we should apply the concept of autonomy with caution, as tolerating conscientious objection does not reflect respect for the conscientious objector’s right to choose but rather acknowledges their lack of real ability to choose their conscience and to refrain from acting upon their conscience. This has both normative and analytical implications for the treatment of conscientious objectors.
There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology.
An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers’ position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
This chapter addresses the growing problem of unwanted sexual interactions in virtual environments. It reviews the available evidence regarding the prevalence and severity of this problem. It then argues that due to the potential harms of such interactions, as well as their nonconsensual nature, there is a good prima facie argument for viewing them as serious moral wrongs. Does this prima facie argument hold up to scrutiny? After considering three major objections – the ‘it’s not real’ objection; the ‘it’s just a game’ objection; and the ‘unrestricted consent’ objection – this chapter argues that it does. The chapter closes by reviewing some of the policy options available to us in addressing the problem of virtual sexual assault.
A common objection to moral enhancement is that it would undermine our moral freedom and that this is a bad thing because moral freedom is a great good. Michael Hauskeller has defended this view on a couple of occasions using an arresting thought experiment called the 'Little Alex' problem. In this paper, I reconstruct the argument Hauskeller derives from this thought experiment and subject it to critical scrutiny. I claim that the argument ultimately fails because (a) it assumes that moral freedom is an intrinsic good when, in fact, it is more likely to be an axiological catalyst; and (b) there are reasons to think that moral enhancement does not undermine moral freedom.
It is commonly assumed that a virtual life would be less meaningful (perhaps even meaningless). As virtual reality technologies develop and become more integrated into our everyday lives, this poses a challenge for those who care about meaning in life. In this chapter, it is argued that the common assumption about meaninglessness and virtuality is mistaken. After clarifying the distinction between two different visions of virtual reality, four arguments are presented for thinking that meaning is possible in virtual reality. Following this, four objections are discussed and rebutted. The chapter concludes that we can be cautiously optimistic about the possibility of meaning in virtual worlds.
This chapter provides a general overview and introduction to the law and ethics of virtual sexual assault. It offers a definition of the phenomenon and argues that there are six interesting types. It then asks and answers three questions: (i) should we criminalise virtual sexual assault? (ii) can you be held responsible for virtual sexual assault? and (iii) are there issues with 'consent' to virtual sexual activity that might make it difficult to prosecute or punish virtual sexual assault?
Skeptical theism (ST) may undercut the key inference in the evidential argument from evil, but it does so at a cost. If ST is true, then we lose our ability to assess the all things considered (ATC) value of natural events and states of affairs. And if we lose that ability, a whole slew of undesirable consequences follow. So goes a common consequential critique of ST. In a recent article, Anderson has argued that this consequential critique is flawed. Anderson claims that ST only has the consequence that we lack epistemic access to potentially God-justifying reasons for permitting a prima facie “bad” (or “evil”) event. But this is very different from lacking epistemic access to the ATC value of such events. God could have an (unknowable) reason for not intervening to prevent E and yet E could still be (knowably) ATC-bad. This article argues that, ingenious though it is, Anderson’s attempted defence of ST is flawed. This is for two reasons. First, and most importantly, the consequential critique does not rely on the questionable assumption he identifies. Indeed, the argument can be made quite easily by relying purely on Anderson’s distinction between God-justifying reasons for permitting E and the ATC value of E. And second, Anderson’s defence of his position, if correct, would serve to undermine the foundations of ST.
A basic income might be able to correct for the income-related losses of unemployment, but what about the meaning/purpose related losses? For better or worse, many people derive meaning and fulfillment from the jobs they do; if their jobs are taken away, they lose this source of meaning. If we are about to enter an era of rampant job loss as a result of advances in technology, is there a danger that it will also be an era of rampant meaninglessness? In this chapter, I offer counsel against any such despair. I argue that we should encourage the withdrawal from the world of work into a more personal world of games. We should do this because (a) work is structurally bad and getting worse as a result of technology; and (b) a more ludic, game-like life would help us to attain a valuable form of human flourishing. I offer three arguments in support of this view, and respond to critics who argue that withdrawing from the demands of work would result in a more selfish and impoverished form of existence.
Sex is an important part of human life. It is a source of pleasure and intimacy, and is integral to many people’s self-identity. This chapter examines the opportunities and challenges posed by the use of AI in how humans express and enact their sexualities. It does so by focusing on three main issues. First, it considers the idea of digisexuality, which according to McArthur and Twist (2017) is the label that should be applied to those ‘whose primary sexual identity comes through the use of technology’, particularly through the use of robotics and AI. While agreeing that this phenomenon is worthy of greater scrutiny, the chapter questions whether it is necessary or socially desirable to see this as a new form of sexual identity. Second, it looks at the role that AI can play in facilitating human-to-human sexual contact, focusing in particular on the use of self-tracking and predictive analytics in optimising sexual and intimate behaviour. There are already a number of apps and services that promise to use AI to do this, but they pose a range of ethical risks that need to be addressed at both an individual and societal level. Finally, it considers the idea that a sophisticated form of AI could be an object of love. Can we be truly intimate with something that has been ‘programmed’ to love us? Contrary to the widely held view, this chapter argues that this is indeed possible.
How might emerging and future technologies—sex robots, love drugs, anti-love drugs, or algorithms to track, quantify, and ‘gamify’ romantic relationships—change how we understand and value love? We canvass some of the main ethical worries posed by such technologies, while also considering whether there are reasons for “cautious optimism” about their implications for our lives. Along the way, we touch on some key ideas from the philosophies of love and technology.
This article argues that the creation of artificial offspring could make our lives more meaningful. By ‘artificial offspring’ I mean beings that we construct, with a mix of human and non-human-like qualities. Robotic artificial intelligences are paradigmatic examples of the form. There are two reasons for thinking that the creation of such beings could make our lives more meaningful and valuable. The first is that the existence of a collective afterlife—i.e. a set of human-like lives that continue after we die—is likely to be an important source and sustainer of meaning in our present lives. The second is that the creation of artificial offspring provides a plausible and potentially better pathway to a collective afterlife than the traditional biological pathway. Both of these arguments are defended from a variety of objections and misunderstandings.
Is sex work (specifically, prostitution) vulnerable to technological unemployment? Several authors have argued that it is. They claim that the advent of sophisticated sexual robots will lead to the displacement of human prostitutes, just as, say, the advent of sophisticated manufacturing robots has displaced many traditional forms of factory labour. But are they right? In this article, I critically assess the argument that has been made in favour of this displacement hypothesis. Although I grant the argument a degree of credibility, I argue that the opposing hypothesis -- that prostitution will be resilient to technological unemployment -- is also worth considering. Indeed, I argue that increasing levels of technological unemployment in other fields may well drive more people into the sex work industry. Furthermore, I argue that no matter which hypothesis you prefer -- displacement or resilience -- you can make a good argument for the necessity of a basic income guarantee, either as an obvious way to correct for the precarity of sex work, or as a way to disincentivise those who may be drawn to prostitution.
Klaming and Vedder (2010) have argued that enhancement technologies that improve the epistemic efficiency of the legal system (“epistemic enhancements”) would benefit the common good. But there are two flaws to Klaming and Vedder’s argument. First, they rely on an under-theorised and under-specified conception of the common good. When theory and specification are supplied, their common good justification (CGJ) for enhancing eyewitness memory and recall becomes significantly less persuasive. And second, although aware of such problems, they fail to give due weight and consideration to the tensions between the individual and common good. Taking these criticisms on board, this article proposes an alternative, and stronger, CGJ for epistemic enhancements. The argument has two prongs. Drawing from the literature on social epistemology and democratic legitimacy, it is first argued that there are strong grounds for thinking that epistemic enhancements are a desirable way to improve the democratic legitimacy of the legal system. This gives prima facie but not decisive weight to the CGJ. It is then argued that due to the ongoing desire to improve the way in which scientific evidence is managed by the legal system, epistemic enhancement is not merely desirable but perhaps morally necessary. Although this may seem to sustain tensions between individual and common interests, I argue that in reality it reveals a deep constitutive harmony between the individual good and the common good, one that is both significant in its own right and one that should be exploited by proponents of enhancement.
This paper tries to clarify, strengthen and respond to two prominent objections to the development and use of human enhancement technologies. Both objections express concerns about the link between enhancement and the drive for hyperagency. The first derives from the work of Sandel and Hauskeller—and is concerned with the negative impact of hyperagency on social solidarity. In responding to their objection, I argue that although social solidarity is valuable, there is a danger in overestimating its value and in neglecting some obvious ways in which the enhancement project can be planned so as to avoid its degradation. The second objection, though common to several writers, has been most directly asserted by Saskia Nagel, and is concerned with the impact of hyperagency on the burden and distribution of responsibility. Though this is an intriguing objection, I argue that not enough has been done to explain why this is morally problematic. I try to correct for this flaw before offering a variety of strategies for dealing with the problems raised.
Bayne and Nagasawa have argued that the properties traditionally attributed to God provide an insufficient grounding for the obligation to worship God. They do so partly because the same properties, when possessed in lesser quantities by human beings, do not give rise to similar obligations. In a recent paper, Jeremy Gwiazda challenges this line of argument. He does so because it neglects the possible existence of a threshold obligation to worship, i.e. an obligation that only kicks in when the value of a parameter has crossed a certain threshold. This article argues that there is a serious flaw in Gwiazda’s proposal. Although thresholds may play an important part in how we think about our obligations, their function is distinct from that envisaged by Gwiazda. To be precise, this article argues that thresholds are only relevant to obligations to the extent that they transform a pre-existing imperfect obligation or act of supererogation into a perfect obligation. Since it is not clear that there is an imperfect obligation to worship any being, and indeed since on a certain conception of moral agency it is highly unlikely that there could be, the search for a rational basis for the obligation to worship must continue.
Matthew Kramer has recently defended a novel justification for the death penalty, something he calls the purgative rationale. According to this rationale, the death penalty can be justifiably implemented if it is necessary in order to purge defilingly evil offenders from a moral community. Kramer claims that this rationale overcomes the problems associated with traditional rationales for the death penalty. Although Kramer is to be commended for carving out a novel niche in a well-worn dialectical space, I argue that his rationale falls somewhat short of the mark. By his own lights, a successful justification of the death penalty must show that death is the minimally invasive, most humane means to some legitimate moral end. But even if we grant that his rationale picks out a legitimate moral end, there are at least three alternatives to death, either ignored or not fully considered by Kramer, which would seem to satisfy that end in a less invasive, more humane manner.
Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’ and the Legal Development, Displacement or Destruction that can ensue. The article proposes that this model of legal disruption can be broadly generalisable to understanding the legal effects and challenges of other emerging technologies.
Can human life have value in a world in which humans are rendered obsolete by technological advances? This article answers this question by developing an extended analysis of the axiological impact of human obsolescence. In doing so, it makes four main arguments. First, it argues that human obsolescence is a complex phenomenon that can take on at least four distinct forms. Second, it argues that one of these forms of obsolescence (‘actual-general’ obsolescence) is not a coherent concept and hence not a plausible threat to human well-being. Third, it argues that existing fears of technologically-induced human obsolescence are less compelling than they first appear. Fourth, it argues that there are two reasons for embracing a world of widespread, technologically-induced human obsolescence.
We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory systems respond? This chapter defends three claims by way of response. First, it argues that autonomy is indeed under threat in some new and interesting ways. Second, it evaluates and disputes the claim that we shouldn’t overestimate these new threats because the technology is just an old wolf in a new sheep’s clothing. Third, and finally, it looks at responses to these threats at both the individual and societal level and argues that although we shouldn’t encourage an attitude of ‘helplessness’ among the users of algorithmic tools there is an important role for legal and regulatory responses to these threats that go beyond what is currently on offer.
This is a response to Earp and colleagues' target article "If I could just stop loving you: Anti-love biotechnology and the ethics of a chemical break-up". I argue that the authors may indulge in the vice of in-principlism when presenting their ethical framework for dealing with anti-love biotechnology, and that they misapply the concept of harm.
Theistic metaethics usually places one key restriction on the explanation of moral facts, namely: every moral fact must ultimately be explained by some fact about God. But the widely held belief that moral truths are necessary truths seems to undermine this claim. If a moral truth is necessary, then it seems like it neither needs nor has an explanation. Or so the objection typically goes. Recently, two proponents of theistic metaethics — William Lane Craig and Mark Murphy — have argued that this objection is flawed. They claim that even if a truth is necessary, it does not follow that it neither needs nor has an explanation. In this article, I challenge Craig and Murphy’s reasoning on three main grounds. First, I argue that the counterexamples they use to undermine the necessary truth objection to theistic metaethics are flawed. While they may provide some support for the notion that necessary truths can be explained, they do not provide support for the notion that necessary moral truths can be explained. Second, I argue that the principles of explanation that Murphy and Craig use to support theistic metaethics are either question-begging (in the case of Murphy) or improperly motivated (in the case of Craig). And third, I provide a general defence of the claim that necessary moral truths neither need nor have an explanation.