  • Value Theory and the Best Interests Standard.David DeGrazia - 1995 - Bioethics 9 (1):50-61.
    The idea of a patient's best interests raises issues in prudential value theory–the study of what makes up an individual's ultimate (nonmoral) good or well‐being. While this connection may strike a philosopher as obvious, the literature on the best interests standard reveals almost no engagement of recent work in value theory. There seems to be a growing sentiment among bioethicists that their work is independent of philosophical theorizing. Is this sentiment wrong in the present case? Does value theory make a (...)
  • Robots, Law and the Retribution Gap.John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • Of, for, and by the people: the legal lacuna of synthetic persons.Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant - 2017 - Artificial Intelligence and Law 25 (3):273-291.
    Conferring legal personhood on purely synthetic entities is a very real legal possibility, one under consideration presently by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We (...)
  • Machines as Moral Patients We Shouldn’t Care About: The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  • Restitution: A new paradigm of criminal justice.Randy E. Barnett - 1977 - Ethics 87 (4):279-301.
  • The child's right to an open future.Joel Feinberg - 2006 - In Randall Curren (ed.), Philosophy of Education: An Anthology. Malden, MA: Wiley-Blackwell.
  • Robots should be slaves.Joanna J. Bryson - 2010 - In Yorick Wilks (ed.), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues. John Benjamins Publishing. pp. 63-74.
  • Humans and Robots: Ethics, Agency, and Anthropomorphism.Sven Nyholm - 2020 - Rowman & Littlefield International.
    This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
  • Fully Autonomous AI.Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Killer robots.Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Autonomous Weapons and Distributed Responsibility.Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • A Defense of the Rights of Artificial Intelligences.Eric Schwitzgebel & Mara Garza - 2015 - Midwest Studies in Philosophy 39 (1):98-119.
    There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to (...)
  • Robotrust and Legal Responsibility.Ugo Pagallo - 2010 - Knowledge, Technology & Policy 23 (3):367-379.
    The paper examines some aspects of today’s debate on trust and e-trust and, more specifically, issues of legal responsibility for the production and use of robots. Their impact on human-to-human interaction has produced new problems both in the fields of contractual and extra-contractual liability in that robots negotiate, enter into contracts, establish rights and obligations between humans, while reshaping matters of responsibility and risk in trust relations. Whether or not robotrust concerns human-to-robot or even robot-to-robot relations, there is a new (...)
  • An AGI Modifying Its Utility Function in Violation of the Strong Orthogonality Thesis.James D. Miller, Roman Yampolskiy & Olle Häggström - 2020 - Philosophies 5 (4):40.
    An artificial general intelligence (AGI) might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist and engage in blackmail. Such an AGI would necessarily have a utility function that was at least partially observable and that was influenced by how other agents chose to interact with it. This instrumental drive would conflict with the strong orthogonality thesis since the modifications would be influenced by the AGI’s intelligence. AGIs in highly (...)
  • Using the best interests standard to decide whether to test children for untreatable, late-onset genetic diseases.Loretta M. Kopelman - 2007 - Journal of Medicine and Philosophy 32 (4):375-394.
    A new analysis of the Best Interests Standard is given and applied to the controversy about testing children for untreatable, severe late-onset genetic diseases, such as Huntington's disease or Alzheimer's disease. A professional consensus recommends against such predictive testing, because it is not in children's best interest. Critics disagree. The Best Interests Standard can be a powerful way to resolve such disputes. This paper begins by analyzing its meaning into three necessary and jointly sufficient conditions showing it: is an "umbrella" (...)
  • Un-making artificial moral agents.Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that (...)
  • ‘A Brute to the Brutes?’: Descartes' Treatment of Animals: Discussion.John Cottingham - 1978 - Philosophy 53 (206):551-559.
    To be able to believe that a dog with a broken paw is not really in pain when it whimpers is a quite extraordinary achievement even for a philosopher. Yet according to the standard interpretation, this is just what Descartes did believe. He held, we are informed, the ‘monstrous’ thesis that ‘animals are without feeling or awareness of any kind’. The standard view has been reiterated in a recent collection on animal rights, which casts Descartes as the villain of the (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • The ethics of designing artificial agents.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
  • On Liberty, Utilitarianism, and Other Essays.John Stuart Mill - 2015 - Oxford University Press UK.
    'It is only the cultivation of individuality which produces, or can produce, well developed human beings.' Mill's four essays, 'On Liberty', 'Utilitarianism', 'Considerations on Representative Government', and 'The Subjection of Women' examine the most central issues that face liberal democratic regimes - whether in the nineteenth century or the twenty-first. They have formed the basis for many of the political institutions of the West since the late nineteenth century, tackling as they do the appropriate grounds for protecting individual liberty, the basic (...)
  • The Machinery of Freedom.David Friedman - unknown
    Capitalism is the best. It's free enterprise. Barter. Gimbels, if I get really rank with the clerk, 'Well I don't like this', how I can resolve it? If it really gets ridiculous, I go, 'Frig it, man, I walk.' What can this guy do at Gimbels, even if he was the president of Gimbels? He can always reject me from that store, but I can always go to Macy's. He can't really hurt me. Communism is like one big phone company. (...)
  • Level-headed mysterianism and artificial experience.Jesse J. Prinz - 2003 - Journal of Consciousness Studies 10 (4-5):111-132.
    Many materialists believe that we should, in principle, be able to build a conscious computing machine. Others disagree. I favour a sceptical position, but of another variety. The problem isn't that it would be impossible to create a conscious computer. The problem is that we cannot know whether it is possible. There are principled reasons for thinking that we wouldn't ever be able to confirm that allegedly conscious computers were conscious. The proper stance on computational consciousness is agnosticism. Despite this (...)
  • Value pluralism.Elinor Mason - 2008 - Stanford Encyclopedia of Philosophy.
    Overview of the main issues about value pluralism.
  • Designing People to Serve.Steve Petersen - 2011 - In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics. MIT Press.
    I argue that, contrary to intuition, it would be both possible and permissible to design people - whether artificial or organic - who by their nature desire to do tasks we find unpleasant.
  • The Extended Corporate Mind: When Corporations Use AI to Break the Law.Mihailis E. Diamantis - 2020 - North Carolina Law Review 98:893-932.
    Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. (...)
  • Is it good for them too? Ethical concern for the sexbots.Steve Petersen - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. Cambridge, USA: MIT Press. pp. 155-171.
    In this chapter I'd like to focus on a small corner of sexbot ethics that is rarely considered elsewhere: the question of whether and when being a sexbot might be good---or bad---*for the sexbot*. You might think this means you are in for a dry sermon about the evils of robot slavery. If so, you'd be wrong; the ethics of robot servitude are far more complicated than that. In fact, if the arguments here are right, designing a robot to serve (...)