Citations

  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Research Methods in Education. L. Cohen, L. Manion & K. Morrison - 2000 - British Journal of Educational Studies 48 (4):446-446.
  • Why robot nannies probably won't do much psychological damage. Joanna J. Bryson - 2010 - Interaction Studies 11 (2):196-200.
  • From the ethics of technology towards an ethics of knowledge policy: implications for robotics. René von Schomberg - 2008 - AI and Society 22 (3):331-348.
    My analysis takes as its point of departure the controversial assumption that contemporary ethical theories cannot capture adequately the ethical and social challenges of scientific and technological development. This assumption is rooted in the argument that classical ethical theory invariably addresses the issue of ethical responsibility in terms of whether and how intentional actions of individuals can be justified. Scientific and technological developments, however, have produced unintentional consequences and side-consequences. These consequences very often result from collective decisions concerning the way (...)
  • Robots in aged care: a dystopian future. Robert Sparrow - 2016 - AI and Society 31 (4):1-10.
    In this paper I describe a future in which persons in advanced old age are cared for entirely by robots and suggest that this would be a dystopia, which we would be well advised to avoid if we can. Paying attention to the objective elements of welfare rather than to people’s happiness reveals the central importance of respect and recognition, which robots cannot provide, to the practice of aged care. A realistic appreciation of the current economics of the aged care (...)
  • Should we welcome robot teachers? Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
    Current uses of robots in classrooms are reviewed and used to characterise four scenarios: Robot as Classroom Teacher; Robot as Companion and Peer; Robot as Care-eliciting Companion; and Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children's privacy, especially when they masquerade as (...)
  • Why do children abuse robots? Tatsuya Nomura, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro & Sachie Yamada - 2016 - Interaction Studies 17 (3):347-369.
    We found that children sometimes abused a social robot placed in a shopping mall hallway. They verbally abused the robot, repeatedly obstructed its path, and sometimes even kicked and punched the robot. To investigate the reasons for the abuse, we conducted a field study in which we interviewed visiting children who exhibited serious abusive behaviors, including physical contact. We analyzed interview contents to determine whether the children perceived the robot as human-like, why they abused it, and whether they thought that (...)
  • What is a Human? Toward psychological benchmarks in the field of human–robot interaction. Peter H. Kahn, Hiroshi Ishiguro, Batya Friedman, Takayuki Kanda, Nathan G. Freier, Rachel L. Severson & Jessica Miller - 2007 - Interaction Studies 8 (3):363-390.
    In this paper, we move toward offering psychological benchmarks to measure success in building increasingly humanlike robots. By psychological benchmarks we mean categories of interaction that capture conceptually fundamental aspects of human life, specified abstractly enough to resist their identity as a mere psychological instrument, but capable of being translated into testable empirical propositions. Nine possible benchmarks are considered: autonomy, imitation, intrinsic moral value, moral accountability, privacy, reciprocity, conventionality, creativity, and authenticity of relation. Finally, we discuss how getting the right (...)
  • Socio-ethics of interaction with intelligent interactive technologies. Satinder P. Gill - 2008 - AI and Society 22 (3):283-300.
    Socio-ethics covers the relation of the individual with the group and with society, as the individual acquires the skills for social life with others and the conduct of ‘normal responsible behaviour’ (Leal in AI Soc 9:29–32, 1995) that guides moral action. For a consideration of what it means to be socially skilled in everyday human interaction and the ethical issues arising from the new conditions of interaction that come with the integration of intelligent interactive artefacts, we will provide an analysis (...)
  • On seeing human: A three-factor theory of anthropomorphism. Nicholas Epley, Adam Waytz & John T. Cacioppo - 2007 - Psychological Review 114 (4):864-886.
  • "Discipline and Punish.Michel Foucault - 1975 - Vintage Books.
  • Learning robots and human responsibility. Dante Marino & Guglielmo Tamburrini - 2006 - International Review of Information Ethics 6:46-51.
    Epistemic limitations concerning prediction and explanation of the behaviour of robots that learn from experience are selectively examined by reference to machine learning methods and computational theories of supervised inductive learning. Moral responsibility and liability ascription problems concerning damages caused by learning robot actions are discussed in the light of these epistemic limitations. In shaping responsibility ascription policies one has to take into account the fact that robots and softbots - by combining learning with autonomy, pro-activity, reasoning, and planning - (...)