Results for 'responsible robotics'

1000+ results found
  1. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence. Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart (...)
    5 citations
  2. Robots, jobs, taxes, and responsibilities. Luciano Floridi - 2017 - Philosophy and Technology 30 (1):1-4.
    Robots—in the form of apps, webbots, algorithms, house appliances, personal assistants, smart watches, and other systems—proliferate in the digital world, and increasingly perform a number of tasks more speedily and efficiently than humans can. This paper explores how in the future robots can be regulated when working alongside humans, focusing on issues such as robot taxation and legal liability.
    4 citations
  3. Robots, Autonomy, and Responsibility. Raul Hakli & Pekka Mäkelä - 2016 - In Johanna Seibt, Marco Nørskov & Søren Schack Andersen (eds.), What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016. IOS Press. pp. 145-154.
    We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their (...)
    2 citations
  4. Punishing Robots – Way Out of Sparrow’s Responsibility Attribution Problem. Maciek Zając - 2020 - Journal of Military Ethics 19 (4):285-291.
    The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon...
    2 citations
  5. The Mandatory Ontology of Robot Responsibility. Marc Champagne - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):448–454.
    Do we suddenly become justified in treating robots like humans by positing new notions like “artificial moral agency” and “artificial moral responsibility”? I answer no. Or, to be more precise, I argue that such notions may become philosophically acceptable only after crucial metaphysical issues have been addressed. My main claim, in sum, is that “artificial moral responsibility” betokens moral responsibility to the same degree that a “fake orgasm” betokens an orgasm.
    1 citation
  6. Lethal Military Robots: Who is Responsible When Things Go Wrong? Peter Olsthoorn - 2018 - In Rocci Luppicini (ed.), The Changing Scope of Technoethics in Contemporary Society. Hershey, PA, USA, pp. 106-123.
    Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is especially the proliferation of armed military robots that raises some serious ethical questions. One of the most pressing concerns the moral responsibility in case a military robot uses violence in a way that would normally qualify as a war crime. In this chapter, the authors critically assess the chain of responsibility with respect to the deployment of both semi-autonomous and (learning) autonomous lethal (...)
    1 citation
  7. Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
    58 citations
  8. Authenticity and co-design: On responsibly creating relational robots for children. Milo Phillips-Brown, Marion Boulicault, Jacqueline Kory-Westland, Stephanie Nguyen & Cynthia Breazeal - 2023 - In Mizuko Ito, Remy Cross, Karthik Dinakar & Candice Odgers (eds.), Algorithmic Rights and Protections for Children. MIT Press. pp. 85-121.
    Meet Tega. Blue, fluffy, and AI-enabled, Tega is a relational robot: a robot designed to form relationships with humans. Created to aid in early childhood education, Tega talks with children, plays educational games with them, solves puzzles, and helps in creative activities like making up stories and drawing. Children are drawn to Tega, describing him as a friend, and attributing thoughts and feelings to him ("he's kind," "if you just left him here and nobody came to play with him, he (...)
  9. The rise of the robots and the crisis of moral patiency. John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral (...)
    28 citations
  10. The Robotic Touch: Why there is no good reason to prefer human nurses to carebots. Karen Lancaster - 2019 - Philosophy in the Contemporary World 25 (2):88-109.
    An elderly patient in a care home only wants human nurses to provide her care – not robots. If she selected her carers based on skin colour, it would be seen as racist and morally objectionable, but is choosing a human nurse instead of a robot also morally objectionable and speciesist? A plausible response is that it is not, because humans provide a better standard of care than robots do, making such a choice justifiable. In this paper, I show why (...)
    1 citation
  11. Autonomous killer robots are probably good news. Vincent C. Müller - 2016 - In Ezio Di Nucci & Filippo Santoni de Sio (eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. London: Ashgate. pp. 67-81.
    Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away (...)
    16 citations
  12. Just war and robots’ killings. Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-322.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
    26 citations
  13. When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will. Eddy Nahmias, Corey Allen & Bradley Loveall - 2019 - In Bernard Feltz, Marcus Missal & Andrew Cameron Sims (eds.), Free Will, Causality, and Neuroscience. Leiden: Brill.
    While philosophers and scientists sometimes suggest (or take for granted) that consciousness is an essential condition for free will and moral responsibility, there is surprisingly little discussion of why consciousness (and what sorts of conscious experience) is important. We discuss some of the proposals that have been offered. We then discuss our studies using descriptions of humanoid robots to explore people’s attributions of free will and responsibility, of various kinds of conscious sensations and emotions, and of reasoning capacities, and examine (...)
    3 citations
  14. Responsible research for the construction of maximally humanlike automata: the paradox of unattainable informed consent. Lantz Fleming Miller - 2020 - Ethics and Information Technology 22 (4):297-305.
    Since the Nuremberg Code and the first Declaration of Helsinki, globally there has been increasing adoption and adherence to procedures for ensuring that human subjects in research are as well informed as possible of the study’s reasons and risks and voluntarily consent to serving as subject. To do otherwise is essentially viewed as violation of the human research subject’s legal and moral rights. However, with the recent philosophical concerns about responsible robotics, the limits and ambiguities of research-subjects ethical (...)
    2 citations
  15. Can Humanoid Robots be Moral? Sanjit Chakraborty - 2018 - Ethics in Science and Environmental Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity interplay (...)
    1 citation
  18. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
    5 citations
  19. Designing Virtuous Sex Robots. Anco Peeters & Pim Haselager - 2019 - International Journal of Social Robotics:1-12.
    We propose that virtue ethics can be used to address ethical issues central to discussions about sex robots. In particular, we argue virtue ethics is well equipped to focus on the implications of sex robots for human moral character. Our evaluation develops in four steps. First, we present virtue ethics as a suitable framework for the evaluation of human–robot relationships. Second, we show the advantages of our virtue ethical account of sex robots by comparing it to current instrumentalist approaches, showing (...)
    10 citations
  20. Can a robot lie? Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
    5 citations
  21. Building better Sex Robots: Lessons from Feminist Pornography. John Danaher - 2019 - In Yuefang Zhou & Martin H. Fischer (eds.), Ai Love You: Developments in Human-Robot Intimate Relationships. Springer Verlag.
    How should we react to the development of sexbot technology? Taking their cue from anti-porn feminism, several academic critics lament the development of sexbot technology, arguing that it objectifies and subordinates women, is likely to promote misogynistic attitudes toward sex, and may need to be banned or restricted. In this chapter I argue for an alternative response. Taking my cue from the sex positive ‘feminist porn’ movement, I argue that the best response to the development of ‘bad’ sexbots is to (...)
    3 citations
  22. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that, it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
    71 citations
  23. Sociable Robots for Later Life: Carebots, Friendbots and Sexbots. Nancy S. Jecker - 2021 - In Ruiping Fan & Mark J. Cherry (eds.), Sex Robots: Social Impact and the Future of Human Relations. Springer. pp. 25-40.
    This chapter discusses three types of sociable robots for older adults: robotic caregivers; robotic friends; and sex robots. The central argument holds that society ought to make reasonable efforts to provide these types of robots and that under certain conditions, omitting such support not only harms older adults but poses threats to their dignity. The argument proceeds stepwise. First, the chapter establishes that assisting care-dependent older adults to perform activities of daily living is integral to respecting dignity. Here, (...)
  24. Reasons to Punish Autonomous Robots. Zac Cogley - 2023 - The Gradient 14.
    I here consider the reasonableness of punishing future autonomous military robots. I argue that it is an engineering desideratum that these devices be responsive to moral considerations as well as human criticism and blame. Additionally, I argue that someday it will be possible to build such machines. I use these claims to respond to the no subject of punishment objection to deploying autonomous military robots, the worry being that an “accountability gap” could result if the robot committed a war crime. (...)
  25. Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable (...)
  26. Robots and us: towards an economics of the ‘Good Life’. C. W. M. Naastepad & Jesse M. Mulder - 2018 - Review of Social Economy:1-33.
    (Expected) adverse effects of the ‘ICT Revolution’ on work and opportunities for individuals to use and develop their capacities give a new impetus to the debate on the societal implications of technology and raise questions regarding the ‘responsibility’ of research and innovation (RRI) and the possibility of achieving ‘inclusive and sustainable society’. However, missing in this debate is an examination of a possible conflict between the quest for ‘inclusive and sustainable society’ and conventional economic principles guiding capital allocation (including the (...)
    1 citation
  27. Does kindness towards robots lead to virtue? A reply to Sparrow’s asymmetry argument. Mark Coeckelbergh - 2021 - Ethics and Information Technology 1 (Online first):649-656.
    Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral (...)
  28. A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  29. Singularity Humanities – Singularity robot is a member of human community. Daihyun Chung - 2017 - Cheolhak-Korean Journal of Philosophy 131:189-216.
    Suppose that the Big Bang was the first singularity in the history of the cosmos. Then it would be plausible to presume that the availability of strong general intelligence should mark the second singularity for the natural human race. The human race needs to be prepared to make sure that if a singularity robot becomes a person, the robotic person will be a blessing for humankind rather than a curse. Toward this direction I would scrutinize the (...)
  30. Risks and Robots – some ethical issues. Peter Olsthoorn & Lambèr Royakkers - 2011 - Archive International Society for Military Ethics, 2011.
    While in many countries the use of unmanned systems is still in its infancy, other countries, most notably the US and Israel, are much ahead. Most of the systems in operation today are unarmed and are mainly used for reconnaissance and clearing improvised explosive devices. But over the last years the deployment of armed military robots is also on the increase, especially in the air. This might make unethical behavior less likely to happen, seeing that unmanned systems are immune to (...)
  31. Réguler les robots-tueurs, plutôt que les interdire. Vincent C. Müller & Thomas W. Simpson - 2015 - Multitudes 58 (1):77.
    This is the short version, in French translation by Anne Querrien, of the originally jointly authored paper: Müller, Vincent C., ‘Autonomous killer robots are probably good news’, in Ezio Di Nucci and Filippo Santoni de Sio, Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. - - - The following article presents a new robot-based weapons system that may soon be deployed. Unlike drones, which are operated remotely (...)
  32. Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently (...)
    28 citations
  33. Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and (...)
    13 citations
  34. Development of reaching to the body in early infancy: From experiments to robotic models. Matej Hoffmann, Lisa K. Chinn, Eszter Somogyi, Tobias Heed, Jacqueline Fagard, Jeffrey J. Lockman & Kevin J. O'Regan - 2017 - In 2017 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE. pp. 112-119.
    We have been observing how infants between 3 and 21 months react when a vibrotactile stimulation (a buzzer) is applied to different parts of their bodies. Responses included in particular movement of the stimulated body part and successful reaching for and removal of the buzzer. Overall, there is a pronounced developmental progression from general to specific movement patterns, especially in the first year. In this article we review the series of studies we conducted and then focus on possible mechanisms that (...)
  35. There is no 'I' in 'Robot': Robots and Utilitarianism (expanded & revised). Christopher Grau - 2011 - In Susan Anderson & Michael Anderson (eds.), Machine Ethics. Cambridge University Press. pp. 451.
    Utilizing the film I, Robot as a springboard, I here consider the feasibility of robot utilitarians, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction. (This is a revised and expanded version of an essay that originally appeared in IEEE: Intelligent Systems.).
    8 citations
  36. Playing the Blame Game with Robots. Markus Kneer & Michael T. Stuart - 2021 - In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21 Companion). New York, NY, USA.
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral- psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)
    7 citations
  37. Kantian Ethics in the Age of Artificial Intelligence and Robotics. Ozlem Ulgen - 2017 - Questions of International Law 1 (43):59-83.
    Artificial intelligence and robotics are pervasive in daily life and set to expand to new levels, potentially replacing human decision-making and action. Self-driving cars, home and healthcare robots, and autonomous weapons are some examples. A distinction appears to be emerging between potentially benevolent civilian uses of the technology (eg unmanned aerial vehicles delivering medicines), and potentially malevolent military uses (eg lethal autonomous weapons killing human combatants). Machine-mediated human interaction challenges the philosophical basis of human existence and ethical conduct. (...)
    4 citations
  38. ‘How could you even ask that?’ Moral considerability, uncertainty and vulnerability in social robotics. Alexis Elder - 2020 - Journal of Sociotechnical Critique 1 (1):1-23.
    When it comes to social robotics (robots that engage human social responses via “eyes” and other facial features, voice-based natural-language interactions, and even evocative movements), ethicists, particularly in European and North American traditions, are divided over whether and why they might be morally considerable. Some argue that moral considerability is based on internal psychological states like consciousness and sentience, and debate about thresholds of such features sufficient for ethical consideration, a move sometimes criticized for being overly dualistic in its (...)
  39. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness. Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany, pp. 15-25.
    This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they lack a capacity for agential self-awareness, e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a (...)
    1 citation
  40. First Steps Towards an Ethics of Robots and Artificial Intelligence. John Tasioulas - 2019 - Journal of Practical Ethics 7 (1):61-95.
    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to (...)
    9 citations
  41. Can I Feel Your Pain? The Biological and Socio-Cognitive Factors Shaping People’s Empathy with Social Robots. Joanna Karolina Malinowska - 2022 - International Journal of Social Robotics 14 (2):341–355.
    This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people’s reactions to robots. I present arguments in favour of the position that people actually do empathise with robots. I also consider what circumstances shape human empathy with these entities. I propose that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one (...)
  42. Position and Speed Control of 2 DOF Industrial Robotic Arm using Robust Controllers. Mustefa Jibril, Messay Tadese & Reta Degefa - 2020 - Scienceopen Journal 2020 (10):8.
    In this paper, a 2 DOF industrial robotic arm is designed and simulated for elbow and wrist angle and velocity performance improvement using a robust control method. Mixed H2/H∞ synthesis with regional pole placement and H2-optimal controllers are used to improve the system output. The open-loop response of the robot arm shows that the elbow and wrist angles and velocities need some improvement. Comparisons of the proposed controllers for impulse and step input signals have (...)
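The abstract above mentions regional pole placement for the arm's joint dynamics. As a hedged illustration only (not the paper's actual mixed H2/H∞ design, whose plant model is not given here), a minimal pole-placement sketch for a hypothetical single-joint model using SciPy:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical one-joint model: state x = [angle, angular velocity].
# The friction coefficient 0.5 is an illustrative assumption, not from the paper.
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

# "Regional pole placement": pick closed-loop poles inside a left-half-plane
# region to enforce a minimum decay rate (locations chosen for illustration).
desired = np.array([-4.0, -5.0])
K = place_poles(A, B, desired).gain_matrix

# Closed-loop matrix A - B K should have exactly the desired eigenvalues.
closed_loop = A - B @ K
print(np.sort(np.linalg.eigvals(closed_loop).real))  # ≈ [-5, -4]
```

State feedback u = -Kx then drives both the angle and velocity responses at the prescribed decay rate; a full mixed H2/H∞ synthesis would additionally weight disturbance and noise channels.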
  43. Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication.Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications 2023.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)
  44. Remaking responsibility: complexity and scattered causes in human agency.Joshua Fost & Coventry Angela - 2013 - In Tangjia Wang (ed.), Proceedings of the 1st International Conference of Philosophy: Yesterday, Today & Tomorrow. Global Science and Technology Forum. pp. 91-101.
    Contrary to intuitions that human beings are free to think and act with “buck-stopping” freedom, philosophers since Holbach and Hume have argued that universal causation makes free will nonsensical. Contemporary neuroscience has strengthened their case and begun to reveal subtle and counterintuitive mechanisms in the processes of conscious agency. Although some fear that determinism undermines moral responsibility, the opposite is true: free will, if it existed, would undermine coherent systems of justice. Moreover, deterministic views of human choice clarify the conditions (...)
  45. Tragic Choices and the Virtue of Techno-Responsibility Gaps.John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  46. Position and Speed Control of 2DOF Industrial Robotic Arm using Robust Controllers.Mustefa Jibril, Mesay Tadesse & Reta Degefa - 2020 - Journal of Engineering and Applied Sciences 15 (24):3765-3769.
    In this study, a 2 DOF industrial robotic arm is designed and simulated for elbow and wrist angle and velocity performance improvement using a robust control method. Mixed H2/H∞ synthesis with regional pole placement and H2-optimal controllers are used to improve the system output. The open loop response of the robot arm shows that the elbow and wrist angles and velocities need some improvement. Comparison of the proposed controllers for impulse and step input signals has been done, and (...)
  47. Introduction to the International Handbook on Responsible Innovation.Rene Von Schomberg - 2019 - In René von Schomberg & Jonathan Hankins (eds.), International Handbook on Responsible Innovation. A global resource. Cheltenham, Royaume-Uni: Edward Elgar Publishing. pp. 1-11.
    The Handbook constitutes a global resource for the fast-growing interdisciplinary research and policy communities addressing the challenge of driving innovation towards socially desirable outcomes. This book brings together well-known authors from the US, Europe, Asia and South-Africa who develop conceptual, ethical and regional perspectives on responsible innovation as well as exploring the prospects for further implementation of responsible innovation in emerging technological practices ranging from agriculture and medicine, to nanotechnology and robotics. The emphasis is on the (...)
  48. Design to Implementation of A Line Follower Robot Using 5 Sensors.Anupoju Sai Vamsi, Badana Manasa, Kocherla Rama Krishna, Tarigoppula Venu & A. N. V. N. Shashank - 2019 - International Journal of Engineering and Information Systems (IJEAIS) 3 (1):42-47.
    The main objective of designing a line follower robot is to carry products through the manufacturing process in industries. In this paper, we focus on a design that makes the line follower work efficiently at a lighter weight. The robot is designed with 5 sensors so that it can move along even complex paths. This paper discusses the mechanical and technical issues of the line follower robot and its applications in various fields. In the working model, we used black (...)
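A 5-sensor array like the one in the entry above is typically reduced to a single steering command via a weighted line-position estimate and a proportional correction. A minimal sketch under assumed sensor weights and gain (the paper's actual circuit and firmware are not reproduced here):

```python
# Hypothetical 5-sensor readout: each sensor reads 1 over the black line,
# 0 otherwise. Weights span -2..+2, left to right (an assumption for illustration).
WEIGHTS = [-2, -1, 0, 1, 2]

def line_error(readings):
    """Weighted centroid of active sensors; 0.0 means centred on the line."""
    active = sum(readings)
    if active == 0:
        return None  # line lost
    return sum(w * r for w, r in zip(WEIGHTS, readings)) / active

def steer(readings, kp=0.5):
    """Proportional steering correction; sign convention (positive = turn right) is assumed."""
    err = line_error(readings)
    return 0.0 if err is None else kp * err

print(steer([0, 0, 1, 0, 0]))  # centred -> 0.0
print(steer([0, 0, 0, 1, 1]))  # line drifting right -> 0.75
```

With five sensors rather than two or three, the centroid takes fractional values, which is what lets such a robot track the "even complex paths" the abstract mentions without oscillating.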
  49. Toward Modeling and Automating Ethical Decision Making: Design, Implementation, Limitations, and Responsibilities.Gregory S. Reed & Nicholaos Jones - 2013 - Topoi 32 (2):237-250.
    One recent priority of the U.S. government is developing autonomous robotic systems. The U.S. Army has funded research to design a metric of evil to support military commanders with ethical decision-making and, in the future, allow robotic military systems to make autonomous ethical judgments. We use this particular project as a case study for efforts that seek to frame morality in quantitative terms. We report preliminary results from this research, describing the assumptions and limitations of a program that assesses the (...)
  50. More than Skin Deep: a Response to “The Whiteness of AI”.Shelley Park - 2021 - Philosophy and Technology 34 (4):1961-1966.
    This commentary responds to Stephen Cave and Kanta Dihal’s call for further investigations of the whiteness of AI. My response focuses on three overlapping projects needed to more fully understand racial bias in the construction of AI and its representations in pop culture: unpacking the intersections of gender and other variables with whiteness in AI’s construction, marketing, and intended functions; observing the many different ways in which whiteness is scripted, and noting how white racial framing exceeds white casting and thus (...)