70 found
1 — 50 / 70
  1. added 2020-07-01
    Coupling Levels of Abstraction in Understanding Meaningful Human Control of Autonomous Weapons: A Two-Tiered Approach.Steven Umbrello - manuscript
    The international debate on the ethics and legality of autonomous weapon systems (AWS), as well as the call for a ban, is primarily focused on the nebulous concept of fully autonomous AWS: more specifically, on AWS that are capable of target selection and engagement without human supervision or control. This paper argues that such a conception of autonomy is divorced both from military planning and decision-making operations and from the design requirements that govern AWS engineering and subsequently the tracking (...)
  2. added 2020-06-17
    Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
  3. added 2020-05-01
    Incorporating Ethics Into Artificial Intelligence.Amitai Etzioni & Oren Etzioni - 2017 - The Journal of Ethics 21 (4):403-418.
    This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines (...)
    4 citations
  4. added 2020-04-24
    Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  5. added 2020-04-06
    Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability.Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the article tackles the (...)
  6. added 2020-03-17
    Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches to (...)
  7. added 2020-03-07
    Digital Well-Being and Manipulation Online.Michael Klenk - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. Springer.
    Social media use is soaring globally. Existing research on its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not yet been sufficiently addressed. This paper aims to close that gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s (...)
  8. added 2020-01-22
    Robots Like Me: Challenges and Ethical Issues in Aged Care.Ipke Wachsmuth - 2018 - Frontiers in Psychology 9 (432).
    This paper addresses the issue of whether robots could substitute for human care, given the challenges in aged care induced by the demographic change. The use of robots to provide emotional care has raised ethical concerns, e.g., that people may be deceived and deprived of dignity. In this paper it is argued that these concerns might be mitigated and that it may be sufficient for robots to take part in caring when they behave *as if* they care.
  9. added 2019-10-25
    Robot Betrayal: A Guide to the Ethics of Robotic Deception.John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in (...)
  10. added 2019-08-10
    Distributive Justice as an Ethical Principle for Autonomous Vehicle Behavior Beyond Hazard Scenarios.Manuel Dietrich & Thomas H. Weisswange - 2019 - Ethics and Information Technology 21 (3):227-239.
    Through modern driver assistant systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This will become even more prominent in the near future considering the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged. However, scope, principles and strategies for their implementations are not yet clear. Most of the current discussions concentrate on situations of unavoidable crashes in which the life of human (...)
    2 citations
  11. added 2019-08-09
    When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  12. added 2019-04-25
    Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is growing quickly. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but these theories contradict one another. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. Twenty-one authors were surveyed, and some of them hold several theories. A (...)
  13. added 2019-03-18
    Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  14. added 2019-03-14
    Osaammeko rakentaa moraalisia toimijoita? [Can We Build Moral Agents?].Antti Kauppinen - forthcoming - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta [Artificial Intelligence, Humans, and Society].
    To be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some extent, in accordance with them. If we are full-fledged moral agents, we also understand why some acts are wrong, and are thereby able to flexibly adapt our behaviour to different situations. I argue that there are no AI systems on the horizon that could genuinely care about doing right or understand the demands of morality, because these capacities require experiential consciousness and holistic judgment. We therefore cannot shift responsibility for their actions onto machines. Instead, we must aim to build artificial right-doers: systems that do not (...)
  15. added 2019-02-19
    First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but its arrival is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  16. added 2019-02-11
    Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems.Owen King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
    1 citation
  17. added 2018-12-20
    Fare e funzionare. Sull'analogia di robot e organismo.Fabio Fossa - 2018 - InCircolo - Rivista di Filosofia E Culture 6:73-88.
    In this essay I try to determine the extent to which it is possible to conceive robots and organisms as analogous entities. After a cursory preamble on the long history of epistemological connections between machines and organisms I focus on Norbert Wiener’s cybernetics, where the analogy between modern machines and organisms is introduced most explicitly. The analysis of issues pertaining to the cybernetic interpretation of the analogy serves then as a basis for a critical assessment of its reprise in contemporary (...)
    1 citation
  18. added 2018-11-26
    Making Metaethics Work for AI: Realism and Anti-Realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  19. added 2018-11-16
    Machine Medical Ethics.Simon Peter van Rysewyk & Matthijs Pontier (eds.) - 2014 - Springer.
    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. -/- As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What (...)
  20. added 2018-11-07
    The Motivations and Risks of Machine Ethics.Stephen Cave, Rune Nyrup, Karina Vold & Adrian Weller - 2019 - Proceedings of the IEEE 107 (3):562-574.
    Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the computational limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to (...)
  21. added 2018-11-07
    Can Humanoid Robots Be Moral?Sanjit Chakraborty - 2018 - Ethics in Science, Environment and Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity interplay with (...)
  22. added 2018-08-21
    Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
    1 citation
  23. added 2018-07-05
    Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  24. added 2018-07-02
    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
    1 citation
  25. added 2018-06-06
    Designing in Ethics. [REVIEW]Steven Umbrello - 2019 - Prometheus: Critical Studies in Innovation 35 (1):160-161.
    Designing in Ethics provides a compilation of well-curated essays that tackle the ethical issues surrounding technological design and argue that ethics must form a constitutive part of the design process and a foundation in our institutions and practices. A design approach to applied ethics is proposed as a means by which ethical issues implicating technological artifacts may be addressed.
  26. added 2018-05-19
    Mental Time-Travel, Semantic Flexibility, and A.I. Ethics.Marcus Arvan - forthcoming - AI and Society:1-20.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ _GenEth_. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
    2 citations
  27. added 2018-04-16
    Do Machines Have Prima Facie Duties?Gary Comstock - 2015 - In Machine Medical Ethics. London: Springer. pp. 79-92.
    A properly programmed artificially intelligent agent may eventually have one duty, the duty to satisfice expected welfare. We explain this claim and defend it against objections.
    1 citation
  28. added 2018-01-13
    Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach initiated by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    2 citations
  29. added 2017-12-30
    Two Challenges for CI Trustworthiness and How to Address Them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017 - Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017).
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  30. added 2017-12-01
    Transparent, Explainable, and Accountable AI for Robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    7 citations
  31. added 2017-10-04
    Fundamental Issues of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
  32. added 2017-09-18
    Preserving a Combat Commander’s Moral Agency: The Vincennes Incident as a Chinese Room.Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)
  33. added 2017-09-18
    Artificial Moral Agents Are Infeasible with Foreseeable Technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    5 citations
  34. added 2017-09-04
    Artificial Consciousness and the Consciousness-Attention Dissociation.Harry Haroutioun Haladjian & Carlos Montemayor - 2016 - Consciousness and Cognition 45:210-225.
    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes (...)
    4 citations
  35. added 2017-03-28
    Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, (...)
    3 citations
  36. added 2017-03-28
    Metaethics in Context of Engineering Ethical and Moral Systems.Michal Klincewicz & Lily Frank - 2016 - In AAAI Spring Series Technical Reports. Palo Alto, CA, USA: AAAI Press.
    It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, focusing specifically on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics, therefore, plays (...)
    2 citations
  37. added 2017-01-21
    Understanding and Augmenting Human Morality: The Actwith Model of Conscience.Jeffrey White - 2009 - In L. Magnani (ed.), computational intelligence.
    Abstract. Recent developments, both in the cognitive sciences and in world events, bring special emphasis to the study of morality. The cognitive sciences, spanning neurology, psychology, and computational intelligence, offer substantial advances in understanding the origins and purposes of morality. Meanwhile, world events urge the timely synthesis of these insights with traditional accounts that can be easily assimilated and practically employed to augment moral judgment, both to solve current problems and to direct future action. The object of the (...)
  38. added 2016-12-26
    Membrane Computing: From Biology to Computation and Back.Paolo Milazzo - 2014 - Isonomia: Online Philosophical Journal of the University of Urbino:1-15.
    Natural Computing is a field of research in Computer Science aimed at reinterpreting biological phenomena as computing mechanisms. This allows unconventional computing architectures to be proposed in which computations are performed by atoms, DNA strands, cells, insects or other biological elements. Membrane Computing is a branch of Natural Computing in which the biological phenomena of interest concern interactions between molecules inside cells. The research in Membrane Computing has led to very important theoretical results that show how, in principle, cells (...)
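    The evolution mechanism the abstract alludes to can be sketched in miniature. The following toy example is an illustration assumed here, not taken from the paper: it treats a single membrane as a multiset of symbols and applies rewriting rules in one maximally parallel step, the core move of a membrane (P) system.

    ```python
    from collections import Counter

    def step(membrane: Counter, rules: dict) -> Counter:
        """One maximally parallel evolution step: every copy of every
        symbol that has a rule is rewritten; other symbols persist."""
        nxt = Counter()
        for sym, count in membrane.items():
            if sym in rules:
                for product in rules[sym]:
                    nxt[product] += count  # each of the `count` copies rewrites
            else:
                nxt[sym] += count          # no applicable rule: symbol survives
        return nxt

    # Hypothetical rule set: a -> bb (each `a` becomes two `b`s).
    rules = {"a": ["b", "b"]}
    m = step(Counter({"a": 3}), rules)  # all three a's fire in parallel
    print(m["b"])  # 6
    ```

    Real P systems add nested membranes, dissolution, and communication rules; this single-membrane sketch only shows the maximally parallel rewriting that gives the model its computational power.
    
    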
  39. added 2016-10-19
    When is a Robot a Moral Agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
    34 citations
  40. added 2016-10-18
    Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents (APA 2013).Christophe Menant - 2013 - American Philosophical Association Newsletter on Philosophy and Computers 13 (1):30-34.
    The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question “can machines think?” We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into “can AAs generate meanings like humans do?” We correspondingly present the TT, the CRA and the SGP (...)
    4 citations
  41. added 2016-10-10
    Roboethics: Ethics Applied to Robotics.Gianmarco Veruggio, Jorge Solis & Machiel Van der Loos - 2011 - IEEE Robotics and Automation Magazine 1 (March):21-22.
    This special issue deals with the emerging debate on roboethics, the human ethics applied to robotics. Is a specific ethic applied to robotics truly necessary? Or, conversely, are not the general principles of ethics adequate to answer many of the issues raised by our field’s applications? In our opinion, and according to many roboticists and human scientists, many novel issues that emerge and many more that will show up in the immediate future, arising from the (...)
  42. added 2016-07-27
    Artificial Free Will: The Responsibility Strategy and Artificial Agents.Sven Delarivière - 2016 - Apeiron Student Journal of Philosophy (Portugal) 7:175-203.
    Both a traditional notion of free will, present in human beings, and artificial intelligence are often argued to be inherently incompatible with determinism. Contrary to these criticisms, this paper argues that an account of free will compatible with determinism, specifically the responsibility strategy (a term coined here), is a variety of free will worth wanting as well as a variety that is possible to (in principle) artificially construct. First, freedom will be defined and related to ethics. With that in mind, the two (...)
  43. added 2016-07-27
    Machines as Moral Patients We Shouldn't Care About: The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  44. added 2016-07-27
    The Ethics of Creating Artificial Consciousness.John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  45. added 2016-07-27
    Homo Sapiens 2.0: Why We Should Build the Better Robots of Our Nature.Eric Dietrich - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
    It is possible to survey humankind and be proud, even to smile, for we accomplish great things. Art and science are two notable worthy human accomplishments. Consonant with art and science are some of the ways we treat each other. Sacrifice and heroism are two admirable human qualities that pervade human interaction. But, as everyone knows, all this goodness is more than balanced by human depravity. Moral corruption infests our being. Why?
  46. added 2016-07-27
    Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-Temporality in Action.Xabier Barandiaran, E. Di Paolo & M. Rohde - 2009 - Adaptive Behavior 17 (5):367-386.
    The concept of agency is of crucial importance in cognitive science and artificial intelligence, and it is often used as an intuitive and rather uncontroversial term, in contrast to more abstract and theoretically heavy-weighted terms like “intentionality”, “rationality” or “mind”. However, most of the available definitions of agency are either too loose or unspecific to allow for a progressive scientific program. They implicitly and unproblematically assume the features that characterize agents, thus obscuring the full potential and challenge of modeling agency. (...)
  47. added 2016-07-27
    AI, Situatedness, Creativity, and Intelligence; or the Evolution of the Little Hearing Bones.Eric Dietrich - 1996 - Journal of Experimental and Theoretical AI 8 (1):1-6.
    Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, (...)
  48. added 2016-07-09
    Rethinking Machine Ethics in the Era of Ubiquitous Technology.Jeffrey White (ed.) - 2015 - IGI.
  49. added 2016-03-01
    Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life.John Danaher - 2017 - Science and Engineering Ethics 23 (1):41-64.
    Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on (...)
  50. added 2015-11-30
    Granny and the Robots: Ethical Issues in Robot Care for the Elderly.Amanda Sharkey & Noel Sharkey - 2012 - Ethics and Information Technology 14 (1):27-40.
    The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the (...)