  1. added 2019-03-14
    Osaammeko rakentaa moraalisia toimijoita?Antti Kauppinen - forthcoming - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta.
    In order to be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some degree, in accordance with them. If we are full-fledged moral agents, we also understand why certain acts are wrong, and are thus able to adapt our behaviour flexibly to different situations. I argue that there are no AI systems on the horizon that could genuinely care about doing the right thing or understand the demands of morality, because these capacities require phenomenal consciousness and holistic judgement. We therefore cannot shift responsibility for their actions onto machines. Instead, we should aim to build artificial right-doers: systems that do not (...)
  2. added 2019-02-19
    First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create a safe self-improving superintelligence, yet its arrival is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  3. added 2018-12-20
    Fare e funzionare. Sull'analogia di robot e organismo.Fabio Fossa - 2018 - InCircolo - Rivista di Filosofia E Culture 6:73-88.
    In this essay I try to determine the extent to which it is possible to conceive robots and organisms as analogous entities. After a cursory preamble on the long history of epistemological connections between machines and organisms I focus on Norbert Wiener’s cybernetics, where the analogy between modern machines and organisms is introduced most explicitly. The analysis of issues pertaining to the cybernetic interpretation of the analogy serves then as a basis for a critical assessment of its reprise in contemporary (...)
  4. added 2018-11-26
    Making Metaethics Work for AI: Realism and Anti-Realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  5. added 2018-11-07
    Can Humanoid Robots Be Moral?Sanjit Chakraborty - 2018 - Ethics in Science, Environment and Politics 18:49-60.
    The concept of morality underpins the moral responsibility that depends not only on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but also on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity interplay with (...)
  6. added 2018-08-21
    Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
    1 citation
  7. added 2018-07-05
    Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  8. added 2018-07-03
    Evolution: The Computer Systems Engineer Designing Minds.Aaron Sloman - 2011 - Avant: Trends in Interdisciplinary Studies 2 (2):45–69.
    What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness could be products of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery which (...)
  9. added 2018-07-02
    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
  10. added 2018-06-06
    Designing in Ethics. [REVIEW]Steven Umbrello - 2019 - Prometheus: Critical Studies in Innovation 35 (1):160-161.
    Designing in Ethics provides a compilation of well-curated essays that tackle the ethical issues surrounding technological design and argue that ethics must form a constitutive part of the design process and a foundation in our institutions and practices. The adoption of a design approach to applied ethics is argued to be a means by which ethical issues implicated by technological artifacts may be addressed.
  11. added 2018-05-19
    Mental Time-Travel, Semantic Flexibility, and A.I. Ethics.Marcus Arvan - forthcoming - AI and Society:1-20.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ _GenEth_. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  12. added 2018-04-16
    Do Machines Have Prima Facie Duties?Gary Comstock - 2015 - In Machine Medical Ethics. London: Springer. pp. 79-92.
    A properly programmed artificially intelligent agent may eventually have one duty, the duty to satisfice expected welfare. We explain this claim and defend it against objections.
  13. added 2018-01-13
    Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    1 citation
  14. added 2017-12-30
    Two Challenges for CI Trustworthiness and How to Address Them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017 - Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017).
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  15. added 2017-12-01
    Transparent, Explainable, and Accountable AI for Robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    3 citations
  16. added 2017-10-04
    Fundamental Issues of Artificial Intelligence.Vincent Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
  17. added 2017-09-18
    Preserving a Combat Commander’s Moral Agency: The Vincennes Incident as a Chinese Room.Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)
  18. added 2017-09-18
    Artificial Moral Agents Are Infeasible with Foreseeable Technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    2 citations
  19. added 2017-09-04
    Artificial Consciousness and the Consciousness-Attention Dissociation.Harry Haroutioun Haladjian & Carlos Montemayor - 2016 - Consciousness and Cognition 45:210-225.
    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes (...)
    1 citation
  20. added 2017-03-28
    Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, (...)
    1 citation
  21. added 2017-03-28
    Metaethics in Context of Engineering Ethical and Moral Systems.Michal Klincewicz & Lily Frank - 2016 - In AAAI Spring Series Technical Reports. Palo Alto, CA, USA: AAAI Press.
    It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, specifically focusing on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics, therefore, plays (...)
    1 citation
  22. added 2017-01-21
    Understanding and Augmenting Human Morality: The Actwith Model of Conscience.Jeffrey White - 2009 - In L. Magnani (ed.), computational intelligence.
    Abstract. Recent developments, both in the cognitive sciences and in world events, bring special emphasis to the study of morality. The cognitive sciences, spanning neurology, psychology, and computational intelligence, offer substantial advances in understanding the origins and purposes of morality. Meanwhile, world events urge the timely synthesis of these insights with traditional accounts that can be easily assimilated and practically employed to augment moral judgment, both to solve current problems and to direct future action. The object of the (...)
  23. added 2016-12-26
    Membrane Computing: From Biology to Computation and Back.Paolo Milazzo - 2014 - Isonomia: Online Philosophical Journal of the University of Urbino:1-15.
    Natural Computing is a field of research in Computer Science aimed at reinterpreting biological phenomena as computing mechanisms. This allows unconventional computing architectures to be proposed in which computations are performed by atoms, DNA strands, cells, insects or other biological elements. Membrane Computing is a branch of Natural Computing in which biological phenomena of interest are related with interactions between molecules inside cells. The research in Membrane Computing has led to very important theoretical results that show how, in principle, cells (...)
  24. added 2016-10-18
    Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents (2013).Christophe Menant - 2013 - In The American Philosophical Association (ed.), APA Newsletter Philosophy and Computers Fall 2013 ISSN 2155-9708. The American Philosophical Association.
    The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question “can machines think?” We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into “can AAs generate meanings like humans do?” We correspondingly present the TT, the CRA and the SGP (...)
  25. added 2016-10-10
    Roboethics: Ethics Applied to Robotics.Gianmarco Veruggio, Jorge Solis & Machiel Van der Loos - 2011 - IEEE Robotics and Automation Magazine 1 (March):21-22.
    This special issue deals with the emerging debate on roboethics, the human ethics applied to robotics. Is a specific ethic applied to robotics truly necessary? Or, conversely, are not the general principles of ethics adequate to answer many of the issues raised by our field’s applications? In our opinion, and according to many roboticists and human scientists, many novel issues that emerge and many more that will show up in the immediate future, arising from the (...)
    1 citation
  26. added 2016-07-27
    Artificial Free Will: The Responsibility Strategy and Artificial Agents.Sven Delarivière - 2016 - Apeiron Student Journal of Philosophy (Portugal) 7:175-203.
    Both a traditional notion of free will, present in human beings, and artificial intelligence are often argued to be inherently incompatible with determinism. Contrary to these criticisms, this paper argues that an account of free will compatible with determinism, specifically the responsibility strategy (a term coined here), is a variety of free will worth wanting, and one that it is possible, in principle, to construct artificially. First, freedom will be defined and related to ethics. With that in mind, the two (...)
  27. added 2016-07-27
    Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
    2 citations
  28. added 2016-07-27
    The Ethics of Creating Artificial Consciousness.John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
    2 citations
  29. added 2016-07-27
    Homo Sapiens 2.0 Why We Should Build the Better Robots of Our Nature.Eric Dietrich - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
    It is possible to survey humankind and be proud, even to smile, for we accomplish great things. Art and science are two notable worthy human accomplishments. Consonant with art and science are some of the ways we treat each other. Sacrifice and heroism are two admirable human qualities that pervade human interaction. But, as everyone knows, all this goodness is more than balanced by human depravity. Moral corruption infests our being. Why?
  30. added 2016-07-27
    Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-Temporality in Action.Xabier Barandiaran, E. Di Paolo & M. Rohde - 2009 - Adaptive Behavior 17 (5):367-386.
    The concept of agency is of crucial importance in cognitive science and artificial intelligence, and it is often used as an intuitive and rather uncontroversial term, in contrast to more abstract and theoretically heavy-weighted terms like “intentionality”, “rationality” or “mind”. However, most of the available definitions of agency are either too loose or unspecific to allow for a progressive scientific program. They implicitly and unproblematically assume the features that characterize agents, thus obscuring the full potential and challenge of modeling agency. (...)
    17 citations
  31. added 2016-07-27
    AI, Situatedness, Creativity, and Intelligence; or the Evolution of the Little Hearing Bones.Eric Dietrich - 1996 - J. of Experimental and Theoretical AI 8 (1):1-6.
    Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, (...)
  32. added 2016-07-09
    Rethinking Machine Ethics in the Era of Ubiquitous Technology.Jeffrey White (ed.) - 2015 - IGI.
  33. added 2016-03-01
    Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life.John Danaher - 2017 - Science and Engineering Ethics 23 (1):41-64.
    Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on (...)
    2 citations
  34. added 2015-11-30
    Granny and the Robots: Ethical Issues in Robot Care for the Elderly.Amanda Sharkey & Noel Sharkey - 2012 - Ethics and Information Technology 14 (1):27-40.
    The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the (...)
    25 citations
  35. added 2015-11-19
    Autonomous Killer Robots Are Probably Good News.Vincent C. Müller - 2016 - In Ezio Di Nucci & Filippo Santoni de Sio (eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. Ashgate. pp. 67-81.
    Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away (...)
  36. added 2015-11-19
    Just War and Robots’ Killings.Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-22.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
    1 citation
  37. added 2015-11-12
    Future Progress in Artificial Intelligence: A Poll Among Experts.Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science (...)
    1 citation
  38. added 2015-11-07
    Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
  39. added 2015-11-07
    Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  40. added 2015-11-05
    Editorial: Risks of General Artificial Intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
  41. added 2015-11-05
    Risks of Artificial General Intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Contents: Risks of general artificial intelligence, Vincent C. Müller, pages 297-301; Autonomous technology and the greater human good, Steve Omohundro, pages 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; (...)
    1 citation
  42. added 2015-11-01
    Would You Mind Being Watched by Machines? Privacy Concerns in Data Mining.Vincent C. Müller - 2009 - AI and Society 23 (4):529-544.
    "Data mining is not an invasion of privacy because access to data is only by machines, not by people": this is the argument that is investigated here. The current importance of this problem is developed in a case study of data mining in the USA for counterterrorism and other surveillance purposes. After a clarification of the relevant nature of privacy, it is argued that access by machines cannot warrant the access to further information, since the analysis will have to be (...)
  43. added 2015-10-30
    Trusting the (Ro)Botic Other: By Assumption?Paul B. de Laat - 2015 - SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines in their back corridors; (...)
  44. added 2015-08-26
    Toward Modeling and Automating Ethical Decision Making: Design, Implementation, Limitations, and Responsibilities.Gregory S. Reed & Nicholaos Jones - 2013 - Topoi 32 (2):237-250.
    One recent priority of the U.S. government is developing autonomous robotic systems. The U.S. Army has funded research to design a metric of evil to support military commanders with ethical decision-making and, in the future, allow robotic military systems to make autonomous ethical judgments. We use this particular project as a case study for efforts that seek to frame morality in quantitative terms. We report preliminary results from this research, describing the assumptions and limitations of a program that assesses the (...)
    1 citation
  45. added 2015-06-18
    Nick Bostrom: Superintelligence: Paths, Dangers, Strategies. [REVIEW]Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  46. added 2015-04-16
    Why AI Doomsayers Are Like Sceptical Theists and Why It Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  47. added 2014-11-20
    Autonomous Reboot: The Challenges of Artificial Moral Agency and the Ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  48. added 2014-09-26
    Picturing Mind Machines, An Adaptation by Janneke van Leeuwen.Simon van Rysewyk & Janneke van Leeuwen - 2014 - In Simon Peter van Rysewyk & Matthijs Pontier (eds.), Machine Medical Ethics. Springer.
  49. added 2014-05-21
    Machine Medical Ethics.Simon Peter van Rysewyk & Matthijs Pontier (eds.) - 2014 - Springer.
    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. -/- As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What (...)
  50. added 2014-05-06
    Who’s Afraid of Robots? Fear of Automation and the Ideal of Direct Control.Ezio Di Nucci & Filippo Santoni de Sio - 2014 - In Fiorella Battaglia & Natalie Weidenfeld (eds.), Roboethics in Film. Pisa University Press.
    We argue that lack of direct and conscious control is not, in principle, a reason to be afraid of machines in general and robots in particular: in order to articulate the ethical and political risks of increasing automation one must, therefore, tackle the difficult task of precisely delineating the theoretical and practical limits of sustainable delegation to robots.