Citations
  • Leakproofing the Singularity: Artificial Intelligence Confinement Problem.Roman Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the 'leakproofing' of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence.
    5 citations
  • Freedom of the will and the concept of a person.Harry G. Frankfurt - 1971 - Journal of Philosophy 68 (1):5-20.
    It is my view that one essential difference between persons and other creatures is to be found in the structure of a person's will. Besides wanting and choosing and being moved to do this or that, men may also want to have certain desires and motives. They are capable of wanting to be different, in their preferences and purposes, from what they are. Many animals appear to have the capacity for what I shall call "first-order desires" or "desires of the (...)
    1459 citations
  • Freedom of the will and the concept of a person.Harry Frankfurt - 2004 - In Tim Crane & Katalin Farkas (eds.), Metaphysics: A Guide and Anthology. Oxford University Press UK.
    707 citations
  • Coherent Extrapolated Volition.Eliezer Yudkowsky - 2001 - The Singularity Institute.
    21 citations
  • .Peter Railton - 1985 - Rowman & Littlefield.
    199 citations
  • Why Richard Brandt does not need cognitive psychotherapy, and other glad news about idealized preference theories in meta-ethics.David Zimmerman - 2003 - Journal of Value Inquiry 37 (3):373-394.
    3 citations
  • A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents.Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational (...)
    24 citations
  • Implications and consequences of robots with biological brains.Kevin Warwick - 2010 - Ethics and Information Technology 12 (3):223-234.
    In this paper a look is taken at the relatively new area of culturing neural tissue and embodying it in a mobile robot platform—essentially giving a robot a biological brain. Present technology and practice is discussed. New trends and the potential effects of and in this area are also indicated. This has a potential major impact with regard to society and ethical issues and hence some initial observations are made. Some initial issues are also considered with regard to the potential (...)
    8 citations
  • Can robots be moral?Laszlo Versenyi - 1974 - Ethics 84 (3):248-259.
    10 citations
  • Intelligent machinery, a heretical theory.A. M. Turing - 1996 - Philosophia Mathematica 4 (3):256-260.
    35 citations
  • "Construal-level theory of psychological distance": Correction to Trope and Liberman (2010).Yaacov Trope & Nira Liberman - 2010 - Psychological Review 117 (3):1024-1024.
    106 citations
  • Weaving Technology and Policy Together to Maintain Confidentiality.Latanya Sweeney - 1997 - Journal of Law, Medicine and Ethics 25 (2-3):98-110.
    Organizations often release and receive medical data with all explicit identifiers, such as name, address, telephone number, and Social Security number, removed on the assumption that patient confidentiality is maintained because the resulting data look anonymous. However, in most of these cases, the remaining data can be used to reidentify individuals by linking or matching the data to other data bases or by looking at unique characteristics found in the fields and records of the data base itself. When these less (...)
    10 citations
  • Advantages of artificial intelligences, uploads, and digital minds.Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (01):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
    5 citations
  • Full information accounts of well-being.David Sobel - 1994 - Ethics 104 (4):784-810.
    90 citations
  • Dispositional Theories of Value.Michael Smith, David Lewis & Mark Johnston - 1989 - Aristotelian Society Supplementary Volume 63 (1):89-174.
    396 citations
  • The perils of cognitive enhancement and the urgent imperative to enhance the moral character of humanity.Ingmar Persson & Julian Savulescu - 2008 - Journal of Applied Philosophy 25 (3):162-177.
    As history shows, some human beings are capable of acting very immorally. Technological advance and consequent exponential growth in cognitive power means that even rare evil individuals can act with catastrophic effect. The advance of science makes biological, nuclear and other weapons of mass destruction easier and easier to fabricate and, thus, increases the probability that they will come into the hands of small terrorist groups and deranged individuals. Cognitive enhancement by means of drugs, implants and biological (including (...)
    189 citations
  • Toward some circuitry of ethical robots or an observational science of the genesis of social evaluation in the mind-like behavior of artifacts.W. S. McCulloch - 1956 - Acta Biotheoretica 11 (3-4):147-156.
    Modern knowledge of servo systems and computing machines makes it possible to specify a circuit that can and will induce the rules and winning moves in a game like chess when they are given only ostensibly, that is, by playing against opponents who quit when illegal or losing moves are made. Such circuits enjoy a value social in the sense that it is shared by the players. (...)
    1 citation
  • Why uploading will not work, or, the ghosts haunting transhumanism.Patrick D. Hopkins - 2012 - International Journal of Machine Consciousness 4 (01):229-243.
    6 citations
  • Electron imaging technology for whole brain neural circuit mapping.Kenneth J. Hayworth - 2012 - International Journal of Machine Consciousness 4 (01):87-108.
    10 citations
  • My brain, my mind, and I: Some philosophical assumptions of mind-uploading.Michael Hauskeller - 2012 - International Journal of Machine Consciousness 4 (01):187-200.
    11 citations
  • Shall We Vote on Values, But Bet on Beliefs?Robin Hanson - 2013 - Journal of Political Philosophy 21 (2):151-178.
    Policy disputes arise at all scales of governance: in clubs, non-profits, firms, nations, and alliances of nations. Both the means and ends of policy are disputed. While many, perhaps most, policy disputes arise from conflicting ends, important disputes also arise from differing beliefs on how to achieve shared ends. In fact, according to many experts in economics and development, governments often choose policies that are “inefficient” in the sense that most everyone could expect to gain from other feasible policies. Many (...)
    8 citations
  • When should two minds be considered versions of one another?Ben Goertzel - 2012 - International Journal of Machine Consciousness 4 (01):177-185.
    3 citations
  • Moral enhancement.Thomas Douglas - 2008 - Journal of Applied Philosophy 25 (3):228-245.
    Opponents of biomedical enhancement often claim that, even if such enhancement would benefit the enhanced, it would harm others. But this objection looks unpersuasive when the enhancement in question is a moral enhancement — an enhancement that will expectably leave the enhanced person with morally better motives than she had previously. In this article I (1) describe one type of psychological alteration that would plausibly qualify as a moral enhancement, (2) argue that we will, in the medium-term future, probably be (...)
    192 citations
  • The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW]Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
    34 citations
  • On How to Build a Moral Machine.Paul Bello & Selmer Bringsjord - 2013 - Topoi 32 (2):251-266.
    Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood; all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for (...)
    5 citations
  • A framework for approaches to transfer of a mind's substrate.Sim Bamford - 2012 - International Journal of Machine Consciousness 4 (01):23-34.
    6 citations
  • Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
    17 citations
  • Existential risks: analyzing human extinction scenarios and related hazards.Nick Bostrom - 2002 - J Evol Technol 9 (1).
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from (...)
    75 citations
  • .Michael Friedman & Alfred Nordmann (eds.) - 2006 - MIT Press.
    28 citations
  • What Might Cognition Be, If Not Computation?Tim Van Gelder - 1995 - Journal of Philosophy 92 (7):345 - 381.
    303 citations
  • Freedom of the Will and the Concept of a Person.Harry Frankfurt - 1971 - In Gary Watson (ed.), Free Will. Oxford University Press.
    608 citations
  • An Essay on the Desire-Based Reasons Model.Attila Tanyi - 2006 - Dissertation, Central European University
    The dissertation argues against the view that normative reasons for action are grounded in desires. It first works out the different versions of the Model. After this, in the next three chapters, it presents and discusses three arguments against the Model, on the basis of which, it concludes that the Model gives us the wrong account of normative practical reasons.
    2 citations
  • Saving Machines From Themselves: The Ethics of Deep Self-Modification.Peter Suber - unknown
    We human beings do have the power to modify our deep structure, through drugs and surgery. But we cannot yet use this power with enough precision to make deep changes to our neural structure without high risk of death or disability. There are two reasons why we find ourselves in this position. First, our instruments of self-modification are crude. Second, we have very limited knowledge about where and how to apply our instruments to get specific desirable effects. For the same (...)
    2 citations
  • The singularity: A philosophical analysis.David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
    114 citations
  • Can Intelligence Explode?Marcus Hutter - 2012 - Journal of Consciousness Studies 19 (1-2):143-166.
    The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to (...)
    2 citations
  • The Singularity: A Reply to Commentators.David J. Chalmers - 2012 - Journal of Consciousness Studies 19 (7-8):141-167.
    I would like to thank the authors of the 26 contributions to this symposium on my article “The Singularity: A Philosophical Analysis”. I learned a great deal from reading their commentaries. Some of the commentaries engaged my article in detail, while others developed ideas about the singularity in other directions. In this reply I will concentrate mainly on those in the first group, with occasional comments on those in the second. A singularity (or an intelligence explosion) is a rapid (...)
    10 citations
  • Response to The Singularity by David Chalmers.Drew McDermott - 2012 - Journal of Consciousness Studies 19 (1-2):1-2.
    2 citations
  • The Mystery of David Chalmers.Daniel Dennett - 2012 - Journal of Consciousness Studies 19 (1-2):1-2.
    8 citations
  • A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.Francis Heylighen - 2012 - Journal of Consciousness Studies 19 (1-2):126-142.
    The present paper criticizes Chalmers's discussion of the Singularity, viewed as the emergence of a superhuman intelligence via the self-amplifying development of artificial intelligence. The situated and embodied view of cognition rejects the notion that intelligence could arise in a closed 'brain-in-a-vat' system, because intelligence is rooted in a high-bandwidth, sensory-motor interaction with the outside world. Instead, it is proposed that superhuman intelligence can emerge only in a distributed fashion, in the form of a self-organizing network of humans, computers, and (...)
    3 citations
  • Between angels and animals: The question of robot ethics, or is Kantian moral agency desirable?Anthony F. Beavers - unknown
    In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building (...)
    4 citations
  • When is a robot a moral agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
    70 citations
  • Personal Identity and Uploading.Mark Walker - 2011 - Journal of Evolution and Technology 22 (1):37-52.
    Objections to uploading may be parsed into substrate issues, dealing with the computer platform of upload and personal identity. This paper argues that the personal identity issues of uploading are no more or less challenging than those of bodily transfer often discussed in the philosophical literature. It is argued that what is important in personal identity involves both token and type identity. While uploading does not preserve token identity, it does save type identity; and even qua token, one may have (...)
    4 citations
  • When will computer hardware match the human brain?Hans Moravec - 1998 - Journal of Evolution and Technology 1 (1):10.
    Computers have far to go to match human strengths, and our estimates will depend on analogy and extrapolation. Fortunately, these are grounded in the first bit of the journey, now behind us. Thirty years of computer vision reveals that 1 MIPS can extract simple features from real-time imagery--tracking a white line or a white spot on a mottled background. 10 MIPS can follow complex gray-scale patches--as smart bombs, cruise missiles and early self-driving vans attest. 100 MIPS can follow moderately unpredictable (...)
    28 citations
  • Mitigating potential hazards to humans from the development of intelligent machines.William Daley - 2011 - Synesis: A Journal of Science, Technology, Ethics, and Policy 2 (1):G44 - G50.
    1 citation
  • Prolegomena to any future artificial moral agent.Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251--261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
    75 citations
  • Economic Growth Given Machine Intelligence.Robin Hanson - unknown
    A simple exogenous growth model gives conservative estimates of the economic implications of machine intelligence. Machines complement human labor when they become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, expensive hardware and software does only the few jobs where computers have the strongest advantage over humans. Eventually, computers do most jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can (...)
    7 citations
  • Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood?Ben Goertzel - 2012 - Journal of Consciousness Studies 19 (1-2):96.
    Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters' i.e. at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall Singularity eternally, or to delay the Singularity until humanity more (...)
    7 citations
  • Staring into the singularity.Eliezer Yudkowsky - manuscript
    1: The End of History 2: The Beyondness of the Singularity 2.1: The Definition of Smartness 2.2: Perceptual Transcends 2.3: Great Big Numbers 2.4: Smarter Than We Are 3: Sooner Than You Think 4: Uploading 5: The Interim Meaning of Life 6: Getting to the Singularity.
    5 citations