69 found
1 — 50 / 69
  1. Explainable AI is Indispensable in Areas Where Liability is an Issue.Nelson Brochado - manuscript
    What is explainable artificial intelligence and why is it indispensable in areas where liability is an issue?
  2. Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa - manuscript
    What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed out and (...)
  3. The Debate on the Ethics of AI in Health Care: A Reconstruction and Critical Review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
  4. First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  5. Levels of Self-Improvement in AI and Their Implications for AI Safety.Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code, and goal systems, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  6. AI Alignment Problem: “Human Values” Don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
  7. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  8. Narrow AI Nanny: Reaching Strategic Advantage Via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  9. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were surveyed, and some of them have several theories. A (...)
  10. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, based on purely theoretical reasoning, the question of what kind of simulation humanity is most likely located in. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
  11. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - manuscript
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways of autonomous systems. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper proposes the Value Sensitive Design (VSD) approach as a principled framework for incorporating these values in design. The example of autonomous vehicles is used as a (...)
  12. Ethical Pitfalls for Natural Language Processing in Psychology.Mark Alfano, Emily Sullivan & Amir Ebrahimi Fard - forthcoming - In Morteza Dehghani & Ryan Boyd (eds.), The Atlas of Language Analysis in Psychology. Guilford Press.
    Knowledge is power. Knowledge about human psychology is increasingly being produced using natural language processing (NLP) and related techniques. The power that accompanies and harnesses this knowledge should be subject to ethical controls and oversight. In this chapter, we address the ethical pitfalls that are likely to be encountered in the context of such research. These pitfalls occur at various stages of the NLP pipeline, including data acquisition, enrichment, analysis, storage, and sharing. We also address secondary uses of the results (...)
  13. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
  14. Inscrutable Processes: Algorithms, Agency, and Divisions of Deliberative Labour.Marinus Ferreira - forthcoming - Journal of Applied Philosophy.
    As the use of algorithmic decision‐making becomes more commonplace, so too does the worry that these algorithms are often inscrutable and that our use of them is a threat to our agency. Since we do not understand why an inscrutable process recommends one option over another, we lose our ability to judge whether the guidance is appropriate and are vulnerable to being led astray. In response, I claim that a process being inscrutable does not automatically make its guidance inappropriate. This phenomenon is (...)
  15. Who Should Bear the Risk When Self‐Driving Vehicles Crash?Antti Kauppinen - forthcoming - Journal of Applied Philosophy.
    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible (...)
  16. Machines Learning Values.Steve Petersen - forthcoming - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. New York, USA: Oxford University Press.
  17. Mapping Value Sensitive Design Onto AI for Social Good Principles.Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1:1-14.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML (...)
  18. Technologically scaffolded atypical cognition: The case of YouTube’s recommender system.Mark Alfano, Amir Ebrahimi Fard, J. Adam Carter, Peter Clutton & Colin Klein - 2020 - Synthese:1-24.
    YouTube has been implicated in the transformation of users into extremists and conspiracy theorists. The alleged mechanism for this radicalizing process is YouTube’s recommender system, which is optimized to amplify and promote clips that users are likely to watch through to the end. YouTube optimizes for watch-through for economic reasons: people who watch a video through to the end are likely to then watch the next recommended video as well, which means that more advertisements can be served to them. This (...)
  19. Digital Psychiatry: Ethical Risks and Opportunities for Public Health and Well-Being.Christopher Burr, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2020 - IEEE Transactions on Technology and Society 1 (1):21-33.
    Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as ‘digital psychiatry’. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, (...)
  20. Modelos Dinâmicos Aplicados à Aprendizagem de Valores em Inteligência Artificial [Dynamic Models Applied to Value Learning in Artificial Intelligence].Nicholas Kluge Corrêa & Nythamar De Oliveira - 2020 - Veritas – Revista de Filosofia da PUCRS 2 (65):1-15.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made prudently and critically-reflexively, it can result in negative outcomes for humanity. For this reason, several researchers in the area have developed a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research (...)
  21. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches to (...)
  22. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  23. AI Methods in Bioethics.Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 1 (11):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  24. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
  25. The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons.Steven Umbrello, Phil Torres & Angelo F. De Bellis - 2020 - AI and Society 35 (1):273-282.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
  26. Robustness to Fundamental Uncertainty in AGI Alignment.G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  27. The Motivations and Risks of Machine Ethics.Stephen Cave, Rune Nyrup, Karina Vold & Adrian Weller - 2019 - Proceedings of the IEEE 107 (3):562-574.
    Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the computational limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to (...)
  28. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
  29. Distributive Justice as an Ethical Principle for Autonomous Vehicle Behavior Beyond Hazard Scenarios.Manuel Dietrich & Thomas H. Weisswange - 2019 - Ethics and Information Technology 21 (3):227-239.
    Through modern driver assistant systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This will become even more prominent in the near future considering the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged. However, scope, principles and strategies for their implementations are not yet clear. Most of the current discussions concentrate on situations of unavoidable crashes in which the life of human (...)
  30. The Promise and Perils of Medical AI.Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
  31. How the Seven Sociopaths Who Rule China Are Winning World War Three and Three Ways to Stop Them.Michael Richard Starks - 2019 - In Suicide by Democracy: An Obituary for America and the World. Las Vegas, NV USA: Reality Press. pp. 41-45.
    The first thing we must keep in mind is that when we say that China says this or China does that, we are not speaking of the Chinese people, but of the sociopaths who control the Chinese Communist Party, that is, the Seven Senile Sociopathic Serial Killers (SSSSK) of the Standing Committee of the CCP, or the 25 members of the Politburo, etc. The Communist Party's plans for WW3 and domination (...)
  32. Will Hominoids or Androids Destroy the Earth? A Review of How to Create a Mind by Ray Kurzweil (2012) (revised review, 2019).Michael Richard Starks - 2019 - In Suicidal Utopian Delusions in the 21st Century: Philosophy, Human Nature and the Collapse of Civilization, Articles and Reviews 2006-2019, 5th edition. Las Vegas, NV USA: Reality Press. pp. 155-167.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works, these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with (...)
  33. Global Solutions Vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  34. Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy.Steven Umbrello - 2019 - International Journal of Technoethics 10 (2):1-21.
    Although continued investments in nanotechnology are made, atomically precise manufacturing (APM) to date is still regarded as speculative technology. APM, also known as molecular manufacturing, is a token example of a converging technology and has great potential to impact and be affected by other emerging technologies, such as artificial intelligence, biotechnology, and ICT. The development of APM thus can have drastic global impacts depending on how it is designed and used. This paper argues that the ethical issues that arise from APM (...)
  35. Lethal Autonomous Weapons: Designing War Machines with Values.Steven Umbrello - 2019 - Delphi: Interdisciplinary Review of Emerging Technologies 1 (2):30-34.
    Lethal Autonomous Weapons (LAWs) have become the subject of continuous debate at both national and international levels. Arguments have been proposed both for the development and use of LAWs and for their prohibition from combat landscapes. Regardless, the development of LAWs continues in numerous nation-states. This paper builds upon previous philosophical arguments for the development and use of LAWs and proposes a design framework that can be used to ethically direct their development. The conclusion is that the philosophical arguments (...)
  36. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation.Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
  37. Autonomous Weapon Systems, Asymmetrical Warfare, and Myths.Michal Klincewicz - 2018 - Civitas 23.
    Predictions about autonomous weapon systems (AWS) are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately offers an argument that nuclear weapons (...)
  38. Assessing the Future Plausibility of Catastrophically Dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation, which various polls predict will happen in the second half of the 21st century, is often taken as a reference point; but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  39. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  40. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach initiated by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
  41. Robots Like Me: Challenges and Ethical Issues in Aged Care.Ipke Wachsmuth - 2018 - Frontiers in Psychology 9 (432).
    This paper addresses the issue of whether robots could substitute for human care, given the challenges in aged care induced by the demographic change. The use of robots to provide emotional care has raised ethical concerns, e.g., that people may be deceived and deprived of dignity. In this paper it is argued that these concerns might be mitigated and that it may be sufficient for robots to take part in caring when they behave *as if* they care.
  42. AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  43. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence.Patrick Lin, Keith Abney & Ryan Jenkins (eds.) - 2017 - Oxford University Press.
    As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks, from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
  44. Superintelligence as Superethical.Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0. New York, USA: Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us, not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  45. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2017 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  46. Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012).Michael Starks - 2017 - In Suicidal Utopian Delusions in the 21st Century 4th ed (2019). Henderson, NV USA: Michael Starks. pp. 675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works, these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with (...)
  47. Artificial Intelligence in Life Extension: From Deep Learning to Superintelligence.Alexey Turchin, David Denkenberger, Alice Zhila, Sergey Markov & Mikhail Batin - 2017 - Informatica 41:401.
    In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently living (...)
  48. Doctor of Philosophy Thesis in Military Informatics (OpenPhD): Lethal Autonomy of Weapons is Designed and/or Recessive.Nyagudi Nyagudi Musandu - 2016-12-09 - Dissertation, OpenPhD (#OpenPhD), e.g., Wikiversity https://en.wikiversity.org/wiki/Doctor_of_Philosophy , etc.
    My original contribution to knowledge is: Any weapon that exhibits intended and/or unintended lethal autonomy in targeting and interdiction – does so by way of design and/or recessive flaw(s) in its systems of control – any such weapon is capable of war-fighting and other battle-space interaction in a manner that its Human Commander does not anticipate. Even with the complexity of Lethal Autonomy issues there is nothing in particular to gain from being a low-tech Military. Lethal autonomous weapons are therefore (...)
  49. Artificial Intelligence: Opportunities and Implications for the Future of Decision Making.UK Government Office for Science - 2016
    Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report (...)
  50. Preserving a Combat Commander’s Moral Agency: The Vincennes Incident as a Chinese Room.Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)