Material to categorize
  1. Deepfakes and the Epistemic Backstop.Regina Rini - manuscript
    Deepfake technology uses machine learning to fabricate video and audio recordings that represent people doing and saying things they've never done. In coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the (...)
  2. Computational Models (of Narrative) for Literary Studies.Antonio Lieto - 2015 - Semicerchio, Rivista di Poesia Comparata 2 (LIII):38-44.
    In the last decades a growing body of literature in Artificial Intelligence (AI) and Cognitive Science (CS) has approached the problem of narrative understanding by means of computational systems. Narrative, in fact, is a ubiquitous element in our everyday activity, and the ability to generate and understand stories, and their structures, is a crucial cue of our intelligence. However, despite the fact that, from a historical standpoint, narrative (and narrative structures) have been an important topic of investigation in (...)
  3. Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics.Fabio Fossa - 2018 - In Mark Coeckelbergh, Janina Loh, Michael Funk, Joanna Seibt & Marco Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam: IOS Press. pp. 103-111.
    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), (...)
  4. Making Metaethics Work for AI: Realism and Anti-Realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, Janina Loh, Michael Funk, Joanna Seibt & Marco Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  5. Philosophy and Theory of Artificial Intelligence 2017.Vincent Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; (...)
Artificial Intelligence Safety
  1. How the Seven Sociopaths Who Rule China Are Winning World War Three and Three Ways to Stop Them.Michael Richard Starks - 2019 - In Suicide by Democracy: An Obituary for America and the World. Las Vegas, NV USA: Reality Press. pp. 41-45.
    The first thing we must keep in mind is that when we say that China says this or China does that, we are not speaking of the Chinese people, but of the sociopaths who control the Chinese Communist Party, i.e., the seven senile sociopathic serial killers (SSSSK) of the Standing Committee of the CCP, or the 25 members of the Politburo, etc. The Communist Party's plans for WW3 and domination (...)
  2. Explainable AI is Indispensable in Areas Where Liability is an Issue.Nelson Brochado - manuscript
    Certain animals and, in particular, humans have always been curious about the mysteries of the world. We have always shown interest in exploring the unknown, so that it becomes known. The necessity of discovery is likely inherent to our nature and it is possibly related to our limited time. Throughout the years, we have developed ways of communicating with each other and other animals. In particular, we have developed ways of saving and transferring information and knowledge. We have also developed (...)
  3. Unexplainability and Incomprehensibility of Artificial Intelligence.Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that for the decisions they could explain, people would (...)
  4. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2018 - AI and Society:1-17.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
  5. Unpredictability of AI.Roman Yampolskiy - manuscript
    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
  6. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
  7. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore the question of what kind of simulation humanity is most likely located in, based on purely theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
  8. Designing AI for Social Good: Seven Essential Factors.Josh Cowls, Thomas C. King, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap (...)
  9. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. Twenty-one authors were reviewed, and some of them hold several theories. A (...)
  10. AI Alignment Problem: “Human Values” Don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
  11. Autonomous Weapon Systems, Asymmetrical Warfare, and Myths.Michal Klincewicz - 2018 - Civitas 23.
    Predictions about autonomous weapon systems (AWS) are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately offers an argument that nuclear weapons (...)
  12. Lethal Autonomous Weapons: Designing War Machines with Values.Steven Umbrello - 2019 - Delphi: Interdisciplinary Review of Emerging Technologies 1 (2):30-34.
    Lethal Autonomous Weapons (LAWs) have become the subject of continuous debate at both national and international levels. Arguments have been proposed both for the development and use of LAWs and for their prohibition from combat landscapes. Regardless, the development of LAWs continues in numerous nation-states. This paper builds upon previous philosophical arguments for the development and use of LAWs and proposes a design framework that can be used to ethically direct their development. The conclusion is that the philosophical arguments (...)
  13. First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  14. Privacy, Transparency, and Accountability in the NSA’s Bulk Metadata Program.Alan Rubel - 2015 - In Adam D. Moore (ed.), Privacy, Security, and Accountability: Ethics, Law, and Policy. London, UK: pp. 183-202.
    Disputes at the intersection of national security, surveillance, civil liberties, and transparency are nothing new, but they have become a particularly prominent part of public discourse in the years since the attacks on the World Trade Center in September 2001. This is in part due to the dramatic nature of those attacks, in part based on significant legal developments after the attacks (classifying persons as “enemy combatants” outside the scope of traditional Geneva protections, legal memos by White House counsel providing (...)
  15. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
  16. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation.Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
  17. AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  18. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2017 - In Vincent Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  19. Robustness to Fundamental Uncertainty in AGI Alignment.Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  20. Narrow AI Nanny: Reaching Strategic Advantage Via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  21. The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons.Steven Umbrello, Phil Torres & Angelo F. De Bellis - forthcoming - AI and Society.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
  22. Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy.Steven Umbrello - 2019 - International Journal of Technoethics 10 (2):1-21.
    Although continued investments in nanotechnology are made, atomically precise manufacturing (APM) to date is still regarded as a speculative technology. APM, also known as molecular manufacturing, is a token example of a converging technology and has great potential to impact, and be affected by, other emerging technologies, such as artificial intelligence, biotechnology, and ICT. The development of APM thus can have drastic global impacts depending on how it is designed and used. This paper argues that the ethical issues that arise from APM (...)
  23. Machine Medical Ethics.Simon Peter van Rysewyk & Matthijs Pontier (eds.) - 2014 - Springer.
    In medical settings, machines are in close proximity to human beings: to patients who are in vulnerable states of health, who have disabilities of various kinds, to the very young or very old, and to medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, and respect for human dignity and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What (...)
  24. Global Solutions vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  25. Levels of Self-Improvement in AI and Their Implications for AI Safety.Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code, and goal systems, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement (like the intelligence-measuring problem, testing problem, parent-child problem and halting risks), even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  26. Assessing the Future Plausibility of Catastrophically Dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either a narrow AI facilitating research into potentially dangerous technology like biotech, or an AGI, capable of acting completely independently in the real world (...)
  27. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence.Patrick Lin, Keith Abney & Ryan Jenkins (eds.) - 2017 - Oxford University Press.
    As robots slip into more domains of human life-from the operating room to the bedroom-they take on our morally important tasks and decisions, as well as create new risks from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
  28. Superintelligence as Superethical.Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0. New York, USA: Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  29. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  30. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
  31. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising route to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  32. Artificial Intelligence in Life Extension: From Deep Learning to Superintelligence.Alexey Turchin, David Denkenberger, Alice Zhila, Sergey Markov & Mikhail Batin - 2017 - Informatica 41:401.
    In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently living (...)
  33. Preserving a Combat Commander’s Moral Agency: The Vincennes Incident as a Chinese Room.Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)
  34. Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012).Michael Starks - 2017 - In Suicidal Utopian Delusions in the 21st Century 4th ed (2019). Henderson, NV USA: Michael Starks. pp. 675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works, these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with (...)
  35. Doctor of Philosophy Thesis in Military Informatics (OpenPhD): Lethal Autonomy of Weapons is Designed and/or Recessive.Nyagudi Nyagudi Musandu - 2016-12-09 - Dissertation, OpenPhD (#OpenPhD), e.g., Wikiversity https://en.wikiversity.org/wiki/Doctor_of_Philosophy , etc.
    My original contribution to knowledge is: any weapon that exhibits intended and/or unintended lethal autonomy in targeting and interdiction does so by way of design and/or recessive flaw(s) in its systems of control; any such weapon is capable of war-fighting and other battle-space interaction in a manner that its Human Commander does not anticipate. Even with the complexity of Lethal Autonomy issues there is nothing in particular to gain from being a low-tech Military. Lethal autonomous weapons are therefore (...)
  36. Artificial Intelligence: Opportunities and Implications for the Future of Decision Making.UK Government Office for Science - 2016
    Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report (...)
  37. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  38. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies. [REVIEW]Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  39. Autonomous Weapons Systems, the Frame Problem and Computer Security.Michał Klincewicz - 2015 - Journal of Military Ethics 14 (2):162-176.
    Unlike human soldiers, autonomous weapons systems are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security risks and will make AWS critically (...)
  40. Rethinking Machine Ethics in the Era of Ubiquitous Technology.Jeffrey White (ed.) - 2015 - IGI Global.
  41. Fundamental Issues of Artificial Intelligence.Vincent Müller (ed.) - 2016 - Berlin: Springer (Synthese Library, 377).
    This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
  42. From the Ethics of Technology Towards an Ethics of Knowledge Policy.René von Schomberg - 2007 - AI and Society.
    My analysis takes as its point of departure the controversial assumption that contemporary ethical theories cannot capture adequately the ethical and social challenges of scientific and technological development. This assumption is rooted in the argument that classical ethical theory invariably addresses the issue of ethical responsibility in terms of whether and how intentional actions of individuals can be justified. Scientific and technological developments, however, have produced unintentional consequences and side-consequences. These consequences very often result from collective decisions concerning the way (...)
  43. Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
  44. Ethical Issues in Advanced Artificial Intelligence.Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
Cyborgs
  1. Digital Immortality: Theory and Protocol for Indirect Mind Uploading.Alexey Turchin - manuscript
    Future superintelligent AI will be able to reconstruct a model of the personality of a person who lived in the past, based on informational traces. This could be regarded as some form of immortality if this AI also solves the problem of personal identity in a copy-friendly way. A person who is currently alive could invest now in passive self-recording and active self-description to facilitate such reconstruction. In this article, we analyze the information-theoretic relationships between the human mind, its traces, and (...)