Results for 'AI Safety'

612 found
  1. Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be (...)
  2. Levels of Self-Improvement in AI and Their Implications for AI Safety. Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  3. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is (...)
  4. Unpredictability of AI. Roman Yampolskiy - manuscript
    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
  5. On Controllability of Artificial Intelligence. Roman Yampolskiy - manuscript
    The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI can’t be fully controlled. Consequences of (...)
  6. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values. Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were reviewed, and some of them have (...)
  7. Designometry – Formalization of Artifacts and Methods. Soenke Ziesche & Roman Yampolskiy - manuscript
    Two interconnected surveys are presented, one of artifacts and one of designometry. Artifacts are objects that have an originator and do not exist in nature. Designometry is a new field of study that aims to identify the originators of artifacts. The space of artifacts is described, along with the domains that already pursue designometry, though currently without collaboration or common methodologies. On this basis, synergies as well as a generic axiom and heuristics for the quest of the creators of artifacts (...)
  8. Machines Learning Values. Steve Petersen - forthcoming - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. New York, USA: Oxford University Press.
  9. Robustness to Fundamental Uncertainty in AGI Alignment. G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  10. AI Alignment Problem: “Human Values” Don’t Actually Exist. Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a (...)
  11. Designing AI for Social Good: Seven Essential Factors. Josh Cowls, Thomas C. King, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap (...)
    2 citations
  12. Assessing the Future Plausibility of Catastrophically Dangerous AI. Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the (...)
  13. Robustness to Fundamental Uncertainty in AGI Alignment. G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  14. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach begun by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    2 citations
  15. Philosophy and Theory of Artificial Intelligence 2017. Vincent C. Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4 - 5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI (...); and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
    1 citation
  16. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    4 citations
  17. Saliva Ontology: An Ontology-Based Framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    3 citations
  18. Bioinformatics Advances in Saliva Diagnostics. Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    1 citation
  19. Towards a Body Fluids Ontology: A Unified Application Ontology for Basic and Translational Science. Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  20. Transparent, Explainable, and Accountable AI for Robotics. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science Robotics 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    7 citations
  21. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
  22. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
    1 citation
  23. Temporary Safety Hazards. Jeffrey Sanford Russell - 2016 - Noûs 50 (4):152-174.
    The Epistemic Objection says that certain theories of time imply that it is impossible to know which time is absolutely present. Standard presentations of the Epistemic Objection are elliptical—and some of the most natural premises one might fill in to complete the argument end up leading to radical skepticism. But there is a way of filling in the details which avoids this problem, using epistemic safety. The new version has two interesting upshots. First, while Ross Cameron alleges that the (...)
    21 citations
  24. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2019 - Synthese (arXiv:1901.02918v1).
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    2 citations
  25. New Developments in the Philosophy of AI. Vincent Müller - 2016 - In Fundamental Issues of Artificial Intelligence. Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    7 citations
  26. AI Methods in Bioethics. Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 1 (11):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  27. Sensitivity, Safety, and the Law: A Reply to Pardo. David Enoch & Levi Spectre - 2019 - Legal Theory 25 (3):178-199.
    In a recent paper, Michael Pardo argues that the epistemic property that is legally relevant is the one called Safety, rather than Sensitivity. In the process, he argues against our Sensitivity-related account of statistical evidence. Here we revisit these issues, partly in order to respond to Pardo, and partly in order to make general claims about legal epistemology. We clarify our account, we show how it adequately deals with counterexamples and other worries, we raise suspicions about Safety's value (...)
    2 citations
  28. Expanding Care-Centered Value Sensitive Design: Designing Care Robots with AI for Social Good Norms. Steven Umbrello, Marianna Capasso, Maurizio Balistreri, Alberto Pirni & Federica Merenda - manuscript
    The increasing automation and ubiquity of robotics deployed within the field of care boast promising advantages. However, challenging ethical issues also arise as a consequence. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. It takes the value sensitive design (VSD) approach to technology design and extends its application to care robots by not only integrating the values of care, but also (...)
  29. Yes, Safety is in Danger. Tomas Bogardus & Chad Marxen - 2014 - Philosophia 42 (2):321-334.
    In an essay recently published in this journal (“Is Safety in Danger?”), Fernando Broncano-Berrocal defends the safety condition on knowledge from a counterexample proposed by Tomas Bogardus (Philosophy and Phenomenological Research, 2012). In this paper, we will define the safety condition, briefly explain the proposed counterexample, and outline Broncano-Berrocal’s defense of the safety condition. We will then raise four objections to Broncano-Berrocal’s defense, four implausible implications of his central claim. In the end, we conclude that Broncano-Berrocal’s (...)
    10 citations
  30. Sensitivity, Safety, and Impossible Worlds. Guido Melchior - forthcoming - Philosophical Studies:1-17.
    Modal knowledge accounts that are based on standard possible-worlds semantics face well-known problems when it comes to knowledge of necessities. Beliefs in necessities are trivially sensitive and safe and, therefore, trivially constitute knowledge according to these accounts. In this paper, I will first argue that existing solutions to this necessity problem, which accept standard possible-worlds semantics, are unsatisfactory. In order to solve the necessity problem, I will utilize an unorthodox account of counterfactuals, as proposed by Nolan, on which we also (...)
  31. AI Extenders and the Ethics of Mental Health. Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  32. Non-Reductive Safety. Michael Blome-Tillmann - 2020 - Belgrade Philosophical Annual 33:25-38.
    Safety principles in epistemology are often hailed as providing us with an explanation of why we fail to have knowledge in Gettier cases and lottery examples, while at the same time allowing for the fact that we know the negations of sceptical hypotheses. In a recent paper, Sinhababu and Williams have produced an example—the Backward Clock—that is meant to spell trouble for safety accounts of knowledge. I argue that the Backward Clock case is, in fact, unproblematic for the (...)
  33. Saving Safety From Counterexamples. Thomas Grundmann - forthcoming - Synthese.
    In this paper I will offer a comprehensive defense of the safety account of knowledge against counterexamples that have been recently put forward. In section 1, I will discuss different versions of safety, arguing that a specific variant of method-relativized safety is the most plausible. I will then use this specific version of safety to respond to counterexamples in the recent literature. In section 2, I will address alleged examples of safe beliefs that still constitute Gettier (...)
    2 citations
  34. Friendly Superintelligent AI: All You Need is Love. Michael Prinzing - 2017 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  35. Safety's Swamp: Against The Value of Modal Stability. Georgi Gardiner - 2017 - American Philosophical Quarterly 54 (2):119-129.
    An account of the nature of knowledge must explain the value of knowledge. I argue that modal conditions, such as safety and sensitivity, do not confer value on a belief and so any account of knowledge that posits a modal condition as a fundamental constituent cannot vindicate widely held claims about the value of knowledge. I explain the implications of this for epistemology: We must either eschew modal conditions as a fundamental constituent of knowledge, or retain the modal conditions (...)
    3 citations
  36. Safety, Explanation, Iteration. Daniel Greco - 2016 - Philosophical Issues 26 (1):187-208.
    This paper argues for several related theses. First, the epistemological position that knowledge requires safe belief can be motivated by views in the philosophy of science, according to which good explanations show that their explananda are robust. This motivation goes via the idea—recently defended on both conceptual and empirical grounds—that knowledge attributions play a crucial role in explaining successful action. Second, motivating the safety requirement in this way creates a choice point—depending on how we understand robustness, we'll end up (...)
    3 citations
  37. Beware of Safety. Christian Piller - 2019 - Analytic Philosophy 60 (4):1-29.
    Safety, as discussed in contemporary epistemology, is a feature of true beliefs. Safe beliefs, when formed by the same method, remain true in close-by possible worlds. I argue that our beliefs being safely true serves no recognisable epistemic interest and, thus, that this notion of safety should play no role in epistemology. Epistemologists have been misled by failing to distinguish between a feature of beliefs — being safely true — and a feature of believers, namely being safe from (...)
  38. Foley’s Threshold View of Belief and the Safety Condition on Knowledge. Michael J. Shaffer - 2018 - Metaphilosophy 49 (4):589-594.
    This paper introduces a new argument against Richard Foley’s threshold view of belief. His view is based on the Lockean Thesis (LT) and the Rational Threshold Thesis (RTT). The argument introduced here shows that the views derived from the LT and the RTT violate the safety condition on knowledge in a way that threatens the LT and/or the RTT.
  39. Safety, Virtue, Scepticism: Remarks on Sosa. Peter Baumann - 2015 - Croatian Journal of Philosophy (45):295-306.
    Ernest Sosa has made and continues to make major contributions to a wide variety of topics in epistemology. In this paper I discuss some of his core ideas about the nature of knowledge and scepticism. I start with a discussion of the safety account of knowledge – a view he has championed and further developed over the years. I continue with some questions concerning the role of the concept of an epistemic virtue for our understanding of knowledge. Safety (...)
    1 citation
  40. Philosophy and Theory of Artificial Intelligence, 3–4 October (Report on PT-AI 2011). Vincent C. Müller - 2011 - The Reasoner 5 (11):192-193.
    Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
  41. Theory and Philosophy of AI (Minds and Machines, 22/2 - Special Volume). Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  42. Testimony, Transmission, and Safety. Joachim Horvath - 2008 - Abstracta 4 (1):27-43.
    Most philosophers believe that testimony is not a fundamental source of knowledge, but merely a way to transmit already existing knowledge. However, Jennifer Lackey has presented some counterexamples which show that one can actually come to know something through testimony that no one ever knew before. Yet, the intuitive idea can be preserved by the weaker claim that someone in a knowledge-constituting testimonial chain has to have access to some non-testimonial source of knowledge with regard to what is testified. But (...)
  43. Unexplainability and Incomprehensibility of Artificial Intelligence. Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain people (...)
  44. Safety, the Preface Paradox and Possible Worlds Semantics. Michael Shaffer - 2019 - Axiomathes 29 (4):347-361.
    This paper contains an argument to the effect that possible worlds semantics renders semantic knowledge impossible, no matter what ontological interpretation is given to possible worlds. The essential contention made is that possible worlds semantic knowledge is unsafe and this is shown by a parallel with the preface paradox.
  45. Toward an Ethics of AI Assistants: An Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    7 citations
  46. Blind Spots in AI Ethics and Biases in AI Governance. Nicholas Kluge Corrêa - manuscript
    There is an interesting link between critical theory and certain genres of literature that may be of interest to the current debate on AI ethics. While critical theory generally points out certain deficiencies in the present to criticize it, futurology and literary genres such as cyberpunk extrapolate our present deficits into possible dystopian futures to criticize the status quo. Given the great advance of the AI industry in recent years, an increasing number of ethical matters have been raised and debated, (...)
  47. Aiming AI at a Moving Target: Health. Mihai Nadin - forthcoming - AI and Society:1-9.
    Justified by spectacular achievements facilitated through applied deep learning methodology, the “Everything is possible” view dominates this new hour in the “boom and bust” curve of AI performance. The optimistic view collides head on with the “It is not possible”—ascertainments often originating in a skewed understanding of both AI and medicine. The meaning of the conflicting views can be assessed only by addressing the nature of medicine. Specifically: Which part of medicine, if any, can and should be entrusted to AI—now (...)
  48. AI, Concepts, and the Paradox of Mental Representation, with a Brief Discussion of Psychological Essentialism. Eric Dietrich - 2001 - Journal of Experimental and Theoretical AI 13 (1):1-7.
    Mostly philosophers cause trouble. I know because on alternate Thursdays I am one -- and I live in a philosophy department where I watch all of them cause trouble. Everyone in artificial intelligence knows how much trouble philosophers can cause (and in particular, we know how much trouble one philosopher -- John Searle -- has caused). And, we know where they tend to cause it: in knowledge representation and the semantics of data structures. This essay is about a recent case (...)
    1 citation
  49. AI, Situatedness, Creativity, and Intelligence; or the Evolution of the Little Hearing Bones. Eric Dietrich - 1996 - Journal of Experimental and Theoretical AI 8 (1):1-6.
    Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, (...)
    1 citation
  50. AI and the Mechanistic Forces of Darkness. Eric Dietrich - 1995 - Journal of Experimental and Theoretical AI 7 (2):155-161.
    Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity’s arch-foe: The Cognitive Scientists -- AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are (...)
    3 citations
1–50 of 612