Results for 'AI Safety and Security'

1000+ found
  1. On Controllability of Artificial Intelligence. Roman Yampolskiy - manuscript
    Invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid pitfalls of such powerful technology it is important to be able to control it. However, possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI can’t be fully controlled. Consequences of (...)
    4 citations
  2. Cybercrime and Online Safety: Addressing the Challenges and Solutions Related to Cybercrime, Online Fraud, and Ensuring a Safe Digital Environment for All Users — A Case of African States (10th edition). Emmanuel N. Vitus - 2023 - TIJER - International Research Journal 10 (9):975-989.
    The internet has made the world more linked than ever before. While taking advantage of this online transition, cybercriminals target flaws in online systems, networks, and infrastructure. Businesses, government organizations, people, and communities all across the world, particularly in African countries, are all severely impacted on an economic and social level. Many African countries focused more on developing secure electricity and internet networks; yet, cybersecurity usually receives less attention than it should. One of Africa's major issues is the lack of (...)
  3. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  4. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E. James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  5. Unexplainability and Incomprehensibility of Artificial Intelligence. Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain (...)
    1 citation
  6. Australia's Approach to AI Governance in Security and Defence. Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  7. Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be (...)
    1 citation
  8. Levels of Self-Improvement in AI and their Implications for AI Safety. Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  9. Motivational Strategies and Security Service Delivery in Universities in Cross River State, Nigeria. Comfort Robert Etor & Michael Ekpenyong Asuquo - 2021 - International Journal of Educational Administration, Planning, and Research 13 (1):55-65.
    This study assessed two motivational strategies and their respective ties to the service delivery in public universities in Cross River State. In achieving the central and specific targets of this research, four research questions and two null hypotheses were answered and tested in the study. The entire population of 440 security personnel in two public universities was studied, based on the census approach and following the ex-post facto research design. Three sets of expert-validated questionnaires, with Cronbach reliability estimates of (...)
  10. Professional culture and security: an innovative approach to implementing a medical facility. Daria Hromtseva & Oleksandr P. Krupskyi - 2015 - European Journal of Management Issues 23 (5):15-23.
    This article suggests different approaches to defining the nature, types, and component elements of safety culture (SC) in an organization; presents a possible evaluation method; formulates the concept of professional safety culture taking into account the features of the medical industry; and suggests innovative ways to implement and strengthen the SC to improve the efficiency of hospitals.
  11. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA). Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more (...)
  12. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    3 citations
  13. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science. Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  14. Bioinformatics advances in saliva diagnostics. Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    1 citation
  15. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom. Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  16. Modal Security and Evolutionary Debunking. Daniel Z. Korman & Dustin Locke - 2023 - Midwest Studies in Philosophy 47:135-156.
    According to principles of modal security, evidence undermines a belief only when it calls into question certain purportedly important modal connections between one’s beliefs and the truth (e.g., safety or sensitivity). Justin Clarke-Doane and Dan Baras have advanced such principles with the aim of blocking evolutionary moral debunking arguments. We examine a variety of different principles of modal security, showing that some of these are too strong, failing to accommodate clear cases of undermining, while others are too (...)
  17. Automated Influence and the Challenge of Cognitive Security. Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  18. Cyber Security and Dehumanisation. Marie Oldfield - 2021 - 5th Digital Geographies Research Group Annual Symposium.
    Artificial Intelligence is becoming widespread, and as we continue to ask ‘can we implement this’ we neglect to ask ‘should we implement this’. There are various frameworks and conceptual journeys one should take to ensure a robust AI product; context is one of the vital parts of this. AI is now expected to make decisions, from deciding who gets a credit card to cancer diagnosis. These decisions affect most, if not all, of society. As developers if we do not understand or (...)
  19. Unpredictability of AI. Roman Yampolskiy - manuscript
    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
    2 citations
  20. Factors and conditions of the environmental and economic security formation in Ukraine. Igor Britchenko, Jozefína Drotárová, Oksana Yudenko, Larysa Holovina & Tetiana Shmatkovska - 2022 - Ad Alta: Journal of Interdisciplinary Research 2 (12):108-112.
    The article examines the peculiarities of the formation of the ecological and economic security system and the specifics of its principles. The relevance of the transformation of approaches to understanding the essence and principles of ecological and economic security in the context of the need to ensure sustainable development is substantiated. The levels of ecological and economic security and the peculiarities of changes in profits and costs during the transition of the economic system and business entities between (...)
  21. Artificial thinking and doomsday projections: a discourse on trust, ethics and safety. Jeffrey White, Dietrich Brandt, Jan Söffner & Larry Stapleton - 2023 - AI and Society 38 (6):2119-2124.
    The article reflects on where AI is headed and the world along with it, considering trust, ethics and safety. Implicit in artificial thinking and doomsday appraisals is the engineered divorce from reality of sublime human embodiment. Jeffrey White, Dietrich Brandt, Jan Soeffner, and Larry Stapleton, four scholars associated with AI & Society, address these issues, and more, in the following exchange.
  22. Food security: modern challenges and mechanisms to ensure. Maksym Bezpartochnyi, Igor Britchenko & Olesia Bezpartochna - 2023 - Košice: Vysoká škola bezpečnostného manažérstva v Košiciach.
    The authors of the scientific monograph have come to the conclusion that ensuring food security during martial law requires the use of mechanisms to support agricultural exports, diversify logistics routes, ensure environmental safety, provide financial and marketing support. Basic research focuses on assessment the state of agricultural producers, analysing the financial and accounting system, logistics activities, ensuring competitiveness, and environmental pollution. The research results have been implemented in the different decision-making models during martial law, international logistics management, digital (...)
  23. The argument for near-term human disempowerment through AI. Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    1 citation
  24. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models. Nicholas Kluge Corrêa
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  25. The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists. Elliott Thornley - forthcoming - Philosophical Studies.
    I explain the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems show that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. And (...)
  26. Security Institutions, Use of Force and the State: A Moral Framework. Shannon Ford - 2016 - Dissertation, Australian National University
    This thesis examines the key moral principles that should govern decision-making by police and military when using lethal force. To this end, it provides an ethical analysis of the following question: Under what circumstances, if any, is it morally justified for the agents of state-sanctioned security institutions to use lethal force, in particular the police and the military? Recent literature in this area suggests that modern conflicts involve new and unique features that render conventional ways of thinking about the (...)
  27. TRANSPORT SECURITY AS A FACTOR OF TRANSPORT AND COMMUNICATION SYSTEM OF UKRAINE SELF-SUSTAINING DEVELOPMENT. Igor Britchenko & Tetiana Cherniavska - 2017 - Науковий Вісник Полісся 1 (9):16-24.
    In the present article, attention is focused on the existing potential to ensure national self-sufficiency, the main challenges to its achievement and future prospects. According to the authors, transportation and communication system of the country can become the dominant model for self-sufficient development, due to its geostrategic location which allows it to be an advantageous bridge for goods, and passengers transit transportation between the states of Europe, Asia and the Middle East. To date, the transport and communication system is hardly (...)
  28. Some discussions on critical information security issues in the artificial intelligence era. Vuong Quan Hoang, Viet-Phuong La, Hong-Son Nguyen & Minh-Hoang Nguyen - manuscript
    The rapid advancement of Information Technology (IT) platforms and programming languages has transformed the dynamics and development of human society. The cyberspace and associated utilities are expanding, leading to a gradual shift from real-world living to virtual life (also known as cyberspace or digital space). The expansion and development of Natural Language Processing (NLP) models and Large Language Models (LLMs) demonstrate human-like characteristics in reasoning, perception, attention, and creativity, helping humans overcome operational barriers. Alongside the immense potential of artificial intelligence (...)
  29. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising way to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade “paperclip maximizer” that it is (...)
  30. AI Alignment Problem: “Human Values” don’t Actually Exist. Alexey Turchin - manuscript
    Abstract. The main current approach to the AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a (...)
    1 citation
  31. Artificial intelligence, its application and development prospects in the context of state security. Igor Britchenko & Krzysztof Chochowski - 2022 - Politics and Security 6 (3):3-7.
    Today, we observe the process of the constant expansion of the list of countries using AI in order to ensure the state of security, although depending on the system in force in them, its intensity and depth of interference in the sphere of rights and freedoms of an individual are different. The purpose of this article is to define what is AI, which is applicable in the area of state security, and to indicate the prospects for the development (...)
  32. The Blood Ontology: An ontology in the domain of hematology. Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR Workshop Proceedings 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some (...)
  33. Good Robot, Bad Robot: Dark and Creepy Sides of Robotics, Automated Vehicles, and AI. Jo Ann Oravec - 2022 - New York, NY, USA: Palgrave-Macmillan.
    This book explores how robotics and artificial intelligence can enhance human lives but also have unsettling “dark sides.” It examines expanding forms of negativity and anxiety about robots, AI, and autonomous vehicles as our human environments are reengineered for intelligent military and security systems and for optimal workplace and domestic operations. It focuses on the impacts of initiatives to make robot interactions more humanlike and less creepy. It analyzes the emerging resistances against these entities in the wake of omnipresent (...)
  34. Internal Instability as a Security Challenge for Vietnam. Nguyen Hoang Tien, Nguyen Van Tien, Rewel Jimenez Santural Jose, Nguyen Minh Duc & Nguyen Minh Ngoc - 2020 - Journal of Southwest Jiaotong University 55 (4):1-13.
    National security is one of the most critical elements for Vietnam society, economy and political system, their stability, sustainability and prosperity. It is unconditionally the top priority for Vietnamese government, State, Communist Party and military forces. In the contemporary world with advanced technology and rapid globalization process taking place, beside many extant economic, social and political benefits there are many appearing challenges and threats that could endanger and destabilize the current socio-economic and political system of any country, including Vietnam. (...)
  35. An Ontology of Security from a Risk Treatment Perspective. Ítalo Oliveira, Tiago Prince Sales, Riccardo Baratella, Mattia Fumagalli & Giancarlo Guizzardi - 2022 - In 41st International Conference, ER 2022, Proceedings. Cham: Springer. pp. 365-379.
    In Risk Management, security issues arise from complex relations among objects and agents, their capabilities and vulnerabilities, the events they are involved in, and the value and risk they ensue to the stakeholders at hand. Further, there are patterns involving these relations that crosscut many domains, ranging from information security to public safety. Understanding and forming a shared conceptualization and vocabulary about these notions and their relations is fundamental for modeling the corresponding scenarios, so that proper (...) countermeasures can be devised. Ontologies are instruments developed to address these conceptual clarification and terminological systematization issues. Over the years, several ontologies have been proposed in Risk Management and Security Engineering. However, as shown in recent literature, they fall short in many respects, including generality and expressivity - the latter impacting on their interoperability with related models. We propose a Reference Ontology for Security Engineering (ROSE) from a Risk Treatment perspective. Our proposal leverages two existing Reference Ontologies: the Common Ontology of Value and Risk and a Reference Ontology of Prevention, both of which are grounded on the Unified Foundational Ontology (UFO). ROSE is employed for modeling and analysing some cases, in particular providing clarification to the semantically overloaded notion of Security Mechanism.
  36. How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
    36 citations
  37. Catastrophically Dangerous AI is Possible Before 2030. Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass (...)
  38. Will AI take away your job? [REVIEW] Marie Oldfield - 2020 - Tech Magazine.
    Will AI take away your job? The answer is probably not. AI systems can be good predictive systems and be very good at pattern recognition. AI systems have a very repetitive approach to sets of data, which can be useful in certain circumstances. However, AI does make obvious mistakes. This is because AI does not have a sense of context. As Humans we have years of experience in the real world. We have vast amounts of contextual data stored in our (...)
  39. Will Hominoids or Androids Destroy the Earth? — A Review of How to Create a Mind by Ray Kurzweil (2012). Michael Starks - 2017 - In Suicidal Utopian Delusions in the 21st Century, 4th ed. (2019). Henderson, NV, USA: Michael Starks. pp. 675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long term significance of the work. Normally however the scientific matters of fact are generously interlarded with (...)
  40. Philosophy and theory of artificial intelligence 2017. Vincent C. Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI (...); and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
    2 citations
  41. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century; but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the (...)
  42. Nollywood in Diversity: New Dimensions for Behaviour Change and National Security in Nigeria.Stanislaus Iyorza - 2017 - International Journal of Communication 21.
    This paper sets out to demystify the nature of Nollywood movies existing in diversity and to propose new dimensions for using film to achieve behaviour change and dependable national security in Nigeria. The paper views national security as the art of ensuring the national safety of the government. Nollywood has naturally diversified along ethnic dimensions, including the Hausa movies (Kannywood) in the North, the Yoruba movies in the West and the Ibo movies in the Eastern part of (...)
    1 citation
  43. Political Legitimacy, Authoritarianism, and Climate Change.Ross Mittiga - forthcoming - American Political Science Review.
    Is authoritarian power ever legitimate? The contemporary political theory literature—which largely conceptualizes legitimacy in terms of democracy or basic rights—would seem to suggest not. I argue, however, that there exists another, overlooked aspect of legitimacy concerning a government’s ability to ensure safety and security. While, under normal conditions, maintaining democracy and rights is typically compatible with guaranteeing safety, in emergency situations, conflicts between these two aspects of legitimacy can and often do arise. A salient example of this (...)
    2 citations
  44. Sensitivity, safety, and impossible worlds.Guido Melchior - 2021 - Philosophical Studies 178 (3):713-729.
    Modal knowledge accounts that are based on standard possible-worlds semantics face well-known problems when it comes to knowledge of necessities. Beliefs in necessities are trivially sensitive and safe and, therefore, trivially constitute knowledge according to these accounts. In this paper, I will first argue that existing solutions to this necessity problem, which accept standard possible-worlds semantics, are unsatisfactory. In order to solve the necessity problem, I will utilize an unorthodox account of counterfactuals, as proposed by Nolan, on which we also (...)
    16 citations
  45. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have (...)
  46. Designometry – Formalization of Artifacts and Methods.Soenke Ziesche & Roman Yampolskiy - manuscript
    Two interconnected surveys are presented, one of artifacts and one of designometry. Artifacts are objects which have an originator and do not exist in nature. Designometry is a new field of study which aims to identify the originators of artifacts. The space of artifacts is described, as are the domains that pursue designometry, currently doing so without collaboration or common methodologies. On this basis, synergies as well as a generic axiom and heuristics for the quest of the creators of artifacts (...)
  47. Catching Treacherous Turn: A Model of the Multilevel AI Boxing.Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. Among several possibilities, the confinement of AI in a box is considered a low-quality possible solution for AI safety. However, some treacherous AIs can be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement, aggregating all known ideas in the field (...)
  48. Predicting and Preferring.Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  49. The Unlikeliest of Duos; Why Super Intelligent AI Will Cooperate with Humans.Griffin Pithie - manuscript
    The focus of this article is the "good-will theory", which explains the effect humans can have on the safety of AI, along with how it is in the best interest of a superintelligent AI to work alongside humans rather than overpower them. Future papers dealing with the good-will theory will be published and will discuss different talking points regarding possible or real objections to the theory.
  50. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
    1 citation
1 — 50 / 1000