Results for 'AI Safety and Security'

965 found
  1. On Controllability of Artificial Intelligence.Roman Yampolskiy - 2016
    The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI can’t be fully controlled. Consequences of (...)
    5 citations
  2. Cybercrime and Online Safety: Addressing the Challenges and Solutions Related to Cybercrime, Online Fraud, and Ensuring a Safe Digital Environment for All Users— A Case of African States (10th edition).Emmanuel N. Vitus - 2023 - TIJER - International Research Journal 10 (9):975-989.
    The internet has made the world more linked than ever before. While taking advantage of this online transition, cybercriminals target flaws in online systems, networks, and infrastructure. Businesses, government organizations, people, and communities all across the world, particularly in African countries, are all severely impacted on an economic and social level. Many African countries have focused more on developing secure electricity and internet networks, yet cybersecurity usually receives less attention than it should. One of Africa's major issues is the lack of (...)
  3. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own (...)
  4. Unexplainability and Incomprehensibility of Artificial Intelligence.Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain (...)
    1 citation
  5. AI Safety: A Climb To Armageddon?Herman Cappelen, Josh Dever & John Hawthorne - manuscript
    This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: (...)
  6. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  7. Safety and Protection Practices in the Early Childhood Education Centres.Ibiwari Caroline Dike & Mkpoikanke Sunday Otu - 2024 - International Journal of Home Economics, Hospitality and Allied Research 3 (1):294-305.
    A safe and secure environment is an essential part of the early childhood development of any child. This study aims to investigate the safety and protection practices of early childhood centers in Anambra State, Nigeria, and to determine if any improvements can be made to them. This study analyzed data collected from 60 Early Childhood Care Centers (ECCE Centers) and 60 Pre-Primary Schools in Anambra State using the Evaluation of ECCE Implementation Kit (KEIEP), direct observation, and (...)
  8. Global Solutions vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be (...)
    2 citations
  9. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  10. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may be unfactual information presented as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. Such a phenomenon is called an AI hallucination, a term that seeks inspiration from human psychology, with the great difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than perceptual failures. -/- AI hallucinations may have their source in the data itself, that is, the (...)
  11. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  12. Levels of Self-Improvement in AI and their Implications for AI Safety.Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  13. Australia's Approach to AI Governance in Security and Defence.Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  14. Ethical Concerns in Computational Linguistic Field National Defense: A Philosophical Investigation of Language and Security.Mhd Halkis Malkis - 2024 - Linguistic and Philosophical Investigations 23 (1):386–396.
    This research examines ethical issues in computational linguistics that can be applied to national defense by analyzing philosophical and security language. The increasing use of language contexts, such as intelligence and communication data analysis, raises ethical and philosophical challenges related to privacy, control, and accuracy. This research aims to identify and analyze ethical issues, especially in the use of computational linguistics in defense applications, as well as their implications for the protection of individual rights and privacy. This method involves (...)
  15. Motivational Strategies and Security Service Delivery in Universities in Cross River State, Nigeria.Comfort Robert Etor & Michael Ekpenyong Asuquo - 2021 - International Journal of Educational Administration, Planning, and Research 13 (1):55-65.
    This study assessed two motivational strategies and their respective ties to the service delivery in public universities in Cross River State. In achieving the central and specific targets of this research, four research questions and two null hypotheses were answered and tested in the study. The entire population of 440 security personnel in two public universities was studied, based on the census approach and following the ex-post facto research design. Three sets of expert-validated questionnaires, with Cronbach reliability estimates of (...)
  16. (1 other version)Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012).Michael Starks - 2016 - In Suicidal Utopian Delusions in the 21st Century: Philosophy, Human Nature and the Collapse of Civilization-- Articles and Reviews 2006-2017 2nd Edition Feb 2018. Las Vegas, USA: Reality Press. pp. 675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long term significance of the work. Normally however the scientific matters of fact are generously interlarded with (...)
  17. AI-POWERED THREAT INTELLIGENCE FOR PROACTIVE SECURITY MONITORING IN CLOUD INFRASTRUCTURES.Tummalachervu Chaitanya Kanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):76-83.
    Cloud computing has become an essential component of enterprises and organizations globally in the current era of digital technology. The cloud has a multitude of advantages, including scalability, flexibility, and cost-effectiveness, rendering it an appealing choice for data storage and processing. The increasing storage of sensitive information in cloud environments has raised significant concerns over the security of such systems. The frequency of cyber threats and attacks specifically aimed at cloud infrastructure has been increasing, presenting substantial dangers to the (...)
  18. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  19. Professional culture and security: an innovative approach to implementing a medical facility.Daria Hromtseva & Oleksandr P. Krupskyi - 2015 - European Journal of Management Issues 23 (5):15-23.
    This article suggests different approaches to defining the nature, types, and component elements of safety culture (SC) in an organization; provides a possible evaluation method; formulates the concept of a professional safety culture that takes into account the features of the medical industry; and suggests innovative ways to implement and strengthen SC in order to improve the efficiency of hospitals. (...)
  20. The AI Revolution in Deterrence Theory: 10 Groundbreaking Concepts Reshaping Global Security.Yu Chen - manuscript
    This article explores the transformative impact of artificial intelligence on deterrence theory, introducing 10 groundbreaking concepts that are reshaping global security dynamics. As traditional deterrence strategies face challenges in an increasingly complex and interconnected world, these innovative approaches leverage AI, complex systems theory, and emerging technologies to create more sophisticated and adaptive deterrence mechanisms. From Chaos Deterrence, which harnesses unpredictability, to Möbius Deterrence, which blurs the lines between offense and defense, these concepts represent a paradigm shift in conflict prevention (...)
  21. Values in science and AI alignment research.Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value transparency, critical scrutiny (...)
  22. Unpredictability of AI.Roman Yampolskiy - manuscript
    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely the Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
    3 citations
  23. Artificial thinking and doomsday projections: a discourse on trust, ethics and safety.Jeffrey White, Dietrich Brandt, Jan Söffner & Larry Stapleton - 2023 - AI and Society 38 (6):2119-2124.
    The article reflects on where AI is headed and the world along with it, considering trust, ethics and safety. Implicit in artificial thinking and doomsday appraisals is the engineered divorce from reality of sublime human embodiment. Jeffrey White, Dietrich Brandt, Jan Soeffner, and Larry Stapleton, four scholars associated with AI & Society, address these issues, and more, in the following exchange.
  24. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance (...)
  25. Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
    1 citation
  26. Modal Security and Evolutionary Debunking.Daniel Z. Korman & Dustin Locke - 2023 - Midwest Studies in Philosophy 47:135-156.
    According to principles of modal security, evidence undermines a belief only when it calls into question certain purportedly important modal connections between one’s beliefs and the truth (e.g., safety or sensitivity). Justin Clarke-Doane and Dan Baras have advanced such principles with the aim of blocking evolutionary moral debunking arguments. We examine a variety of different principles of modal security, showing that some of these are too strong, failing to accommodate clear cases of undermining, while others are too (...)
  27. Factors and conditions of the environmental and economic security formation in Ukraine.Igor Britchenko, Jozefína Drotárová, Oksana Yudenko, Larysa Holovina & Tetiana Shmatkovska - 2022 - Ad Alta: Journal of Interdisciplinary Research 2 (12):108-112.
    The article examines the peculiarities of the formation of the ecological and economic security system and the specifics of its principles. The relevance of the transformation of approaches to understanding the essence and principles of ecological and economic security in the context of the need to ensure sustainable development is substantiated. The levels of ecological and economic security and the peculiarities of changes in profits and costs during the transition of the economic system and business entities between (...)
  28. Good Robot, Bad Robot: Dark and Creepy Sides of Robotics, Automated Vehicles, and AI.Jo Ann Oravec - 2022 - New York, NY, USA: Palgrave-Macmillan.
    This book explores how robotics and artificial intelligence can enhance human lives but also have unsettling “dark sides.” It examines expanding forms of negativity and anxiety about robots, AI, and autonomous vehicles as our human environments are reengineered for intelligent military and security systems and for optimal workplace and domestic operations. It focuses on the impacts of initiatives to make robot interactions more humanlike and less creepy. It analyzes the emerging resistances against these entities in the wake of omnipresent (...)
    2 citations
  29. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising way to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is (...)
  30. Group Prioritarianism: Why AI should not replace humanity.Frank Hong - 2024 - Philosophical Studies:1-19.
    If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AI and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every Welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always (...)
  31. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more (...)
  32. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
  33. Cyber Security and Dehumanisation.Marie Oldfield - 2021 - 5Th Digital Geographies Research Group Annual Symposium.
    Artificial Intelligence is becoming widespread, and as we continue to ask ‘can we implement this?’ we neglect to ask ‘should we implement this?’. There are various frameworks and conceptual journeys one should take to ensure a robust AI product; context is one of the vital parts of this. AI is now expected to make decisions, from deciding who gets a credit card to cancer diagnosis. These decisions affect most, if not all, of society. As developers, if we do not understand or (...)
  34. Security Institutions, Use of Force and the State: A Moral Framework.Shannon Ford - 2016 - Dissertation, Australian National University
    This thesis examines the key moral principles that should govern decision-making by police and military when using lethal force. To this end, it provides an ethical analysis of the following question: Under what circumstances, if any, is it morally justified for the agents of state-sanctioned security institutions to use lethal force, in particular the police and the military? Recent literature in this area suggests that modern conflicts involve new and unique features that render conventional ways of thinking about the (...)
  35. Internal Instability as a Security Challenge for Vietnam.Nguyen Hoang Tien, Nguyen Van Tien, Rewel Jimenez Santural Jose, Nguyen Minh Duc & Nguyen Minh Ngoc - 2020 - Journal of Southwest Jiaotong University 55 (4):1-13.
    National security is one of the most critical elements for Vietnam society, economy and political system, their stability, sustainability and prosperity. It is unconditionally the top priority for Vietnamese government, State, Communist Party and military forces. In the contemporary world with advanced technology and rapid globalization process taking place, beside many extant economic, social and political benefits there are many appearing challenges and threats that could endanger and destabilize the current socio-economic and political system of any country, including Vietnam. (...)
  36. Food security: modern challenges and mechanisms to ensure.Maksym Bezpartochnyi, Igor Britchenko & Olesia Bezpartochna - 2023 - Košice: Vysoká škola bezpečnostného manažérstva v Košiciach.
    The authors of the scientific monograph have come to the conclusion that ensuring food security during martial law requires the use of mechanisms to support agricultural exports, diversify logistics routes, ensure environmental safety, and provide financial and marketing support. Basic research focuses on assessing the state of agricultural producers, analysing the financial and accounting system, logistics activities, ensuring competitiveness, and environmental pollution. The research results have been implemented in different decision-making models during martial law, international logistics management, digital (...)
  37. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models.Nicholas Kluge Corrêa -
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  38. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this (...)
  39. The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists.Elliott Thornley - forthcoming - Philosophical Studies:1-28.
    I explain the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems show that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. And (...)
    1 citation
  40. Political Legitimacy, Authoritarianism, and Climate Change.Ross Mittiga - forthcoming - American Political Science Review.
    Is authoritarian power ever legitimate? The contemporary political theory literature—which largely conceptualizes legitimacy in terms of democracy or basic rights—would seem to suggest not. I argue, however, that there exists another, overlooked aspect of legitimacy concerning a government’s ability to ensure safety and security. While, under normal conditions, maintaining democracy and rights is typically compatible with guaranteeing safety, in emergency situations, conflicts between these two aspects of legitimacy can and often do arise. A salient example of this (...)
    4 citations
  41. Artificial Intelligence in Intelligence Agencies, Defense and National Security.Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    This book explores the use of artificial intelligence by intelligence services around the world and its critical role in intelligence analysis, defense, and national security. Intelligence services play a crucial role in national security, and the adoption of artificial intelligence technologies has had a significant impact on their operations. It also examines the various applications of artificial intelligence in intelligence services, the implications, challenges and ethical considerations associated with its use. The book emphasizes the need for continued research (...)
  42. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a (...)
  43. An Ontology of Security from a Risk Treatment Perspective.Ítalo Oliveira, Tiago Prince Sales, Riccardo Baratella, Mattia Fumagalli & Giancarlo Guizzardi - 2022 - In Ítalo Oliveira, Tiago Prince Sales, Riccardo Baratella, Mattia Fumagalli & Giancarlo Guizzardi (eds.), 41th International Conference, ER 2022, Proceedings. Cham: Springer. pp. 365-379.
    In Risk Management, security issues arise from complex relations among objects and agents, their capabilities and vulnerabilities, the events they are involved in, and the value and risk they bring about for the stakeholders at hand. Further, there are patterns involving these relations that crosscut many domains, ranging from information security to public safety. Understanding and forming a shared conceptualization and vocabulary about these notions and their relations is fundamental for modeling the corresponding scenarios, so that proper (...) countermeasures can be devised. Ontologies are instruments developed to address these conceptual clarification and terminological systematization issues. Over the years, several ontologies have been proposed in Risk Management and Security Engineering. However, as shown in recent literature, they fall short in many respects, including generality and expressivity - the latter impacting their interoperability with related models. We propose a Reference Ontology for Security Engineering (ROSE) from a Risk Treatment perspective. Our proposal leverages two existing Reference Ontologies: the Common Ontology of Value and Risk and a Reference Ontology of Prevention, both of which are grounded on the Unified Foundational Ontology (UFO). ROSE is employed for modeling and analysing some cases, in particular providing clarification to the semantically overloaded notion of Security Mechanism.
  44. Artificial intelligence, its application and development prospects in the context of state security.Igor Britchenko & Krzysztof Chochowski - 2022 - Politics and Security 6 (3):3-7.
    Today, we observe the constant expansion of the list of countries using AI to ensure state security, although, depending on the system in force in them, the intensity and depth of its interference in the sphere of the rights and freedoms of the individual differ. The purpose of this article is to define the AI that is applicable in the area of state security, and to indicate the prospects for the development (...)
  45. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century; but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either a narrow AI facilitating research into potentially dangerous technology like biotech, or an AGI, capable of acting completely independently in the (...)
  46. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either an AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass (...)
  47. TRANSPORT SECURITY AS A FACTOR OF TRANSPORT AND COMMUNICATION SYSTEM OF UKRAINE SELF-SUSTAINING DEVELOPMENT.Igor Britchenko & Tetiana Cherniavska - 2017 - Науковий Вісник Полісся 1 (9):16-24.
    In the present article, attention is focused on the existing potential to ensure national self-sufficiency, the main challenges to its achievement, and future prospects. According to the authors, the transportation and communication system of the country can become the dominant model for self-sufficient development due to its geostrategic location, which allows it to serve as an advantageous bridge for the transit of goods and passengers between the states of Europe, Asia, and the Middle East. To date, the transport and communication system is hardly (...)
  48. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
  49. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences.Lode Lauwaert - 2023 - Artificial Intelligence Review 56:3473–3504.
    Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools (...)
  50. Going Back to Normal: A Phenomenological Study on the Challenges and Coping Mechanisms of Junior High School Teachers in the Full Implementation of In-Person Classes in the Public Secondary Schools in the Division of Rizal.Jarom Anero & Eloisa Tamayo - 2023 - Psychology and Education: A Multidisciplinary Journal 12:767-808.
    The study focused on exploring and understanding the challenges junior high school teachers in the Division of Rizal faced during the full implementation of in-person classes and identifying the coping mechanisms they employed to adapt to this new educational landscape. Forty participants were purposefully selected from various public secondary school clusters in the division of Rizal. A qualitative phenomenological design was employed, and the information collected through Google Forms was imported into Microsoft Excel and Microsoft Word. After importing the data, (...)
1 — 50 / 965