Results for 'AI Risk'

981 found
  1. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with (...)
    4 citations
  2. (1 other version) Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To (...)
    7 citations
  3. AI Risk Denialism. Roman V. Yampolskiy - manuscript
    In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
  4. Towards a Taxonomy of AI Risks in the Health Domain. Delaram Golpayegani, Joshua Hovsha, Leon Rossmaier, Rana Saniei & Jana Misic - 2022 - 2022 Fourth International Conference on Transdisciplinary AI (TransAI).
    The adoption of AI in the health sector has its share of benefits and harms to various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people’s lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in order (...)
  5. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk. Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created (...)
  6. Two Types of AI Existential Risk: Decisive and Accumulative. Atoosa Kasirzadeh - manuscript
    The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This discourse, however, often neglects the serious possibility of AI x-risks manifesting incrementally through a series of smaller yet interconnected disruptions, gradually crossing critical thresholds over time. This paper contrasts the (...)
  7. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk. Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy (...)
  8. Risks Deriving from the Agential Profiles of Modern AI Systems. Barnaby Crook - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Modern AI systems based on deep learning are neither traditional tools nor full-blown agents. Rather, they are characterised by idiosyncratic agential profiles, i.e., combinations of agency-relevant properties. Modern AI systems lack superficial features which enable people to recognise agents but possess sophisticated information processing capabilities which can undermine human goals. I argue that systems fitting this description, when they are adversarial with respect to human users, pose particular risks to those users. To explicate my argument, I provide conditions under which (...)
  9. Extinction Risks from AI: Invisible to Science? Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)
  10. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
    8 citations
  11. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    4 citations
  12. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness. Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a (...)
  13. Cardiovascular image analysis: AI can analyze heart images to assess cardiovascular health and identify potential risks. Sankara Reddy Thamma - 2024 - International Journal of Science and Research Archive 12 (2):2969-2976.
    Cardiovascular diseases (CVDs) continue to be the leading cause of death, so early detection and treatment are crucial. The application of Artificial Intelligence (AI) to cardiac image analysis has become popular due to increased accuracy, productivity, and modelling capability. In this paper, the use of AI systems to study echocardiogram, CT, MRI, and other images of the heart and blood vessels to identify risk factors for cardiovascular diseases is discussed. We focus on the current methods of (...)
  14. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated (...)
    2 citations
  15. Advancing Financial Risk Modeling: Vasicek Framework Enhanced by Agentic Generative AI. Satyadhar Joshi - 2025 - International Research Journal of Modernization in Engineering Technology and Science 1 (7):4413-4420.
    This paper provides a comprehensive review of the Vasicek model and its applications in finance, categorizing the literature into four key areas: Vasicek model applications, Monte Carlo simulations, negative interest rates and risk, and deep learning for financial time series. To provide deeper insights, a synthesis chart and chronological analysis are included to highlight significant trends and contributions. Building upon this foundation, we employ Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) to generate synthetic future interest rate data. These (...)
  16. Risk of What? Defining Harm in the Context of AI Safety. Laura Fearnley, Elly Cairns, Tom Stoneham, Philippa Ryan, Jenn Chubb, Jo Iacovides, Cynthia Iglesias Urrutia, Phillip Morgan, John McDermid & Ibrahim Habli - manuscript
    For decades, the field of system safety has designed safe systems by reducing the risk of physical harm to humans, property and the environment to an acceptable level. Recently, this definition of safety has come under scrutiny by governments and researchers who argue that the narrow focus on reducing physical harm, whilst necessary, is not sufficient to secure the safety of AI systems. There is growing pressure to expand the scope of safety in the context of AI to address (...)
  17. The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Jonathan Birch - 2024 - Oxford: Oxford University Press.
    Can octopuses feel pain and pleasure? What about crabs, shrimps, insects, or spiders? How do we tell whether a person unresponsive after severe brain injury might be suffering? When does a fetus in the womb start to have conscious experiences? Could there even be rudimentary feelings in miniature models of the human brain, grown from human stem cells? And what about AI? These are questions about the edge of sentience, and they are subject to enormous, disorienting uncertainty. The stakes are (...)
    1 citation
  18. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework. Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  19. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
    208 citations
  20. Editorial: Risks of artificial intelligence. Vincent C. Müller - 2015 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
    1 citation
  21. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI. Alexey Turchin - 2018 - Journal of British Interplanetary Society 71 (2):71-79.
    This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  22. Rethinking the Redlines Against AI Existential Risks. Yi Zeng, Xin Guan, Enmeng Lu & Jinyu Fan - manuscript
    The ongoing evolution of advanced AI systems will have profound, enduring, and significant impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially causing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear redlines to prevent AI-induced existential risks by constraining and regulating advanced AI and their related AI actors. This paper explores different concepts of AI existential risk, (...)
  23. All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
    1 citation
  24. New developments in the philosophy of AI. Vincent C. Müller - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    11 citations
  25. AI Rights for Human Safety. Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  26. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    12 citations
  27. AI Alignment vs. AI Ethical Treatment: Ten Challenges. Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
  28. What is AI safety? What do we want it to be? Jacqueline Harding & Cameron Domenico Kirk-Giannini - manuscript
    The field of AI safety seeks to prevent or reduce the harms caused by AI systems. A simple and appealing account of what is distinctive of AI safety as a field holds that this feature is constitutive: a research project falls within the purview of AI safety just in case it aims to prevent or reduce the harms caused by AI systems. Call this appealingly simple account The Safety Conception of AI safety. Despite its simplicity and appeal, we argue that (...)
  29. Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: Risks of general artificial intelligence, Vincent C. Müller, pages 297-301; Autonomous technology and the greater human good, Steve Omohundro, pages 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; (...)
    3 citations
  30. Clinical Decisions Using AI Must Consider Patient Values. Jonathan Birch, Kathleen A. Creel, Abhinav K. Jha & Anya Plutynski - 2022 - Nature Medicine 28:229–232.
    Built-in decision thresholds for AI diagnostics are ethically problematic, as patients may differ in their attitudes about the risk of false-positive and false-negative results, which will require that clinicians assess patient values.
    2 citations
  31. Health AI Poses Distinct Harms and Potential Benefits for Disabled People. Charles Binkley, Joel Michael Reynolds & Andrew Schuman - 2025 - Nature Medicine 1.
    This piece in Nature Medicine notes the risks that incorporation of AI systems into health care poses to disabled patients and proposes ways to avoid them and instead create benefit.
  32. Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, (...)
    3 citations
  33. AI through the looking glass: an empirical study of structural social and ethical challenges in AI. Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
    1 citation
  34. Decentralized Governance of AI Agents. Tomer Jordi Chaffer, Charles von Goins II, Bayo Okusanya, Dontrail Cotlage & Justin Goldston - manuscript
    Autonomous AI agents present transformative opportunities and significant governance challenges. Existing frameworks, such as the EU AI Act and the NIST AI Risk Management Framework, fall short of addressing the complexities of these agents, which are capable of independent decision-making, learning, and adaptation. To bridge these gaps, we propose the ETHOS (Ethical Technology and Holistic Oversight System) framework—a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs). ETHOS establishes a global registry for (...)
  35. Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
    6 citations
  36. Combating Disinformation with AI: Epistemic and Ethical Challenges. Benjamin Lange & Ted Lechterman - 2021 - IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) 1:1-5.
    AI-supported methods for identifying and combating disinformation are progressing in their development and application. However, these methods face a litany of epistemic and ethical challenges. These include (1) robustly defining disinformation, (2) reliably classifying data according to this definition, and (3) navigating ethical risks in the deployment of countermeasures, which involve a mixture of harms and benefits. This paper seeks to expose and offer preliminary analysis of these challenges.
  37. AI-Related Misdirection Awareness in AIVR. Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  38. Generative AI in the Creative Industries: Revolutionizing Art, Music, and Media. Mohammed F. El-Habibi, Mohammed A. Hamed, Raed Z. Sababa, Mones M. Al-Hanjori, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Engineering Research (IJAER) 8 (10):71-74.
    Generative AI is transforming the creative industries by redefining how art, music, and media are produced and experienced. This paper explores the profound impact of generative AI technologies, such as deep learning models and neural networks, on creative processes. By enabling artists, musicians, and content creators to collaborate with AI, these systems enhance creativity, speed up production, and generate novel forms of expression. The paper also addresses ethical considerations, including intellectual property rights, the role of human creativity in AI-assisted (...)
  39. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    7 citations
  40. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  41. Aligning AI with the Universal Formula for Balanced Decision-Making. Angelito Malicse - manuscript
    Artificial Intelligence (AI) represents a highly advanced form of automated information processing, capable of analyzing vast amounts of data, identifying patterns, and making predictive decisions. However, the effectiveness of AI depends entirely on the integrity of its inputs, processing mechanisms, and decision-making frameworks. If AI is programmed without a foundational understanding of natural laws, it risks reinforcing misinformation, bias, and societal imbalance. Angelito Malicse’s universal formula, particularly (...)
  42. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    Bookmark   7 citations  
  43. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  44. Automating Business Process Compliance for the EU AI Act.Claudio Novelli, Guido Governatori & Antonino Rotolo - 2023 - In Giovanni Sileno, Jerry Spanakis & Gijs van Dijck (eds.), Legal Knowledge and Information Systems. Proceedings of JURIX 2023. IOS Press. pp. 125-130.
    The EU AI Act is the first step toward a comprehensive legal framework for AI. It introduces provisions for AI systems based on their risk levels in relation to fundamental rights. Providers of AI systems must conduct Conformity Assessments before market placement. Recent amendments added Fundamental Rights Impact Assessments for high-risk AI system users, focusing on compliance with EU and national laws, fundamental rights, and potential impacts on EU values. The paper suggests that automating business process compliance can (...)
  45. Values in science and AI alignment research.Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value transparency, critical scrutiny (...)
  46. AI in HRM: Revolutionizing Recruitment, Performance Management, and Employee Engagement.Mostafa El-Ghoul, Mohammed M. Almassri, Mohammed F. El-Habibi, Mohanad H. Al-Qadi, Alaa Abou Eloun, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Applied Research (Ijaar) 8 (9):16-23.
    Artificial Intelligence (AI) is rapidly transforming Human Resource Management (HRM) by enhancing the efficiency and effectiveness of key functions such as recruitment, performance management, and employee engagement. This paper explores the integration of AI technologies in HRM, focusing on their potential to revolutionize these critical areas. In recruitment, AI-driven tools streamline candidate sourcing, screening, and selection processes, leading to more accurate and unbiased hiring decisions. Performance management is similarly transformed, with AI enabling continuous, data-driven feedback and personalized development plans that (...)
  47. Decolonial AI as Disenclosure.Warmhold Jan Thomas Mollema - 2024 - Open Journal of Social Sciences 12 (2):574-603.
    The development and deployment of machine learning and artificial intelligence (AI) engender “AI colonialism”, a term that conceptually overlaps with “data colonialism”, as a form of injustice. AI colonialism is in need of decolonization for three reasons. Politically, because it enforces digital capitalism’s hegemony. Ecologically, as it negatively impacts the environment and intensifies the extraction of natural resources and consumption of energy. Epistemically, since the social systems within which AI is embedded reinforce Western universalism by imposing Western colonial values on (...)
  48. Mapping the potential AI-driven virtual hyper-personalised ikigai universe.Soenke Ziesche & Roman Yampolskiy - manuscript
    Ikigai is a Japanese concept, which, in brief, refers to the “reason or purpose to live”. I-risks have been identified as a category of risks complementing x-risks, i.e., existential risks, and s-risks, i.e., suffering risks; they describe undesirable future scenarios in which humans are deprived of the pursuit of their individual ikigai. While some developments in AI increase i-risks, there are also AI-driven virtual opportunities, which reduce i-risks by increasing the space of potential ikigais, largely due to developments in (...)
  49. AI Safety: A Climb To Armageddon?Herman Cappelen, Josh Dever & John Hawthorne - manuscript
    This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, (...)
  50. Making metaethics work for AI: realism and anti-realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
    Bookmark   1 citation  
1 — 50 / 981