Results for 'AI Risk'

986 found
  1. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with (...)
    4 citations
  2. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To (...)
    8 citations
  3. AI Risk Denialism.Roman V. Yampolskiy - manuscript
    In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
  4. Towards a Taxonomy of AI Risks in the Health Domain.Delaram Golpayegani, Joshua Hovsha, Leon Rossmaier, Rana Saniei & Jana Misic - 2022 - 2022 Fourth International Conference on Transdisciplinary AI (TransAI).
    The adoption of AI in the health sector has its share of benefits and harms to various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people’s lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in order (...)
  5. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created (...)
    1 citation
  6. Two Types of AI Existential Risk: Decisive and Accumulative.Atoosa Kasirzadeh - 2025 - Philosophical Studies 1:1-29.
    The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This decisive view, however, often neglects the serious possibility of AI x-risk manifesting gradually through an incremental series of smaller yet interconnected disruptions, crossing critical thresholds over time. This paper (...)
    1 citation
  7. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk.Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy (...)
    1 citation
  8. Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
    12 citations
  9. Climate Adaptation Money Isn’t Reaching the Most Vulnerable— And Why It Matters.Đại Bàng - 2025 - The Bird Village.
    Climate change is affecting communities across the globe, yet those most vulnerable to its impacts are often the last to receive the financial assistance they need. A recent critical review by Venner, García-Lamarca, and Olazabal (2024) examines how climate adaptation finance—funding intended to help societies adjust to the impacts of climate change—is distributed. The findings are concerning: despite repeated global commitments to prioritize those most at risk, adaptation finance tends to benefit the most powerful and well-resourced actors rather than (...)
  10. Extinction Risks from AI: Invisible to Science?Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)
  11. AI Welfare Risks.Adrià Moret - forthcoming - Philosophical Studies.
    In the coming years or decades, as frontier AI systems become more capable and agentic, it is increasingly likely that they meet the sufficient conditions to be welfare subjects under the three major theories of well-being. Consequently, we should extend some moral consideration to advanced AI systems. Drawing from leading philosophical theories of desire, affect and autonomy I argue that under the three major theories of well-being, there are two AI welfare risks: restricting the behaviour of advanced AI systems and (...)
  12. AI-Powered Risk Modeling in Quantum Finance: Redefining Enterprise Decision Systems.Sachin Dixit - 2022 - International Journal of Scientific Research in Science, Engineering and Technology 9 (4):547-572.
    The integration of artificial intelligence (AI) and quantum computing is poised to redefine the landscape of financial risk modeling and enterprise decision-making systems. This paper investigates the synergistic potential of these transformative technologies, emphasizing the development of hybrid AI-quantum algorithms to address the increasing complexity of modern financial systems. Traditional risk modeling methodologies often face significant limitations in capturing intricate market dynamics and accounting for real-time decision-making constraints. By leveraging quantum computing's unparalleled computational capabilities, particularly its ability to (...)
    5 citations
  13. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness.Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a (...)
  14. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated (...)
    2 citations
  15. Cardiovascular image analysis: AI can analyze heart images to assess cardiovascular health and identify potential risks.Sankara Reddy Thamma - 2024 - International Journal of Science and Research Archive 12 (2):2969-2976.
    CVDs remain the leading cause of death, so early detection and treatment are crucial. The application of Artificial Intelligence (AI) to cardiac image analysis has become popular owing to gains in accuracy, productivity, and modelling capability. This paper discusses the use of AI systems to analyze echocardiogram, CT, MRI, and other images of the heart and blood vessels in order to assess risk factors for cardiovascular disease. We focus on the current methods of (...)
    35 citations
  16. Risk of What? Defining Harm in the Context of AI Safety.Laura Fearnley, Elly Cairns, Tom Stoneham, Philippa Ryan, Jenn Chubb, Jo Iacovides, Cynthia Iglesias Urrutia, Phillip Morgan, John McDermid & Ibrahim Habli - manuscript
    For decades, the field of system safety has designed safe systems by reducing the risk of physical harm to humans, property and the environment to an acceptable level. Recently, this definition of safety has come under scrutiny by governments and researchers who argue that the narrow focus on reducing physical harm, whilst necessary, is not sufficient to secure the safety of AI systems. There is growing pressure to expand the scope of safety in the context of AI to address (...)
  17. Generative AI in Digital Insurance: Redefining Customer Experience, Fraud Detection, and Risk Management.Adavelli Sateesh Reddy - 2024 - International Journal of Computer Science and Information Technology Research 5 (2):41-60.
    This abstract summarizes what generative AI means for the insurance industry. The promise generative AI offers to insurance is considerable: in risk assessment, customer experience, and operational efficiency. Analyses of natural disaster impact, financial market volatility, and cyber threats are augmented with techniques for real-time scenario generation and modeling as well as predictive simulation based on synthetic data. One of the challenges that stands in the way of deploying these AI methods, however, is data privacy, model (...)
  18. Risks Deriving from the Agential Profiles of Modern AI Systems.Barnaby Crook - forthcoming - In Vincent C. Müller, Leonard Dung, Guido Löhr & Aliya Rumana, Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Modern AI systems based on deep learning are neither traditional tools nor full-blown agents. Rather, they are characterised by idiosyncratic agential profiles, i.e., combinations of agency-relevant properties. Modern AI systems lack superficial features which enable people to recognise agents but possess sophisticated information processing capabilities which can undermine human goals. I argue that systems fitting this description, when they are adversarial with respect to human users, pose particular risks to those users. To explicate my argument, I provide conditions under which (...)
  19. Agentic AI for Secure Financial Data Processing: Real-Time Analytics, Cloud Migration, and Risk Mitigation in AWS-Based Architectures.Kumar Rohit - 2025 - International Journal of Innovative Research in Science Engineering and Technology 14 (4):5488-5498.
    Using Agentic AI in concert with AWS Full Stack Development, this study offers a technically solid and scalable method to securely process financial data. AWS Lambda for serverless computation, Amazon S3 for scalable storage, AWS Glue for data classification and transformation, Athena for SQL-like querying, and QuickSight for interactive visualisation comprise the fundamental elements of the system. While Agentic AI modules enable autonomous decision-making abilities to regulate data integrity, recognise abnormalities, and adaptably react to compliance violations, Custom AWS configuration rules (...)
  20. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines the risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI), connected with the possibility of finding an alien transmission that includes the description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as reasons why the proportion of dangerous to harmless signals may be high. The article identifies the necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  21. Editorial: Risks of artificial intelligence.Vincent C. Müller - 2015 - In Risks of Artificial Intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a serious risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
    1 citation
  22. AI in Banking and Finance: Opportunities, Risks, and Future Trends.Raju Krishna Mohan - 2020 - International Journal of Innovative Research in Science, Engineering and Technology 9 (5).
    Artificial Intelligence (AI) is transforming the banking and finance sectors by enhancing efficiency, personalization, and decision-making. This paper explores AI's applications, including credit scoring, fraud detection, and customer service, while addressing associated risks such as data privacy concerns, algorithmic biases, and regulatory challenges. The study also examines future trends, emphasizing the integration of AI with blockchain and quantum computing. Through a literature review and analysis of current practices, this research provides insights into the evolving landscape of AI in financial services.
  23. AI-Enhanced ETL for Modernizing Data Warehouses in Insurance and Risk Management.Seethala Srinivasa Chakravarthy - 2019 - Journal of Scientific and Engineering Research 6 (7):298-301.
    The insurance and risk management sectors are experiencing a significant transformation driven by the need for more sophisticated data analysis and predictive modeling. This paper explores the critical role of artificial intelligence (AI) in enhancing Extract, Transform, Load (ETL) processes for modernizing data warehouses within these industries. We examine how AI-enhanced ETL addresses key challenges such as data quality, integration of diverse data sources, and real-time processing. The study investigates various applications and proposes a framework for successful implementation. Our (...)
    1 citation
  24. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    4 citations
  25. Ethical Considerations of AI and ML in Insurance Risk Management: Addressing Bias and Ensuring Fairness (8th edition).Palakurti Naga Ramesh - 2025 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 8 (1):202-210.
    Artificial Intelligence (AI) and Machine Learning (ML) are transforming the insurance industry by optimizing risk assessment, fraud detection, and customer service. However, the rapid adoption of these technologies raises significant ethical concerns, particularly regarding bias and fairness. This chapter explores the ethical challenges of using AI and ML in insurance risk management, focusing on bias mitigation and fairness enhancement strategies. By analyzing real-world case studies, regulatory frameworks, and technical methodologies, this chapter aims to provide a roadmap for developing (...)
  26. The Role of AI in Cyber Risk Management: Predictive Analytics for Security Incident Forecasting and Mitigation.Vishal Sresth - 2021 - International Journal of Research and Analytical Reviews 8 (2).
    The growing sophistication and frequency of cyber threats have exposed the limitations of traditional reactive cybersecurity approaches. Organizations are turning to Artificial Intelligence (AI) to improve cyber risk management through predictive capabilities. This article investigates the role of AI-driven predictive analytics in forecasting and mitigating security incidents. By analyzing historical cybersecurity data and leveraging machine learning models for time-series forecasting, anomaly detection, and supervised classification, this study evaluates the ability of AI systems to anticipate potential threats and support (...)
  27. Risk, Regulation, and Reward: Navigating AI Adoption in Finance.Tripathi Ishita Priya - 2025 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 8 (5).
    Generative models have revolutionized the landscape of artificial intelligence by shifting the focus from predictive tasks to creative and constructive capabilities. From the early use of probabilistic models to the modern architectures of deep neural networks, the evolution of generative models has been marked by increasing sophistication in capturing data distributions and generating novel content. This paper explores the trajectory of generative modeling—from rudimentary statistical techniques to complex structures such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer-based large (...)
  28. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  29. The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI.Jonathan Birch - 2024 - Oxford: Oxford University Press.
    9 citations
  30. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations.Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
    231 citations
  31. Rethinking the Redlines Against AI Existential Risks.Yi Zeng, Xin Guan, Enmeng Lu & Jinyu Fan - manuscript
    The ongoing evolution of advanced AI systems will have profound, enduring, and significant impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially causing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear redlines to prevent AI-induced existential risks by constraining and regulating advanced AI and their related AI actors. This paper explores different concepts of AI existential risk, (...)
  32. All too human? Identifying and mitigating ethical risks of Social AI.Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
    5 citations
  33. Beyond the Claims: Emerging AI Models and Predictive Analytics in Property & Casualty Insurance Risk Assessment.Adavelli Sateesh Reddy - 2024 - International Journal of Science and Research 13 (7):1625-1631.
    P&C insurers play an important role in addressing financial risk management needs but now struggle to respond to new forms of risk. Historical analysis and actuarial calculations, which form the backbone of classical approaches to risk measurement and management, are not well suited to such new kinds of risk as climate change, cyber risks, and business cycle risks. These conventional approaches are also static, with limited potential to adapt quickly to new (...)
  34. New developments in the philosophy of AI.Vincent C. Müller - 2016 - In Vincent C. Müller, Fundamental Issues of Artificial Intelligence. Cham: Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI is moving away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    11 citations
  35. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
    1 citation
  36. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
    3 citations
  37. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    14 citations
  38. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: Risks of general artificial intelligence, Vincent C. Müller, pp. 297-301; Autonomous technology and the greater human good, Steve Omohundro, pp. 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pp. 317-342; (...)
    3 citations
  39. What is AI safety? What do we want it to be?Jacqueline Harding & Cameron Domenico Kirk-Giannini - manuscript
    The field of AI safety seeks to prevent or reduce the harms caused by AI systems. A simple and appealing account of what is distinctive of AI safety as a field holds that this feature is constitutive: a research project falls within the purview of AI safety just in case it aims to prevent or reduce the harms caused by AI systems. Call this appealingly simple account The Safety Conception of AI safety. Despite its simplicity and appeal, we argue that (...)
    1 citation
  40. Combating Disinformation with AI: Epistemic and Ethical Challenges.Benjamin Lange & Ted Lechterman - 2021 - IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) 1:1-5.
    AI-supported methods for identifying and combating disinformation are progressing in their development and application. However, these methods face a litany of epistemic and ethical challenges. These include (1) robustly defining disinformation, (2) reliably classifying data according to this definition, and (3) navigating ethical risks in the deployment of countermeasures, which involve a mixture of harms and benefits. This paper seeks to expose and offer preliminary analysis of these challenges.
    2 citations
  41. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger, Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, a line of analysis started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    5 citations
  42. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
    1 citation
  43. AI through the looking glass: an empirical study of structural social and ethical challenges in AI.Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
    1 citation
  44. Generative AI in the Creative Industries: Revolutionizing Art, Music, and Media.Mohammed F. El-Habibi, Mohammed A. Hamed, Raed Z. Sababa, Mones M. Al-Hanjori, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Engineering Research(Ijaer) 8 (10):71-74.
    Abstract: Generative AI is transforming the creative industries by redefining how art, music, and media are produced and experienced. This paper explores the profound impact of generative AI technologies, such as deep learning models and neural networks, on creative processes. By enabling artists, musicians, and content creators to collaborate with AI, these systems enhance creativity, speed up production, and generate novel forms of expression. The paper also addresses ethical considerations, including intellectual property rights, the role of human creativity in AI-assisted (...)
    1 citation
  45. Clinical Decisions Using AI Must Consider Patient Values.Jonathan Birch, Kathleen A. Creel, Abhinav K. Jha & Anya Plutynski - 2022 - Nature Medicine 28:229–232.
    Built-in decision thresholds for AI diagnostics are ethically problematic, as patients may differ in their attitudes about the risk of false-positive and false-negative results, which will require that clinicians assess patient values.
    2 citations
  46. Decolonial AI as Disenclosure.Warmhold Jan Thomas Mollema - 2024 - Open Journal of Social Sciences 12 (2):574-603.
    The development and deployment of machine learning and artificial intelligence (AI) engender “AI colonialism”, a term that conceptually overlaps with “data colonialism”, as a form of injustice. AI colonialism is in need of decolonization for three reasons. Politically, because it enforces digital capitalism’s hegemony. Ecologically, as it negatively impacts the environment and intensifies the extraction of natural resources and consumption of energy. Epistemically, since the social systems within which AI is embedded reinforce Western universalism by imposing Western colonial values on (...)
    2 citations
  47. Health AI Poses Distinct Harms and Potential Benefits for Disabled People.Charles Binkley, Joel Michael Reynolds & Andrew Schuman - 2025 - Nature Medicine 1.
    This piece in Nature Medicine notes the risks that incorporation of AI systems into health care poses to disabled patients and proposes ways to avoid them and instead create benefit.
  48. AI in HRM: Revolutionizing Recruitment, Performance Management, and Employee Engagement.Mostafa El-Ghoul, Mohammed M. Almassri, Mohammed F. El-Habibi, Mohanad H. Al-Qadi, Alaa Abou Eloun, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Applied Research (Ijaar) 8 (9):16-23.
    Artificial Intelligence (AI) is rapidly transforming Human Resource Management (HRM) by enhancing the efficiency and effectiveness of key functions such as recruitment, performance management, and employee engagement. This paper explores the integration of AI technologies in HRM, focusing on their potential to revolutionize these critical areas. In recruitment, AI-driven tools streamline candidate sourcing, screening, and selection processes, leading to more accurate and unbiased hiring decisions. Performance management is similarly transformed, with AI enabling continuous, data-driven feedback and personalized development plans that (...)
    9 citations
  49. AI Does Not Judge: The Structural Ineligibility of Artificial Systems for Moral Authority.Jinho Kim - unknown
    This paper challenges the growing discourse suggesting artificial intelligence (AI) may one day serve as a moral decision-maker or possess moral authority. Using the framework of Judgemental Philosophy, we argue that AI, regardless of its sophistication in simulating reasoning or consistency, is structurally ineligible for genuine moral judgement because it cannot satisfy the necessary preconditions defined by the Judgemental Triad (Constructivity, Coherence, and Resonance). While AI systems can exhibit high degrees of Constructivity (generating complex outputs from data) and Coherence (maintaining (...)
    1 citation
  50. Harnessing AI and Business Rules for Financial Transactions: Addressing Fraud and Security Challenges.Palakurti Naga Ramesh - 2024 - ESP International Journal of Advancements in Computational Technology 2 (4):104-119.
    In today’s rapidly evolving financial landscape, the adoption of Artificial Intelligence (AI) and Machine Learning (ML) technologies, coupled with the deployment of Business Rules Management Systems (BRMS), has transformed how financial transactions are conducted, monitored, and secured. With fraud, particularly in check deposit transactions, becoming increasingly sophisticated, financial institutions are turning to AI and ML to enhance their risk management strategies. This paper explores the integration of AI-driven models and business rules in financial transactions, focusing on their application in (...)
    9 citations
1 — 50 / 986