  • Mammalian Value Systems. Gopal P. Sarma & Nick J. Hay - 2016 - arXiv preprint arXiv:1607.08289.
    Characterizing human values is a topic deeply interwoven with the sciences, humanities, political philosophy, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research accomplishments suggest increasingly sophisticated AI systems becoming widespread and responsible for (...)
  • Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – depending on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  • Beyond Preferences in AI Alignment. Tan Zhi-Xuan, Micah Carroll, Matija Franklin & Hal Ashton - forthcoming - Philosophical Studies:1-51.
    The dominant practice of AI alignment assumes (1) that preferences are an adequate representation of human values, (2) that human rationality can be understood in terms of maximizing the satisfaction of preferences, and (3) that AI systems should be aligned with the preferences of one or more humans to ensure that they behave safely and in accordance with our values. Whether implicitly followed or explicitly endorsed, these commitments constitute what we term a preferentist approach to AI alignment. In this paper, we characterize (...)
  • AI for crisis decisions. Tina Comes - 2024 - Ethics and Information Technology 26 (1):1-14.
    Increasingly, our cities are confronted with crises. Fuelled by climate change and a loss of biodiversity, increasing inequalities and fragmentation, challenges range from social unrest and outbursts of violence to heatwaves, torrential rainfall, or epidemics. As crises require rapid interventions that overwhelm human decision-making capacity, AI has been portrayed as a potential avenue to support or even automate decision-making. In this paper, I analyse the specific challenges of AI in urban crisis management as an example and test case for many (...)
  • Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition. Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (2):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated (...)
  • Moral disagreement and artificial intelligence. Pamela Robinson - 2024 - AI and Society 39 (5):2425-2438.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. (...)
  • Value Sensitive Design for autonomous weapon systems – a primer. Christine Boshuijzen-van Burken - 2023 - Ethics and Information Technology 25 (1):1-14.
    Value Sensitive Design (VSD) is a design methodology developed by Batya Friedman and Peter Kahn (2003) that brings moral deliberations into an early stage of a design process. It assumes that technology itself is not value neutral, and that its value-ladenness cannot be shifted solely onto the usage of technology. This paper adds to emerging literature on VSD for autonomous weapons systems development and discusses extant literature on values in autonomous systems development in general and in autonomous weapons development in particular. I identify (...)
  • Fear of AI: an inquiry into the adoption of autonomous cars in spite of fear, and a theoretical framework for the study of artificial intelligence technology acceptance. Federico Cugurullo & Ransford A. Acheampong - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) is becoming part of the everyday. During this transition, people’s intention to use AI technologies is still unclear and emotions such as fear are influencing it. In this paper, we focus on autonomous cars to first verify empirically the extent to which people fear AI and then examine the impact that fear has on their intention to use AI-driven vehicles. Our research is based on a systematic survey and it reveals that while individuals are largely afraid of (...)
  • Extending the Is-ought Problem to Top-down Artificial Moral Agents. Robert James M. Boyles - 2022 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 9 (2):171-189.
    This paper further cashes out the notion that particular types of intelligent systems are susceptible to the is-ought problem, which espouses the thesis that no evaluative conclusions may be inferred from factual premises alone. Specifically, it focuses on top-down artificial moral agents, providing ancillary support to the view that these kinds of artifacts are not capable of producing genuine moral judgements. Such is the case given that machines built via the classical programming approach are always composed of two parts, namely: (...)
  • Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities. Andrea Owe, Seth D. Baum & Mark Coeckelbergh - 2022 - Science and Engineering Ethics 28 (5):1-29.
    To be intrinsically valuable means to be valuable for its own sake. Moral philosophy is often ethically anthropocentric, meaning that it locates intrinsic value within humans. This paper rejects ethical anthropocentrism and asks, in what ways might nonhumans be intrinsically valuable? The paper answers this question with a wide-ranging survey of theories of nonhuman intrinsic value. The survey includes both moral subjects and moral objects, and both natural and artificial nonhumans. Literatures from environmental ethics, philosophy of technology, philosophy of art, (...)
  • Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239-256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of (...)
  • How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  • Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated (...)
  • How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • AI Ethics and Value Alignment for Nonhuman Animals. Soenke Ziesche - 2021 - Philosophies 6 (2):31.
    This article is about a specific but so far neglected peril of AI, which is that AI systems may pose existential as well as suffering risks for nonhuman animals. The AI value alignment problem has now been acknowledged as critical for AI safety as well as very hard. However, so far attempts have only been made to align the values of AI systems with human values. It is argued here that this ought to be extended to the values of nonhuman (...)
  • Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. Phil Torres - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)
  • Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of (...)
  • Surveillance, security, and AI as technological acceptance. Yong Jin Park & S. Mo Jones-Jang - 2023 - AI and Society 38 (6):2667-2678.
    Public consumption of artificial intelligence (AI) technologies has rarely been investigated from the perspective of data surveillance and security. We show that the technology acceptance model, when properly modified with security and surveillance fears about AI, offers insight into how individuals begin to use, accept, or evaluate AI and its automated decisions. We conducted two studies, and found positive roles of perceived ease of use (PEOU) and perceived usefulness (PU). AI security concern, however, negatively affected PEOU and PU, resulting (...)
  • The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  • From AI for people to AI for the world and the universe. Seth D. Baum & Andrea Owe - 2023 - AI and Society 38 (2):679-680.
    Recent work in AI ethics often calls for AI to advance human values and interests. The concept of “AI for people” is one notable example. Though commendable in some respects, this work falls short by excluding the moral significance of nonhumans. This paper calls for a shift in AI ethics to more inclusive paradigms such as “AI for the world” and “AI for the universe”. The paper outlines the case for more inclusive paradigms and presents implications for moral philosophy and (...)
  • Synthetic Deliberation: Can Emulated Imagination Enhance Machine Ethics? Robert Pinka - 2020 - Minds and Machines 31 (1):121-136.
    Artificial intelligence is becoming increasingly entwined with our daily lives: AIs work as assistants through our phones, control our vehicles, and navigate our vacuums. As these objects become more complex and work within our societies in ways that affect our well-being, there is a growing demand for machine ethics—we want a guarantee that the various automata in our lives will behave in a way that minimizes the amount of harm they create. Though many technologies exist as moral artifacts, the development (...)
  • Government regulation or industry self-regulation of AI? Investigating the relationships between uncertainty avoidance, people’s AI risk perceptions, and their regulatory preferences in Europe. Bartosz Wilczek, Sina Thäsler-Kordonouri & Maximilian Eder - forthcoming - AI and Society:1-15.
    Artificial Intelligence (AI) has the potential to influence people’s lives in various ways as it is increasingly integrated into important decision-making processes in key areas of society. While AI offers opportunities, it is also associated with risks. These risks have sparked debates about how AI should be regulated, whether through government regulation or industry self-regulation. AI-related risk perceptions can be shaped by national cultures, especially the cultural dimension of uncertainty avoidance. This raises the question of whether people in countries with (...)
  • Ethics-based auditing of automated decision-making systems: intervention points and policy implications. Jakob Mökander & Maria Axente - 2023 - AI and Society 38 (1):153-171.
    Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations (...)