  • AI, Radical Ignorance, and the Institutional Approach to Consent.Etye Steinberg - 2024 - Philosophy and Technology 37 (3):1-26.
    More and more, we face AI-based products and services. Using these services often requires our explicit consent, e.g., by agreeing to the services’ Terms and Conditions clause. Current advances introduce the ability of AI to evolve and change its own modus operandi over time in such a way that we cannot know, at the moment of consent, what it is in the future to which we are now agreeing. Therefore, informed consent is impossible regarding certain kinds of AI. Call this (...)
  • Can Finance Be a Virtuous Practice? A MacIntyrean Account.Marta Rocchi, Ignacio Ferrero & Ron Beadle - 2021 - Business Ethics Quarterly 31 (1):75-105.
    ABSTRACTFinance may suffer from institutional deformations that subordinate its distinctive goods to the pursuit of external goods, but this should encourage attempts to reform the institutionalization of finance rather than to reject its potential for virtuous business activity. This article argues that finance should be regarded as a domain-relative practice. Alongside management, its moral status thereby varies with the purposes it serves. Hence, when practitioners working in finance facilitate projects that create common goods, it allows them to develop virtues. This (...)
  • Against the Double Standard Argument in AI Ethics.Scott Hill - 2024 - Philosophy and Technology 37 (1):1-5.
    In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers and therefore the concern that opaque AI is not sufficiently transparent is mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in a way that humans are transparent. Rather, the concern is that the way in which opaque AI (...)
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars.Sara Mann - forthcoming - AI and Society:1-16.
Artificially intelligent (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Healthy Mistrust: Medical Black Box Algorithms, Epistemic Authority, and Preemptionism.Andreas Wolkenstein - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    In the ethics of algorithms, a specifically epistemological analysis is rarely undertaken in order to gain a critique (or a defense) of the handling of or trust in medical black box algorithms (BBAs). This article aims to begin to fill this research gap. Specifically, the thesis is examined according to which such algorithms are regarded as epistemic authorities (EAs) and that the results of a medical algorithm must completely replace other convictions that patients have (preemptionism). If this were true, it (...)
  • Jaz u odgovornosti u informatičkoj eri [The Responsibility Gap in the Information Age].Jelena Mijić - 2023 - Društvo I Politika 4 (4):25-38.
    We attribute responsibility with the intention of achieving some goal. It is a commonplace in the philosophical literature that we can attribute moral responsibility to a person if at least two conditions are satisfied: that the agent has control over their actions, and that they are able to give reasons in support of their action. However, the fourth industrial revolution is characterized by sociotechnological phenomena that potentially confront us with the so-called responsibility gap problem. Debates about responsibility in the context of artificial intelligence are characterized by a vague and indeterminate use of this notion. In order to (...)
  • Consideration and Disclosure of Group Risks in Genomics and Other Data-Centric Research: Does the Common Rule Need Revision?Carolyn Riley Chapman, Gwendolyn P. Quinn, Heini M. Natri, Courtney Berrios, Patrick Dwyer, Kellie Owens, Síofra Heraty & Arthur L. Caplan - forthcoming - American Journal of Bioethics:1-14.
Harms and risks to groups and third parties can be significant in the context of research, particularly in data-centric studies involving genomic, artificial intelligence, and/or machine learning technologies. This article explores whether and how United States federal regulations should be adapted to better align with current ethical thinking and protect group interests. Three aspects of the Common Rule deserve attention and reconsideration with respect to group interests: institutional review board (IRB) assessment of the risks/benefits of research; disclosure requirements in the informed (...)
  • Democratizing AI from a Sociotechnical Perspective.Merel Noorman & Tsjalling Swierstra - 2023 - Minds and Machines 33 (4):563-586.
    Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distributions, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether (...)
  • Ethics of Artificial Intelligence.Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • On the Philosophy of Unsupervised Learning.David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Engineering Trustworthiness in the Online Environment.Hugh Desmond - 2023 - In Mark Alfano & David Collins (eds.), The Moral Psychology of Trust. Lexington Books. pp. 215-237.
    Algorithm engineering is sometimes portrayed as a new 21st century return of manipulative social engineering. Yet algorithms are necessary tools for individuals to navigate online platforms. Algorithms are like a sensory apparatus through which we perceive online platforms: this is also why individuals can be subtly but pervasively manipulated by biased algorithms. How can we better understand the nature of algorithm engineering and its proper function? In this chapter I argue that algorithm engineering can be best conceptualized as a type (...)
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • Going beyond the “common suspects”: to be presumed innocent in the era of algorithms, big data and artificial intelligence.Athina Sachoulidou - forthcoming - Artificial Intelligence and Law:1-54.
    This article explores the trend of increasing automation in law enforcement and criminal justice settings through three use cases: predictive policing, machine evidence and recidivism algorithms. The focus lies on artificial-intelligence-driven tools and technologies employed, whether at pre-investigation stages or within criminal proceedings, in order to decode human behaviour and facilitate decision-making as to whom to investigate, arrest, prosecute, and eventually punish. In this context, this article first underlines the existence of a persistent dilemma between the goal of increasing the (...)
  • Reframing data ethics in research methods education: a pathway to critical data literacy.Javiera Atenas, Leo Havemann & Cristian Timmermann - 2023 - International Journal of Educational Technology in Higher Education 20:11.
    This paper presents an ethical framework designed to support the development of critical data literacy for research methods courses and data training programmes in higher education. The framework we present draws upon our reviews of literature, course syllabi and existing frameworks on data ethics. For this research we reviewed 250 research methods syllabi from across the disciplines, as well as 80 syllabi from data science programmes to understand how or if data ethics was taught. We also reviewed 12 data ethics (...)
  • Trust and Trustworthiness in AI Ethics.Karoline Reinhardt - 2022 - In AI and Ethics. Springer.
  • Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms.Sábëlo Mhlambi & Simona Tiribelli - 2023 - Topoi 42 (3):867-880.
Many popular artificial intelligence (AI) ethics frameworks center the principle of autonomy as necessary in order to mitigate the harms that might result from the use of AI within society. These harms often disproportionately affect the most marginalized within society. In this paper, we argue that the principle of autonomy, as currently formalized in AI ethics, is itself flawed, as it expresses only a mainstream, mainly liberal notion of autonomy as rational self-determination, derived from Western traditional philosophy. In particular, we (...)
  • A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  • A Hippocratic Oath for mathematicians? Mapping the landscape of ethics in mathematics.Dennis Müller, Maurice Chiodo & James Franklin - 2022 - Science and Engineering Ethics 28 (5):1-30.
    While the consequences of mathematically-based software, algorithms and strategies have become ever wider and better appreciated, ethical reflection on mathematics has remained primitive. We review the somewhat disconnected suggestions of commentators in recent decades with a view to piecing together a coherent approach to ethics in mathematics. Calls for a Hippocratic Oath for mathematicians are examined and it is concluded that while lessons can be learned from the medical profession, the relation of mathematicians to those affected by their work is (...)
  • The Effectiveness of Embedded Values Analysis Modules in Computer Science Education: An Empirical Study.Matthew Kopec, Meica Magnani, Vance Ricks, Roben Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, Ronald Sandler, Christo Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Kevin Mills & Mark Wells - 2023 - Big Data and Society 10 (1).
    Embedding ethics modules within computer science courses has become a popular response to the growing recognition that CS programs need to better equip their students to navigate the ethical dimensions of computing technologies like AI, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern’s program that embeds values analysis modules into CS courses. The resulting data suggest (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach.Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems”, led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Individual benefits and collective challenges: Experts’ views on data-driven approaches in medical research and healthcare in the German context.Silke Schicktanz & Lorina Buhr - 2022 - Big Data and Society 9 (1).
    Healthcare provision, like many other sectors of society, is undergoing major changes due to the increased use of data-driven methods and technologies. This increased reliance on big data in medicine can lead to shifts in the norms that guide healthcare providers and patients. Continuous critical normative reflection is called for to track such potential changes. This article presents the results of an interview-based study with 20 German and Swiss experts from the fields of medicine, life science research, informatics and humanities (...)
  • The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
  • Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • Promises and Pitfalls of Algorithm Use by State Authorities.Maryam Amir Haeri, Kathrin Hartmann, Jürgen Sirsch, Georg Wenzelburger & Katharina A. Zweig - 2022 - Philosophy and Technology 35 (2):1-31.
    Algorithmic systems are increasingly used by state agencies to inform decisions about humans. They produce scores on risks of recidivism in criminal justice, indicate the probability for a job seeker to find a job in the labor market, or calculate whether an applicant should get access to a certain university program. In this contribution, we take an interdisciplinary perspective, provide a bird’s eye view of the different key decisions that are to be taken when state actors decide to use an (...)
  • Epistemic injustice and data science technologies.John Symons & Ramón Alvarado - 2022 - Synthese 200 (2):1-26.
    Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust but when they are we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...)
  • Binding the Smart City Human-Digital System with Communicative Processes.Brandt Dainow - 2021 - In Michael Nagenborg, Taylor Stone, Margoth González Woge & Pieter E. Vermaas (eds.), Technology and the City: Towards a Philosophy of Urban Technologies. Springer Verlag. pp. 389-411.
This chapter will explore the dynamics of power underpinning ethical issues within smart cities via a new paradigm derived from Systems Theory. The smart city is an expression of technology as a socio-technical system. The vision of the smart city contains a deep fusion of many different technical systems into a single integrated “ambient intelligence” (ETICA Project, 2010, p. 102). Citizens of the smart city will not experience a succession of different technologies, but a single intelligent and responsive environment through (...)
  • Book review: Luca Possati (2021): “The algorithmic unconscious: how psychoanalysis helps in understanding AI” (Routledge). [REVIEW]Marc Cheong - 2024 - AI and Society 39 (2):819-821.
  • (1 other version)The epistemological foundations of data science: a critical analysis.Jules Desai, David Watson, Vincent Wang, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)
  • Ethical problems in the use of algorithms in data management and in a free market economy.Rafał Szopa - 2023 - AI and Society 38 (6):2487-2498.
    The problem that I present in this paper concerns the issue of ethical evaluation of algorithms, especially those used in social media and which create profiles of users of these media and new technologies that have recently emerged and are intended to change the functioning of technologies used in data management. Systems such as Overton, SambaNova or Snorkel were created to help engineers create data management models, but they are based on different assumptions than the previous approach in machine learning (...)
  • Fairness as Equal Concession: Critical Remarks on Fair AI.Christopher Yeomans & Ryan van Nood - 2021 - Science and Engineering Ethics 27 (6):1-14.
    Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of ‘treat like cases alike’ and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) It must provide a meta-theory for understanding tradeoffs, (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Speeding up to keep up: exploring the use of AI in the research process.Jennifer Chubb, Peter Cowling & Darren Reed - 2022 - AI and Society 37 (4):1439-1457.
The science of intelligent machines has a long history, and its potential to provide scientific insights has been debated since the dawn of AI. In particular, there is renewed interest in the role of AI in research and research policy as an enabler of new methods, processes, management and evaluation, which is still relatively under-explored. This empirical paper explores interviews with leading scholars on the potential impact of AI on research practice and culture through deductive, thematic analysis to (...)
  • The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants.Marianna Capasso & Steven Umbrello - 2022 - Medicine, Health Care and Philosophy 25 (1):11-22.
    Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing (...)
  • AI Recruitment Algorithms and the Dehumanization Problem.Megan Fritts & Frank Cabrera - 2021 - Ethics and Information Technology (4):1-11.
    According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the hiring (...)
  • Varieties of artifacts: Embodied, perceptual, cognitive, and affective.Richard Heersmink - 2021 - Topics in Cognitive Science (4):1-24.
    The primary goal of this essay is to provide a comprehensive overview and analysis of the various relations between material artifacts and the embodied mind. A secondary goal of this essay is to identify some of the trends in the design and use of artifacts. First, based on their functional properties, I identify four categories of artifacts co-opted by the embodied mind, namely (1) embodied artifacts, (2) perceptual artifacts, (3) cognitive artifacts, and (4) affective artifacts. These categories can overlap and (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them.Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  • (1 other version)The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
  • Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns.Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect (...)
  • Towards Transparency by Design for Artificial Intelligence.Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • Artificial Intelligence and Medical Humanities.Kirsten Ostherr - 2020 - Journal of Medical Humanities 43 (2):211-232.
    The use of artificial intelligence in healthcare has led to debates about the role of human clinicians in the increasingly technological contexts of medicine. Some researchers have argued that AI will augment the capacities of physicians and increase their availability to provide empathy and other uniquely human forms of care to their patients. The human vulnerabilities experienced in the healthcare context raise the stakes of new technologies such as AI, and the human dimensions of AI in healthcare have particular significance (...)
  • (1 other version)The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  • Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices.Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)