  • Algorithmic Transparency, Manipulation, and Two Concepts of Liberty.Ulrik Franke - 2024 - Philosophy and Technology 37 (1):1-6.
    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but (...)
  • Brain–Computer Interfaces: Lessons to Be Learned from the Ethics of Algorithms.Andreas Wolkenstein, Ralf J. Jox & Orsolya Friedrich - 2018 - Cambridge Quarterly of Healthcare Ethics 27 (4):635-646.
    Brain–computer interfaces are driven essentially by algorithms; however, the ethical role of such algorithms has so far been neglected in the ethical assessment of BCIs. The goal of this article is therefore twofold: First, it aims to offer insights into whether the problems related to the ethics of BCIs can be better grasped with the help of already existing work on the ethics of algorithms. As a second goal, the article explores what kinds of solutions are available in that body (...)
  • Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency.Hao Wang - 2022 - Philosophy and Technology 35 (3):1-25.
    Automated algorithms are silently making crucial decisions about our lives, but most of the time we have little understanding of how they work. To counter this hidden influence, there have been increasing calls for algorithmic transparency. Much ink has been spilled over the informational account of algorithmic transparency—about how much information should be revealed about the inner workings of an algorithm. But few studies question the power structure beneath the informational disclosure of the algorithm. As a result, the information disclosure (...)
  • Artificial intelligence, public control, and supply of a vital commodity like COVID-19 vaccine.Vladimir Tsyganov - 2023 - AI and Society 38 (6):2619-2628.
    The article examines the problem of ensuring the political stability of a democratic social system with a shortage of a vital commodity (like vaccine against COVID-19). In such a system, members of society (citizens) assess the authorities. Thus, actions by the authorities to increase the supply of this commodity can contribute to citizens' approval and hence political stability. However, this supply is influenced by random factors, the actions of competitors, etc. Therefore, citizens do not have sufficient information about all the (...)
  • Framing the effects of machine learning on science.Victo J. Silva, Maria Beatriz M. Bonacelli & Carlos A. Pacheco - forthcoming - AI and Society:1-17.
    Studies investigating the relationship between artificial intelligence and science tend to adopt a partial view. There is no broad and holistic view that synthesizes the channels through which this interaction occurs. Our goal is to systematically map the influence of the latest AI techniques on science. We draw on the work of Nathan Rosenberg to develop a taxonomy of the effects of technology on science. The proposed framework comprises four categories of technology effects on science: intellectual, economic, experimental and instrumental. (...)
  • Mapping the Ethicality of Algorithmic Pricing: A Review of Dynamic and Personalized Pricing. [REVIEW] Peter Seele, Claus Dierksmeier, Reto Hofstetter & Mario D. Schultz - 2019 - Journal of Business Ethics 170 (4):697-719.
    Firms increasingly deploy algorithmic pricing approaches to determine what to charge for their goods and services. Algorithmic pricing can discriminate prices both dynamically over time and personally depending on individual consumer information. Although legal, the ethicality of such approaches needs to be examined as they often trigger moral concerns and sometimes outrage. In this research paper, we provide an overview and discussion of the ethical challenges germane to algorithmic pricing. As a basis for our discussion, we perform a systematic interpretative (...)
  • Humanistic interpretation and machine learning.Juho Pääkkönen & Petri Ylikoski - 2021 - Synthese 199:1461–1497.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate the (...)
  • Behavioural artificial intelligence: an agenda for systematic empirical studies of artificial inference.Tore Pedersen & Christian Johansen - 2020 - AI and Society 35 (3):519-532.
    Artificial intelligence receives attention in media as well as in academe and business. In media coverage and reporting, AI is predominantly described in contrasted terms, either as the ultimate solution to all human problems or the ultimate threat to all human existence. In academe, the focus of computer scientists is on developing systems that function, whereas philosophy scholars theorize about the implications of this functionality for human life. In the interface between technology and philosophy there is, however, one imperative aspect (...)
  • Artificial intelligence, transparency, and public decision-making.Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance.Pascal D. König - 2020 - Philosophy and Technology 33 (3):467-485.
    A growing literature is taking an institutionalist and governance perspective on how algorithms shape society based on unprecedented capacities for managing social complexity. Algorithmic governance altogether emerges as a novel and distinctive kind of societal steering. It appears to transcend established categories and modes of governance—and thus seems to call for new ways of thinking about how social relations can be regulated and ordered. However, as this paper argues, despite its novel way of realizing outcomes of collective steering and coordination, (...)
  • A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. [REVIEW] Tomáš Kliegr, Štěpán Bahník & Johannes Fürnkranz - 2021 - Artificial Intelligence 295 (C):103458.
  • Algorithmic Accountability In the Making.Deborah G. Johnson - 2021 - Social Philosophy and Policy 38 (2):111-127.
    Algorithms are now routinely used in decision-making; they are potent components in decisions that affect the lives of individuals and the activities of public and private institutions. Although use of algorithms has many benefits, a number of problems have been identified with their use in certain domains, most notably in domains where safety and fairness are important. Awareness of these problems has generated public discourse calling for algorithmic accountability. However, the current discourse focuses largely on algorithms and their opacity. I (...)
  • Automating anticorruption? María Carolina Jiménez & Emanuela Ceva - 2022 - Ethics and Information Technology 24 (4):1-14.
    The paper explores some normative challenges concerning the integration of Machine Learning (ML) algorithms into anticorruption in public institutions. The challenges emerge from the tensions between an approach treating ML algorithms as allies to an exclusively legalistic conception of anticorruption and an approach seeing them within an institutional ethics of office accountability. We explore two main challenges. One concerns the variable opacity of some ML algorithms, which may affect public officeholders’ capacity to account for institutional processes relying upon ML techniques. (...)
  • On the Ethical and Epistemological Utility of Explicable AI in Medicine.Christian Herzog - 2022 - Philosophy and Technology 35 (2):1-31.
    In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence-based medical technology. I will build on the notion of “explicability” due to Floridi, which considers both the intelligibility and accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in terms of direct patient–physician contact, as well (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective.Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Algorithms and values in justice and security.Paul Hayes, Ibo van de Poel & Marc Steen - 2020 - AI and Society 35 (3):533-555.
    This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology. Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration (...)
  • The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models.Torbjørn Gundersen & Kristine Bærøe - 2022 - Science and Engineering Ethics 28 (2):1-16.
    This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied (...)
  • AI&Society: editorial volume 35.2: the trappings of AI Agency.Karamjit S. Gill - 2020 - AI and Society 35 (2):289-296.
  • Biased Face Recognition Technology Used by Government: A Problem for Liberal Democracy.Michael Gentzel - 2021 - Philosophy and Technology 34 (4):1639-1663.
    This paper presents a novel philosophical analysis of the problem of law enforcement’s use of biased face recognition technology in liberal democracies. FRT programs used by law enforcement in identifying crime suspects are substantially more error-prone on facial images depicting darker skin tones and females as compared to facial images depicting Caucasian males. This bias can lead to citizens being wrongfully investigated by police along racial and gender lines. The author develops and defends “A Liberal Argument Against Biased FRT,” which (...)
  • First- and Second-Level Bias in Automated Decision-making.Ulrik Franke - 2022 - Philosophy and Technology 35 (2):1-20.
    Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick between first-level bias in the application of standards and (...)
  • Towards Transparency by Design for Artificial Intelligence.Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • The disciplinary power of predictive algorithms: a Foucauldian perspective.Paul B. de Laat - 2019 - Ethics and Information Technology 21 (4):319-329.
    Big Data are increasingly used in machine learning in order to create predictive models. How are predictive practices that use such models to be situated? In the field of surveillance studies many of its practitioners assert that “governance by discipline” has given way to “governance by risk”. The individual is dissolved into his/her constituent data and no longer addressed. I argue that, on the contrary, in most of the contexts where predictive modelling is used, it constitutes Foucauldian discipline. Compliance to (...)
  • Algorithmic decision-making employing profiling: will trade secrecy protection render the right to explanation toothless?Paul B. de Laat - 2022 - Ethics and Information Technology 24 (2).
    Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful information (...)
  • What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms.Bas de Boer & Olya Kudina - 2021 - Theoretical Medicine and Bioethics 42 (5):245-266.
    In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which (...)
  • Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals.Paul Burgess - 2022 - AI and Society 37 (1):97-112.
    The potential use, relevance, and application of AI and other technologies in the democratic process may be obvious to some. However, technological innovation and, even, its consideration may face an intuitive push-back in the form of algorithm aversion (Dietvorst et al. J Exp Psychol 144(1):114–126, 2015). In this paper, I confront this intuition and suggest that a more ‘extreme’ form of technological change in the democratic process does not necessarily result in a worse outcome in terms of the fundamental concepts (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)