  • Toward Sociotechnical AI: Mapping Vulnerabilities for Machine Learning in Context. Roel Dobbe & Anouk Wolters - 2024 - Minds and Machines 34 (2):1-51.
    This paper provides an empirical and conceptual account of machine learning models as parts of sociotechnical systems, in order to identify relevant vulnerabilities emerging in the context of use. As ML is increasingly adopted in socially sensitive and safety-critical domains, many ML applications end up not delivering on their promises and contributing to new forms of algorithmic harm. There is still a lack of empirical insights, as well as conceptual tools and frameworks, to properly understand and design for the impact (...)
  • Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act. Johann Laux - 2024 - AI and Society 39 (6):2853-2866.
    Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. (...)
  • Artificial Intelligence and Autonomy: On the Ethical Dimension of Recommender Systems. Sofia Bonicalzi, Mario De Caro & Benedetta Giovanola - 2023 - Topoi 42 (3):819-832.
    Feasting on a plethora of social media platforms, news aggregators, and online marketplaces, recommender systems (RSs) are spreading pervasively throughout our daily online activities. Over the years, a host of ethical issues have been associated with the diffusion of RSs and the tracking and monitoring of users’ data. Here, we focus on the impact RSs may have on personal autonomy as the most elusive among the often-cited sources of grievance and public outcry. On the grounds of a philosophically nuanced notion (...)
  • Mechanical Jurisprudence and Domain Distortion: How Predictive Algorithms Warp the Law. Dasha Pruss - 2021 - Philosophy of Science 88 (5):1101-1112.
    The value-ladenness of computer algorithms is typically framed around issues of epistemic risk. In this article, I examine a deeper sense of value-ladenness: algorithmic methods are not only themselves value-laden but also introduce value into how we reason about their domain of application. I call this domain distortion. In particular, using insights from jurisprudence, I show that the use of recidivism risk assessment algorithms presupposes legal formalism and blurs the distinction between liability assessment and sentencing, which distorts how the domain (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • An Ethical Inquiry of the Effect of Cockpit Automation on the Responsibilities of Airline Pilots: Dissonance or Meaningful Control? W. David Holford - 2020 - Journal of Business Ethics 176 (1):141-157.
    Airline pilots are attributed ultimate responsibility and final authority over their aircraft to ensure the safety and well-being of all its occupants. Yet, with the advent of automation technologies, a dissonance has emerged in that pilots have lost their actual decision-making authority as well as their ability to act in an adequate fashion towards meeting their responsibilities when unexpected circumstances or emergencies occur. Across the literature in human factor studies, we show how automated algorithmic technologies have wrested control away from (...)
  • The Ethical Implications of Using Artificial Intelligence in Auditing. Ivy Munoko, Helen L. Brown-Liburd & Miklos Vasarhelyi - 2020 - Journal of Business Ethics 167 (2):209-234.
    Accounting firms are reporting the use of Artificial Intelligence in their auditing and advisory functions, citing benefits such as time savings, faster data analysis, increased levels of accuracy, more in-depth insight into business processes, and enhanced client service. AI, an emerging technology that aims to mimic the cognitive skills and judgment of humans, promises competitive advantages to the adopter. As a result, all the Big 4 firms are reporting its use and their plans to continue with this innovation in areas (...)
  • Clouds on the horizon: clinical decision support systems, the control problem, and physician-patient dialogue. Mahmut Alpertunga Kara - forthcoming - Medicine, Health Care and Philosophy:1-13.
    Artificial intelligence-based clinical decision support systems have a potential to improve clinical practice, but they may have a negative impact on the physician-patient dialogue because of the control problem. Physician-patient dialogue depends on human qualities such as compassion, trust, and empathy, which are shared by both parties. These qualities are necessary for the parties to reach a shared understanding (the merging of horizons) about clinical decisions. The patient attends the clinical encounter not only with a malfunctioning body, but also with (...)
  • Why Internal Moral Enhancement Might Be Politically Better than External Moral Enhancement. John Danaher - 2016 - Neuroethics 12 (1):39-54.
    Technology could be used to improve morality, but it could do so in different ways. Some technologies could augment and enhance moral behaviour externally by using external cues and signals to push and pull us towards morally appropriate behaviours. Other technologies could enhance moral behaviour internally by directly altering the way in which the brain captures and processes morally salient information or initiates moral action. The question is whether there is any reason to prefer one method over the other. In (...)
  • Extending knowledge-how. Gloria Andrada - 2022 - Philosophical Explorations 26 (2):197-213.
    This paper examines what it takes for a state of knowledge-how to be extended (i.e. partly constituted by entities external to the organism) within an anti-intellectualist approach to knowledge-how. I begin by examining an account of extended knowledge-how developed by Carter, J. Adam, and Boleslaw Czarnecki. 2016 [“Extended Knowledge-How.” Erkenntnis 81 (2): 259–273], and argue that it fails to properly distinguish between cognitive outsourcing and extended knowing-how. I then introduce a solution to this problem which rests on the distribution of (...)
  • The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare. James Johnson - 2022 - Journal of Military Ethics 21 (3):246-271.
    Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI “rational” efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining “meaningful” human control over (...)
  • Ethical Issues in Democratizing Digital Phenotypes and Machine Learning in the Next Generation of Digital Health Technologies. Maurice D. Mulvenna, Raymond Bond, Jack Delaney, Fatema Mustansir Dawoodbhoy, Jennifer Boger, Courtney Potts & Robin Turkington - 2021 - Philosophy and Technology 34 (4):1945-1960.
    Digital phenotyping is the term given to the capturing and use of user log data from health and wellbeing technologies used in apps and cloud-based services. This paper explores ethical issues in making use of digital phenotype data in the arena of digital health interventions. Products and services based on digital wellbeing technologies typically include mobile device apps as well as browser-based apps to a lesser extent, and can include telephony-based services, text-based chatbots, and voice-activated chatbots. Many of these digital (...)
  • Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine. Mark Henderson Arnold - 2021 - Journal of Bioethical Inquiry 18 (1):121-139.
    The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients, with resultant ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi- or fully-autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI (and those who control it) has created challenges to conventional assessments (...)
  • Effective Human Oversight of AI-Based Systems: A Signal Detection Perspective on the Detection of Inaccurate and Unfair Outputs. Markus Langer, Kevin Baum & Nadine Schlicker - 2024 - Minds and Machines 35 (1):1-30.
    Legislation and ethical guidelines around the globe call for effective human oversight of AI-based systems in high-risk contexts – that is, oversight that reliably reduces the risks otherwise associated with the use of AI-based systems. Such risks may relate to the imperfect accuracy of systems (e.g., inaccurate classifications) or to ethical concerns (e.g., unfairness of outputs). Given the significant role that human oversight is expected to play in the operation of AI-based systems, it is crucial to better understand the conditions (...)
  • The effects of explanations on automation bias. Mor Vered, Tali Livni, Piers Douglas Lionel Howe, Tim Miller & Liz Sonenberg - 2023 - Artificial Intelligence 322 (C):103952.
  • Toward a Psychology of Deep Reinforcement Learning Agents Using a Cognitive Architecture. Konstantinos Mitsopoulos, Sterling Somers, Joel Schooler, Christian Lebiere, Peter Pirolli & Robert Thomson - 2022 - Topics in Cognitive Science 14 (4):756-779.
    We argue that cognitive models can provide a common ground between human users and deep reinforcement learning (Deep RL) algorithms for purposes of explainable artificial intelligence (AI). Casting both the human and learner as cognitive models provides common mechanisms to compare and understand their underlying decision-making processes. This common grounding allows us to identify divergences and explain the learner's behavior in human understandable terms. We present novel salience techniques that highlight the most relevant features in each model's decision-making, as well (...)
  • Effects of an Unexpected and Expected Event on Older Adults’ Autonomic Arousal and Eye Fixations During Autonomous Driving. Alice C. Stephenson, Iveta Eimontaite, Praminda Caleb-Solly, Phillip L. Morgan, Tabasum Khatun, Joseph Davis & Chris Alford - 2020 - Frontiers in Psychology 11.
  • Algorithmic Decision-Making and the Control Problem. John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2019 - Minds and Machines 29 (4):555-578.
    The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up (...)
  • Automation-Induced Complacency Potential: Development and Validation of a New Scale. Stephanie M. Merritt, Alicia Ako-Brew, William J. Bryant, Amy Staley, Michael McKenna, Austin Leone & Lei Shirase - 2019 - Frontiers in Psychology 10.
  • Neuroergonomics: Where the Cortex Hits the Concrete. P. A. Hancock - 2019 - Frontiers in Human Neuroscience 13.
  • Comparing the Relative Strengths of EEG and Low-Cost Physiological Devices in Modeling Attention Allocation in Semiautonomous Vehicles. Dean Cisler, Pamela M. Greenwood, Daniel M. Roberts, Ryan McKendrick & Carryl L. Baldwin - 2019 - Frontiers in Human Neuroscience 13.
  • Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents. Ewart J. de Visser, Paul J. Beatty, Justin R. Estepp, Spencer Kohn, Abdulaziz Abubshait, John R. Fedota & Craig G. McDonald - 2018 - Frontiers in Human Neuroscience 12.
  • On doing multi-act arithmetic: A multitrait-multimethod approach of performance dimensions in integrated multitasking. Frank Schumann, Michael B. Steinborn, Hagen C. Flehmig, Jens Kürten, Robert Langner & Lynn Huestegge - 2022 - Frontiers in Psychology 13.
    Here we present a systematic plan for the experimental study of test–retest reliability in the multitasking domain, adopting the multitrait-multimethod approach to evaluate the psychometric properties of performance in Düker-type speeded multiple-act mental arithmetic. This form of task enables the experimental analysis of integrated multi-step processing by combining multiple mental operations in flexible ways in the service of the overarching goal of completing the task. A particular focus was on scoring methodology, especially measures of response speed variability. To this end, (...)
  • “Computer says no”: Algorithmic decision support and organisational responsibility. Angelika Adensamer, Rita Gsenger & Lukas Daniel Klausner - 2021 - Journal of Responsible Technology 7-8 (C):100014.
  • Reduced Attention Allocation during Short Periods of Partially Automated Driving: An Event-Related Potentials Study. Ignacio Solís-Marcos, Alejandro Galvao-Carmona & Katja Kircher - 2017 - Frontiers in Human Neuroscience 11.
  • Looking for Age Differences in Self-Driving Vehicles: Examining the Effects of Automation Reliability, Driving Risk, and Physical Impairment on Trust. Ericka Rovira, Anne Collins McLaughlin, Richard Pak & Luke High - 2019 - Frontiers in Psychology 10.
  • Over What Range Should Reliabilists Measure Reliability? Stefan Buijsman - 2024 - Erkenntnis 89 (7):2641-2661.
    Process reliabilist accounts claim that a belief is justified when it is the result of a reliable belief-forming process. Yet over what range of possible token processes is this reliability calculated? I argue against the idea that _all_ possible token processes (in the actual world, or some other subset of possible worlds) are to be considered using the case of a user acquiring beliefs based on the output of an AI system, which is typically reliable for a substantial local range (...)
  • From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction. Kim Drnec, Amar R. Marathe, Jamie R. Lukos & Jason S. Metcalfe - 2016 - Frontiers in Human Neuroscience 10.
  • Effects of Trust, Self-Confidence, and Feedback on the Use of Decision Automation. Rebecca Wiczorek & Joachim Meyer - 2019 - Frontiers in Psychology 10.
  • Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design. Thomas B. Sheridan - 2019 - Frontiers in Psychology 10.