  • Exposing implicit biases and stereotypes in human and artificial intelligence: state of the art and challenges with a focus on gender.Ludovica Marinucci, Claudia Mazzuca & Aldo Gangemi - 2023 - AI and Society 38 (2):747-761.
    Biases in cognition are ubiquitous. Social psychologists suggested biases and stereotypes serve a multifarious set of cognitive goals, while at the same time stressing their potential harmfulness. Recently, biases and stereotypes became the purview of heated debates in the machine learning community too. Researchers and developers are becoming increasingly aware of the fact that some biases, like gender and race biases, are entrenched in the algorithms some AI applications rely upon. Here, taking into account several existing approaches that address the (...)
  • Big Data for Biomedical Research and Personalised Medicine: an Epistemological and Ethical Cross-Analysis.Thierry Magnin & Mathieu Guillermin - 2017 - Human and Social Studies. Research and Practice 6 (3):13-36.
    Big data techniques, data-driven science and their technological applications raise many serious ethical questions, notably about privacy protection. In this paper, we highlight an entanglement between epistemology and ethics of big data. Discussing the mobilisation of big data in the fields of biomedical research and health care, we show how an overestimation of big data epistemic power – of their objectivity or rationality understood through the lens of neutrality – can become ethically threatening. Highlighting the irreducible non-neutrality at play in (...)
  • Machine learning and power relations.Jonne Maas - forthcoming - AI and Society.
    There has been an increased focus within the AI ethics literature on questions of power, reflected in the ideal of accountability supported by many Responsible AI guidelines. While this recent debate points towards the power asymmetry between those who shape AI systems and those affected by them, the literature lacks normative grounding and misses conceptual clarity on how these power dynamics take shape. In this paper, I develop a workable conceptualization of said power dynamics according to Cristiano Castelfranchi’s conceptual framework (...)
  • A Neo-Republican Critique of AI ethics.Jonne Maas - 2022 - Journal of Responsible Technology 9 (C):100022.
  • The Digital Phenotype: a Philosophical and Ethical Exploration.Michele Loi - 2019 - Philosophy and Technology 32 (1):155-171.
    The concept of the digital phenotype has been used to refer to digital data prognostic or diagnostic of disease conditions. Medical conditions may be inferred from the time pattern in an insomniac’s tweets, the Facebook posts of a depressed individual, or the web searches of a hypochondriac. This paper conceptualizes digital data as an extended phenotype of humans, that is as digital information produced by humans and affecting human behavior and culture. It argues that there are ethical obligations to persons (...)
  • The needle and the damage done: Of haystacks and anxious panopticons.Sarah Logan - 2017 - Big Data and Society 4 (2).
    How should we understand the surveillance state post Snowden? This paper is concerned with the relationship between increased surveillance capacity and state power. The paper begins by analysing two metaphors used in public post Snowden discourse to describe state surveillance practices: the haystack and the panopticon. It argues that these metaphors share a flawed common entailment regarding surveillance, knowledge and power which cannot accurately capture important aspects of state anxiety generated by mass surveillance in an age of big data. The (...)
  • Problems with “Friendly AI”.Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI with Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity.Ulrich Leicht-Deobald, Thorsten Busch, Christoph Schank, Antoinette Weibel, Simon Schafheitle, Isabelle Wildhaber & Gabriel Kasper - 2019 - Journal of Business Ethics 160 (2):377-392.
    Organizations increasingly rely on algorithm-based HR decision-making to monitor their employees. This trend is reinforced by the technology industry claiming that its decision-making tools are efficient and objective, downplaying their potential biases. In our manuscript, we identify an important challenge arising from the efficiency-driven logic of algorithm-based HR decision-making, namely that it may shift the delicate balance between employees’ personal integrity and compliance more in the direction of compliance. We suggest that critical data literacy, ethical awareness, the use of participatory (...)
  • Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures.Maude Lavanchy, Patrick Reichert, Jayanth Narayanan & Krishna Savani - forthcoming - Journal of Business Ethics:1-26.
    Despite the rapid adoption of technology in human resource departments, there is little empirical work that examines the potential challenges of algorithmic decision-making in the recruitment process. In this paper, we take the perspective of job applicants and examine how they perceive the use of algorithms in selection and recruitment. Across four studies on Amazon Mechanical Turk, we show that people in the role of a job applicant perceive algorithm-driven recruitment processes as less fair compared to human only or algorithm-assisted (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning.Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions.Sebastian Krügel, Andreas Ostermaier & Matthias Uhl - 2022 - Philosophy and Technology 35 (1):1-37.
    Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest (...)
  • Social impacts of algorithmic decision-making: A research agenda for the social sciences.Frauke Kreuter, Christoph Kern, Ruben L. Bach & Frederic Gerdon - 2022 - Big Data and Society 9 (1).
    Academic and public debates are increasingly concerned with the question of whether and how algorithmic decision-making (ADM) may reinforce social inequality. Most previous research on this topic originates from computer science. The social sciences, however, have huge potential to contribute to research on the social consequences of ADM. Based on a process model of ADM systems, we demonstrate how the social sciences may advance the literature on the impacts of ADM on social inequality by uncovering and mitigating biases in training data, by understanding data (...)
  • Municipal surveillance regulation and algorithmic accountability.P. M. Krafft, Michael Katell & Meg Young - 2019 - Big Data and Society 6 (2).
    A wave of recent scholarship has warned about the potential for discriminatory harms of algorithmic systems, spurring an interest in algorithmic accountability and regulation. Meanwhile, parallel concerns about surveillance practices have already led to multiple successful regulatory efforts of surveillance technologies—many of which have algorithmic components. Here, we examine municipal surveillance regulation as offering lessons for algorithmic oversight. Taking the 2017 Seattle Surveillance Ordinance as our primary case study and surveying efforts across five other cities, we describe the features of (...)
  • Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance.Pascal D. König - 2020 - Philosophy and Technology 33 (3):467-485.
    A growing literature is taking an institutionalist and governance perspective on how algorithms shape society based on unprecedented capacities for managing social complexity. Algorithmic governance altogether emerges as a novel and distinctive kind of societal steering. It appears to transcend established categories and modes of governance—and thus seems to call for new ways of thinking about how social relations can be regulated and ordered. However, as this paper argues, despite its novel way of realizing outcomes of collective steering and coordination, (...)
  • Challenges in enabling user control over algorithm-based services.Pascal D. König - 2024 - AI and Society 39 (1):195-205.
    Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how much they truly serve their users. A promising way to address the issue of possible undesired biases consists in giving users control by letting them configure a system and aligning its performance with users’ own preferences. However, as the present paper argues, this form of control over an algorithmic (...)
  • From Reality to World. A Critical Perspective on AI Fairness.Jean-Marie John-Mathews, Dominique Cardon & Christine Balagué - 2022 - Journal of Business Ethics 178 (4):945-959.
    Fairness of Artificial Intelligence decisions has become a big challenge for governments, companies, and societies. We offer a theoretical contribution to consider AI ethics outside of high-level and top-down approaches, based on the distinction between “reality” and “world” from Luc Boltanski. To do so, we provide a new perspective on the debate on AI fairness and show that criticism of ML unfairness is “realist”, in other words, grounded in an already instituted reality based on demographic categories produced by institutions. Second, (...)
  • People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency.Johanna Jauernig, Matthias Uhl & Gari Walkowitz - 2022 - Philosophy and Technology 35 (1):1-25.
    We explore aversion to the use of algorithms in moral decision-making. So far, this aversion has been explained mainly by the fear of opaque decisions that are potentially biased. Using incentivized experiments, we study which role the desire for human discretion in moral decision-making plays. This seems justified in light of evidence suggesting that people might not doubt the quality of algorithmic decisions, but still reject them. In our first study, we found that people prefer humans with decision-making discretion to (...)
  • Politicizing Algorithms by Other Means: Toward Inquiries for Affective Dissensions.Florian Jaton & Dominique Vinck - 2023 - Perspectives on Science 31 (1):84-118.
    In this paper, we build upon Bruno Latour’s political writings to address the current impasse regarding algorithms in public life. We assert that the increasing difficulties in governing algorithms—be they qualified as “machine learning,” “big data,” or “artificial intelligence”—can be related to their current ontological thinness: deriving from constricted views on theoretical practices, algorithms’ standard definition as problem-solving computerized methods provides poor grips for affective dissensions. We then emphasize the role historical and ethnographic studies of algorithms can potentially play (...)
  • Assessing biases, relaxing moralism: On ground-truthing practices in machine learning design and application.Florian Jaton - 2021 - Big Data and Society 8 (1).
    This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents—often gathered in benchmarked repositories called ground-truth datasets—that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices—that fundamentally involve establishing biases to enable learning procedures—can be described by their respective morality, here (...)
  • Examination and diagnosis of electronic patient records and their associated ethics: a scoping literature review.Tim Jacquemard, Colin P. Doherty & Mary B. Fitzsimons - 2020 - BMC Medical Ethics 21 (1):1-13.
    Background: Electronic patient record (EPR) technology is a key enabler for improvements to healthcare service and management. To ensure these improvements and the means to achieve them are socially and ethically desirable, careful consideration of the ethical implications of EPRs is indicated. The purpose of this scoping review was to map the literature related to the ethics of EPR technology. The literature review was conducted to catalogue the prevalent ethical terms, to describe the associated ethical challenges and opportunities, and to identify (...)
  • Comparative legal study on privacy and personal data protection for robots equipped with artificial intelligence: looking at functional and technological aspects.Kaori Ishii - 2019 - AI and Society 34 (3):509-533.
    This paper undertakes a comparative legal study to analyze the challenges of privacy and personal data protection posed by Artificial Intelligence embedded in Robots, and to offer policy suggestions. After identifying the benefits from various AI usages and the risks posed by AI-related technologies, I then analyze legal frameworks and relevant discussions in the EU, USA, Canada, and Japan, and further consider the efforts of Privacy by Design originating in Ontario, Canada. While various AI usages provide great convenience, many issues, (...)
  • Justicia algorítmica y autodeterminación deliberativa.Daniel Innerarity - 2023 - Isegoría 68:e23.
    If democracy consists in enabling all people to have an equal opportunity to influence the decisions that affect them, digital societies must ask how the new environments can be made to support that equality. The first difficulties are conceptual: understanding how the interaction between humans and algorithms is configured, what the learning of these devices consists of, and what the nature of their biases is. Immediately afterwards we run into the unavoidable question (...)
  • Making the black box society transparent.Daniel Innerarity - forthcoming - AI and Society:1-7.
    The growing presence of smart devices in our lives turns all of society into something largely unknown to us. The strategy of demanding transparency stems from the desire to reduce the ignorance to which this automated society seems to condemn us. An evaluation of this strategy first requires that we distinguish the different types of non-transparency. Once we reveal the limits of the transparency needed to confront these devices, the article examines the alternative strategy of explainable artificial intelligence and concludes (...)
  • Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda.Anna Lena Hunkenschroer & Christoph Luetge - 2022 - Journal of Business Ethics 178 (4):977-1007.
    Companies increasingly deploy artificial intelligence technologies in their personnel recruiting and selection process to streamline it, making it faster and more efficient. AI applications can be found in various stages of recruiting, such as writing job ads, screening of applicant resumes, and analyzing video interviews via face recognition software. As these new technologies significantly impact people’s lives and careers but often trigger ethical concerns, the ethicality of these AI applications needs to be comprehensively understood. However, given the novelty of AI (...)
  • A New Argument for No-Fault Compensation in Health Care: The Introduction of Artificial Intelligence Systems.Søren Holm, Catherine Stanton & Benjamin Bartlett - 2021 - Health Care Analysis 29 (3):171-188.
    Artificial intelligence systems advising healthcare professionals will be widely introduced into healthcare settings within the next 5–10 years. This paper considers how this will sit with tort/negligence based legal approaches to compensation for medical error. It argues that the introduction of AI systems will provide an additional argument pointing towards no-fault compensation as the better legal solution to compensation for medical error in modern health care systems. The paper falls into four parts. The first part rehearses the main arguments for (...)
  • Evaluating the prospects for university-based ethical governance in artificial intelligence and data-driven innovation.Christine Hine - 2021 - Research Ethics 17 (4):464-479.
    There has been considerable debate around the ethical issues raised by data-driven technologies such as artificial intelligence. Ethical principles for the field have focused on the need to ensure...
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective.Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawbacks of the substantial opportunities AI systems and applications provide in marketing are ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Ethical Artificial Intelligence in Chemical Research and Development: A Dual Advantage for Sustainability.Erik Hermann, Gunter Hermann & Jean-Christophe Tremblay - 2021 - Science and Engineering Ethics 27 (4):1-16.
    Artificial intelligence can be a game changer to address the global challenge of humanity-threatening climate change by fostering sustainable development. Since chemical research and development lay the foundation for innovative products and solutions, this study presents a novel chemical research and development process backed with artificial intelligence and guiding ethical principles to account for both process- and outcome-related sustainability. Particularly in ethically salient contexts, ethical principles have to accompany research and development powered by artificial intelligence to promote social and environmental (...)
  • Algorithmisches Entscheiden, Ambiguitätstoleranz und die Frage nach dem Sinn.Lisa Herzog - 2021 - Deutsche Zeitschrift für Philosophie 69 (2):197-213.
    In more and more contexts, human decision-making is replaced by algorithmic decision-making. While promising to deliver efficient and objective decisions, algorithmic decision systems have specific weaknesses, some of which are particularly dangerous if data are collected and processed by profit-oriented companies. In this paper, I focus on two problems that are at the root of the logic of algorithmic decision-making: (1) (in)tolerance for ambiguity, and (2) instantiations of Campbell’s law, i. e. of indicators that are used for “social decision-making” being (...)
  • Discrimination in the age of artificial intelligence.Bert Heinrichs - 2022 - AI and Society 37 (1):143-154.
    In this paper, I examine whether the use of artificial intelligence (AI) and automated decision-making (ADM) aggravates issues of discrimination, as has been argued by several authors. For this purpose, I first take up the lively philosophical debate on discrimination and present my own definition of the concept. Equipped with this account, I subsequently review some of the recent literature on the use of AI/ADM and discrimination. I explain how my account of discrimination helps to understand that the general claim in (...)
  • Varieties of artifacts: Embodied, perceptual, cognitive, and affective.Richard Heersmink - 2021 - Topics in Cognitive Science (4):1-24.
    The primary goal of this essay is to provide a comprehensive overview and analysis of the various relations between material artifacts and the embodied mind. A secondary goal of this essay is to identify some of the trends in the design and use of artifacts. First, based on their functional properties, I identify four categories of artifacts co-opted by the embodied mind, namely (1) embodied artifacts, (2) perceptual artifacts, (3) cognitive artifacts, and (4) affective artifacts. These categories can overlap and (...)
  • Algorithms and values in justice and security.Paul Hayes, Ibo van de Poel & Marc Steen - 2020 - AI and Society 35 (3):533-555.
    This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology. Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration (...)
  • Unprepared humanities: A pedagogy (forced) online.Houman Harouni - 2021 - Journal of Philosophy of Education 55 (4-5):633-648.
  • Model Talk: Calculative Cultures in Quantitative Finance.Kristian Bondo Hansen - 2021 - Science, Technology, and Human Values 46 (3):600-627.
    This paper explores how calculative cultures shape perceptions of models and practices of model use in the financial industry. A calculative culture comprises a specific set of practices and norms concerning data and model use in an organizational setting. Drawing on interviews with model users working in algorithmic securities trading, I argue that the introduction of complex machine-learning models changes the dynamics in calculative cultures, which leads to a displacement of human judgment in quantitative finance. In this paper, I distinguish (...)
  • Promises and Pitfalls of Algorithm Use by State Authorities.Maryam Amir Haeri, Kathrin Hartmann, Jürgen Sirsch, Georg Wenzelburger & Katharina A. Zweig - 2022 - Philosophy and Technology 35 (2):1-31.
    Algorithmic systems are increasingly used by state agencies to inform decisions about humans. They produce scores on risks of recidivism in criminal justice, indicate the probability for a job seeker to find a job in the labor market, or calculate whether an applicant should get access to a certain university program. In this contribution, we take an interdisciplinary perspective, provide a bird’s eye view of the different key decisions that are to be taken when state actors decide to use an (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Making sense of algorithms: Relational perception of contact tracing and risk assessment during COVID-19.Ross Graham & Chuncheng Liu - 2021 - Big Data and Society 8 (1).
    Governments and citizens of nearly every nation have been compelled to respond to COVID-19. Many measures have been adopted, including contact tracing and risk assessment algorithms, whereby citizen whereabouts are monitored to trace contact with other infectious individuals in order to generate a risk status via algorithmic evaluation. Based on 38 in-depth interviews, we investigate how people make sense of Health Code, the Chinese contact tracing and risk assessment algorithmic sociotechnical assemblage. We probe how people accept or resist Health Code (...)
  • Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
  • AI Recruitment Algorithms and the Dehumanization Problem.Megan Fritts & Frank Cabrera - 2021 - Ethics and Information Technology (4):1-11.
    According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the hiring (...)
  • An Analysis of the Impact of Brain-Computer Interfaces on Autonomy.Orsolya Friedrich, Eric Racine, Steffen Steinert, Johannes Pömsl & Ralf J. Jox - 2018 - Neuroethics 14 (1):17-29.
    Research conducted on Brain-Computer Interfaces (BCIs) has grown considerably during the last decades. With the help of BCIs, users can gain a wide range of functions. Our aim in this paper is to analyze the impact of BCIs on autonomy. To this end, we introduce three abilities that most accounts of autonomy take to be essential: the ability to use information and knowledge to produce reasons; the ability to ensure that intended actions are effectively realized; and the ability to enact (...)
  • Algorithmic Political Bias—an Entrenchment Concern.Ulrik Franke - 2022 - Philosophy and Technology 35 (3):1-6.
    This short commentary on Peters identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan. Second, following Hacking, the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick, it is argued (...)
  • Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach.Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good (...)
  • Towards Transparency by Design for Artificial Intelligence.Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • Algorithmic affordances for productive resistance.Nancy Ettlinger - 2018 - Big Data and Society 5 (1).
    Although overarching if not foundational conceptualizations of digital governance in the field of critical data studies aptly account for and explain subjection, calculated resistance is left conceptually unattended despite case studies that document instances of resistance. I ask at the outset why conceptualizations of digital governance are so bleak, and I argue that all are underscored implicitly by a Deleuzian theory of desire that overlooks agency, defined here in Foucauldian terms. I subsequently conceptualize digital governance as encompassing subjection as well (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5).
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Massimo Durante & Marcello D'Agostino - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are (...)
  • Ethics of Artificial Intelligence.Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  • The epistemological foundations of data science: a critical analysis.Jules Desai, David Watson, Vincent Wang, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)