Contents
Material to categorize
  1. Book review: Coeckelbergh, Mark (2022): The political philosophy of AI. [REVIEW] Michael W. Schmidt - 2024 - TATuP - Zeitschrift Für Technikfolgenabschätzung in Theorie Und Praxis 33 (1):68–69.
    Mark Coeckelbergh starts his book with a very powerful picture based on a real incident: On the 9th of January 2020, Robert Williams was wrongfully arrested by Detroit police officers in front of his two young daughters, wife and neighbors. For 18 hours the police would not disclose the grounds for his arrest (American Civil Liberties Union 2020; Hill 2020). The decision to arrest him was primarily based on a facial recognition algorithm which matched Mr. Williams’ driving license photo with (...)
  2. An Introduction to the Five Premises of Human–Machine Interaction in the Age of Artificial Intelligence.Manh-Tung Ho & T. Hong-Kong Nguyen - manuscript
    This article introduces five premises with the aim of raising awareness of the relationship between humans and machines in a context where technology increasingly changes everyday life. The five premises concern: social, cultural, political, and historical structures; human autonomy and freedom; the philosophical and humanistic foundations of humanity; the (...)
Algorithmic Fairness
  1. Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
  2. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression.Martin Miragoli - forthcoming - Episteme: A Journal of Social Epistemology.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  3. An Impossibility Theorem for Base Rate Tracking and Equalised Odds.Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
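    The Calibration/Equalised Odds conflict that entry 3 builds on can be made concrete with a small worked example. The population, scores, and numbers below are invented for illustration (they are not the construction from Stewart et al.): when two groups have different base rates, a score that is calibrated within each group still yields unequal true- and false-positive rates at a non-trivial threshold.

    ```python
    from fractions import Fraction as F

    # Hypothetical two-group population. Each person receives a risk score in
    # {1/5, 4/5}; within every (group, score) cell the actual rate of positives
    # equals the score, so the score is calibrated within both groups, yet the
    # groups have different base rates (A: 1/2, B: 8/25).
    # (group, score) -> (share of that group, P(Y = 1 | group, score))
    cells = {
        ("A", F(4, 5)): (F(1, 2), F(4, 5)),
        ("A", F(1, 5)): (F(1, 2), F(1, 5)),
        ("B", F(4, 5)): (F(1, 5), F(4, 5)),
        ("B", F(1, 5)): (F(4, 5), F(1, 5)),
    }

    def rates(group, threshold=F(1, 2)):
        """TPR and FPR for the rule 'flag anyone whose score exceeds threshold'."""
        tp = fp = pos = neg = F(0)
        for (g, s), (share, p_pos) in cells.items():
            if g != group:
                continue
            pos += share * p_pos
            neg += share * (1 - p_pos)
            if s > threshold:
                tp += share * p_pos
                fp += share * (1 - p_pos)
        return tp / pos, fp / neg

    tpr_a, fpr_a = rates("A")
    tpr_b, fpr_b = rates("B")
    print(f"A: TPR={tpr_a}, FPR={fpr_a}")  # A: TPR=4/5, FPR=1/5
    print(f"B: TPR={tpr_b}, FPR={fpr_b}")  # B: TPR=1/2, FPR=1/17
    ```

    Calibration holds in both groups by construction, but Equalised Odds fails: the flagging rule makes errors at different rates for the two groups, which is the non-trivial conflict the impossibility literature generalises.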
  4. Big Data as Tracking Technology and Problems of the Group and its Members.Haleh Asgarinia - 2023 - In Kevin Macnish & Adam Henschke (eds.), The Ethics of Surveillance in Times of Emergency. Oxford University Press. pp. 60-75.
    Digital data help data scientists and epidemiologists track and predict outbreaks of disease. Mobile phone GPS data, social media data, or other forms of information updates such as the progress of epidemics are used by epidemiologists to recognize disease spread among specific groups of people. Targeting groups as potential carriers of a disease, rather than addressing individuals as patients, risks causing harm to groups. While there are rules and obligations at the level of the individual, we have to reach a (...)
  5. Algorithmic Profiling as a Source of Hermeneutical Injustice.Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  6. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein has two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic (...)
  7. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. New York: Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
  8. Fair equality of chances for prediction-based decisions.Michele Loi, Anders Herlitz & Hoda Heidari - forthcoming - Economics and Philosophy:1-24.
    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified (...)
  9. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters.Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with backgrounds in (...)
  10. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  11. Informational richness and its impact on algorithmic fairness.Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
  12. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  13. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  14. Measurement invariance, selection invariance, and fair selection revisited.Remco Heesen & Jan-Willem Romeijn - 2023 - Psychological Methods 28 (3):687-690.
    This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing.
  15. Should Algorithms that Predict Recidivism Have Access to Race?Duncan Purves & Jeremy Davis - 2023 - American Philosophical Quarterly 60 (2):205-220.
    Recent studies have shown that recidivism scoring algorithms like COMPAS have significant racial bias: Black defendants are roughly twice as likely as white defendants to be mistakenly classified as medium- or high-risk. This has led some to call for abolishing COMPAS. But many others have argued that algorithms should instead be given access to a defendant's race, which, perhaps counterintuitively, is likely to improve outcomes. This approach can involve either establishing race-sensitive risk thresholds, or distinct racial ‘tracks’. Is there a (...)
  16. Three Lessons For and From Algorithmic Discrimination.Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, and that it is (...)
  17. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Bias infects the algorithms that wield increasing control over our lives. Predictive policing systems overestimate crime in communities of color; hiring algorithms dock qualified female candidates; and facial recognition software struggles to recognize dark-skinned faces. Algorithmic bias has received significant attention. Algorithmic neutrality, in contrast, has been largely neglected. Algorithmic neutrality is my topic. I take up three questions. What is algorithmic neutrality? Is algorithmic neutrality possible? When we have algorithmic neutrality in mind, what can we learn about algorithmic bias? (...)
  18. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy.Atoosa Kasirzadeh - 2022 - AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- ideals (...)
  19. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Redefining the psychological contract in the digital era: issues for research and practice. Cham, Switzerland: pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
  20. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be handled (...)
  21. Siri, Stereotypes, and the Mechanics of Sexism.Alexis Elder - 2022 - Feminist Philosophy Quarterly 8 (3).
    Feminized AIs designed for in-home verbal assistance are often subjected to gendered verbal abuse by their users. I survey a variety of features contributing to this phenomenon—from financial incentives for businesses to build products likely to provoke gendered abuse, to the impact of such behavior on household members—and identify a potential worry for attempts to criticize the phenomenon; while critics may be tempted to argue that engaging in gendered abuse of AI increases the chances that one will direct this abuse (...)
  22. Algorithmic Microaggressions.Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and (...)
  23. Re-assessing Google as Epistemic Tool in the Age of Personalisation.Tanya de Villiers-Botha - 2022 - The Proceedings of SACAIR2022 Online Conference, the 3rd Southern African Conference for Artificial Intelligence Research.
    Google Search is arguably one of the primary epistemic tools in use today, with the lion’s share of the search-engine market globally. Scholarship on countering the current scourge of misinformation often recommends “digital literacy” where internet users, especially those who get their information from social media, are encouraged to fact-check such information using reputable sources. Given our current internet-based epistemic landscape, and Google’s dominance of the internet, it is very likely that such acts of epistemic hygiene will take (...)
  24. Self-fulfilling Prophecy in Practical and Automated Prediction.Owen C. King & Mayli Mertens - 2023 - Ethical Theory and Moral Practice 26 (1):127-152.
    A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing (...)
  25. Algorithmic Indirect Discrimination, Fairness, and Harm.Frej Klem Thomsen - 2023 - AI and Ethics.
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It next introduces three (...)
  26. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
  27. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench (...)
  28. Algorithmic Bias and Risk Assessments: Lessons from Practice.Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of as-sessments: (...)
  29. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  30. Egalitarian Machine Learning.Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian considerations. We (...)
  31. Social Media and its Negative Impacts on Autonomy.Siavosh Sahebi & Paul Formosa - 2022 - Philosophy and Technology 35 (3):1-24.
    How social media impacts the autonomy of its users is a topic of increasing focus. However, much of the literature that explores these impacts fails to engage in depth with the philosophical literature on autonomy. This has resulted in a failure to consider the full range of impacts that social media might have on autonomy. A deeper consideration of these impacts is thus needed, given the importance of both autonomy as a moral concept and social media as a feature of (...)
  32. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
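    The advisory system described in entry 32 is built on fuzzy cognitive maps. As a rough sketch of how such a map operates in general (the concepts, weights, and squashing function below are invented for illustration and are not the authors' trained system): concepts hold activations, signed edge weights push connected concepts up or down, and the map is iterated until activations settle.

    ```python
    import math

    # Toy fuzzy cognitive map: activations in (0, 1), signed edge weights
    # express how strongly one concept pushes another. All names and numbers
    # here are hypothetical, chosen only to show the update mechanism.
    concepts = ["patient_refuses", "expected_benefit", "recommend_treatment"]
    weights = {  # (source, target): influence
        ("patient_refuses", "recommend_treatment"): -0.9,  # autonomy pushes against
        ("expected_benefit", "recommend_treatment"): 0.7,  # beneficence pushes for
    }

    def squash(x):
        """Sigmoid keeping activations in (0, 1); steepness 5, centred at 0.5."""
        return 1 / (1 + math.exp(-5 * (x - 0.5)))

    def run(state, steps=25):
        """Iterate the standard FCM update: A_i <- f(A_i + sum_j w_ji * A_j)."""
        for _ in range(steps):
            new = dict(state)
            for c in concepts:
                incoming = [state[s] * w for (s, t), w in weights.items() if t == c]
                if incoming:  # concepts with no incoming edges stay fixed
                    new[c] = squash(state[c] + sum(incoming))
            state = new
        return state

    state = run({"patient_refuses": 1.0, "expected_benefit": 0.8,
                 "recommend_treatment": 0.5})
    print(round(state["recommend_treatment"], 3))
    ```

    With these invented weights a firm refusal outweighs the expected benefit, so the "recommend_treatment" activation settles well below its neutral starting point; training, as the abstract notes, consists in learning such weights from cases rather than hand-setting them.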
  33. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse.Emily Sullivan & Philippe Verreault-Julien - forthcoming - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES’22).
    People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue that the capability (...)
  34. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  35. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
  36. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  37. When Gig Workers Become Essential: Leveraging Customer Moral Self-Awareness Beyond COVID-19.Julian Friedland - 2022 - Business Horizons 66 (2):181-190.
    The COVID-19 pandemic has intensified the extent to which economies in the developed and developing world rely on gig workers to perform essential tasks such as health care, personal transport, food and package delivery, and ad hoc tasking services. As a result, workers who provide such services are no longer perceived as mere low-skilled laborers, but as essential workers who fulfill a crucial role in society. The newly elevated moral and economic status of these workers increases consumer demand for corporate (...)
  38. Identity and the Limits of Fair Assessment.Rush T. Stewart - 2022 - Journal of Theoretical Politics 34 (3):415-442.
    In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate such bias across (...)
  39. Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
  40. The Limits of Reallocative and Algorithmic Policing.Luke William Hunt - 2022 - Criminal Justice Ethics 41 (1):1-24.
    Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic (...)
  41. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and social explainability can (...)
  42. Rule by Automation: How Automated Decision Systems Promote Freedom and Equality.Athmeya Jayaram & Jacob Sparks - 2022 - Moral Philosophy and Politics 9 (2):201-218.
    Using automated systems to avoid the need for human discretion in government contexts – a scenario we call ‘rule by automation’ – can help us achieve the ideal of a free and equal society. Drawing on relational theories of freedom and equality, we explain how rule by automation is a more complete realization of the rule of law and why thinkers in these traditions have strong reasons to support it. Relational theories are based on the absence of human domination and (...)
  43. On Algorithmic Fairness in Medical Practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. (...)
  44. From human resources to human rights: Impact assessments for hiring algorithms.Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. (...)
  45. Fair machine learning under partial compliance.Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the allocation outcomes? (...)
  46. AI Recruitment Algorithms and the Dehumanization Problem.Megan Fritts & Frank Cabrera - 2021 - Ethics and Information Technology (4):1-11.
    According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the hiring (...)
  47. Proceed with Caution.Annette Zimmermann & Chad Lee-Stronach - 2021 - Canadian Journal of Philosophy (1):6-25.
    It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely (...)
  48. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted toolkits that (...)