Contents
77 entries found (showing 1-50).
  1. If the Difference Principle Won’t Make a Real Difference in Algorithmic Fairness, What Will? [REVIEW] Reuben Binns - manuscript
    In ‘Rawlsian algorithmic fairness and a missing aggregation property of the difference principle’, the authors argue that there is a false assumption in algorithmic fairness interventions inspired by John Rawls’ theory of justice. They argue that applying the difference principle at the level of a local algorithmic decision-making context (what they term a ‘constituent situation’) is neither necessary nor sufficient for the difference principle to be upheld at the aggregate level of society at large. I find these arguments compelling. They (...)
  2. From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. Tianqi Kou - manuscript
    Two goals - improving the replicability and the accountability of Machine Learning research, respectively - have attracted much attention from the AI ethics and Machine Learning communities. Although both goals share the measure of improving transparency, they are discussed in different registers: replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap - holding Machine Learning scientists accountable for Machine Learning harms despite their distance from sites of application - this paper (...)
  3. Algorithmic neutrality. Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
  4. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  5. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness (...)
  6. Algorithmic Fairness Criteria as Evidence. Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second, constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion as a (...)
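A minimal sketch of the two roles Fleisher distinguishes, on synthetic data (the groups, scores, and threshold below are hypothetical, not drawn from the paper):

```python
# Illustrative only: the "evaluation" role of statistical fairness criteria.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # two hypothetical groups
base_rate = np.where(group == 0, 0.3, 0.5)       # unequal base rates
y = rng.random(n) < base_rate                    # true outcomes
score = np.clip(base_rate + rng.normal(0, 0.15, n), 0, 1)
pred = score >= 0.4                              # hypothetical threshold

# Equalized Odds, for instance, asks that error rates match across groups.
for g in (0, 1):
    neg = (group == g) & ~y
    pos = (group == g) & y
    print(f"group {g}: FPR={(pred & neg).sum() / neg.sum():.3f}, "
          f"TPR={(pred & pos).sum() / pos.sum():.3f}")

# In the second, "constraint" role, gaps like these are added as penalties
# or hard constraints in the learning objective, not merely reported.
```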
  7. Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? Joshua Hatherley - forthcoming - Journal of Medical Ethics.
    It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue (...)
  8. A Framework for Assurance Audits of Algorithmic Systems. Benjamin Lange, Khoa Lam, Borhane Blili-Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purposes of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  9. Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  10. The Ideals Program in Algorithmic Fairness. Rush T. Stewart - forthcoming - AI and Society:1-11.
    I consider statistical criteria of algorithmic fairness from the perspective of the ideals of fairness to which these criteria are committed. I distinguish and describe three theoretical roles such ideals might play. The usefulness of this program is illustrated by taking Base Rate Tracking and its ratio variant as a case study. I identify and compare the ideals of these two criteria, then consider them in each of the aforementioned three roles for ideals. This ideals program may present a way (...)
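For reference, Base Rate Tracking and its ratio variant are standardly stated along the following lines (our notation and reconstruction, not a quotation from the paper), for a risk score S, outcome Y, and groups G1, G2:

```latex
% Base Rate Tracking (difference form): group score gaps equal base rate gaps.
\mathbb{E}[S \mid G_1] - \mathbb{E}[S \mid G_2]
  = \Pr(Y = 1 \mid G_1) - \Pr(Y = 1 \mid G_2)

% Ratio variant: group score ratios equal base rate ratios.
\frac{\mathbb{E}[S \mid G_1]}{\mathbb{E}[S \mid G_2]}
  = \frac{\Pr(Y = 1 \mid G_1)}{\Pr(Y = 1 \mid G_2)}
```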
  11. An Impossibility Theorem for Base Rate Tracking and Equalized Odds. Rush Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
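A numerical illustration of the tension (a toy check, not the paper's proof; the populations and scores are hypothetical):

```python
# With unequal base rates, a score that tracks base rates exactly can
# produce maximal Equalized Odds violations once thresholded.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
group = rng.integers(0, 2, n)
p = np.where(group == 0, 0.2, 0.6)   # hypothetical unequal base rates
y = rng.random(n) < p

score = p            # E[S|G] = Pr(Y=1|G), so Base Rate Tracking holds exactly
pred = score >= 0.5  # thresholded decision

for g in (0, 1):
    neg = (group == g) & ~y
    pos = (group == g) & y
    print(f"group {g}: FPR={(pred & neg).sum() / neg.sum():.2f}, "
          f"TPR={(pred & pos).sum() / pos.sum():.2f}")
# group 0: FPR=0.00, TPR=0.00; group 1: FPR=1.00, TPR=1.00
```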
  12. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse. Emily Sullivan & Philippe Verreault-Julien - forthcoming - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES’22).
    People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue that the capability (...)
  13. Informational richness and its impact on algorithmic fairness. Marcello Di Bello & Ruobin Gong - 2025 - Philosophical Studies 182 (1):25-53.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
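A hedged sketch of the kind of simulation the abstract describes (this is not the authors' code; the data-generating process is invented for illustration): vary how many features the predictor sees and track accuracy alongside a between-group false-positive-rate gap.

```python
import numpy as np

rng = np.random.default_rng(2)
n, total = 20_000, 8
X = rng.normal(0, 1, (n, total))
# Hypothetical group membership, correlated with the first feature (a proxy).
group = (X[:, 0] + rng.normal(0, 1, n)) > 0
y = rng.random(n) < 1 / (1 + np.exp(-X.sum(axis=1)))  # outcome uses all features

for k in (1, 4, 8):                  # informational richness: k features seen
    pred = X[:, :k].sum(axis=1) > 0  # naive predictor from the first k features
    acc = (pred == y).mean()
    fprs = [(pred & ~y & (group == g)).sum() / (~y & (group == g)).sum()
            for g in (False, True)]
    print(f"{k} features: accuracy={acc:.3f}, FPR gap={abs(fprs[0] - fprs[1]):.3f}")
```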
  14. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations. Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical concerns (...)
  15. Generative AI and the Future of Democratic Citizenship. Paul Formosa, Bhanuraj Kashyap & Siavosh Sahebi - 2024 - Digital Government: Research and Practice.
    Generative AI technologies have the potential to be socially and politically transformative. In this paper, we focus on exploring the potential impacts that Generative AI could have on the functioning of our democracies and the nature of citizenship. We do so by drawing on accounts of deliberative democracy and the deliberative virtues associated with it, as well as the reciprocal impacts that social media and Generative AI will have on each other and the broader information landscape. Drawing on this background (...)
  16. The Many Meanings of Vulnerability in the AI Act and the One Missing. Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  17. Making a Murderer: How Risk Assessment Tools May Produce Rather Than Predict Criminal Behavior. Donal Khosrowi & Philippe van Basshuysen - 2024 - American Philosophical Quarterly 61 (4):309-325.
    Algorithmic risk assessment tools, such as COMPAS, are increasingly used in criminal justice systems to predict the risk that defendants will reoffend in the future. This paper argues that these tools may not only predict recidivism, but may themselves causally induce recidivism through self-fulfilling predictions. We argue that such “performative” effects can yield severe harms both to individuals and to society at large, which raise epistemic-ethical responsibilities on the part of developers and users of risk assessment tools. To meet these (...)
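A toy model of the mechanism (our sketch, not the authors' model; the effect size is an assumption): the high-risk label itself raises reoffending probability, and the observed data then appear to vindicate the label.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
latent = rng.random(n)                 # hypothetical underlying risk
labeled_high = latent > 0.7            # tool flags the top 30% as high-risk
label_effect = 0.15                    # assumed causal effect of the label itself
p_reoffend = np.clip(0.4 * latent + label_effect * labeled_high, 0, 1)
reoffend = rng.random(n) < p_reoffend

for name, m in (("low-risk label", ~labeled_high), ("high-risk label", labeled_high)):
    print(f"{name}: observed recidivism {reoffend[m].mean():.2f}")
# The observed gap mixes true risk differences with the label's own causal
# contribution - the confound the paper flags.
```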
  18. Fair equality of chances for prediction-based decisions. Michele Loi, Anders Herlitz & Hoda Heidari - 2024 - Economics and Philosophy 40 (3):557-580.
    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified (...)
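One natural way to render the principle formally (our reconstruction, not necessarily the authors' notation): let D be the features that justify inequalities in outcomes and A a socially salient, morally arbitrary trait. A decision rule satisfies the principle only if, for all values d and all a, a':

```latex
% Individuals alike in the justifying features D face the same statistical
% prospects of benefit (or harm), whatever their arbitrary trait A.
\Pr(\text{benefit} \mid D = d,\, A = a) = \Pr(\text{benefit} \mid D = d,\, A = a')
```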
  19. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression. Martin Miragoli - 2024 - Episteme: A Journal of Social Epistemology:1-19.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  20. New Possibilities for Fair Algorithms. Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-17.
    We introduce a fairness criterion that we call Spanning. Spanning (i) is implied by Calibration, (ii) retains interesting properties of Calibration that some other ways of relaxing that criterion do not, and (iii) unlike Calibration and other prominent ways of weakening it, is consistent with Equalized Odds outside of trivial cases.
  21. Spanning in and Spacing out? A Reply to Eva. Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-4.
    We reply to Eva's comment on our "New Possibilities for Fair Algorithms," comparing and contrasting our Spanning criterion with his suggested Spacing criterion.
  22. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  23. Big Data as Tracking Technology and Problems of the Group and its Members. Haleh Asgarinia - 2023 - In Kevin Macnish & Adam Henschke (eds.), The Ethics of Surveillance in Times of Emergency. Oxford University Press. pp. 60-75.
    Digital data help data scientists and epidemiologists track and predict outbreaks of disease. Mobile phone GPS data, social media data, and other information updates on the progress of epidemics are used by epidemiologists to recognize disease spread among specific groups of people. Targeting groups as potential carriers of a disease, rather than addressing individuals as patients, risks causing harm to groups. While there are rules and obligations at the level of the individual, we have to reach a (...)
  24. The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, which he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
  25. Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian considerations. We (...)
  26. An Epistemic Lens on Algorithmic Fairness. Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein makes two key contributions that help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic (...)
  27. Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John London (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
  28. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  29. Measurement invariance, selection invariance, and fair selection revisited. Remco Heesen & Jan-Willem Romeijn - 2023 - Psychological Methods 28 (3):687-690.
    This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing.
  30. Are Algorithms Value-Free? Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
  31. Self-fulfilling Prophecy in Practical and Automated Prediction. Owen C. King & Mayli Mertens - 2023 - Ethical Theory and Moral Practice 26 (1):127-152.
    A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing (...)
  32. Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  33. Multiplicative Metric Fairness Under Composition. Milan Mossé - 2023 - Symposium on Foundations of Responsible Computing 4.
    Dwork, Hardt, Pitassi, Reingold, & Zemel [6] introduced two notions of fairness, each of which is meant to formalize the notion of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers which are fair when considered in isolation [3, 4, 7, 8, 12] and in work studying the relationship between fair treatment of individuals and fair treatment (...)
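The additive notion referenced here is Dwork et al.'s Lipschitz condition; the multiplicative form below is our reconstruction, echoing the differential-privacy-style bound, and may differ in detail from the paper's definition:

```latex
% Additive metric fairness (Lipschitz): outcome distributions for
% individuals x and y differ by at most their distance d(x, y).
D\bigl(f(x), f(y)\bigr) \le d(x, y)

% Multiplicative metric fairness (reconstruction): outcome probabilities
% differ by at most a multiplicative factor e^{d(x, y)}.
\Pr[f(x) = o] \le e^{d(x, y)} \cdot \Pr[f(y) = o]
```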
  34. Should Algorithms that Predict Recidivism Have Access to Race? Duncan Purves & Jeremy Davis - 2023 - American Philosophical Quarterly 60 (2):205-220.
    Recent studies have shown that recidivism scoring algorithms like COMPAS have significant racial bias: Black defendants are roughly twice as likely as white defendants to be mistakenly classified as medium- or high-risk. This has led some to call for abolishing COMPAS. But many others have argued that algorithms should instead be given access to a defendant's race, which, perhaps counterintuitively, is likely to improve outcomes. This approach can involve either establishing race-sensitive risk thresholds or distinct racial ‘tracks’. Is there a (...)
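A hedged sketch (hypothetical data and score) of the race-sensitive-thresholds approach the abstract mentions: choose a per-group cutoff so that each group's false-positive rate hits the same target.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
group = rng.integers(0, 2, n)
y = rng.random(n) < np.where(group == 0, 0.25, 0.45)  # unequal base rates
score = 0.4 * y + 0.5 * rng.random(n) + 0.1 * group   # group-shifted noisy score

target_fpr = 0.10
for g in (0, 1):
    negatives = score[(group == g) & ~y]
    cutoff = np.quantile(negatives, 1 - target_fpr)   # group-specific threshold
    print(f"group {g}: cutoff={cutoff:.3f}, "
          f"FPR={(negatives >= cutoff).mean():.3f}")
# Different cutoffs, equal false-positive rates across groups.
```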
  35. Algorithmic Indirect Discrimination, Fairness, and Harm. Frej Klem Thomsen - 2023 - AI and Ethics.
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It next introduces three (...)
  36. Three Lessons For and From Algorithmic Discrimination. Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, and that it is (...)
  37. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias. P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics concerns the need to create fair, transparent, and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions of (...)
  38. Just Machines. Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed to Kleinberg (...)
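One standard way to see the tension these impossibility results formalize (a well-known confusion-matrix identity, not this paper's derivation): writing p for a group's base rate,

```latex
% From FP = TP(1 - PPV)/PPV, TP = TPR \cdot pN, and N_{neg} = (1 - p)N:
\mathrm{FPR} = \frac{p}{1 - p} \cdot \frac{1 - \mathrm{PPV}}{\mathrm{PPV}} \cdot \mathrm{TPR}
```

So if two groups differ in base rate p, equal PPV across groups (a calibration-style parity) together with equal TPR forces unequal FPRs; the criteria can jointly hold only in degenerate cases such as equal base rates or perfect prediction.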
  39. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  40. Re-assessing Google as Epistemic Tool in the Age of Personalisation. Tanya de Villiers-Botha - 2022 - The Proceedings of SACAIR2022 Online Conference, the 3rd Southern African Conference for Artificial Intelligence Research.
    Google Search is arguably one of the primary epistemic tools in use today, with the lion’s share of the search-engine market globally. Scholarship on countering the current scourge of misinformation often recommends “digital literacy” where internet users, especially those who get their information from social media, are encouraged to fact-check such information using reputable sources. Given our current internet-based epistemic landscape, and Google’s dominance of the internet, it is very likely that such acts of epistemic hygiene will take (...)
  41. Siri, Stereotypes, and the Mechanics of Sexism. Alexis Elder - 2022 - Feminist Philosophy Quarterly 8 (3).
    Feminized AIs designed for in-home verbal assistance are often subjected to gendered verbal abuse by their users. I survey a variety of features contributing to this phenomenon—from financial incentives for businesses to build products likely to provoke gendered abuse, to the impact of such behavior on household members—and identify a potential worry for attempts to criticize the phenomenon; while critics may be tempted to argue that engaging in gendered abuse of AI increases the chances that one will direct this abuse (...)
  42. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  43. When Gig Workers Become Essential: Leveraging Customer Moral Self-Awareness Beyond COVID-19. Julian Friedland - 2022 - Business Horizons 66 (2):181-190.
    The COVID-19 pandemic has intensified the extent to which economies in the developed and developing world rely on gig workers to perform essential tasks such as health care, personal transport, food and package delivery, and ad hoc tasking services. As a result, workers who provide such services are no longer perceived as mere low-skilled laborers, but as essential workers who fulfill a crucial role in society. The newly elevated moral and economic status of these workers increases consumer demand for corporate (...)
  44. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. (...)
  45. Algorithmic Bias and Risk Assessments: Lessons from Practice. Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of assessments: (...)
  46. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy. Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be handled (...)
  47. The Limits of Reallocative and Algorithmic Policing. Luke William Hunt - 2022 - Criminal Justice Ethics 41 (1):1-24.
    Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic (...)
  48. Rule by Automation: How Automated Decision Systems Promote Freedom and Equality. Athmeya Jayaram & Jacob Sparks - 2022 - Moral Philosophy and Politics 9 (2):201-218.
    Using automated systems to avoid the need for human discretion in government contexts – a scenario we call ‘rule by automation’ – can help us achieve the ideal of a free and equal society. Drawing on relational theories of freedom and equality, we explain how rule by automation is a more complete realization of the rule of law and why thinkers in these traditions have strong reasons to support it. Relational theories are based on the absence of human domination and (...)
  49. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy. Atoosa Kasirzadeh - 2022 - AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stakes decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by - or substantively rooted in - ideals (...)
  50. Algorithmic Microaggressions. Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and (...)