Results for 'algorithmic bias'

998 results found
  1. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John London (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations of various understandings of bias, ranging from neutral deviations from a standard to morally problematic instances of injustice due to prejudice, discrimination, and disparate treatment. This terminological confusion impedes efforts to address clear cases of discrimination.
    In this paper, we examine the promises and challenges of different approaches to disambiguating bias and designing for justice. While both approaches aid in understanding and addressing clear algorithmic harms, we argue that they also risk being leveraged in ways that ultimately deflect accountability from those building and deploying these systems. Applying this analysis to recent examples of generative AI, our argument highlights unseen dangers in current methods of evaluating algorithmic bias and points to ways to redirect approaches to addressing bias in generative AI at its early stages in ways that can more robustly meet the demands of justice.
  2. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias (...)
    2 citations
  3. Algorithmic Bias and Risk Assessments: Lessons from Practice.Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of (...)
    1 citation
  4. The Bias Dilemma: The Ethics of Algorithmic Bias in Natural-Language Processing.Oisín Deery & Katherine Bailey - 2022 - Feminist Philosophy Quarterly 8 (3).
    Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, we would thereby often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we can perpetuate or amplify bias, even if we retain relevant descriptively or ethically useful information. Understanding this (...)
  5. Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
    4 citations
  6. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench political identities, I contend that they may often produce the opposite result. They can lead people to change in ways that disconfirm the classifications. Consequently and counterintuitively, algorithmic political bias can in fact decrease political entrenchment and polarization.
  7. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over which jobs we get, whether we're granted loans, what information we're exposed to online, and so on. Algorithms can, and often do, wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has gone largely neglected. I investigate three questions about algorithmic neutrality: What is it? Is it possible? And when we have it in mind, what can we learn (...)
    1 citation
  8. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI ethics concerns the necessity of creating fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions (...)
  9. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and, with the use of machine learning techniques, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    26 citations
  10. Algorithmic Microaggressions.Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, (...)
  11. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions (...)
    2 citations
  12. Algorithms, Agency, and Respect for Persons.Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four (...)
    4 citations
  13. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of (...)
    33 citations (see the illustrative sketch below)
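    The 'statistical criteria' entry 13 discusses are group-level relationships between an algorithm's predictions and actual outcomes. As a rough illustration only (the group labels, toy counts, and the particular choice of criteria below are invented for the example, not taken from Hedden's paper), the sketch computes three commonly discussed parity-style criteria from labeled predictions:

```python
from collections import defaultdict

def group_rates(records):
    """Compute common statistical fairness criteria per group.

    records: iterable of (group, prediction, outcome) with binary
    prediction/outcome. Returns per-group positive rate (demographic
    parity), TPR/FPR (equalized odds), and PPV (predictive parity).
    """
    stats = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, pred, actual in records:
        key = ("tp" if actual else "fp") if pred else ("fn" if actual else "tn")
        stats[group][key] += 1

    rates = {}
    for g, s in stats.items():
        n = sum(s.values())
        rates[g] = {
            "positive_rate": (s["tp"] + s["fp"]) / n,    # demographic parity
            "tpr": s["tp"] / max(s["tp"] + s["fn"], 1),  # equalized odds (true positive rate)
            "fpr": s["fp"] / max(s["fp"] + s["tn"], 1),  # equalized odds (false positive rate)
            "ppv": s["tp"] / max(s["tp"] + s["fp"], 1),  # predictive parity
        }
    return rates

# Invented toy data: (group, predicted_positive, actual_positive)
data = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1),
        ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 0)]
for group, r in group_rates(data).items():
    print(group, r)
```

    Comparing these per-group numbers is how parity-style criteria are typically operationalized; which, if any, of these equalities fairness genuinely requires is exactly what the paper interrogates.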
  14. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, which he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee (...)
    1 citation
  15. Should Algorithms that Predict Recidivism Have Access to Race?Duncan Purves & Jeremy Davis - 2023 - American Philosophical Quarterly 60 (2):205-220.
    Recent studies have shown that recidivism scoring algorithms like COMPAS have significant racial bias: Black defendants are roughly twice as likely as white defendants to be mistakenly classified as medium- or high-risk. This has led some to call for abolishing COMPAS. But many others have argued that algorithms should instead be given access to a defendant's race, which, perhaps counterintuitively, is likely to improve outcomes. This approach can involve either establishing race-sensitive risk thresholds, or distinct racial ‘tracks’. Is there (...)
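    The 'race-sensitive risk thresholds' mentioned in the entry above can be made concrete with a small sketch. The cutoff values, group names, and scores below are invented for illustration and are not COMPAS's or the authors' proposal; the sketch only shows the mechanism of group-specific thresholds:

```python
def classify(score, group, thresholds):
    """Label a risk score using a group-specific cutoff.

    thresholds maps each group to its own cutoff; scores at or above
    the cutoff are labeled high risk. Values here are illustrative only.
    """
    return "high risk" if score >= thresholds[group] else "low risk"

# Hypothetical cutoffs, imagined as chosen to balance error rates across groups.
thresholds = {"group_1": 0.55, "group_2": 0.70}

for group, score in [("group_1", 0.60), ("group_2", 0.60)]:
    print(group, score, "->", classify(score, group, thresholds))
# The same score of 0.60 is labeled high risk for group_1 but low risk
# for group_2: the core mechanism of threshold-based race sensitivity.
```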
  16. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in (...)
  17. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein makes two key contributions toward reframing and addressing some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches (...)
  18. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Blili-Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  19. We might be afraid of black-box algorithms.Carissa Véliz, Milo Phillips-Brown, Carina Prunkl & Ted Lechterman - 2021 - Journal of Medical Ethics 47.
    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. In ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma seek to allay such fears. While some of their arguments are compelling, we still see reasons for fear.
    4 citations
  20. Informational richness and its impact on algorithmic fairness.Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer (...)
    1 citation
  21. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  22. From human resources to human rights: Impact assessments for hiring algorithms.Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include five human rights: to work, to equality and nondiscrimination, to privacy, to free expression, and to free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two (...)
    5 citations
  23. The Limits of Reallocative and Algorithmic Policing.Luke William Hunt - 2022 - Criminal Justice Ethics 41 (1):1-24.
    Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic model: subsuming policing into technocratic judgements encoded in algorithms through strategies such as predictive policing (mitigating archetypal bias). This paper begins by considering the normative basis of the relationship between political community and policing. It then examines the justification of reallocative and algorithmic models in light of the relationship between political community and police. Given commitments to the depth and distribution of security—and proscriptions against dehumanizing strategies—the paper concludes that a nonideal-theory priority rule promoting respect for personhood (manifest in community and dignity-promoting policing strategies) is a necessary condition for the justification of the above models.
  24. CG-Art: demystifying the anthropocentric bias of artistic creativity.Leonardo Arriagada - 2020 - Connection Science 32 (4):398-405.
    This aesthetic discussion examines, in a philosophical-scientific way, the relationship between computation and artistic creativity. Currently, there is criticism of the possibility that an algorithm could be artistically creative. Supporting this, the term computer-generated art (CG-Art), as defined by Margaret Boden, would seem to have no exponents yet. Moreover, it has been pointed out that, rather than being a matter of primitive technological development, CG-Art would have in its very foundations the inability to exist. This is because art is considered (...)
  25. How to Save Face & the Fourth Amendment: Developing an Algorithmic Auditing and Accountability Industry for Facial Recognition Technology in Law Enforcement.Patrick Lin - 2023 - Albany Law Journal of Science and Technology 33 (2):189-235.
    For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make (...)
  26. From the Eyeball Test to the Algorithm — Quality of Life, Disability Status, and Clinical Decision Making in Surgery.Charles Binkley, Joel Michael Reynolds & Andrew Shuman - 2022 - New England Journal of Medicine 387 (14):1325-1328.
    Qualitative evidence concerning the relationship between QoL and a wide range of disabilities suggests that subjective judgments regarding other people’s QoL are wrong more often than not and that such judgments by medical practitioners in particular can be biased. Guided by their desire to do good and avoid harm, surgeons often rely on "the eyeball test" to decide whether a patient will or will not benefit from surgery. But the eyeball test can easily harbor a range of implicit judgments and (...)
  27. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters.Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular epistemic and ethical ones. In this paper the authors, with backgrounds in (...)
  28. What's Fair about Individual Fairness?Will Fleisher - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I (...)
    1 citation (see the illustrative sketch below)
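    The distance-metric account of individual fairness that entry 28 discusses is standardly formalized in the technical literature (following Dwork et al.) as a Lipschitz-style condition: outputs for any two individuals should differ no more than the individuals themselves do. A minimal sketch of checking that condition, with invented metrics and a toy model, might look like this:

```python
import itertools

def violates_individual_fairness(individuals, predict, d_in, d_out, L=1.0):
    """Return pairs breaking the condition d_out(f(x), f(y)) <= L * d_in(x, y).

    individuals: list of feature values; predict: model scoring function;
    d_in / d_out: distance metrics over inputs and outputs. Choosing d_in
    (what makes individuals 'similar') carries the normative weight.
    """
    return [
        (x, y)
        for x, y in itertools.combinations(individuals, 2)
        if d_out(predict(x), predict(y)) > L * d_in(x, y)
    ]

# Toy setup: one-feature individuals, a step-function model, absolute-value metrics.
people = [0.49, 0.51, 0.9]
model = lambda x: 1.0 if x >= 0.5 else 0.0
bad_pairs = violates_individual_fairness(people, model,
                                         d_in=lambda a, b: abs(a - b),
                                         d_out=lambda a, b: abs(a - b))
print(bad_pairs)
# Both pairs straddling the step at 0.5 violate the condition: the model
# treats near-identical individuals maximally differently.
```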
  29. Identity and the Limits of Fair Assessment.Rush T. Stewart - 2022 - Journal of Theoretical Politics 34 (3):415-442.
    In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate such (...)
    2 citations
  30. Iudicium ex Machinae – The Ethical Challenges of Automated Decision-Making in Criminal Sentencing.Frej Thomsen - 2022 - In Julian Roberts & Jesper Ryberg (eds.), Principled Sentencing and Artificial Intelligence. Oxford University Press.
    Automated decision making for sentencing is the use of a software algorithm to analyse a convicted offender’s case and deliver a sentence. This chapter reviews the moral arguments for and against employing automated decision making for sentencing and finds that its use is in principle morally permissible. Specifically, it argues that well-designed automated decision making for sentencing will better approximate the just sentence than human sentencers. Moreover, it dismisses common concerns about transparency, privacy and bias as unpersuasive or inapplicable. (...)
    1 citation
  31. É Possível Evitar Vieses Algorítmicos? [Is It Possible to Avoid Algorithmic Biases?].Carlos Barth - 2021 - Revista de Filosofia Moderna E Contemporânea 8 (3):39-68.
    Artificial intelligence (AI) techniques are used to model human activities and predict behavior. Such systems have shown race, gender and other kinds of bias, which are typically understood as technical problems. Here we try to show that: (1) to get rid of such biases, we need a system that can understand the structure of human activities; and (2) to create such a system, we need to solve foundational problems of AI, such as the common-sense problem. Additionally, when informational platforms use (...)
  32. Search Engines, White Ignorance, and the Social Epistemology of Technology.Joshua Habgood-Coote - manuscript
    How should we think about the ways search engines can go wrong? Following the publication of Safiya Noble’s Algorithms of Oppression (Noble 2018), a view has emerged that racist, sexist, and other problematic results should be thought of as indicative of algorithmic bias. In this paper, I offer an alternative angle on these results, building on Noble’s suggestion that search engines are complicit in a racial contract (Mills 1997). I argue that racist and sexist results should be thought (...)
  33. Egalitarian Machine Learning.Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian considerations. We (...)
    2 citations
  34. Shadowboxing with Social Justice Warriors. A Review of Endre Begby’s Prejudice: A Study in Non-Ideal Epistemology.Alex Madva - 2022 - Philosophical Psychology.
    Endre Begby’s Prejudice: A Study in Non-Ideal Epistemology engages a wide range of issues of enduring interest to epistemologists, applied ethicists, and anyone concerned with how knowledge and justice intersect. Topics include stereotypes and generics, evidence and epistemic justification, epistemic injustice, ethical-epistemic dilemmas, moral encroachment, and the relations between blame and accountability. Begby applies his views about these topics to an equally wide range of pressing social questions, such as conspiracy theories, misinformation, algorithmic bias, discrimination, and criminal justice. (...)
  35. Agency Laundering and Information Technologies.Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number (...)
    11 citations
  36. An Impossibility Theorem for Base Rate Tracking and Equalised Odds.Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
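    For context on entry 36: the impossibility result it extends is the well-known conflict between Calibration and Equalised Odds whenever groups have different base rates (due to Kleinberg et al. and Chouldechova). The toy numbers below are invented to make that conflict concrete; they illustrate the classic Calibration version rather than Base Rate Tracking, whose formal definition the truncated abstract does not give. A score that is perfectly calibrated within each group still produces unequal true and false positive rates:

```python
from collections import Counter

# Invented cohorts: (score, n_total, n_actually_positive) per group.
# Within every score bin, the positive fraction equals the score,
# so each group is perfectly calibrated.
groups = {
    "A": [(0.8, 50, 40), (0.2, 50, 10)],   # base rate 0.50
    "B": [(0.8, 20, 16), (0.2, 80, 16)],   # base rate 0.32
}

THRESHOLD = 0.5  # predict positive when score >= 0.5

for name, bins in groups.items():
    c = Counter()
    for score, n, pos in bins:
        assert pos / n == score, "bin is miscalibrated"  # exact for these round numbers
        if score >= THRESHOLD:
            c["tp"] += pos
            c["fp"] += n - pos
        else:
            c["fn"] += pos
            c["tn"] += n - pos
    tpr = c["tp"] / (c["tp"] + c["fn"])
    fpr = c["fp"] / (c["fp"] + c["tn"])
    print(f"group {name}: TPR={tpr:.2f}, FPR={fpr:.3f}")

# Output: group A: TPR=0.80, FPR=0.200
#         group B: TPR=0.50, FPR=0.059
# Calibration holds in both groups, yet TPR and FPR diverge, so Equalised
# Odds fails -- the conflict the impossibility results formalize.
```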
  37. Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic.Alicia De Manuel, Janet Delgado, Iris Parra Jounou, Txetxu Ausín, David Casacuberta, Maite Cruz Piqueras, Ariel Guersenzvaig, Cristian Moyano, David Rodríguez-Arias, Jon Rueda & Angel Puyol - 2023 - Big Data and Society 10 (1).
    The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification for some terms related to biases in this particular context. We focus mainly on nonracial biases (...)
  38. Only Human (In the Age of Social Media).Barrett Emerick & Shannon Dea - forthcoming - In Hilkje Hänel & Johanna Müller (eds.), The Routledge Handbook of Non-Ideal Theory. Routledge.
    This chapter argues that for human, technological, and human-technological reasons, disagreement, critique, and counterspeech on social media fall squarely into the province of non-ideal theory. It concludes by suggesting a modest but challenging disposition that can help us when we are torn between opposing oppression and contributing to a flame war.
  39. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps (...)
    1 citation
  40. Materializing Systemic Racism, Materializing Health Disparities.Vanessa Carbonell & Shen-yi Liao - 2021 - American Journal of Bioethics 21 (9):16-18.
    The purpose of cultural competence education for medical professionals is to ensure respectful care and reduce health disparities. Yet as Berger and Miller (2021) show, the cultural competence framework is dated, confused, and self-defeating. They argue that the framework ignores the primary driver of health disparities—systemic racism—and is apt to exacerbate rather than mitigate bias and ethnocentrism. They propose replacing cultural competence with a framework that attends to two social aspects of structural inequality: health and social policy, and institutional-system (...)
    1 citation
  41. Artificial intelligence as a public service: learning from Amsterdam and Helsinki.Luciano Floridi - 2020 - Philosophy and Technology 33 (4):541–546.
    In September 2020, Helsinki and Amsterdam announced the launch of their open AI registers—the first cities in the world to offer such a service. The AI registers describe what, where, and how AI applications are being used in the two municipalities; how algorithms were assessed for potential bias or risks; and how humans use the AI services. Examining issues of security and transparency, this paper discusses the potential for implementing AI in an urban public service setting and how this (...)
  42. The Homo Rationalis in the Digital Society: an Announced Tragedy.Tommaso Ostillio - 2023 - Dissertation, University of Warsaw
    This dissertation compares the notions of homo rationalis in Philosophy and homo oeconomicus in Economics. In Part I, in particular, we claim that both notions are close methodological substitutes. Accordingly, we show that the constraints involved in the notion of economic rationality apply to the philosophical notion of rationality. On these premises, we explore the links between the notions of Kantian and Humean rationality in Philosophy and the constructivist and ecological approaches to rationality in economics, respectively. In particular, we show that the (...)
  43. Five Ethical Challenges for Data-Driven Policing.Jeremy Davis, Duncan Purves, Juan Gilbert & Schuyler Sturm - 2022 - AI and Ethics 2:185-198.
    This paper synthesizes scholarship from several academic disciplines to identify and analyze five major ethical challenges facing data-driven policing. Because the term “data-driven policing” encompasses a broad swath of technologies, we first outline several data-driven policing initiatives currently in use in the United States. We then lay out the five ethical challenges. Certain of these challenges have received considerable attention already, while others have been largely overlooked. In many cases, the challenges have been articulated in the context of related discussions, (...)
  44. Non-Conscious Data Collection: A Critical Analysis of Risks and Public Perspectives.Sofia Matomäki - 2024 - Dissertation, Aalto University School of Business
    This literature review explores the issues and risks in non-conscious data collection and evaluates people’s attitudes towards it. In the modern world, data is one of the most valuable resources, yet studies focused on the potential negative implications of the new data-driven technologies are lacking. Therefore, this thesis conducts a comprehensive literature review to identify and assess risks in non-conscious data collection technologies that are most relevant and referenced in current literature. Accordingly, the most prominent risks are related to privacy (...)
  45. Implicit Bias, Character and Control.Jules Holroyd & Daniel Kelly - 2016 - In Alberto Masala & Jonathan Webber (eds.), From Personality to Virtue: Essays on the Philosophy of Character. Oxford: Oxford University Press UK. pp. 106-133.
    Our focus here is on whether behavioural dispositions, when influenced by implicit biases, should be understood as being a part of the person’s character: whether they are part of the agent that can be morally evaluated. We frame this issue in terms of control. If a state, process, or behaviour is not something that the agent can, in the relevant sense, control, then it is not something that counts as part of her character. A number of theorists have argued (...)
    20 citations
  46. Bias and Perception.Susanna Siegel - 2020 - In Erin Beeghly & Alex Madva (eds.), An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind. New York, NY, USA: Routledge. pp. 99-115.
    A chapter on perception and bias, including implicit bias.
    6 citations
  47. Bias and Knowledge: Two Metaphors.Erin Beeghly - 2020 - In Erin Beeghly & Alex Madva (eds.), An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind. New York, NY, USA: Routledge. pp. 77-98.
    If you care about securing knowledge, what is wrong with being biased? Often it is said that we are less accurate and reliable knowers due to implicit biases. Likewise, many people think that biases reflect inaccurate claims about groups, are based on limited experience, and are insensitive to evidence. Chapter 3 investigates objections such as these with the help of two popular metaphors: bias as fog and bias as shortcut. Guiding readers through these metaphors, I argue that they (...)
    6 citations
  48. The Heterogeneity of Implicit Bias.Jules Holroyd & Joseph Sweetman - 2016 - In Michael Brownstein & Jennifer Mather Saul (eds.), Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology. Oxford, United Kingdom: Oxford University Press.
    The term 'implicit bias' has very swiftly been incorporated into philosophical discourse. Our aim in this paper is to scrutinise the phenomena that fall under the rubric of implicit bias. The term is often used in a rather broad sense, to capture a range of implicit social cognitions, and this is useful for some purposes. However, we here articulate some of the important differences between phenomena identified as instances of implicit bias. We caution against ignoring these differences: (...)
    34 citations
  49. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
    22 citations (see the illustrative sketch below)
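    For readers unfamiliar with the representation Meier et al. mention in entry 49, a fuzzy cognitive map is a weighted directed graph of concepts whose activation levels are iterated through a squashing function until they settle. The sketch below is a generic textbook-style update rule with invented concept names, weights, and sigmoid steepness; it is not the authors' trained clinical system:

```python
import math

def fcm_step(activations, weights, steepness=2.0):
    """One synchronous fuzzy-cognitive-map update.

    activations: {concept: value in [0, 1]}
    weights: {(source, target): signed influence in [-1, 1]}
    Concepts with no incoming edges are treated as clamped inputs;
    every other concept becomes a sigmoid of its weighted inputs.
    """
    new = {}
    for concept in activations:
        incoming = [(src, w) for (src, dst), w in weights.items() if dst == concept]
        if not incoming:
            new[concept] = activations[concept]  # clamped input node
            continue
        total = sum(activations[src] * w for src, w in incoming)
        new[concept] = 1.0 / (1.0 + math.exp(-steepness * total))
    return new

# Invented miniature map: two prima facie principles feeding a verdict node.
state = {"autonomy": 0.9, "beneficence": 0.4, "recommend_treatment": 0.5}
weights = {("autonomy", "recommend_treatment"): -0.7,   # e.g. patient refuses
           ("beneficence", "recommend_treatment"): 0.8}

for _ in range(20):  # iterate toward a fixed point
    state = fcm_step(state, weights)
print(round(state["recommend_treatment"], 2))  # settles at about 0.35
```

    Here the clamped principle nodes push the verdict node to a stable value of roughly 0.35; in a real system the weights would be learned, as the abstract describes, rather than set by hand.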
  50. Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
    10 citations
Results 1–50 of 998