Results for 'Algorithmic justice'

952 found
  1. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered (...)
  2. Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice.Duncan Purves & Jeremy Davis - 2022 - Public Affairs Quarterly 36 (2):136-162.
    A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public trust (...)
    2 citations
  3. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy.Atoosa Kasirzadeh - 2022 - AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- (...)
  4. (1 other version)Abolish! Against the Use of Risk Assessment Algorithms at Sentencing in the US Criminal Justice System.Katia Schwerzmann - 2021 - Philosophy and Technology 1:1-22.
    In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting the functioning of these algorithms in their context of emergence, I offer three arguments to demonstrate why their abolition is imperative. First, I show that sentencing based on predictive algorithms induces a process of rewriting the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I (...)
    3 citations
  5. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice (...)
    2 citations
  6. Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered (...)
    10 citations
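The abstract above mentions metrics for quantifying deviations from statistical parities. As a rough, hedged illustration of what such a metric can look like, the sketch below computes a demographic-parity gap between two groups; the function name, data, and group labels are invented for the example and are not taken from Fazelpour & Lipton's paper.

```python
# Illustrative sketch only: a statistical-parity (demographic-parity) gap of the
# kind the algorithmic-fairness literature proposes. The function, data, and
# group labels are hypothetical, not taken from Fazelpour & Lipton's paper.
from typing import Sequence


def statistical_parity_gap(y_pred: Sequence[int], group: Sequence[str]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    groups = sorted(set(group))
    assert len(groups) == 2, "this sketch assumes exactly two groups"
    rates = []
    for g in groups:
        preds = [p for p, gg in zip(y_pred, group) if gg == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])


if __name__ == "__main__":
    y_pred = [1, 0, 1, 1, 0, 0, 1, 0]             # binary decisions from some model
    group = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(statistical_parity_gap(y_pred, group))  # 0.75 vs 0.25 -> 0.5
```

On this toy data the positive-prediction rates are 0.75 and 0.25, so the reported gap is 0.5; interventions of the kind surveyed in the paper aim to reduce such gaps.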
  7. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic (...)
    2 citations
  8. Algorithms and Posthuman Governance.James Hughes - 2017 - Journal of Posthuman Studies.
    Since the Enlightenment, there have been advocates for the rationalizing efficiency of enlightened sovereigns, bureaucrats, and technocrats. Today these enthusiasms are joined by calls for replacing or augmenting government with algorithms and artificial intelligence, a process already substantially under way. Bureaucracies are in effect algorithms created by technocrats that systematize governance, and their automation simply removes bureaucrats and paper. The growth of algorithmic governance can already be seen in the automation of social services, regulatory oversight, policing, the justice (...)
    1 citation
  9. Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
    2 citations
  10. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein has two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches (...)
  11. Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios.Manuel Dietrich & Thomas H. Weisswange - 2019 - Ethics and Information Technology 21 (3):227-239.
    Through modern driver assistant systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This will become even more prominent in the near future considering the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged. However, scope, principles and strategies for their implementations are not yet clear. Most of the current discussions concentrate on situations of unavoidable crashes in which the life of (...)
    7 citations
  12. “Democratizing AI” and the Concern of Algorithmic Injustice.Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I (...)
  13. Genealogy of Algorithms: Datafication as Transvaluation.Virgil W. Brower - 2020 - le Foucaldien 6 (1):1-43.
    This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace names the primal probability algorithm. It reconsiders their mathematical innovations with Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond (...)
    2 citations
  14. Disoriented and alone in the “experience machine” - On Netflix, shared world deceptions and the consequences of deepening algorithmic personalization.Maria Brincker - 2021 - SATS 22 (1):75-96.
    Most online platforms are becoming increasingly algorithmically personalized. The question is whether these practices are simply satisfying users’ preferences or if something is lost in this process. This article focuses on how to reconcile the personalization with the importance of being able to share cultural objects – including fiction – with others. In analyzing two concrete personalization examples from the streaming giant Netflix, several tendencies are observed. One is to isolate users and sometimes entirely eliminate shared world aspects. Another tendency (...)
    1 citation
  15. The Limits of Reallocative and Algorithmic Policing.Luke William Hunt - 2022 - Criminal Justice Ethics 41 (1):1-24.
    Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic model: subsuming policing into technocratic judgements encoded in algorithms through strategies such as predictive policing (mitigating archetypal bias). This paper begins by considering the normative basis of the relationship between political community and policing. It then examines the justification of reallocative and algorithmic models in light of the relationship between political community and police. Given commitments to the depth and distribution of security—and proscriptions against dehumanizing strategies—the paper concludes that a nonideal-theory priority rule promoting respect for personhood (manifest in community and dignity-promoting policing strategies) is a necessary condition for the justification of the above models.
    1 citation
  16. The Police Identity Crisis – Hero, Warrior, Guardian, Algorithm.Luke William Hunt - 2021 - New York, NY, USA: Routledge.
    This book provides a comprehensive examination of the police role from within a broader philosophical context. Contending that the police are in the midst of an identity crisis that exacerbates unjustified law enforcement tactics, Luke William Hunt examines various major conceptions of the police—those seeing them as heroes, warriors, and guardians. The book looks at the police role considering the overarching societal goal of justice and seeks to present a synthetic theory that draws upon history, law, society, psychology, and (...)
    1 citation
  17. Preface to Forenames of God: Enumerations of Ernesto Laclau toward a Political Theology of Algorithms.Virgil W. Brower - 2021 - Internationales Jahrbuch Für Medienphilosophie 7 (1):243-251.
    Perhaps nowhere better than, "On the Names of God," can readers discern Laclau's appreciation of theology, specifically, negative theology, and the radical potencies of political theology. // It is Laclau's close attention to Eckhart and Dionysius in this essay that reveals a core theological strategy to be learned by populist reasons or social logics and applied in politics or democracies to come. // This mode of algorithmically informed negative political theology is not mathematically inert. It aspires to relate a fraction (...)
  18. Shadowboxing with Social Justice Warriors. A Review of Endre Begby’s Prejudice: A Study in Non-Ideal Epistemology.Alex Madva - 2022 - Philosophical Psychology.
    Endre Begby’s Prejudice: A Study in Non-Ideal Epistemology engages a wide range of issues of enduring interest to epistemologists, applied ethicists, and anyone concerned with how knowledge and justice intersect. Topics include stereotypes and generics, evidence and epistemic justification, epistemic injustice, ethical-epistemic dilemmas, moral encroachment, and the relations between blame and accountability. Begby applies his views about these topics to an equally wide range of pressing social questions, such as conspiracy theories, misinformation, algorithmic bias, discrimination, and criminal justice. Through it all, the book’s central thesis is that prejudices can be epistemically rational, a corrective against what Begby takes to be the received view that prejudices are always and everywhere bad. However, Begby’s arguments do not engage consistently with relevant empirical literatures, misrepresent the positions of his interlocutors, and rehearse ideas already well-established across a range of intellectual traditions.
  19. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker (...)
    3 citations
  20. On the Possibility of Testimonial Justice.Rush T. Stewart & Michael Nielsen - 2020 - Australasian Journal of Philosophy 98 (4):732-746.
    Recent impossibility theorems for fair risk assessment extend to the domain of epistemic justice. We translate the relevant model, demonstrating that the problems of fair risk assessment and just credibility assessment are structurally the same. We motivate the fairness criteria involved in the theorems as also being appropriate in the setting of testimonial justice. Any account of testimonial justice that implies the fairness/justice criteria must be abandoned, on pain of triviality.
    3 citations
  21. If the Difference Principle Won’t Make a Real Difference in Algorithmic Fairness, What Will? [REVIEW]Reuben Binns - manuscript
    In ‘Rawlsian algorithmic fairness and a missing aggregation property of the difference Principle’, the authors argue that there is a false assumption in algorithmic fairness interventions inspired by John Rawls’ theory of justice. They argue that applying the difference principle at the level of a local algorithmic decision-making context (what they term a ‘constituent situation’), is neither necessary nor sufficient for the difference principle to be upheld at the aggregate level of society at large. I find (...)
  22. Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic.Alicia De Manuel, Janet Delgado, Parra Jonou Iris, Txetxu Ausín, David Casacuberta, Maite Cruz Piqueras, Ariel Guersenzvaig, Cristian Moyano, David Rodríguez-Arias, Jon Rueda & Angel Puyol - 2023 - Big Data and Society 10 (1).
    The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification for some terms related to biases in this particular context. We focus mainly on nonracial biases (...)
  23. When Gig Workers Become Essential: Leveraging Customer Moral Self-Awareness Beyond COVID-19.Julian Friedland - 2022 - Business Horizons 66 (2):181-190.
    The COVID-19 pandemic has intensified the extent to which economies in the developed and developing world rely on gig workers to perform essential tasks such as health care, personal transport, food and package delivery, and ad hoc tasking services. As a result, workers who provide such services are no longer perceived as mere low-skilled laborers, but as essential workers who fulfill a crucial role in society. The newly elevated moral and economic status of these workers increases consumer demand for corporate (...)
    1 citation
  24. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    2 citations
  25. What We Informationally Owe Each Other.Alan Rubel, Clinton Castro & Adam Pham - 2021 - In Alan Rubel, Clinton Castro & Adam Pham (eds.), Algorithms and Autonomy: The Ethics of Automated Decision Systems. Cambridge University Press. pp. 21-42.
    ABSTRACT: One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integral (...)
  26. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  27. Proceed with Caution.Annette Zimmermann & Chad Lee-Stronach - 2021 - Canadian Journal of Philosophy (1):6-25.
    It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem (...)
    9 citations
  28. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because (...)
    3 citations
  29. Iudicium ex Machinae – The Ethical Challenges of Automated Decision-Making in Criminal Sentencing.Frej Thomsen - 2022 - In Julian Roberts & Jesper Ryberg (eds.), Principled Sentencing and Artificial Intelligence. Oxford University Press.
    Automated decision making for sentencing is the use of a software algorithm to analyse a convicted offender’s case and deliver a sentence. This chapter reviews the moral arguments for and against employing automated decision making for sentencing and finds that its use is in principle morally permissible. Specifically, it argues that well-designed automated decision making for sentencing will better approximate the just sentence than human sentencers. Moreover, it dismisses common concerns about transparency, privacy and bias as unpersuasive or inapplicable. The (...)
    1 citation
  30. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences (...)
    10 citations
  31. Conflicting Aims and Values in the Application of Smart Sensors in Geriatric Rehabilitation: Ethical Analysis.Christopher Predel, Cristian Timmermann, Frank Ursin, Marcin Orzechowski, Timo Ropinski & Florian Steger - 2022 - JMIR mHealth and uHealth 10 (6):e32910.
    Background: Smart sensors have been developed as diagnostic tools for rehabilitation to cover an increasing number of geriatric patients. They promise to enable an objective assessment of complex movement patterns. -/- Objective: This research aimed to identify and analyze the conflicting ethical values associated with smart sensors in geriatric rehabilitation and provide ethical guidance on the best use of smart sensors to all stakeholders, including technology developers, health professionals, patients, and health authorities. -/- Methods: On the basis of a systematic (...)
    1 citation
  32. (1 other version)A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform.Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine learning (...)
  33. Standing by our principles: Meaningful guidance, moral foundations, and multi-principle methodology in medical scarcity.Govind C. Persad, Alan Wertheimer & Ezekiel J. Emanuel - 2010 - American Journal of Bioethics 10 (4):46 – 48.
    In this short response to Kerstein and Bognar, we clarify three aspects of the complete lives system, which we propose as a system of allocating scarce medical interventions. We argue that the complete lives system provides meaningful guidance even though it does not provide an algorithm. We also defend the investment modification to the complete lives system, which prioritizes adolescents and older children over younger children; argue that sickest-first allocation remains flawed when scarcity is absolute and ongoing; and argue that (...)
    3 citations
  34. Digital Monology: The Authority of the Search Engine.Walter Barta - 2019 - Media and the Moving Image at University of Houston.
    2019 Applied Technology Award for the Media and the Moving Image Awards at University of Houston. -/- The Google algorithm, as a ranking and ordering structure, cannot be “objective” as long as the page-ranking mechanism produces social effects and always inadvertently and inescapably affects social priorities. Imitable units of information (memes) on the internet change according to the laws of exponential growth, like other social phenomena, which include Google rankings. Mathematically and graphically represented, the effects of mimetic inflation on Google (...)
  35. Language, Truth and The Just Society.Charles Justice - manuscript
    All that philosophical “theories” of truth do is to demonstrate what is entailed by assuming our common uses and common understandings of the concept of truth. But our common understanding of what truth is is only a part of how truth functions. If we only look at that, we are missing the rest of the picture, namely how truth functions as the foundation for all human communication. I propose that truth functions a lot like morality, in the sense that both (...)
  36. (5 other versions)Algorithm Evaluation Without Autonomy.Scott Hill - forthcoming - AI and Ethics.
    In Algorithms & Autonomy, Rubel, Castro, and Pham (hereafter RCP) argue that the concept of autonomy is especially central to understanding important moral problems about algorithms. In particular, autonomy plays a role in analyzing the version of social contract theory that they endorse. I argue that although RCP are largely correct in their diagnosis of what is wrong with the algorithms they consider, those diagnoses can be appropriated by moral theories RCP see as in competition with their autonomy-based theory. (...)
  37. Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust.Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithm Decision Making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust algorithmic agents (...)
  38. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations. With the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    28 citations
  39. Algorithmic Profiling as a Source of Hermeneutical Injustice.Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only (...)
    1 citation
  40. Algorithms, Agency, and Respect for Persons.Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key (...)
    7 citations
  41. Introduction: Algorithmic Thought.M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):5-11.
    This introduction to a special section on algorithmic thought provides a framework through which the articles in that collection can be contextualised and their individual contributions highlighted. Over the past decade, there has been a growing interest in artificial intelligence (AI). This special section reflects on this AI boom and its implications for studying what thinking is. Focusing on the algorithmic character of computing machines and the thinking that these machines might express, each of the special section’s essays (...)
  42. The ethics of algorithms: mapping the debate.Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
    211 citations
  43. Algorithmic paranoia: the temporal governmentality of predictive policing.Bonnie Sheehey - 2019 - Ethics and Information Technology 21 (1):49-58.
    In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After first (...)
    9 citations
  44. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
    28 citations
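Entry 44 above reports an advisory system built with fuzzy cognitive maps. The sketch below shows only the generic FCM update rule that such systems rest on, not the authors' clinical model; the concepts, edge weights, logistic squashing function, and clamping of the input nodes are illustrative assumptions.

```python
# Illustrative sketch only: the generic update rule of a fuzzy cognitive map
# (FCM), the formalism the abstract mentions. The concepts, edge weights,
# logistic squashing function, and clamping scheme are invented placeholders,
# not the authors' clinical advisory model.
import numpy as np


def fcm_step(state: np.ndarray, weights: np.ndarray, steepness: float = 2.0) -> np.ndarray:
    """One synchronous update: each concept becomes the logistic squash of its weighted inputs."""
    return 1.0 / (1.0 + np.exp(-steepness * (weights.T @ state)))


concepts = ["expected_benefit", "refusal_of_consent", "recommend_treatment"]
# weights[i, j] is the signed causal influence of concept i on concept j
W = np.array([
    [0.0, 0.0, 0.8],    # expected benefit pushes towards recommending treatment
    [0.0, 0.0, -0.6],   # refusal of consent pushes against it
    [0.0, 0.0, 0.0],
])

state = np.array([0.9, 0.2, 0.5])
for _ in range(20):
    state = fcm_step(state, W)
    state[0], state[1] = 0.9, 0.2   # hold the two input concepts fixed to the case description
print(dict(zip(concepts, np.round(state, 2))))  # the recommendation concept settles near 0.77
```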
  45. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of (...)
    36 citations
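Hedden's abstract above concerns statistical fairness criteria that relate an algorithm's predictions to actual outcomes across groups, for instance by comparing error rates. As a hedged illustration of one such criterion, the sketch below checks whether false positive rates are equal across two groups; all names and data are invented for the example.

```python
# Illustrative sketch only: checking one statistical fairness criterion of the
# kind the abstract describes, namely (approximate) equality of false positive
# rates across groups. All names and data are invented for the example.
from typing import Sequence


def false_positive_rate(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
    """Share of actual negatives (y_true == 0) that received a positive prediction."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(negatives) / len(negatives)


def fpr_gap(y_true: Sequence[int], y_pred: Sequence[int], group: Sequence[str]) -> float:
    """Absolute difference in false positive rates between two groups."""
    groups = sorted(set(group))
    assert len(groups) == 2, "this sketch assumes exactly two groups"
    rates = []
    for g in groups:
        idx = [i for i, gg in enumerate(group) if gg == g]
        rates.append(false_positive_rate([y_true[i] for i in idx], [y_pred[i] for i in idx]))
    return abs(rates[0] - rates[1])


if __name__ == "__main__":
    y_true = [0, 0, 1, 1, 0, 0, 1, 1]
    y_pred = [1, 0, 1, 1, 0, 0, 0, 1]
    group = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(fpr_gap(y_true, y_pred, group))  # FPR 0.5 for group a vs 0.0 for group b -> 0.5
```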
  46. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic (...)
    1 citation
  47. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
    12 citations
  48. Are algorithms always arbitrary? Three types of arbitrariness and ways to overcome the computationalist’s trilemma.C. Percy - manuscript
    Implementing an algorithm on part of our causally-interconnected physical environment requires three choices that are typically considered arbitrary, i.e. no single option is innately privileged without invoking an external observer perspective. First, how to delineate one set of local causal relationships from the environment. Second, within this delineation, which inputs and outputs to designate for attention. Third, what meaning to assign to particular states of the designated inputs and outputs. Having explained these types of arbitrariness, we assess their relevance for (...)
  49. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political (...)
  50. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation.Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
    11 citations
Showing results 1–50 of 952