Results for 'algorithms and algorithmic constellations'

982 found
  1. From the Closed Classical Algorithmic Universe to an Open World of Algorithmic Constellations. Mark Burgin & Gordana Dodig-Crnkovic - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli, Computing Nature. pp. 241-253.
    In this paper we analyze methodological and philosophical implications of algorithmic aspects of unconventional computation. First, we describe how the classical algorithmic universe developed and analyze why it became closed in the conventional approach to computation. Then we explain how new models of algorithms turned the classical closed algorithmic universe into the open world of algorithmic constellations, allowing higher flexibility and expressive power, supporting constructivism and creativity in mathematical modeling. As Gödel's undecidability theorems (...)
  2. Algorithmic decision-making: the right to explanation and the significance of stakes. Lauritz Munch, Jens Christian Bjerring & Jakob Mainz - 2024 - Big Data and Society.
    The stakes associated with an algorithmic decision are often said to play a role in determining whether the decision engenders a right to an explanation. More specifically, “high stakes” decisions are often said to engender such a right to explanation whereas “low stakes” or “non-high” stakes decisions do not. While the overall gist of these ideas is clear enough, the details are lacking. In this paper, we aim to provide these details through a detailed investigation of what we will (...)
    5 citations
  3. Algorithms, Agency, and Respect for Persons. Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four (...)
    8 citations
  4. Algorithms and Autonomy: The Ethics of Automated Decision Systems. Alan Rubel, Clinton Castro & Adam Pham - 2021 - Cambridge University Press.
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election (...)
    11 citations
  5. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations with the use of machine learning techniques and big data; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms (...)
    35 citations
  6. Algorithm and Parameters: Solving the Generality Problem for Reliabilism. Jack C. Lyons - 2019 - Philosophical Review 128 (4):463-509.
    The paper offers a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is given by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings (...)
    29 citations
  7. Disoriented and alone in the “experience machine” - On Netflix, shared world deceptions and the consequences of deepening algorithmic personalization. Maria Brincker - 2021 - SATS 22 (1):75-96.
    Most online platforms are becoming increasingly algorithmically personalized. The question is whether these practices simply satisfy users' preferences or whether something is lost in this process. This article focuses on how to reconcile personalization with the importance of being able to share cultural objects – including fiction – with others. In analyzing two concrete personalization examples from the streaming giant Netflix, several tendencies are observed. One is to isolate users and sometimes entirely eliminate shared world aspects. Another tendency (...)
    2 citations
  8. (1 other version) Algorithmic correspondence and completeness in modal logic. IV. Semantic extensions of SQEMA. Willem Conradie & Valentin Goranko - 2008 - Journal of Applied Non-Classical Logics 18 (2):175-211.
    In a previous work we introduced the algorithm SQEMA for computing first-order equivalents and proving canonicity of modal formulae, and thus established a very general correspondence and canonical completeness result. SQEMA is based on transformation rules, the most important of which employs a modal version of a result by Ackermann that enables elimination of an existentially quantified predicate variable in a formula, provided a certain negative polarity condition on that variable is satisfied. In this paper we develop several extensions of (...)
    2 citations
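    As a rough orientation for the preceding entry: the classical, non-modal form of Ackermann's lemma that SQEMA adapts can be stated as follows (this is the standard textbook version, not the modal variant used by the algorithm itself). If the predicate variable P does not occur in A and occurs only negatively in B(P), then

      \[
      \exists P \,\bigl[\forall \bar{x}\,(A(\bar{x}) \rightarrow P(\bar{x})) \wedge B(P)\bigr]
      \;\equiv\; B\bigl[P(\bar{x}) := A(\bar{x})\bigr],
      \]

    i.e., the second-order quantifier over P is eliminated by substituting A for every occurrence of P in B. SQEMA's transformation rules aim to bring a modal formula into a shape where a modal analogue of this equivalence applies.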
  9. Algorithms and Arguments: The Foundational Role of the ATAI-question. Paola Cantù & Italo Testa - 2011 - In Frans H. van Eemeren, Bart Garssen, David Godden & Gordon Mitchell, Proceedings of the Seventh International Conference of the International Society for the Study of Argumentation. Rozenberg / Sic Sat.
    Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough it was during this period that Artificial Intelligence was developed, which defended the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an (...)
  10. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy. Atoosa Kasirzadeh - 2022 - AIES ’22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in (...)
  11. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is (...)
    16 citations
  12. Algorithmic Bias and Risk Assessments: Lessons from Practice. Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of (...)
    2 citations
  13. Algorithmic Randomness and Probabilistic Laws. Jeffrey A. Barrett & Eddy Keming Chen - manuscript
    We consider two ways one might use algorithmic randomness to characterize a probabilistic law. The first is a generative chance* law. Such laws involve a nonstandard notion of chance. The second is a probabilistic* constraining law. Such laws impose relative frequency and randomness constraints that every physically possible world must satisfy. While each notion has virtues, we argue that the latter has advantages over the former. It supports a unified governing account of non-Humean laws and provides independently motivated solutions (...)
    2 citations
  14. Algorithmic Fairness and Feasibility. Eva Erman & Markus Furendal - 2025 - Philosophy and Technology 38 (1):1-9.
    The “impossibility results” in algorithmic fairness suggest that a predictive model cannot fully meet two common fairness criteria – sufficiency and separation – except under extraordinary circumstances. These findings have sparked a discussion on fairness in algorithms, prompting debates over whether predictive models can avoid unfair discrimination based on protected attributes, such as ethnicity or gender. As shown by Otto Sahlgren, however, the discussion of the impossibility results would gain from importing some of the tools developed in the (...)
    1 citation
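    For readers unfamiliar with the two criteria named in the preceding entry, the standard formalization in the algorithmic-fairness literature is as follows (notation assumed here: R is the model's prediction or score, Y the actual outcome, A the protected attribute; the papers listed above may use slightly different but equivalent formulations):

      \[
      \text{Separation:}\quad R \perp\!\!\!\perp A \mid Y
      \qquad
      \text{Sufficiency:}\quad Y \perp\!\!\!\perp A \mid R
      \]

    Separation demands equal error rates (e.g., false positive and false negative rates) across groups; sufficiency demands that, conditional on the score, outcome rates be equal across groups (calibration within groups). The impossibility results referred to above show that, roughly, both can hold jointly only when base rates are equal across groups or prediction is perfect.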
  15. The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can (...)
    249 citations
  16. (1 other version) The ethics of algorithms: key problems and solutions. Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis (...)
    51 citations
  17. Algorithms and the Individual in Criminal Law. Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
    6 citations
  18. Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust. Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithmic Decision-Making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust algorithmic agents (...)
  19. Three Lessons For and From Algorithmic Discrimination. Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, (...)
    1 citation
  20. Taste and the algorithm. Emanuele Arielli - 2018 - Studi di Estetica 12 (3):77-97.
    Today, a considerable part of our everyday interaction with art and aesthetic artefacts occurs through digital media, and our preferences and choices are systematically tracked and analyzed by algorithms in ways that are far from transparent. Our consumption is constantly documented and then fed back to us through tailored information. We are therefore witnessing the emergence of a complex interrelation between our aesthetic choices, their digital elaboration, and the production of content and the dynamics of creative processes. All (...)
    2 citations
  21. Algorithms and Posthuman Governance. James Hughes - 2017 - Journal of Posthuman Studies.
    Since the Enlightenment, there have been advocates for the rationalizing efficiency of enlightened sovereigns, bureaucrats, and technocrats. Today these enthusiasms are joined by calls for replacing or augmenting government with algorithms and artificial intelligence, a process already substantially under way. Bureaucracies are in effect algorithms created by technocrats that systematize governance, and their automation simply removes bureaucrats and paper. The growth of algorithmic governance can already be seen in the automation of social services, regulatory oversight, policing, the (...)
    1 citation
  22. Algorithmic paranoia: the temporal governmentality of predictive policing. Bonnie Sheehey - 2019 - Ethics and Information Technology 21 (1):49-58.
    In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After (...)
    9 citations
  23. Algorithmic neutrality. Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about (...)
    1 citation
  24. Are algorithms always arbitrary? Three types of arbitrariness and ways to overcome the computationalist’s trilemma. C. Percy - manuscript
    Implementing an algorithm on part of our causally-interconnected physical environment requires three choices that are typically considered arbitrary, i.e. no single option is innately privileged without invoking an external observer perspective. First, how to delineate one set of local causal relationships from the environment. Second, within this delineation, which inputs and outputs to designate for attention. Third, what meaning to assign to particular states of the designated inputs and outputs. Having explained these types of arbitrariness, we assess their relevance for (...)
  25. Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not (...)
    7 citations
  26. Informational richness and its impact on algorithmic fairness. Marcello Di Bello & Ruobin Gong - 2025 - Philosophical Studies 182 (1):25-53.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer (...)
    2 citations
  27. Algorithmic Fairness Criteria as Evidence. Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second, constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion (...)
    2 citations
  28. Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John, AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias (...)
    1 citation
  29. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation. Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
    13 citations
  30. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
    14 citations
  31. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
    33 citations
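    Since the preceding entry mentions fuzzy cognitive maps without explaining them, here is a generic, illustrative update step in Python (a common textbook formulation, not the authors' trained clinical system; the toy concepts and weights are invented for illustration):

      import numpy as np

      def fcm_step(activations, weights, steepness=1.0):
          """One synchronous fuzzy-cognitive-map update.
          activations: current concept activations in [0, 1].
          weights[i, j]: causal influence of concept i on concept j, in [-1, 1]."""
          raw = activations + activations @ weights      # keep memory of the current state
          return 1.0 / (1.0 + np.exp(-steepness * raw))  # sigmoid squashing back into [0, 1]

      # Invented toy concepts: "patient autonomy", "expected benefit", "recommend intervention".
      weights = np.array([[0.0, 0.0, -0.6],
                          [0.0, 0.0,  0.8],
                          [0.0, 0.0,  0.0]])
      state = np.array([0.9, 0.4, 0.5])
      for _ in range(20):                                # iterate towards a (near) fixed point
          state = fcm_step(state, weights)
      print(state)

    Training such a map, as the authors describe doing with machine learning, amounts to learning the weight matrix from examples rather than fixing it by hand.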
  32. Algorithm Evaluation Without Autonomy. Scott Hill - forthcoming - AI and Ethics.
    In Algorithms & Autonomy, Rubel, Castro, and Pham (hereafter RCP) argue that the concept of autonomy is especially central to understanding important moral problems about algorithms. In particular, autonomy plays a role in analyzing the version of social contract theory that they endorse. I argue that although RCP are largely correct in their diagnosis of what is wrong with the algorithms they consider, those diagnoses can be appropriated by moral theories RCP see as in competition with their (...)
  33. Rituals and Algorithms: Genealogy of Reflective Faith and Postmetaphysical Thinking. Martin Beck Matuštík - 2019 - European Journal for Philosophy of Religion 11 (4):163-184.
    What happens when mindless symbols of algorithmic AI encounter mindful performative rituals? I return to my criticisms of Habermas' secularising reading of Kierkegaard's ethics. Next, I lay out Habermas' claim that the sacred complex of ritual and myth contains the ur-origins of postmetaphysical thinking and reflective faith. If reflective faith shares with ritual the same origins as does communicative interaction, how do we access these archaic ritual sources of human solidarity in the age of AI?
    1 citation
  34. Big Tech, Algorithmic Power, and Democratic Control. Ugur Aytac - 2024 - Journal of Politics 86 (4):1431-1445.
    This paper argues that instituting Citizen Boards of Governance (CBGs) is the optimal strategy to democratically contain Big Tech’s algorithmic powers in the digital public sphere. CBGs are bodies of randomly selected citizens that are authorized to govern the algorithmic infrastructure of Big Tech platforms. The main advantage of CBGs is to tackle the concentrated powers of private tech corporations without giving too much power to governments. I show why this is a better approach than ordinary state regulation (...)
    1 citation
  35. AI Recruitment Algorithms and the Dehumanization Problem. Megan Fritts & Frank Cabrera - 2021 - Ethics and Information Technology (4):1-11.
    According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the (...)
    6 citations
  36. Algorithmic Political Bias Can Reduce Political Polarization. Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political (...)
    1 citation
  37. Algorithmic information theory and undecidability. Panu Raatikainen - 2000 - Synthese 123 (2):217-225.
    Chaitin’s incompleteness result related to random reals and the halting probability has been advertised as the ultimate and the strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations.
    7 citations
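    As background for the preceding entry, the results it alludes to can be stated compactly (these are textbook formulations, not Raatikainen's own wording). For a fixed universal prefix-free machine U, Chaitin's halting probability is

      \[
      \Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|} .
      \]

    Chaitin's incompleteness theorem then says, roughly: for any sound, recursively axiomatizable theory T in which statements about prefix-free Kolmogorov complexity K can be expressed, there is a constant c_T such that T proves K(x) > c_T for no string x, even though the inequality is true of all but finitely many strings; relatedly, such a theory can determine at most finitely many bits of the binary expansion of \Omega_U. Raatikainen's paper assesses how much philosophical weight these results can bear.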
  38. Inscrutable Processes: Algorithms, Agency, and Divisions of Deliberative Labour. Marinus Ferreira - 2021 - Journal of Applied Philosophy 38 (4):646-661.
    As the use of algorithmic decision‐making becomes more commonplace, so too does the worry that these algorithms are often inscrutable and our use of them is a threat to our agency. Since we do not understand why an inscrutable process recommends one option over another, we lose our ability to judge whether the guidance is appropriate and are vulnerable to being led astray. In response, I claim that a process being inscrutable does not automatically make its guidance inappropriate. (...)
    2 citations
  39. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy. Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can (...)
    4 citations
  40. On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate (...)
    51 citations
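    To make concrete the kind of "statistical relationships between the algorithm's predictions and the actual outcomes" discussed in the preceding entry, here is a minimal, hypothetical Python sketch (not Hedden's own formalism) that computes two commonly discussed group-relative quantities, the false positive rate and the positive predictive value, from labelled binary predictions:

      from collections import defaultdict

      def group_metrics(records):
          """records: iterable of (group, prediction, outcome) triples with binary values.
          Returns the false positive rate and positive predictive value per group."""
          counts = defaultdict(lambda: {"fp": 0, "tn": 0, "tp": 0, "pp": 0})
          for group, pred, outcome in records:
              c = counts[group]
              if pred == 1:
                  c["pp"] += 1              # predicted positive
                  if outcome == 1:
                      c["tp"] += 1          # true positive
              if outcome == 0:
                  if pred == 1:
                      c["fp"] += 1          # false positive
                  else:
                      c["tn"] += 1          # true negative
          metrics = {}
          for group, c in counts.items():
              fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else None
              ppv = c["tp"] / c["pp"] if c["pp"] else None
              metrics[group] = {"false_positive_rate": fpr, "positive_predictive_value": ppv}
          return metrics

      # Invented toy data: equalized false positive rates would require matching FPRs across
      # groups; predictive parity would require matching PPVs.
      data = [("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("B", 1, 1), ("B", 0, 1), ("B", 1, 0)]
      print(group_metrics(data))

    Hedden's paper asks whether satisfying or violating such criteria is itself a matter of fairness; the sketch only shows what the criteria measure.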
  41. (1 other version) Attention, Moral Skill, and Algorithmic Recommendation. Nick Schuster & Seth Lazar - 2024 - Philosophical Studies 182 (1).
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through the (...)
    5 citations
  42. Algorithmic Opinion Mining and the History of Philosophy: A Response to Mizrahi’s For and Against Scientism. Andreas Vrahimis - 2023 - Social Epistemology Review and Reply Collective 12 (5):33-41.
    At the heart of Mizrahi’s project lies a sociological narrative concerning the recent history of philosophers’ negative attitudes towards scientism. Critics (e.g. de Ridder (2019), Wilson (2019) and Bryant (2020)) have detected various empirical inadequacies in Mizrahi’s methodology for discussing these attitudes. Bryant (2020) points out one of the main pertinent methodological deficiencies here, namely that the mere appearance of the word ‘scientism’ in a text does not suffice to determine whether the author feels threatened by it. Not all philosophers (...)
    1 citation
  43. Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice. Duncan Purves & Jeremy Davis - 2022 - Public Affairs Quarterly 36 (2):136-162.
    A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public (...)
    4 citations
  44. Algorithmic Microaggressions. Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic (...)
    1 citation
  45. The algorithm audit: Scoring the algorithms that score us. Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that (...)
    14 citations
  46. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...)
    2 citations
  47. Algorithmic Indirect Discrimination, Fairness, and Harm. Frej Klem Thomsen - 2023 - AI and Ethics.
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It next (...)
  48. The Limits of Reallocative and Algorithmic Policing. Luke William Hunt - 2022 - Criminal Justice Ethics 41 (1):1-24.
    Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic model: subsuming policing into technocratic judgements encoded in algorithms through strategies such as predictive policing (mitigating archetypal bias). This paper begins by considering the normative basis of the relationship between political community and policing. It then examines the justification of reallocative and algorithmic models in light of the relationship between political community and police. Given commitments to the depth and distribution of security—and proscriptions against dehumanizing strategies—the paper concludes that a nonideal-theory priority rule promoting respect for personhood (manifest in community and dignity-promoting policing strategies) is a necessary condition for the justification of the above models.
    2 citations
  49. Fill In, Accept, Submit, and Prove that You Are not a Robot: Ubiquity as the Power of the Algorithmic Bureaucracy. Mikhail Bukhtoyarov & Anna Bukhtoyarova - 2024 - In Ljubiša Bojić, Simona Žikić, Jörg Matthes & Damian Trilling, Navigating the Digital Age. An In-Depth Exploration into the Intersection of Modern Technologies and Societal Transformation. Belgrade: Institute for Philosophy and Social Theory, University of Belgrade. pp. 220-243.
    Internet users fill in interactive forms with multiple fields, check/uncheck checkboxes, select options and agree to submit. People give their consents without keeping track of them. Dominance of the machine producing human consent is ubiquitous. Humanless bureaucratic procedures become embedded into routine usage of digital products and services automating human behavior. This bureaucracy does not make individuals wait in conveyor-like lines (which sometimes can cause a collective action), it patiently waits or suddenly pops up in an annoying message requiring immediate (...)
  50. Diving into Fair Pools: Algorithmic Fairness, Ensemble Forecasting, and the Wisdom of Crowds. Rush T. Stewart & Lee Elkin - forthcoming - Analysis.
    Is the pool of fair predictive algorithms fair? It depends, naturally, on both the criteria of fairness and on how we pool. We catalog the relevant facts for some of the most prominent statistical criteria of algorithmic fairness and the dominant approaches to pooling forecasts: linear, geometric, and multiplicative. Only linear pooling, a format at the heart of ensemble methods, preserves any of the central criteria we consider. Drawing on work in the social sciences and social epistemology on (...)
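    As an illustration of the three pooling rules named in the preceding entry, here is a minimal Python sketch for pooling probability forecasts of a single binary event with equal weights (a toy formulation assumed for illustration; the paper's formal setting may differ, e.g. in allowing unequal weights or full distributions):

      import math

      def linear_pool(ps):
          """Arithmetic mean of the individual forecasts."""
          return sum(ps) / len(ps)

      def geometric_pool(ps):
          """Equal-weight geometric mean, renormalized so the event and its negation sum to 1."""
          yes = math.prod(ps) ** (1 / len(ps))
          no = math.prod(1 - p for p in ps) ** (1 / len(ps))
          return yes / (yes + no)

      def multiplicative_pool(ps):
          """Straight product of the forecasts, renormalized (exponent 1 per forecaster)."""
          yes = math.prod(ps)
          no = math.prod(1 - p for p in ps)
          return yes / (yes + no)

      forecasts = [0.6, 0.7, 0.9]   # three forecasters' probabilities for one event
      for pool in (linear_pool, geometric_pool, multiplicative_pool):
          print(pool.__name__, round(pool(forecasts), 3))

    The entry's observation that only linear pooling preserves any of the central fairness criteria it considers can be checked against implementations along these lines.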
Showing results 1-50 of 982.