Results for 'Algorithmic Opacity'

783 found
  1. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive (...)
    3 citations
  2. Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice.Duncan Purves & Jeremy Davis - 2022 - Public Affairs Quarterly 36 (2):136-162.
    A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of (...)
    2 citations
  3. Models, Algorithms, and the Subjects of Transparency.Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or (...)
  4. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has treated explainability as a scientific problem to be dealt with using the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based (...)
  5. Legitimacy, Authority, and the Political Value of Explanations.Seth Lazar - manuscript
    Here is my thesis (and the outline of this paper). Increasingly secret, complex and inscrutable computational systems are being used to intensify existing power relations, and to create new ones (Section II). To be all-things-considered morally permissible, new, or newly intense, power relations must in general meet standards of procedural legitimacy and proper authority (Section III). Legitimacy and authority constitutively depend, in turn, on a publicity requirement: reasonably competent members of the political community in which power is being exercised must (...)
    2 citations
  6. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this (...)
    4 citations
  7. Peeking Inside the Black Box: A New Kind of Scientific Visualization.Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The (...)
    7 citations
  8. A pluralist hybrid model for moral AIs.Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degree to which A.I.s and machines are applied across different social contexts, the need to implement ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how it can partly alleviate these limitations. The core ethical decision-making capacity of an (...)
    1 citation
  9. The Boundaries of Meaning: A Case Study in Neural Machine Translation.Yuri Balashov - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy 66.
    The success of deep learning in natural language processing raises intriguing questions about the nature of linguistic meaning and ways in which it can be processed by natural and artificial systems. One such question has to do with subword segmentation algorithms widely employed in language modeling, machine translation, and other tasks since 2016. These algorithms often cut words into semantically opaque pieces, such as ‘period’, ‘on’, ‘t’, and ‘ist’ in ‘period|on|t|ist’. The system then represents the resulting segments in a dense (...)
  10. Entitlement, opacity, and connection.Brad Majors & Sarah Sawyer - 2007 - In Sanford Goldberg (ed.), Internalism and externalism in semantics and epistemology. New York: Oxford University Press. pp. 131.
    This paper looks at the debates between internalism and externalism in mind and epistemology. In each realm, internalists face what we call 'The Connection Problem', while externalists face what we call 'The Problem of Opacity'. We offer an integrated account of thought content and epistemic warrant that overcomes the problems. We then apply the framework to debates between internalists and externalists in metaethics.
    3 citations
  11. Inner Opacity. Nietzsche on Introspection and Agency.Mattia Riccardi - 2015 - Inquiry: An Interdisciplinary Journal of Philosophy 58 (3):221-243.
    Nietzsche believes that we do not know our own actions, nor their real motives. This belief, however, is but a consequence of his assuming a quite general skepticism about introspection. The main aim of this paper is to offer a reading of this last view, which I shall call the Inner Opacity (IO) view. In the first part of the paper I show that a strong motivation behind IO lies in Nietzsche’s claim that self-knowledge exploits the same set of (...)
    10 citations
  12. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    With the use of machine learning techniques and big data, algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    26 citations
  13. Classical Opacity.Michael Caie, Jeremy Goodman & Harvey Lederman - 2019 - Philosophy and Phenomenological Research 101 (3):524-566.
    21 citations
  14. The opacity of play: a reply to commentators.C. Thi Nguyen - 2021 - Journal of the Philosophy of Sport 48 (3):448-475.
    This is a reply to commentators in the Journal of the Philosophy of Sport's special issue symposium on Games: Agency as Art. I respond to criticisms concerning the value of achievement play and striving play, the transparency and opacity of play, the artistic status of games, and many more.
    2 citations
  15. The Logic of Opacity.Andrew Bacon & Jeffrey Sanford Russell - 2019 - Philosophy and Phenomenological Research 99 (1):81-114.
    We explore the view that Frege's puzzle is a source of straightforward counterexamples to Leibniz's law. Taking this seriously requires us to revise the classical logic of quantifiers and identity; we work out the options, in the context of higher-order logic. The logics we arrive at provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for. We provide models to show (...)
    20 citations
  16. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
    22 citations
  17. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
    10 citations
  18. Does Opacity Undermine Privileged Access?Timothy Allen & Joshua May - 2014 - International Journal of Philosophical Studies 22 (4):617-629.
    Carruthers argues that knowledge of our own propositional attitudes is achieved by the same mechanism used to attain knowledge of other people's minds. This seems incompatible with "privileged access"---the idea that we have more reliable beliefs about our own mental states, regardless of the mechanism. At one point Carruthers seems to suggest he may be able to maintain privileged access, because we have additional sensory information in our own case. We raise a number of worries for this suggestion, concluding that (...)
    3 citations
  19. Opacity.Francey Russell - 2022 - The Philosopher 110 (3):37-41.
  20. Algorithmic Profiling as a Source of Hermeneutical Injustice.Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only (...)
    1 citation
  21. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over which jobs we get, whether we're granted loans, what information we're exposed to online, and so on. Algorithms can, and often do, wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has gone largely neglected. I investigate three questions about algorithmic neutrality: What is it? Is it possible? And when we have it in mind, what can we learn about (...)
    1 citation
  22. Algorithms, Agency, and Respect for Persons.Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key (...)
    4 citations
  23. Poetic Opacity: How to Paint Things with Words.Jesse J. Prinz & Eric Mandelbaum - 2015 - In John Gibson (ed.), The Philosophy of Poetry. Oxford University Press. pp. 63-87.
  24. Algorithmic paranoia: the temporal governmentality of predictive policing.Bonnie Sheehey - 2019 - Ethics and Information Technology 21 (1):49-58.
    In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After first (...)
    9 citations
  25. Algorithms and the Individual in Criminal Law.Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
    4 citations
  26. Opacity of Character: Virtue Ethics and the Legal Admissibility of Character Evidence.Jacob Smith & Georgi Gardiner - 2021 - Philosophical Issues 31 (1):334-354.
    Many jurisdictions prohibit or severely restrict the use of evidence about a defendant’s character to prove legal culpability. Situationists, who argue that conduct is largely determined by situational features rather than by character, can easily defend this prohibition. According to situationism, character evidence is misleading or paltry. Proscriptions on character evidence seem harder to justify, however, on virtue ethical accounts. It appears that excluding character evidence either denies the centrality of character for explaining conduct—the situationist position—or omits probative evidence. (...)
    1 citation
  27. Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
    10 citations
  28. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can (...)
    2 citations
  29. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...)
    2 citations
  30. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation.Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
    10 citations
  31. Algorithmic Randomness and Probabilistic Laws.Jeffrey A. Barrett & Eddy Keming Chen - manuscript
    We consider two ways one might use algorithmic randomness to characterize a probabilistic law. The first is a generative chance* law. Such laws involve a nonstandard notion of chance. The second is a probabilistic* constraining law. Such laws impose relative frequency and randomness constraints that every physically possible world must satisfy. While each notion has virtues, we argue that the latter has advantages over the former. It supports a unified governing account of non-Humean laws and provides independently motivated solutions (...)
  32. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered (...)
  33. Probabilistically coherent credences despite opacity.Christian List - forthcoming - Economics and Philosophy:1-10.
    Real human agents, even when they are rational by everyday standards, sometimes assign different credences to objectively equivalent statements, such as “George Orwell is a writer” and “Eric Arthur Blair is a writer”, or credences less than 1 to necessarily true statements, such as not-yet-proven theorems of arithmetic. Anna Mahtani calls this the phenomenon of “opacity” (a form of hyperintensionality). Opaque credences seem probabilistically incoherent, which goes against a key modelling assumption of probability theory. I sketch a modelling strategy (...)
  34. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of (...)
    34 citations
  35. Algorithm exploitation: humans are keen to exploit benevolent AI.Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami & Ophelia Deroy - 2021 - iScience 24 (6):102679.
    We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that (...)
    3 citations
  36. Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
    2 citations
  37. Algorithmic decision-making: the right to explanation and the significance of stakes.Lauritz Munch, Jens Christian Bjerring & Jakob Mainz - forthcoming - Big Data and Society.
    The stakes associated with an algorithmic decision are often said to play a role in determining whether the decision engenders a right to an explanation. More specifically, “high stakes” decisions are often said to engender such a right to explanation whereas “low stakes” or “non-high” stakes decisions do not. While the overall gist of these ideas is clear enough, the details are lacking. In this paper, we aim to provide these details through a detailed investigation of what we will (...)
  38. Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political (...)
    4 citations
  39. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to (...)
    6 citations
  40. Algorithms and Arguments: The Foundational Role of the ATAI-question.Paola Cantu' & Italo Testa - 2011 - In Frans H. van Eemeren, Bart Garssen, David Godden & Gordon Mitchell (eds.), Proceedings of the Seventh International Conference of the International Society for the Study of Argumentation (pp. 192-203). Rozenberg / Sic Sat.
    Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough it was during this period that Artificial Intelligence was developed, which defended the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an (...)
  41. Algorithmic Bias and Risk Assessments: Lessons from Practice.Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of (...)
    1 citation
  42. Algorithm and Parameters: Solving the Generality Problem for Reliabilism.Jack C. Lyons - 2019 - Philosophical Review 128 (4):463-509.
    The paper offers a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is given by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings of (...)
    21 citations
  43. The ethics of algorithms: mapping the debate.Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
    163 citations
  44. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and (...)
    2 citations
  45. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political (...)
  46. The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
    42 citations
  47. Algorithmic Structuring of Cut-free Proofs.Matthias Baaz & Richard Zach - 1993 - In Börger Egon, Kleine Büning Hans, Jäger Gerhard, Martini Simone & Richter Michael M. (eds.), Computer Science Logic. CSL’92, San Miniato, Italy. Selected Papers. Springer. pp. 29–42.
    The problem of algorithmic structuring of proofs in the sequent calculi LK and LKB (LK where blocks of quantifiers can be introduced in one step) is investigated, where a distinction is made between linear proofs and proofs in tree form. In this framework, structuring coincides with the introduction of cuts into a proof. The algorithmic solvability of this problem can be reduced to the question of k-l-compressibility: "Given a proof of length k, and l ≤ k (...)
  48. Algorithmic Nudging: The Need for an Interdisciplinary Oversight.Christian Schmauder, Jurgis Karpus, Maximilian Moll, Bahador Bahrami & Ophelia Deroy - 2023 - Topoi 42 (3):799-807.
    Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods new possibilities emerge of how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to (...)
    1 citation
  49. Opacity, belief and analyticity.Consuelo Preti - 1992 - Philosophical Studies 66 (3):297 - 306.
    Contrary to appearances, semantic innocence can be claimed for a Fregean account of the semantics of expressions in indirect discourse. Given externalism about meaning, an expression that refers to its ordinary sense in an opaque context refers, ultimately, to its "references"; for, on this view, the reference of an expression directly determines its meaning. Externalism seems to have similar consequences for the truth-conditions of analytic sentences. If reference determines meaning, how can we distinguish a class of sentences as true in (...)
    1 citation
  50. Big Tech, Algorithmic Power, and Democratic Control.Ugur Aytac - forthcoming - Journal of Politics.
    This paper argues that instituting Citizen Boards of Governance (CBGs) is the optimal strategy to democratically contain Big Tech’s algorithmic powers in the digital public sphere. CBGs are bodies of randomly selected citizens that are authorized to govern the algorithmic infrastructure of Big Tech platforms. The main advantage of CBGs is to tackle the concentrated powers of private tech corporations without giving too much power to governments. I show why this is a better approach than ordinary state regulation (...)
1 — 50 / 783