Results for 'Clearing algorithms'

958 found
  1. TORC3: Token-Ring Clearing Heuristic for Currency Circulation. Julio Michael Stern, Carlos Humes, Marcelo de Souza Lauretto, Fabio Nakano, Carlos Alberto de Braganca Pereira & Guilherme Frederico Gazineu Rafare - 2012 - AIP Conference Proceedings 1490:179-188.
    Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the (...)
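    To make concrete what "settling multilateral credit messages with (near) minimum transfers of currency" involves, the following is a generic multilateral-netting sketch in Python. It is only an illustration of the basic idea, not the TORC3 heuristic or the MILP formulation the abstract refers to, and the participants and amounts are invented.

      from collections import defaultdict

      def net_positions(obligations):
          """Collapse pairwise obligations (debtor, creditor, amount) into one net
          position per participant: positive = net receiver, negative = net payer."""
          net = defaultdict(float)
          for debtor, creditor, amount in obligations:
              net[debtor] -= amount
              net[creditor] += amount
          return dict(net)

      def settle(net, eps=1e-9):
          """Greedily pay net receivers from net payers, so settlement needs fewer
          currency transfers than paying every gross claim individually."""
          payers = [[p, -v] for p, v in sorted(net.items()) if v < -eps]
          receivers = [[p, v] for p, v in sorted(net.items()) if v > eps]
          transfers, i, j = [], 0, 0
          while i < len(payers) and j < len(receivers):
              amount = min(payers[i][1], receivers[j][1])
              transfers.append((payers[i][0], receivers[j][0], amount))
              payers[i][1] -= amount
              receivers[j][1] -= amount
              if payers[i][1] <= eps:
                  i += 1
              if receivers[j][1] <= eps:
                  j += 1
          return transfers

      # A owes B 100, B owes C 80, C owes A 30: three gross payments net down to two.
      obligations = [("A", "B", 100.0), ("B", "C", 80.0), ("C", "A", 30.0)]
      print(settle(net_positions(obligations)))

    Real clearing systems add constraints (liquidity limits, timing, partial settlement) that make the optimisation non-trivial, which is where MILP batches or heuristics such as the one proposed here come in.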
  2. Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered (...)
  3. Algorithmic decision-making: the right to explanation and the significance of stakes. Lauritz Munch, Jens Christian Bjerring & Jakob Mainz - forthcoming - Big Data and Society.
    The stakes associated with an algorithmic decision are often said to play a role in determining whether the decision engenders a right to an explanation. More specifically, “high stakes” decisions are often said to engender such a right to explanation whereas “low stakes” or “non-high” stakes decisions do not. While the overall gist of these ideas is clear enough, the details are lacking. In this paper, we aim to provide these details through a detailed investigation of what we will call (...)
  4. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment, of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. (...)
    2 citations
  5. Condensation of Algorithmic Supremacy Claims. Nadisha-Marie Aliman - manuscript
    In the presently unfolding deepfake era, previously unrelated algorithmic superintelligence possibility claims cannot be scientifically analyzed in isolation anymore due to the connected inevitable epistemic interactions that have already commenced. For instance, deep-learning (DL) related algorithmic supremacy claims may intrinsically compete with both neuro-symbolic (NS) algorithmic and further quantum (Q) algorithmic superintelligence achievement claims. Concurrently, a variety of experimental combinations of DL, NS and Q directions are conceivable. While research on these three illustrative variants did not yet offer any clear (...)
  6. Models, Algorithms, and the Subjects of Transparency. Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic (...)
  7. A Portrait of the Artist as a Young Algorithm. Sofie Vlaad - 2024 - Ethics and Information Technology 26 (3):1-11.
    This article explores the question as to whether images generated by Artificial Intelligence such as DALL-E 2 can be considered artworks. After providing a brief primer on how technologies such as DALL-E 2 work in principle, I give an overview of three contemporary accounts of art and then show that there is at least one case where an AI-generated image meets the criteria for art membership under all three accounts. I suggest that our collective hesitancy to call AI-generated images art (...)
  8. The ethical debate about the gig economy: a review and critical analysis. Zhi Ming Tan, Nikita Aggarwal, Josh Cowls, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Technology in Society 65 (2):101954.
    The gig economy is a phenomenon that is rapidly expanding, redefining the nature of work and contributing to a significant change in how contemporary economies are organised. Its expansion is not unproblematic. This article provides a clear and systematic analysis of the main ethical challenges caused by the gig economy. Following a brief overview of the gig economy, its scope and scale, we map the key ethical problems that it gives rise to, as they are discussed in the relevant literature. (...)
    1 citation
  9. Toward an Ethics of AI Assistants: an Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    27 citations
  10. (1 other version) Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239-256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of (...)
    23 citations
  11. Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios. Manuel Dietrich & Thomas H. Weisswange - 2019 - Ethics and Information Technology 21 (3):227-239.
    Through modern driver assistant systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This will become even more prominent in the near future considering the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged. However, scope, principles and strategies for their implementations are not yet clear. Most of the current discussions concentrate on situations of unavoidable crashes in which the life of human (...)
    7 citations
  12. The problem of evaluating automated large-scale evidence aggregators. Nicolas Wüthrich & Katie Steele - 2019 - Synthese (8):3083-3102.
    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the (...)
    1 citation
  13. A philosophical perspective on visualization for digital humanities. Hein Van Den Berg, Arianna Betti, Thom Castermans, Rob Koopman, Bettina Speckmann, K. A. B. Verbeek, Titia Van der Werf, Shenghui Wang & Michel A. Westenberg - 2018 - 3rd Workshop on Visualization for the Digital Humanities.
    In this position paper, we describe a number of methodological and philosophical challenges that arose within our interdisciplinary Digital Humanities project CatVis, which is a collaboration between applied geometric algorithms and visualization researchers, data scientists working at OCLC, and philosophers who have a strong interest in the methodological foundations of visualization research. The challenges we describe concern aspects of one single epistemic need: that of methodologically securing (an increase in) trust in visualizations. We discuss the lack of ground truths (...)
    1 citation
  14. Challenges for an Ontology of Artificial Intelligence. Scott H. Hawley - 2019 - Perspectives on Science and Christian Faith 71 (2):83-95.
    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be (...)
    2 citations
  15. What Does it Mean that PRIMES is in P: Popularization and Distortion Revisited. Boaz Miller - 2009 - Social Studies of Science 39 (2):257-288.
    In August 2002, three Indian computer scientists published a paper, ‘PRIMES is in P’, online. It presents a ‘deterministic algorithm’ which determines in ‘polynomial time’ if a given number is a prime number. The story was quickly picked up by the general press, and by this means spread through the scientific community of complexity theorists, where it was hailed as a major theoretical breakthrough. This is although scientists regarded the media reports as vulgar popularizations. When the paper was published in (...)
    5 citations
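    For context on the technical result being popularized (an aside, not part of this abstract): the 2002 paper is the Agrawal–Kayal–Saxena (AKS) primality test. One standard statement of the identity it builds on is, for an integer $n \ge 2$ and any $a$ with $\gcd(a, n) = 1$,

      \[
        n \text{ is prime} \iff (X + a)^n \equiv X^n + a \pmod{n} \quad \text{in } (\mathbb{Z}/n\mathbb{Z})[X],
      \]

    and the algorithm turns this into a deterministic polynomial-time test by checking the congruence modulo $(X^r - 1, n)$ for a suitably small $r$ and a bounded range of values of $a$.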
  16. Conceptual atomism and the computational theory of mind: a defense of content-internalism and semantic externalism. John-Michael Kuczynski - 2007 - John Benjamins & Co.
    Contemporary philosophy and theoretical psychology are dominated by an acceptance of content-externalism: the view that the contents of one's mental states are constitutively, as opposed to causally, dependent on facts about the external world. In the present work, it is shown that content-externalism involves a failure to distinguish between semantics and pre-semantics---between, on the one hand, the literal meanings of expressions and, on the other hand, the information that one must exploit in order to ascertain their literal meanings. It is (...)
    1 citation
  17. Co-evolutionary biosemantics of evolutionary risk at technogenic civilization: Hiroshima, Chernobyl – Fukushima and further…. Valentin Cheshko & Valery Glazko - 2016 - International Journal of Environmental Problems 3 (1):14-25.
    From Chernobyl to Fukushima, it became clear that the technology is a system evolutionary factor, and the consequences of man-made disasters, as the actualization of risk related to changes in the social heredity (cultural transmission) elements. The uniqueness of the human phenomenon is a characteristic of the system arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. Distribution emerging adaptive innovation within each module is in accordance with the two algorithms that are characterized by the (...)
  18. A Metatheoretical Basis for Interpretations of Problem-solving Behavior. Steven James Bartlett - 1978 - Methodology and Science: Interdisciplinary Journal for the Empirical Study of the Foundations of Science and Their Methodology 11 (2):59-85.
    The paper identifies defining characteristics of the principal models of problem-solving behavior which are useful in developing a general theory of problem-solving. An attempt is made both to make explicit those disagreements between theorists of different persuasions which have served as obstacles to an integrated approach, and to show that these disagreements have arisen from a number of conceptual confusions: The conflict between information processors and behavioral analysts has resulted from a common failure to understand theoretical sufficiency, and hence these (...)
    3 citations
  19. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression. Martin Miragoli - 2024 - Episteme: A Journal of Social Epistemology:1-19.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that (...)
  20. Evaluation of the Image Quality of Ultra-Low-Dose Paranasal Sinus Computed Tomography Scans. Melih Akşamoğlu & Mehmet Sait Menzilcioğlu - 2023 - European Journal of Therapeutics 29 (2):143-148.
    Objective: We aimed to investigate the image quality of paranasal sinus computed tomography (CT) scans obtained with the “Advanced intelligent Clear-IQ Engine” (AICE) software and ultra-low dose parameters in patients with prediagnosed rhinitis, sinusitis or nasal septum deviation. -/- Methods: The first 50 patients (31 men and 19 women, aged 18-70 years) who agreed to participate in our prospectively planned study were included in the study. Imaging of the patients was performed with a 160-slice multidetector CT device Canon Aquilion Prime (...)
  21. Meršić o Hilbertovoj aksiomatskoj metodi [Meršić on Hilbert's axiomatic method]. Srećko Kovač - 2006 - In E. Banić-Pajnić & M. Girardi Karšulin (eds.), Zbornik u čast Franji Zenku. pp. 123-135.
    The criticism of Hilbert's axiomatic system of geometry by Mate Meršić (Merchich, 1850-1928), presented in his work "Organistik der Geometrie" (1914, also in "Modernes und Modriges", 1914), is analyzed and discussed. According to Meršić, geometry cannot be based on its own axioms, as a logical analysis of spatial intuition, but must be derived as a "spatial concretion" using "higher" axioms of arithmetic, logic, and "rational algorithmics." Geometry can only be one, because space is also only one. It cannot be reduced (...)
  22. Schema-Centred Unity and Process-Centred Pluralism of the Predictive Mind. Nina Poth - 2022 - Minds and Machines 32 (3):433-459.
    Proponents of the predictive processing (PP) framework often claim that one of the framework’s significant virtues is its unificatory power. What is supposedly unified are predictive processes in the mind, and these are explained in virtue of a common prediction error-minimisation (PEM) schema. In this paper, I argue against the claim that PP currently converges towards a unified explanation of cognitive processes. Although the notion of PEM systematically relates a set of posits such as ‘efficiency’ and ‘hierarchical coding’ into a (...)
  23. Computational logic. Vol. 1: Classical deductive computing with classical logic. 2nd ed. Luis M. Augusto - 2022 - London: College Publications.
    This is the 3rd edition. Although a number of new technological applications require classical deductive computation with non-classical logics, many key technologies still do well—or exclusively, for that matter—with classical logic. In this first volume, we elaborate on classical deductive computing with classical logic. The objective of the main text is to provide the reader with a thorough elaboration on both classical computing – a.k.a. formal languages and automata theory – and classical deduction with the classical first-order predicate calculus with (...)
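    As a small, self-contained illustration of the "formal languages and automata theory" side of classical computing mentioned above (a toy example, not taken from the book), here is a deterministic finite automaton in Python that accepts binary strings containing an even number of 1s:

      # Toy DFA over {'0', '1'} accepting strings with an even number of '1's.
      DFA = {
          "start": "even",
          "accept": {"even"},
          "delta": {
              ("even", "0"): "even", ("even", "1"): "odd",
              ("odd", "0"): "odd",   ("odd", "1"): "even",
          },
      }

      def accepts(dfa, word):
          """Run the automaton over the word and report whether it ends in an accepting state."""
          state = dfa["start"]
          for symbol in word:
              state = dfa["delta"][(state, symbol)]
          return state in dfa["accept"]

      assert accepts(DFA, "0110")      # two '1's: accepted
      assert not accepts(DFA, "010")   # one '1': rejected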
  24. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems. Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  25. Automatic Face Mask Detection Using Python. M. Madan Mohan - 2021 - Journal of Science Technology and Research (JSTAR) 2 (1):91-100.
    The corona virus COVID-19 pandemic is causing a global health crisis so the effective protection methods is wearing a face mask in public areas according to the World Health Organization (WHO). The COVID-19 pandemic forced governments across the world to impose lockdowns to prevent virus transmissions. Reports indicate that wearing facemasks while at work clearly reduces the risk of transmission. An efficient and economic approach of using AI to create a safe environment in a manufacturing setup. A hybrid model using (...)
  26. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in the (big) datasets, and predict outcomes based on those identified patterns and correlations with the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
    28 citations
  27. Algorithmic paranoia: the temporal governmentality of predictive policing. Bonnie Sheehey - 2019 - Ethics and Information Technology 21 (1):49-58.
    In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After (...)
    9 citations
  28. (5 other versions) Algorithm Evaluation Without Autonomy. Scott Hill - forthcoming - AI and Ethics.
    In Algorithms & Autonomy, Rubel, Castro, and Pham (hereafter RCP), argue that the concept of autonomy is especially central to understanding important moral problems about algorithms. In particular, autonomy plays a role in analyzing the version of social contract theory that they endorse. I argue that although RCP are largely correct in their diagnosis of what is wrong with the algorithms they consider, those diagnoses can be appropriated by moral theories RCP see as in competition with their (...)
  29. Algorithms, Agency, and Respect for Persons. Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways (...)
    7 citations
  30. Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate (...)
    1 citation
  31. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation. Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
    11 citations
  32. The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can (...)
    211 citations
  33. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
    28 citations
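    The advisory system described here is built on fuzzy cognitive maps. As a rough sketch of that formalism only, under one common update convention (this is not the authors' clinical system, and the toy weights below are invented):

      # Generic fuzzy-cognitive-map update: a_i <- sigmoid(a_i + sum_j w_ji * a_j).
      import numpy as np

      def fcm_step(activations, weights, steepness=1.0):
          net = activations + weights.T @ activations
          return 1.0 / (1.0 + np.exp(-steepness * net))

      def fcm_run(activations, weights, steps=20):
          a = np.asarray(activations, dtype=float)
          for _ in range(steps):
              a = fcm_step(a, weights)
          return a

      # W[i][j] is the influence of concept i on concept j (toy values, not clinical data).
      W = np.array([[0.0, 0.6, -0.4],
                    [0.0, 0.0,  0.7],
                    [0.0, 0.0,  0.0]])
      print(fcm_run(np.array([1.0, 0.0, 0.0]), W))

    In such a map the concepts would encode case features and principle-related judgements, and machine learning would be used to fit the edge weights; how the authors actually do this is set out in the paper itself.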
  34. Algorithmic neutrality. Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
    1 citation
  35. Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
    6 citations
  36. (1 other version) Algorithmic correspondence and completeness in modal logic. IV. Semantic extensions of SQEMA. Willem Conradie & Valentin Goranko - 2008 - Journal of Applied Non-Classical Logics 18 (2):175-211.
    In a previous work we introduced the algorithm SQEMA for computing first-order equivalents and proving canonicity of modal formulae, and thus established a very general correspondence and canonical completeness result. SQEMA is based on transformation rules, the most important of which employs a modal version of a result by Ackermann that enables elimination of an existentially quantified predicate variable in a formula, provided a certain negative polarity condition on that variable is satisfied. In this paper we develop several extensions of (...)
    2 citations
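    As background on the result mentioned in this abstract (not part of the abstract itself): one standard first-order formulation of Ackermann's lemma, of which SQEMA employs a modal adaptation, is

      \[
        \exists P\, \bigl[\, \forall \bar{x}\, (A(\bar{x}) \rightarrow P(\bar{x})) \wedge B(P) \,\bigr] \;\equiv\; B[P(\bar{x}) := A(\bar{x})],
      \]

    provided the predicate variable $P$ does not occur in $A$ and occurs only negatively in $B$ (a dual version covers only-positive occurrences). This is the "negative polarity condition" referred to above.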
  37. On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of (...)
    36 citations
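    To make "statistical relationships between the algorithm's predictions and the actual outcomes" concrete, the sketch below computes two criteria widely discussed in this literature, false-positive-rate parity and positive-predictive-value parity, from labelled predictions grouped by membership. The data and helper names are invented for illustration and are not taken from Hedden's paper.

      from collections import defaultdict

      def group_rates(records):
          """records: iterable of (group, prediction, outcome) with 0/1 labels.
          Returns each group's false positive rate (fpr) and positive predictive value (ppv)."""
          counts = defaultdict(lambda: {"fp": 0, "tn": 0, "tp": 0, "pp": 0})
          for group, pred, actual in records:
              c = counts[group]
              if pred == 1:
                  c["pp"] += 1
                  if actual == 1:
                      c["tp"] += 1
              if actual == 0:
                  if pred == 1:
                      c["fp"] += 1
                  else:
                      c["tn"] += 1
          rates = {}
          for group, c in counts.items():
              negatives = c["fp"] + c["tn"]
              rates[group] = {
                  "fpr": c["fp"] / negatives if negatives else float("nan"),
                  "ppv": c["tp"] / c["pp"] if c["pp"] else float("nan"),
              }
          return rates

      # Hypothetical toy data: (group, predicted label, actual outcome).
      data = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
              ("B", 1, 1), ("B", 0, 0), ("B", 0, 1)]
      print(group_rates(data))   # the criteria are satisfied when rates match across groups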
  38. The algorithm audit: Scoring the algorithms that score us. Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that (...)
    12 citations
  39. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy. Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be handled (...)
    2 citations
  40. Are algorithms always arbitrary? Three types of arbitrariness and ways to overcome the computationalist’s trilemma. C. Percy - manuscript
    Implementing an algorithm on part of our causally-interconnected physical environment requires three choices that are typically considered arbitrary, i.e. no single option is innately privileged without invoking an external observer perspective. First, how to delineate one set of local causal relationships from the environment. Second, within this delineation, which inputs and outputs to designate for attention. Third, what meaning to assign to particular states of the designated inputs and outputs. Having explained these types of arbitrariness, we assess their relevance for (...)
  41. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
    10 citations
  42. Algorithms and Arguments: The Foundational Role of the ATAI-question. Paola Cantu' & Italo Testa - 2011 - In Frans H. van Eemeren, Bart Garssen, David Godden & Gordon Mitchell (eds.), Proceedings of the Seventh International Conference of the International Society for the Study of Argumentation. Rozenberg / Sic Sat.
    Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough it was during this period that Artificial Intelligence was developed, which defended the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an (...)
  43. (1 other version) The ethics of algorithms: key problems and solutions. Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis (...)
    44 citations
  44. Introduction: Algorithmic Thought. M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):5-11.
    This introduction to a special section on algorithmic thought provides a framework through which the articles in that collection can be contextualised and their individual contributions highlighted. Over the past decade, there has been a growing interest in artificial intelligence (AI). This special section reflects on this AI boom and its implications for studying what thinking is. Focusing on the algorithmic character of computing machines and the thinking that these machines might express, each of the special section’s essays considers different (...)
  45. Algorithmic Political Bias Can Reduce Political Polarization. Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench (...)
  46. Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust. Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithm Decision Making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust algorithmic agents to act (...)
  47. Algorithms and the Individual in Criminal Law. Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
    5 citations
  48. Algorithm exploitation: humans are keen to exploit benevolent AI. Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami & Ophelia Deroy - 2021 - iScience 24 (6):102679.
    We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that (...)
    3 citations
  49. Algorithmic Bias and Risk Assessments: Lessons from Practice. Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of as-sessments: (...)
    1 citation
  50. Negligent Algorithmic Discrimination. Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective (...)
1 — 50 / 958