Results for 'Algorithmic Fairness'

998 found
  1. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    With the use of machine learning techniques and big data, algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    26 citations
  2. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate (...)
    34 citations
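    The statistical criteria this entry refers to compare quantities such as selection rates and error rates across groups. Below is a minimal sketch of how such group-level statistics might be computed; the synthetic data, group labels, and 0.5 decision threshold are illustrative assumptions, not anything from the paper.
      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic data: a risk score, a binary outcome, and a binary group label.
      n = 10_000
      group = rng.integers(0, 2, size=n)
      score = rng.uniform(0, 1, size=n)
      outcome = (rng.uniform(0, 1, size=n) < score).astype(int)
      pred = (score >= 0.5).astype(int)          # assumed decision threshold

      def group_stats(g):
          """Selection rate, false positive rate, and false negative rate for one group."""
          y, yhat = outcome[group == g], pred[group == g]
          selection_rate = yhat.mean()
          fpr = yhat[y == 0].mean()              # P(pred = 1 | outcome = 0, group = g)
          fnr = (1 - yhat[y == 1]).mean()        # P(pred = 0 | outcome = 1, group = g)
          return selection_rate, fpr, fnr

      for g in (0, 1):
          sel, fpr, fnr = group_stats(g)
          print(f"group {g}: selection={sel:.3f}  FPR={fpr:.3f}  FNR={fnr:.3f}")
    Statistical parity would compare the selection rates across groups, while equalized-odds-style criteria would compare the FPR and FNR values.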
  3. Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered (...)
    10 citations
  4. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...)
    2 citations
  5. Algorithmic fairness in mortgage lending: from absolute conditions to relational trade-offs.Michelle Seng Ah Lee & Luciano Floridi - 2020 - Minds and Machines 31 (1):165-191.
    To address the rising concern that algorithmic decision-making may reinforce discriminatory biases, researchers have proposed many notions of fairness and corresponding mathematical formalizations. Each of these notions is often presented as a one-size-fits-all, absolute condition; however, in reality, the practical and ethical trade-offs are unavoidable and more complex. We introduce a new approach that considers fairness—not as a binary, absolute mathematical condition—but rather, as a relational notion in comparison to alternative decision-making processes. Using US mortgage lending as (...)
    7 citations
  6. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy.Atoosa Kasirzadeh - 2022 - AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in (...)
  7. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria (...)
    1 citation
  8. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein has two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding (...)
  9. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow (...)
    12 citations
  10. Informational richness and its impact on algorithmic fairness.Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of (...)
    1 citation
  11. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to (...)
    6 citations
  12. Algorithmic Indirect Discrimination, Fairness, and Harm.Frej Klem Thomsen - 2023 - AI and Ethics.
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It next (...)
  13. Fair machine learning under partial compliance.Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the allocation (...)
    1 citation
  14. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered (...)
  15. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do (...)
    10 citations
  16. What's Fair about Individual Fairness?Will Fleisher - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other (...)
    1 citation
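    Individual fairness of the kind this entry discusses is typically formalized as a Lipschitz-style condition: outputs for two individuals should differ by no more than (a constant times) the distance between the individuals under a task-specific similarity metric. A small sketch of that check for a deterministic scorer follows; the feature vectors, the Euclidean metric, and the Lipschitz constant are illustrative assumptions.
      import numpy as np

      def individual_fairness_violations(X, scores, lipschitz=1.0):
          """Return pairs (i, j) with |score_i - score_j| > lipschitz * d(x_i, x_j)."""
          violations = []
          for i in range(len(X)):
              for j in range(i + 1, len(X)):
                  d_in = np.linalg.norm(X[i] - X[j])    # similarity metric on individuals
                  d_out = abs(scores[i] - scores[j])    # distance between model outputs
                  if d_out > lipschitz * d_in:
                      violations.append((i, j))
          return violations

      X = np.array([[0.20, 1.0], [0.21, 1.0], [0.90, 0.1]])   # toy applicants
      scores = np.array([0.30, 0.80, 0.55])                   # toy model outputs
      print(individual_fairness_violations(X, scores))        # flags the near-identical pair (0, 1)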
  17. Identity and the Limits of Fair Assessment.Rush T. Stewart - 2022 - Journal of Theoretical Politics 34 (3):415-442.
    In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate (...)
    2 citations
  18. Algorithms and the Individual in Criminal Law.Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
    4 citations
  19. Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
    2 citations
  20. Three Lessons For and From Algorithmic Discrimination.Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, (...)
  21. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
    2 citations
  22. The philosophical basis of algorithmic recourse.Suresh Venkatasubramanian & Mark Alfano - forthcoming - Fairness, Accountability, and Transparency Conference 2020.
    Philosophers have established that certain ethically important values are modally robust in the sense that they systematically deliver correlative benefits across a range of counterfactual scenarios. In this paper, we contend that recourse – the systematic process of reversing unfavorable decisions by algorithms and bureaucracies across a range of counterfactual scenarios – is such a modally robust good. In particular, we argue that two essential components of a good life – temporally extended agency and trust – are under- (...)
    6 citations
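    Recourse, as characterized in this entry, concerns reversing an unfavorable algorithmic decision. For a linear scoring rule there is a simple closed form for the smallest feature change that reaches the decision boundary, which the sketch below illustrates; the weights and the applicant are invented, and real recourse methods additionally restrict which features may change and by how much.
      import numpy as np

      # Assumed linear decision rule: approve iff w @ x + b >= 0.
      w = np.array([2.0, -1.0, 0.5])
      b = -1.0
      x = np.array([0.2, 1.5, 0.4])          # hypothetical rejected applicant

      margin = w @ x + b                      # negative, so the applicant is rejected
      if margin < 0:
          # Smallest L2 change that reaches the boundary: move along w by -margin / ||w||^2.
          delta = -margin * w / (w @ w)
          print("suggested change:", np.round(delta, 3))
          print("new score:", round(float(w @ (x + delta) + b), 6))   # ~0, i.e., on the boundary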
  23. Adversarial Sampling for Fairness Testing in Deep Neural Network.Tosin Ige, William Marfo, Justin Tonkinson, Sikiru Adewale & Bolanle Hafiz Matti - 2023 - International Journal of Advanced Computer Science and Applications 14 (2).
    In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. While several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attacks, some of which include adversarial training algorithms, there is still the pitfall that adversarial training tends to cause disparities in accuracy and robustness among different groups. Our research is aimed (...)
    4 citations
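    The approach described in this entry probes a trained model with adversarially perturbed inputs and compares how accuracy degrades across groups. Below is a toy sketch in that spirit, using an FGSM-style perturbation against a fixed logistic model; the data, model, and epsilon are invented for illustration, and this is not the authors' procedure.
      import numpy as np

      rng = np.random.default_rng(1)

      # Toy binary task with two groups whose features carry different noise levels.
      def make_group(n, noise):
          y = rng.integers(0, 2, size=n)
          X = np.column_stack([y + rng.normal(0, noise, n), rng.normal(0, 1, n)])
          return X, y

      X0, y0 = make_group(500, 0.3)   # group 0: cleaner features
      X1, y1 = make_group(500, 0.8)   # group 1: noisier features

      w, b = np.array([4.0, 0.0]), -2.0          # fixed "trained" logistic model

      def predict(X):
          return ((X @ w + b) > 0).astype(int)

      def fgsm(X, y, eps):
          # For logistic loss, the input gradient points along +w for y = 0 and along -w for y = 1.
          direction = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
          return X + eps * direction

      for name, X, y in [("group 0", X0, y0), ("group 1", X1, y1)]:
          clean = (predict(X) == y).mean()
          adversarial = (predict(fgsm(X, y, eps=0.4)) == y).mean()
          print(f"{name}: clean acc={clean:.2f}  adversarial acc={adversarial:.2f}")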
  24. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions of (...)
  25. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  26. Decision Time: Normative Dimensions of Algorithmic Speed.Daniel Susser - forthcoming - ACM Conference on Fairness, Accountability, and Transparency (FAccT '22).
    Existing discussions about automated decision-making focus primarily on its inputs and outputs, raising questions about data collection and privacy on one hand and accuracy and fairness on the other. Less attention has been devoted to critically examining the temporality of decision-making processes—the speed at which automated decisions are reached. In this paper, I identify four dimensions of algorithmic speed that merit closer analysis. Duration (how much time it takes to reach a judgment), timing (when automated systems intervene in (...)
  27. A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform.Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine learning (...)
  28. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  29. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  30. The Fallacy of Many Questions.Frank Fair - 1973 - Southwestern Journal of Philosophy 4 (1):89-92.
    In this article I explore two accounts of the Fallacy of Many Questions made famous by the question "Have you stopped beating your wife?" The accounts are from the works of Lennart Aqvist and Noel Belnap, and the two authors differ in their accounts of the fallacy. Then I give my own account based on understanding a facet of erotetic logic, i. e., the logic of questions.
    3 citations
  31. Socrates in the schools: Gains at three-year follow-up.Frank Fair, Lory E. Haas, Carol Gardoski, Daphne Johnson, Debra Price & Olena Leipnik - 2015 - Journal of Philosophy in Schools 2 (2).
    Three recent research reports by Topping and Trickey, by Fair and colleagues, and by Gorard, Siddiqui and Huat See have produced data that support the conclusion that a Philosophy for Children program of one-hour-per-week structured discussions has a marked positive impact on students. This article presents data from a follow-up study done three years after the completion of the study reported in Fair et al. The data show that the positive gains in scores on the Cognitive Abilities Test were (...)
    1 citation
  32. An Impossibility Theorem for Base Rate Tracking and Equalised Odds.Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby (...)
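    The impossibility result this entry builds on can be made vivid with a standard identity relating calibration-style and error-rate-style criteria; it is stated here for orientation and is not taken from the paper. Writing p for a group's base rate, PPV for its positive predictive value, and FPR, FNR for its error rates:
      \[
        \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
      \]
    If two groups share the same PPV (a calibration-like condition) and the same FPR and FNR (Equalised Odds), the identity forces their base rates to be equal, so when base rates differ the criteria can be satisfied together only in trivial cases.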
  33. When Gig Workers Become Essential: Leveraging Customer Moral Self-Awareness Beyond COVID-19.Julian Friedland - 2022 - Business Horizons 66 (2):181-190.
    The COVID-19 pandemic has intensified the extent to which economies in the developed and developing world rely on gig workers to perform essential tasks such as health care, personal transport, food and package delivery, and ad hoc tasking services. As a result, workers who provide such services are no longer perceived as mere low-skilled laborers, but as essential workers who fulfill a crucial role in society. The newly elevated moral and economic status of these workers increases consumer demand for corporate (...)
    1 citation
  34. Proceed with Caution.Annette Zimmermann & Chad Lee-Stronach - 2021 - Canadian Journal of Philosophy (1):6-25.
    It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires (...)
    9 citations
  35. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and (...)
    3 citations
  36. Egalitarian Machine Learning.Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative (...)
    2 citations
  37. Effective Procedures.Nathan Salmon - 2023 - Philosophies 8 (2):27.
    This is a non-technical version of "The Decision Problem for Effective Procedures." The “somewhat vague, intuitive” notion from computability theory of an effective procedure (method) or algorithm can be fairly precisely defined, even if it does not have a purely mathematical definition—and even if (as many have asserted) for that reason, the Church–Turing thesis (that the effectively calculable functions on natural numbers are exactly the general recursive functions), cannot be proved. However, it is logically provable from the notion of an (...)
  38. The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables.Alessandra Angelucci, Ziyue Li, Niya Stoimenova & Stefano Canali - forthcoming - AI and Society:1-11.
    Artificial intelligence systems have been widely applied to various contexts, including high-stake decision processes in healthcare, banking, and judicial systems. Some developed AI models fail to offer a fair output for specific minority groups, sparking comprehensive discussions about AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation one stakeholder has within the AI system’s life cycle, the more influence they have over the way the system will function. This means (...)
  39. On the Possibility of Testimonial Justice.Rush T. Stewart & Michael Nielsen - 2020 - Australasian Journal of Philosophy 98 (4):732-746.
    Recent impossibility theorems for fair risk assessment extend to the domain of epistemic justice. We translate the relevant model, demonstrating that the problems of fair risk assessment and just credibility assessment are structurally the same. We motivate the fairness criteria involved in the theorems as also being appropriate in the setting of testimonial justice. Any account of testimonial justice that implies the fairness/justice criteria must be abandoned, on pain of triviality.
    3 citations
  40. The Decision Problem for Effective Procedures.Nathan Salmón - 2023 - Logica Universalis 17 (2):161-174.
    The “somewhat vague, intuitive” notion from computability theory of an effective procedure (method) or algorithm can be fairly precisely defined even if it is not sufficiently formal and precise to belong to mathematics proper (in a narrow sense)—and even if (as many have asserted) for that reason the Church–Turing thesis is unprovable. It is proved logically that the class of effective procedures is not decidable, i.e., that no effective procedure is possible for ascertaining whether a given procedure is effective. This (...)
    1 citation
  41. Mutual affordances: the dynamics between social media and populism.Jeroen Hopster - 2021 - Media, Culture and Society 43 (3):551-560.
    In a recent contribution to this journal Paolo Gerbaudo has argued that an ‘elective affinity’ exists between social media and populism. The present article expands on Gerbaudo’s argument and examines various dimensions of this affinity in further detail. It argues that it is helpful to conceptually reframe the proposed affinity in terms of affordances. Four affordances are identified which make the social media ecology relatively favourable to both right- as well as left-wing populism, compared to the pre-social media ecology. These affordances (...)
    4 citations
  42. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  43. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
    2 citations
  44. Implications and Applications of Artificial Intelligence in the Legal Domain.Besan S. Abu Nasser, Marwan M. Saleh & Samy S. Abu-Naser - 2024 - International Journal of Academic Information Systems Research (IJAISR) 7 (12):18-25.
    As the integration of Artificial Intelligence (AI) continues to permeate various sectors, the legal domain stands on the cusp of a transformative era. This research paper delves into the multifaceted relationship between AI and the law, scrutinizing the profound implications and innovative applications that emerge at the intersection of these two realms. The study commences with an examination of the current landscape, assessing the challenges and opportunities that AI presents within legal frameworks. With an emphasis on efficiency, accuracy, and (...)
  45. Russell and the Newman Problem Revisited.Marc Champagne - 2012 - Analysis and Metaphysics 11:65 - 74.
    In his 1927 Analysis of Matter and elsewhere, Russell argued that we can successfully infer the structure of the external world from that of our explanatory schemes. While nothing guarantees that the intrinsic qualities of experiences are shared by their objects, he held that the relations tying together those relata perforce mirror relations that actually obtain (these being expressible in the formal idiom of the Principia Mathematica). This claim was subsequently criticized by the Cambridge mathematician Max Newman as true but (...)
    2 citations
  46. Bridging Conceptual Gaps: The Kolmogorov-Sinai Entropy.Massimiliano Badino - forthcoming - Isonomía. Revista de Teoría y Filosofía Del Derecho.
    The Kolmogorov-Sinai entropy is a fairly exotic mathematical concept which has recently aroused some interest on the philosophers’ part. The most salient trait of this concept is its working as a junction between such diverse ambits as statistical mechanics, information theory and algorithm theory. In this paper I argue that, in order to understand this very special feature of the Kolmogorov-Sinai entropy, it is essential to reconstruct its genealogy. Somewhat surprisingly, this story takes us as far back as the beginning of (...)
  47. Conflicting Aims and Values in the Application of Smart Sensors in Geriatric Rehabilitation: Ethical Analysis.Christopher Predel, Cristian Timmermann, Frank Ursin, Marcin Orzechowski, Timo Ropinski & Florian Steger - 2022 - JMIR mHealth and uHealth 10 (6):e32910.
    Background: Smart sensors have been developed as diagnostic tools for rehabilitation to cover an increasing number of geriatric patients. They promise to enable an objective assessment of complex movement patterns. Objective: This research aimed to identify and analyze the conflicting ethical values associated with smart sensors in geriatric rehabilitation and provide ethical guidance on the best use of smart sensors to all stakeholders, including technology developers, health professionals, patients, and health authorities. Methods: On the basis of a systematic (...)
    1 citation
  48. Big Data Analytics in Healthcare: Exploring the Role of Machine Learning in Predicting Patient Outcomes and Improving Healthcare Delivery.Federico Del Giorgio Solfa & Fernando Rogelio Simonato - 2023 - International Journal of Computations Information and Manufacturing (IJCIM) 3 (1):1-9.
    Healthcare professionals decide wisely about personalized medicine, treatment plans, and resource allocation by utilizing big data analytics and machine learning. To guarantee that algorithmic recommendations are impartial and fair, however, ethical issues relating to prejudice and data privacy must be taken into account. Big data analytics and machine learning have a great potential to disrupt healthcare, and as these technologies continue to evolve, new opportunities to reform healthcare and enhance patient outcomes may arise. In order to investigate the patient’s (...)
  49. Simulating Grice: Emergent Pragmatics in Spatialized Game Theory.Patrick Grim - 2011 - In Anton Benz, Christian Ebert & Robert van Rooij (eds.), Language, Games, and Evolution. Springer-Verlag.
    How do conventions of communication emerge? How do sounds or gestures take on a semantic meaning, and how do pragmatic conventions emerge regarding the passing of adequate, reliable, and relevant information? My colleagues and I have attempted in earlier work to extend spatialized game theory to questions of semantics. Agent-based simulations indicate that simple signaling systems emerge fairly naturally on the basis of individual information maximization in environments of wandering food sources and predators. Simple signaling emerges by means of any (...)
    3 citations
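    The simulations described in this entry concern the emergence of signaling conventions. A minimal, self-contained Lewis signaling game with urn-style reinforcement is sketched below; it is the textbook toy version, not the authors' spatialized model, and the state/signal/act counts and reinforcement rule are assumptions.
      import random

      random.seed(0)
      N_STATES = N_SIGNALS = N_ACTS = 2

      # Urn weights: the sender maps states to signals, the receiver maps signals to acts.
      sender = [[1.0] * N_SIGNALS for _ in range(N_STATES)]
      receiver = [[1.0] * N_ACTS for _ in range(N_SIGNALS)]

      def draw(weights):
          return random.choices(range(len(weights)), weights=weights)[0]

      successes = 0
      for t in range(1, 20_001):
          state = random.randrange(N_STATES)
          signal = draw(sender[state])
          act = draw(receiver[signal])
          if act == state:                      # communication succeeded
              sender[state][signal] += 1        # reinforce the signal and act that were used
              receiver[signal][act] += 1
              successes += 1
          if t % 5000 == 0:
              print(f"after {t} rounds: cumulative success rate = {successes / t:.2f}")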
  50. Digital Monology: The Authority of the Search Engine.Walter Barta - 2019 - Media and the Moving Image at University of Houston.
    2019 Applied Technology Award for the Media and the Moving Image Awards at University of Houston. The Google algorithm, as a ranking and ordering structure, cannot be “objective” as long as the page-ranking mechanism produces social effects and always inadvertently and inescapably affects social priorities. Imitable units of information (memes) on the internet change according to the laws of exponential growth, like other social phenomena, which include Google rankings. Mathematically and graphically represented, the effects of mimetic inflation on Google (...)
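    This entry treats the Google algorithm as a ranking and ordering structure. For orientation, a bare-bones PageRank power iteration on a tiny link graph is sketched below; this is the simplified textbook formulation, not Google's production algorithm, and the example graph is invented.
      import numpy as np

      # Toy web graph: adjacency[i][j] = 1 means page i links to page j.
      adjacency = np.array([
          [0, 1, 1, 0],
          [0, 0, 1, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 0],
      ], dtype=float)

      damping = 0.85
      n = adjacency.shape[0]
      transition = adjacency / adjacency.sum(axis=1, keepdims=True)   # row-stochastic link matrix

      rank = np.full(n, 1.0 / n)
      for _ in range(100):                                            # power iteration
          rank = (1 - damping) / n + damping * transition.T @ rank

      print(np.round(rank / rank.sum(), 3))                           # normalized importance scores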
1 — 50 / 998