Results for 'Algorithmic fairness'

961 found
  1. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in the (big) datasets, and predict outcomes based on those identified patterns and correlations; with the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    31 citations
  2. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate (...)
    41 citations
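    The entry above is truncated just where it begins to state such statistical relationships. A minimal sketch of two standard criteria from this literature, in the usual notation (binary outcome Y, binary prediction \hat{Y}, score S, group attribute A); these are the textbook formulations, given here as an indication of the kind of criteria at issue rather than as Hedden's own definitions:

        % Equalized odds: prediction rates match across groups, conditional on the actual outcome.
        P(\hat{Y} = 1 \mid Y = y,\, A = a) \;=\; P(\hat{Y} = 1 \mid Y = y,\, A = b)
            \qquad \text{for } y \in \{0,1\} \text{ and all groups } a, b
        % Calibration within groups: a score of s carries the same evidential weight in every group.
        P(Y = 1 \mid S = s,\, A = a) \;=\; s
            \qquad \text{for all scores } s \text{ and all groups } a

    Outside of trivial cases (equal base rates across groups, or a perfect predictor), calibration and equalized odds cannot be jointly satisfied, which is the impossibility result that several entries below engage with.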
  3. Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered (...)
    10 citations
  4. The Ideals Program in Algorithmic Fairness.Rush T. Stewart - forthcoming - AI and Society.
    I consider statistical criteria of algorithmic fairness from the perspective of the _ideals_ of fairness to which these criteria are committed. I distinguish and describe three theoretical roles such ideals might play. The usefulness of this program is illustrated by taking Base Rate Tracking and its ratio variant as a case study. I identify and compare the ideals of these two criteria, then consider them in each of the aforementioned three roles for ideals. This ideals program may (...)
    2 citations
  5. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...)
    2 citations
  6. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy.Atoosa Kasirzadeh - 2022 - AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in (...)
  7. (1 other version)Algorithmic fairness in mortgage lending: from absolute conditions to relational trade-offs.Michelle Seng Ah Lee & Luciano Floridi - 2020 - Minds and Machines 31 (1):165-191.
    To address the rising concern that algorithmic decision-making may reinforce discriminatory biases, researchers have proposed many notions of fairness and corresponding mathematical formalizations. Each of these notions is often presented as a one-size-fits-all, absolute condition; however, in reality, the practical and ethical trade-offs are unavoidable and more complex. We introduce a new approach that considers fairness—not as a binary, absolute mathematical condition—but rather, as a relational notion in comparison to alternative decision-making processes. Using US mortgage lending as (...)
    7 citations
  8. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, which he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria (...)
    2 citations
  9. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein makes two key contributions that help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding (...)
  10. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow (...)
    14 citations
  11. Informational richness and its impact on algorithmic fairness.Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of (...)
    2 citations
  12. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to (...)
    8 citations
  13. Algorithmic Indirect Discrimination, Fairness, and Harm.Frej Klem Thomsen - 2023 - AI and Ethics.
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It next (...)
  14. New Possibilities for Fair Algorithms.Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-17.
    We introduce a fairness criterion that we call Spanning. Spanning i) is implied by Calibration, ii) retains interesting properties of Calibration that some other ways of relaxing that criterion do not, and iii) unlike Calibration and other prominent ways of weakening it, is consistent with Equalized Odds outside of trivial cases.
    3 citations
  15. What's Fair about Individual Fairness?Will Fleisher - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other (...)
    1 citation
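    For readers unfamiliar with the formal core of IF: the standard Dwork et al. formulation, which the work in the entry above builds on, treats fairness as a Lipschitz condition on a randomized classifier M mapping individuals to distributions over outcomes (the notation here is mine):

        % Individual fairness as a Lipschitz condition:
        D\big(M(x), M(y)\big) \;\le\; d(x, y) \qquad \text{for all individuals } x, y
        % d: task-specific similarity metric on individuals;
        % D: distance between the output distributions assigned by M.

    Informally: individuals who are close under the similarity metric d must receive nearly the same distribution over outcomes.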
  16. Identity and the Limits of Fair Assessment.Rush T. Stewart - 2022 - Journal of Theoretical Politics 34 (3):415-442.
    In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate (...)
    5 citations
  17. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered (...)
  18. (5 other versions)Algorithm Evaluation Without Autonomy.Scott Hill - forthcoming - AI and Ethics.
    In Algorithms & Autonomy, Rubel, Castro, and Pham (hereafter RCP) argue that the concept of autonomy is especially central to understanding important moral problems about algorithms. In particular, autonomy plays a role in analyzing the version of social contract theory that they endorse. I argue that although RCP are largely correct in their diagnosis of what is wrong with the algorithms they consider, those diagnoses can be appropriated by moral theories RCP see as in competition with their autonomy-based theory. (...)
  19. Fair machine learning under partial compliance.Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how do partial compliance and the consequent strategic behavior of decision subjects affect the allocation (...)
    1 citation
  20. Multiplicative Metric Fairness Under Composition.Milan Mossé - 2023 - Symposium on Foundations of Responsible Computing 4.
    Dwork, Hardt, Pitassi, Reingold, & Zemel [6] introduced two notions of fairness, each of which is meant to formalize the notion of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers which are fair when considered in isolation [3, 4, 7, 8, 12] and in work studying the relationship between fair treatment of individuals (...)
  21. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do (...)
    12 citations
  22. Algorithms and the Individual in Criminal Law.Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
    6 citations
  23. Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
    3 citations
  24. Three Lessons For and From Algorithmic Discrimination.Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, (...)
  25. The philosophical basis of algorithmic recourse.Suresh Venkatasubramanian & Mark Alfano - forthcoming - Fairness, Accountability, and Transparency Conference 2020.
    Philosophers have established that certain ethically important values are modally robust in the sense that they systematically deliver correlative benefits across a range of counterfactual scenarios. In this paper, we contend that recourse – the systematic process of reversing unfavorable decisions by algorithms and bureaucracies across a range of counterfactual scenarios – is such a modally robust good. In particular, we argue that two essential components of a good life – temporally extended agency and trust – are under- (...)
    6 citations
  26. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps (...)
    3 citations
  27. Adversarial Sampling for Fairness Testing in Deep Neural Network.Tosin Ige, William Marfo, Justin Tonkinson, Sikiru Adewale & Bolanle Hafiz Matti - 2023 - International Journal of Advanced Computer Science and Applications 14 (2).
    In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. Several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attacks, including adversarial training algorithms; however, there is still the pitfall that adversarial training tends to cause disparities in accuracy and robustness among different groups. Our research is aimed (...)
    4 citations
  28. (1 other version)A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform.Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine learning (...)
  29. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics concerns the worry about, and the necessity of creating, fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions of (...)
  30. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  31. Decision Time: Normative Dimensions of Algorithmic Speed.Daniel Susser - forthcoming - ACM Conference on Fairness, Accountability, and Transparency (FAccT '22).
    Existing discussions about automated decision-making focus primarily on its inputs and outputs, raising questions about data collection and privacy on one hand and accuracy and fairness on the other. Less attention has been devoted to critically examining the temporality of decision-making processes—the speed at which automated decisions are reached. In this paper, I identify four dimensions of algorithmic speed that merit closer analysis. Duration (how much time it takes to reach a judgment), timing (when automated systems intervene in (...)
  32. An Impossibility Theorem for Base Rate Tracking and Equalized Odds.Rush Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby (...)
    4 citations
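    A reconstruction of the criteria named in the entry above (and in entry 4), for a group attribute A, outcome Y, and risk score S; this is my paraphrase of Base Rate Tracking and its ratio variant, not the authors' exact statement:

        % Base Rate Tracking (difference form): gaps in mean scores track gaps in base rates.
        \mathbb{E}[S \mid A = a] - \mathbb{E}[S \mid A = b]
            \;=\; P(Y = 1 \mid A = a) - P(Y = 1 \mid A = b)
        % Ratio variant:
        \frac{\mathbb{E}[S \mid A = a]}{\mathbb{E}[S \mid A = b]}
            \;=\; \frac{P(Y = 1 \mid A = a)}{P(Y = 1 \mid A = b)}

    Calibration within groups entails the difference form (a calibrated score's group mean equals the group's base rate), which is one way to see why Base Rate Tracking is the strictly weaker criterion the entry describes.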
  33. Proceed with Caution.Annette Zimmermann & Chad Lee-Stronach - 2021 - Canadian Journal of Philosophy (1):6-25.
    It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires (...)
    10 citations
  34. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  35. When Gig Workers Become Essential: Leveraging Customer Moral Self-Awareness Beyond COVID-19.Julian Friedland - 2022 - Business Horizons 66 (2):181-190.
    The COVID-19 pandemic has intensified the extent to which economies in the developed and developing world rely on gig workers to perform essential tasks such as health care, personal transport, food and package delivery, and ad hoc tasking services. As a result, workers who provide such services are no longer perceived as mere low-skilled laborers, but as essential workers who fulfill a crucial role in society. The newly elevated moral and economic status of these workers increases consumer demand for corporate (...)
    1 citation
  36. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  37. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - ACM Conference on Fairness, Accountability, and Transparency (FAccT '21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and (...)
    3 citations
  38. Egalitarian Machine Learning.Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative (...)
    3 citations
  39. The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables.Alessandra Angelucci, Ziyue Li, Niya Stoimenova & Stefano Canali - forthcoming - AI and Society:1-11.
    Artificial intelligence systems have been widely applied to various contexts, including high-stake decision processes in healthcare, banking, and judicial systems. Some developed AI models fail to offer a fair output for specific minority groups, sparking comprehensive discussions about AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation one stakeholder has within the AI system’s life cycle, the more influence they have over the way the system will function. This means (...)
    1 citation
  40. On the Possibility of Testimonial Justice.Rush T. Stewart & Michael Nielsen - 2020 - Australasian Journal of Philosophy 98 (4):732-746.
    Recent impossibility theorems for fair risk assessment extend to the domain of epistemic justice. We translate the relevant model, demonstrating that the problems of fair risk assessment and just credibility assessment are structurally the same. We motivate the fairness criteria involved in the theorems as also being appropriate in the setting of testimonial justice. Any account of testimonial justice that implies the fairness/justice criteria must be abandoned, on pain of triviality.
    4 citations
  41. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    3 citations
  42. Effective Procedures.Nathan Salmon - 2023 - Philosophies 8 (2):27.
    This is a non-technical version of "The Decision Problem for Effective Procedures." The "somewhat vague, intuitive" notion from computability theory of an effective procedure (method) or algorithm can be fairly precisely defined, even if it does not have a purely mathematical definition—and even if (as many have asserted) for that reason the Church–Turing thesis (that the effectively calculable functions on natural numbers are exactly the general recursive functions) cannot be proved. However, it is logically provable from the notion of an (...)
  43. Conflicting Aims and Values in the Application of Smart Sensors in Geriatric Rehabilitation: Ethical Analysis.Christopher Predel, Cristian Timmermann, Frank Ursin, Marcin Orzechowski, Timo Ropinski & Florian Steger - 2022 - JMIR mHealth and uHealth 10 (6):e32910.
    Background: Smart sensors have been developed as diagnostic tools for rehabilitation to cover an increasing number of geriatric patients. They promise to enable an objective assessment of complex movement patterns. Objective: This research aimed to identify and analyze the conflicting ethical values associated with smart sensors in geriatric rehabilitation and provide ethical guidance on the best use of smart sensors to all stakeholders, including technology developers, health professionals, patients, and health authorities. Methods: On the basis of a systematic (...)
    1 citation
  44. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences.Lode Lauwaert - 2023 - Artificial Intelligence Review 56:3473–3504.
    Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools (...)
  45. The Decision Problem for Effective Procedures.Nathan Salmón - 2023 - Logica Universalis 17 (2):161-174.
    The “somewhat vague, intuitive” notion from computability theory of an effective procedure (method) or algorithm can be fairly precisely defined even if it is not sufficiently formal and precise to belong to mathematics proper (in a narrow sense)—and even if (as many have asserted) for that reason the Church–Turing thesis is unprovable. It is proved logically that the class of effective procedures is not decidable, i.e., that no effective procedure is possible for ascertaining whether a given procedure is effective. This (...)
    1 citation
  46. Mutual affordances: the dynamics between social media and populism.Jeroen Hopster - 2021 - Media, Culture and Society 43 (3):551-560.
    In a recent contribution to this journal Paolo Gerbaudo has argued that an ‘elective affinity’ exists between social media and populism. The present article expands on Gerbaudo’s argument and examines various dimensions of this affinity in further detail. It argues that it is helpful to conceptually reframe the proposed affinity in terms of affordances. Four affordances are identified which make the social media ecology relatively favourable to both right-wing and left-wing populism, compared to the pre-social media ecology. These affordances (...)
    4 citations
  47. Spanning in and Spacing out? A Reply to Eva.Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-4.
    We reply to Eva's comment on our "New Possibilities for Fair Algorithms," comparing and contrasting our Spanning criterion with his suggested Spacing criterion.
  48. Artificial intelligence and human autonomy: the case of driving automation.Fabio Fossa - 2024 - AI and Society:1-12.
    The present paper aims at contributing to the ethical debate on the impacts of artificial intelligence (AI) systems on human autonomy. More specifically, it intends to offer a clearer understanding of the design challenges to the effort of aligning driving automation technologies to this ethical value. After introducing the discussion on the ambiguous impacts that AI systems exert on human autonomy, the analysis zooms in on how the problem has been discussed in the literature on connected and automated vehicles (CAVs). (...)
  49. Russell and the Newman Problem Revisited.Marc Champagne - 2012 - Analysis and Metaphysics 11:65 - 74.
    In his 1927 Analysis of Matter and elsewhere, Russell argued that we can successfully infer the structure of the external world from that of our explanatory schemes. While nothing guarantees that the intrinsic qualities of experiences are shared by their objects, he held that the relations tying together those relata perforce mirror relations that actually obtain (these being expressible in the formal idiom of the Principia Mathematica). This claim was subsequently criticized by the Cambridge mathematician Max Newman as true but (...)
    2 citations
  50. Simulating Grice: Emergent Pragmatics in Spatialized Game Theory.Patrick Grim - 2011 - In Anton Benz, Christian Ebert & Robert van Rooij (eds.), Language, Games, and Evolution. Springer-Verlag.
    How do conventions of communication emerge? How do sounds or gestures take on a semantic meaning, and how do pragmatic conventions emerge regarding the passing of adequate, reliable, and relevant information? My colleagues and I have attempted in earlier work to extend spatialized game theory to questions of semantics. Agent-based simulations indicate that simple signaling systems emerge fairly naturally on the basis of individual information maximization in environments of wandering food sources and predators. Simple signaling emerges by means of any (...)
    3 citations
Showing 1–50 of 961.