  • Choosing how to discriminate: navigating ethical trade-offs in fair algorithmic design for the insurance sector. Michele Loi & Markus Christen - 2021 - Philosophy and Technology 34 (4):967-992.
    Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in (...)
  • Compulsion beyond fairness: towards a critical theory of technological abstraction in neural networks. Leonie Hunter - forthcoming - AI and Society:1-10.
    In the field of applied computer research, the problem of the reinforcement of existing inequalities through the processing of “big data” in neural networks is typically addressed via concepts of representation and fairness. These approaches, however, tend to overlook the limits of the liberal antidiscrimination discourse, which are well established in critical theory. In this paper, I address these limits and propose a different framework for understanding technologically amplified oppression departing from the notion of “mute compulsion” (Marx), a specifically modern (...)
  • Sensitive loss: Improving accuracy and fairness of face representations with discrimination-aware deep learning. Ignacio Serna, Aythami Morales, Julian Fierrez & Nick Obradovich - 2022 - Artificial Intelligence 305 (C):103682.
  • On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model. Teresa Scantamburlo, Joachim Baumann & Christoph Heitz - forthcoming - AI and Society:1-17.
    An implicit ambiguity in the field of prediction-based decision-making concerns the relation between the concepts of prediction and decision. Much of the literature in the field tends to blur the boundaries between the two concepts and often simply refers to ‘fair prediction’. In this paper, we point out that a differentiation of these concepts is helpful when trying to implement algorithmic fairness. Even if fairness properties are related to the features of the used prediction model, what is more properly called (...)
  • Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we (...)
  • The right to refuse diagnostics and treatment planning by artificial intelligence. Thomas Ploug & Søren Holm - 2020 - Medicine, Health Care and Philosophy 23 (1):107-114.
    In an analysis of artificially intelligent systems for medical diagnostics and treatment planning we argue that patients should be able to exercise a right to withdraw from AI diagnostics and treatment planning for reasons related to (1) the physician’s role in the patients’ formation of and acting on personal preferences and values, (2) the bias and opacity problem of AI systems, and (3) rational concerns about the future societal effects of introducing AI systems in the health care sector.
  • Better decision support through exploratory discrimination-aware data mining: foundations and empirical evidence. Bettina Berendt & Sören Preibusch - 2014 - Artificial Intelligence and Law 22 (2):175-209.
    Decision makers in banking, insurance or employment mitigate many of their risks by telling “good” individuals and “bad” individuals apart. Laws codify societal understandings of which factors are legitimate grounds for differential treatment, or are considered unfair discrimination, including gender, ethnicity or age. Discrimination-aware data mining implements the hope that information technology supporting the decision process can also keep it free from unjust grounds. However, constraining data mining to exclude a fixed enumeration of potentially discriminatory features is insufficient. We argue (...)
  • From privacy to anti-discrimination in times of machine learning. Thilo Hagendorff - 2019 - Ethics and Information Technology 21 (4):331-343.
    Due to the technology of machine learning, new breakthroughs are currently being achieved with constant regularity. By using machine learning techniques, computer applications can be developed and used to solve tasks that have hitherto been assumed not to be solvable by computers. If these achievements consider applications that collect and process personal data, this is typically perceived as a threat to information privacy. This paper aims to discuss applications from both fields of personality and image analysis. These applications are often (...)
  • Combating discrimination using Bayesian networks. Koray Mancuhan & Chris Clifton - 2014 - Artificial Intelligence and Law 22 (2):211-238.
    Discrimination in decision making is prohibited on many attributes, but often present in historical decisions. Use of such discriminatory historical decision making as training data can perpetuate discrimination, even if the protected attributes are not directly present in the data. This work focuses on discovering discrimination in instances and preventing discrimination in classification. First, we propose a discrimination discovery method based on modeling the probability distribution of a class using Bayesian networks. This measures the effect of a protected attribute in (...)