  • You are not worth the risk: Lawful discrimination in hiring. Vanessa Scholes - 2014 - Rationality, Markets and Morals 5.
    Increasing empirical research on productivity supports the use of statistical or ‘rational’ discrimination in hiring. The practice is legal for features of job applicants not covered by human rights discrimination laws, such as being a smoker, residing in a particular neighbourhood or being a particular height. The practice appears largely morally innocuous under existing philosophical accounts of wrongful discrimination. This paper argues that lawful statistical discrimination treats job applicants in a way that may be considered degrading, and is likely to (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Normative Challenges of Risk Regulation of Artificial Intelligence. Carsten Orwat, Jascha Bareis, Anja Folberth, Jutta Jahnel & Christian Wadephul - 2024 - NanoEthics 18 (2):1-29.
    Approaches aimed at regulating artificial intelligence (AI) include a particular form of risk regulation, i.e. a risk-based approach. The most prominent example is the European Union’s Artificial Intelligence Act (AI Act). This article addresses the challenges for adequate risk regulation that arise primarily from the specific type of risks involved, i.e. risks to the protection of fundamental rights and fundamental societal values. This is mainly due to the normative ambiguity of such rights and societal values when attempts are made to (...)
  • Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination. Tobias Matzner & Monique Mann - 2019 - Big Data and Society 6 (2).
    The potential for biases being built into algorithms has been known for some time, yet literature has only recently demonstrated the ways algorithmic profiling can result in social sorting and harm marginalised groups. We contend that with increased algorithmic complexity, biases will become more sophisticated and difficult to identify, control for, or contest. Our argument has four steps: first, we show how harnessing algorithms means that data gathered at a particular place and time relating to specific persons, can be used (...)
  • Personal Data v. Big Data in the EU: Control Lost, Discrimination Found. Maria Bottis & George Bouchagiar - 2018 - Open Journal of Philosophy 8 (3):192-205.
  • Algorithmic Accountability and Public Reason. Reuben Binns - 2018 - Philosophy and Technology 31 (4):543-556.
    The ever-increasing application of algorithms to decision-making in a range of social contexts has prompted demands for algorithmic accountability. Accountable decision-makers must provide their decision-subjects with justifications for their automated system’s outputs, but what kinds of broader principles should we expect such justifications to appeal to? Drawing from political philosophy, I present an account of algorithmic accountability in terms of the democratic ideal of ‘public reason’. I argue that situating demands for algorithmic accountability within this justificatory framework enables us to (...)