  • Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms. Sábëlo Mhlambi & Simona Tiribelli - 2023 - Topoi 42 (3):867-880.
    Many popular artificial intelligence (AI) ethics frameworks center the principle of autonomy as necessary in order to mitigate the harms that might result from the use of AI within society. These harms often disproportionately affect the most marginalized within society. In this paper, we argue that the principle of autonomy, as currently formalized in AI ethics, is itself flawed, as it expresses only a mainstream, mainly liberal notion of autonomy as rational self-determination, derived from traditional Western philosophy. In particular, we (...)
  • Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
  • Introduction: Digital Technologies and Human Decision-Making. Sofia Bonicalzi, Mario De Caro & Benedetta Giovanola - 2023 - Topoi 42 (3):793-797.
  • Melting contestation: insurance fairness and machine learning. Laurence Barry & Arthur Charpentier - 2023 - Ethics and Information Technology 25 (4):1-13.
    With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that it was organized around three types of bias: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against; these are thus the main sources (...)