  • User-centered AI-based voice-assistants for safe mobility of older people in urban context. Bokolo Anthony Jnr - forthcoming - AI and Society:1-24.
    Voice-assistants are becoming increasingly popular and can be deployed as a low-cost tool to support older people aged 65 and older and potentially reduce the falls, injuries, and accidents they face. Yet, despite the mobility and walkability challenges faced by the aging population, studies that employ Artificial Intelligence (AI)-based voice-assistants to reduce the risks older people face when using public transportation and walking in the built environment are scarce. This is because the development of AI-based (...)
  • Introduction: special issue—critical robotics research. Sofia Serholt, Sara Ljungblad & Niamh Ní Bhroin - 2022 - AI and Society 37 (2):417-423.
  • AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism (...)
  • “The Human Must Remain the Central Focus”: Subjective Fairness Perceptions in Automated Decision-Making. Daria Szafran & Ruben L. Bach - 2024 - Minds and Machines 34 (3):1-37.
    The increasing use of algorithms to allocate resources and services in both private industry and public administration has sparked discussion about their consequences for inequality and fairness in contemporary societies. Previous research has shown that the use of automated decision-making (ADM) tools in high-stakes scenarios such as the legal justice system can lead to adverse societal outcomes, including systematic discrimination. Scholars have since proposed a variety of metrics to counteract and mitigate biases in ADM processes. While these metrics focus on (...)