  • Engineers on responsibility: feminist approaches to who’s responsible for ethical AI. Eleanor Drage, Kerry McInerney & Jude Browne - 2024 - Ethics and Information Technology 26 (1):1-13.
    Responsibility has become a central concept in AI ethics; however, little research has been conducted into practitioners’ personal understandings of responsibility in the context of AI, including how responsibility should be defined and who is responsible when something goes wrong. In this article, we present findings from a 2020–2021 data set of interviews with AI practitioners and tech workers at a single multinational technology company and interpret them through the lens of feminist political thought. We reimagine responsibility in the context (...)
  • A Code of Digital Ethics: laying the foundation for digital ethics in a science and technology company. Sarah J. Becker, André T. Nemat, Simon Lucas, René M. Heinitz, Manfred Klevesath & Jean Enno Charton - 2023 - AI and Society 38 (6):2629-2639.
    The rapid and dynamic nature of digital transformation challenges companies that wish to develop and deploy novel digital technologies. Like other actors faced with this transformation, companies need to find robust ways to ethically guide their innovations and business decisions. Digital ethics has recently featured in a plethora of both practical corporate guidelines and compilations of high-level principles, but there remains a gap concerning the development of sound ethical guidance in specific business contexts. As a multinational science and technology company (...)
  • A phenomenological perspective on AI ethical failures: The case of facial recognition technology. Yuni Wen & Matthias Holweg - forthcoming - AI and Society:1-18.
    As more and more companies adopt artificial intelligence to increase the efficiency and effectiveness of their products and services, they expose themselves to ethical crises and potentially damaging public controversy associated with its use. Despite the prevalence of AI ethical problems, most companies are strategically unprepared to respond effectively to the public. This paper aims to advance our empirical understanding of company responses to AI ethical crises by focusing on the rise and fall of facial recognition technology. Specifically, through a (...)
  • Operationalising AI ethics: barriers, enablers and next steps. Jessica Morley, Libby Kinsey, Anat Elhalal, Francesca Garcia, Marta Ziosi & Luciano Floridi - 2023 - AI and Society 38 (1):411-423.
    By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology (...)
  • The Principle-at-Risk Analysis (PaRA): Operationalising Digital Ethics by Bridging Principles and Operations of a Digital Ethics Advisory Panel. André T. Nemat, Sarah J. Becker, Simon Lucas, Sean Thomas, Isabel Gadea & Jean Enno Charton - 2023 - Minds and Machines 33 (4):737-760.
    Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at (...)