References
  • Challenges of responsible AI in practice: scoping review and recommended actions. Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo & Stephen Cave - forthcoming - AI and Society:1-17.
    Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, (...)
  • The landscape of data and AI documentation approaches in the European policy context. Josep Soler-Garrido, Blagoj Delipetrev, Isabelle Hupont & Marina Micheli - 2023 - Ethics and Information Technology 25 (4):1-21.
    Nowadays, Artificial Intelligence (AI) is present in all sectors of the economy. Consequently, both data (the raw material used to build AI systems) and AI have an unprecedented impact on society, and there is a need to ensure that they work for its benefit. For this reason, the European Union has put data and trustworthy AI at the center of recent legislative initiatives. An important element in these regulations is transparency, understood as the provision of information to relevant stakeholders to support (...)
  • The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  • Missed opportunities for AI governance: lessons from ELS programs in genomics, nanotechnology, and RRI. Maximilian Braun & Ruth Müller - forthcoming - AI and Society:1-14.
    Since the beginning of the current hype around Artificial Intelligence (AI), governments, research institutions, and industry have invited ethical, legal, and social sciences (ELS) scholars to research AI’s societal challenges from various disciplinary viewpoints and perspectives. This approach builds upon the tradition of supporting research on the societal aspects of emerging sciences and technologies, which started with the Ethical, Legal, and Social Implications (ELSI) Program in the Human Genome Project (HGP) in the early 1990s. However, although a diverse ELS research (...)
  • Doing versus saying: responsible AI among large firms. Jacques Bughin - forthcoming - AI and Society:1-13.
    Responsible Artificial Intelligence (RAI) is a subset of the ethics associated with the use of artificial intelligence, whose importance will only increase with the recent advent of new regulatory frameworks. However, although many firms have announced the establishment of AI governance rules, there is currently an important gap in understanding whether and why these announcements are being implemented or remain “decoupled” from operations. We assess how large global firms have so far implemented RAI, and the antecedents to RAI implementation across a (...)