  1. Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases? Tomáš Zemčík - 2021 - AI and Society 36 (1): 361-367.
    This study deals with the failure of one of the most advanced chatbots, Tay, created by Microsoft. Many users, commentators and experts strongly anthropomorphised this chatbot in their assessment of the case around Tay. This view is so widespread that it can be identified as a typical cognitive distortion or bias. The study presents a summary of the facts of the Tay case and corroborating perspectives from eminent experts: Tay did not mean anything by its morally objectionable statements because, in (...)
    2 citations
  2. Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI. Rajitha Ramanayake, Philipp Wicke & Vivek Nallur - 2023 - AI and Society 38 (2): 801-813.
    We are moving towards a future where Artificial Intelligence (AI)-based agents make many decisions on behalf of humans. From healthcare decision-making to social-media censoring, these agents face problems and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, such as the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans (...)