  • AI Within Online Discussions: Rational, Civil, Privileged? Jonas Aaron Carstens & Dennis Friess - 2024 - Minds and Machines 34 (2):1-25.
    While early optimists have seen online discussions as potential spaces for deliberation, the reality of many online spaces is characterized by incivility and irrationality. Increasingly, AI tools are considered as a solution to foster deliberative discourse. Against the backdrop of previous research, we show that AI tools for online discussions heavily focus on the deliberative norms of rationality and civility. In the operationalization of those norms for AI tools, the complex deliberative dimensions are simplified, and the focus lies on the (...)
  • From FAIR data to fair data use: Methodological data fairness in health-related social media research. Hywel Williams, Lora Fleming, Benedict W. Wheeler, Rebecca Lovell & Sabina Leonelli - 2021 - Big Data and Society 8 (1).
    The paper problematises the reliability and ethics of using social media data, such as data sourced from Twitter or Instagram, to carry out health-related research. As in many other domains, the opportunity to mine social media for information has been hailed as transformative for research on well-being and disease. Considerations around the fairness, responsibilities and accountabilities relating to using such data have often been set aside, on the understanding that as long as data were anonymised, no real ethical or scientific issue (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media and the Commercialisation of Presentations of Self. Charlie Harry Smith - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Goffman’s (1959) dramaturgical identity theory requires modification when theorising about presentations of self on social media. This chapter contributes to these efforts, refining a conception of digital identities by differentiating them from ‘corporatised identities’. Armed with this new distinction, I ultimately argue that social media platforms’ production of corporatised identities undermines their users’ autonomy and digital well-being. This follows from the disentanglement of several commonly conflated concepts. Firstly, I distinguish two kinds of presentation of self that I collectively refer to (...)
  • Bringing older people’s perspectives on consumer socially assistive robots into debates about the future of privacy protection and AI governance. Andrea Slane & Isabel Pedersen - forthcoming - AI and Society:1-20.
    A growing number of consumer technology companies are aiming to convince older people that humanoid robots make helpful tools to support aging-in-place. As hybrid devices, socially assistive robots (SARs) are situated between health monitoring tools, familiar digital assistants, security aids, and more advanced AI-powered devices. Consequently, they implicate older people’s privacy in complex ways. Such devices are marketed to perform functions common to smart speakers (e.g., Amazon Echo) and smart home platforms (e.g., Google Home), while other functions are more specific (...)
  • Data identity: privacy and the construction of self. Jens-Erik Mai & Sille Obelitz Søe - 2022 - Synthese 200 (6):1-22.
    This paper argues in favor of a hybrid conception of identity. A common conception of identity in datafied society is a split between a digital self and a real self, which has resulted in concepts such as the data double, algorithmic identity, and data shadows. These data-identity metaphors have played a significant role in the conception of informational privacy as control over information—the control of or restricted access to your digital identity. Through analyses of various data-identity metaphors as well as (...)
  • Hard choices in artificial intelligence. Roel Dobbe, Thomas Krendl Gilbert & Yonatan Mintz - 2021 - Artificial Intelligence 300 (C):103555.
  • How to protect privacy in a datafied society? A presentation of multiple legal and conceptual approaches. Oskar J. Gstrein & Anne Beaulieu - 2022 - Philosophy and Technology 35 (1):1-38.
    The United Nations confirmed that privacy remains a human right in the digital age, but our daily digital experiences and seemingly ever-increasing amounts of data suggest that privacy is a mundane, distributed and technologically mediated concept. This article explores privacy by mapping out different legal and conceptual approaches to privacy protection in the context of datafication. It provides an essential starting point to explore the entwinement of technological, ethical and regulatory dynamics. It clarifies why each of the presented approaches emphasises (...)
  • Beyond explainability: justifiability and contestability of algorithmic decision systems. Clément Henin & Daniel Le Métayer - 2022 - AI and Society 37 (4):1397-1410.
    In this paper, we point out that explainability is useful but not sufficient to ensure the legitimacy of algorithmic decision systems. We argue that the key requirements for high-stakes decision systems should be justifiability and contestability. We highlight the conceptual differences between explanations and justifications, provide dual definitions of justifications and contestations, and suggest different ways to operationalize justifiability and contestability.
  • Contesting algorithms: Restoring the public interest in content filtering by artificial intelligence. Niva Elkin-Koren - 2020 - Big Data and Society 7 (2).
    In recent years, artificial intelligence has been deployed by online platforms to prevent the upload of allegedly illegal content or to remove unwarranted expressions. These systems are trained to spot objectionable content and to remove it, block it, or filter it out before it is even uploaded. Artificial intelligence filters offer a robust approach to content moderation which is shaping the public sphere. This dramatic shift in norm setting and law enforcement is potentially game-changing for democracy. Artificial intelligence filters carry (...)