  • Are publicly available (personal) data “up for grabs”? Three privacy arguments. Elisa Orrù - 2024 - In Paul De Hert, Hideyuki Matsumi, Dara Hallinan, Diana Dimitrova & Eleni Kosta (eds.), Data Protection and Privacy, Volume 16: Ideas That Drive Our Digital World. London: Hart. pp. 105-123.
    The re-use of publicly available (personal) data for originally unanticipated purposes has become common practice. Without such secondary uses, the development of many AI systems like large language models (LLMs) and ChatGPT would not even have been possible. This chapter addresses the ethical implications of such secondary processing, with a particular focus on data protection and privacy issues. Legal and ethical evaluations of secondary processing of publicly available personal data diverge considerably both among scholars and the general public. While some (...)
  • Should People Have a Right Not to Be Subjected to AI Profiling Based on Publicly Available Data? A Comment on Ploug. Sune Holm - 2023 - Philosophy and Technology 36 (2):1-5.
    Several studies have documented that, when presented with data from social media platforms, machine learning (ML) models can make accurate predictions about users, e.g., about whether they are likely to suffer from health-related conditions such as depression and other mental disorders, or to be at risk of suicide. In a recent article, Ploug (Philos Technol 36:14, 2023) defends a right not to be subjected to AI profiling based on publicly available data. In this comment, I raise some questions in relation to Ploug’s argument that I think (...)
  • People Should Have a Right Not to Be Subjected to AI Profiling Based on Publicly Available Data! A Reply to Holm. Thomas Ploug - 2023 - Philosophy and Technology 36 (3):1-6.
    Studies suggest that machine learning models may accurately predict depression and other mental health-related conditions based on social media data. I have recently argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit informed consent. In a comment, Holm claims that there are scenarios in which individuals have a reason to prefer attempts at social control exercised on the basis of accurate AI predictions and that the suggested (...)