  • Toward children-centric AI: a case for a growth model in children-AI interactions. Karolina La Fors - 2024 - AI and Society 39 (3):1303-1315.
    This article advocates for a hermeneutic model for children-AI (age group 7–11 years) interactions in which the desirable purpose of children’s interaction with artificial intelligence (AI) systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children. They can learn from an encounter with children and incorporate data from interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing (...)
  • Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models. Indrė Žliobaitė & Bart Custers - 2016 - Artificial Intelligence and Law 24 (2):183-201.
    Increasing numbers of decisions about everyday life are made using algorithms. By algorithms we mean predictive models (decision rules) captured from historical data using data mining. Such models often decide prices we pay, select ads we see and news we read online, match job descriptions and candidate CVs, decide who gets a loan, who goes through an extra airport security check, or who gets released on parole. Yet growing evidence suggests that decision making by algorithms may discriminate against people, even if (...)
  • Self-fulfilling Prophecy in Practical and Automated Prediction. Owen C. King & Mayli Mertens - 2023 - Ethical Theory and Moral Practice 26 (1):127-152.
    A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing (...)
  • Policing based on automatic facial recognition. Zhilong Guo & Lewis Kennedy - 2023 - Artificial Intelligence and Law 31 (2):397-443.
    Advances in technology have transformed and expanded the ways in which policing is run. One new manifestation is the mass acquisition and processing of private facial images via automatic facial recognition by the police: what we conceptualise as AFR-based policing. However, there is still a lack of clarity on the manner and extent to which this largely-unregulated technology is used by law enforcement agencies and on its impact on fundamental rights. Social understanding and involvement are still insufficient in the context (...)
  • Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Hao Wang - 2022 - Philosophy and Technology 35 (3):1-25.
    Automated algorithms are silently making crucial decisions about our lives, but most of the time we have little understanding of how they work. To counter this hidden influence, there have been increasing calls for algorithmic transparency. Much ink has been spilled over the informational account of algorithmic transparency—about how much information should be revealed about the inner workings of an algorithm. But few studies question the power structure beneath the informational disclosure of the algorithm. As a result, the information disclosure (...)
  • Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals. Paul Burgess - 2022 - AI and Society 37 (1):97-112.
    The potential use, relevance, and application of AI and other technologies in the democratic process may be obvious to some. However, technological innovation and, even, its consideration may face an intuitive push-back in the form of algorithm aversion (Dietvorst et al. J Exp Psychol 144(1):114–126, 2015). In this paper, I confront this intuition and suggest that a more ‘extreme’ form of technological change in the democratic process does not necessarily result in a worse outcome in terms of the fundamental concepts (...)
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • The moral limits of the market: the case of consumer scoring data. Clinton Castro & Adam Pham - 2019 - Ethics and Information Technology 21 (2):117-126.
    We offer an ethical assessment of the market for data used to generate what are sometimes called “consumer scores” (i.e., numerical expressions that are used to describe or predict people’s dispositions and behavior), and we argue that the assessment has ethical implications on how the market for consumer scoring data should be regulated. To conduct the assessment, we employ two heuristics for evaluating markets. One is the “harm” criterion, which relates to whether the market produces serious harms, either for participants (...)
  • Societal and ethical issues of digitization. Lambèr Royakkers, Jelte Timmer, Linda Kool & Rinie van Est - 2018 - Ethics and Information Technology 20 (2):127-142.
    In this paper we discuss the social and ethical issues that arise as a result of digitization based on six dominant technologies: Internet of Things, robotics, biometrics, persuasive technology, virtual & augmented reality, and digital platforms. We highlight the many developments in the digitizing society that appear to be at odds with six recurring themes revealed by our analysis of the scientific literature on the dominant technologies: privacy, autonomy, security, human dignity, justice, and balance of power. This study shows that (...)
  • Making the black box society transparent. Daniel Innerarity - 2021 - AI and Society 36 (3):975-981.
    The growing presence of smart devices in our lives turns all of society into something largely unknown to us. The strategy of demanding transparency stems from the desire to reduce the ignorance to which this automated society seems to condemn us. An evaluation of this strategy first requires that we distinguish the different types of non-transparency. Once we reveal the limits of the transparency needed to confront these devices, the article examines the alternative strategy of explainable artificial intelligence and concludes (...)
  • A Genealogical Approach to Algorithmic Bias. Marta Ziosi, David Watson & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-17.
    The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions (...)