  • Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals. Paul Burgess - 2022 - AI and Society 37 (1):97-112.
    The potential use, relevance, and application of AI and other technologies in the democratic process may be obvious to some. However, technological innovation and, even, its consideration may face an intuitive push-back in the form of algorithm aversion (Dietvorst et al. J Exp Psychol 144(1):114–126, 2015). In this paper, I confront this intuition and suggest that a more ‘extreme’ form of technological change in the democratic process does not necessarily result in a worse outcome in terms of the fundamental concepts (...)
  • Algorithmen der Alterität - Alterität der Algorithmen: Überlegungen zu einem komplexen Verhältnis [Algorithms of Alterity - Alterity of Algorithms: Reflections on a Complex Relationship]. Sebastian Berg, Ann-Kathrin Koster, Felix Maschewski, Tobias Matzner & Anna-Verena Nosthoff - 2022 - Behemoth. A Journal on Civilisation 15 (2):1-16.
  • Artificial intelligence and democratic legitimacy. The problem of publicity in public authority. Ludvig Beckman, Jonas Hultin Rosenberg & Karim Jebari - forthcoming - AI and Society.
    Machine learning algorithms are increasingly used to support decision-making in the exercise of public authority. Here, we argue that an important consideration has been overlooked in previous discussions: whether the use of ML undermines the democratic legitimacy of public institutions. From the perspective of democratic legitimacy, it is not enough that ML contributes to efficiency and accuracy in the exercise of public authority, which has so far been the focus in the scholarly literature engaging with these developments. According to one (...)
  • Philosophical Investigations into AI Alignment: A Wittgensteinian Framework. José Antonio Pérez-Escobar & Deniz Sarikaya - 2024 - Philosophy and Technology 37 (3):1-25.
    We argue that the later Wittgenstein’s philosophy of language and mathematics, substantially focused on rule-following, is relevant to understand and improve on the Artificial Intelligence (AI) alignment problem: his discussions on the categories that influence alignment between humans can inform about the categories that should be controlled to improve on the alignment problem when creating large data sets to be used by supervised and unsupervised learning algorithms, as well as when introducing hard coded guardrails for AI models. We cast these (...)
  • Track Thyself? The Value and Ethics of Self-knowledge Through Technology. Muriel Leuenberger - 2024 - Philosophy and Technology 37 (1):1-22.
    Novel technological devices, applications, and algorithms can provide us with a vast amount of personal information about ourselves. Given that we have ethical and practical reasons to pursue self-knowledge, should we use technology to increase our self-knowledge? And which ethical issues arise from the pursuit of technologically sourced self-knowledge? In this paper, I explore these questions in relation to bioinformation technologies (health and activity trackers, DTC genetic testing, and DTC neurotechnologies) and algorithmic profiling used for recommender systems, targeted advertising, and (...)
  • Algorithmic reparation. Michael W. Yang, Apryl Williams & Jenny L. Davis - 2021 - Big Data and Society 8 (2).
    Machine learning algorithms pervade contemporary society. They are integral to social institutions, inform processes of governance, and animate the mundane technologies of daily life. Consistently, the outcomes of machine learning reflect, reproduce, and amplify structural inequalities. The field of fair machine learning has emerged in response, developing mathematical techniques that increase fairness based on anti-classification, classification parity, and calibration standards. In practice, these computational correctives invariably fall short, operating from an algorithmic idealism that does not, and cannot, address systemic, Intersectional (...)
  • Social impacts of algorithmic decision-making: A research agenda for the social sciences. Frauke Kreuter, Christoph Kern, Ruben L. Bach & Frederic Gerdon - 2022 - Big Data and Society 9 (1).
    Academic and public debates are increasingly concerned with the question whether and how algorithmic decision-making (ADM) may reinforce social inequality. Most previous research on this topic originates from computer science. The social sciences, however, have huge potential to contribute to research on social consequences of ADM. Based on a process model of ADM systems, we demonstrate how social sciences may advance the literature on the impacts of ADM on social inequality by uncovering and mitigating biases in training data, by understanding data (...)
  • The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda. Michael Latzer, Noemi Festic & Moritz Büchi - 2022 - Big Data and Society 9 (1).
    People's sense of being subject to digital dataveillance can cause them to restrict their digital communication behavior. Such a chilling effect is essentially a form of self-censorship in everyday digital media use with the attendant risks of undermining individual autonomy and well-being. This article combines the existing theoretical and limited empirical work on surveillance and chilling effects across fields with an analysis of novel data toward a research agenda. The institutional practice of dataveillance—the automated, continuous, and unspecific collection, retention, and (...)
  • Designing for human rights in AI. Jeroen van den Hoven & Evgeni Aizenberg - 2020 - Big Data and Society 7 (2).
    In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to (...)
  • Security, digital border technologies, and immigration admissions: Challenges of and to non-discrimination, liberty and equality. Natasha Saunders - forthcoming - European Journal of Political Theory.
    Normative debates on migration control, while characterised by profound disagreement, do appear to agree that the state has at least a prima facie right to prevent the entry of security threats. While concern is sometimes raised that this ‘security exception’ can be abused, there has been little focus by normative theorists on concrete practices of security, and how we can determine what a ‘principled’ use of the security exception would be. I argue that even if states have a right to (...)
  • Toward children-centric AI: a case for a growth model in children-AI interactions. Karolina La Fors - forthcoming - AI and Society:1-13.
    This article advocates for a hermeneutic model of children-AI interactions in which the desirable purpose of children's interaction with artificial intelligence systems is children's growth. The article regards AI systems with machine-learning components as having a recursive element when interacting with children: they can learn from an encounter with children and incorporate data from the interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing the interpretation of bias within (...)