  • The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence.Marion Coville, Antonio A. Casilli & Paola Tubaro - 2020 - Big Data and Society 7 (1).
    This paper sheds light on the role of digital platform labour in the development of today’s artificial intelligence, predicated on data-intensive machine learning algorithms. Focus is on the specific ways in which outsourcing of data tasks to myriad ‘micro-workers’, recruited and managed through specialized platforms, powers virtual assistants, self-driving vehicles and connected objects. Using qualitative data from multiple sources, we show that micro-work performs a variety of functions, between three poles that we label, respectively, ‘artificial intelligence preparation’, ‘artificial intelligence verification’ (...)
  • The Epistemological Danger of Large Language Models.Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10):102-104.
    The potential of ChatGPT looms large for the practice of medicine, as both boon and bane. The use of Large Language Models (LLMs) in platforms such as ChatGPT raises critical ethical questions of w...
  • Algorithms Don’t Have A Future: On the Relation of Judgement and Calculation.Daniel Stader - 2024 - Philosophy and Technology 37 (1):1-29.
    This paper is about the opposition between judgement and calculation. This opposition has been a traditional anchor of critiques concerned with the rise of AI decision making over human judgement. Contrary to these approaches, it is argued that human judgement is not and cannot be replaced by calculation, but that it is human judgement that contextualises computational structures and gives them meaning and purpose. The article focuses on the epistemic structure of algorithms and artificial neural networks to find that they (...)
  • Making data science systems work.Phoebe Sengers & Samir Passi - 2020 - Big Data and Society 7 (2).
    How are data science systems made to work? It may seem that whether a system works is a function of its technical design, but it is also accomplished through ongoing forms of discretionary work by many actors. Based on six months of ethnographic fieldwork with a corporate data science team, we describe how actors involved in a corporate project negotiated what work the system should do, how it should work, and how to assess whether it works. These negotiations laid the (...)
  • Lifting the curtain: Strategic visibility of human labour in AI-as-a-Service.Gemma Newlands - 2021 - Big Data and Society 8 (1).
    Artificial Intelligence-as-a-Service empowers individuals and organisations to access AI on-demand, in either tailored or ‘off-the-shelf’ forms. However, institutional separation between development, training and deployment can lead to critical opacities, such as obscuring the level of human effort necessary to produce and train AI services. Information about how, where, and for whom AI services have been produced constitutes valuable secrets, which vendors strategically disclose to clients depending on commercial interests. This article provides a critical analysis of how AIaaS vendors manipulate the (...)
  • Artificial Intelligence in the Colonial Matrix of Power.James Muldoon & Boxi A. Wu - 2023 - Philosophy and Technology 36 (4):1-24.
    Drawing on the analytic of the “colonial matrix of power” developed by Aníbal Quijano within the Latin American modernity/coloniality research program, this article theorises how a system of coloniality underpins the structuring logic of artificial intelligence (AI) systems. We develop a framework for critiquing the regimes of global labour exploitation and knowledge extraction that are rendered invisible through discourses of the purported universality and objectivity of AI. Through bringing the political economy literature on AI production into conversation with scholarly work (...)
  • For a situational analytics: An interpretative methodology for the study of situations in computational settings.Noortje Marres - 2020 - Big Data and Society 7 (2).
    This article introduces an interpretative approach to the analysis of situations in computational settings called situational analytics. I outline the theoretical and methodological underpinnings of this approach, which is still under development, and show how it can be used to surface situations from large data sets derived from online platforms such as YouTube. Situational analytics extends to computationally-mediated settings a qualitative methodology developed by Adele Clarke, Situational Analysis, which uses data mapping to detect heterogeneous entities in fieldwork data to determine (...)
  • Mass personalization: Predictive marketing algorithms and the reshaping of consumer knowledge.Baptiste Kotras - 2020 - Big Data and Society 7 (2).
    This paper focuses on the conception and use of machine-learning algorithms for marketing. In recent years, specialized service providers as well as in-house data scientists have been increasingly using machine learning to predict consumer behavior for large companies. Predictive marketing thus revives the old dream of one-to-one, perfectly adjusted selling techniques, now at an unprecedented scale. How do predictive marketing devices change the way corporations know and model their customers? Drawing from STS and the sociology of quantification, I propose (...)
  • Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics.Christian Katzenbach & Jascha Bareis - 2022 - Science, Technology, and Human Values 47 (5):855-881.
    How to integrate artificial intelligence technologies in the functioning and structures of our society has become a concern of contemporary politics and public debates. In this paper, we investigate national AI strategies as a peculiar form of co-shaping this development, a hybrid of policy and discourse that offers imaginaries, allocates resources, and sets rules. Conceptually, the paper is informed by sociotechnical imaginaries, the sociology of expectations, myths, and the sublime. Empirically we analyze AI policy documents of four key players in (...)
  • Politicizing Algorithms by Other Means: Toward Inquiries for Affective Dissensions.Florian Jaton & Dominique Vinck - 2023 - Perspectives on Science 31 (1):84-118.
    In this paper, we build upon Bruno Latour’s political writings to address the current impasse regarding algorithms in public life. We assert that the increasing difficulties in governing algorithms—be they qualified as “machine learning,” “big data,” or “artificial intelligence”—can be related to their current ontological thinness: deriving from constricted views on theoretical practices, algorithms’ standard definition as problem-solving computerized methods provides poor grips for affective dissensions. We then emphasize the role that historical and ethnographic studies of algorithms can potentially play (...)
  • Assessing biases, relaxing moralism: On ground-truthing practices in machine learning design and application.Florian Jaton - 2021 - Big Data and Society 8 (1).
    This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents—often gathered in benchmarked repositories called ground-truth datasets—that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices—that fundamentally involve establishing biases to enable learning procedures—can be described by their respective morality, here (...)
  • The Thick Machine: Anthropological AI between explanation and explication.Mathieu Jacomy, Asger Gehrt Olesen & Anders Kristian Munk - 2022 - Big Data and Society 9 (1).
    According to Clifford Geertz, the purpose of anthropology is not to explain culture but to explicate it. That should cause us to rethink our relationship with machine learning. It is, we contend, perfectly possible that machine learning algorithms, which are unable to explain, and could even be unexplainable themselves, can still be of critical use in a process of explication. Thus, we report on an experiment with anthropological AI. From a dataset of 175K Facebook comments, we trained a neural network (...)
  • Making plant pathology algorithmically recognizable.Cornelius Heimstädt - 2023 - Agriculture and Human Values 40 (3):865-878.
    This article examines the construction of image recognition algorithms for the classification of plant pathology problems. Rooted in science and technology studies research on the effects of agricultural big data and agricultural algorithms, the study ethnographically examines how algorithms for the recognition of plant pathology are made. To do this, the article looks at the case of a German agtech startup developing image recognition algorithms for an app that aims to help small-scale farmers diagnose plant damages based on digital images (...)
  • Experimental Design: Ethics, Integrity and the Scientific Method.Jonathan Lewis - 2020 - In Ron Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity. Cham, Switzerland: Springer. pp. 459-474.
    Experimental design is one aspect of a scientific method. A well-designed, properly conducted experiment aims to control variables in order to isolate and manipulate causal effects and thereby maximize internal validity, support causal inferences, and guarantee reliable results. Traditionally employed in the natural sciences, experimental design has become an important part of research in the social and behavioral sciences. Experimental methods are also endorsed as the most reliable guides to policy effectiveness. Through a discussion of some of the central concepts (...)