  • Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are (...)
  • What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms.Bas de Boer & Olya Kudina - 2021 - Theoretical Medicine and Bioethics 42 (5):245-266.
    In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which (...)
  • The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
  • Toward an Ethics of AI Assistants: an Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
  • Binding the Smart City Human-Digital System with Communicative Processes.Brandt Dainow - 2021 - In Michael Nagenborg, Taylor Stone, Margoth González Woge & Pieter E. Vermaas (eds.), Technology and the City: Towards a Philosophy of Urban Technologies. Springer Verlag. pp. 389-411.
    This chapter will explore the dynamics of power underpinning ethical issues within smart cities via a new paradigm derived from Systems Theory. The smart city is an expression of technology as a socio-technical system. The vision of the smart city contains a deep fusion of many different technical systems into a single integrated “ambient intelligence” (ETICA Project, 2010, p. 102). Citizens of the smart city will not experience a succession of different technologies, but a single intelligent and responsive environment through (...)
  • SAF: Stakeholders’ Agreement on Fairness in the Practice of Machine Learning Development.Georgina Curto & Flavio Comim - 2023 - Science and Engineering Ethics 29 (4):1-19.
    This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in the fairness decision making within ML design and support ML development teams to identify, mitigate and monitor bias at each step of ML systems development. The process (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Speeding up to keep up: exploring the use of AI in the research process.Jennifer Chubb, Peter Cowling & Darren Reed - 2022 - AI and Society 37 (4):1439-1457.
    The science of intelligent machines has a long history, and its potential to provide scientific insights has been debated since the dawn of AI. In particular, there is renewed interest in the role of AI in research and research policy as an enabler of new methods, processes, management and evaluation, a role which is still relatively under-explored. This empirical paper explores interviews with leading scholars on the potential impact of AI on research practice and culture through deductive, thematic analysis to (...)
  • Enculturating Algorithms.Rafael Capurro - 2019 - NanoEthics 13 (2):131-137.
    The paper deals with the difference between who and what we are in order to take an ethical perspective on algorithms and their regulation. The present casting of ourselves as homo digitalis implies the possibility of projecting who we are as social beings sharing a world, into the digital medium, thereby engendering what can be called digital whoness, or a digital reification of ourselves. A main ethical challenge for the evolving digital age consists in unveiling this ethical difference, particularly when (...)
  • Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants.Marianna Capasso & Steven Umbrello - 2022 - Medicine, Health Care and Philosophy 25 (1):11-22.
    Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existing, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing (...)
  • Occluded algorithms.Adam Burke - 2019 - Big Data and Society 6 (2).
    Two definitions of algorithm, their uses, and their implied models of computing in society, are reviewed. The first, termed the structural programming definition, aligns more with usage in computer science, and as the name suggests, the intellectual project of structured programming. The second, termed the systemic definition, is more informal and emerges from ethnographic observations of discussions of software in both professional and everyday settings. Specific examples of locating algorithms within modern codebases are shared, as well as code directly impacting (...)
  • Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals.Paul Burgess - 2022 - AI and Society 37 (1):97-112.
    The potential use, relevance, and application of AI and other technologies in the democratic process may be obvious to some. However, technological innovation and, even, its consideration may face an intuitive push-back in the form of algorithm aversion (Dietvorst et al. J Exp Psychol 144(1):114–126, 2015). In this paper, I confront this intuition and suggest that a more ‘extreme’ form of technological change in the democratic process does not necessarily result in a worse outcome in terms of the fundamental concepts (...)
  • Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse.Alexander Buhmann, Johannes Paßmann & Christian Fieseler - 2020 - Journal of Business Ethics 163 (2):265-280.
    While organizations today make extensive use of complex algorithms, the notion of algorithmic accountability remains an elusive ideal due to the opacity and fluidity of algorithms. In this article, we develop a framework for managing algorithmic accountability that highlights three interrelated dimensions: reputational concerns, engagement strategies, and discourse principles. The framework clarifies that accountability processes for algorithms are driven by reputational concerns about the epistemic setup, opacity, and outcomes of algorithms; that the way in which organizations practically engage with emergent (...)
  • Digital hyperconnectivity and the self.Rogers Brubaker - 2020 - Theory and Society 49 (5-6):771-801.
    Digital hyperconnectivity is a defining fact of our time. In addition to recasting social interaction, culture, economics, and politics, it has profoundly transformed the self. It has created new ways of being and constructing a self, but also new ways of being constructed as a self from the outside, new ways of being configured, represented, and governed as a self by sociotechnical systems. Rather than analyze theories of the self, I focus on practices of the self, using this expression in (...)
  • Just data? Solidarity and justice in data-driven medicine.Matthias Braun & Patrik Hummel - 2020 - Life Sciences, Society and Policy 16 (1):1-18.
    This paper argues that data-driven medicine gives rise to a particular normative challenge. Against the backdrop of a distinction between the good and the right, harnessing personal health data towards the development and refinement of data-driven medicine is to be welcomed from the perspective of the good. Enacting solidarity drives progress in research and clinical practice. At the same time, such acts of sharing could—especially considering current developments in big data and artificial intelligence—compromise the right by leading to injustices and (...)
  • Introduction: Digital Technologies and Human Decision-Making.Sofia Bonicalzi, Mario De Caro & Benedetta Giovanola - 2023 - Topoi 42 (3):793-797.
  • Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • Listening without ears: Artificial intelligence in audio mastering.Thomas Birtchnell - 2018 - Big Data and Society 5 (2).
    Since the inception of recorded music there has been a need for standards and reliability across sound formats and listening environments. The role of the audio mastering engineer is prestigious and akin to a craft expert combining scientific knowledge, musical learning, manual precision and skill, and an awareness of cultural fashions and creative labour. With the advent of algorithms, big data and machine learning, loosely termed artificial intelligence in this creative sector, there is now the possibility of automating human audio (...)
  • Understanding and Managing Responsible Innovation.Hans Bennink - 2020 - Philosophy of Management 19 (3):317-348.
    As a relational concept, responsible innovation can be made more tangible by asking innovation of what and responsibility of whom for what? Arranging the scattered field of responsible innovation comprehensively, starting from an anthropological point of view, into five fields of tension and five categories of spearheads, may be theoretically and practically helpful while offering suggestions for both research and management.
  • Evil and roboethics in management studies.Enrico Beltramini - 2019 - AI and Society 34 (4):921-929.
    In this article, I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users’ reaction to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.
  • A Code of Digital Ethics: laying the foundation for digital ethics in a science and technology company.Sarah J. Becker, André T. Nemat, Simon Lucas, René M. Heinitz, Manfred Klevesath & Jean Enno Charton - 2023 - AI and Society 38 (6):2629-2639.
    The rapid and dynamic nature of digital transformation challenges companies that wish to develop and deploy novel digital technologies. Like other actors faced with this transformation, companies need to find robust ways to ethically guide their innovations and business decisions. Digital ethics has recently featured in a plethora of both practical corporate guidelines and compilations of high-level principles, but there remains a gap concerning the development of sound ethical guidance in specific business contexts. As a multinational science and technology company (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Training philosopher engineers for better AI.Brian Ball & Alexandros Koliousis - 2023 - AI and Society 38 (2):861-868.
    There is a deluge of AI-assisted decision-making systems, where our data serve as a proxy for our actions as suggested by AI. The closer we investigate our data (raw input, their learned representations, or the suggested actions), the more we begin to discover “bugs”. Outside of their controlled test environments, AI systems may encounter situations investigated primarily by those in other disciplines, but experts in those fields are typically excluded from the design process and are only invited to attest to the ethical features (...)
  • Beyond mystery: Putting algorithmic accountability in context.Andrea Ballestero, Baki Cakici & Elizabeth Reddy - 2019 - Big Data and Society 6 (1).
    Critical algorithm scholarship has demonstrated the difficulties of attributing accountability for the actions and effects of algorithmic systems. In this commentary, we argue that we cannot stop at denouncing the lack of accountability for algorithms and their effects but must engage the broader systems and distributed agencies that algorithmic systems exist within; including standards, regulations, technologies, and social relations. To this end, we explore accountability in “the Generated Detective,” an algorithmically generated comic. Taking up the mantle of detectives ourselves, we (...)
  • Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations.Kristina Astromskė, Eimantas Peičius & Paulius Astromskis - forthcoming - AI and Society.
    This paper inquiries into the complex issue of informed consent applying artificial intelligence in medical diagnostic consultations. The aim is to expose the main ethical and legal concerns of the New Health phenomenon, powered by intelligent machines. To achieve this objective, the first part of the paper analyzes ethical aspects of the alleged right to explanation, privacy, and informed consent, applying artificial intelligence in medical diagnostic consultations. This analysis is followed by a legal analysis of the limits and requirements for (...)
  • Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  • Characteristics and challenges in the industries towards responsible AI: a systematic literature review.Marianna Anagnostou, Olga Karvounidou, Chrysovalantou Katritzidaki, Christina Kechagia, Kyriaki Melidou, Eleni Mpeza, Ioannis Konstantinidis, Eleni Kapantai, Christos Berberidis, Ioannis Magnisalis & Vassilios Peristeras - 2022 - Ethics and Information Technology 24 (3):1-18.
    Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors, has triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. We (...)
  • Politics of data reuse in machine learning systems: Theorizing reuse entanglements.Louise Amoore, Mikkel Flyverbom, Kristian Bondo Hansen & Nanna Bonde Thylstrup - 2022 - Big Data and Society 9 (2).
    Policy discussions and corporate strategies on machine learning are increasingly championing data reuse as a key element in digital transformations. These aspirations are often coupled with a focus on responsibility, ethics and transparency, as well as emergent forms of regulation that seek to set demands for corporate conduct and the protection of civic rights. Protective measures include methods of traceability and assessments of ‘good’ and ‘bad’ datasets and algorithms that are considered to be traceable, stable and contained. However, (...)
  • The Epistemology of Non-distributive Profiles.Patrick Allo - 2020 - Philosophy and Technology 33 (3):379-409.
    The distinction between distributive and non-distributive profiles figures prominently in current evaluations of the ethical and epistemological risks that are associated with automated profiling practices. The diagnosis that non-distributive profiles may coincidentally situate an individual in the wrong category is often perceived as the central shortcoming of such profiles. According to this diagnosis, most risks can be retraced to the use of non-universal generalisations and various other statistical associations. This article develops a top-down analysis of non-distributive profiles in which this (...)
  • A Constructionist Philosophy of Logic.Patrick Allo - 2017 - Minds and Machines 27 (3):545-564.
    This paper develops and refines the suggestion that logical systems are conceptual artefacts that are the outcome of a design-process by exploring how a constructionist epistemology and meta-philosophy can be integrated within the philosophy of logic.
  • Tensions in transparent urban AI: designing a smart electric vehicle charge point.Kars Alfrink, Ianus Keller, Neelke Doorn & Gerd Kortuem - 2023 - AI and Society 38 (3):1049-1065.
    The increasing use of artificial intelligence (AI) by public actors has led to a push for more transparency. Previous research has conceptualized AI transparency as knowledge that empowers citizens and experts to make informed choices about the use and governance of AI. Conversely, in this paper, we critically examine if transparency-as-knowledge is an appropriate concept for a public realm where private interests intersect with democratic concerns. We conduct a practice-based design research study in which we prototype and evaluate a transparent (...)
  • The Influence of Business Incentives and Attitudes on Ethics Discourse in the Information Technology Industry.Sanju Ahuja & Jyoti Kumar - 2021 - Philosophy and Technology 34 (4):941-966.
    As information technologies have become synonymous with progress in modern society, several ethical concerns have surfaced about their societal implications. In the past few decades, information technologies have had a value-laden impact on social evolution. However, there is limited agreement on the responsibility of businesses and innovators concerning the ethical aspects of information technologies. There is a need to understand the role of business incentives and attitudes in driving technological progress and to understand how they steer the ethics discourse on (...)
  • Big Data in the workplace: Privacy Due Diligence as a human rights-based approach to employee privacy protection.Jeremias Adams-Prassl, Isabelle Wildhaber & Isabel Ebert - 2021 - Big Data and Society 8 (1).
    Data-driven technologies have come to pervade almost every aspect of business life, extending to employee monitoring and algorithmic management. How can employee privacy be protected in the age of datafication? This article surveys the potential and shortcomings of a number of legal and technical solutions to show the advantages of human rights-based approaches in addressing corporate responsibility to respect privacy and strengthen human agency. Based on this notion, we develop a process-oriented model of Privacy Due Diligence to complement existing frameworks (...)
  • Democratizing AI from a Sociotechnical Perspective.Merel Noorman & Tsjalling Swierstra - 2023 - Minds and Machines 33 (4):563-586.
    Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distributions, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether (...)
  • Jaz u odgovornosti u informatičkoj eri [The Responsibility Gap in the Information Age].Jelena Mijić - 2023 - Društvo I Politika 4 (4):25-38.
    We ascribe responsibility with the intention of achieving some goal. A commonplace in the philosophical literature is that we can ascribe moral responsibility to a person if at least two conditions are met: that the agent has control over their actions and that they are able to give reasons in support of their action. However, the fourth industrial revolution is characterised by sociotechnical phenomena that potentially confront us with the so-called responsibility gap problem. Debates about responsibility in the context of artificial intelligence are characterised by an unclear and indeterminate use of this concept. In order to (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach.Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems”, led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Against the Double Standard Argument in AI Ethics.Scott Hill - 2024 - Philosophy and Technology 37 (1):1-5.
    In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers and therefore the concern that opaque AI is not sufficiently transparent is mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in a way that humans are transparent. Rather, the concern is that the way in which opaque AI (...)
  • AI and the need for justification (to the patient).Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • Agency Laundering and Algorithmic Decision Systems.Alan Rubel, Adam Pham & Clinton Castro - 2019 - In N. Taylor, C. Christian-Lamb, M. Martin & B. Nardi (eds.), Information in Contemporary Society (Lecture Notes in Computer Science). Springer Nature. pp. 590-598.
    This paper has two aims. The first is to explain a type of wrong that arises when agents obscure responsibility for their actions. Call it “agency laundering.” The second is to use the concept of agency laundering to understand the underlying moral issues in a number of recent cases involving algorithmic decision systems. From the Proceedings of the 14th International Conference, iConference 2019, Washington D.C., March 31-April 3, 2019.
  • Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality.Bernd Stahl, Kevin Macnish, Tilimbe Jiya, Laurence Brooks, Josephina Antoniou & Mark Ryan - 2021 - Science and Engineering Ethics 27 (2):1-29.
    This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD + AI)—using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues that uses qualitative tools to analyse findings from ten targeted case-studies from a range of domains. The analysis coalesces identified singular ethical issues, (from the literature), into clusters to offer a comparison with the proposed classification in the literature. The results show that despite the variety (...)
  • The Effectiveness of Embedded Values Analysis Modules in Computer Science Education: An Empirical Study.Matthew Kopec, Meica Magnani, Vance Ricks, Roben Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, Ronald Sandler, Christo Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Kevin Mills & Mark Wells - 2023 - Big Data and Society 10 (1).
    Embedding ethics modules within computer science courses has become a popular response to the growing recognition that CS programs need to better equip their students to navigate the ethical dimensions of computing technologies like AI, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern’s program that embeds values analysis modules into CS courses. The resulting data suggest (...)
  • Manipulate to empower: Hyper-relevance and the contradictions of marketing in the age of surveillance capitalism.Detlev Zwick & Aron Darmody - 2020 - Big Data and Society 7 (1).
    In this article, we explore how digital marketers think about marketing in the age of Big Data surveillance, automatic computational analyses, and algorithmic shaping of choice contexts. Our starting point is a contradiction at the heart of digital marketing, namely that digital marketing brings about both unprecedented levels of consumer empowerment and autonomy and total control over and manipulation of consumer decision-making. We argue that this contradiction of digital marketing is resolved via the notion of relevance, which represents what Fredric Jameson (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Fairness as Equal Concession: Critical Remarks on Fair AI.Christopher Yeomans & Ryan van Nood - 2021 - Science and Engineering Ethics 27 (6):1-14.
    Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of ‘treat like cases alike’ and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) It must provide a meta-theory for understanding tradeoffs, (...)
  • Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations; with the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers have (...)
  • What has the Trolley Dilemma ever done for us? On some recent debates about the ethics of self-driving cars.Andreas Wolkenstein - 2018 - Ethics and Information Technology 20 (3):163-173.
    Self-driving cars currently face a lot of technological problems that need to be solved before the cars can be widely used. However, they also face ethical problems, among which the question of crash-optimization algorithms is most prominently discussed. Reviewing current debates about whether we should use the ethics of the Trolley Dilemma as a guide towards designing self-driving cars will provide us with insights about what exactly ethical research does. It will result in the view that although we need the (...)
  • The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
  • Modeling Ethics: Approaches to Data Creep in Higher Education.Madisson Whitman - 2021 - Science and Engineering Ethics 27 (6):1-18.
    Though rapid collection of big data is ubiquitous across domains, from industry settings to academic contexts, the ethics of big data collection and research are contested. A nexus of data ethics issues is the concept of creep, or repurposing of data for other applications or research beyond the conditions of original collection. Data creep has proven controversial and has prompted concerns about the scope of ethical oversight. Institutional review boards offer little guidance regarding big data, and problematic research can still (...)
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence.David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)