  • How to Make AlphaGo’s Children Explainable.Woosuk Park - 2022 - Philosophies 7 (3):55.
    Under the rubric of understanding the problem of the explainability of AI in terms of abductive cognition, I propose to review the lessons from AlphaGo and her more powerful successors. As AI players in Baduk have arrived at a superhuman level, there seems to be no hope of understanding the secret of their breathtakingly brilliant moves. Without making AI players explainable in some way, both human and AI players would be less-than-omniscient, if not ignorant, epistemic agents. Are we bound to have (...)
  • Political machines: a framework for studying politics in social machines.Orestis Papakyriakopoulos - 2022 - AI and Society 37 (1):113-130.
    In the age of ubiquitous computing and artificially intelligent applications, the concept of social machines serves as a powerful framework for understanding and interpreting interactions in socio-algorithmic ecosystems. Although researchers have largely used it to analyze the interactions of individuals and algorithms, limited attempts have been made to investigate the politics in social machines. In this study, I claim that social machines are per se political machines, and introduce a five-point framework for classifying influence processes in socio-algorithmic ecosystems. By drawing from scholars from (...)
  • The contradictions of digital modernity.Kieron O’Hara - 2020 - AI and Society 35 (1):197-208.
    This paper explores the concept of digital modernity, the extension of narratives of modernity with the special affordances of digital networked technology. Digital modernity produces a new narrative which can be taken in many ways: to be descriptive of reality; a teleological account of an inexorable process; or a normative account of an ideal sociotechnical state. However, it is understood that narratives of digital modernity help shape reality via commercial and political decision-makers, and examples are given from the politics and (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI.Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Philosophical Inquiry into Computer Intentionality: Machine Learning and Value Sensitive Design.Dmytro Mykhailov - 2023 - Human Affairs 33 (1):115-127.
    Intelligent algorithms together with various machine learning techniques hold a dominant position among major challenges for contemporary value sensitive design. Self-learning capabilities of current AI applications blur the causal link between programmer and computer behavior. This creates a vital challenge for the design, development and implementation of digital technologies nowadays. This paper seeks to provide an account of this challenge. The main question that shapes the current analysis is the following: What conceptual tools can be developed within the value sensitive (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices.Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132:741–742, 1960; Wiener in Cybernetics: or Control and Communication in the Animal and the Machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Will intelligent machines become moral patients?Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
  • The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems.Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations.Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1–30.
    Important decisions that impact humans’ lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • Ethics-based auditing of automated decision-making systems: intervention points and policy implications.Jakob Mökander & Maria Axente - 2023 - AI and Society 38 (1):153-171.
    Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations (...)
  • From Individual to Group Privacy in Big Data Analytics.Brent Mittelstadt - 2017 - Philosophy and Technology 30 (4):475-494.
    Mature information societies are characterised by mass production of data that provide insight into human behaviour. Analytics has arisen as a practice to make sense of the data trails generated through interactions with networked devices, platforms and organisations. Persistent knowledge describing the behaviours and characteristics of people can be constructed over time, linking individuals into groups or classes of interest to the platform. Analytics allows for a new type of algorithmically assembled group to be formed that does not necessarily align (...)
  • Ethics of the health-related internet of things: a narrative review.Brent Mittelstadt - 2017 - Ethics and Information Technology 19 (3):1-19.
    The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patients’ functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many (...)
  • Transparent AI: reliabilist and proud.Abhishek Mishra - forthcoming - Journal of Medical Ethics.
    Durán et al. argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’ that traditionally proposed solutions to make black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because transparency procedures currently employed, such as the use of an interpretable predictor, cannot fully overcome the opacity of such models. Computational reliabilism, an alternate approach to (...)
  • The autonomous choice architect.Stuart Mills & Henrik Skaug Sætra - forthcoming - AI and Society:1-13.
    Choice architecture describes the environment in which choices are presented to decision-makers. In recent years, public and private actors have looked at choice architecture with great interest as they seek to influence human behaviour. These actors are typically called choice architects. Increasingly, however, this role of architecting choice is not performed by a human choice architect, but an algorithm or artificial intelligence, powered by a stream of Big Data and infused with an objective it has been programmed to maximise. We (...)
  • Data Science as Machinic Neoplatonism.Dan McQuillan - 2018 - Philosophy and Technology 31 (2):253-272.
    Data science is not simply a method but an organising idea. Commitment to the new paradigm overrides concerns caused by collateral damage, and only a counterculture can constitute an effective critique. Understanding data science requires an appreciation of what algorithms actually do; in particular, how machine learning learns. The resulting ‘insight through opacity’ drives the observable problems of algorithmic discrimination and the evasion of due process. But attempts to stem the tide have not grasped the nature of data science as (...)
  • Prediction Promises: Towards a Metaphorology of Artificial Intelligence.Leonie A. Möck - 2023 - Journal of Aesthetics and Phenomenology 9 (2):119-139.
    Artificial Intelligence is an ambiguous umbrella term. To this day there is no uniform definition of AI, and the term carries several meanings. As examples, I will give two definitions of AI that (...)
  • Ethical Implications and Accountability of Algorithms.Kirsten Martin - 2018 - Journal of Business Ethics 160 (4):835-850.
    Algorithms silently structure our lives. Algorithms can determine whether someone is hired, promoted, offered a loan, or provided housing as well as determine which political ads and news articles consumers see. Yet, the responsibility for algorithms in these important decisions is not clear. This article identifies whether developers have a responsibility for their algorithms later in use, what those firms are responsible for, and the normative grounding for that responsibility. I conceptualize algorithms as value-laden, rather than neutral, in that algorithms (...)
  • Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions.Kirsten Martin & Ari Waldman - 2022 - Journal of Business Ethics 183 (3):653-670.
    Firms use algorithms to make important business decisions. To date, the algorithmic accountability literature has elided a fundamentally empirical question important to business ethics and management: Under what circumstances, if any, are algorithmic decision-making systems considered legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the impact of decision importance, governance, outcomes, and data inputs on perceptions of the legitimacy of algorithmic decisions made by firms. We find that many of the procedural governance (...)
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars.Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Emotional AI and the future of wellbeing in the post-pandemic workplace.Peter Mantello & Manh-Tung Ho - forthcoming - AI and Society:1-7.
    This paper interrogates the growing pervasiveness of affect recognition tools as an emerging layer of human-centric automated management in the global workplace. While vendors tout the neoliberal incentives of emotion-recognition technology as a pre-eminent tool of workplace wellness, we argue that emotional AI recalibrates the horizons of capital not by expanding outward into the consumer realm (like surveillance capitalism). Rather, as a new genus of digital Taylorism, it turns inward, passing through the corporeal exterior to extract greater surplus value and managerial (...)
  • AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
  • A Neo-Republican Critique of AI ethics.Jonne Maas - 2022 - Journal of Responsible Technology 9 (C):100022.
  • AI-Assisted Decision-making in Healthcare: The Application of an Ethics Framework for Big Data in Health and Research.Tamra Lysaght, Hannah Yeefen Lim, Vicki Xafis & Kee Yuan Ngiam - 2019 - Asian Bioethics Review 11 (3):299-314.
    Artificial intelligence is set to transform healthcare. Key ethical issues to emerge with this transformation encompass the accountability and transparency of the decisions made by AI-based systems, the potential for group harms arising from algorithmic bias and the professional roles and integrity of clinicians. These concerns must be balanced against the imperatives of generating public benefit with more efficient healthcare systems from the vastly greater and more accurate computational power of AI. In weighing up these issues, this paper applies the deliberative (...)
  • The paradoxical transparency of opaque machine learning.Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the black-box abstraction (...)
  • Exploring the roles of trust and social group preference on the legitimacy of algorithmic decision-making vs. human decision-making for allocating COVID-19 vaccinations.Marco Lünich & Kimon Kieslich - forthcoming - AI and Society:1-19.
    In combating the ongoing global health threat of the COVID-19 pandemic, decision-makers have to take actions based on a multitude of relevant health data with severe potential consequences for the affected patients. Because of their presumed advantages in handling and analyzing vast amounts of data, computer systems of algorithmic decision-making (ADM) are implemented and substitute for humans in decision-making processes. In this study, we focus on a specific application of ADM in contrast to human decision-making, namely the allocation of COVID-19 vaccines to (...)
  • The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity.Ulrich Leicht-Deobald, Thorsten Busch, Christoph Schank, Antoinette Weibel, Simon Schafheitle, Isabelle Wildhaber & Gabriel Kasper - 2019 - Journal of Business Ethics 160 (2):377-392.
    Organizations increasingly rely on algorithm-based HR decision-making to monitor their employees. This trend is reinforced by the technology industry claiming that its decision-making tools are efficient and objective, downplaying their potential biases. In our manuscript, we identify an important challenge arising from the efficiency-driven logic of algorithm-based HR decision-making, namely that it may shift the delicate balance between employees’ personal integrity and compliance more in the direction of compliance. We suggest that critical data literacy, ethical awareness, the use of participatory (...)
  • Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.Johann Laux - forthcoming - AI and Society:1-14.
    Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. (...)
  • Reducing the contingency of the world: magic, oracles, and machine-learning technology.Simon Larsson & Martin Viktorelius - forthcoming - AI and Society.
    The concept of magic is frequently used to discuss technology, a practice considered useful by some, with others arguing that viewing technology as magic precludes a proper understanding of technology. The concept of magic is especially prominent in discussions of artificial intelligence and machine learning. Based on an anthropological perspective, this paper juxtaposes ML technology with magic, using descriptions drawn from a project on an ML-powered system for propulsion control of cargo ships. The paper concludes that prior scholarly work on (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • “Strongly Recommended” Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies.Marjolein Lanzing - 2019 - Philosophy and Technology 32 (3):549-568.
    This paper explores and rehabilitates the value of decisional privacy as a conceptual tool, complementary to informational privacy, for critiquing personalized choice architectures employed by self-tracking technologies. Self-tracking technologies are promoted and used as a means to self-improvement. Based on large aggregates of personal data and the data of other users, self-tracking technologies offer personalized feedback that nudges the user into behavioral change. The real-time personalization of choice architectures requires continuous surveillance and is a very powerful technology, recently coined as (...)
  • Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy.Sebastian Laacke, Regina Mueller, Georg Schomerus & Sabine Salloch - 2021 - American Journal of Bioethics 21 (7):4-20.
    The development of artificial intelligence (AI) in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders, such as depression, by using data from social media. These AI depression detectors (AIDDs) identify users who are at risk of depression prior to any contact with the healthcare system. The article focuses on the ethical implications of AIDDs regarding affected users’ health-related autonomy. Firstly, it presents the (ethical) discussion of AI (...)
  • The person of the category: the pricing of risk and the politics of classification in insurance and credit.Greta R. Krippner & Daniel Hirschman - 2022 - Theory and Society 51 (5):685-727.
    In recent years, scholars in the social sciences and humanities have turned their attention to how the rise of digital technologies is reshaping political life in contemporary society. Here, we analyze this issue by distinguishing between two classification technologies typical of pre-digital and digital eras that differently constitute the relationship between individuals and groups. In class-based systems, characteristic of the pre-digital era, one’s status as an individual is gained through membership in a group in which salient social identities are shared (...)
  • We Have No Satisfactory Social Epistemology of AI-Based Science.Inkeri Koskinen - forthcoming - Social Epistemology.
    In the social epistemology of scientific knowledge, it is largely accepted that relationships of trust, not just reliance, are necessary in contemporary collaborative science characterised by relationships of opaque epistemic dependence. Such relationships of trust are taken to be possible only between agents who can be held accountable for their actions. But today, knowledge production in many fields makes use of AI applications that are epistemically opaque in an essential manner. This creates a problem for the social epistemology of scientific (...)
  • Who's Leading This Dance?: Theorizing Automatic and Strategic Synchrony in Human-Exoskeleton Interactions.Gavin Lawrence Kirkwood, Christopher D. Otmar & Mohemmad Hansia - 2021 - Frontiers in Psychology 12:624108.
    Wearable robots are an emerging form of technology that allow organizations to combine the strength, precision, and performance of machines with the flexibility, intelligence, and problem-solving abilities of human wearers. Active exoskeletons are a type of wearable robot that gives wearers the ability to effortlessly lift up to 200 lbs., as well as perform other types of physically demanding tasks that would be too strenuous for most humans. Synchronization between exoskeleton suits and wearers is one of the most challenging requirements (...)
  • Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach.Tae Wan Kim & Bryan R. Routledge - 2022 - Business Ethics Quarterly 32 (1):75-102.
    Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and (...)
  • A View From Nowhere: the passage of rough sea at dover from camera to algorithm.Erika Kerruish & Warwick Mules - 2022 - Angelaki 27 (6):3-20.
    In cinematic experience, a view from nowhere appears in an instituting moment – neither in time nor out of time, but part of time itself – when a camera reflex lifts the viewer’s perception out of somewhere and into the infinite time of the film. We argue that the view from nowhere found in Birt Acres’s film Rough Sea at Dover – a fifteen-second shot of waves breaking against a sea wall in Dover, England in 1895 – transcends all attempts (...)
  • Expectations of artificial intelligence and the performativity of ethics: Implications for communication governance.John D. Kelleher, Marguerite Barry & Aphra Kerr - 2020 - Big Data and Society 7 (1).
    This article draws on the sociology of expectations to examine the construction of expectations of ‘ethical AI’ and considers the implications of these expectations for communication governance. We first analyse a range of public documents to identify the key actors, mechanisms and issues which structure societal expectations around artificial intelligence and an emerging discourse on ethics. We then explore expectations of AI and ethics through a survey of members of the public. Finally, we discuss the implications of our findings for (...)
  • Algorithmic content moderation: Technical and political challenges in the automation of platform governance.Christian Katzenbach, Reuben Binns & Robert Gorwa - 2020 - Big Data and Society 7 (1):1–15.
    As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation. Automated hash-matching and predictive machine learning tools – what we define here as algorithmic moderation systems – are increasingly being deployed to conduct content moderation at scale by major platforms for user-generated content such as Facebook, YouTube and Twitter. This article provides an accessible technical primer on how algorithmic moderation works; examines some (...)
  • From Reality to World. A Critical Perspective on AI Fairness.Jean-Marie John-Mathews, Dominique Cardon & Christine Balagué - 2022 - Journal of Business Ethics 178 (4):945-959.
    Fairness of Artificial Intelligence decisions has become a big challenge for governments, companies, and societies. We offer a theoretical contribution to consider AI ethics outside of high-level and top-down approaches, based on the distinction between “reality” and “world” from Luc Boltanski. To do so, we provide a new perspective on the debate on AI fairness and show that criticism of ML unfairness is “realist”, in other words, grounded in an already instituted reality based on demographic categories produced by institutions. Second, (...)
  • Automating anticorruption?María Carolina Jiménez & Emanuela Ceva - 2022 - Ethics and Information Technology 24 (4):1-14.
    The paper explores some normative challenges concerning the integration of Machine Learning (ML) algorithms into anticorruption in public institutions. The challenges emerge from the tensions between an approach treating ML algorithms as allies to an exclusively legalistic conception of anticorruption and an approach seeing them within an institutional ethics of office accountability. We explore two main challenges. One concerns the variable opacity of some ML algorithms, which may affect public officeholders’ capacity to account for institutional processes relying upon ML techniques. (...)
  • Making the black box society transparent.Daniel Innerarity - forthcoming - AI and Society:1-7.
    The growing presence of smart devices in our lives turns all of society into something largely unknown to us. The strategy of demanding transparency stems from the desire to reduce the ignorance to which this automated society seems to condemn us. An evaluation of this strategy first requires that we distinguish the different types of non-transparency. Once we reveal the limits of the transparency needed to confront these devices, the article examines the alternative strategy of explainable artificial intelligence and concludes (...)
  • Handle with care: Assessing performance measures of medical AI for shared clinical decision‐making.Sune Holm - 2021 - Bioethics 36 (2):178-186.
    In this article I consider two pertinent questions that practitioners must address when they deploy an algorithmic system as support in clinical shared decision‐making. The first question concerns how to interpret and assess the significance of different performance measures for clinical decision‐making. The second question concerns the professional obligations that practitioners have to communicate information about the quality of an algorithm's output to patients in light of the principles of autonomy, beneficence, and justice. In the article I review the four (...)
  • AI transparency: a matter of reconciling design with critique.Tomasz Hollanek - forthcoming - AI and Society.
    In the late 2010s, various international committees, expert groups, and national strategy boards have voiced the demand to ‘open’ the algorithmic black box, to audit, expound, and demystify artificial intelligence. The opening of the algorithmic black box, however, cannot be seen only as an engineering challenge. In this article, I argue that only the sort of transparency that arises from critique—a method of theoretical examination that, by revealing pre-existing power structures, aims to challenge them—can help us produce technological systems that (...)
  • AI management beyond the hype: exploring the co-constitution of AI and organizational context.Jonny Holmström & Markus Hällgren - 2022 - AI and Society 37 (4):1575-1585.
    AI technologies hold great promise for addressing existing problems in organizational contexts, but the potential benefits must not obscure the potential perils associated with AI. In this article, we conceptually explore these promises and perils by examining AI use in organizational contexts. The exploration complements and extends extant literature on AI management by providing a typology describing four types of AI use, based on the idea of co-constitution of AI technologies and organizational context. Building on this typology, we propose three (...)
  • Saved by Design? The Case of Legal Protection by Design.Mireille Hildebrandt - 2017 - NanoEthics 11 (3):307-311.
    This discussion note does three things: it explains the notion of ‘legal protection by design’ in relation to data-driven infrastructures that form the backbone of our new ‘onlife world’, it explains how the notion of ‘by design’ relates to the relational nature of what an environment affords its inhabitants, referring to the work of James Gibson, and it explains how this affects our understanding of human capabilities in relation to the affordances of changing environments. Finally, this brief note argues that (...)
  • On the Ethical and Epistemological Utility of Explicable AI in Medicine.Christian Herzog - 2022 - Philosophy and Technology 35 (2):1-31.
    In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence-based medical technology. I will build on the notion of “explicability” due to Floridi, which considers both the intelligibility and accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in terms of direct patient–physician contact, as well (...)
  • The epistemic opacity of autonomous systems and the ethical consequences.Mihály Héder - 2023 - AI and Society 38 (5):1819-1827.
    This paper takes stock of all the various factors that cause the design-time opacity of autonomous systems behaviour. The factors include embodiment effects, design-time knowledge gap, human factors, emergent behaviour and tacit knowledge. This situation is contrasted with the usual representation of moral dilemmas that assume perfect information. Since perfect information is not achievable, the traditional moral dilemma representations are not valid and the whole problem of ethical autonomous systems design proves to be way more empirical than previously understood.
  • Algorithms and values in justice and security.Paul Hayes, Ibo van de Poel & Marc Steen - 2020 - AI and Society 35 (3):533-555.
    This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology. Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration (...)
  • The virtues of interpretable medical artificial intelligence.Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  • 15 challenges for AI: or what AI (currently) can’t do.Thilo Hagendorff & Katharina Wezel - 2020 - AI and Society 35 (2):355-365.
    The current “AI Summer” is marked by scientific breakthroughs and economic successes in the fields of research, development, and application of systems with artificial intelligence. But, aside from the great hopes and promises associated with artificial intelligence, there are a number of challenges, shortcomings and even limitations of the technology. For one, these challenges arise from methodological and epistemological misconceptions about the capabilities of artificial intelligence. Secondly, they result from restrictions of the social context in which the development of applications (...)