  • Transparency in AI. Tolgahan Toy - 2024 - AI and Society 39 (6):2841-2851.
    In contemporary artificial intelligence, the challenge is making intricate connectionist systems—comprising millions of parameters—more comprehensible, defensible, and rationally grounded. Two prevailing methodologies address this complexity. The inaugural approach amalgamates symbolic methodologies with connectionist paradigms, culminating in a hybrid system. This strategy systematizes extensive parameters within a limited framework of formal, symbolic rules. Conversely, the latter strategy remains staunchly connectionist, eschewing hybridity. Instead of internal transparency, it fabricates an external, transparent proxy system. This ancillary system’s mandate is elucidating the principal system’s (...)
  • The ethical use of artificial intelligence in human resource management: a decision-making framework. Sarah Bankins - 2021 - Ethics and Information Technology 23 (4):841-854.
    Artificial intelligence increasingly feeds into various human resource management functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects of (...)
  • Artificial intelligence, transparency, and public decision-making. Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence. Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits makes for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in (...)
  • The paradoxical transparency of opaque machine learning. Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the black-box abstraction (...)
  • Framing the effects of machine learning on science. Victo J. Silva, Maria Beatriz M. Bonacelli & Carlos A. Pacheco - forthcoming - AI and Society:1-17.
    Studies investigating the relationship between artificial intelligence and science tend to adopt a partial view. There is no broad and holistic view that synthesizes the channels through which this interaction occurs. Our goal is to systematically map the influence of the latest AI techniques on science. We draw on the work of Nathan Rosenberg to develop a taxonomy of the effects of technology on science. The proposed framework comprises four categories of technology effects on science: intellectual, economic, experimental and instrumental. (...)
  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a series of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • AI: artistic collaborator? Claire Anscomb - forthcoming - AI and Society:1-11.
    Increasingly, artists describe the feeling of creating images with generative AI systems as like working with a “collaborator”—a term that is also common in the scholarly literature on AI image-generation. If it is appropriate to describe these dynamics in terms of collaboration, as I demonstrate, it is important to determine the form and nature of these joint efforts, given the appreciative relevance of different types of contribution to the production of an artwork. Accordingly, I examine three kinds of collaboration that (...)
  • Crossing the Trust Gap in Medical AI: Building an Abductive Bridge for xAI. Steven S. Gouveia & Jaroslav Malík - 2024 - Philosophy and Technology 37 (3):1-25.
    In this paper, we argue that one way to approach what is known in the literature as the “Trust Gap” in Medical AI is to focus on explanations from an Explainable AI (xAI) perspective. Against the current framework on xAI – which does not offer a real solution – we argue for a pragmatist turn, one that focuses on understanding how we provide explanations in Traditional Medicine (TM), composed by human agents only. Following this, explanations have two specific relevant components: (...)
  • The perfect technological storm: artificial intelligence and moral complacency. Marten H. L. Kaas - 2024 - Ethics and Information Technology 26 (3):1-12.
    Artificially intelligent machines are different in kind from all previous machines and tools. While many are used for relatively benign purposes, the types of artificially intelligent machines that we should care about, the ones that are worth focusing on, are the machines that purport to replace humans entirely and thereby engage in what Brian Cantwell Smith calls “judgment.” As impressive as artificially intelligent machines are, their abilities are still derived from humans and as such lack the sort of normative commitments (...)
  • Throwing light on black boxes: emergence of visual categories from deep learning. Ezequiel López-Rubio - 2020 - Synthese 198 (10):10021-10041.
    One of the best known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data. Moreover, it has complicated investigation about the role of symbols and language in cognition. This state of things has been radically changed by recent experimental findings in artificial deep learning research. Two kinds of artificial (...)
  • Opening the black boxes of the black carpet in the era of risk society: a sociological analysis of AI, algorithms and big data at work through the case study of the Greek postal services. Christos Kouroutzas & Venetia Palamari - forthcoming - AI and Society:1-14.
    This article draws on contributions from the Sociology of Science and Technology and Science and Technology Studies, the Sociology of Risk and Uncertainty, and the Sociology of Work, focusing on the transformations of employment regarding expanded automation, robotization and informatization. The new work patterns emerging due to the introduction of software and hardware technologies, which are based on artificial intelligence, algorithms, big data gathering and robotic systems are examined closely. This article attempts to “open the black boxes” of the “black (...)
  • Artificial intelligence in medicine and the disclosure of risks. Maximilian Kiener - 2021 - AI and Society 36 (3):705-713.
    This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation (...)
  • Analyzing the justification for using generative AI technology to generate judgments based on the virtue jurisprudence theory. Shilun Zhou - 2024 - Journal of Decision Systems 1:1-24.
    This paper responds to the question of whether judgements generated by judges using ChatGPT can be directly adopted. It posits that it is unjust for judges to rely on and directly adopt ChatGPT-generated judgements, based on virtue jurisprudence theory. This paper innovatively applies case-based empirical analysis and is the first to use the virtue jurisprudence approach to analyse the question and support its argument. The first section reveals the use of generative AI-based tools in judicial practice and the existence of erroneous (...)
  • Epistemo-ethical constraints on AI-human decision making for diagnostic purposes. Dina Babushkina & Athanasios Votsis - 2022 - Ethics and Information Technology 24 (2).
    This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision making process and conceptualizes epistemo-ethical constraints on this process. We argue for the importance of the understanding of the underlying machine epistemology in order to raise awareness of and facilitate realistic expectations from AI as a decision support system, both among healthcare professionals and the potential benefiters. Understanding the epistemic abilities and limitations of such systems is essential if we are (...)
  • Moralsk ansvar for handlinger til autonome våpensystemer [Moral Responsibility for the Actions of Autonomous Weapons Systems]. Kjetil Holtmon Akø - 2023 - Norsk Filosofisk Tidsskrift 58 (2-3):118-128.