  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes we do not know what really goes on inside, or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • The ethical use of artificial intelligence in human resource management: a decision-making framework. Sarah Bankins - 2021 - Ethics and Information Technology 23 (4):841-854.
    Artificial intelligence is increasingly being used in various human resource management (HRM) functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of deploying AI in domains such as HRM, which directly deal with managing sometimes sensitive aspects of (...)
  • Moralsk ansvar for handlinger til autonome våpensystemer [Moral responsibility for the actions of autonomous weapons systems]. Kjetil Holtmon Akø - 2023 - Norsk Filosofisk Tidsskrift 58 (2-3):118-128.
  • Opening the black boxes of the black carpet in the era of risk society: a sociological analysis of AI, algorithms and big data at work through the case study of the Greek postal services. Christos Kouroutzas & Venetia Palamari - forthcoming - AI and Society:1-14.
    This article draws on contributions from the Sociology of Science and Technology and Science and Technology Studies, the Sociology of Risk and Uncertainty, and the Sociology of Work, focusing on the transformations of employment regarding expanded automation, robotization and informatization. The new work patterns emerging from the introduction of software and hardware technologies based on artificial intelligence, algorithms, big data gathering and robotic systems are examined closely. This article attempts to “open the black boxes” of the “black (...)
  • Epistemo-ethical constraints on AI-human decision making for diagnostic purposes. Dina Babushkina & Athanasios Votsis - 2022 - Ethics and Information Technology 24 (2).
    This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision-making process and conceptualizes epistemo-ethical constraints on this process. We argue for the importance of understanding the underlying machine epistemology in order to raise awareness of, and facilitate realistic expectations from, AI as a decision support system, both among healthcare professionals and the potential benefiters. Understanding the epistemic abilities and limitations of such systems is essential if we are (...)
  • Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence. Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in (...)
  • Transparency in AI. Tolgahan Toy - forthcoming - AI and Society:1-11.
    In contemporary artificial intelligence, the challenge is making intricate connectionist systems—comprising millions of parameters—more comprehensible, defensible, and rationally grounded. Two prevailing methodologies address this complexity. The inaugural approach amalgamates symbolic methodologies with connectionist paradigms, culminating in a hybrid system. This strategy systematizes extensive parameters within a limited framework of formal, symbolic rules. Conversely, the latter strategy remains staunchly connectionist, eschewing hybridity. Instead of internal transparency, it fabricates an external, transparent proxy system. This ancillary system’s mandate is elucidating the principal system’s (...)
  • Framing the effects of machine learning on science. Victo J. Silva, Maria Beatriz M. Bonacelli & Carlos A. Pacheco - forthcoming - AI and Society:1-17.
    Studies investigating the relationship between artificial intelligence and science tend to adopt a partial view. There is no broad and holistic view that synthesizes the channels through which this interaction occurs. Our goal is to systematically map the influence of the latest AI techniques on science. We draw on the work of Nathan Rosenberg to develop a taxonomy of the effects of technology on science. The proposed framework comprises four categories of technology effects on science: intellectual, economic, experimental and instrumental. (...)
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a range of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • Throwing light on black boxes: emergence of visual categories from deep learning. Ezequiel López-Rubio - 2020 - Synthese 198 (10):10021-10041.
    One of the best known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data. Moreover, it has complicated investigation of the role of symbols and language in cognition. This state of affairs has been radically changed by recent experimental findings in artificial deep learning research. Two kinds of artificial (...)
  • The paradoxical transparency of opaque machine learning. Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the black-box abstraction (...)
  • Artificial intelligence, transparency, and public decision-making. Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of artificial intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears of the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • Artificial intelligence in medicine and the disclosure of risks. Maximilian Kiener - 2021 - AI and Society 36 (3):705-713.
    This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation (...)