  • The Expected AI as a Sociocultural Construct and Its Impact on the Discourse on Technology. Auli Viidalepp - 2023 - Dissertation, University of Tartu
    The thesis introduces and criticizes the discourse on technology, with a specific reference to the concept of AI. The discourse on AI is particularly saturated with reified metaphors which drive connotations and delimit understandings of technology in society. To better analyse the discourse on AI, the thesis proposes the concept of “Expected AI”, a composite signifier filled with historical and sociocultural connotations, and numerous referent objects. Relying on cultural semiotics, science and technology studies, and a diverse selection of heuristic concepts, (...)
  • Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
  • What Do Technology and Artificial Intelligence Mean Today? Scott H. Hawley & Elias Kruger - forthcoming - In Hector Fernandez (ed.), Sociedad Tecnológica y Futuro Humano, vol. 1: Desafíos conceptuales. pp. 17.
    Technology and Artificial Intelligence, both today and in the near future, are dominated by automated algorithms that combine optimization with models based on the human brain to learn, predict, and even influence the large-scale behavior of human users. Such applications can be understood to be outgrowths of historical trends in industry and academia, yet have far-reaching and even unintended consequences for social and political life around the world. Countries in different parts of the world take different regulatory views for the (...)
  • Theopolis Monk: Envisioning a Future of A.I. Public Service. Scott H. Hawley - 2019 - In Newton Lee (ed.), The Transhumanism Handbook. Springer Verlag. pp. 271-300.
    Visions of future applications of artificial intelligence tend to veer toward the naively optimistic or frighteningly dystopian, neglecting the numerous human factors necessarily involved in the design, deployment and oversight of such systems. The dream that AI systems may somehow replace the irregularities and struggles of human governance with unbiased efficiency is seen to be non-scientific and akin to a religious hope, whereas the current trajectory of AI development indicates that it will increasingly serve as a tool by which humans (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Challenges for an Ontology of Artificial Intelligence. Scott H. Hawley - 2019 - Perspectives on Science and Christian Faith 71 (2):83-95.
    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege. Lantz Fleming Miller - 2015 - Human Rights Review 16 (4):369-391.
    As engineers propose constructing humanlike automata, the question arises as to whether such machines merit human rights. The issue warrants serious and rigorous examination, although it has not yet cohered into a conversation. To set it in a sure direction, this paper proposes phrasing it in terms of whether humans are morally obligated to extend to maximally humanlike automata full human rights, or those set forth in common international rights documents. This paper’s approach is to consider the ontology of humans (...)
  • Empathic Responses and Moral Status for Social Robots: An Argument in Favor of Robot Patienthood Based on K. E. Løgstrup. Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients, since this rests on anthropomorphism and ‘over-identification’, or correct, since spontaneous moral intuition and behavior toward nonhumans are indicative of moral patienthood, such that social robots become our ‘Others’? In this research paper, I weave extant HRI studies that demonstrate empathic (...)