  • The Ethics of AI Ethics. A Constructive Critique. Jan-Christoph Heilinger - 2022 - Philosophy and Technology 35 (3):1-20.
    The paper presents an ethical analysis and constructive critique of the current practice of AI ethics. It identifies conceptual, substantive, and procedural challenges, and it outlines strategies to address them. The strategies include countering the hype and understanding AI as ubiquitous infrastructure, including neglected issues of ethics and justice, such as structural background injustices, into the scope of AI ethics, and making the procedures and fora of AI ethics more inclusive and better informed with regard to philosophical ethics. These measures (...)
  • Karl Jaspers and Artificial Neural Nets: On the Relation of Explaining and Understanding Artificial Intelligence in Medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3).
    Assistive systems based on Artificial Intelligence are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather than (...)
  • Unknown Future, Repeated Present: A Narrative-Centered Analysis of Long-Term AI Discourse. Micaela Simeone - 2022 - Humanist Studies and the Digital Age 7 (1).
    Recent narratives and debates surrounding long-term AI concerns—the prospect of artificial general intelligence in particular—are fraught with hidden assumptions, priorities, and values. This paper employs a humanistic, narrative-centered approach to analyze the works of two vocal, and opposing, thinkers in the field—Luciano Floridi and Nick Bostrom—to ask how the representational, descriptive differences in their works reveal the high stakes of narrative choices for how we form ideas about humanity, urgency, risk, harm, and possibility in relation to AI. This paper closely (...)
  • A Functional Contextual Account of Background Knowledge in Categorization: Implications for Artificial General Intelligence and Cognitive Accounts of General Knowledge. Darren J. Edwards, Ciara McEnteggart & Yvonne Barnes-Holmes - 2022 - Frontiers in Psychology 13.
    Psychology has benefited from an enormous wealth of knowledge about processes of cognition in relation to how the brain organizes information. Within the categorization literature, this behavior is often explained through theories of memory construction called exemplar theory and prototype theory, which are typically based on similarity or rule functions as explanations of how categories emerge. Although these theories work well at modeling highly controlled stimuli in laboratory settings, they often perform less well outside of these settings, such as explaining (...)
  • A Sociotechnical Perspective for the Future of AI: Narratives, Inequalities, and Human Control. Andreas Theodorou & Laura Sartori - 2022 - Ethics and Information Technology 24 (1).
    Different people have different perceptions about artificial intelligence. It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature of (...)
  • Minding the gap(s): public perceptions of AI and socio-technical imaginaries. Laura Sartori & Giulia Bocca - forthcoming - AI and Society:1-16.
    Deepening and digging into the social side of AI is a novel but emerging requirement within the AI community. Future research should invest in an "AI for people", going beyond the undoubtedly much-needed efforts into ethics, explainability and responsible AI. The article addresses this challenge by problematizing the discussion around AI, shifting the attention to individuals and their awareness, knowledge and emotional response to AI. First, we outline our main argument relative to the need for a socio-technical perspective in the (...)
  • Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US. Huw Roberts, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (6):1-25.
    Over the past few years, there has been a proliferation of artificial intelligence strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union and the United States’ AI strategies and considers the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, the extent to which the implementation of each vision is living up to (...)
  • Extending Introspection. Lukas Schwengerer - 2021 - In Robert William Clowes, Klaus Gärtner & Inês Hipólito (eds.), The Mind-Technology Problem - Investigating Minds, Selves and 21st Century Artifacts. Springer. pp. 231-251.
    Clark and Chalmers propose that the mind extends further than skin and skull. If they are right, then we should expect this to have some effect on our way of knowing our own mental states. If the content of my notebook can be part of my belief system, then looking at the notebook seems to be a way to get to know my own beliefs. However, it is at least not obvious whether self-ascribing a belief by looking at my notebook (...)
  • Digital time: latency, real-time, and the onlife experience of everyday time. Luciano Floridi - 2021 - Philosophy and Technology 34 (3):407-412.
    Digital technologies create and shape our environments, the infosphere, where we spend increasingly more time. Through exploration of such concepts as "latency", "real time" and "unreal time", this article discusses how time has changed due to the digital revolution over the past half-century.
  • GPT-3: Its Nature, Scope, Limits, and Consequences. Luciano Floridi & Massimo Chiriatti - 2020 - Minds and Machines 30 (4):681-694.
    In this commentary, we discuss the nature of reversible and irreversible questions, that is, questions that may enable one to identify the nature of the source of their answers. We then introduce GPT-3, a third-generation, autoregressive language model that uses deep learning to produce human-like texts, and use the previous distinction to analyse it. We expand the analysis to present three tests based on mathematical, semantic, and ethical questions and show that GPT-3 is not designed to pass any of them. (...)
  • Lifting the Curtain: Strategic Visibility of Human Labour in AI-as-a-Service. Gemma Newlands - 2021 - Big Data and Society 8 (1).
    Artificial Intelligence-as-a-Service empowers individuals and organisations to access AI on-demand, in either tailored or ‘off-the-shelf’ forms. However, institutional separation between development, training and deployment can lead to critical opacities, such as obscuring the level of human effort necessary to produce and train AI services. Information about how, where, and for whom AI services have been produced is a valuable secret, which vendors strategically disclose to clients depending on commercial interests. This article provides a critical analysis of how AIaaS vendors manipulate the (...)