Results for 'Anthropomorphism in AI'

994 found
  1. The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
    9 citations
  2. Why You Are (Probably) Anthropomorphizing AI: Varieties of Bias and Anthropomorphism by Proxy. Ali Hasan - manuscript
    In this paper, I discuss biases that take on different subjects and targets: human biases directed at other humans, institutional and technological biases directed at humans, and human biases directed at technology. I start with the core idea, recently defended by Thomas Kelly (2022) among others, that bias involves a systematic departure from a genuine standard or norm. I distinguish biases by the kind(s) of norm they deviate from, e.g., truth norms, epistemic norms, and moral norms. I then introduce distinctions (...)
  3. Why you are (probably) anthropomorphizing AI (Short Version). Ali Hasan - manuscript
    In this paper I argue that, given the way that AI models work and the way that ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. I start with the core idea, recently defended by Thomas Kelly (2022) among others, that bias involves a systematic departure from a genuine standard or norm. I briefly discuss how bias can take on different explicit, implicit, and “truly implicit” (Johnson 2021) forms such as bias by (...)
  4. Can a Robot Smile? Wittgenstein on Facial Expression. Diane Proudfoot - 2013 - In T. P. Racine & K. L. Slaney (eds.), A Wittgensteinian Perspective on the Use of Conceptual Analysis in Psychology. pp. 172-194.
    Recent work in social robotics, which is aimed both at creating an artificial intelligence and providing a test-bed for psychological theories of human social development, involves building robots that can learn from ‘face-to-face’ interaction with human beings — as human infants do. The building-blocks of this interaction include the robot’s ‘expressive’ behaviours, for example, facial-expression and head-and-neck gesture. There is here an ideal opportunity to apply Wittgensteinian conceptual analysis to current theoretical and empirical work in the sciences. Wittgenstein’s philosophical psychology (...)
    2 citations
  5. The Concept of Accountability in AI Ethics and Governance. Theodore M. Lechterman - 2022 - In Justin Bullock, Y. C. Chen, Johannes Himmelreich, V. Hudson, M. Korinek, M. Young & B. Zhang (eds.), The Oxford Handbook of AI Governance. Oxford: Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea (...)
    1 citation
  6. Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    37 citations
  7. Supporting human autonomy in AI systems. Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
    7 citations
  8. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  9. Basic issues in AI policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Cham: Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  10. Why interdisciplinary research in AI is so important, according to Jurassic Park. Marie Oldfield - 2020 - The Tech Magazine 1 (1):1.
    “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” I think this quote resonates with us now more than ever, especially in the world of technological development. The writers of Jurassic Park were years ahead of their time with this powerful quote. As we build new technology, and we push on to see what can actually (...)
  11. Thinking Fast and Slow in AI: the Role of Metacognition. Marianna Bergamaschi Ganapini - manuscript
    Multiple authors; please see the attached paper. AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. We argue that a better study of the mechanisms that allow humans to have these capabilities can help (...)
  12. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
  13. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
    1 citation
  14. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps (...)
  15. Levels of Self-Improvement in AI and their Implications for AI Safety. Alexey Turchin - manuscript
    This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  16. Using Edge Cases to Disentangle Fairness and Solidarity in AI Ethics. James Brusseau - 2021 - AI and Ethics.
    Principles of fairness and solidarity in AI ethics regularly overlap, creating obscurity in practice: acting in accordance with one can appear indistinguishable from deciding according to the rules of the other. However, there exist irregular cases where the two concepts split, and so reveal their disparate meanings and uses. This paper explores two cases in AI medical ethics – one that is irregular and the other more conventional – to fully distinguish fairness and solidarity. Then the distinction is applied to (...)
  17. Argumentation schemes in AI: A literature review. Introduction to the special issue. Fabrizio Macagno - 2021 - Argument and Computation 12 (3):287-302.
    Argumentation schemes [1–3] are a relatively recent notion that continues an extremely ancient debate on one of the foundations of human reasoning, human comprehension, and obviously human argumentation, i.e., the topics. To understand the revolutionary nature of Walton’s work on this subject matter, it is necessary to place it in the debate that it continues and contributes to, namely a view of logic that is much broader than the formalistic perspective that has been adopted from the 20th century until nowadays. (...)
  18. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests. Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential for (...)
  19. The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition. Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
  20. Study on effect of shared investing strategy on trust in AI. Ryosuke Yokoi & Kazuya Nakayachi - 2019 - Japanese Journal of Experimental Social Psychology 59 (1):46-50.
    This study examined the determinants of trust in artificial intelligence (AI) in the area of asset management. Many studies of risk perception have found that value similarity determines trust in risk managers. Some studies have demonstrated that value similarity also influences trust in AI. AI is currently employed in a diverse range of domains, including asset management. However, little is known about the factors that influence trust in asset management-related AI. We developed an investment game and examined whether shared investing (...)
  21. Cyber Security and Dehumanisation. Marie Oldfield - 2021 - 5th Digital Geographies Research Group Annual Symposium.
    Artificial Intelligence is becoming widespread, and as we continue to ask ‘can we implement this?’ we neglect to ask ‘should we implement this?’. There are various frameworks and conceptual journeys one should take to ensure a robust AI product; context is one of the vital parts of this. AI is now expected to make decisions, from deciding who gets a credit card to cancer diagnosis. These decisions affect most, if not all, of society. As developers, if we do not understand or (...)
  22. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
    2 citations
  23. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  24. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction? Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  25. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    4 citations
  26. AI Methods in Bioethics. Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 1 (11):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  27. Love in the time of AI. Amy Kind - 2021 - In Barry Dainton, Attila Tanyi & Will Slocombe (eds.), Minding the Future: Artificial Intelligence, Philosophical Visions and Science Fiction. pp. 89-106.
    As we await the increasingly likely advent of genuinely intelligent artificial systems, a fair amount of consideration has been given to how we humans will interact with them. Less consideration has been given to how—indeed if—we humans will love them. What would human-AI romantic relationships look like? What do such relationships tell us about the nature of love? This chapter explores these questions via consideration of several works of science fiction, focusing especially on the Black Mirror episode “Be Right Back” (...)
  28. Beyond Anthropomorphism: Attributing Psychological Properties to Animals. Kristin Andrews - 2011 - In Tom L. Beauchamp & R. G. Frey (eds.), Oxford Handbook of Animal Ethics. Oxford University Press. pp. 469-494.
    In the context of animal cognitive research, anthropomorphism is defined as the attribution of uniquely human mental characteristics to animals. Those who worry about anthropomorphism in research, however, are immediately confronted with the question of which properties are uniquely human. One might think that researchers must first hypothesize the existence of a feature in an animal before they can, with warrant, claim that the property is uniquely human. But all too often, this isn't the approach. Rather, there is (...)
    5 citations
  29. New developments in the philosophy of AI. Vincent C. Müller - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    10 citations
  30. AI’s Role in Creative Processes: A Functionalist Approach. Leonardo Arriagada & Gabriela Arriagada-Bruneau - 2022 - Odradek. Studies in Philosophy of Literature, Aesthetics, and New Media Theories 8 (1):77-110.
    From 1950 onwards, the study of creativity has not stopped. Today, AI has revitalised debates on the subject. That is especially controversial in the artworld, as the 21st century already features AI-generated artworks. Without discussing issues about AI agency, this article argues for AI’s creativity. For this, we first present a new functionalist understanding of Margaret Boden’s definition of creativity. This is followed by an analysis of empirical evidence on anthropocentric barriers in the perception of AI’s creative capabilities, which is (...)
  31. Harmonizing Law and Innovations in Nanomedicine, Artificial Intelligence (AI) and Biomedical Robotics: A Central Asian Perspective. Ammar Younas & Tegizbekova Zhyldyz Chynarbekovna - manuscript
    Recent progress in AI, nanomedicine and robotics has increased concerns about ethics, policy and law. The increasing complexity and hybrid nature of AI and nanotechnologies impact the functionality of “law in action”, which can lead to legal uncertainty and ultimately to public distrust. There is an immediate need for collaboration between Central Asian biomedical scientists, AI engineers and academic lawyers for the harmonization of AI, nanomedicines and robotics in the Central Asian legal system.
  32. Bioinformatics advances in saliva diagnostics. Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    1 citation
  33. The AI-Stance: Crossing the Terra Incognita of Human-Machine Interactions? Anna Strasser & Michael Wilby - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. Amsterdam: IOS Press. pp. 286-295.
    Although even very advanced artificial systems do not meet the demanding conditions which are required for humans to be a proper participant in a social interaction, we argue that not all human-machine interactions (HMIs) can appropriately be reduced to mere tool-use. By criticizing the far too demanding conditions of standard construals of intentional agency we suggest a minimal approach that ascribes minimal agency to some artificial systems resulting in the proposal of taking minimal joint actions as a case of a (...)
  34. Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be (...)
  35. Debate: What is Personhood in the Age of AI? David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36:473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
    5 citations
  36. Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
    13 citations
  37. Excavating “Excavating AI”: The Elephant in the Gallery. Michael J. Lyons - 2020 - arXiv 2009:1-15.
    Two art exhibitions, “Training Humans” and “Making Faces,” and the accompanying essay “Excavating AI: The politics of images in machine learning training sets” by Kate Crawford and Trevor Paglen, are making substantial impact on discourse taking place in the social and mass media networks, and some scholarly circles. Critical scrutiny reveals, however, a self-contradictory stance regarding informed consent for the use of facial images, as well as serious flaws in their critique of ML training sets. Our analysis underlines the non-negotiability (...)
    2 citations
  38. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E. James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  39. Big Tech corporations and AI: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age. Marianna Capasso & Steven Umbrello - 2023 - In Francesca Mazzi & Luciano Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals. Cham: Springer. pp. 231–249.
    The pervasiveness of AI-empowered technologies across multiple sectors has led to drastic changes concerning traditional social practices and how we relate to one another. Moreover, market-driven Big Tech corporations are now entering public domains, and concerns have been raised that they may even influence public agenda and research. Therefore, this chapter focuses on assessing and evaluating what kind of business model is desirable to incentivise the AI for Social Good (AI4SG) factors. In particular, the chapter explores the implications of this (...)
  40. From Conceptual Content in Big Apes and AI, to the Classical Principle of Explosion: An Interview with Robert B. Brandom [Del contenido conceptual en los grandes monos e IA, hasta el principio de explosión clásico: una entrevista con Robert B. Brandom]. María José Frápolli & Kurt Wischin - 2019 - Disputatio. Philosophical Research Bulletin 8 (9).
    In this Interview, Professor Robert B. Brandom answered ten detailed questions about his philosophy of Rational Pragmatism and Semantic Expressivism, grouped into four topics. 1. Metaphysics and Anthropology, 2. Pragmatics and Semantics, 3. Epistemic Expressivism and 4. Philosophy of Logic. With his careful answers Professor Brandom offers many additional insights into his rigorously constructed account of the relationship “between what we say and think, and what we are saying and thinking about” around the human practice of asking for and giving (...)
    2 citations
  41. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework. Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  42. A unified framework of five principles for AI in society. Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
  43. Australia's Approach to AI Governance in Security and Defence. Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Milton Park: Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  44. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract. Sarah Bankins & Paul Formosa - 2021 - In Redefining the psychological contract in the digital era: issues for research and practice. Cham, Switzerland, pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
  45. Can AI Help Us to Understand Belief? Sources, Advances, Limits, and Future Directions. Andrea Vestrucci, Sara Lumbreras & Lluis Oviedo - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):24-33.
    The study of belief is expanding and involves a growing set of disciplines and research areas. These research programs attempt to shed light on the process of believing, understood as a central human cognitive function. Computational systems and, in particular, what we commonly understand as Artificial Intelligence (AI), can provide some insights into how beliefs work, either as a linear process or as a complex system. However, the computational approach has undergone some scrutiny, in particular about the differences between what (...)
  46. Biases, Evidence and Inferences in the story of Ai. Efraim Wallach - manuscript
    This treatise covers the history, now more than 170 years long, of research and debate concerning the biblical city of Ai. This archetypical chapter in the evolution of biblical archaeology and historiography was never presented in full. I use the historical data as a case study to explore a number of epistemological issues, such as the creation and revision of scientific knowledge, the formation and change of consensus, the Kuhnian model of paradigm shift, several models of discrimination between hypotheses about (...)
  47. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  48. What does AI believe in? Evgeny Smirnov - manuscript
    I conducted an experiment using four different artificial intelligence models developed by OpenAI to estimate the persuasiveness and rational justification of various philosophical stances. The AI models used were text-davinci-003, text-ada-001, text-curie-001, and text-babbage-001, which differed in complexity and the size of their training data sets. For the philosophical stances, the list of 30 questions created by Bourget & Chalmers (2014) was used. The results indicate that each model has its own plausible ‘cognitive’ style. The outcomes (...)
  49. A value sensitive design approach for designing AI-based worker assistance systems in manufacturing. Susanne Vernim, Harald Bauer, Erwin Rauch, Marianne Thejls Ziegler & Steven Umbrello - 2022 - Procedia Computer Science 200:505-516.
    Although artificial intelligence has been given an unprecedented amount of attention in both the public and academic domains in the last few years, its convergence with other transformative technologies like cloud computing, robotics, and augmented/virtual reality is predicted to exacerbate its impacts on society. The adoption and integration of these technologies within industry and manufacturing spaces is a fundamental part of what is called Industry 4.0, or the Fourth Industrial Revolution. The impacts of this paradigm shift on the human operators (...)
  50. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility. Johannes Himmelreich & Désirée Lim - forthcoming - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate AI biases. The chapter begins with an example of racial bias in AI that arises from structural injustice. (...)
1 — 50 / 994