Results for 'human-level AI'

1000+ found
  1. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital (...)
    9 citations
  2. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    3 citations
  3. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in (...)
    1 citation
  4. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how (...)
  5. The AI Human Condition is a Dilemma between Authenticity and Freedom.James Brusseau - manuscript
    Big data and predictive analytics applied to economic life are forcing individuals to choose between authenticity and freedom. The fact of the choice cuts philosophy away from the traditional understanding of the two values as entwined. This essay describes why the split is happening, how new conceptions of authenticity and freedom are rising, and the human experience of the dilemma between them. Also, this essay participates in recent philosophical intersections with Shoshana Zuboff’s work on surveillance capitalism, but the investigation (...)
  6. Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making.Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider (...)
  7. AI Wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - manuscript
    Under what conditions would an artificially intelligent system have wellbeing? Despite its obvious bearing on the ethics of human interactions with artificial systems, this question has received little attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain (...)
    1 citation
  8. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  9. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Redefining the psychological contract in the digital era: issues for research and practice. Cham, Switzerland, pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, (...)
    1 citation
  10. The role of robotics and AI in technologically mediated human evolution: a constructive proposal.Jeffrey White - 2020 - AI and Society 35 (1):177-185.
    This paper proposes that existing computational modeling research programs may be combined into platforms for the information of public policy. The main idea is that computational models at select levels of organization may be integrated in natural terms describing biological cognition, thereby normalizing a platform for predictive simulations able to account for both human and environmental costs associated with different action plans and institutional arrangements over short and long time spans while minimizing computational requirements. Building from established research programs, (...)
    1 citation
  11. The Blood Ontology: An ontology in the domain of hematology.Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR Workshop Proceedings 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and (...)
  12. Reasons to Respond to AI Emotional Expressions.Rodrigo Díaz & Jonas Blatter - forthcoming - American Philosophical Quarterly.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do we (...)
  13. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult (...)
  14. Extinction Risks from AI: Invisible to Science?Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims (...)
  15. Panpsychism and AI consciousness.Marcus Arvan & Corey J. Maley - 2022 - Synthese 200 (3):1-22.
    This article argues that if panpsychism is true, then there are grounds for thinking that digitally-based artificial intelligence may be incapable of having coherent macrophenomenal conscious experiences. Section 1 briefly surveys research indicating that neural function and phenomenal consciousness may both be analog in nature. We show that physical and phenomenal magnitudes—such as rates of neural firing and the phenomenally experienced loudness of sounds—appear to covary monotonically with the physical stimuli they represent, forming the basis for an analog relationship between (...)
    1 citation
  16. Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they (...)
    4 citations
  17. Making AI Intelligible: Philosophical Foundations. By Herman Cappelen and Josh Dever. [REVIEW]Nikhil Mahant - forthcoming - Philosophical Quarterly.
    Linguistic outputs generated by modern machine-learning neural net AI systems seem to have the same contents—i.e., meaning, semantic value, etc.—as the corresponding human-generated utterances and texts. Building upon this essential premise, Herman Cappelen and Josh Dever's Making AI Intelligible sets for itself the task of addressing the question of how AI-generated outputs have the contents that they seem to have (henceforth, ‘the question of AI Content’). In pursuing this ambitious task, the book makes several high-level, framework observations about (...)
  18. Catching Treacherous Turn: A Model of the Multilevel AI Boxing.Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. Of several possibilities, the confinement of AI in a box is considered a low-quality solution for AI safety on its own. However, some treacherous AIs could be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field of (...)
  19. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  20. Human ≠ AGI.Roman Yampolskiy - manuscript
    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
  21. Human achievement and artificial intelligence.Brett Karlan - 2023 - Ethics and Information Technology 25 (3):1-12.
    In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond those any human can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game (...)
  22. The debate on the ethics of AI in health care: a reconstruction and critical review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  23. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real (...)
  24. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence.Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
    5 citations
  25. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
    1 citation
  26. On Human Genome Manipulation and Homo technicus: The Legal Treatment of Non-natural Human Subjects.Tyler L. Jaynes - 2021 - AI and Ethics 1 (3):331-345.
    Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time, that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS (...)
    1 citation
  27. Human-Centered AI: The Aristotelian Approach.Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  28. The impact of intelligent decision-support systems on humans’ ethical decision-making: A systematic literature review and an integrated framework.Franziska Poszler & Benjamin Lange - forthcoming - Technological Forecasting and Social Change.
    With the rise and public accessibility of AI-enabled decision-support systems, individuals outsource increasingly more of their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and consequently called for investigations of the impact of pertinent technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. (...)
  29. Interaction and resistance: The recognition of intentions in new human-computer interaction.Vincent C. Müller - 2011 - In Anna Esposito, Antonietta M. Esposito, Raffaele Martone, Vincent C. Müller & Gaetano Scarpetta (eds.), Towards autonomous, adaptive, and context-aware multimodal interfaces: Theoretical and practical issues. Springer. pp. 1-7.
    Just as AI has moved away from classical AI, human-computer interaction (HCI) must move away from what I call ‘good old fashioned HCI’ to ‘new HCI’ – it must become a part of cognitive systems research where HCI is one case of the interaction of intelligent agents (we now know that interaction is essential for intelligent agents anyway). For such interaction, we cannot just ‘analyze the data’, but we must assume intentions in the other, and I suggest these are (...)
  30. Thoughts on Artificial Intelligence and the Origin of Life Resulting from General Relativity, with Neo-Darwinist Reference to Human Evolution and Mathematical Reference to Cosmology.Rodney Bartlett - manuscript
    When this article was first planned, writing was going to be exclusively about two things - the origin of life and human evolution. But it turned out to be out of the question for the author to restrict himself to these biological and anthropological topics. A proper understanding of them required answering questions like “What is the nature of the universe – the home of life – and how did it originate?”, “How can time travel be removed from fantasy (...)
  31. Machines with human-like commonsense.Antonio Lieto - 2021 - 18th Japanese Society for Artificial Intelligence General-Purpose Artificial Intelligence Meeting Group (SIG-AGI).
    I will review the main problems concerning commonsense reasoning in machines and present two recent applications, namely the Dual PECCS linguistic categorization system and the TCL reasoning framework, which have been developed to address, respectively, the problem of typicality effects and that of commonsense compositionality, in a way that is integrated with or compliant with different cognitive architectures, thus extending their knowledge processing capabilities. In doing so I will show how such aspects are better dealt with (...)
  32. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker (...)
    2 citations
  33. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have (...)
  34. Consciousness in Human and Machine: A Theory and Some Falsifiable Predictions.Richard Loosemore - 2009 - In B. Goertzel, P. Hitzler & M. Hutter (eds.), Proceedings of the Second Conference on Artificial General Intelligence. Atlantis Press.
    To solve the hard problem of consciousness we first note that all cognitive systems of sufficient power must get into difficulty when trying to analyze consciousness concepts, because the mechanism that does the analysis will bottom out in such a way that the system declares these concepts to be both real and ineffable. Rather than use this observation to dismiss consciousness as an artifact, we propose a unifying interpretation that allows consciousness to be regarded as explicable at a meta (...), while at the same time being mysterious and inexplicable on its own terms. It is further suggested that science must concede that there are some aspects of the world that deserve to be called ‘real’, but which are beyond explanation. The main conclusion is that thinking machines of the future will, inevitably, have just the same subjective consciousness that we do. Some testable predictions can be derived from this theory.
  35. Simulation, self-extinction, and philosophy in the service of human civilization.Jeffrey White - 2016 - AI and Society 31 (2):171-190.
    Nick Bostrom’s recently patched “simulation argument” (Bostrom in Philos Q 53:243–255, 2003; Bostrom and Kulczycki in Analysis 71:54–61, 2011) purports to demonstrate the probability that we “live” now in an “ancestor simulation”: that is, a simulation of a period prior to that in which a civilization more advanced than our own (“post-human”) becomes able to simulate such a state of affairs as ours. As such simulations under consideration resemble “brains in vats” (BIVs) and may appear open to similar objections, (...)
    4 citations
  36. Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been (...)
    1 citation
  37. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, (...)
    2 citations
  38. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG (...)
    1 citation
  39. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition.Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent (...)
  40. When AI meets PC: exploring the implications of workplace social robots and a human-robot psychological contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  41. As AIs get smarter, understand human-computer interactions with the following five premises.Manh-Tung Ho & Quan-Hoang Vuong - manuscript
    The hypergrowth and hyperconnectivity of networks of artificial intelligence (AI) systems and algorithms increasingly make our interactions with the world, both social and environmental, more technologically mediated. AI systems have started interfering with our choices or making decisions on our behalf: what we see, what we buy, which contents or foods we consume, where we travel to, who we hire, etc. It is imperative to understand the dynamics of human-computer interaction in the age of progressively more competent AI. This essay presents (...)
  42. Are Large Language Models "alive"?Francesco Maria De Collibus - manuscript
    The appearance of openly accessible Artificial Intelligence applications such as Large Language Models, nowadays capable of almost human-level performance in complex reasoning tasks, has had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced that they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships of (...)
  43. A Runtime Framework of the Mind.Fangfang Li & Xiaojie Zhang - manuscript
    How the mind works is the ultimate mystery for human beings. This paper proposes a framework to address it, which we call the self-programming system. The self-programming system can uniformly learn, store, and apply the functions of bodies, external tools, and even the mind itself. However, due to the generality of the mind, traditional scientific methods are not suitable for validating a theory of mind. Therefore, we instead aim to show the explanatory power of the self-programming system. Due to this (...)
  44. AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI.Jose Hernandez-Orallo & Karina Vold - 2019 - In Jose Hernandez-Orallo & Karina Vold (eds.), Proceedings of the AAAI/ACM. pp. 507-513.
    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To (...)
  45. The rise of artificial intelligence and the crisis of moral passivity.Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue, first, that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in (...)
  46. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque, and that since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that (...)
  47. The AI-Stance: Crossing the Terra Incognita of Human-Machine Interactions?Anna Strasser & Michael Wilby - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. Amsterdam: IOS Press. pp. 286-295.
    Although even very advanced artificial systems do not meet the demanding conditions required for humans to be proper participants in a social interaction, we argue that not all human-machine interactions (HMIs) can appropriately be reduced to mere tool-use. By criticizing the far too demanding conditions of standard construals of intentional agency, we suggest a minimal approach that ascribes minimal agency to some artificial systems, resulting in the proposal of taking minimal joint actions as a case of (...)
  48. AI Language Models Cannot Replace Human Research Participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society:1-3.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
  49. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behaviour 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings, and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  50. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in (...)
1 — 50 / 1000