Results for 'Human-AI cocreation'

964 found
  1. Toward a social theory of Human-AI Co-creation: Bringing techno-social reproduction and situated cognition together with the following seven premises.Manh-Tung Ho & Quan-Hoang Vuong - manuscript
    This article synthesizes current theoretical attempts to understand human-machine interactions and introduces seven premises for understanding our emerging dynamics with increasingly competent, pervasive, and instantly accessible algorithms. The hope is that these seven premises can build toward a social theory of human-AI cocreation. The focus on human-AI cocreation is intended to emphasize two factors: first, the fact that our machine learning systems are socialized; second, the coevolving nature of the human mind and AI (...)
  2. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
  3. Ensuring Mutual Benefit in Human-AI Coexistence.T. Niedzialek - unknown
  4. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85--87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in (...)
    2 citations
  5. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as starting point for a unified body fluids ontology resource. We conclude with a description of how (...)
  6. Ethics at the Frontier of Human-AI Relationships.Henry Shevlin - manuscript
    The idea that humans might one day form persistent and dynamic relationships in professional, social, and even romantic contexts is a longstanding one. However, developments in machine learning and especially natural language processing over the last five years have led to this possibility becoming actualised at a previously unseen scale. Apps like Replika, Xiaoice, and CharacterAI boast many millions of active long-term users, and give rise to emotionally complex experiences. In this paper, I provide an overview of these developments, beginning (...)
  7. Developing a Trusted Human-AI Network for Humanitarian Benefit.Susannah Kate Devitt, Jason Scholz, Timo Schless & Larry Lewis - forthcoming - Journal of Digital War:TBD.
    Humans and artificial intelligences (AI) will increasingly participate digitally and physically in conflicts, yet there is a lack of trusted communications across agents and platforms. For example, humans in disasters and conflicts already use messaging and social media to share information; however, international humanitarian relief organisations treat this information as unverifiable and untrustworthy. AI may reduce the ‘fog-of-war’ and improve outcomes; however, current AI implementations are often brittle, have a narrow scope of application, and carry wide ethical risks. Meanwhile, human (...)
  8. Real Feeling and Fictional Time in Human-AI Interactions.Joel Krueger & Tom Roberts - 2024 - Topoi 43 (3).
    As technology improves, artificial systems are increasingly able to behave in human-like ways: holding a conversation; providing information, advice, and support; or taking on the role of therapist, teacher, or counsellor. This enhanced behavioural complexity, we argue, encourages deeper forms of affective engagement on the part of the human user, with the artificial agent helping to stabilise, subdue, prolong, or intensify a person’s emotional condition. Here, we defend a fictionalist account of human/AI interaction, according to which these (...)
  9. Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT.Jo Ann Oravec - 2023 - Journal of Interactive Learning Research 34 (2).
    Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race” that involves cheating-detection system developers versus technology savvy students is attracting increased attention to cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty and (...)
    1 citation
  10. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker (...)
    3 citations
  11. As AIs get smarter, understand human-computer interactions with the following five premises.Manh-Tung Ho & Quan-Hoang Vuong - manuscript
    The hypergrowth and hyperconnectivity of networks of artificial intelligence (AI) systems and algorithms increasingly render our interactions with the world, socially and environmentally, more technologically mediated. AI systems start interfering with our choices or making decisions on our behalf: what we see, what we buy, which contents or foods we consume, where we travel to, who we hire, etc. It is imperative to understand the dynamics of human-computer interaction in the age of progressively more competent AI. This essay presents (...)
  12. Algorithm exploitation: humans are keen to exploit benevolent AI.Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami & Ophelia Deroy - 2021 - iScience 24 (6):102679.
    We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis (...)
    3 citations
  13. Will AI and Humanity Go to War?Simon Goldstein - manuscript
    This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of (...)
  14. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible (...)
  15. Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What does Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been (...)
    1 citation
  16. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital (...)
    10 citations
  17. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing, however, this paper argues that conventional ESG (...)
    1 citation
  18. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that (...)
    4 citations
  19. When AI meets PC: exploring the implications of workplace social robots and a human-robot psychological contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
    7 citations
  20. AI and Human Rights.Hani Bakeer, Jawad Y. I. Alzamily, Husam Almadhoun, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Engineering Research (IJAER) 8 (10):16-24.
    As artificial intelligence (AI) technologies become increasingly integrated into various facets of society, their impact on human rights has garnered significant attention. This paper examines the intersection of AI and human rights, focusing on key issues such as privacy, bias, surveillance, access, and accountability. AI systems, while offering remarkable advancements in efficiency and capability, also pose risks to individual privacy and can perpetuate existing biases, leading to potential discrimination. The use of AI in surveillance raises ethical concerns (...)
  21. The Blood Ontology: An ontology in the domain of hematology.Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR Workshop Proceedings 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and (...)
  22. Can AI and humans genuinely communicate?Constant Bonard - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1–3), I explore a way to answer this question that I call the ‘mental-behavioral methodology’ (§4–5). This methodology follows the following three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test (...)
  23. The AI-Stance: Crossing the Terra Incognita of Human-Machine Interactions?Anna Strasser & Michael Wilby - 2022 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. IOS Press. pp. 286-295.
    Although even very advanced artificial systems do not meet the demanding conditions which are required for humans to be a proper participant in a social interaction, we argue that not all human-machine interactions (HMIs) can appropriately be reduced to mere tool-use. By criticizing the far too demanding conditions of standard construals of intentional agency we suggest a minimal approach that ascribes minimal agency to some artificial systems resulting in the proposal of taking minimal joint actions as a case of (...)
  24. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into their future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  25. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council(NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human (...)
  26. AI-Completeness: Using Deep Learning to Eliminate the Human Factor.Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
  27. Rethinking AI: Moving Beyond Humans as Exclusive Creators.Renee Ye - 2024 - Proceedings of the Annual Meeting of the Cognitive Science Society, Volume 46.
    I challenge the commonly accepted notion, which I term the 'Made-by-Human Hypothesis,' that Artificial Intelligence (AI) is exclusively crafted by humans, emphasizing how this notion impedes progress. I argue that influences beyond human agency significantly shape AI's trajectory. Introducing the 'Hybrid Hypothesis,' I suggest that the creation of AI is multi-sourced; methods such as evolutionary algorithms influencing AI originate from diverse sources and yield varied impacts. I argue that the development of AI models will increasingly adopt a 'Human+' hybrid composition, (...)
  28. AI language models cannot replace human research participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - 2024 - AI and Society 39 (5):2603-2605.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  29. Group Prioritarianism: Why AI should not replace humanity.Frank Hong - 2024 - Philosophical Studies:1-19.
    If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AI and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every Welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not (...)
  30. AI-Enabled Human Capital Management: Tools for Strategic Workforce Adaptation.M. Arulselvan - 2025 - Journal of Science Technology and Research (JSTAR) 5 (1):530-538.
    This paper explores the application of AI-driven HR analytics in shaping workforce agility, focusing on how real-time data collection, analysis, and modeling foster an adaptable workforce. It highlights the role of predictive analytics in forecasting workforce needs, identifying skill gaps, and optimizing talent deployment. Additionally, the paper discusses how AI enhances strategic decision-making by providing precise metrics and insights into employee behavior, productivity, and satisfaction. The integration of AI into HR systems ultimately shifts HR from a traditionally reactive to a (...)
  31. Against AI Ableism: On "Optimal" Machines and "Disabled" Human Beings.George Saad - 2024 - Borderless Philosophy 7:171-190.
    My aim in this paper is to show how the functionalist standards assumed in the AI debate are, in fact, the assumptions of a capitalist, ableist society writ large. The already established argument against the proposed humanity of AI systems implies a wider critique of the entire ideology of functionalism under which the notion of intelligent machines has taken root.
  32. Human-Centered AI: The Aristotelian Approach.Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  33. Preserving our humanity in the growing AI-mediated politics: Unraveling the concepts of Democracy (民主) and People as the Roots of the state (民本).Manh-Tung Ho & My-Van Luong - manuscript
    Artificial intelligence (AI) has transformed the way people engage with politics around the world: how citizens consume news, how they view the institutions and norms, how civic groups mobilize public interests, how data-driven campaigns are shaping elections, and so on (Ho & Vuong, 2024). Placing people at the center of the increasingly AI-mediated political landscape has become an urgent matter that transcends all forms of institutions. In this essay, we argue that, in this era, it is necessary to look beyond (...)
  34. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a (...)
    1 citation
  35. (1 other version)Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making.Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans (...)
    1 citation
  36. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in (...)
  37. AI training data, model success likelihood, and informational entropy-based value.Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping points and planetary (...)
  38. All too human? Identifying and mitigating ethical risks of Social AI.Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot (...)
    1 citation
  39. AI-Driven Human Resource Analytics for Enhancing Workforce Agility and Strategic Decision-Making.S. M. Padmavathi - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):530-540.
    In today’s rapidly evolving business landscape, organizations must continuously adapt to stay competitive. AI-driven human resource (HR) analytics has emerged as a strategic tool to enhance workforce agility and inform decision-making processes. By leveraging advanced algorithms, machine learning models, and predictive analytics, HR departments can transform vast data sets into actionable insights, driving talent management, employee engagement, and overall organizational efficiency. AI’s ability to analyze patterns, forecast trends, and offer data-driven recommendations empowers HR professionals to make proactive decisions in (...)
  40. Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy.Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
    Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these: the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social (...)
    12 citations
  41. Three tragedies and three shades of finitude that shape human life in the AI era.Manh-Tung Ho & Manh-Toan Ho - manuscript
    This essay seeks to understand what it means for the human collective when AI technologies have become a predominant force in each of our lives through identifying three moral dilemmas (i.e., tragedy of the commons, tragedy of commonsense morality, tragedy of apathy) that shape human choices. In the first part, we articulate AI-driven versions of the three moral dilemmas. Then, in the second part, drawing from evolutionary psychology, existentialism, and East Asian philosophies, we argue that a deep appreciation (...)
  42. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles in trying to understand our own minds, and now we must also find ways to make sense of the minds of AI. The question of whether AI is capable of independent thinking deserves special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival (...)
    3 citations
  43. First human upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea was suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  44. Possibilities and Limitations of AI in Philosophical Inquiry Compared to Human Capabilities.Keita Tsuzuki - manuscript
    Traditionally, philosophy has been strictly a human domain, with wide applications in science and ethics. However, with the rapid advancement of natural language processing technologies like ChatGPT, the question of whether artificial intelligence can engage in philosophical thinking is becoming increasingly important. This work first clarifies the meaning of philosophy based on its historical background, then explores the possibility of AI engaging in philosophy. We conclude that AI has reached a stage where it can engage in philosophical inquiry. The (...)
  45. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  46. The AI Human Condition is a Dilemma between Authenticity and Freedom.James Brusseau - manuscript
    Big data and predictive analytics applied to economic life are forcing individuals to choose between authenticity and freedom. The fact of the choice cuts philosophy away from the traditional understanding of the two values as entwined. This essay describes why the split is happening, how new conceptions of authenticity and freedom are rising, and the human experience of the dilemma between them. The essay also engages with recent philosophical intersections with Shoshana Zuboff’s work on surveillance capitalism, but the investigation (...)
  47. From human to artificial cognition and back: New perspectives on cognitively inspired AI systems.Antonio Lieto & Daniele Radicioni - 2016 - Cognitive Systems Research 39 (c):1-3.
    We overview the main historical and technological elements characterising the rise, the fall and the recent renaissance of the cognitive approaches to Artificial Intelligence and provide some insights and suggestions about the future directions and challenges that, in our opinion, this discipline needs to face in the next years.
    2 citations
  48. (1 other version)Generative AI and human labor: who is replaceable?AbuMusab Syed - 2023 - AI and Society:1-3.
  49. Reframing Deception for Human-Centered AI.Steven Umbrello & Simone Natale - 2024 - International Journal of Social Robotics 16 (11-12):2223–2241.
    The philosophical, legal, and HCI literature concerning artificial intelligence (AI) has explored the ethical implications of these systems and the values they will affect. One aspect that has been only partially explored, however, is the role of deception. Because of the negative connotation of the term, research in AI and Human–Computer Interaction (HCI) has mainly used deception to describe exceptional situations in which the technology either does not work or is used for malicious purposes. Recent theoretical and historical work, however, (...)
  50. AI Wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - Asian Journal of Philosophy.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain (...)
    4 citations
1 — 50 / 964