Results for 'Ai Matsui'

425 found
  1. Inferentialism and Semantic Externalism: A Neglected Debate Between Sellars and Putnam. Takaaki Matsui - forthcoming - British Journal for the History of Philosophy:1-20.
    In his 1975 paper “The Meaning of ‘Meaning’”, Hilary Putnam famously argued for semantic externalism. Little attention has been paid, however, to the fact that already in 1973, Putnam had presented the idea of the linguistic division of labor and the Twin Earth thought experiment in his comment on Wilfrid Sellars’s “Meaning as Functional Classification” at a conference, and Sellars had replied to Putnam from a broadly inferentialist perspective. The first half of this paper aims to trace the development of (...)
  2. Saliva Ontology: An Ontology-Based Framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
  3. Bioinformatics Advances in Saliva Diagnostics. Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
  4. Towards a Body Fluids Ontology: A Unified Application Ontology for Basic and Translational Science. Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  5. Transparent, Explainable, and Accountable AI for Robotics. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
  6. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
  7. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  8. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2019 - Synthese:arXiv:1901.02918v1.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  9. New Developments in the Philosophy of AI. Vincent Müller - 2016 - In Fundamental Issues of Artificial Intelligence. Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
  10. AI Methods in Bioethics. Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 1 (11):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  11. Friendly Superintelligent AI: All You Need is Love. Michael Prinzing - 2017 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  12. Expanding Care-Centered Value Sensitive Design: Designing Care Robots with AI for Social Good Norms. Steven Umbrello, Marianna Capasso, Maurizio Balistreri, Alberto Pirni & Federica Merenda - manuscript
    The increasing automation and ubiquity of robotics deployed within the field of care boasts promising advantages. However, challenging ethical issues also arise as a consequence. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. It takes the value sensitive design (VSD) approach to technology design and extends its application to care robots by not only integrating the values of care, but also (...)
  13. AI Extenders and the Ethics of Mental Health. Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  14. Philosophy and Theory of Artificial Intelligence, 3–4 October (Report on PT-AI 2011). Vincent C. Müller - 2011 - The Reasoner 5 (11):192-193.
    Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
  15. Theory and Philosophy of AI (Minds and Machines, 22/2 - Special Volume). Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
  16. Designing AI for Social Good: Seven Essential Factors. Josh Cowls, Thomas C. King, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap (...)
  17. Toward an Ethics of AI Assistants: An Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
  18. Blind Spots in AI Ethics and Biases in AI Governance. Nicholas Kluge Corrêa - manuscript
    There is an interesting link between critical theory and certain genres of literature that may be of interest to the current debate on AI ethics. While critical theory generally points out certain deficiencies in the present to criticize it, futurology and literary genres such as cyberpunk extrapolate our present deficits into possible dystopian futures to criticize the status quo. Given the great advance of the AI industry in recent years, an increasing number of ethical matters have been raised and debated, (...)
  19. Aiming AI at a Moving Target: Health. Mihai Nadin - forthcoming - AI and Society:1-9.
    Justified by spectacular achievements facilitated through applied deep learning methodology, the “Everything is possible” view dominates this new hour in the “boom and bust” curve of AI performance. The optimistic view collides head on with the “It is not possible”—ascertainments often originating in a skewed understanding of both AI and medicine. The meaning of the conflicting views can be assessed only by addressing the nature of medicine. Specifically: Which part of medicine, if any, can and should be entrusted to AI—now (...)
  20. AI, Concepts, and the Paradox of Mental Representation, with a Brief Discussion of Psychological Essentialism. Eric Dietrich - 2001 - Journal of Experimental and Theoretical AI 13 (1):1-7.
    Mostly philosophers cause trouble. I know because on alternate Thursdays I am one -- and I live in a philosophy department where I watch all of them cause trouble. Everyone in artificial intelligence knows how much trouble philosophers can cause (and in particular, we know how much trouble one philosopher -- John Searle -- has caused). And, we know where they tend to cause it: in knowledge representation and the semantics of data structures. This essay is about a recent case (...)
  21. AI, Situatedness, Creativity, and Intelligence; or the Evolution of the Little Hearing Bones. Eric Dietrich - 1996 - Journal of Experimental and Theoretical AI 8 (1):1-6.
    Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, (...)
  22. AI and the Mechanistic Forces of Darkness. Eric Dietrich - 1995 - Journal of Experimental and Theoretical AI 7 (2):155-161.
    Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity’s arch-foe: The Cognitive Scientists -- AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are (...)
  23. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
  24. The Debate on the Ethics of AI in Health Care: A Reconstruction and Critical Review. Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
  25. When Should Co-Authorship Be Given to AI? G. P. Transformer Jr, End X. Note, M. S. Spellchecker & Roman Yampolskiy - manuscript
    If an AI makes a significant contribution to a research paper, should it be listed as a co-author? The current guidelines in the field have been created to reduce duplication of credit between two different authors in scientific articles. A new computer program could be identified and credited for its impact in an AI research paper that discusses an early artificial intelligence system which is currently under development at Lawrence Berkeley National. One way to imagine the future of artificial intelligence (...)
  26. Supporting Human Autonomy in AI Systems. Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  27. Artificial Intelligence as Art – What the Philosophy of Art Can Offer the Understanding of AI and Consciousness. Hutan Ashrafian - manuscript
    Defining Artificial Intelligence and Artificial General Intelligence remain controversial and disputed. They stem from a longer-standing controversy of what is the definition of consciousness, which if solved could possibly offer a solution to defining AI and AGI. Central to these problems is the paradox that appraising AI and Consciousness requires epistemological objectivity of domains that are ontologically subjective. I propose that applying the philosophy of art, which also aims to define art through a lens of epistemological objectivity where the domains (...)
  28. Excavating “Excavating AI”: The Elephant in the Gallery. Michael J. Lyons - manuscript
    Contains critical commentary on the exhibitions "Training Humans" and "Making Faces" by Kate Crawford and Trevor Paglen, and on the accompanying essay "Excavating AI: The politics of images in machine learning training sets.".
  29. AI Alignment Problem: “Human Values” Don’t Actually Exist. Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
  30. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk. Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created (...)
  31. Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  32. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  33. First Human Upload as AI Nanny. Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create a safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  34. Unpredictability of AI. Roman Yampolskiy - manuscript
    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
  35. Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
  36. Limits of Trust in Medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems (...)
  37. Mapping Value Sensitive Design Onto AI for Social Good Principles. Steven Umbrello & Ibo van de Poel - manuscript
    Value Sensitive Design (VSD) is an established method for integrating values in technical design. It has been applied to different technologies and recently also to artificial intelligence (AI) applications. We argue that AI poses a number of specific challenges to VSD that require a somewhat adapted VSD approach. These challenges mainly derive from the use of machine learning (ML) in AI. ML poses two challenges to VSD. First, it may be opaque (to humans) how an AI system has learned certain things, (...)
  38. On the Role of Research in Vietnamese Education in the 4.0 Era [Về vai trò của nghiên cứu trong giáo dục Việt Nam thời đại 4.0]. Quan-Hoang Vuong - 2019 - Trang Thông Tin Điện Tử Hội Đồng Lý Luận Trung Ương 2019 (8):1-15.
    The 4.0 revolution has been bringing major changes to the lives of the Vietnamese people. The convenience of Grab, the round-the-clock updates of Facebook, and the diversity of content on YouTube are pushing society toward a new era of computational entrepreneurship (Vuong, 2019). There, every individual has the opportunity to start a venture through connections reaching everywhere in the world (...)
  39. AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI. Jose Hernandez-Orallo & Karina Vold - 2019 - In Proceedings of the AAAI/ACM 2019 Conference on AIES. pp. 507-513.
    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To (...)
  40. There is No General AI. Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
  41. AI Can Help Us Live More Deliberately. Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation (...)
  42. When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract. Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  43. Making Metaethics Work for AI: Realism and Anti-Realism. Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  44. Is There a Future for AI Without Representation? Vincent C. Müller - 2007 - Minds and Machines 17 (1):101-115.
    This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “New AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents, which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of (...)
  45. Steps to Designing AI-Empowered Nanotechnology: A Value Sensitive Design Approach. Steven Umbrello - 2019 - Delphi - Interdisciplinary Review of Emerging Technologies 2 (2):79-83.
    Advanced nanotechnology promises to be one of the fundamental transformational emerging technologies alongside others such as artificial intelligence, biotechnology, and other informational and cognitive technologies. Although scholarship on nanotechnology, particularly advanced nanotechnology such as molecular manufacturing, has nearly ceased in the last decade, normal nanotechnology that is building the foundations for more advanced versions has permeated many industries and commercial products and has become a billion-dollar industry. This paper acknowledges the socialtechnicity of advanced nanotechnology and proposes how its convergence (...)
  46. Representation, Analytic Pragmatism and AI. Raffaela Giovagnoli - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. pp. 161-169.
    Our contribution aims at identifying a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the necessity to find a solution that overcomes, on the one side, strong AI (i.e. Haugeland) and, on the other side, the view that rules out AI as an explanation of human capacities (i.e. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak AI (...)
  47. A Modal Defence of Strong AI. Steffen Borge - 2007 - In Dermot Moran & Stephen Voss (eds.), The Proceedings of the Twenty-First World Congress of Philosophy. The Philosophical Society of Turkey. pp. 127-131.
    John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle’s Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the implementation (...)
  48. Consciousness as Computation: A Defense of Strong AI Based on Quantum-State Functionalism. R. Michael Perry - 2006 - In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
    The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a defense of strong AI based on machine-state functionalism at the quantum level, or quantum-state functionalism. I consider arguments against strong AI, then summarize some counterarguments I find compelling, including Torkel Franzén’s work which challenges Roger Penrose’s claim, based on Gödel incompleteness, that mathematicians have nonalgorithmic levels of “certainty.” Some consequences of strong AI are (...)
  49. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses. Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir fiction, (...)
  50. Factors Influencing High School Students’ University Choice Decisions in Vietnam: Evidence from a 2020 Survey [Các yếu tố ảnh hưởng tới quyết định chọn trường đại học của học sinh THPT tại Việt Nam: Bằng chứng khảo sát năm 2020]. Lê Thị Mỹ Linh & Khúc Văn Quý - manuscript
    The study aims to identify and assess the degree of influence of various factors on high school students’ decisions about which university to attend. Online interviews combined with a questionnaire were used to collect data from 200 first-year students at universities in Hanoi and outside the Hanoi area during February and March 2020. (...)
Results 1-50 of 425