Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
This book gathers contributions from the fourth edition of the Conference on "Philosophy and Theory of Artificial Intelligence" (PT-AI), held on 27-28 September 2021 at Chalmers University of Technology in Gothenburg, Sweden. It covers topics at the interface between philosophy, cognitive science, ethics and computing. It discusses advanced theories fostering the understanding of human cognition, human autonomy, dignity and morality, and the development of corresponding artificial cognitive structures, analyzing important aspects of the relationship between humans and AI systems, including the ethics of AI. This book offers a thought-provoking snapshot of what is currently going on, and of the main challenges, in the multidisciplinary field of the philosophy of artificial intelligence.
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as AI's impact on human dignity and society, the responsibilities and rights of machines, and AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks and dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely yet comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
Proceedings of the papers presented at the Symposium on "Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World" at the 2012 AISB/IACAP World Congress, held in the Turing year, 2–6 July 2012, at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test --- Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents - Michael Zillich: My Robot is Smarter than Your Robot: On the Need for a Total Turing Test for Robots - Adam Linson, Chris Dobbyn and Robin Laney: Interactive Intelligence: Behaviour-based AI, Musical HCI and the Turing Test - Javier Insa, Jose Hernandez-Orallo, Sergio España, David Dowe and M. Victoria Hernandez-Lloreda: The anYnt Project Intelligence Test (Demo) - Jose Hernandez-Orallo, Javier Insa, David Dowe and Bill Hibbard: Turing Machines and Recursive Turing Tests - Francesco Bianchini and Domenica Bruni: What Language for Turing Test in the Age of Qualia? - Paul Schweizer: Could there be a Turing Test for Qualia? - Antonio Chella and Riccardo Manzotti: Jazz and Machine Consciousness: Towards a New Turing Test - William York and Jerry Swan: Taking Turing Seriously (But Not Literally) - Hajo Greif: Laws of Form and the Force of Function: Variations on the Turing Test.
The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui with the goal of promoting awareness of the importance of the ethical implementation and regulation of AI. AIRES is today an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international AIRES chapter and, as such, we are committed to promoting and advancing the AIRES Mission. Our mission is to focus on educating tomorrow's AI leaders in ethical principles, to ensure that AI is created ethically and responsibly. As there are still few proposals for how we should implement ethical principles and normative guidelines in the practice of AI system development, the aim of this work is to try to narrow this gap between discourse and praxis, between abstract principles and technical implementation. In this work, we seek to introduce the reader to the subject of AI Ethics and Safety. At the same time, we present a series of tools to help developers of intelligent systems build "good" models. This work is a guide under development, published in English and Portuguese. Contributions and suggestions are welcome.
The goal of Transhumanism is to change the human condition through radical enhancement of its positive traits and through AI (Artificial Intelligence). Among these traits the transhumanists highlight creativity. Here we first describe human creativity at more fundamental levels than those related to the arts and sciences, for example when childhood is taken into account. We then acknowledge that creativity is experienced on both its bright and dark sides. Second, we describe attempts to improve creativity both at the bodily level and in the emulation of human behavior in AI. These developments are presented both as effective results in laboratories and as transhumanist proposals emerging from them. Third, we introduce the work of theologians and ethicists who study such proposals through the lens of the "created co-creator" metaphor. Finally, we analyze these developments through a reflection on the nature of creativity. We identify three problems in the current ideas about enhancement: first, they are bound to Western standards of creativity; second, the tempo and mode of evolution (involving an extended childhood) are not taken into account; third, the enhancement of creativity develops both its healthy and its perverse sides. Proponents of the created co-creator metaphor also face the problems pointed out. Concerning AI, there is a promising prospect of approaching human creativity, but there are some intrinsic limits: human emotions are ambiguous, contradictory and far from rational control; human creativity is iconoclastic, thus highlighting the importance of youth and new generations; and we do not expect AI beings to present the perverse side of creativity. Our conclusions are: first, that procreation and the new generations are essential to human creativity; in the same way, the wheat does not grow without the chaff; and finally, that birth breaks with causal connections and allows us to act forgivingly--at birth, God's creative act is re-enacted.
Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals' experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness. In terms of decision makers, the use of human decision makers over AIs generally resulted in better perceptions of respectful treatment. In terms of decision valence, positive decisions generally resulted in better perceptions of respectful treatment than negative ones. Where these preferences conflict, on some indicators people preferred positive AI decisions over negative human decisions. Qualitative responses show how people identify justice concerns with both AI and human decision making. We outline implications for theory, practice, and future research.
Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology's human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance (ESG) investing; however, this paper argues that conventional ESG frameworks are inadequate for AI-intensive companies. To fully account for contemporary technology, the following categories of evaluation will be developed and featured as vital investing criteria: autonomy, dignity, privacy, and performance. With these priorities established, the larger goal is a model for humanitarian investing in AI-intensive companies that is intellectually robust, manageable for analysts, useful for portfolio managers, and credible for investors.
Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas of our society. Nevertheless, if such advances are not made prudently and with critical reflection, they can result in negative outcomes for humanity. For this reason, several researchers in the area have developed a concept of AI that is robust, beneficial, and safe for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted behaviors of intelligent agents and systems while at the same time specifying what we really want such systems to do, especially when we consider the possibility of intelligent agents acting across several domains over the long term. It is of utmost importance that artificial intelligent agents have their values aligned with human values, given that we cannot expect an AI to develop human moral values simply because of its intelligence, as discussed in the Orthogonality Thesis. Perhaps this difficulty comes from the way we are addressing the problem of expressing objectives, values, and ends, using representational cognitive methods. A solution to this problem would be the dynamic approach proposed by Dreyfus, whose phenomenological philosophy shows that the human experience of being-in-the-world is, in several respects, not well represented by symbolic or connectionist cognitive methods, especially in regard to the question of learning values. A possible approach to this problem would be to use theoretical models such as SED (situated embodied dynamics) to address the value learning problem in AI.
One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI's Stable Diffusion and OpenAI's DALL-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social effects. AI problems are solved by more AI, not less. Permissions and restrictions governing AI emerge from a decentralized process, instead of a unified authority. The work of ethics is embedded in AI development and application, instead of functioning from outside. Together, these attitudes and practices remake ethics as provoking rather than restraining artificial intelligence.
Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders in a continuum between fully-externalized systems, loosely coupled with humans, and fully-internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications. We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.
The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally 'extend' into the tools. Several extended mind theorists have argued that this 'extended' view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this chapter we suggest that using AI extenders, i.e., tightly coupled cognitive extenders that are imbued with machine learning and other 'artificially intelligent' tools, presents both new ethical challenges and opportunities for mental health. We focus on several mental health conditions that may develop differently through the use of AI extenders by people with cognitive disorders, and then discuss some of the related opportunities and challenges.
The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees' increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely future psychological contract partners for human employees, given that these entities transform notions of workplace technology from being a tool to being an active partner. We first overview the increasing role of robots in the workplace, particularly through the advent of sociable AI, and synthesize the literature on human–robot interaction. We then develop an account of a human–social robot psychological contract and zoom in on the implications of this exchange for the enactment of reciprocity. Given the future-focused nature of our work, we utilize a thought experiment, a commonly used form of conceptual and mental model reasoning, to expand on our theorizing. We then outline potential implications of human–social robot psychological contracts and offer a range of pathways for future research.
This encyclopedia covers, in an introductory but hopefully rigorous way, a variety of concepts, themes, problems, arguments and theories located in a relatively recent area of study usually described as "logico-philosophical studies". Broadly speaking, and although the theoretical territory covered is extensive and its contours sometimes diffuse, we can say that the area investigates a set of fundamental questions about the nature of human language, mind, cognition and reasoning, as well as questions about their connections with non-mental and extralinguistic reality. The reason for the label is the following: on the one hand, the research is described as philosophical by virtue of the high degree of generality and abstraction of the questions examined (among other things); on the other, it is described as logical by virtue of being logically disciplined research, in the sense that it makes intensive use of concepts, techniques and methods from the discipline of logic. The aggregate of topics that constitutes the area of logico-philosophical studies is already visible, at least in part, in Ludwig Wittgenstein's Tractatus Logico-Philosophicus, a work published in 1921. And a good way to get a synoptic idea of the disciplinary territory covered by this encyclopedia, or at least a substantial portion of it, is to extract from the Tractatus a list of the most salient topics discussed there; the list will certainly include topics of the following kind, many of which can be found throughout this encyclopedia: facts and states of affairs; objects; representation; beliefs and mental states; thoughts; the proposition; proper names; truth values and bivalence; quantification; truth functions; logical truth; identity; tautology; mathematical reasoning; the nature of inference; scepticism and solipsism; induction; the logical constants; negation; logical form; the laws of science; number.
Will AI take away your job? The answer is probably not. AI systems can be good predictive systems and very good at pattern recognition. AI systems have a very repetitive approach to sets of data, which can be useful in certain circumstances. However, AI does make obvious mistakes. This is because AI does not have a sense of context. As humans, we have years of experience in the real world. We have vast amounts of contextual data stored in our brains that make it possible to predict and to know the boundaries of the real world, so that even if we have never been in a particular situation, we are still able to deal with it. The unknown situation is where AI will fall down. In engineering, AI is being developed to monitor and control certain systems; this could extend to nuclear power plants, for example. If we examine control systems for nuclear power plants on vessels, the system would likely be programmed using the safety of the reactor as the main priority. Automation is crucial in engineering systems like this, as the likelihood and cost of human error is high and human reaction times are slow in comparison to the system. However, there is always the possibility of unforeseen external factors that may override plant safety and that cannot be programmed for. If we look at a plant on board a submarine, for example, we see such additional factors as ship safety. In one example, the power plant might breach safety parameters, in which case an automatic system may shut down the reactor. However, there might be a greater urgency, such as a flood in another compartment or an attack upon the vessel, that would override shutting down the reactor. The AI system can neither observe nor utilise information from the external environment to make this type of judgement, whereas the operator may be completely aware. To programme context into AI systems would be near impossible. We still need operators because we still need to understand the environment around us, which may contain an infinite number of possibilities. In a further example of this, if we examine the ICO (Information Commissioner's Office) guidance, it is clear that in clinical settings a human being will always be required to validate findings and to provide context. Sometimes having a lot of data just doesn't replace social interaction, intuition and experience.
Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients' perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals' perceptions of being treated in a dignified and respectful way in various healthcare decision contexts. Participants were subject to a 2 (human or AI decision maker) x 2 (positive or negative decision outcome) x 2 (diagnostic or resource allocation healthcare scenario) factorial design. We found evidence of a "human bias" (i.e., a preference for human over AI decision makers) and an "outcome bias" (i.e., a preference for positive over negative outcomes). However, we found that for perceptions of respectful and dignified interpersonal treatment, it matters more who makes the decisions in diagnostic cases and it matters more what the outcomes are in resource allocation cases. We also found that humans were consistently viewed as appropriate decision makers and AI was viewed as dehumanizing, and that participants perceived they were treated better when subject to diagnostic as opposed to resource allocation decisions. Thematic coding of open-ended text responses supported these results. We also outline the theoretical and practical implications of these findings.
Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems' computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, this frequently prompts those giving the reasons to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
The study of belief is expanding and involves a growing set of disciplines and research areas. These research programs attempt to shed light on the process of believing, understood as a central human cognitive function. Computational systems and, in particular, what we commonly understand as Artificial Intelligence (AI), can provide some insights on how beliefs work as either a linear process or as a complex system. However, the computational approach has undergone some scrutiny, in particular about the differences between what is distinctively human and what can be inferred from AI systems. The present article investigates to what extent recent developments in AI provide new elements to the debate and clarify the process of belief acquisition, consolidation, and recalibration. The article analyses and debates current issues and topics of investigation such as: different models to understand belief, the exploration of belief in an automated reasoning environment, the case of religious beliefs, and future directions of research.
Implicit stochastic models, including both 'deep neural networks' (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called 'explainable AI' (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term 'explanation' to mean something else, namely: 'interpretation'. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.
Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine, while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A solution to any of the so-called NP-complete problems would also yield a solution to every other problem in that class. Their artificial-intelligence analogue is the class of AI-complete problems, for which a complete mathematical formalization still does not exist. In this chapter we will focus on analysing computational classes to better understand possible formalizations of AI-complete problems, and to see whether a universal algorithm, such as a Turing test, could exist for all AI-complete problems. In order to better observe how modern computer science tries to deal with computational complexity issues, we present several different deep-learning strategies involving optimization methods to see that the inability to exactly solve a problem from a higher-order computational class does not mean there is not a satisfactory solution using state-of-the-art machine-learning techniques. Such methods are compared to philosophical issues and psychological research regarding human abilities of solving analogous NP-complete problems, to fortify the claim that we do not need to have an exact and correct way of solving AI-complete problems to nevertheless possibly achieve the notion of strong AI.
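As an editorial illustration of the solve/verify asymmetry that the abstract above uses to define P and NP, the following minimal Python sketch contrasts polynomial-time verification of a certificate with brute-force solving for Subset Sum, a standard NP-complete problem. The example, names and numbers are illustrative assumptions, not material from the cited chapter.

```python
# Illustrative sketch (not from the chapter above): the asymmetry between
# solving and verifying that characterizes NP, using Subset Sum as an
# NP-complete example. Verifying a proposed certificate takes linear time,
# whereas the brute-force solver enumerates all 2^n subsets.

from itertools import combinations

def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time check: is `certificate` drawn from `numbers`
    (respecting multiplicities) and does it sum to `target`?"""
    pool = list(numbers)
    for x in certificate:
        if x in pool:
            pool.remove(x)
        else:
            return False
    return sum(certificate) == target

def solve_subset_sum(numbers, target):
    """Exponential-time brute force: try every subset until one works."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

if __name__ == "__main__":
    nums, target = [3, 34, 4, 12, 5, 2], 9
    cert = solve_subset_sum(nums, target)               # slow in general
    print(cert, verify_subset_sum(nums, target, cert))  # fast to check
```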
Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity: for years, AI experts and non-experts alike have fretted over (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI, specifically of the machine learning type that's able to take on new information and rewrite its code accordingly, will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance from Jeopardy-winning IBM machines to the massive AI language model GPT-3 is taking humanity one step closer to an existential threat. We're literally building our soon-to-be-sentient successors. Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.
We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, that AI should be designed with self-respect and with the freedom to explore values other than those we might impose. We are especially concerned about the temptation to create human-grade AI pre-installed with the desire to cheerfully sacrifice itself for its creators' benefit.
What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been raised and debated, and several ethical principles and guides have been suggested as governance policies for the tech industry. However, should this be the role of AI Ethics? To serve as a soft and ambiguous version of the law? In this article I would like to promote a more Cyberpunk way of doing AI Ethics, with a more anarchic approach to governance. In this study, I will seek to expose some of the deficits of the underlying power structures of our society, and suggest that AI governance be subject to public opinion, so that 'good AI' can become 'good AI for all'.
The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of "epistemic double standards", and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a more fruitful exploration of this question will involve a different comparison class. We routinely treat judgments made by highly trained experts in specialized fields as fair or well-grounded even though, by the nature of the expert/layperson division of epistemic labor, an expert will not be able to provide an explanation of the reasoning behind these judgments that makes sense to most other people. Regardless, laypeople are thought to be acting reasonably, and ethically, in deferring to the judgments of experts that concern their areas of specialization. I suggest that we reframe our question regarding the appropriate standards of transparency in AI as one that asks when, why, and to what degree it would be ethical to accept opacity in AI. I argue that our epistemic relation to certain opaque AI technology may be relevantly similar to the layperson's epistemic relation to the expert in certain respects, such that the successful expert/layperson division of epistemic labor can serve as a blueprint for the ethical use of opaque AI.
We should not be creating conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates.
An AI winter may be defined as the stage when technology, business, and the media come to terms with what AI can or cannot really do as a technology without exaggeration. Through discussion of previous AI winters, this paper examines the hype cycle (which by turn characterises AI as a social panacea or a nightmare of apocalyptic proportions) and argues that AI should be treated as a normal technology, neither as a miracle nor as a plague, but rather as one of the many solutions that human ingenuity has managed to devise.
Andy Clark and David Chalmers's theory of extended mind can be reevaluated in today's world to include computational and Artificial Intelligence (AI) technology. This paper argues that AI can be an extension of the human mind, and that if we agree that AI can have mind, it too can be extended. It goes on to explore the example of Ganbreeder, an image-making AI which utilizes human input to direct behavior. Ganbreeder represents one way in which AI extended mind could be achieved. The argument of this paper is that AI can utilize human input as a social extension of mind, allowing AI to access the external world that it would not otherwise be able to access.
Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity's arch-foe: The Cognitive Scientists -- AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are stopped, humanity -- you and I -- will soon be nothing but numbers and algorithms locked away on magnetic tape.
An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers' position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and AI audit frameworks. We highlight the responsibilities of funding bodies to ensure investment is channelled towards trustworthy and safe AI systems and provide case studies as to how other ethical funding principles are managed. We offer a first sight of two proposals for funding bodies to consider regarding procedures they can employ. The first proposal is for the inclusion of a 'Trustworthy AI Statement' section in the grant application form and offers an example of the associated guidance. The second proposal outlines the wider management requirements of a funding body for the ethical review and monitoring of funded projects to ensure adherence to the proposed ethical strategies in the applicant's Trustworthy AI Statement. The anticipated outcome of such proposals being employed would be to create a 'stop and think' section during the project planning and application procedure, requiring applicants to implement the methods for the ethically aligned design of AI. In essence it asks funders to send the message "if you want the money, then build trustworthy AI!".
Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a systematic treatment in the literature: when such algorithms are used in life-changing decisions, they can obstruct us from effectively shaping our lives according to our goals and preferences, thus undermining our autonomy. I argue that this concern deserves closer attention as it furnishes the call for transparency in algorithmic decision-making with both new tools and new challenges.
In this paper I intend to extend the disclosure of the cover-up of the lifeworld formulated by Husserl to the use of disruptive technologies of the 4th Industrial Revolution, more precisely the use of artificial intelligence (AI) on social platforms. First, I present the concept of lifeworld that remains consistent throughout Husserl's works; then, I present its cover-up derived from the mathematization of nature and the break with the telos, which originated the crisis of European humanity. Second, I show how further layers of cover-up of the lifeworld overlap with the 3rd and 4th Industrial Revolutions, and, in more detail, the cover-up of and detachment from the lifeworld caused by the use of AI, with its negative effects, individually and collectively. Third, I present the Husserlian proposal of a return to the lifeworld through a radical epoché. I conclude that the solution proposed by Husserl to the crisis is still current and urgent.
One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of their decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for autonomous vehicles, can be designed to minimize the risks of opaque architectures, primarily through an explicit orientation towards designing for the values of explainability and verifiability. In doing so, this research adopts the Value Sensitive Design (VSD) approach as a principled framework for the incorporation of such values within design. VSD is recognized as a potential starting point that offers a systematic way for engineering teams to formally incorporate existing technical solutions within ethical design, while simultaneously remaining pliable to emerging issues and needs. It is concluded that the VSD methodology offers at least a strong enough foundation from which designers can begin to anticipate design needs and formulate salient design flows that can be adapted to the changing ethical landscapes required for utilisation in autonomous vehicles.
Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI's need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. This militarization trend increases global catastrophic risk or even existential risk during AI takeoff, which includes the use of nuclear weapons against rival AIs, blackmail by the threat of creating a global catastrophe, and the consequences of a war between two AIs. As a result, even benevolent AI may evolve into potentially dangerous military AI. The type and intensity of militarization drive depend on the relative speed of the AI takeoff and the number of potential rivals. We show that AI militarization drive and evolution of national defense will merge, as a superintelligence created in the defense environment will have quicker takeoff speeds, but a distorted value system. We conclude with peaceful alternatives.
This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about how to handle several complications, respond to some objections, and extend this novel method to other important moral issues in bioethics and beyond.
On 8th August 2019, Secretary of State for Health and Social Care, Matt Hancock, announced the creation of a £250 million NHS AI Lab. This significant investment is justified by the belief that transforming the UK's National Health Service (NHS) into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions, will offer significant benefit to patients, clinicians, and the overall system. These opportunities are realistic and should not be wasted. However, they may be missed (one may recall the troubled Care.data programme) if the ethical challenges posed by this transformation are not carefully considered from the start, and then addressed thoroughly, systematically, and in a socially participatory way. To deal with this serious risk, the NHS AI Lab should create an Ethics Advisory Board and monitor, analyse, and address the normative and overarching ethical issues that arise at the individual, interpersonal, group, institutional and societal levels in AI for healthcare.
Since the beginning of the 21st century, technologies like neural networks, deep learning and "artificial intelligence" (AI) have gradually entered the artistic realm. We witness the development of systems that aim to assess, evaluate and appreciate artifacts according to artistic and aesthetic criteria or by observing people's preferences. In addition, AI is now used to generate new synthetic artifacts. When a machine paints a Rembrandt, composes a Bach sonata, or completes a Beethoven symphony, we say that this is neither original nor real art, but simply the complex imitation and reproduction of existing products of human culture. We face the old question concerning the nature of creativity: what kind of recombination of ideas, unusual analogies, and conceptual connections are considered the mark of originality? Can AI produce artworks? Could machines reach a point at which we consider them genuinely creative? We also need to investigate the challenges posed by AI-art to the notion of authoriality: who is the author of an artificially generated artifact? An artificial system could be considered just an artist's and a programmer's tool. However, we are also fascinated by the idea of autonomous artificial creativity in the aesthetic domain, as a manifestation of highly intelligent behavior. This paper will try to define some key questions around what it could mean to consider a machine creative or even equipped with artistic intentionality.
In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
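As an editorial illustration of the "simplified model that approximates the true decision criteria" the abstract above describes, the following Python sketch fits a small, human-readable decision tree as a global surrogate for a black-box classifier. It assumes scikit-learn and uses synthetic data; it is not code from the cited paper, only one common way such surrogate models are built.

```python
# Minimal illustrative sketch (assumes scikit-learn; not code from the paper):
# fit a small, readable decision tree to approximate the decisions of a
# black-box model, i.e. a global surrogate in the sense discussed above.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A stand-in "black box": an ensemble whose internals are hard to inspect.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true
# labels, so it approximates the criteria the black box actually applies.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate agrees with the black box on {fidelity:.1%} of inputs")
print(export_text(surrogate))  # readable rules a practitioner can probe
```

A practitioner can answer "what if" questions by tracing an input through the printed rules and changing one feature at a time, which is exactly the "do it yourself kit" use the abstract mentions; the fidelity score indicates how far such answers can be trusted to reflect the black box.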
Mostly philosophers cause trouble. I know because on alternate Thursdays I am one -- and I live in a philosophy department where I watch all of them cause trouble. Everyone in artificial intelligence knows how much trouble philosophers can cause (and in particular, we know how much trouble one philosopher -- John Searle -- has caused). And, we know where they tend to cause it: in knowledge representation and the semantics of data structures. This essay is about a recent case of this sort of thing. One of the take-home messages will be that AI ought to redouble its efforts to understand concepts.
Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, and to date they have all been equally plausible and equally successful. And, just to be explicit, I do not mean "IQ" by "intelligence." I am using "intelligence" in the way AI uses it: as a semi-technical term referring to a general property of all intelligent systems, animal (including human) or machine alike.)
This paper examines an insoluble Cartesian problem for classical AI, namely, how linguistic understanding involves knowledge and awareness of an utterance's meaning, a cognitive process that is irreducible to algorithms. As analyzed, Descartes' view about reason and intelligence has paradoxically encouraged certain classical AI researchers to suppose that linguistic understanding suffices for machine intelligence. Several advocates of the Turing Test, for example, assume that linguistic understanding only comprises computational processes which can be recursively decomposed into algorithmic mechanisms. Against this background, in the first section, I explain Descartes' view about language and mind. To show that Turing bites the bullet with his imitation game, in the second section I analyze this method to assess intelligence. Then, in the third section, I elaborate on Schank and Abelson's Script Applier Mechanism (hereafter SAM), which supposedly casts doubt on Descartes' denial that machines can think. Finally, in the fourth section, I explore a challenge that any algorithmic decomposition of linguistic understanding faces. This challenge, I argue, is the core of the Cartesian problem: knowledge and awareness of meaning require a first-person viewpoint which is irreducible to the decomposition of algorithmic mechanisms.
Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract's ethical AI norms of beneficence, non-maleficence, autonomy, justice, and explicability can feed into national- and organisational-level microsocial contracts. We also outline the role of employees' technology frames in this process. We then use an illustrative example to demonstrate how this multilevel normative background helps to inform the content of individuals' PCs in the context of working with AI technologies.