The article sets out a primitive ontology of the natural world in terms of primitive stuff—that is, stuff that has as such no physical properties at all—but that is not a bare substratum either, being individuated by metrical relations. We focus on quantum physics and employ identity-based Bohmian mechanics to illustrate this view, but point out that it applies all over physics. Properties then enter into the picture exclusively through the role that they play for the dynamics of the primitive stuff. We show that such properties can be local, as well as holistic, and discuss two metaphysical options to conceive them, namely, Humeanism and modal realism in the guise of dispositionalism. Contents: 1 Introduction; 2 Primitive Ontology: Primitive Stuff; 3 The Physics of Matter as Primitive Stuff; 4 The Humean Best System Analysis of the Dynamical Variables; 5 Modal Realism about the Dynamical Variables; 6 Conclusion.
Appealing to imagination for modal justification is very common. But not everyone thinks that all imaginings provide modal justification. Recently, Gregory (2010) and Kung (2010: 620–663) have independently argued that, whereas imaginings with sensory imagery can justify modal beliefs, those without sensory imagery cannot, because of such imaginings’ extreme liberty. In this essay, I defend the general modal epistemological relevance of imagining. I argue, first, that when the objections that target the liberal nature of non-sensory imaginings are adequately developed, those objections also threaten sensory imaginings. So, if we think that non-sensory imaginings are too liberal for modal justification, we should say the same about sensory imaginings. I’ll finish my defense by showing that, when it comes to deciding between saying that all imaginings are prima facie justificatory and saying that no imaginings are justificatory, there is an independent reason for accepting the former.
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including privacy (§2.1), manipulation (§2.2), opacity (§2.3), bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how these play out with current technologies, and finally ask what policy consequences may be drawn.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Various authors debate the question of whether neuroscience is relevant to criminal responsibility. However, a plethora of different techniques and technologies, each with their own abilities and drawbacks, lurks beneath the label “neuroscience”; and in criminal law, responsibility is not a single, unitary and generic concept, but rather a syndrome of at least six different concepts. Consequently, there are at least six different responsibility questions that the criminal law asks—at least one for each responsibility concept—and, I will suggest, a multitude of ways in which the techniques and technologies that comprise neuroscience might help us to address those diverse questions. On my account, then, neuroscience is relevant to criminal responsibility in many ways, but I hesitate to state my position like this because doing so obscures two points which I would rather highlight: one, neither neuroscience nor criminal responsibility is as unified as that; and two, the criminal law asks many different responsibility questions, not just one generic question.
Traditionally, discussions of moral participation – and in particular moral agency – have focused on fully formed human actors. There has been some interest in the development of morality in humans, as well as interest in cultural differences when it comes to moral practices, commitments, and actions. However, until relatively recently, there has been little focus on the possibility that nonhuman animals have any role to play in morality, save being the objects of moral concern. Moreover, when nonhuman cases are considered as evidence of moral agency or subjecthood, there has been an anthropocentric tendency to focus on those behaviors that inform our attributions of moral agency to humans. For example, some argue that the ability to evaluate the principles upon which a moral norm is grounded is required for full moral agency. Certainly, if a moral agent must understand what makes an action right or wrong, then most nonhuman animals would not qualify (and perhaps some humans too). However, if we are to understand the evolution of moral psychology and moral practice, we need to turn our attention to the foundations of full moral agency. We must first pay attention to the more broadly normative practices of other animals. Here, we begin that project by considering evidence that great apes and cetaceans participate in normative practices.
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008: 462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. And in response, critics respectively deny that this harshness is not excessive, or they argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, after criticizing both positions I argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – indeed, cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated; it does not generate its own practical demands.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban on LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, and they do not take responsibility away from humans; in fact, they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: they are probably good news.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. It examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, and the ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence”, held in Oxford, the volume includes contributions from prominent researchers in the field from around the world.
The philosophy of AI has seen some changes, in particular: 1) AI is moving away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
In this article, we critically reflect on the concept of biomimicry. On the basis of an analysis of the concept of biomimicry in the literature and its philosophical origin, we distinguish between a strong and a weaker concept of biomimicry. The strength of the strong concept of biomimicry is that nature is seen as a measure by which to judge the ethical rightness of our technological innovations, but its weakness is found in questionable presuppositions. These presuppositions are addressed by the weaker concept of biomimicry, but at the price that it is no longer possible to distinguish between exploitative and ecological types of technological innovations. We compare both concepts of biomimicry by critically reflecting on four dimensions of the concept of biomimicry: mimesis, technology, nature, and ethics.
In this paper it is argued that existing ‘self-representational’ theories of phenomenal consciousness do not adequately address the problem of higher-order misrepresentation. Drawing a page from the phenomenal concepts literature, a novel self-representational account is introduced that does. This is the quotational theory of phenomenal consciousness, according to which the higher-order component of a conscious state is constituted by the quotational component of a quotational phenomenal concept. According to the quotational theory of consciousness, phenomenal concepts help to account for the very nature of phenomenally conscious states. Thus, the paper integrates two largely distinct explanatory projects in the field of consciousness studies: (i) the project of explaining how we think about our phenomenally conscious states, and (ii) the project of explaining what phenomenally conscious states are in the first place.
T. M. Scanlon’s contractualism is a meta-ethical theory that explains moral motivation and also provides a conception of how to carry out moral deliberation. It supports non-consequentialism – the theory that both consequences and deontological considerations are morally significant in moral deliberation. Regarding the issue of punishment, non-consequentialism allows us to take account of the need for deterrence as well as principles of fairness, justice, and even desert. Moreover, Scanlonian contractualism accounts for permissibility in terms of justifiability: an act is permissible if and only if it can be justified to everyone affected by it. This contractualist thesis explains why it is always impermissible to frame an innocent person, why vicarious punishment is impermissible, and why there has to be a cap on sentences. Contractualism therefore allows us to take deterrence as a goal of punishment without the excess of utilitarianism. The paper further argues that the resulting view is superior to pure retributivism. Finally, it shows why legal excuses and mitigation can be justified in terms of the notion of negative desert. (For access to this paper: http://www.tandfonline.com/eprint/sJ2JBVXkztyFMGmxS7tS/full )
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents:
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
Imagination is a source of evidence for objective modality. It is through this epistemic connection that the idea of modality first gains traction in our intellectual life. A proper theory of modality should be able to explain our imagination’s modal epistemic behaviors. This chapter highlights a peculiar asymmetry regarding epistemic defeat for imagination-based modal justification. Whereas imagination-based evidence for possibility cannot be undermined by information about the causal origin of our imaginings, unimaginability-based evidence for impossibility can be undermined by information about the causal origin of the unimaginability. It is argued that an acceptance of S4 over S5 as the true logic for objective modality best explains this epistemic asymmetry.
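For orientation (a gloss added here; the abstract itself presupposes familiarity with the two systems), the difference between S4 and S5 comes down to their characteristic axioms, stated in standard modal notation:

\[ \textbf{(4)}\quad \Box p \rightarrow \Box\Box p \qquad\qquad \textbf{(5)}\quad \Diamond p \rightarrow \Box\Diamond p \]

S4 validates (4) but not (5); S5 validates both. A standard consequence is that in S5 every true possibility claim is itself necessary, whereas S4 leaves room for possibility facts that could have been otherwise, which is plausibly the structure the chapter's asymmetry argument exploits.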
Some philosophers argue that non-presentist A-theories problematically imply that we cannot know that this moment is present. The problem is usually presented as arising from the combination of the A-theoretic ideology of a privileged presentness and a non-presentist ontology. The goal of this essay is to show that the epistemic problem can be rephrased as a pessimistic induction. By doing so, I will show that the epistemic problem, in fact, stems from the A-theoretic ideology alone. Hence, once it is properly presented, the epistemic problem presents a serious threat to all A-theories.
This thesis considers two allegations which conservatives often level at no-fault systems — namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that although each of these allegations can be satisfactorily met – the responsibility allegation rests on the mistaken assumption that to properly take responsibility for our actions we must accept liability for those losses for which we are causally responsible, and the compensation allegation rests on the mistaken assumption that tort law’s compensatory decisions provide a legitimate norm against which no-fault’s decisions can be compared and criticized – doing so leads in a direction which is at odds with accident law reform advocates’ typical recommendations. On my account, accident law should not just be reformed in line with no-fault’s principles; rather, it should be completely abandoned, since the principles that protect no-fault systems from the conservatives’ two allegations are incompatible with retaining the category of accident law. They entail that no-fault systems are a form of social welfare and not accident law systems, and that under these systems serious deprivation – and to a lesser extent causal responsibility – should be conditions of eligibility to claim benefits.
In cognitive science, the concept of dissociation has been central to the functional individuation and decomposition of cognitive systems. Setting aside debates about the legitimacy of inferring the existence of dissociable systems from ‘behavioural’ dissociation data, the main idea behind the dissociation approach is that two cognitive systems are dissociable, and thus viewed as distinct, if each can be damaged, or impaired, without affecting the other system’s functions. In this article, I propose a notion of functional independence that does not require dissociability, and describe an approach to the functional decomposition and modelling of cognitive systems that complements the dissociation approach. I show that highly integrated cognitive and neurocognitive systems can be decomposed into non-dissociable but functionally independent components, and argue that this approach can provide a general account of cognitive specialization in terms of a stable structure–function relationship. Contents: 1 Introduction; 2 Functional Independence without Dissociability; 3 FI Systems and Cognitive Architecture; 4 FI Systems and Cognitive Specialization.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and (3) the rare cases of morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
In October 2011, the “2nd European Network for Cognitive Systems, Robotics and Interaction”, EUCogII, held its meeting in Groningen on “Autonomous activity in real-world environments”, organized by Tjeerd Andringa and myself. This is a brief personal report on why we thought autonomy in real-world environments is central for cognitive systems research and what I think I learned about it. The theses that crystallized are that a) autonomy is a relative property and a matter of degree, b) increasing autonomy of an artificial system from its makers and users is a necessary feature of increasingly intelligent systems that can deal with the real world, and c) more such autonomy means less control but at the same time improved interaction with the system.
The theory that all processes in the universe are computational is attractive in its promise to provide an understandable theory of everything. I want to suggest here that this pancomputationalism is not sufficiently clear on which problem it is trying to solve, and how. I propose two interpretations of pancomputationalism as a theory: I) the world is a computer and II) the world can be described as a computer. The first implies a thesis of supervenience of the physical over computation and is thus reduced ad absurdum. The second is underdetermined by the world, and thus equally unsuccessful as a theory. Finally, I suggest that pancomputationalism as metaphor can be useful. At the Paderborn workshop in 2008, this paper was presented as a commentary to the relevant paper by Gordana Dodig-Crnkovic, “Info-Computationalism and Philosophical Aspects of Research in Information Sciences”.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
In the thirties, Martin Heidegger was heavily involved with the work of Ernst Jünger (1895-1998). He says that he is indebted to Jünger for the ‘enduring stimulus’ provided by his descriptions. The question is: what exactly could this enduring stimulus be? Several interpreters have examined this question, but the recent publication of lectures and annotations from the thirties allows us to follow Heidegger’s confrontation with Jünger more precisely.

According to Heidegger, the main theme of his philosophical thinking in the thirties was the overcoming of the metaphysics of the will to power. But whereas he seems quite revolutionary in heralding ‘another beginning’ of philosophy at the beginning of the thirties, he later realized that his own revolutionary vocabulary was itself influenced by the will to power. In his later work, one of the main issues is the releasement from the wilful way of philosophical thinking. My hypothesis is that Jünger has this importance for Heidegger in the thirties because the confrontation with Jünger’s way of thinking showed him that the other beginning of philosophy presupposes the irrevocable releasement of willing and a gelassen or non-willing way of philosophical thinking.

In this article, we test this hypothesis against the recently published lectures, annotations and unpublished notes from the thirties. After a brief explanation of Jünger’s diagnosis of modernity (§1), we consider Heidegger’s reception of the work of Jünger in the thirties (§2). He not only sees that Jünger belongs to Nietzsche’s metaphysics of the will to power, but also shows the modern-metaphysical character of Jünger’s way of thinking. In section three, we focus on Heidegger’s confrontation with Jünger in relation to the consummation of modernity. According to Heidegger, Jünger is not only the end of modern metaphysics, but also the perishing (Verendung) of this end, the oblivion of this end in the will to power of representation. In section four, we focus on the real controversy between Jünger and Heidegger: the releasement of willing and the necessity of a radically other beginning of philosophical thinking.
Cognition is commonly taken to be computational manipulation of representations. These representations are assumed to be digital, but it is not usually specified what that means and what relevance it has for the theory. I propose a specification for being a digital state in a digital system, especially a digital computational system. The specification shows that identification of digital states requires functional directedness, either for someone or for the system of which the state is a part; in the case of digital representations, the state must be a token of a representational type, where the function of the type is to represent. [An earlier version of this paper was discussed in the web-conference “Interdisciplines”: https://web.archive.org/web/20100221125700/http://www.interdisciplines.org/adaptation/papers/7 ]
Developments in artificial intelligence (AI) are exciting. But where is this journey heading? I present an analysis according to which exponential growth in computing speed and data has been the decisive factor in progress so far. I then explain under which assumptions this growth will continue to enable progress: 1) intelligence is one-dimensional and measurable, 2) cognitive science is not needed for AI, 3) computation is sufficient for cognition, 4) current techniques and architectures are sufficiently scalable, 5) Technological Readiness Levels (TRL) prove to be feasible. These assumptions will turn out to be dubious.
Invited papers from PT-AI 2011:
- Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence
- Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
- Hubert L. Dreyfus: A History of First Step Fallacies
- Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic?
- J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels
- Oron Shagrir: Computation, Implementation, Cognition
This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some of the more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. Contents:
1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2 Steve Omohundro, Autonomous Technology and the Greater Human Good
3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future
4 Ted Goertzel, The Path to More General Artificial Intelligence
5 Miles Brundage, Limitations and Risks of Machine Ethics
6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9 András Kornai, Bounding the Impact of AGI
10 Anders Sandberg, Ethics and Impact of Brain Emulations
11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and cognitive science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
Skeptics argue that the acquisition of knowledge is impossible given the standing possibility of error. We present the limiting convergence strategy for responding to skepticism and discuss the relationship between conceivable error and an agent’s knowledge in the limit. We argue that the skeptic must demonstrate that agents are operating with a bad method or are in an epistemically cursed world. Such demonstration involves a significant step beyond conceivability and commits the skeptic to potentially convergent inquiry.
Egalitarians must address two questions: i. What should there be an equality of, which concerns the currency of the ‘equalisandum’; and ii. How should this thing be allocated to achieve the so-called equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the said equalisandum will ‘track responsibility’ — responsibility will be tracked in the sense that only we will be responsible for the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved until the prior effects of luck in constitution and circumstance are first eliminated. For instance, people can choose badly because their choice-making capacity was compromised due to a lack of intelligence (i.e. due to constitutional bad luck), or because only bad options were open to them (i.e. due to circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) might be that resources should be allocated so as to reflect people’s choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck, and on this account choice preservation and luck elimination are two complementary aims of the egalitarian ideal. Nevertheless, it is one thing to say that luck’s effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome, and so it was precisely for this purpose that in 1981 Ronald Dworkin developed the ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck’s effects. However, recently Daniel Markovits has cast doubt over Dworkin’s estimates of the amount of redistribution that would be required, by pointing out flaws with his understanding of how the hypothetical insurance market would function. Nevertheless, Markovits patched it up and used this patched-up version of Dworkin’s HIMAD together with his own version of the TS argument to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably though, on Markovits’ account, once the HIMAD is patched up and properly understood, the TS argument will also allegedly show that the two aims of egalitarianism are not necessarily complementary, but rather that they can actually compete with one another. According to his own ‘equal-agent’ egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former then the latter will need to be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits’ critique of Dworkin is spot on, but I also think that his own positive thesis — and hence his conclusion about how much redistribution there ought to be in an egalitarian society — is flawed.
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society — this section will be largely expository in content. Markovits’ critique of Dworkin will then be outlined in Section II, as will be his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin’s and Markovits’ estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims.
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
Empathy is a topic of continuous debate in the nursing literature. Many argue that empathy is indispensable to effective nursing practice. Yet others argue that nurses should rather rely on sympathy, compassion, or consolation. However, a more troubling disagreement underlies these debates: there is no consensus on how to define empathy. This lack of consensus is the primary obstacle to a constructive debate over the role and import of empathy in nursing practice. The solution to this problem seems obvious: nurses need to reach a consensus on the meaning and definition of empathy. But this is easier said than done. Concept analyses, for instance, reveal a profound ambiguity and heterogeneity of the concept of empathy across the nursing literature. Since the term “empathy” is used to refer to a range of perceptual, cognitive, emotional, and behavioral phenomena, the presence of a conceptual ambiguity and heterogeneity is hardly surprising. Our proposal is simple. To move forward, we need to return to the basics. We should develop the concept from the ground up. That is, we should begin by identifying and describing the most fundamental form of empathic experience. Once we identify the most fundamental form of empathy, we will be able to distinguish among the more derivative experiences and behaviors that are addressed by the same name and, ideally, determine the place of these phenomena in the field of nursing. The aim of this article is, consequently, to lay the groundwork for a more coherent concept of empathy and thereby for a more fruitful debate over the role of empathy in nursing. In Part 1, we outline the history of the concept of empathy within nursing, explain why nurses are sometimes wary of adapting concepts from other disciplines, and argue that nurses should distinguish between adapting concepts from applied disciplines and from more theoretical disciplines. In Part 2, we show that the distinction between emotional and cognitive empathy—borrowed from theoretical psychology—has been a major factor in nurses’ negative attitudes toward emotional empathy. We argue, however, that both concepts fail to capture the most fundamental form of empathy. In Part 3, we draw on and present some of the seminal studies of empathy found in the work of phenomenological philosophers, including Max Scheler, Edmund Husserl, and Edith Stein. In Part 4, we outline how their understanding of empathy may facilitate current debates about empathy’s role in nursing.
I see four symbol grounding problems: 1) How can a purely computational mind acquire meaningful symbols? 2) How can we get a computational robot to show the right linguistic behavior? These two are misleading. I suggest an 'easy' and a 'hard' problem: 3) How can we explain and re-produce the behavioral ability and function of meaning in artificial computational agents? 4) How does physics give rise to meaning?
This paper investigates the view that digital hypercomputing is a good reason for rejection or re-interpretation of the Church-Turing thesis. After suggesting that such re-interpretation is historically problematic and often involves attack on a straw man (the ‘maximality thesis’), it discusses proposals for digital hypercomputing with Zeno-machines, i.e. computing machines that compute an infinite number of computing steps in finite time, thus performing supertasks. It argues that effective computing with Zeno-machines falls into a dilemma: either they are specified such that they do not have output states, or they are specified such that they do have output states, but involve contradiction. Repairs through non-effective methods or special rules for semi-decidable problems are sought, but not found. The paper concludes that hypercomputing supertasks are impossible in the actual world and thus no reason for rejection of the Church-Turing thesis in its traditional interpretation.
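To make the supertask idea concrete (a standard construction, added here for illustration rather than taken from the paper's own text): a Zeno-machine executes its n-th computation step in 2^{-n} time units, so infinitely many steps fit into a finite interval:

\[ \sum_{n=1}^{\infty} 2^{-n} = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 1 \]

After one time unit every step has been executed; the dilemma above concerns what output state, if any, the machine can be in at that limit point.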
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
An agent’s knowledge of her own intentional actions (agential knowledge) is non-observational. Yet intentional actions typically consist of happenings external to the agent. A theory is needed to explain how agents are warranted in forming such beliefs independently of observation. This paper first argues for three desirable features of an ideal theory of agential knowledge. After showing that no existing theory possesses all three, a novel theory that does is presented. According to this theory, agential knowledge is the same kind of knowledge as the Kripkean contingent a priori: both are knowledge justified a priori by stipulation.
It is a truism that there are erroneous convictions in criminal trials. Recent legal findings show that 3.3% to 5% of all convictions in capital rape-murder cases in the U.S. in the 1980s were erroneous convictions. Given this fact, what normative conclusions can be drawn? First, the article argues that a moderately revised version of Scanlon’s contractualism offers an attractive moral vision that is different from utilitarianism or other consequentialist theories, or from purely deontological theories. It then brings this version of Scanlonian contractualism to bear on the question of whether the death penalty, life imprisonment, long sentences, or shorter sentences can be justified, given that there is a non-negligible rate of erroneous conviction. Contractualism holds that a permissible act must be justifiable to everyone affected by it. Yet, given the non-negligible rate of erroneous conviction, it is unjustifiable to mete out the death penalty, because such a punishment is not justifiable to innocent murder convicts. It is further argued that life imprisonment will probably not be justified (unless lowering the sentence to a long sentence would drastically increase the murder rate). However, whether this line of argument could be extended further would depend on the impact of lowering sentences on communal security.
Quantities like mass and temperature are properties that come in degrees. And those degrees (e.g. 5 kg) are properties that are called the magnitudes of the quantities. Some philosophers (e.g., Byrne 2003; Byrne & Hilbert 2003; Schroer 2010) talk about magnitudes of phenomenal qualities as if some of our phenomenal qualities are quantities. The goal of this essay is to explore the anti-physicalist implication of this apparently innocent way of conceptualizing phenomenal quantities. I will first argue for a metaphysical thesis about the nature of magnitudes based on Yablo’s proportionality requirement of causation. Then, I will show that, if some phenomenal qualities are indeed quantities, there can be no demonstrative concepts about some of our phenomenal feelings. That presents a significant restriction on the way physicalists can account for the epistemic gap between the phenomenal and the physical. I’ll illustrate the restriction by showing how it rules out a popular physicalist response to the Knowledge Argument.
In “Two Dogmas”, Quine indicates that Carnap’s Aufbau fails “in principle” to reduce our knowledge of the external world to sense data. This is because in projecting the sensory material to reconstruct the physical world, Carnap gives up the use of operating rules and switches to a procedure informed by general principles. This procedure falls short of providing an eliminative translation for the connective “is at”, which is necessary for the reduction. In dissecting Quine’s objection, I argue that Quine has at best proven the claim that the use of general principles essentially fails the task of radical reductionism. However, in order to establish the conclusion that the Aufbau fails in principle, Quine needs to further vindicate two other claims. They are: first, a switch from operating rules to general principles is necessary; second, the set of general principles Carnap adopts is the best alternative. By disambiguating the notion of “explicit definition” and examining the concept of definability in the Aufbau, I explore the possibility of justifying these two claims that Quine overlooks in his objection. The result suggests that Quine’s objection stands in tension with his radical reductionist reading of the Aufbau.
Floridi and Taddeo propose a condition of “zero semantic commitment” for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels’ very different competing suggestion, I suggest that we need to re-think what the problem is and what role the ‘goals’ in a system play in formulating the problem. On the basis of a proper understanding of computing, I come to the conclusion that the only sensible grounding problem is how we can explain and re-produce the behavioral ability and function of meaning in artificial computational agents.
In this article, we discuss the modularity of the emotions. In a general methodological section, we discuss the empirical basis for the postulation of modularity. Then we discuss how certain modules – the emotions in particular – decompose into distinct anatomical and functional parts.
May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the re-distribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots’ design and manufacture.
There is much discussion about whether the human mind is a computer, whether the human brain could be emulated on a computer, and whether all physical entities are computers (pancomputationalism). These discussions, and others, require criteria for what is digital. I propose that a state is digital if and only if it is a token of a type that serves a particular function – typically a representational function for the system. This proposal is made on a syntactic level, assuming three levels of description (physical, syntactic, semantic). It suggests that being digital is a matter of discovery, or rather a matter of how we wish to describe the world, if a functional description can be assumed. Given the criterion provided and the necessary empirical research, we should be in a position to decide, for a given system (e.g. the human brain), whether it is a digital system and can thus be reproduced in a different digital system (since digital systems allow multiple realization).
In this paper I offer an alternative phenomenological account of depression as consisting of a degradation of the degree to which one is situated in and attuned to the world. This account contrasts with recent accounts of depression offered by Matthew Ratcliffe and others. Ratcliffe develops an account in which depression is understood in terms of deep moods, or existential feelings, such as guilt or hopelessness. Such moods are capable of limiting the kinds of significance and meaning that one can come across in the world. I argue that Ratcliffe’s account is unnecessarily constrained, making sense of the experience of depression by appealing only to changes in the mode of human existence. Drawing on Merleau-Ponty’s critique of traditional transcendental phenomenology, I show that many cases of severe psychiatric disorders are best understood as changes in the very structure of human existence, rather than changes in the mode of human existence. Working in this vein, I argue that we can make better sense of many first-person reports of the experience of depression by appealing to a loss or degradation of the degree to which one is situated in and attuned to the world, rather than attempting to make sense of depression as a particular mode of being situated and attuned. Finally, I argue that drawing distinctions between disorders of structure and mode will allow us to improve upon the currently heterogeneous categories of disorder offered in the DSM-5.
Amongst philosophers and cognitive scientists, modularity remains a popular choice for an architecture of the human mind, primarily because of the supposed explanatory value of this approach. Modular architectures can vary both with respect to the strength of the notion of modularity and the scope of the modularity of mind. We propose a dilemma for modular architectures, no matter how these architectures vary along these two dimensions. First, if a modular architecture commits to the informational encapsulation of modules, as is the case for modularity theories of perception, then modules are on this account impenetrable. However, we argue that there are genuine cases of the cognitive penetrability of perception and that these cases challenge any strong, encapsulated modular architecture of perception. Second, many recent massive modularity theories weaken the strength of the notion of module, while broadening the scope of modularity. These theories do not require any robust informational encapsulation, and thus avoid the incompatibility with cognitive penetrability. However, the weakened commitment to informational encapsulation greatly weakens the explanatory force of the theory and, ultimately, is conceptually at odds with the core of modularity.
The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, consciousness, humankind, life, and so on – and at the same time it has contributed substantially to answering these questions. There is thus a substantial tradition of work, both on AI by philosophers and on theory within AI itself. The volume contains papers by Bostrom, Dreyfus, Gomila, O'Regan and Shagrir.
The paper discusses the extended mind thesis with a view to the notions of “agent” and of “mind”, while helping to clarify the relation between “embodiment” and the “extended mind”. I will suggest that the extended mind thesis constitutes a reductio ad absurdum of the notion of ‘mind’; the consequence of the extended mind debate should be to drop the notion of the mind altogether – rather than entering the discussion of how extended it is.