Traditionally, discussions of moral participation – and in particular moral agency – have focused on fully formed human actors. There has been some interest in the development of morality in humans, as well as interest in cultural differences when it comes to moral practices, commitments, and actions. However, until relatively recently, there has been little focus on the possibility that nonhuman animals have any role to play in morality, save being the objects of moral concern. Moreover, when nonhuman cases are considered as evidence of moral agency or subjecthood, there has been an anthropocentric tendency to focus on those behaviors that inform our attributions of moral agency to humans. For example, some argue that the ability to evaluate the principles upon which a moral norm is grounded is required for full moral agency. Certainly, if a moral agent must understand what makes an action right or wrong, then most nonhuman animals would not qualify (and perhaps some humans too). However, if we are to understand the evolution of moral psychology and moral practice, we need to turn our attention to the foundations of full moral agency. We must first pay attention to the more broadly normative practices of other animals. Here, we begin that project by considering evidence that great apes and cetaceans participate in normative practices.
Values-based practice (VBP), developed as a partner theory to evidence-based medicine (EBM), takes into explicit consideration patients’ and clinicians’ values, preferences, concerns and expectations during the clinical encounter in order to make decisions about proper interventions. VBP takes seriously the importance of life narratives, as well as how such narratives fundamentally shape patients’ and clinicians’ values. It also helps to explain difficulties in the clinical encounter as conflicts of values. While we believe that VBP adds an important dimension to the clinician’s reasoning and decision-making procedures, we argue that it ignores the degree to which values can shift and change, especially in the case of psychiatric disorders. VBP does this in three respects. First, it does not appropriately engage with the fact that a person’s values can change dramatically in light of major life events. Second, it does not acknowledge certain changes in the way people value, or in their modes of valuing, that occur in cases of severe psychiatric disorder. And third, it does not acknowledge the fact that certain disorders can even alter the degree to which one is capable of valuing anything at all. We believe that ignoring such changes limits the degree to which VBP can be effectively applied to clinical treatment and care. We conclude by considering a number of possible remedies to this issue, including the use of proxies and written statements of value generated through interviews and discussions between patient and clinician.
Many of our most important goals require months or even years of effort to achieve, and some never get achieved at all. As social psychologists have lately emphasized, success in pursuing such goals requires the capacity for perseverance, or "grit." Philosophers have had little to say about grit, however, insofar as it differs from more familiar notions of willpower or continence. This leaves us ill-equipped to assess the social and moral implications of promoting grit. We propose that grit has an important epistemic component, in that failures of perseverance are often caused by a significant loss of confidence that one will succeed if one continues to try. Correspondingly, successful exercises of grit often involve a kind of epistemic resilience in the face of failure, injury, rejection, and other setbacks that constitute genuine evidence that success is not forthcoming. Given this, we discuss whether and to what extent displays of grit can be epistemically as well as practically rational. We conclude that they can be (although many are not), and that the rationality of grit will depend partly on features of the context the agent normally finds herself in. In particular, grit-friendly norms of deliberation might be irrational to use in contexts of severe material scarcity or oppression.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
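A minimal sketch of the aggregation step such a survey involves, assuming invented data: each hypothetical expert names the year by which they assign a 50% chance to high-level machine intelligence, and the summary statistic is the median of those years. The `responses_50pct` list below is illustrative only, not the survey's data.

```python
from statistics import median

# Hypothetical responses: the year by which each expert assigns a
# 50% chance to high-level machine intelligence (invented numbers).
responses_50pct = [2035, 2040, 2042, 2045, 2048, 2050, 2060, 2075, 2090]

# Surveys of this kind report medians, which are robust to a few
# extreme forecasts in either direction.
print("median year for a one-in-two chance:", median(responses_50pct))
```

The design choice worth noting is the median itself: unlike a mean, it is unaffected by a handful of experts answering "never" or "2300".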
This is a contribution to the symposium on Herman Cappelen’s book Fixing Language. Cappelen proposes a metasemantic framework—the “Austerity Framework”—within which to understand the general phenomenon of conceptual engineering. The proposed framework is austere in the sense that it makes no reference to concepts. Conceptual engineering is then given a “worldly” construal according to which conceptual engineering is a process that operates on the world. I argue, contra Cappelen, that an adequate theory of conceptual engineering must make reference to concepts. This is because concepts are required to account for topic continuity, a phenomenon which lies at the heart of projects in conceptual engineering. I argue that Cappelen’s own account of topic continuity is inadequate as a result of the austerity of his metasemantic framework, and that his worldly construal of conceptual engineering is untenable.
Words change meaning over time. Some meaning shift is accompanied by a corresponding change in subject matter; some meaning shift is not. In this paper I argue that an account of linguistic meaning can accommodate the first kind of case, but that a theory of concepts is required to accommodate the second. Where there is stability of subject matter through linguistic change, it is concepts that provide the stability. The stability provided by concepts allows for genuine disagreement and ameliorative change in the context of conceptual engineering.
Suppose some person 'A' sets out to accomplish a difficult, long-term goal such as writing a passable Ph.D. thesis. What should you believe about whether A will succeed? The default answer is that you should believe whatever the total accessible evidence concerning A's abilities, circumstances, capacity for self-discipline, and so forth supports. But could it be that what you should believe depends in part on the relationship you have with A? We argue that it does, in the case where A is yourself. The capacity for "grit" involves a kind of epistemic resilience in the face of evidence suggesting that one might fail, and this makes it rational to respond to the relevant evidence differently when you are the agent in question. We then explore whether similar arguments extend to the case of "believing in" our significant others -- our friends, lovers, family members, colleagues, patients, and students.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies, and finally ask what policy consequences may be drawn.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban on LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings; they do not take responsibility away from humans; in fact, they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: they are probably good news.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes contributions from prominent researchers in the field from around the world.
Conceptual engineering is to be explained by appeal to the externalist distinction between concepts and conceptions. If concepts are determined by non-conceptual relations to objective properties rather than by associated conceptions (whether individual or communal), then topic preservation through semantic change will be possible. The requisite level of objectivity is guaranteed by the possibility of collective error and does not depend on a stronger level of objectivity, such as mind-independence or independence from linguistic or social practice more generally. This means that the requisite level of objectivity is exhibited not only by natural kinds, but also by a wide range of philosophical kinds, social kinds and artefactual kinds. The alternative externalist accounts of conceptual engineering offered by Herman Cappelen and Derek Ball fall back into a kind of descriptivism which is antithetical to externalism and fails to recognise this basic level of objectivity.
The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
In this article, we critically reflect on the concept of biomimicry. On the basis of an analysis of the concept of biomimicry in the literature and its philosophical origin, we distinguish between a strong and a weaker concept of biomimicry. The strength of the strong concept of biomimicry is that nature is seen as a measure by which to judge the ethical rightness of our technological innovations, but its weakness is found in questionable presuppositions. These presuppositions are addressed by the weaker concept of biomimicry, but at the price that it is no longer possible to distinguish between exploitative and ecological types of technological innovations. We compare both concepts of biomimicry by critically reflecting on four dimensions of the concept of biomimicry: mimesis, technology, nature, and ethics.
In this paper it is argued that existing ‘self-representational’ theories of phenomenal consciousness do not adequately address the problem of higher-order misrepresentation. Taking a page from the phenomenal concepts literature, a novel self-representational account is introduced that does. This is the quotational theory of phenomenal consciousness, according to which the higher-order component of a conscious state is constituted by the quotational component of a quotational phenomenal concept. According to the quotational theory of consciousness, phenomenal concepts help to account for the very nature of phenomenally conscious states. Thus, the paper integrates two largely distinct explanatory projects in the field of consciousness studies: (i) the project of explaining how we think about our phenomenally conscious states, and (ii) the project of explaining what phenomenally conscious states are in the first place.
Various authors debate the question of whether neuroscience is relevant to criminal responsibility. However, a plethora of different techniques and technologies, each with their own abilities and drawbacks, lurks beneath the label “neuroscience”; and in criminal law, responsibility is not a single, unitary and generic concept, but rather a syndrome of at least six different concepts. Consequently, there are at least six different responsibility questions that the criminal law asks—at least one for each responsibility concept—and, I will suggest, a multitude of ways in which the techniques and technologies that comprise neuroscience might help us to address those diverse questions. On my account, neuroscience is relevant to criminal responsibility in many ways, but I hesitate to state my position like this because doing so obscures two points which I would rather highlight: one, neither neuroscience nor criminal responsibility is as unified as that; and two, the criminal law asks many different responsibility questions, not just one generic question.
Juhani Yli-Vakkuri has argued that the Twin Earth thought experiments offered in favour of semantic externalism can be replaced by a straightforward deductive argument from premisses widely accepted by both internalists and externalists alike. The deductive argument depends, however, on premisses that, on standard formulations of internalism, cannot be satisfied by a single belief simultaneously. It does not, therefore, constitute a proof of externalism. The aim of this article is to explain why.
This paper explores the position that moral enhancement interventions could be medically indicated in cases where they provide a remedy for a lack of empathy, when such a deficit is considered pathological. In order to argue this claim, the question as to whether a deficit of empathy could be considered pathological is examined, taking into account the difficulty of defining illness and disorder generally, and especially in the case of mental health. Following this, psychopathy and a fictionalised mental disorder are explored with a view to considering moral enhancement techniques as possible treatments for both conditions. At this juncture, having asserted and defended the position that moral enhancement interventions could, under certain circumstances, be considered medically indicated, the paper goes on to briefly explore some of the consequences of this assertion. First, it is acknowledged that this broadening of diagnostic criteria in light of new interventions could fall foul of claims of medicalisation. It is then briefly noted that considering moral enhancement technologies to be akin to therapies in certain circumstances could lead to ethical and legal consequences and questions, such as those regarding regulation, access, and even consent.
In a series of recent articles, Robin Jeshion has developed a theory of singular thought which she calls ‘cognitivism’. According to Jeshion, cognitivism offers a middle path between acquaintance theories—which she takes to impose too strong a requirement on singular thought—and semantic instrumentalism—which she takes to impose too weak a requirement. In this article, I raise a series of concerns about Jeshion's theory, and suggest that the relevant data can be accommodated by a version of acquaintance theory that distinguishes unsuccessful thoughts of singular form from successful singular thoughts, and in addition allows for ‘trace-based’ acquaintance.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
When we define something as a crime, we generally thereby criminalize the attempt to commit that crime. However, it is a vexing puzzle to specify what must be the case in order for a criminal attempt to have occurred, given that the results element of the crime fails to come about. I argue that the philosophy of action can assist the criminal law in clarifying what kinds of events are properly categorized as criminal attempts. A natural thought is that this project should take the form of specifying what it is in general to attempt or try to perform an action, and then to define criminal attempts as attempts to commit crimes. Focusing on Gideon Yaffe's resourceful work in Attempts (Oxford University Press, 2010) as an example of this strategy, I argue that it results in a view that is overly inclusive: one will count as trying to commit a crime even in the far remote preparatory stages that we in fact have good reason not to criminalize. I offer an alternative proposal to distinguish between mere preparations and genuine attempts that has its basis not in trying, but doing: a criminal attempt is underway once what the agent is doing is a crime. Working out the details of this schema turns out to have important implications for action theory. A recently burgeoning view known as Naive Action Theory holds that all action can be explained by appeal to some further thing that the agent is doing, and that the same explanatory nexus is at work even when we appeal to what the agent is intending, trying, or preparing to do -- these notions do explanatory work because they too refer to actions that are in progress, albeit in their infancy. If this is right, then the notion of 'doing' will also be too inclusive for the purposes of the criminal law. I argue that we should draw the reverse conclusion: the distinctions between pure intending, trying, preparing, and doing serve an important purpose in the criminal law, and this fact lends support to the view that they are genuine metaphysical and explanatory distinctions.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents:
- Vincent C. Müller, Risks of general artificial intelligence (pp. 297-301)
- Steve Omohundro, Autonomous technology and the greater human good (pp. 303-315)
- Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, The errors, insights and lessons of famous AI predictions – and what they mean for the future (pp. 317-342)
- Ted Goertzel, The path to more general artificial intelligence (pp. 343-354)
- Miles Brundage, Limitations and risks of machine ethics (pp. 355-372)
- Roman V. Yampolskiy, Utility function security in artificially intelligent agents (pp. 373-389)
- Ben Goertzel, GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement (pp. 391-403)
- Alexey Potapov & Sergey Rodionov, Universal empathy and ethical bias for artificial general intelligence (pp. 405-416)
- András Kornai, Bounding the impact of AGI (pp. 417-438)
- Anders Sandberg, Ethics of brain emulations (pp. 439-457)
In this paper, I draw on a recent account of perceptual knowledge according to which knowledge is contrastive. I extend the contrastive account of perceptual knowledge to yield a contrastive account of self-knowledge. Along the way, I develop a contrastive account of the propositional attitudes (beliefs, desires, regrets and so on) and suggest that a contrastive account of the propositional attitudes implies an anti-individualist account of propositional attitude concepts (the concepts of belief, desire, regret, and so on).
This article argues, contra Derrida, that Foucault does not essentialize or precomprehend the meaning of life or bio- in his writings on biopolitics. Instead, Foucault problematizes life and provokes genealogical questions about the meaning of modernity more broadly. In The Order of Things, the 1974-75 lecture course at the Collège de France, and Herculine Barbin, the monster is an important figure of the uncertain shape of modernity and its entangled problems (life, sex, madness, criminality, etc.). Engaging Foucault’s monsters, I show that the problematization of life is far from a “desire for a threshold,” à la Derrida. It is a spur to interrogating and critiquing thresholds, a fraught question mark where we have “something to do.” As Foucault puts it in “The Lives of Infamous Men,” it is an ambiguous frontier where beings lived and died, and they appear to us “because of an encounter with power which, in striking down a life and turning it to ashes, makes it emerge, like a flash [...].
In cognitive science, the concept of dissociation has been central to the functional individuation and decomposition of cognitive systems. Setting aside debates about the legitimacy of inferring the existence of dissociable systems from ‘behavioural’ dissociation data, the main idea behind the dissociation approach is that two cognitive systems are dissociable, and thus viewed as distinct, if each can be damaged, or impaired, without affecting the other system’s functions. In this article, I propose a notion of functional independence that does not require dissociability, and describe an approach to the functional decomposition and modelling of cognitive systems that complements the dissociation approach. I show that highly integrated cognitive and neurocognitive systems can be decomposed into non-dissociable but functionally independent components, and argue that this approach can provide a general account of cognitive specialization in terms of a stable structure–function relationship. Contents: 1. Introduction; 2. Functional Independence without Dissociability; 3. FI Systems and Cognitive Architecture; 4. FI Systems and Cognitive Specialization.
In this paper, I argue that content externalism and privileged access are compatible, but that one can, in a sense, have privileged access to the world. The supposedly absurd conclusion should be embraced.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
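Since 'reservoir computing' is the one case the abstract counts as morphological computation proper, a minimal echo state network sketch may help fix the idea: a fixed random dynamical system, standing in for an exploitable body morphology, is merely driven by an input and read out, and only a linear readout is trained. The sizes, task (reproducing a delayed input) and parameter values below are illustrative assumptions, not the authors' case studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T, delay = 100, 1000, 3

# Fixed random reservoir: its weights are never trained, only scaled
# once so the dynamics are stable (spectral radius < 1).
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0, 0.5, (n_res, 1))

u = rng.uniform(-1, 1, T)        # input signal
target = np.roll(u, delay)       # task: echo the input, delayed

# Drive the reservoir and record its states.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])
    states[t] = x

# Only the linear readout is fitted (ridge regression); the
# "body" itself is never modified.
washout = 50
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)

pred = states @ W_out
print("readout MSE:", np.mean((pred[washout:] - y) ** 2))
```

The point of the design mirrors the paper's distinction: all adaptation sits in the readout, while the reservoir's untrained dynamics do the computational heavy lifting.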
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008:462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
In October 2011, the “2nd European Network for Cognitive Systems, Robotics and Interaction”, EUCogII, held its meeting in Groningen on “Autonomous activity in real-world environments”, organized by Tjeerd Andringa and myself. This is a brief personal report on why we thought autonomy in real-world environments is central for cognitive systems research and what I think I learned about it. The theses that crystallized are that a) autonomy is a relative property and a matter of degree, b) increasing autonomy of an artificial system from its makers and users is a necessary feature of increasingly intelligent systems that can deal with the real world, and c) more such autonomy means less control but at the same time improved interaction with the system.
In this paper I argue, first, that a contrastive account of self-knowledge and the propositional attitudes entails an anti-individualist account of propositional attitude concepts; second, that the resulting account provides a solution to the McKinsey paradox; and third, that the account has the resources to explain why certain anti-skeptical arguments fail.
The theory that all processes in the universe are computational is attractive in its promise to provide an understandable theory of everything. I want to suggest here that this pancomputationalism is not sufficiently clear on which problem it is trying to solve, and how. I propose two interpretations of pancomputationalism as a theory: I) the world is a computer and II) the world can be described as a computer. The first implies a thesis of supervenience of the physical over computation and is thus reduced ad absurdum. The second is underdetermined by the world, and thus equally unsuccessful as a theory. Finally, I suggest that pancomputationalism as metaphor can be useful. At the Paderborn workshop in 2008, this paper was presented as a commentary on the relevant paper by Gordana Dodig-Crnkovic, “Info-Computationalism and Philosophical Aspects of Research in Information Sciences”.
I reconsider the concept of dignity in several ways in this article. My primary aim is to move dignity in a more relational direction, drawing on care ethics to do so. After analyzing the power and perils of dignity and tracing its rhetorical, academic, and historical influence, I discuss three interventions that care ethics can make into the dignity discourse. The first intervention involves an understanding of the ways in which care can be dignifying. The second intervention examines whether the capacity to care should be considered a distinguishing moral power – as rationality often is – in light of which humans have dignity. In the third intervention, I cast dignity as a fundamentally relational concept and argue that relationality is constitutive not only of dignity but also of the wider enterprise of normativity. I understand relationality as the condition of connection in which all human beings stand with some other human beings. A thought experiment involving the last person on earth helps to reframe the normative significance of human relatedness. Dignity emerges as fundamentally grounded in relationality.
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. And in response, critics respectively deny that this harshness is not excessive, or they argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, after criticizing both positions I also argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – in fact, they cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated, but it does not generate its own practical demands.
Concerns that people would be disinclined to voluntarily undergo moral enhancement have led to suggestions that an incentivised programme should be introduced to encourage participation. This paper argues that, while such measures do not necessarily result in coercion or undue inducement (issues with which one may typically associate the use of incentives in general), the use of incentives for this purpose may present a taboo tradeoff. This is due to empirical research suggesting that those characteristics likely to be affected by moral enhancement are often perceived as fundamental to the self; therefore, any attempt to put a price on such traits would likely be deemed morally unacceptable by those who hold this view. A better approach to address the possible lack of participation may be to instead invest in alternative marketing strategies and remove incentives altogether.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
In the thirties, Martin Heidegger was heavily involved with the work of Ernst Jünger (1895-1998). He says that he is indebted to Jünger for the ‘enduring stimulus’ provided by his descriptions. The question is: what exactly could this enduring stimulus be? Several interpreters have examined this question, but the recent publication of lectures and annotations from the thirties allows us to follow Heidegger’s confrontation with Jünger more precisely.

According to Heidegger, the main theme of his philosophical thinking in the thirties was the overcoming of the metaphysics of the will to power. But whereas he seems quite revolutionary in heralding ‘another beginning’ of philosophy at the beginning of the thirties, he later realized that his own revolutionary vocabulary was itself influenced by the will to power. In his later work, one of the main issues is the releasement from the wilful way of philosophical thinking. My hypothesis is that Jünger has this importance for Heidegger in the thirties because the confrontation with Jünger’s way of thinking showed him that the other beginning of philosophy presupposes the irrevocable releasement of willing and a gelassen or non-willing way of philosophical thinking.

In this article, we test this hypothesis against the recently published lectures, annotations and unpublished notes from the thirties. After a brief explanation of Jünger’s diagnosis of modernity (§1), we consider Heidegger’s reception of the work of Jünger in the thirties (§2). He not only sees that Jünger belongs to Nietzsche’s metaphysics of the will to power, but also shows the modern-metaphysical character of Jünger’s way of thinking. In section three, we focus on Heidegger’s confrontation with Jünger in relation to the consummation of modernity. According to Heidegger, Jünger is not only the end of modern metaphysics, but also the perishing (Verendung) of this end, the oblivion of this end in the will to power of representation. In section four, we focus on the real controversy between Jünger and Heidegger: the releasement of willing and the necessity of a radically other beginning of philosophical thinking.
Cognition is commonly taken to be computational manipulation of representations. These representations are assumed to be digital, but it is not usually specified what that means and what relevance it has for the theory. I propose a specification for being a digital state in a digital system, especially a digital computational system. The specification shows that identification of digital states requires functional directedness, either for someone or for the system of which the state is a part. In the case of digital representations, the state must in addition be a token of a representational type, where the function of the type is to represent. [An earlier version of this paper was discussed in the web-conference "Interdisciplines" https://web.archive.org/web/20100221125700/http://www.interdisciplines.org/adaptation/papers/7 ].
Developments in artificial intelligence (AI) are exciting. But where is this journey headed? I present an analysis according to which exponential growth in computing speed and data has been the decisive factor in progress so far. I then explain the assumptions under which this growth will continue to enable progress: 1) intelligence is one-dimensional and measurable, 2) cognitive science is not needed for AI, 3) computation is sufficient for cognition, 4) current techniques and architectures are sufficiently scalable, 5) Technological Readiness Levels (TRL) prove achievable. These assumptions will turn out to be dubious.
Invited papers from PT-AI 2011:
- Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence
- Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
- Hubert L. Dreyfus: A History of First Step Fallacies
- Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic
- J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels
- Oron Shagrir: Computation, Implementation, Cognition
This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents.
This companion is aimed at specialists and non-specialists in the philosophy of mind and features 13 commissioned research articles on core topics by leading figures in the field. My contribution is on internalism and externalism in the philosophy of mind.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. Contents:
1. Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2. Steve Omohundro, Autonomous Technology and the Greater Human Good
3. Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future
4. Ted Goertzel, The Path to More General Artificial Intelligence
5. Miles Brundage, Limitations and Risks of Machine Ethics
6. Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7. Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8. Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9. András Kornai, Bounding the Impact of AGI
10. Anders Sandberg, Ethics and Impact of Brain Emulations
11. Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12. Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
Skeptics argue that the acquisition of knowledge is impossible given the standing possibility of error. We present the limiting convergence strategy for responding to skepticism and discuss the relationship between conceivable error and an agent’s knowledge in the limit. We argue that the skeptic must demonstrate that agents are operating with a bad method or are in an epistemically cursed world. Such demonstration involves a significant step beyond conceivability and commits the skeptic to potentially convergent inquiry.
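For readers unfamiliar with formal learning theory, a toy illustration of the limiting convergence idea, under invented assumptions (a bounded stream of numbers and the task of bounding it): the learner may be refuted and revise any finite number of times, yet it eventually stabilizes on a correct conjecture, even though error remains conceivable at every stage. This is a sketch of the general notion, not the authors' construction.

```python
# Toy "learner in the limit": conjecture an upper bound on the values
# a data stream will ever produce, revising whenever refuted. If the
# world really is bounded, the conjectures stabilize on a truth after
# finitely many errors. The data stream is an invented example.

def limiting_learner(stream):
    conjecture = 0
    for datum in stream:
        if datum > conjecture:   # conjecture refuted: revise it
            conjecture = datum
        yield conjecture         # current best hypothesis

data = [3, 1, 4, 1, 5, 2, 5, 5, 3]
print(list(limiting_learner(data)))  # -> [3, 3, 4, 4, 5, 5, 5, 5, 5]
```

The skeptical twist the paper targets: at no finite stage can the learner rule out a larger datum arriving later, yet the method is guaranteed to converge unless the method is bad or the world is, in the authors' phrase, epistemically cursed.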
Åsa Maria Wikforss has proposed a response to Burge's thought-experiments in favour of social externalism, one which allows the individualist to maintain that narrow content is truth-conditional without being idiosyncratic. The narrow aim of this paper is to show that Wikforss' argument against social externalism fails, and hence that the individualist position she endorses is inadequate. The more general aim is to attain clarity on the social externalist thesis. Social externalism need not rest, as is typically thought, on the possibility of incomplete linguistic understanding or conceptual error. I identify the unifying principle that underlies the various externalist thought-experiments.
Advances in immunotherapy pave the way for vaccines that target not only infections, but also unhealthy behaviors such as smoking. A nicotine vaccine that eliminates the pleasure associated with smoking could potentially be used to prevent children from adopting this addictive and dangerous behavior. This paper offers an ethical analysis of such vaccines. We argue that it would be permissible for parents to give their child a nicotine vaccine if the following conditions are met: (1) the vaccine is expected to result in a net benefit to each individual vaccinated, (2) the expected harms from the side effects of the vaccine are lower than the non-voluntary harms of smoking, and (3) there are no less manipulative methods available that are as effective at preventing smoking initiation. Finally, we show how the framework developed here could be used to analyze the ethics of other chemical interventions designed to modify children’s behavior.
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
Seneca asserts in Letter 121 that we mature by exercising self-care as we pass through successive psychosomatic “constitutions.” These are babyhood (infantia), childhood (pueritia), adolescence (adulescentia), and young adulthood (iuventus). The self-care described by Seneca is ‘self-affiliation’ (oikeiōsis, conciliatio), the linchpin of the Stoic ethical system, which defines living well as living in harmony with nature, posits that altruism develops from self-interest, and allows that pleasure and pain are indicators of well-being while denying that happiness consists in pleasure and that pain is misery. Augustine divides the narrative of his own development into the stages of babyhood (infantia), childhood (pueritia), adolescence (adulescentia), and young adulthood (iuventus) in the Confessions, a text wherein he claims familiarity with more than a few works of Seneca (Conf. 5.6.11). Furthermore, he had access to Stoic accounts of affiliation not only in Seneca’s Letter 121, but also in Cicero’s On Goals, and in non-extant sources of Stoic ethical theory. After pointing out that Augustine endorsed the notion of self-affiliation outside of the Confessions, I raise the question of whether he also makes the notion of affiliation thematic in his philosophical autobiography. I argue that he does indeed present himself and some of his primary relationships – with his mother and his long-term girlfriend – in terms of personal and social oikeiōsis. In addition, his self-critiques in the early books of the Confessions can be more fully understood if compared to Stoic developmental theory. He depicts himself as failing to progress intellectually, socially, and morally: although he passed through the successive constitutions, becoming physically larger and cognitively capable, he did not mature correctly by the standards of his Stoic sources.
I see four symbol grounding problems: 1) How can a purely computational mind acquire meaningful symbols? 2) How can we get a computational robot to show the right linguistic behavior? These two are misleading. I suggest an 'easy' and a 'hard' problem instead: 3) How can we explain and reproduce the behavioral ability and function of meaning in artificial computational agents? 4) How does physics give rise to meaning?