For millennia people have speculated about the nature of time—without success. Time plays a role in all processes and events studied by all the disciplines. It is reported here that the existence of time is a direct consequence of the existence of space. Space exists, and it continues to exist. Space is there, and it continues to be there. Space exists as place, the three-dimensional place that matter can occupy. The three-dimensional extension of spatial-place is measured with a ruler of some kind. Place, three-dimensionality, and extension are intrinsic-qualities of space itself. The continuing-existence of space is a direct consequence of the existence of space. The continuing-existence of space is measured with a clock. Spatial continuing-existence also has intrinsic-qualities. A list of the qualities of the continuing-existence of space appears to be the same as a list of the qualities of time. Seventeen qualities of the continuing-existence of space are compared to the equivalent qualities of time. There is an exact match between the qualities of time and the qualities of spatial continuing-existence. The qualities and roles of the continuing-existence of space fulfill all the known qualities and roles of time. Realizing that spatial continuing-existence is time makes time understandable and clarifies the relations of time to matter, change, and process.
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008:462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. In response, critics respectively deny that this harshness is not excessive, or argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, after criticizing both positions I argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – in fact, they cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated; it does not generate its own practical demands.
Egalitarians must address two questions: i. What should there be an equality of, which concerns the currency of the ‘equalisandum’; and ii. How should this thing be allocated to achieve the so-called equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the said equalisandum will ‘track responsibility’ — responsibility will be tracked in the sense that only we will be responsible for the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved until the prior effects of luck in constitution and circumstance are first eliminated. For instance, people can choose badly because their choice-making capacity was compromised due to a lack of intelligence (i.e. due to constitutional bad luck), or because only bad options were open to them (i.e. due to circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) might be that resources should be allocated so as to reflect people’s choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck, and on this account choice preservation and luck elimination are two complementary aims of the egalitarian ideal. Nevertheless, it is one thing to say that luck’s effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome, and it was precisely for this purpose that in 1981 Ronald Dworkin developed the ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck’s effects. However, Daniel Markovits has recently cast doubt over Dworkin’s estimates of the amount of redistribution that would be required, by pointing out flaws in his understanding of how the hypothetical insurance market would function. Nevertheless, Markovits patched up Dworkin’s HIMAD and used this patched-up version, together with his own version of the TS argument, to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably though, on Markovits’ account, once the HIMAD is patched up and properly understood, the TS argument will also allegedly show that the two aims of egalitarianism are not necessarily complementary, but rather that they can actually compete with one another. According to his own ‘equal-agent’ egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former, the latter will need to be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits’ critique of Dworkin is spot on, but I also think that his own positive thesis — and hence his conclusion about how much redistribution there ought to be in an egalitarian society — is flawed.
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society — this section will be largely expository in content. Markovits’ critique of Dworkin will then be outlined in Section II, as will his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin’s and Markovits’ estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims.
In "Torts, Egalitarianism and Distributive Justice" , Tsachi Keren-Paz presents impressingly detailed analysis that bolsters the case in favour of incremental tort law reform. However, although this book's greatest strength is the depth of analysis offered, at the same time supporters of radical law reform proposals may interpret the complexity of the solution that is offered as conclusive proof that tort law can only take adequate account of egalitarian aims at an unacceptably high cost.
This thesis considers two allegations which conservatives often level at no-fault systems — namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that although each of these allegations can be satisfactorily met – the responsibility allegation rests on the mistaken assumption that to properly take responsibility for our actions we must accept liability for those losses for which we are causally responsible; and the compensation allegation rests on the mistaken assumption that tort law’s compensatory decisions provide a legitimate norm against which no-fault’s decisions can be compared and criticized – doing so leads in a direction which is at odds with accident law reform advocates’ typical recommendations. On my account, accident law should not just be reformed in line with no-fault’s principles, but rather it should be completely abandoned, since the principles that protect no-fault systems from the conservatives’ two allegations are incompatible with retaining the category of accident law; they entail that no-fault systems are a form of social welfare and not accident law systems, and that under these systems serious deprivation – and to a lesser extent causal responsibility – should be conditions of eligibility to claim benefits.
It could be argued that tort law is failing, and arguably an example of this failure is the recent public liability and insurance (‘PL&I’) crisis. A number of solutions have been proposed, but ultimately the chosen solution should address whatever we take to be the cause of this failure. On one account, the PL&I crisis is a result of an unwarranted expansion of the scope of tort law. Proponents of this position sometimes argue that the duty of care owed by defendants to plaintiffs has expanded beyond reasonable levels, such that parties who were not really responsible for another’s misfortune are successfully sued, while those who really were to blame get away without taking any responsibility. However, people should take responsibility for their actions, and the only likely consequence of allowing them to shirk it is that they and others will be less likely to exercise due care in the future, since the deterrents of liability and of no compensation for accidentally self-imposed losses will not be there. Others also argue that this expansion is not warranted because it is inappropriately fueled by ‘deep pocket’ considerations rather than by considerations of fault. They argue that the presence of liability insurance sways the judiciary to award damages against defendants since they know that insurers, and not the defendant personally, will pay for it in the end anyway. But although it may seem that no real person has to bear these burdens when they are imposed onto insurers, in reality all of society bears them collectively when insurers are forced to hike their premiums to cover these increasing damages payments. In any case, it seems unfair to force insurers to cover these costs simply because they can afford to do so. If such an expansion is indeed the cause of the PL&I crisis, then a contraction of the scope of tort liability, and a pious return to the fault principle, might remedy the situation. However, it could also be argued that inadequate deterrence is the cause of this crisis. On this account the problem would lie not with the tort system’s continued unwarranted expansion, but in the fact that defendants really have been too careless. If prospective injurers were appropriately deterred from engaging in unnecessarily risky activities, then fewer accidents would ever occur in the first place, and this would reduce the need for litigation at its very source. If we take this to be the cause of tort law’s failure then our solution should aim to improve deterrence. Glen Robinson has argued that improved deterrence could be achieved if plaintiffs were allowed to sue defendants for wrongful exposure to ongoing risks of future harm, even in the absence of currently materialized losses. He argues that, at least in toxic injury type cases, the tortious creation of risk [should be seen as] an appropriate basis of liability, with damages being assessed according to the value of the risk, as an alternative to forcing risk victims to abide the outcome of the event and seek damages only if and when harm materializes. In a sense, Robinson wishes to treat newly-acquired wrongful risks as de-facto wrongful losses, and these are what would be compensated in liability for risk creation (‘LFRC’) cases. Robinson argues that if the extent of damages were fixed to the extent of risk exposure, all detected unreasonable risk creators would be forced to bear the costs of their activities, rather than only those who could be found responsible for another’s injuries ‘on the balance of probabilities’.
The incidence of accidents should decrease as a result of improved deterrence, which would reduce the ‘suing fest’ and so resolve the PL&I crisis. So whilst the first solution involves contracting the scope of tort liability, Robinson’s solution involves an expansion of its scope. However, Robinson acknowledges that LFRC seems prima facie incompatible with current tort principles, which at the least require the presence of plaintiff losses, defendant fault, and causation to be established before making defendants liable for plaintiffs’ compensation. Since losses would be absent in LFRC cases by definition, the first evidentiary requirement would always be frustrated, and in its absence proof of defendant fault and causation would also seem scant. If such an expansion of tort liability were not supported by current tort principles then it would be no better than proposals to switch accident law across to no-fault, since both solutions would require comprehensive legal reform. However, Robinson argues that the above three evidentiary requirements could be met in LFRC cases to the same extent that they are met in other currently accepted cases, and hence that his solution would be preferable to no-fault solutions as it would only require incremental rather than comprehensive legal reform. Although I believe that actual losses should be present before allowing plaintiffs to seek compensation, I will not present a positive argument for this conclusion. My aim in this paper is not to debate the relative merits of Robinson’s solution as compared to no-fault solutions, nor to determine which account of the cause of the PL&I crisis is closer to the truth, but rather to find out whether Robinson’s solution would indeed require less radical legal reform than, for example, proposed no-fault solutions. I will argue that Robinson fails to show that current tort principles would support his proposed solution, and hence that his solution is at best on an even footing with no-fault solutions, since both would require comprehensive legal reform.
Third-party property insurance (TPPI) protects insured drivers who accidentally damage an expensive car from the threat of financial ruin. Perhaps more importantly though, TPPI also protects the victims whose losses might otherwise go uncompensated. Ought responsible drivers therefore take out TPPI? This paper begins by enumerating some reasons why a rational person might believe that they have a moral obligation to take out TPPI. It will be argued that if what is at stake in taking responsibility is the ability to compensate our possible future victims for their losses, then it might initially seem that most people should be thankful for the availability of relatively inexpensive TPPI, because without it they may not have sufficient funds to do the right thing and compensate their victims in the event of an accident. But is the ability to compensate one's victims really what is at stake in taking responsibility? The second part of this paper will critically examine the arguments for the above position, and it will argue that these arguments do not support the conclusion that injurers should compensate their victims for their losses, and hence that drivers need not take out TPPI in order to be responsible. Further still, even if these arguments did support the conclusion that injurers should compensate their victims for their losses, then (perhaps surprisingly) nobody should be allowed to take out TPPI because doing so would frustrate justice.
"Nothing Better Than Death" is a comprehensive analysis of the near-death experiences profiled on the www.near-death.com website. This book provides complete NDE testimonials, summaries of various NDEs, NDE research conclusions, a Question and Answer section, an analysis of NDEs and Christian doctrines, famous quotations about life and death, a NDE bibliography, book notes, a list of NDE resources on the Internet, and a list of NDE support groups associated with IANDS.org - the International Association for Near-Death Studies. The unusual title (...) of this book, "Nothing Better Than Death," was inspired by NDE experiencer Dr. Dianne Morrissey who once said, "If I lived a billion years more, in my body or yours, there's not a single experience on Earth that could ever be as good as being dead. Nothing.". (shrink)
The grand opposition between theories of the mind which is presented in this book will be familiar, in its broad outlines, to many readers. On the one side we have the Cartesians, who understand the mind in terms of representation, causation and the inner life; on the other we have the Wittgensteinians, who understand the mind in terms of activity, normativity and its external embedding in its bodily and social environment. In this book—one of a pair, the second of which has yet to be translated—Vincent Descombes puts up a spirited defence of the Wittgensteinian approach. The Cartesian approach, which he calls ‘mental philosophy’, and which is exemplified most typically in the ‘cognitivism’ of Jerry Fodor, is fundamentally mistaken, he argues, since it underestimates, neglects or ignores both the active and external characteristics of the mind. Instead we should understand the mind in terms of a human being’s participation in a culture or a ‘form of life’, a form of engagement which is structured by norms rather than causal laws. This ‘anthropological holism’ draws not only upon the work of Wittgenstein, but also on Lévi-Strauss, Lacan and, among other things, on the role of fiction in shaping our self-understanding.
George Gemistos Plethon’s work in all its dimensions has attracted many scholars across the ages. One of those scholars was Alexandre Joseph Hidulphe Vincent, a French mathematician and scholar who, in the first and only critical edition of Plethon’s Book of Laws, published by C. Alexandre in the nineteenth century, added three notes on Plethon’s calendar, metrics and music, as he could reconstruct them from the ancient text. Vincent’s calculations were dictated by the main scientific thought of his time, which was Positivism, and through this he contributed to the elucidation of some practical aspects of Plethon’s metaphysics. The results of meticulous calculations of the Plethonian calendar, metrics and musical modes show that the scientific spirit, which started appearing in the last days of Byzantium and during the Renaissance, was not only a revival of Antiquity, but an innovative attempt at explaining and being in accordance with the social demands and the physical reality of the time, both being understood as an extension of the metaphysical order. Vincent’s positivistic approach allows us to consider the impact of Plethon’s system from a postmodern perspective.
The trial took place at Bristol Crown Court, England, United Kingdom for the murder of Joanna Yeates, and Dr Vincent Tabak was the Defendant. The author attended at court for this trial and this paper notes many of the obvious and unsatisfactory legal and procedural points in this trial. Dr Vincent Tabak was convicted of the murder at this trial. Of course the jury were not to know the finer points of law, as the lower court judge did not advise the jury of any such points before they adjourned to make their decision. All but one concluded that Dr Tabak was guilty as charged, although even the charge was vague and all-encompassing, as he was charged with ‘murder between Friday 17 December 2010 and Sunday 19 December 2010’. Defence counsel, a very eminent barrister-at-law, appeared to be cowed by the other side and let them walk all over this case procedurally. For example, prosecution counsel handed him 1200 pages of evidence on the first day of the trial, as if to fulfil disclosure obligations. He was stunned and asked the court for time off, but this was never again discussed.
The lower court judge appeared as if he had already decided on the case. When Defence Counsel asked him if he had before him a copy of a certain document, he flippantly said he had it but had left it on his desk in his room. When Defence Counsel, of international renown, was ready to present his opening speech and cross-examination of his client, Dr Vincent Tabak, this lower court judge dismissed the jury to their lunch break knowing that counsel was going to speak. Court had to be adjourned and the members of the jury were quickly assembled by the court porters as they were leaving the court, which must have hampered their interest in what counsel’s statement said. A photograph of a dead body by a roadside, allegedly Miss Yeates, was shown by projector slide a dozen irrelevant times during the trial period, thus embedding that photograph in the jury’s mind. The computer evidence leaves much to be desired and could, some suggested, easily have been concocted by the prosecution team, abusing the law of metadata evidence. The lady who presented it did not give the court her qualifications or her expertise, if any – and there is much more that beggars belief and undermines trust in the British Criminal Justice System, such as it is, following ‘form over substance’.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
This is the short version, in French translation by Anne Querrien, of the originally jointly authored paper: Müller, Vincent C., ‘Autonomous killer robots are probably good news’, in Ezio Di Nucci and Filippo Santoni de Sio, Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. - - - The article presents a new robot-based weapons system that may soon be deployed. Unlike drones, which are operated at a distance but involve a substantial element of human judgement, these are machines programmed to defend, attack, or kill autonomously. The authors, both philosophers, prefer to warn of their forthcoming proliferation and to seek their regulation by the United Nations; an international campaign instead proposes banning them.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3#
- Risks of general artificial intelligence - Vincent C. Müller - pages 297-301
- Autonomous technology and the greater human good - Steve Omohundro - pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future - Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh - pages 317-342
- The path to more general artificial intelligence - Ted Goertzel - pages 343-354
- Limitations and risks of machine ethics - Miles Brundage - pages 355-372
- Utility function security in artificially intelligent agents - Roman V. Yampolskiy - pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement - Ben Goertzel - pages 391-403
- Universal empathy and ethical bias for artificial general intelligence - Alexey Potapov & Sergey Rodionov - pages 405-416
- Bounding the impact of AGI - András Kornai - pages 417-438
- Ethics of brain emulations - Anders Sandberg - pages 439-457
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
The European Association for Cognitive Systems is the association resulting from the EUCog network, which has been active since 2006. It has ca. 1000 members and is currently chaired by Vincent C. Müller. We ran our annual conference on December 08-09 2016, kindly hosted by the Technical University of Vienna with Markus Vincze as local chair. The invited speakers were David Vernon and Paul F.M.J. Verschure. Out of the 49 submissions for the meeting, we accepted 18 as papers and 25 as posters (after double-blind reviewing). Papers are published here as “full papers” or “short papers” while posters are published here as “short papers” or “abstracts”. Some of the papers presented at the conference will be published in a separate special volume on ‘Cognitive Robot Architectures’ with the journal Cognitive Systems Research. - RC, VCM, YS, MV.
This volume offers a selection of papers from the 2014 conference of the “International Association for Computing and Philosophy” (IACAP) - a conference tradition of 28 years. - - - Table of Contents
0 Vincent C. Müller: Editorial
1) Philosophy of computing
1 Çem Bozsahin: What is a computational constraint?
2 Joe Dewhurst: Computing Mechanisms and Autopoietic Systems
3 Vincenzo Fano, Pierluigi Graziani, Roberto Macrelli and Gino Tarozzi: Are Gandy Machines really local?
4 Doukas Kapantais: A refutation of the Church-Turing thesis according to some interpretation of what the thesis says
5 Paul Schweizer: In What Sense Does the Brain Compute?
2) Philosophy of computer science & discovery
6 Mark Addis, Peter Sozou, Peter C R Lane and Fernand Gobet: Computational Scientific Discovery and Cognitive Science Theories
7 Nicola Angius and Petros Stefaneas: Discovering Empirical Theories of Modular Software Systems. An Algebraic Approach.
8 Selmer Bringsjord, John Licato, Daniel Arista, Naveen Sundar Govindarajulu and Paul Bello: Introducing the Doxastically Centered Approach to Formalizing Relevance Bonds in Conditionals
9 Orly Stettiner: From Silico to Vitro: Computational Models of Complex Biological Systems Reveal Real-world Emergent Phenomena
3) Philosophy of cognition & intelligence
10 Douglas Campbell: Why We Shouldn’t Reason Classically, and the Implications for Artificial Intelligence
11 Stefano Franchi: Cognition as Higher Order Regulation
12 Marcello Guarini: Eliminativisms, Languages of Thought, & the Philosophy of Computational Cognitive Modeling
13 Marcin Miłkowski: A Mechanistic Account of Computational Explanation in Cognitive Science and Computational Neuroscience
14 Alex Tillas: Internal supervision & clustering: A new lesson from ‘old’ findings?
4) Computing & society
15 Vasileios Galanos: Floridi/Flusser: Parallel Lives in Hyper/Posthistory
16 Paul Bello: Machine Ethics and Modal Psychology
17 Marty J. Wolf and Nir Fresco: My Liver Is Broken, Can You Print Me a New One?
18 Marty J. Wolf, Frances Grodzinsky and Keith W. Miller: Robots, Ethics and Software – FOSS vs. Proprietary Licenses
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals.
1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2 Steve Omohundro, Autonomous Technology and the Greater Human Good
3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future
4 Ted Goertzel, The Path to More General Artificial Intelligence
5 Miles Brundage, Limitations and Risks of Machine Ethics
6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9 András Kornai, Bounding the Impact of AGI
10 Anders Sandberg, Ethics and Impact of Brain Emulations
11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically evaluating such proposals.
There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away from humans; in fact they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: They are probably good news.
In this paper it is argued that existing ‘self-representational’ theories of phenomenal consciousness do not adequately address the problem of higher-order misrepresentation. Drawing a page from the phenomenal concepts literature, a novel self-representational account is introduced that does. This is the quotational theory of phenomenal consciousness, according to which the higher-order component of a conscious state is constituted by the quotational component of a quotational phenomenal concept. According to the quotational theory of consciousness, phenomenal concepts help to account for the very nature of phenomenally conscious states. Thus, the paper integrates two largely distinct explanatory projects in the field of consciousness studies: (i) the project of explaining how we think about our phenomenally conscious states, and (ii) the project of explaining what phenomenally conscious states are in the first place.
In the thirties, Martin Heidegger was heavily involved with the work of Ernst Jünger (1895-1998). He says that he is indebted to Jünger for the ‘enduring stimulus’ provided by his descriptions. The question is: what exactly could this enduring stimulus be? Several interpreters have examined this question, but the recent publication of lectures and annotations from the thirties allows us to follow Heidegger’s confrontation with Jünger more precisely.
According to Heidegger, the main theme of his philosophical thinking in the thirties was the overcoming of the metaphysics of the will to power. But whereas he seems to be quite revolutionary in heralding ‘another beginning’ of philosophy in the beginning of the thirties, he later realized that his own revolutionary vocabulary was itself influenced by the will to power. In his later work, one of the main issues is the releasement from the wilful way of philosophical thinking. My hypothesis is that Jünger has this importance for Heidegger in the thirties because the confrontation with Jünger’s way of thinking showed him that the other beginning of philosophy presupposes the irrevocable releasement of willing and a gelassen or non-willing way of philosophical thinking.
In this article, we test this hypothesis in relation to the recently published lectures, annotations and unpublished notes from the thirties. After a brief explanation of Jünger’s diagnosis of modernity (§1), we consider Heidegger’s reception of the work of Jünger in the thirties (§2). He not only sees that Jünger belongs to Nietzsche’s metaphysics of the will to power, but also shows the modern-metaphysical character of Jünger’s way of thinking. In section three, we focus on Heidegger’s confrontation with Jünger in relation to the consummation of modernity. According to Heidegger, Jünger is not only the end of modern metaphysics, but also the perishing (Verendung) of this end, the oblivion of this end in the will to power of representation. In section four, we focus on the real controversy between Jünger and Heidegger: the releasement of willing and the necessity of a radically other beginning of philosophical thinking.
The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
In this chapter, I provide an overview of phenomenological approaches to psychiatric classification. My aim is to encourage and facilitate philosophical debate over the best ways to classify psychiatric disorders. First, I articulate phenomenological critiques of the dominant approach to classification and diagnosis—i.e., the operational approach employed in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and the International Classification of Diseases (ICD-10). Second, I describe the type or typification approach to psychiatric classification, which I distinguish into three different versions: ideal types, essential types, and prototypes. I argue that despite their occasional conflation in the contemporary literature, there are important distinctions among these approaches. Third, I outline a new phenomenological-dimensional approach. I show how this approach, which starts from basic dimensions of human existence, allows us to investigate the full range of psychopathological conditions without accepting the validity of current diagnostic categories.
May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the re-distribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots’ design and manufacture.
“On the Subject Matter of Phenomenological Psychopathology” provides a framework for the phenomenological study of mental disorders. The framework relies on a distinction between (ontological) existentials and (ontic) modes. Existentials are the categorial structures of human existence, such as intentionality, temporality, selfhood, and affective situatedness. Modes are the particular, concrete phenomena that belong to these categorial structures, with each existential having its own set of modes. In the first section, we articulate this distinction by drawing primarily on the work of Martin Heidegger—especially his study of the ontological structure of affective situatedness (Befindlichkeit) and its particular, ontic modes, which he calls moods (Stimmungen). In the second section, we draw on a study of grief to demonstrate how this framework can be used when conducting phenomenological interviews and analyses. In the concluding section, we explain how this framework can guide phenomenological studies across a broad range of existential structures.
Relative blindsight is said to occur when different levels of subjective awareness are obtained at equality of objective performance. Using metacontrast masking, Lau and Passingham reported relative blindsight in normal observers at the shorter of two stimulus-onset asynchronies (SOAs) between target and mask. Experiment 1 replicated the critical asymmetry in subjective awareness at equality of objective performance. We argue that this asymmetry cannot be regarded as evidence for relative blindsight because the observers’ responses were based on different attributes of the stimuli at the two SOAs. With an invariant criterion content, there was no asymmetry in subjective awareness across the two SOAs even though objective performance was the same. Experiment 3 examined the effect of criterion level on estimates of relative blindsight. Collectively, the present results question whether metacontrast masking is a suitable paradigm for establishing relative blindsight. Implications for theories of consciousness are discussed.
Martin Heidegger (1889–1976) is one of the most influential philosophers of the twentieth century. His influence, however, extends beyond philosophy. His account of Dasein, or human existence, permeates the human and social sciences, including nursing, psychiatry, psychology, sociology, anthropology, and artificial intelligence. In this chapter, I outline Heidegger’s influence on psychiatry and psychology, focusing especially on his relationships with the Swiss psychiatrists Ludwig Binswanger and Medard Boss. The first section outlines Heidegger’s early life and work, up to and including the publication of Being and Time, in which he develops his famous concept of being-in-the-world. The second section focuses on Heidegger’s initial influence on psychiatry via Binswanger’s founding of Daseinsanalysis, a Heideggerian approach to psychopathology and psychotherapy. The third section turns to Heidegger’s relationship with Boss, including Heidegger’s rejection of Binswanger’s Daseinsanalysis and his lectures at Boss’s home in Zollikon, Switzerland.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
We discuss at some length evidence from cognitive science suggesting that the representations of objects based on spatiotemporal information and featural information retrieved bottom-up from a visual scene precede representations of objects that include conceptual information. We argue that a distinction can be drawn between representations with conceptual and nonconceptual content. The distinction is based on perceptual mechanisms that retrieve information in conceptually unmediated ways. The representational contents of the states induced by these mechanisms that are available to a type of awareness called phenomenal awareness constitute the phenomenal content of experience. The phenomenal content of perception contains the existence of objects as separate things that persist in time, spatiotemporal information, and information regarding relative spatial relations, motion, surface properties, shape, size, orientation, color, and their functional properties.
While it is often said that robotics should aspire to reproducible and measurable results that allow benchmarking, I argue that a focus on benchmarking can be a hindrance for progress in robotics. The reason is what I call the ‘measure-target confusion’, the confusion between a measure of progress and the target of progress. Progress on a benchmark (the measure) is not identical to scientific or technological progress (the target). In the past, several academic disciplines have been led into pursuing only reproducible and measurable ‘scientific’ results – robotics should be careful not to follow that line, because results that can be benchmarked must be specific and context-dependent, but robotics targets whole complex systems for a broad variety of contexts. While it is extremely valuable to improve benchmarks to reduce the distance between measure and target, the general problem of measuring progress towards more intelligent machines (the target) will not be solved by benchmarks alone; we need a balanced approach with sophisticated benchmarks, plus real-life testing, plus qualitative judgment.
Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or “soft” bodies. These carry substantial potential regarding their exploitation for control—sometimes referred to as “morphological computation”. In this article, we briefly review different notions of computation by physical systems and propose the dynamical systems framework as the most useful in the context of describing and eventually designing the interactions of controllers and bodies. Then, we look at the pros and cons of simple vs. complex bodies, critically reviewing the attractive notion of “soft” bodies automatically taking over control tasks. We address another key dimension of the design space—whether model-based control should be used and to what extent it is feasible to develop faithful models for different morphologies.
In cognitive science, the concept of dissociation has been central to the functional individuation and decomposition of cognitive systems. Setting aside debates about the legitimacy of inferring the existence of dissociable systems from ‘behavioural’ dissociation data, the main idea behind the dissociation approach is that two cognitive systems are dissociable, and thus viewed as distinct, if each can be damaged, or impaired, without affecting the other system’s functions. In this article, I propose a notion of functional independence that does not require dissociability, and describe an approach to the functional decomposition and modelling of cognitive systems that complements the dissociation approach. I show that highly integrated cognitive and neurocognitive systems can be decomposed into non-dissociable but functionally independent components, and argue that this approach can provide a general account of cognitive specialization in terms of a stable structure–function relationship. 1 Introduction - 2 Functional Independence without Dissociability - 3 FI Systems and Cognitive Architecture - 4 FI Systems and Cognitive Specialization.
In this paper I offer an alternative phenomenological account of depression as consisting of a degradation of the degree to which one is situated in and attuned to the world. This account contrasts with recent accounts of depression offered by Matthew Ratcliffe and others. Ratcliffe develops an account in which depression is understood in terms of deep moods, or existential feelings, such as guilt or hopelessness. Such moods are capable of limiting the kinds of significance and meaning that one can come across in the world. I argue that Ratcliffe’s account is unnecessarily constrained, making sense of the experience of depression by appealing only to changes in the mode of human existence. Drawing on Merleau-Ponty’s critique of traditional transcendental phenomenology, I show that many cases of severe psychiatric disorders are best understood as changes in the very structure of human existence, rather than changes in the mode of human existence. Working in this vein, I argue that we can make better sense of many first-person reports of the experience of depression by appealing to a loss or degradation of the degree to which one is situated in and attuned to the world, rather than attempting to make sense of depression as a particular mode of being situated and attuned. Finally, I argue that drawing distinctions between disorders of structure and mode will allow us to improve upon the currently heterogeneous categories of disorder offered in the DSM-5.
Amongst philosophers and cognitive scientists, modularity remains a popular choice for an architecture of the human mind, primarily because of the supposed explanatory value of this approach. Modular architectures can vary both with respect to the strength of the notion of modularity and the scope of the modularity of mind. We propose a dilemma for modular architectures, no matter how these architectures vary along these two dimensions. First, if a modular architecture commits to the informational encapsulation of modules, as is the case for modularity theories of perception, then modules are on this account impenetrable. However, we argue that there are genuine cases of the cognitive penetrability of perception and that these cases challenge any strong, encapsulated modular architecture of perception. Second, many recent massive modularity theories weaken the strength of the notion of module, while broadening the scope of modularity. These theories do not require any robust informational encapsulation, and thus avoid the incompatibility with cognitive penetrability. However, the weakened commitment to informational encapsulation greatly weakens the explanatory force of the theory and, ultimately, is conceptually at odds with the core of modularity.
This is a reply to Vincent Carraud/René Verdon, « Remarques circonspectes sur la mort de Descartes » (published in Revue du dix-septième siècle, n° 265, 2014/4, pp. 719-726, online: http://www.cairn.info/revue-dix-septieme-siecle-2014-4-page-719.htm, containing a critique of my "L'énigme de la mort de Descartes", Paris, 2011). I discuss the fatal illness and the death of Descartes, arguing that Descartes was very probably the victim of arsenical poisoning. The suspected murderer is a French priest, François Viogué, living with Descartes in 1650 at the French embassy in Stockholm, who may have seen in Descartes an obstacle to the hoped-for conversion of queen Christina of Sweden. As against Carraud/Verdon, I stress the medical facts, in particular the fact that Descartes himself seems to have suspected poisoning, since he asked for an emetic shortly before his death.
A response to a declaration in 'Le Monde', 'Luttons efficacement contre les théories du complot' by Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Sebastian Dieguez, Karen Douglas, Nicolas Gauvrit, Anthony Lantian, and Pascal Wagner-Egger, published on June the 6th, 2016.
A reply to Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Sebastian Dieguez, Nicolas Gauvrit, Anthony Lantian, and Pascal Wagner-Egger's piece, '“They” Respond: Comments on Basham et al.’s “Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone”'.