Reichenbach's 'principle of the common cause' is a foundational assumption of some important recent contributions to quantitative social science methodology, but no similar principle appears in econometrics. Reiss (2005) has argued that the principle is necessary for instrumental variables methods in econometrics, and Pearl (2009) builds a framework using it that he proposes as a means of resolving an important methodological dispute among econometricians. We aim to show, through analysis of the main problem instrumental variables methods are used to resolve, that the relationship of the principle to econometric methods is more nuanced than previous work implies, but that the principle may nevertheless make a valuable contribution to the coherence and validity of existing methods.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3#
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals.
1. Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2. Steve Omohundro, Autonomous Technology and the Greater Human Good
3. Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future
4. Ted Goertzel, The Path to More General Artificial Intelligence
5. Miles Brundage, Limitations and Risks of Machine Ethics
6. Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7. Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8. Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9. András Kornai, Bounding the Impact of AGI
10. Anders Sandberg, Ethics and Impact of Brain Emulations
11. Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12. Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
We have a much better understanding of physics than we do of consciousness. I consider ways in which intrinsically mental aspects of fundamental ontology might induce modifications of the known laws of physics, or whether they could be relevant to accounting for consciousness if no such modifications exist. I suggest that our current knowledge of physics should make us skeptical of hypothetical modifications of the known rules, and that without such modifications it’s hard to imagine how intrinsically mental aspects could play a useful explanatory role. Draft version of a paper submitted to Journal of Consciousness Studies, special issue responding to Philip Goff’s Galileo’s Error: Foundations for a New Science of Consciousness.
I defend the extremist position that the fundamental ontology of the world consists of a vector in Hilbert space evolving according to the Schrödinger equation. The laws of physics are determined solely by the energy eigenspectrum of the Hamiltonian. The structure of our observed world, including space and fields living within it, should arise as a higher-level emergent description. I sketch how this might come about, although much work remains to be done.
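For orientation, the two ingredients named here can be written in their standard textbook form (background notation only, not material drawn from the paper): the state vector evolves unitarily, and the Hamiltonian's eigenspectrum fixes the dynamics,

$$ i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle \;=\; \hat H\,|\psi(t)\rangle , \qquad \hat H\,|E_n\rangle \;=\; E_n\,|E_n\rangle , $$

with the claim being that space, fields and observers must all be recovered as emergent structure from $|\psi\rangle$ and the spectrum $\{E_n\}$ alone.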
On one popular view, the general covariance of gravity implies that change is relational in a strong sense, such that all it is for a physical degree of freedom to change is for it to vary with regard to a second physical degree of freedom. At a quantum level, this view of change as relative variation leads to a fundamentally timeless formalism for quantum gravity. Here, we will show how one may avoid this acute ‘problem of time’. Under our view, duration is still regarded as relative, but temporal succession is taken to be absolute. Following our approach, which is presented in more formal terms elsewhere, it is possible to conceive of a genuinely dynamical theory of quantum gravity within which time, in a substantive sense, remains.
1 Introduction
1.1 The problem of time
1.2 Our solution
2 Understanding Symmetry
2.1 Mechanics and representation
2.2 Freedom by degrees
2.3 Voluntary redundancy
3 Understanding Time
3.1 Change and order
3.2 Quantization and succession
4 Time and Gravitation
4.1 The two faces of classical gravity
4.2 Retaining succession in quantum gravity
5 Discussion
5.1 Related arguments
5.2 Concluding remarks
Some modern cosmological models predict the appearance of Boltzmann Brains: observers who randomly fluctuate out of a thermal bath rather than naturally evolving from a low-entropy Big Bang. A theory in which most observers are of the Boltzmann Brain type is generally thought to be unacceptable, although opinions differ. I argue that such theories are indeed unacceptable: the real problem is with fluctuations into observers who are locally identical to ordinary observers, and their existence cannot be swept under the rug by a choice of probability distributions over observers. The issue is not that the existence of such observers is ruled out by data, but that the theories that predict them are cognitively unstable: they cannot simultaneously be true and justifiably believed.
Cosmological models that invoke a multiverse - a collection of unobservable regions of space where conditions are very different from the region around us - are controversial, on the grounds that unobservable phenomena shouldn't play a crucial role in legitimate scientific theories. I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.
Effective Field Theory (EFT) is the successful paradigm underlying modern theoretical physics, including the "Core Theory" of the Standard Model of particle physics plus Einstein's general relativity. I will argue that EFT grants us a unique insight: each EFT model comes with a built-in specification of its domain of applicability. Hence, once a model is tested within some domain (of energies and interaction strengths), we can be confident that it will continue to be accurate within that domain. Currently, the Core Theory has been tested in regimes that include all of the energy scales relevant to the physics of everyday life (biology, chemistry, technology, etc.). Therefore, we have reason to be confident that the laws of physics underlying the phenomena of everyday life are completely known.
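As background for the claim about built-in domains of applicability (this is the standard textbook schema, not a formula from the paper), an effective Lagrangian is organized as an expansion in local operators suppressed by powers of a cutoff scale $\Lambda$:

$$ \mathcal{L}_{\mathrm{eff}} \;=\; \mathcal{L}_{d \le 4} \;+\; \sum_{d>4}\sum_i \frac{c_i^{(d)}}{\Lambda^{d-4}}\,\mathcal{O}_i^{(d)} , $$

so that predictions for processes at energies $E \ll \Lambda$ receive corrections controlled by powers of $E/\Lambda$; the expansion itself indicates where the description stops being trustworthy.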
It seems natural to ask why the universe exists at all. Modern physics suggests that the universe can exist all by itself as a self-contained system, without anything external to create or sustain it. But there might not be an absolute answer to why it exists. I argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts; the universe simply is, without ultimate cause or explanation.
We provide a derivation of the Born Rule in the context of the Everett (Many-Worlds) approach to quantum mechanics. Our argument is based on the idea of self-locating uncertainty: in the period between the wave function branching via decoherence and an observer registering the outcome of the measurement, that observer can know the state of the universe precisely without knowing which branch they are on. We show that there is a uniquely rational way to apportion credence in such cases, which leads directly to the Born Rule. Our analysis generalizes straightforwardly to cases of combined classical and quantum self-locating uncertainty, as in the cosmological multiverse.
It is commonplace in discussions of modern cosmology to assert that the early universe began in a special state. Conventionally, cosmologists characterize this fine-tuning in terms of the horizon and flatness problems. I argue that the fine-tuning is real, but these problems aren't the best way to think about it: causal disconnection of separated regions isn't the real problem, and flatness isn't a problem at all. Fine-tuning is better understood in terms of a measure on the space of trajectories: given reasonable conditions in the late universe, the fraction of cosmological histories that were smooth at early times is incredibly tiny. This discussion helps clarify what is required by a complete theory of cosmological initial conditions.
A crucial part of the contemporary interest in logicism in the philosophy of mathematics resides in its idea that arithmetical knowledge may be based on logical knowledge. Here an implementation of this idea is considered that holds that knowledge of arithmetical principles may be based on two things: (i) knowledge of logical principles and (ii) knowledge that the arithmetical principles are representable in the logical principles. The notions of representation considered here are related to theory-based and structure-based notions of representation from contemporary mathematical logic. It is argued that the theory-based versions of such logicism are either too liberal (the plethora problem) or are committed to intuitively incorrect closure conditions (the consistency problem). Structure-based versions must on the other hand respond to a charge of begging the question (the circularity problem) or explain how one may have a knowledge of structure in advance of a knowledge of axioms (the signature problem). This discussion is significant because it gives us a better idea of what a notion of representation must look like if it is to aid in realizing some of the traditional epistemic aims of logicism in the philosophy of mathematics.
For over 30 years I have argued that we need to construe science as accepting a metaphysical proposition concerning the comprehensibility of the universe. In a recent paper, Fred Muller criticizes this argument, and its implication that Bas van Fraassen’s constructive empiricism is untenable. In the present paper I argue that Muller’s criticisms are not valid. The issue is of some importance, for my argument that science accepts a metaphysical proposition is the first step in a broader argument intended to demonstrate that we need to bring about a revolution in science, and ultimately in academic inquiry as a whole, so that the basic aim becomes wisdom and not just knowledge.
I ask whether what we know about the universe from modern physics and cosmology, including fine-tuning, provides compelling evidence for the existence of God, and answer largely in the negative.
In democracies citizens are supposed to have some control over the general direction of policy. According to a pretheoretical interpretation of this idea, the people have control if elections and other democratic institutions compel officials to do what the people want, or what the majority want. This interpretation of popular control fits uncomfortably with insights from social choice theory; some commentators—Riker, most famously—have argued that these insights should make us abandon the idea of popular rule as traditionally understood. This article presents a formal theory of popular control that responds to the challenge from social choice theory. It makes precise a sense in which majorities may be said to have control even if the majority preference relation has an empty core. And it presents a simple game-theoretic model to illustrate how majorities can exercise control in this specified sense, even when incumbents are engaged in purely re-distributive policymaking and the majority rule core is empty.
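A minimal illustration, in Python, of the social-choice fact the abstract presupposes (a toy Condorcet profile, not the paper's game-theoretic model): with three voters holding cyclic preferences, the majority preference relation has an empty core, since every alternative is beaten by some majority.

```python
from itertools import permutations

# Three voters with cyclic preferences over alternatives A, B, C
# (the classic Condorcet profile). Each list ranks alternatives
# from most to least preferred.
profile = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]
alternatives = ["A", "B", "C"]

def majority_prefers(x, y, profile):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(voter.index(x) < voter.index(y) for voter in profile)
    return wins > len(profile) / 2

# The core: alternatives that no majority wants to replace.
core = [
    x for x in alternatives
    if not any(majority_prefers(y, x, profile) for y in alternatives if y != x)
]

for x, y in permutations(alternatives, 2):
    if majority_prefers(x, y, profile):
        print(f"majority prefers {x} over {y}")
print("majority-rule core:", core)   # -> [] : every alternative is beaten
```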
The identity theory’s rise to prominence in analytic philosophy of mind during the late 1950s and early 1960s is widely seen as a watershed in the development of physicalism, in the sense that whereas logical behaviourism proposed analytic and a priori ascertainable identities between the meanings of mental and physical-behavioural concepts, the identity theory proposed synthetic and a posteriori knowable identities between mental and physical properties. While this watershed does exist, the standard account of it is misleading, as it is founded in erroneous intensional misreadings of the logical positivists’—especially Carnap’s—extensional notions of translation and meaning, as well as misinterpretations of the positivists’ shift from the strong thesis of translation-physicalism to the weaker and more liberal notion of reduction-physicalism that occurred in the Unity of Science programme. After setting the historical record straight, the essay traces the first truly modern identity theory to Schlick’s pre-positivist views circa 1920 and goes on to explore its further development in Feigl, arguing that the fundamental difference between the Schlick-Feigl identity theory and the more familiar and influential Place-Smart-Armstrong identity theory has resurfaced in the deep and seemingly unbridgeable gulf in contemporary philosophy of consciousness between inflationary mentalism and deflationary physicalism.
Political theorists study the attributes of desirable social-moral states of affairs. Schaefer (forthcoming) aims to show that "static political theory" of this kind rests on shaky foundations. His argument revolves around an application of an abstruse mathematical theorem -- Kakutani's fixed point theorem -- to the social-moral domain. We show that Schaefer has misunderstood the implications of this theorem for political theory. Theorists who wish to study the attributes of social-moral states of affairs should carry on, safe in the knowledge that Kakutani's theorem poses no threat to this enterprise.
Several scholars have recently entertained proposals for "epistocracy," a political regime in which decision-making power is concentrated in the hands of a society's most informed and competent citizens. These proposals rest on the claim that we can expect better political outcomes if we exclude incompetent citizens from participating in political decisions, because competent voters are more likely to vote "correctly" than incompetent voters. We develop what we call the objection from selection bias to epistocracy: a procedure that selects voters on the basis of their observed competence---as epistocracy does---will often be "biased" in the sense that competent voters will be, on average, more likely than incompetent voters to possess certain attributes that reduce the probability of voting correctly. Our objection generalizes the "demographic objection" discussed in previous literature, showing that the range of realistic scenarios in which epistocracy is vulnerable to selection bias is substantially broader than previous discussions appreciate. We also show that previous discussions have obscured the force of the threat of selection bias. Since we lack reasons to believe that epistocratic proposals can avoid selection bias, we have no reason to seriously entertain epistocracy as a practical proposal.
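A toy simulation, not the authors' model, of the mechanism the objection describes: a latent attribute (here simply a coin-flip flag) both raises scores on the competence test used to select voters and lowers the chance of voting correctly, so the selected electorate ends up less accurate than the unrestricted one. All names and numbers below are illustrative assumptions.

```python
import random

random.seed(0)

# Toy model (illustrative assumptions only): a latent attribute `a` raises
# a voter's score on the competence test but lowers the chance of voting
# correctly. Screening voters by test score then over-selects the attribute.
N = 100_000
voters = []
for _ in range(N):
    a = random.random() < 0.5                       # attribute present or not
    score = random.gauss(1.0 if a else 0.0, 1.0)    # observed "competence"
    p_correct = 0.4 if a else 0.6                   # attribute hurts accuracy
    voters.append((score, random.random() < p_correct))

def accuracy(group):
    return sum(correct for _, correct in group) / len(group)

selected = [v for v in voters if v[0] > 1.0]        # the "epistocratic" franchise
print(f"accuracy, universal franchise : {accuracy(voters):.3f}")    # ~0.50
print(f"accuracy, selected electorate : {accuracy(selected):.3f}")  # ~0.45, despite higher scores
```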
Republicans hold that people are dominated merely in virtue of others' having unconstrained abilities to frustrate their choices. They argue further that public officials may dominate citizens unless subject to popular control. Critics identify a dilemma. To maintain the possibility of popular control, republicans must attribute to the people an ability to control public officials merely in virtue of the possibility that they might coordinate their actions. But if the possibility of coordination suffices for attributing abilities to groups, then, even in the best case, countless groups will be dominating because it will be possible for their members to coordinate their actions with the aim of frustrating others' choices. We argue the dilemma is apparent only. To make our argument, we present a novel interpretation of the republican concept of domination with the help of a game-theoretic model that clarifies the significance of collective action problems for republican theory.
I want to confront those neuroscientists who believe they can find out something about human freedom by empirical means with a philosophical challenge. My thesis is that the question of human freedom is a metaphysical problem that eludes empirical natural science. To support this, I conduct an extreme thought experiment. I first hypothetically describe the situation of a subject whose natural science justifiably postulates thoroughgoing causal determinism in the brain, and whose Libet experiments turn out fatally for all of its actions (not just for insignificant hand movements). I then show that, in this imagined situation, deterministic neuroscience settles nothing for or against the subject's freedom of decision, because the hypothetical subject's decisions do not take place where its natural science is aiming. If I succeed in constructing such a situation, the question arises whether we ourselves might not be in a similar situation. In my view, this question can be raised but not answered. It belongs to the realm of metaphysical speculation, just like the question of whether our mental life might continue in the hereafter after biological death. The aim of these reflections is to make confusing ways of talking, such as talk of metaphysical or transcendental freedom, more intelligible by means of concrete models. The position that emerges has some points in common with Kant's position on freedom.
Foucault’s disciplinary society and his notion of panopticism are often invoked in discussions regarding electronic surveillance. Against this use of Foucault, I argue that contemporary trends in surveillance technology abstract human bodies from their territorial settings, separating them into a series of discrete flows through what Deleuze terms the surveillant assemblage. The surveillant assemblage and its product, the socially sorted body, aim less at molding, punishing and controlling the body and more at triggering events of in- and exclusion from life opportunities. The meaning of the body as monitored by latest-generation vision technologies based on machine-only surveillance has been transformed. Such a body is no longer disciplinary in the Foucauldian sense. It is a virtual/flesh interface broken into discrete data flows whose comparison and breakage generate bodies as both legible and eligible (or illegible).
According to Russellianism (or Millianism), the two sentences ‘Ralph believes George Eliot is a novelist’ and ‘Ralph believes Mary Ann Evans is a novelist’ cannot diverge in truth-value, since they express the same proposition. The problem for the Russellian (or Millian) is that a puzzle of Kaplan’s seems to show that they can diverge in truth-value and that therefore, since the Russellian holds that they express the same proposition, the Russellian view is contradictory. I argue that the standard Russellian appeal to “ways of thinking” or “propositional guises” is not necessary to solve the puzzle. Rather than this retrograde concession to Fregeanism, appeal should be made to second-order belief. The puzzle is solved, and the contradiction avoided, by maintaining that both sentences are indeed true in addition to the sentence ‘Ralph (mistakenly) believes that he does not believe Mary Ann Evans/George Eliot is a novelist’.
The distinction between synthetic and analytic sentences can be viewed from two perspectives, from inside or from outside: with a view to one's own language or with a view to the language of others. Whoever takes the outside perspective seeks an answer to the descriptive question of which sentences of a foreign language are to be classified as analytic. Whoever takes the inside perspective seeks instead an answer to the following normative question: which sentences may I not give up or reject if I do not want to talk nonsense? The two viewpoints do not exclude one another; they complement and support each other. In his essay "Two dogmas of empiricism" Quine criticizes the distinction between synthetic and analytic sentences from the inside perspective; in his book Word and Object he repeats the same criticism from the outside perspective. In both cases his criticism is directed against the aims of the early Carnap (who wanted to make all expressions of the language of science intelligible with the help of analytically true definitions). I counter both versions of this criticism and propose two definitions of the concept of an analytic sentence that escape Quine's criticism: one from the outside perspective and one from the inside perspective. In a slightly speculative closing reflection, I show how mere conceptual change could be separated from change in substantive convictions, even in the case of scientific revolutions.
Propositionalism is the view that intentional attitudes, such as belief, are relations to propositions. Propositionalists argue that propositionalism follows from the intuitive validity of certain kinds of inferences involving attitude reports. Jubien (2001) argues powerfully against propositions and sketches some interesting positive proposals, based on Russell’s multiple relation theory of judgment, about how to accommodate “propositional phenomena” without appeal to propositions. This paper argues that none of Jubien’s proposals succeeds in accommodating an important range of propositional phenomena, such as the aforementioned validity of attitude-report inferences. It then shows that the notion of a predication act-type, which remains importantly Russellian in spirit, is sufficient to explain the range of propositional phenomena in question, in particular the validity of attitude-report inferences. The paper concludes with a discussion of whether predication act-types are really just propositions by another name.
When Goethe, in his monumental Farbenlehre (1810), tried to attack Newton's theory of light and colours, he employed a method he called the multiplication of experiences (Vermannigfachung der Erfahrungen): he varied different parameters of the Newtonian experiments in order to gain new room for alternatives to Newton's theory. In doing so he achieved real successes. Among other things, he discovered the complement of the Newtonian spectrum (which looks like its colour negative and arises by exchanging the roles of light and darkness). Ingo Nussbaumer has congenially carried Goethe's method further, and in the process he has discovered six additional colour spectra. They arise when, instead of the light/dark contrast (in Newton's and Goethe's experiments), one works with pairs of complementary colours. The new colour spectra look just as differentiated as Newton's and Goethe's spectra, but unlike these they contain the achromatic "colours" black and white. The manifold ordering relations and symmetries that Ingo Nussbaumer has identified in the colour world of the eight spectra in total may help us towards a deeper understanding of the principles of human colour perception. Apart from that, they have a high aesthetic appeal. And they invite speculation about counterfactual courses of the history of science: how well could Newton's theory of light and colours have prevailed in the years after its publication (1672) if the Newtonian spectrum had become known at that time together with its seven counterparts, and had therefore not been the only spectrum with which the theory had to cope?
Newton fans out a ray of white light by prismatic refraction and thereby discovers his full spectrum, which shows the greatest variety of colours and consists of rays of light of different refrangibility (1672). According to Newton's theory, the colour variety of his full spectrum should increase as the receiving screen is moved ever further away from the refracting prism, since bundles of light of similar colour that were previously still mixed should then finally diverge. This prediction is empirically false, as Goethe emphasizes in his Farbenlehre (1810): the final spectrum contains fewer colours; in particular, yellow is missing there. This is an anomaly (in Kuhn's sense) for the Newtonian theory. How can this anomaly be explained? The loss of yellow in the final spectrum must have something to do with the light and colour sensitivity of the human eye. But what? The Bezold-Brücke effect evidently offers no satisfactory explanation of the phenomenon, as I show empirically with the experimental help of Matthias Rang. A second explanatory approach appeals to the red/green spectral surroundings, which evidently make the yellow recede into the background. The colour perception of a coloured figure always has something to do with the surrounding colour, as, for example, in simultaneous contrast. This shows up in experiment: one can recover the yellow in the final spectrum by covering up the other colours of the spectrum. Why that is so remains unexplained to this day. In any case, it cannot be understood as a special case of simultaneous contrast.
Goethe's protest against Newton's theory of light and colours is better than is commonly thought. One can follow this protest in its most important elements without holding Newton to be wrong on the physics. On my interpretation, Goethe uncovered a decisive weakness in Newton's own philosophical assessment of his science: Newton believed that prismatic experiments enabled him to prove that the light of the sun is composed of rays of light of different colours. Goethe shows that this transition from the observable to the theory is more problematic than Newton wanted to admit. When Goethe insists that the transition to the theory is not forced upon us by the phenomena, our own free and creative contribution to theory formation comes to light. And this insight of Goethe's gains a surprising sharpness, because Goethe can make it plausible that all of Newton's decisive prismatic experiments can be reconciled just as well with an alternative theory. If I am right, Goethe was the first philosopher of science to have seen at least one empirically equivalent alternative to a well-established physical theory: in this, Goethe was a good century ahead of his time.
A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we argue that the temptation should be resisted. Applying lessons from this analysis, we demonstrate (using methods similar to those of Zurek's envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In doing so, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.
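For orientation, the rule whose derivation is at issue can be stated in its standard form (background notation only, not the paper's derivation). If decoherence yields branches labelled $i$, with

$$ |\Psi\rangle \;=\; \sum_i a_i\,|i\rangle \otimes |E_i\rangle , $$

where the $|E_i\rangle$ are the orthogonal environment states that make the branching effective, then the Born rule assigns credence

$$ P(i) \;=\; |a_i|^2 $$

to being on branch $i$ (for a normalized state). The key principle cited in the abstract, that changes purely to the environment are irrelevant, amounts to requiring these credences to be unaffected by unitaries acting on the $|E_i\rangle$ alone.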
Athenaeus of Attalia distinguishes two types of exercise or training (γυμνασία) that are required at each stage of life: training of the body and training of the soul. He says that training of the body includes activities like physical exercises, eating, drinking, bathing and sleep. Training of the soul, on the other hand, consists of thinking, education, and emotional regulation (in other words, 'philosophy'). The notion of 'training of the soul' and the contrast between 'bodily' and 'psychic' exercise is common in the Academic and Stoic traditions Athenaeus is drawing from; however, he is the earliest extant medical author to distinguish these kinds of training and to treat them as equally important aspects of regimen. In this paper, I propose some reasons why he found this distinction useful, and I examine how he justified incorporating it into his writings on regimen, namely by attributing Plato's beliefs about regimen to Hippocrates, a strategy Galen would adopt well over a century later.
Quine introduced a famous distinction between the ‘notional’ sense and the ‘relational’ sense of certain attitude verbs. The distinction is both intuitive and sound but is often conflated with another distinction Quine draws between ‘dyadic’ and ‘triadic’ (or higher degree) attitudes. I argue that this conflation is largely responsible for the mistaken view that Quine’s account of attitudes is undermined by the problem of the ‘exportation’ of singular terms within attitude contexts. Quine’s system is also supposed to suffer from the problem of ‘suspended judgement with continued belief’. I argue that this criticism fails to take account of a crucial presupposition of Quine’s about the connection between thought and language. The aim of the paper is to defend the spirit of Quine’s account of attitudes by offering solutions to these two problems.
Most of Kant's examples for synthetic sentences known apriori have been repudiated by modern physics. Is there a way to modify Kantian anti-empiricist epistemology so that it no longer contradicts the results of modern science? Michael Friedman proposes to relativize Kant's notion of the apriori and thus to explain away the apparent contradiction. But how do we have to understand the relative apriori? I define a sentence to be known apriori relative to a given theory if the sentence makes it possible to test objective knowledge claims that belong within the frame of that theory. Are there sentences that can be known apriori relative to every possible theory, i.e., are there any examples of absolute aprioricity? My answer is affirmative. By weakening Kant's original examples (e.g., the principle of causality) we arrive at sentences that must be true if objective empirical knowledge is to be possible at all. The sentence "Not every change is due to pure chance" is an absolute example of synthetic apriori knowledge.
In this article, it is argued that existing democracies might establish popular rule even if Joseph Schumpeter’s notoriously unflattering picture of ordinary citizens is accurate. Some degree of popular rule is in principle compatible with apathetic, ignorant and suggestible citizens, contrary to what Schumpeter and others have maintained. The people may have control over policy, and their control may constitute popular rule, even if citizens lack definite policy opinions and even if their opinions result in part from elites’ efforts to manipulate these opinions. Thus, even a purely descriptive, ‘realist’ account of democracy of the kind that Schumpeter professed to offer may need to concede that there is no democracy without some degree of popular rule.
When are inequalities in political power undemocratic, and why? While some writers condemn any inequalities in political power as a deviation from the ideal of democracy, this view is vulnerable to the simple objection that representative democracies concentrate political power in the hands of elected officials rather than distributing it equally among citizens, but they are no less democratic for it. Building on recent literature that interprets democracy as part of a broader vision of social equality, I argue that concentrations of political power are incompatible with democracy, and with a commitment to social equality more generally, when they consist in some having greater arbitrary power to influence decisions according to their idiosyncratic preferences. A novel account of the relationship between power and social status clarifies the role of social equality in the justification of democracy, including a representative democracy in which public officials have more political power than ordinary citizens.
We study the conservation of energy, or lack thereof, when measurements are performed in quantum mechanics. The expectation value of the Hamiltonian of a system changes when wave functions collapse in accordance with the standard textbook treatment of quantum measurement, but one might imagine that the change in energy is compensated by the measuring apparatus or environment. We show that this is not true; the change in the energy of a state after measurement can be arbitrarily large, independent of the physical measurement process. In Everettian quantum theory, while the expectation value of the Hamiltonian is conserved for the wave function of the universe, it is not constant within individual worlds. It should therefore be possible to experimentally measure violations of conservation of energy, and we suggest an experimental protocol for doing so.
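A small numerical illustration of the textbook-level point (a toy qubit example, not the experimental protocol the paper proposes): projective measurement of an observable that fails to commute with the Hamiltonian changes the energy expectation value, here by the same amount whichever outcome occurs.

```python
import numpy as np

# Toy example: a qubit with Hamiltonian H = sigma_z, prepared in its
# ground state, then subjected to a projective sigma_x measurement.
sz = np.array([[1, 0], [0, -1]], dtype=complex)   # H = sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)

H = sz
ground = np.array([0, 1], dtype=complex)          # eigenstate of H, energy -1

def expect(op, state):
    """Expectation value <state|op|state> for a normalized state vector."""
    return np.real(state.conj() @ op @ state)

print("<H> before measurement:", expect(H, ground))   # -1.0

# Project onto the sigma_x eigenbasis (the "collapse" of the textbook rule).
evals, evecs = np.linalg.eigh(sx)
for val, vec in zip(evals, evecs.T):
    prob = abs(vec.conj() @ ground) ** 2
    print(f"outcome sx={val:+.0f}: prob={prob:.2f}, <H> after = {expect(H, vec):.2f}")
# Each outcome leaves <H> = 0, so the qubit's energy expectation has
# changed by +1 no matter which result occurs.
```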
According to Russellianism, the content of a Russellian thought, in which a person ascribes a monadic property to an object, can be represented as an ordered couple of the object and the property. A consequence of this is that it is not possible for a person to believe that a is F and not to believe b is F, when a=b. Many critics of Russellianism suppose that this is possible and thus that Russellianism is false. Several arguments for this claim are criticized and it is argued that Russellians need not appeal to representational notions in order to defeat them. Contrary to popular opinion, the prospects for a pure Russellianism, a Russellianism without representations, are in fact very good.
Rollback arguments focus on long sequences of actions with identical initial conditions in order to explicate the luck problem that indeterminism poses for libertarian free will theories (i.e. the problem that indeterministic actions appear arbitrary in a free-will undermining way). In this paper, I propose a rollback argument for probability incompatibilism, i.e. for the thesis that free will is incompatible with all world-states being governed by objective probabilities. Unlike the most prominently discussed rollback arguments, this argument explicitly focusses on the ability to act otherwise. It argues that the negligible probability that the relative frequencies in overall rollback patterns are relevantly different indicates that even the ability to act otherwise with regard to individual actions is not free-will enabling. My proposed argument provides probability incompatibilists with a tool to argue against a classical event-causal response to the luck problem, while it can still motivate an agent-causal response to it.
This concluding chapter of _Techno-Fixers: Origins and Implications of Technological Faith_ examines the widespread overconfidence in present-day and proposed 'technological fixes', and provides guidelines - social, ethical and technical - for soberly assessing candidate technological solutions for societal problems.
Starting from the special theory of relativity, it is argued that the structure of an experience is extended over time, making experience dynamic rather than static. The paper describes and explains what is meant by phenomenal parts and outlines opposing positions on the experience of time. Time according to the special theory of relativity is defined, and the possibility of static experience is shown to be implausible, leading to the conclusion that experience is dynamic. Some implications of this for the relationship of phenomenology to the physical world are considered.
Communities of nuclear workers have evolved in distinctive contexts. During the Manhattan Project the UK, USA and Canada collectively developed the first reactors, isotope separation plants and atomic bombs and, in the process, nurtured distinct cadres of specialist workers. Their later workplaces were often inherited from wartime facilities, or built anew at isolated locations. For a decade, nuclear specialists were segregated and cossetted to gestate practical expertise. At Oak Ridge, Tennessee, for example, the informal ‘Clinch College of Nuclear Knowledge’ aimed to industrialise the use of radioactive materials. ‘We were like children in a toy factory’, said its Director: ‘everyone could play the game of designing new nuclear power piles’. His counterpart at Chalk River, Ontario headed a project ‘completely Canadian in every respect’, while the head of the British project chose the remote Dounreay site in northern Scotland because of design uncertainties in the experimental breeder reactor. With the decline of secrecy during the mid-1950s, the hidden specialists lauded as ‘atomic scientists’ gradually became visible as new breeds of engineers, technologists and technicians responsible for nuclear reactors and power plants. Mutated by their different political contexts, occupational categories, labour affiliations, professional representations and popular depictions, their activities were disputed by distinct audiences. This chapter examines the changing identities of nuclear specialists and the significance of their secure sites. Shaped successively by Cold War secrecy, commercial competition and terrorist threats, nuclear energy remained out of site for wider publics and most nuclear specialists alike. The distinctive episodes reveal the changing working experiences of technical workers in late-twentieth and early twenty-first century environments.
Extended Cognition (EC) hypothesizes that there are parts of the world outside the head serving as cognitive vehicles. One criticism of this controversial view is the problem of “cognitive bloat”, which says that EC is too permissive and fails to provide an adequate necessary criterion for cognition. It cannot, for instance, distinguish genuine cognitive vehicles from mere supports (e.g. the Yellow Pages). In response, Andy Clark and Mark Rowlands have independently suggested that genuine cognitive vehicles are distinguished from supports in that the former have been “recruited,” i.e. they are either artifacts or products of evolution. I argue against this proposal. There are counterexamples to the claim that “Teleological” EC is either necessary or sufficient for cognition. Teleological EC conflates different types of scientific projects, and inherits content externalism’s alienation from historically impartial cognitive science.
The Pneumatist school of medicine has the distinction of being the only medical school in antiquity named for a belief about a part of a human being. Unlike the Herophileans or the Asclepiadeans, their name does not pick out the founder of the school. Unlike the Dogmatists, Empiricists, or Methodists, their name does not pick out a specific approach to medicine. Instead, the name picks out a belief: that pneuma is of paramount importance, both for explaining health and disease, and for determining treatments for the healthy and sick. In this paper, we re-examine what our sources say about the pneuma of the Pneumatists in order to understand what these physicians thought it was and how it shaped their views on physiology, diagnosis and treatment.
This paper is about the history of a question in ancient Greek philosophy and medicine: what holds the parts of a whole together? The idea that there is a single cause responsible for cohesion is usually associated with the Stoics. They refer to it as the synectic cause (αἴτιον συνεκτικόν), a term variously translated as ‘cohesive cause,’ ‘containing cause’ or ‘sustaining cause.’ The Stoics, however, are neither the first nor the only thinkers to raise this question or to propose a single answer. Many earlier thinkers offer their own candidates for what actively binds parts together, with differing implications not only for why we are wholes rather than heaps, but also why our bodies inevitably become diseased and fall apart. This paper assembles, up to the time of the Stoics, one part of the history of such a cause: what is called ‘the synechon’ (τὸ συνέχον) – that which holds things together. Starting with our earliest evidence from Anaximenes (sixth century BCE), the paper looks at different candidates and especially the models and metaphors for thinking about causes of cohesion which were proposed by different philosophers and doctors including Empedocles, early Greek doctors, Diogenes of Apollonia, Plato and Aristotle. My goal is to explore why these candidates and models were proposed and how later philosophical objections to them led to changes in how causes of cohesion were understood.
The paper presents a new theory of perceptual demonstrative thought, the property-dependent theory. It argues that the theory is superior to both the object-dependent theory (Evans, McDowell) and the object-independent theory (Burge).
The practice of taking hand-written notes in lectures has recently been rediscovered in the mainstream media because of several studies on its learning efficacy. Students are enjoined to ditch their laptops and return to pen and paper. Such arguments presuppose that notes are taken in order to be revisited after the lecture. Learning is seen to happen only after the event. We argue instead that students’ note-taking is an educational practice worthy in itself as a way to relate to the live event of the lecture. We adopt a phenomenological approach inspired by Vilém Flusser’s phenomenology of gestures, which assumes that a gesture like note-taking is always an event of thinking with media in which a certain freedom is expressed. But Flusser’s description of note-taking focusses on the individual note-taker. What about students’ note-taking in a lecture hall as a collective gesture? Nietzsche considered note-taking ‘mechanical,’ as if students were automatons who mindlessly transcribed a verbal flow, while Benjamin considered it an inaesthetic gesture: at best, boring; at worst, ‘painful to watch.’ In contrast, we argue that the educational potentiality of note-taking, or better, note-making, can be grasped only if we account for its mediaticity, together with but distinct from its political potentiality as a collective mediality. Note-taking enables us to see how collective thinking emerges in the lecture, a kind of thinking that belongs neither to the lecturer nor the student, but emerges in the relation of attention established between the lecturer, students and their object of thought.
The received view in the history of the philosophy of psychology is that the logical positivists—Carnap and Hempel in particular—endorsed the position commonly known as “logical” or “analytical” behaviourism, according to which the relations between psychological statements and the physical-behavioural statements intended to give their meaning are analytic and knowable a priori. This chapter argues that this is sheer legend: most, if not all, such relations were viewed by the logical positivists as synthetic and knowable only a posteriori. It then traces the origins of the legend to the logical positivists’ idiosyncratic extensional or at best weakly intensional use of what are now considered crucially strongly intensional semantic notions, such as “translation,” “meaning” and their cognates, focussing on a particular instance of this latter phenomenon, arguing that a conflation of explicit definition and analyticity may be the chief source of the legend.
Individuals of many animal species are said to have a personality. It has been shown that some individuals are bolder than other individuals of the same species, or more sociable or more aggressive. In this paper, we analyse what it means to say that an animal has a personality. We clarify what an animal personality is, that is, its ontology, and how different personality concepts relate to each other, and we examine how personality traits are identified in biological practice. Our analysis shows that biologists often study specific personality traits, such as boldness, sociability or aggressiveness, rather than personalities in general. We claim that personality traits are best understood as dispositions and that they are operationally defined in terms of certain sets of behaviours, which are studied in specific experimental set-ups. Furthermore, we develop an integrative philosophical account that specifies and formalises three criteria for identifying personality traits, which are used in biological practice. For an individual animal to have a personality trait it must, first, behave differently than others. Second, these behavioural differences must be stable over a certain time, and third, they must be consistent in different contexts.
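A sketch of how the three criteria might be checked on toy data (illustrative only; the variable names, sample sizes and measures are assumptions, not the paper's formalisation): simulated boldness scores for ten individuals, each tested repeatedly in two contexts.

```python
import numpy as np

# Toy data: rows = individuals, columns = repeated tests; one matrix per context.
rng = np.random.default_rng(1)
individual_level = rng.normal(0, 1, size=10)          # stable individual differences
open_field = individual_level[:, None] + rng.normal(0, 0.3, size=(10, 4))
predator_cue = individual_level[:, None] + 0.5 + rng.normal(0, 0.3, size=(10, 4))

# (1) Individuals differ: between-individual variance is non-negligible.
print("between-individual variance:", open_field.mean(axis=1).var())

# (2) Differences are stable over time: early vs. late test scores correlate.
early, late = open_field[:, :2].mean(axis=1), open_field[:, 2:].mean(axis=1)
print("temporal stability (r):", np.corrcoef(early, late)[0, 1])

# (3) Differences are consistent across contexts: individual means in one
# context predict individual means in the other.
print("cross-context consistency (r):",
      np.corrcoef(open_field.mean(axis=1), predator_cue.mean(axis=1))[0, 1])
```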
The existence of object-dependent thoughts has been doubted on the grounds that reference to such thoughts is unnecessary or 'redundant' in the psychological explanation of intentional action. This paper argues to the contrary that reference to object-dependent thoughts is necessary to the proper psychological explanation of intentional action upon objects. Section I sets out the argument for the alleged explanatory redundancy of object-dependent thoughts; an argument which turns on the coherence of an alternative 'dual-component' model of explanation. Section II rebuts this argument by showing the dual-component model to be incoherent precisely because of its exclusion of object-dependent thoughts. Section III concludes with a conjecture about the further possible significance of object-dependent thoughts for the prediction of action.