In ‘Excusing Mistakes of Law’, Gideon Yaffe sets out to ‘vindicate’ the claim ‘that mistakes of law never excuse’ by ‘identifying the truth that is groped for but not grasped by those who assert that ignorance of law is no excuse’. Yaffe does not offer a defence of the claim that mistakes of law never excuse. That claim, Yaffe argues, is false. Yaffe’s article is, rather, an effort to assess what plausible thought might be behind the idea that mistakes of law often should not excuse. (Yaffe is interested in more than just the descriptive claim that in Anglo-American legal jurisdictions mistakes of law routinely do not, in fact, excuse.) More particularly, Yaffe is interested in what plausible normative justification there might be for this asymmetric pattern:

Asymmetry: False beliefs about non-legal facts often excuse, but false beliefs about the law rarely excuse.

Yaffe offers a complex argument in support of Asymmetry. This paper is organised around my reconstruction of Yaffe’s argument. I argue that Yaffe’s argument does not succeed, but that his argument provides a template for an argument that could succeed.
In his recent article in Philosophy and Public Affairs, 'The Paradox of Voting and the Ethics of Political Representation', Alexander A. Guerrero argues that it is rational to vote because each voter should want candidates they support to have the strongest public mandate possible if elected to office, and because every vote contributes to that mandate. The present paper argues that two of Guerrero's premises require correction, and that when those premises are corrected several provocative but compelling conclusions follow about the rationality of voting and the duties of elected officials: (A) Voting is typically rational for the members of a political party’s base; (B) Voting is often (but not always) irrational for “swing” voters (i.e. independent voters who are not affiliated with any political party, as well as “undecided” voters who are considering voting across party lines); and (C) Elected officials have a moral duty to respond to changing levels of popular support once in office, as indicated by properly monitored and corroborated public opinion polls of constituents, functioning more as delegates the lower their level of popular support. Finally, I suggest that the last of these conclusions has wide-ranging implications for political ethics. I illustrate these implications by focusing on the questions -- under debate in the 2016 US Presidential election cycle -- of whether a sitting President has a moral duty to nominate or not nominate a new Supreme Court justice during his or her final year in office, and similarly, whether US Senators have a moral duty to obstruct, or not obstruct, confirmation of the President’s eventual nominee.
I argue that Hubert Dreyfus’ work on embodied coping, the intentional arc, solicitations and the background as well as his anti-representationalism rest on introspection. I denote with ‘introspection’ the methodological malpractice of formulating ontological statements about the conditions of possibility of phenomena merely based on descriptions. In order to illustrate the insufficiencies of Dreyfus’ methodological strategy in particular and introspection in general, I show that Heidegger, to whom Dreyfus constantly refers as the foundation of his own work, derives ontological statements about the conditions of possibility of phenomena not merely from descriptions, but also from analyses. I further show that deriving ontological statements directly from descriptions entails implausible results. I do so by discussing representative cases. Based on these general methodological considerations, I show that Dreyfus’ work on action, skill and understanding is introspective. First, I demonstrate that Dreyfus’ influential claim that rules and representations do not govern skillful actions is the result of introspection, because it is merely founded on the absence of rules and representations in representative descriptions of skillful actions. Second, I show that Dreyfus’ work on embodied coping, the intentional arc, solicitations and the background is also based on introspection. These ontological structures are merely reifications of descriptions and are not further substantiated by analyses.
Can there be grounding without necessitation? Can a fact obtain wholly in virtue of metaphysically more fundamental facts, even though there are possible worlds at which the latter facts obtain but not the former? It is an orthodoxy in recent literature about the nature of grounding, and in first-order philosophical disputes about what grounds what, that the answer is no. I will argue that the correct answer is yes. I present two novel arguments against grounding necessitarianism, and show that grounding contingentism is fully compatible with the various explanatory roles that grounding is widely thought to play.
In this paper, we argue that a distinction ought to be drawn between two ways in which a given world might be logically impossible. First, a world w might be impossible because the laws that hold at w are different from those that hold at some other world (say the actual world). Second, a world w might be impossible because the laws of logic that hold in some world (say the actual world) are violated at w. We develop a novel way of modelling logical possibility that makes room for both kinds of logical impossibility. Doing so has interesting implications for the relationship between logical possibility and other kinds of possibility (for example, metaphysical possibility) and implications for the necessity or contingency of the laws of logic.
Fine is widely thought to have refuted the simple modal account of essence, which takes the essential properties of a thing to be those it cannot exist without exemplifying. Yet, a number of philosophers have suggested resuscitating the simple modal account by appealing to distinctions akin to the distinction Lewis draws between sparse and abundant properties, treating only those in the former class as candidates for essentiality. I argue that ‘sparse modalism’ succumbs to counterexamples similar to those originally posed by Fine, and fails to capture paradigmatic instances of essence involving abundant properties and relations.
This article examines two questions about scientists’ search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? We argue, pace Weisberg and Muldoon (“Epistemic Landscapes and the Division of Cognitive Labor”), that, on the first question, a search strategy that deliberately seeks novel research approaches need not be optimal. On the second question, we argue that they have not shown that epistemic reasons exist for the division of cognitive labor, identifying the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model, showing that one should be skeptical about the benefits of social learning in epistemically complex environments.
A counteridentical is a counterfactual with an identity statement in the antecedent. While counteridenticals generally seem non-trivial, most semantic theories for counterfactuals, when combined with the necessity of identity and distinctness, attribute vacuous truth conditions to such counterfactuals. In light of this, one could try to save the orthodox theories either by appealing to pragmatics or by denying that the antecedents of alleged counteridenticals really contain identity claims. Or one could reject the orthodox theory of counterfactuals in favor of a hyperintensional semantics that accommodates non-trivial counterpossibles. In this paper, I argue that none of these approaches can account for all the peculiar features of counteridenticals. Instead, I propose a modified version of Lewis’s counterpart theory, which rejects the necessity of identity, and show that it can explain all the peculiar features of counteridenticals in a satisfactory way. I conclude by defending the plausibility of contingent identity from objections.
It is commonly claimed that the universality of critical phenomena is explained through particular applications of the renormalization group (RG). This article has three aims: to clarify the structure of the explanation of universality, to discuss the physics of such RG explanations, and to examine the extent to which universality is thus explained. The derivation of critical exponents proceeds via a real-space or a field-theoretic approach to the RG. Building on work by Mainwood, this article argues that these approaches ought to be distinguished: while the field-theoretic approach explains universality, the real-space approach fails to provide an adequate explanation.
In this paper, we develop a novel response to counterfactual scepticism, the thesis that most ordinary counterfactual claims are false. In the process we aim to shed light on the relationship between debates in the philosophy of science and debates concerning the semantics and pragmatics of counterfactuals. We argue that science is concerned with many domains of inquiry, each with its own characteristic entities and regularities; moreover, statements of scientific law often include an implicit ceteris paribus clause that restricts the scope of the associated regularity to circumstances that are ‘fitting’ to the domain in question. This observation reveals a way of responding to scepticism while, at the same time, doing justice both to the role of counterfactuals in science and to the complexities inherent in ordinary counterfactual discourse and reasoning.
Samuel Alexander was one of the first realists of the twentieth century to defend a theory of categories. He thought that the categories are genuinely real and grounded in the intrinsic nature of Space-Time. I present his reduction of the categories in terms of Space-Time, articulate his account of categorial structure and completeness, and offer an interpretation of what he thought the nature of the categories really was. I then argue that his theory of categories has some advantages over competing theories of his day, and finally draw some important lessons that we can learn from his realist yet reductionist theory of categories.
Effective quantum field theories (EFTs) are effective insofar as they apply within a prescribed range of length-scales, but within that range they predict and describe with extremely high accuracy and precision. The effectiveness of EFTs is explained by identifying the features—the scaling behaviour of the parameters—that lead to effectiveness. The explanation relies on distinguishing autonomy with respect to changes in microstates from autonomy with respect to changes in microlaws, and relating these, respectively, to renormalizability and naturalness. It is claimed that the effectiveness of EFTs is a consequence of each theory’s autonomy with respect to microstates rather than its autonomy with respect to microlaws.
Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: that any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.
Much contemporary debate on the nature of mechanisms centers on the issue of modulating negative causes. One type of negative causality, which I refer to as “causation by absence,” appears difficult to incorporate into modern accounts of mechanistic explanation. This paper argues that a recent attempt to resolve this problem, proposed by Benjamin Barros, requires improvement, as it overlooks the fact that not all absences qualify as sources of mechanism failure. I suggest that there are a number of additional types of effects caused by absences that need to be incorporated to account for the diversity of causal connections in the biological sciences. Furthermore, it is argued that recognizing natural variability in mechanisms, such as attenuation, leads to some interesting line-drawing issues for contemporary philosophy of mechanisms.
It is widely held that counterfactuals, unlike attitude ascriptions, preserve the referential transparency of their constituents, i.e., that counterfactuals validate the substitution of identicals when their constituents do. The only putative counterexamples in the literature come from counterpossibles, i.e., counterfactuals with impossible antecedents. Advocates of counterpossibilism, i.e., the view that counterpossibles are not all vacuous, argue that counterpossibles can generate referential opacity. But in order to explain why most substitution inferences into counterfactuals seem valid, counterpossibilists also often maintain that counterfactuals with possible antecedents are transparency‐preserving. I argue that if counterpossibles can generate opacity, then so can ordinary counterfactuals with possible antecedents. Utilizing an analogy between counterfactuals and attitude ascriptions, I provide a counterpossibilist‐friendly explanation for the apparent validity of substitution inferences into counterfactuals. I conclude by suggesting that the debate over counterpossibles is closely tied to questions concerning the extent to which counterfactuals are more like attitude ascriptions and epistemic operators than previously recognized.
Recent discussions of emergence in physics have focussed on the use of limiting relations, and often particularly on singular or asymptotic limits. We discuss a putative example of emergence that does not fit into this narrative: the case of phonons. These quasi-particles have some claim to be emergent, not least because the way in which they relate to the underlying crystal is almost precisely analogous to the way in which quantum particles relate to the underlying quantum field theory. But there is no need to take a limit when moving from a crystal-lattice-based description to the phonon description. Not only does this demonstrate that we can have emergence without limits, but it also provides a way of understanding cases that do involve limits.
In this paper, I will discuss what I will call “skeptical pragmatic invariantism” (SPI) as a potential response to the intuitions we have about scenarios such as the so-called bank cases. SPI, very roughly, is a form of epistemic invariantism that says the following: the subject in the bank cases doesn’t know that the bank will be open; the knowledge ascription in the low standards case nevertheless seems appropriate because it has a true implicature. The goal of this paper is to show that SPI is mistaken. In particular, I will show that SPI is incompatible with reasonable assumptions about how we are aware of the presence of implicatures. Such objections are not new, but extant formulations are wanting for reasons I will point out below. One may worry that refuting SPI is not a worthwhile project, given that this view is an implausible minority position anyway. In response, I will argue that, contrary to common opinion, other familiar objections to SPI fail and, thus, that SPI is a promising position to begin with.
To study the influence of divinity on the cosmos, Alexander uses the notions of ‘fate’ and ‘providence,’ which were common in the philosophy of his time. In this way, he provides an Aristotelian interpretation of the problems related to such concepts. In the context of this discussion, he offers a description of ‘nature’ different from the one that he usually regards as the standard Aristotelian notion of nature, i.e. the intrinsic principle of motion and rest. The newly coined concept is a ‘cosmic’ nature that can be identified with both ‘fate’ and ‘divine power,’ which are the immediate effect of providence upon the world. The paper sets out how the conception of providence defended by Alexander amounts to a rejection of divine care for particulars, since the divinities are provident only for species. Several texts belonging to the Middle Platonic philosophers will convince us that such thinkers (and not directly Aristotle) are the origin of the thesis that would come to be understood as the conventional Aristotelian position, namely that divinity only orders species but not individuals.
Proponents of evidence-based medicine (EBM) have argued convincingly for applying this scientific method to medicine. However, the current methodological framework of the EBM movement has recently been called into question, especially in epidemiology and the philosophy of science. The debate has focused on whether the methodology of randomized controlled trials provides the best evidence available. This paper attempts to shift the focus of the debate by arguing that clinical reasoning involves a patchwork of evidential approaches and that the emphasis on evidence hierarchies of methodology fails to lend credence to the common practice of corroboration in medicine. I argue that the strength of evidence lies in the evidence itself, not in the methodology used to obtain it. Ultimately, when it comes to evaluating the effectiveness of medical interventions, it is the evidence obtained by a methodology, rather than the methodology itself, that should establish the strength of the evidence.
Epistemic invariantism, or invariantism for short, is the position that the proposition expressed by knowledge sentences does not vary with the epistemic standard of the context in which these sentences can be used. At least one of the major challenges for invariantism is to explain our intuitions about scenarios such as the so-called bank cases. These cases elicit intuitions to the effect that the truth-value of knowledge sentences varies with the epistemic standard of the context in which these sentences can be used. In this paper, I will defend invariantism against this challenge by advocating the following, somewhat deflationary account of the bank case intuitions: Readers of the bank cases assign different truth-values to the knowledge claims in the bank cases because they interpret these scenarios such that the epistemic position of the subject in question differs between the high and the low standards case. To substantiate this account, I will argue, first, that the bank cases are underspecified even with respect to features that should uncontroversially be relevant for the epistemic position of the subject in question. Second, I will argue that readers of the bank cases will fill in these features differently in the low and the high standards case. In particular, I will argue that there is a variety of reasons to think that the fact that an error-possibility is mentioned in the high standards case will lead readers to assume that this error-possibility is supposed to be likely in the high standards case.
Jennifer Nagel (2010) has recently proposed a fascinating account of the decreased tendency to attribute knowledge in conversational contexts in which unrealized possibilities of error have been mentioned. Her account appeals to epistemic egocentrism, or what is sometimes called the curse of knowledge, an egocentric bias to attribute our own mental states to other people (and sometimes our own future and past selves). Our aim in this paper is to investigate the empirical merits of Nagel’s hypothesis about the psychology involved in knowledge attribution.
Anscombe claims that whenever a subject is doing something intentionally, this subject knows that they are doing it. This essay defends Anscombe's claim from an influential set of counterexamples, due to Davidson. It argues that Davidson's counterexamples are tacit appeals to an argument, on which knowledge can't be essential to doing something intentionally, because some things that can be done intentionally require knowledge of future successes, and because such knowledge can't ever be guaranteed when someone is doing something intentionally. The essay argues that there are apparently sensible grounds for denying each of these two premises.
The scope of the epoché and the subsequent reduction has always been one of the most disputed areas of phenomenology. This essay reflects on this essential methodological element in light of the problem of access to the world. Is something lost once the reduction has been carried out? The answer will be negative. To this end, the first section presents how there is instead an originary reappropriation of the world through the reduction. The second section reviews Jan Patočka's notion of the world as a pre-given totality. Indeed, both approaches lead toward the conclusion that the reduction opens a renewed access to the world without leaving anything of it out, thus allowing the world to appear qua phenomenon. This means, finally, that the reduction makes it possible to recognize in an originary way our relation to the world, without understanding it as a mere relation between two things.
It seems to be a common and intuitively plausible assumption that conversational implicatures arise only when one of the so-called conversational maxims is violated at the level of what is said. The basic idea behind this thesis is that, unless a maxim is violated at the level of what is said, nothing can trigger the search for an implicature. Thus, implicatures that involve no such violation would not be calculable. This paper defends the view that some conversational implicatures arise even though no conversational maxim is violated at the level of what is said.
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
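The definition sketched in this abstract can be put in a single line. The notation below is introduced here for illustration and is not the paper's own: write K_A for the set of natural numbers that the agent A knows to be codes of computable ordinals, and |n| for the ordinal coded by n.

```latex
% Illustrative notation only: K_A and |n| are defined in the lead-in above.
\mathrm{IntLevel}(A) \;=\; \sup \,\{\, |n| \;:\; n \in K_A \,\}
```

On this reading, the paper's main comparison result says roughly that if agent A knows suitable facts about agent B, then IntLevel(A) > IntLevel(B).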
Modeling mechanisms is central to the biological sciences – for purposes of explanation, prediction, extrapolation, and manipulation. A closer look at the philosophical literature reveals that mechanisms are predominantly modeled in a purely qualitative way. That is, mechanistic models are conceived of as representing how certain entities and activities are spatially and temporally organized so that they bring about the behavior of the mechanism in question. Although this adequately characterizes how mechanisms are represented in biology textbooks, contemporary biological research practice shows the need for quantitative, probabilistic models of mechanisms, too. In this paper we argue that the formal framework of causal graph theory is well-suited to provide us with models of biological mechanisms that incorporate quantitative and probabilistic information. On the basis of an example from contemporary biological practice, namely feedback regulation of fatty acid biosynthesis in Brassica napus, we show that causal graph theoretical models can account for feedback as well as for the multi-level character of mechanisms. However, we do not claim that causal graph theoretical representations of mechanisms are advantageous in all respects and should replace common qualitative models. Rather, we endorse the more balanced view that causal graph theoretical models of mechanisms are useful for some purposes, while being insufficient for others.
Jason Bowers and Meg Wallace have recently argued that those who hold that every individual instantiates a ‘haecceity’ are caught up in a Euthyphro-style dilemma when confronted with familiar cases of fission and fusion. Key to Bowers and Wallace’s dilemma are certain assumptions about the nature of metaphysical explanation and the explanatory commitments of belief in haecceities. However, I argue that the dilemma only arises due to a failure to distinguish between providing a metaphysical explanation of why a fact holds vs. a metaphysical explanation of what it is for a fact to hold. In the process, I also shed light on the explanatory commitments of belief in haecceities.
Society has reached a new rupture in the digital age. Traditional technologies of biopower designed around coercion no longer dominate. Psychopower has manifested, and its implementation has changed the way one understands biopolitics. This discussion note references Byung-Chul Han’s interpretation of modern psychopolitics to investigate whether basic human rights violations are committed by Facebook, Inc.’s product against its users at a psychopolitical level. This analysis finds that Facebook use can lead to international human rights violations, specifically cultural rights, social rights, rights to self-determination, political rights, and the right to health.
In his recent article entitled ‘Can We Believe the Error Theory?’, Bart Streumer argues that it is impossible (for anyone, anywhere) to believe the error theory. This might sound like a problem for the error theory, but Streumer argues that it is not. He argues that the unbelievability of the error theory offers a way for error theorists to respond to several objections commonly made against the view. In this paper, we respond to Streumer’s arguments. In particular, in sections 2-4, we offer several objections to Streumer’s argument for the claim that we cannot believe the error theory. In section 5, we argue that even if Streumer establishes that we cannot believe the error theory, this conclusion is not as helpful for error theorists as he takes it to be.
Recent metaphysics has turned its focus to two notions that are—as well as having a common Aristotelian pedigree—widely thought to be intimately related: grounding and essence. Yet how, exactly, the two are related remains opaque. We develop a unified and uniform account of grounding and essence, one which understands them both in terms of a generalized notion of identity examined in recent work by Fabrice Correia, Cian Dorr, Agustín Rayo, and others. We argue that the account comports with antecedently plausible principles governing grounding, essence, and identity taken individually, and illuminates how the three interact. We also argue that the account compares favorably to an alternative unification of grounding and essence recently proposed by Kit Fine.
I develop a theory of action inspired by a Heideggerian conception of concern, in particular for phenomenologically-inspired Embodied Cognition (Noë 2004; Wheeler 2008; Rietveld 2008; Chemero 2009; Rietveld and Kiverstein 2014). I proceed in three steps. First, I provide an analysis that identifies four central aspects of action and show that phenomenologically-inspired Embodied Cognition does not adequately account for them. Second, I provide a descriptive phenomenological analysis of everyday action and show that concern is the best candidate for an explanation of action. Third, I show that concern, understood as the integration of affect and embodied understanding, allows us to explain the different aspects of action sufficiently.
This article argues that economic crises are incompatible with the realisation of non-domination in capitalist societies. The ineradicable risk that an economic crisis will occur undermines the robust security of the conditions of non-domination for all citizens, not only those who are harmed by a crisis. I begin by demonstrating that the unemployment caused by economic crises violates the egalitarian dimensions of freedom as non-domination. The lack of employment constitutes an exclusion from the social bases of self-respect, and from a practice of mutual social contribution crucial to the intersubjective affirmation of one’s status. While this argument shows that republicans must be concerned about economic crises, I suggest a more powerful argument can be grounded in the republican requirement that freedom must be robust. The systemic risk of economic crisis constitutes a threat to the conditions of free citizenship that cannot be nullified using policy mechanisms. As a result, republicans appear to be faced with the choice of revising their commitments or rejecting the possibility that republican freedom can be robustly secured in capitalist societies.
Despite the frequency of stillbirths, the subsequent implications are overlooked and underappreciated. We present findings from comprehensive, systematic literature reviews, and new analyses of published and unpublished data, to establish the effect of stillbirth on parents, families, health-care providers, and societies worldwide. Data for direct costs of this event are sparse but suggest that a stillbirth needs more resources than a livebirth, both in the perinatal period and in additional surveillance during subsequent pregnancies. Indirect and intangible costs of stillbirth are extensive and are usually met by families alone. This issue is particularly onerous for those with few resources. Negative effects, particularly on parental mental health, might be moderated by empathic attitudes of care providers and tailored interventions. The value of the baby, as well as the associated costs for parents, families, care providers, communities, and society, should be considered to prevent stillbirths and reduce associated morbidity.
Introduction -- What is personal responsibility? -- Ordinary language -- Common conceptions -- What do philosophers mean by responsibility? -- Personally responsible for what? -- What do philosophers think? part I -- Causes -- Capacity -- Control -- Choice versus brute luck -- Second-order attitudes -- Equality of opportunity -- Deservingness -- Reasonableness -- Reciprocity -- Equal shares -- Combining criteria -- What do philosophers think? part II -- Utility -- Self-respect -- Autonomy -- Human flourishing -- Natural duties and special obligations -- A matter of perspectives -- Combining values -- What do politicians think? -- A brief typology -- International comparisons -- Welfare reform -- Healthcare reform -- Rights and responsibilities -- On the responsibilities of politicians -- What do ordinary people think? -- Why ask? -- Attitudes to welfare claimants -- When push comes to shove -- Perceptions and reality -- Further international comparisons -- The trouble with opinion surveys -- Four contemporary issues in focus -- Unemployment -- Health -- Drug abuse -- Personal debt and financial rewards -- So how do we decide? -- Getting the public involved -- Citizens juries -- Answering some potential criticisms.
The willful ignorance doctrine says defendants should sometimes be treated as if they know what they don't. This book provides a careful defense of this method of imputing mental states. Though the doctrine is only partly justified and requires reform, it also demonstrates that the criminal law needs more legal fictions of this kind. The resulting theory of when and why the criminal law can pretend we know what we don't has far-reaching implications for legal practice and reveals a pressing need for change.
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
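The election idea can be sketched in a toy setting. The sketch below is only illustrative: real Legg-Hutter environments are reward-giving computable environments interacting with an agent over histories, whereas here agents are simple policies, environments are finite-horizon reward functions, and all names (`total_reward`, `compare`, etc.) are this sketch's own assumptions, not definitions from the paper.

```python
# Toy sketch: environments "vote" for whichever of two agents they reward more.

def total_reward(agent, env, steps=10):
    """Run `agent` in `env` for a fixed horizon and sum its rewards."""
    state, total = env["start"], 0.0
    for _ in range(steps):
        action = agent(state)
        state, reward = env["step"](state, action)
        total += reward
    return total

def compare(agent_a, agent_b, environments):
    """Each environment votes for the agent it rewards more; ties abstain.
    Returns the signed vote margin: >0 favours agent_a, <0 favours agent_b."""
    margin = 0
    for env in environments:
        ra = total_reward(agent_a, env)
        rb = total_reward(agent_b, env)
        if ra > rb:
            margin += 1
        elif rb > ra:
            margin -= 1
    return margin

# Two trivial agents: one always acts "1", the other always acts "0".
always_one = lambda state: 1
always_zero = lambda state: 0

# Two toy environments: one rewards action 1, the other rewards action 0.
reward_ones = {"start": 0, "step": lambda s, a: (s, float(a))}
reward_zeros = {"start": 0, "step": lambda s, a: (s, 1.0 - a)}

# With both environments voting, the two agents tie; with only one, a winner emerges.
print(compare(always_one, always_zero, [reward_ones, reward_zeros]))  # 0 (one vote each)
print(compare(always_one, always_zero, [reward_ones]))                # 1 (always_one wins)
```

The sketch makes vivid why the choice of electorate matters: which agent "wins" depends entirely on which environments are allowed to vote and how their votes are aggregated, which is exactly the degree of freedom the abstract family of comparators parameterises.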
Many authors have noted that there are types of English modal sentences that cannot be formalized in the language of basic first-order modal logic. Some widely discussed examples include “There could have been things other than there actually are” and “Everyone who is actually rich could have been poor.” In response to this lack of expressive power, many authors have discussed extensions of first-order modal logic with two-dimensional operators. But claims about the relative expressive power of these extensions are often justified only by example rather than by rigorous proof. In this paper, we provide proofs of many of these claims and present a more complete picture of the expressive landscape for such languages.
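As a rough illustration of the expressive gap, the two quoted sentences are standardly rendered with an actuality operator $@$ in the style of Crossley and Humberstone; the formalizations below are a sketch of that familiar treatment, not the paper's own definitions:

\[
\Diamond \exists x\, @\,\neg \exists y\,(y = x)
\quad\text{(there could have been things other than there actually are)}
\]
\[
\Diamond \forall x\,(@\,Rx \rightarrow Px)
\quad\text{(everyone who is actually rich could have been poor, on the problematic reading)}
\]

In basic first-order modal logic there is no device for reaching back, from inside the scope of $\Diamond$, to what holds at the world of evaluation, which is why such readings resist formalization without two-dimensional resources.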
Bertrand Russell famously argued that causation is not part of the fundamental physical description of the world, describing the notion of cause as “a relic of a bygone age”. This paper assesses one of Russell’s arguments for this conclusion: the ‘Directionality Argument’, which holds that the time symmetry of fundamental physics is inconsistent with the time asymmetry of causation. We claim that the coherence and success of the Directionality Argument crucially depend on the proper interpretation of the ‘time symmetry’ of fundamental physics as it appears in the argument, and offer two alternative interpretations. We argue that: if ‘time symmetry’ is understood as the time-reversal invariance of physical theories, then the crucial premise of the Directionality Argument should be rejected; and if ‘time symmetry’ is understood as the temporally bidirectional nomic dependence relations of physical laws, then the crucial premise of the Directionality Argument is far more plausible. We defend the second reading as continuous with Russell’s writings, and consider the consequences of the bidirectionality of nomic dependence relations in physics for the metaphysics of causation.
Do the same epistemic standards govern scientific and religious belief? Or should science and religion operate in completely independent epistemic spheres? Commentators have recently been divided on William James’s answer to this question. One side depicts “The Will to Believe” as offering a separate-spheres defense of religious belief in the manner of Galileo. The other contends that “The Will to Believe” seeks to loosen the usual epistemic standards so that religious and scientific beliefs can both be justified by a unitary set of evidentiary rules. I argue that James did build a unitary epistemology but not by loosening cognitive standards. In his psychological research, he had adopted the Comtian view that hypotheses and regulative assumptions play a crucial role in the context of discovery even though they must be provisionally adopted before they can be supported by evidence. “The Will to Believe” relies on this methodological point to achieve a therapeutic goal—to convince despairing Victorians that religious faith can be reconciled with a scientific epistemology. James argues that the prospective theist is in the same epistemic situation with respect to the “religious hypothesis” as the scientist working in the context of discovery.
This paper argues that higher-order doubt generates an epistemic dilemma. One has a higher-order doubt with regard to P insofar as one justifiably withholds belief as to what attitude towards P is justified. That is, one justifiably withholds belief as to whether one is justified in believing, disbelieving, or withholding belief in P. Using the resources provided by Richard Feldman’s recent discussion of how to respect one’s evidence, I argue that if one has a higher-order doubt with regard to P, then one is not justified in having any attitude towards P. Otherwise put: no attitude towards the doubted proposition respects one’s higher-order doubt. I argue that the most promising response to this problem is to hold that when one has a higher-order doubt about P, the best one can do to respect such a doubt is to simply have no attitude towards P. Higher-order doubt is thus much more rationally corrosive than non-higher-order doubt, as it undermines the possibility of justifiably having any attitude towards the doubted proposition.
As Thomas Uebel has recently argued, some early logical positivists saw American pragmatism as a kindred form of scientific philosophy. They associated pragmatism with William James, whom they rightly saw as allied with Ernst Mach. But what apparently blocked sympathetic positivists from pursuing commonalities with American pragmatism was the concern that James advocated some form of psychologism, a view they thought could not do justice to the a priori. This paper argues that positivists were wrong to read James as offering a psychologistic account of the a priori. They had encountered James by reading Pragmatism as translated by the unabashedly psychologistic Wilhelm Jerusalem. But in more technical works, James had actually developed a form of conventionalism that anticipated the so-called “relativized” a priori that positivists themselves would independently develop. While positivists arrived at conventionalism largely through reflection on the exact sciences, though, James’s account of the a priori grew from his reflections on the biological evolution of cognition, particularly in the context of his Darwin-inspired critique of Herbert Spencer.
Thought experiments invite us to evaluate philosophical theses by making judgements about hypothetical cases. When the judgements and the theses conflict, it is often the latter that are rejected. But what is the nature of the judgements such that they are able to play this role? I answer this question by arguing that typical judgements about thought experiments are in fact judgements of normal counterfactual sufficiency. I begin by focusing on Anna-Sara Malmgren’s defence of the claim that typical judgements about thought experiments are mere possibility judgements. This view is shown to fail for two closely related reasons: it cannot account for the incorrectness of certain misjudgements, and it cannot account for the inconsistency of certain pairs of conflicting judgements. This prompts a reconsideration of Timothy Williamson’s alternative proposal, according to which typical judgements about thought experiments are counterfactual in nature. I show that taking such judgements to concern what would normally hold in instances of the relevant hypothetical scenarios avoids the objections that have been pressed against this kind of view. I then consider some other potential objections, but argue that they provide no grounds for doubt.
I argue that embodied understanding and conceptual-representational understanding interact through schematic structure. I demonstrate that common conceptions of these two kinds of understanding, such as those developed by Wheeler (2005, 2008) and Dreyfus (2007a, b, 2013), entail a separation between them that gives rise to significant problems. Notably, it becomes unclear how they could interact, a problem that has been pointed out by Dreyfus (2007a, b, 2013) and McDowell (2007) in particular. I propose a Kantian strategy to close the gap between them. I argue that embodied and conceptual-representational understanding are governed by schemata. Since they are governed by schemata, they can interact through a structure that they have in common. Finally, I spell out two different ways to conceive of the schematic interaction between them—a close, grounding relationship and a looser relationship that allows for a minimal interaction, but preserves the autonomy of both forms of understanding.
Phenomenal intentionality theories have recently enjoyed significant attention. According to these theories, the intentionality of a mental representation (what it is about) crucially depends on its phenomenal features. We present a new puzzle for these theories, involving a phenomenon called ‘intentional identity’, or ‘co-intentionality’. Co-intentionality is a ubiquitous intentional phenomenon that involves tracking things even when there is no concrete thing being tracked. We suggest that phenomenal intentionality theories need to either develop new uniquely phenomenal resources for handling the puzzle, or restrict their explanatory ambitions.
Contrary to the popular assumption that linguistically mediated social practices constitute the normativity of action (Kiverstein and Rietveld, 2015; Rietveld, 2008a,b; Rietveld and Kiverstein, 2014), I argue that it is affective care for oneself and others that primarily constitutes this kind of normativity. I argue for my claim in two steps. First, using the method of cases I demonstrate that care accounts for the normativity of action, whereas social practices do not. Second, I show that a social practice account of the normativity of action has unwittingly authoritarian consequences, in the sense that humans act normatively only if they follow social rules. I suggest that these authoritarian consequences are the result of an uncritical phenomenology of action and the fuzzy use of “normative”. Accounting for the normativity of action with care entails a realistic picture of the struggle between what one cares for and often repressive social rules.
The standard, foundationalist reading of Our Knowledge of the External World requires Russell to have a view of perceptual acquaintance that he demonstrably does not have. Russell’s actual purpose in “constructing” physical bodies out of sense-data is instead to show that psychology and physics are consistent. But how seriously engaged was Russell with actual psychology? I show that OKEW makes some non-trivial assumptions about the character of visual space, and I argue that he drew those assumptions from William James’s Principles. This point helps us take a fresh look at the complex relationship between the two men. In light of this surprising background of agreement, I highlight how their more general approaches to perception ultimately diverged in ways that put the two at epistemological odds.