This paper outlines the methodological and empirical limitations of analysing the potential relationship between complex social phenomena such as democracy and inequality. It shows that the means to assess how they may be related are much more limited than recognised in the existing literature, which is laden with contradictory hypotheses and findings. Better understanding our scientific limitations in studying this potential relationship is important for research and policy because many leading economists and other social scientists, such as Acemoglu and Robinson, mistakenly claim to identify causal linkages between inequality and democracy but at times still inform policy. In contrast to the existing literature, the paper argues that ‘structural’ or ‘causal’ mechanisms that may potentially link the distribution of economic wealth and different political regimes will remain unknown for reasons such as their highly complex and idiosyncratic characteristics, fundamental econometric constraints, and the macro-level nature of the analysis. Neither new data sources, different analysed time periods, nor new data analysis techniques can resolve this question and provide robust, general conclusions about this potential relationship across countries. Researchers are thus restricted to exploring rough correlations over specific time periods and geographic contexts with imperfect data that are very limited for cross-country comparisons.
Almost all philosophers agree that a necessary condition on lying is that one says what one believes to be false. But philosophers haven’t considered the possibility that the true requirement on lying concerns, rather, one’s degree-of-belief. Liars impose a risk on their audience. The greater the liar’s confidence that what she asserts is false, the greater the risk she’ll think she’s imposing on the dupe, and, therefore, the greater her blameworthiness. From this, I arrive at a dilemma: either the belief requirement is wrong, or lying isn’t interesting. I suggest an alternative necessary condition for lying on a degree-of-belief framework.
Many scientists routinely generalize from study samples to larger populations. It is commonly assumed that this cognitive process of scientific induction is a voluntary inference in which researchers assess the generalizability of their data and then draw conclusions accordingly. Here we challenge this view and argue for a novel account. The account describes scientific induction as involving by default a generalization bias that operates automatically and frequently leads researchers to unintentionally generalize their findings without sufficient evidence. The result is unwarranted, overgeneralized conclusions. We support this account of scientific induction by integrating a range of disparate findings from across the cognitive sciences that have until now not been connected to research on the nature of scientific induction. The view that scientific induction involves by default a generalization bias calls for a revision of our current thinking about scientific induction and highlights an overlooked cause of the replication crisis in the sciences. Commonly proposed interventions to tackle scientific overgeneralizations that may feed into this crisis need to be supplemented with cognitive debiasing strategies to most effectively improve science.
Can there be grounding without necessitation? Can a fact obtain wholly in virtue of metaphysically more fundamental facts, even though there are possible worlds at which the latter facts obtain but not the former? It is an orthodoxy in recent literature about the nature of grounding, and in first-order philosophical disputes about what grounds what, that the answer is no. I will argue that the correct answer is yes. I present two novel arguments against grounding necessitarianism, and show that grounding contingentism is fully compatible with the various explanatory roles that grounding is widely thought to play.
It has become standard to conceive of metalinguistic disagreement as motivated by a form of negotiation, aimed at reaching consensus because of the practical consequences of using a word with one content rather than another. This paper presents an alternative motive for expressing and pursuing metalinguistic disagreement. In using words with given criteria, we betray our location amongst social categories or groups. Because of this, metalinguistic disagreement can be used as a stage upon which to perform a social identity. The ways in which metalinguistic disagreements motivated in this way diverge in character from metalinguistic negotiations are described, as are several consequences of the existence of metalinguistic disagreements motivated in this way.
We develop and defend a new approach to counterlogicals. Non-vacuous counterlogicals, we argue, fall within a broader class of counterfactuals known as counterconventionals. Existing semantics for counterconventionals allow counterfactuals to shift the interpretation of predicates and relations. We extend these theories to counterlogicals by allowing counterfactuals to shift the interpretation of logical vocabulary. This yields an elegant semantics for counterlogicals that avoids problems with the usual impossible worlds semantics. We conclude by showing how this approach can be extended to counterpossibles more generally.
Effective quantum field theories are effective insofar as they apply within a prescribed range of length-scales, but within that range they predict and describe with extremely high accuracy and precision. The effectiveness of EFTs is explained by identifying the features—the scaling behaviour of the parameters—that lead to effectiveness. The explanation relies on distinguishing autonomy with respect to changes in microstates from autonomy with respect to changes in microlaws, and relating these, respectively, to renormalizability and naturalness. It is claimed that the effectiveness of EFTs is a consequence of each theory’s autonomy with respect to microstates rather than its autonomy with respect to microlaws. 1 Introduction; 2 Renormalizability; 2.1 Explaining renormalizability; 3 Naturalness; 3.1 An unnatural but renormalizable theory; 4 Two Kinds of Autonomy; 5 The Effectiveness of Effective Quantum Field Theories; 6 Conclusion.
In this paper, we argue that a distinction ought to be drawn between two ways in which a given world might be logically impossible. First, a world w might be impossible because the laws that hold at w are different from those that hold at some other world (say the actual world). Second, a world w might be impossible because the laws of logic that hold in some world (say the actual world) are violated at w. We develop a novel way of modelling logical possibility that makes room for both kinds of logical impossibility. Doing so has interesting implications for the relationship between logical possibility and other kinds of possibility (for example, metaphysical possibility) and implications for the necessity or contingency of the laws of logic.
Recent metaphysics has turned its focus to two notions that are—as well as having a common Aristotelian pedigree—widely thought to be intimately related: grounding and essence. Yet how, exactly, the two are related remains opaque. We develop a unified and uniform account of grounding and essence, one which understands them both in terms of a generalized notion of identity examined in recent work by Fabrice Correia, Cian Dorr, Agustín Rayo, and others. We argue that the account comports with antecedently plausible principles governing grounding, essence, and identity taken individually, and illuminates how the three interact. We also argue that the account compares favorably to an alternative unification of grounding and essence recently proposed by Kit Fine.
Conventional wisdom has it that truth is always evaluated using our actual linguistic conventions, even when considering counterfactual scenarios in which different conventions are adopted. This principle has been invoked in a number of philosophical arguments, including Kripke’s defense of the necessity of identity and Lewy’s objection to modal conventionalism. But it is false. It fails in the presence of what Einheuser (2006) calls c-monsters, or convention-shifting expressions (on analogy with Kaplan’s monsters, or context-shifting expressions). We show that c-monsters naturally arise in contexts, such as metalinguistic negotiations, where speakers entertain alternative conventions. We develop an expressivist theory—inspired by Barker (2002) and MacFarlane (2016) on vague predications and Einheuser (2006) on counterconventionals—to model these shifts in convention. Using this framework, we reassess the philosophical arguments that invoked the conventional wisdom.
Achieving space domain awareness requires the identification, characterization, and tracking of space objects. Storing and leveraging associated space object data for purposes such as hostile threat assessment, object identification, and collision prediction and avoidance present further challenges. Space objects are characterized according to a variety of parameters including their identifiers, design specifications, components, subsystems, capabilities, vulnerabilities, origins, missions, orbital elements, patterns of life, processes, operational statuses, and associated persons, organizations, or nations. The Space Object Ontology provides a consensus-based realist framework for formulating such characterizations in a computable fashion. Space object data are aligned with classes and relations in the Space Object Ontology and stored in a dynamically updated Resource Description Framework triple store, which can be queried to support space domain awareness and the needs of spacecraft operators. This paper presents the core of the Space Object Ontology, discusses its advantages over other approaches to space object classification, and demonstrates its ability to combine diverse sets of data from multiple sources within an expandable framework. Finally, we show how the ontology provides benefits for enhancing and maintaining long-term space domain awareness.
In this paper, we analyse how GPS-based navigation systems are transforming some of our intellectual virtues and then suggest two strategies to improve our practices regarding the use of such epistemic tools. We start by outlining the two main approaches in virtue epistemology, namely virtue reliabilism and virtue responsibilism. We then discuss how navigation systems can undermine five epistemic virtues, namely memory, perception, attention, intellectual autonomy, and intellectual carefulness. We end by considering two possible interlinked ways of trying to remedy this situation: [i] redesigning the epistemic tool to improve the epistemic virtues of memory, perception, and attention; and [ii] the cultivation of cognitive diligence for wayfinding tasks scaffolding intellectual autonomy and carefulness.
In this paper, I will discuss what I will call “skeptical pragmatic invariantism” (SPI) as a potential response to the intuitions we have about scenarios such as the so-called bank cases. SPI, very roughly, is a form of epistemic invariantism that says the following: The subject in the bank cases doesn’t know that the bank will be open. The knowledge ascription in the low standards case seems appropriate nevertheless because it has a true implicature. The goal of this paper is to show that SPI is mistaken. In particular, I will show that SPI is incompatible with reasonable assumptions about how we are aware of the presence of implicatures. Such objections are not new, but extant formulations are wanting for reasons I will point out below. One may worry that refuting SPI is not a worthwhile project given that this view is an implausible minority position anyway. To respond, I will argue that, contrary to common opinion, other familiar objections to SPI fail and, thus, that SPI is a promising position to begin with.
A counteridentical is a counterfactual with an identity statement in the antecedent. While counteridenticals generally seem non-trivial, most semantic theories for counterfactuals, when combined with the necessity of identity and distinctness, attribute vacuous truth conditions to such counterfactuals. In light of this, one could try to save the orthodox theories either by appealing to pragmatics or by denying that the antecedents of alleged counteridenticals really contain identity claims. Or one could reject the orthodox theory of counterfactuals in favor of a hyperintensional semantics that accommodates non-trivial counterpossibles. In this paper, I argue that none of these approaches can account for all the peculiar features of counteridenticals. Instead, I propose a modified version of Lewis’s counterpart theory, which rejects the necessity of identity, and show that it can explain all the peculiar features of counteridenticals in a satisfactory way. I conclude by defending the plausibility of contingent identity from objections.
It is commonly claimed that the universality of critical phenomena is explained through particular applications of the renormalization group. This article has three aims: to clarify the structure of the explanation of universality, to discuss the physics of such RG explanations, and to examine the extent to which universality is thus explained. The derivation of critical exponents proceeds via a real-space or a field-theoretic approach to the RG. Building on work by Mainwood, this article argues that these approaches ought to be distinguished: while the field-theoretic approach explains universality, the real-space approach fails to provide an adequate explanation.
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
Fine is widely thought to have refuted the simple modal account of essence, which takes the essential properties of a thing to be those it cannot exist without exemplifying. Yet, a number of philosophers have suggested resuscitating the simple modal account by appealing to distinctions akin to the distinction Lewis draws between sparse and abundant properties, treating only those in the former class as candidates for essentiality. I argue that ‘sparse modalism’ succumbs to counterexamples similar to those originally posed by Fine, and fails to capture paradigmatic instances of essence involving abundant properties and relations.
Epistemic invariantism, or invariantism for short, is the position that the proposition expressed by knowledge sentences does not vary with the epistemic standard of the context in which these sentences can be used. At least one of the major challenges for invariantism is to explain our intuitions about scenarios such as the so-called bank cases. These cases elicit intuitions to the effect that the truth-value of knowledge sentences varies with the epistemic standard of the context in which these sentences can be used. In this paper, I will defend invariantism against this challenge by advocating the following, somewhat deflationary account of the bank case intuitions: Readers of the bank cases assign different truth-values to the knowledge claims in the bank cases because they interpret these scenarios such that the epistemic position of the subject in question differs between the high and the low standards case. To substantiate this account, I will argue, first, that the bank cases are underspecified even with respect to features that should uncontroversially be relevant for the epistemic position of the subject in question. Second, I will argue that readers of the bank cases will fill in these features differently in the low and the high standards case. In particular, I will argue that there is a variety of reasons to think that the fact that an error-possibility is mentioned in the high standards case will lead readers to assume that this error-possibility is supposed to be likely in the high standards case.
Recent discussions of emergence in physics have focussed on the use of limiting relations, and often particularly on singular or asymptotic limits. We discuss a putative example of emergence that does not fit into this narrative: the case of phonons. These quasi-particles have some claim to be emergent, not least because the way in which they relate to the underlying crystal is almost precisely analogous to the way in which quantum particles relate to the underlying quantum field theory. But there is no need to take a limit when moving from a crystal-lattice-based description to the phonon description. Not only does this demonstrate that we can have emergence without limits, but it also provides a way of understanding cases that do involve limits.
Rational agents face choices, even when taking seriously the possibility of determinism. Rational agents also follow the advice of Causal Decision Theory (CDT). Although many take these claims to be well-motivated, there is growing pressure to reject one of them, as CDT seems to go badly wrong in some deterministic cases. We argue that deterministic cases do not undermine a counterfactual model of rational deliberation, which is characteristic of CDT. Rather, they force us to distinguish between counterfactuals that are relevant and ones that are irrelevant for the purposes of deliberation. We incorporate this distinction into decision theory to develop ‘Selective Causal Decision Theory’, which delivers the correct recommendations in deterministic cases while respecting the key motivations behind CDT.
This article argues that economic crises are incompatible with the realisation of non-domination in capitalist societies. The ineradicable risk that an economic crisis will occur undermines the robust security of the conditions of non-domination for all citizens, not only those who are harmed by a crisis. I begin by demonstrating that the unemployment caused by economic crises violates the egalitarian dimensions of freedom as non-domination. The lack of employment constitutes an exclusion from the social bases of self-respect, and from a practice of mutual social contribution crucial to the intersubjective affirmation of one’s status. While this argument shows that republicans must be concerned about economic crises, I suggest a more powerful argument can be grounded in the republican requirement that freedom must be robust. The systemic risk of economic crisis constitutes a threat to the conditions of free citizenship that cannot be nullified using policy mechanisms. As a result, republicans appear to be faced with the choice of revising their commitments or rejecting the possibility that republican freedom can be robustly secured in capitalist societies.
Samuel Alexander was a central figure of the new wave of realism that swept across the English-speaking world in the early twentieth century. His Space, Time, and Deity (1920a, 1920b) was taken to be the official statement of realism as a metaphysical system. But many historians of philosophy are quick to point out the idealist streak in Alexander’s thought. After all, as a student he was trained at Oxford in the late 1870s and early 1880s as British Idealism was beginning to flourish. This naturally had some effect on his philosophical outlook and it is said that his early work is overtly idealist. In this paper I examine his neglected and understudied reactions to British Idealism in the 1880s. I argue that Alexander was not an idealist during this period and should not be considered as part of the British Idealist tradition, philosophically speaking.
The recent literature abounds with accounts of the semantics and pragmatics of so-called predicates of personal taste, i.e. predicates whose application is, in some sense or other, a subjective matter. Relativism and contextualism are the major types of theories. One crucial difference between these theories concerns how we should assess previous taste claims. Relativism predicts that we should assess them in the light of the taste standard governing the context of assessment. Contextualism predicts that we should assess them in the light of the taste standard governing the context of use. We show in a range of experiments that neither prediction is correct. People have no clear preferences either way and which taste standard they choose in evaluating a previous taste claim crucially depends on whether they start out with a favorable attitude towards the object in question and then come to have an unfavorable attitude or vice versa. We suggest an account of the data in terms of what we call hybrid relativism.
It seems to be a common and intuitively plausible assumption that conversational implicatures arise only when one of the so-called conversational maxims is violated at the level of what is said. The basic idea behind this thesis is that, unless a maxim is violated at the level of what is said, nothing can trigger the search for an implicature. Thus, implicatures that arise without a maxim violation would not be calculable. This paper defends the view that some conversational implicatures arise even though no conversational maxim is violated at the level of what is said.
Whether or not quantum physics can account for molecular structure is a matter of considerable controversy. Three of the problems raised in this regard are the problems of molecular structure. We argue that these problems are just special cases of the measurement problem of quantum mechanics: insofar as the measurement problem is solved, the problems of molecular structure are resolved as well. In addition, we explore one consequence of our argument: that claims about the reduction or emergence of molecular structure cannot be settled independently of the choice of a particular resolution to the measurement problem. Specifically, we consider how three standard putative solutions to the measurement problem inform our understanding of a molecule in isolation, as well as of chemistry’s relation to quantum physics.
There are two families of influential and stubborn puzzles that many theories of aboutness (intentionality) face: underdetermination puzzles and puzzles concerning representations that appear to be about things that do not exist. I propose an approach that elegantly avoids both kinds of puzzle. The central idea is to explain aboutness (the relation supposed to stand between thoughts and terms and their objects) in terms of relations of co-aboutness (the relation of being about the same thing that stands between the thoughts and terms themselves).
Grounding and explanation are said to be intimately connected. Some even maintain that grounding just is a form of explanation. But grounding and explanation also seem importantly different—on the face of it, the former is ‘worldy’ or ‘objective’ while the latter isn’t. In this paper, we develop and respond to an argument to the effect that there is no way to fruitfully address this tension that retains orthodox views about grounding and explanation but doesn’t undermine a central piece of methodology, namely that explanation is a guide to ground.
Sentences about logic are often used to show that certain embedding expressions are hyperintensional. Yet it is not clear how to regiment “logic talk” in the object language so that it can be compositionally embedded under such expressions. In this paper, I develop a formal system called hyperlogic that is designed to do just that. I provide a hyperintensional semantics for hyperlogic that doesn’t appeal to logically impossible worlds, as traditionally understood, but instead uses a shiftable parameter that determines the interpretation of the logical connectives. I argue this semantics compares favorably to the more common impossible worlds semantics, which faces difficulties interpreting propositionally quantified logic talk.
In his recent article entitled ‘Can We Believe the Error Theory?’ Bart Streumer argues that it is impossible (for anyone, anywhere) to believe the error theory. This might sound like a problem for the error theory, but Streumer argues that it is not. He argues that the un-believability of the error theory offers a way for error theorists to respond to several objections commonly made against the view. In this paper, we respond to Streumer’s arguments. In particular, in sections 2-4, we offer several objections to Streumer’s argument for the claim that we cannot believe the error theory. In section 5, we argue that even if Streumer establishes that we cannot believe the error theory, this conclusion is not as helpful for error theorists as he takes it to be.
Over almost a half-century, evidence law scholars and philosophers have contended with what have come to be called the “Proof Paradoxes.” In brief, the following sort of paradox arises: Factfinders in criminal and civil trials are charged with reaching a verdict if the evidence presented meets a particular standard of proof—beyond a reasonable doubt, in criminal cases, and preponderance of the evidence, in civil trials. It seems that purely statistical evidence can suffice for just such a level of certainty in a variety of cases where our intuition is that it would nonetheless be wrong to convict the defendant, or find in favor of the plaintiff, on merely statistical evidence. So, we either have to convict with statistical evidence, in spite of an intuition that this is unsettling, or else explain what (dispositive) deficiency statistical evidence has. Most scholars have tried to justify the resistance to relying on merely statistical evidence: by relying on epistemic deficiencies in this kind of evidence; by relying on court practice; and also by reference to the psychological literature. In fact, I argue, the epistemic deficiencies philosophers and legal scholars allege are suspect. And, I argue, while scholars often discuss unfairness to civil defendants, they ignore a long history of relying on statistical evidence in a variety of civil matters, including employment discrimination, toxic torts, and market share liability cases. Were the dominant arguments in the literature to prevail, it would be extremely difficult for plaintiffs to recover in a variety of cases. The various considerations I advance lead to the conclusion that when it comes to naked statistical evidence, philosophers and legal scholars who argue for its insufficiency have been caught with their pants down.
I develop a theory of action inspired by a Heideggerian conception of concern, in particular for phenomenologically-inspired Embodied Cognition (Noë 2004; Wheeler 2008; Rietveld 2008; Chemero 2009; Rietveld and Kiverstein 2014). I proceed in three steps. First, I provide an analysis that identifies four central aspects of action and show that phenomenologically-inspired Embodied Cognition does not adequately account for them. Second, I provide a descriptive phenomenological analysis of everyday action and show that concern is the best candidate for an explanation of action. Third, I show that concern, understood as the integration of affect and embodied understanding, allows us to explain the different aspects of action sufficiently.
Thought experiments invite us to evaluate philosophical theses by making judgements about hypothetical cases. When the judgements and the theses conflict, it is often the latter that are rejected. But what is the nature of the judgements such that they are able to play this role? I answer this question by arguing that typical judgements about thought experiments are in fact judgements of normal counterfactual sufficiency. I begin by focusing on Anna-Sara Malmgren’s defence of the claim that typical judgements about thought experiments are mere possibility judgements. This view is shown to fail for two closely related reasons: it cannot account for the incorrectness of certain misjudgements, and it cannot account for the inconsistency of certain pairs of conflicting judgements. This prompts a reconsideration of Timothy Williamson’s alternative proposal, according to which typical judgements about thought experiments are counterfactual in nature. I show that taking such judgements to concern what would normally hold in instances of the relevant hypothetical scenarios avoids the objections that have been pressed against this kind of view. I then consider some other potential objections, but argue that they provide no grounds for doubt.
Modeling mechanisms is central to the biological sciences – for purposes of explanation, prediction, extrapolation, and manipulation. A closer look at the philosophical literature reveals that mechanisms are predominantly modeled in a purely qualitative way. That is, mechanistic models are conceived of as representing how certain entities and activities are spatially and temporally organized so that they bring about the behavior of the mechanism in question. Although this adequately characterizes how mechanisms are represented in biology textbooks, contemporary biological research practice shows the need for quantitative, probabilistic models of mechanisms, too. In this paper we argue that the formal framework of causal graph theory is well-suited to provide us with models of biological mechanisms that incorporate quantitative and probabilistic information. On the basis of an example from contemporary biological practice, namely feedback regulation of fatty acid biosynthesis in Brassica napus, we show that causal graph theoretical models can account for feedback as well as for the multi-level character of mechanisms. However, we do not claim that causal graph theoretical representations of mechanisms are advantageous in all respects and should replace common qualitative models. Rather, we endorse the more balanced view that causal graph theoretical models of mechanisms are useful for some purposes, while being insufficient for others.
Near the end of 'Naming the Colours', Lewis (1997) makes an interesting claim about the relationship between linguistic and mental content: we are typically unable to read the content of a belief off the content of a sentence used to express that belief or vice versa. I call this view autonomism. I motivate and defend autonomism and discuss its importance in the philosophy of mind and language. In a nutshell, I argue that the different theoretical roles that mental and linguistic content play suggest these kinds of content should be understood as sensitive to different things.
While it is a point of agreement in contemporary republican political theory that property ownership is closely connected to freedom as non-domination, surprisingly little work has been done to elucidate the nature of this connection or the constraints on property regimes that might be required as a result. In this paper, I provide a systematic model of the boundaries within which republican property systems must sit and explore some of the wider implications that thinking of property in these terms may have for republicans. The boundaries I focus on relate to the distribution of property and the application of types of property claims over particular kinds of goods. I develop this model from those elements of non-domination most directly related to the operation of a property regime: (a) economic independence, (b) limiting material inequalities, and (c) the promotion of common goods. The limits that emerge from this analysis support intuitive judgments that animate much republican discussion of property distribution. My account diverges from much orthodox republican theory, though, in challenging the primacy of private property rights in the realization of economic independence. The value of property on republican terms can be realized without private ownership of the means of production.
It is widely held that counterfactuals, unlike attitude ascriptions, preserve the referential transparency of their constituents, i.e., that counterfactuals validate the substitution of identicals when their constituents do. The only putative counterexamples in the literature come from counterpossibles, i.e., counterfactuals with impossible antecedents. Advocates of counterpossibilism, i.e., the view that counterpossibles are not all vacuous, argue that counterpossibles can generate referential opacity. But in order to explain why most substitution inferences into counterfactuals seem valid, counterpossibilists also often maintain that counterfactuals with possible antecedents are transparency‐preserving. I argue that if counterpossibles can generate opacity, then so can ordinary counterfactuals with possible antecedents. Utilizing an analogy between counterfactuals and attitude ascriptions, I provide a counterpossibilist‐friendly explanation for the apparent validity of substitution inferences into counterfactuals. I conclude by suggesting that the debate over counterpossibles is closely tied to questions concerning the extent to which counterfactuals are more like attitude ascriptions and epistemic operators than previously recognized.
Skeptical invariantists maintain that the expression “knows” invariably expresses an epistemically extremely demanding relation. This leads to an immediate challenge. The knowledge relation will hardly if ever be satisfied. Consequently, we can rarely if ever apply “knows” truly. The present paper assesses a prominent strategy for skeptical invariantists to respond to this challenge, which appeals to loose talk. Based on recent developments in the theory of loose talk, I argue that such appeals to loose talk fail. I go on to present a closely related, more promising response strategy, which combines assumptions about the dynamics of pragmatic presuppositions from Blome-Tillmann (2014) with an appeal to conversational exculpature, a phenomenon recently studied by Hoek (2018, 2019).
Proponents of evidence-based medicine have argued convincingly for applying this scientific method to medicine. However, the current methodological framework of the EBM movement has recently been called into question, especially in epidemiology and the philosophy of science. The debate has focused on whether the methodology of randomized controlled trials provides the best evidence available. This paper attempts to shift the focus of the debate by arguing that clinical reasoning involves a patchwork of evidential approaches and that the emphasis on evidence hierarchies of methodology fails to lend credence to the common practice of corroboration in medicine. I argue that the strength of evidence lies in the evidence itself, and not the methodology used to obtain that evidence. Ultimately, when evaluating the effectiveness of medical interventions, it is the evidence a methodology yields, rather than the methodology itself, that should establish the strength of the evidence.
We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you know.” This definition is non-circular because an AGI, being capable of practical English communication, is capable of understanding the everyday English word “know” independently of how any philosopher formally defines knowledge; we elaborate further on the non-circularity of this circular-looking definition. This elegantly solves the problem that different AGIs may have different internal knowledge definitions and yet we want to study knowledge of AGIs in general, without having to study different AGIs separately just because they have separate internal knowledge definitions. Finally, we suggest how this definition of AGI knowledge can be used as a bridge which could allow the AGI research community to import certain abstract results about mechanical knowing agents from mathematical logic.
Samuel Alexander was one of the first realists of the twentieth century to defend a theory of categories. He thought that the categories are genuinely real and grounded in the intrinsic nature of Space-Time. I present his reduction of the categories in terms of Space-Time, articulate his account of categorial structure and completeness, and offer an interpretation of what he thought the nature of the categories really was. I then argue that his theory of categories has some advantages over competing theories of his day, and finally draw some important lessons that we can learn from his realist yet reductionist theory of categories.
Jason Bowers and Meg Wallace have recently argued that those who hold that every individual instantiates a ‘haecceity’ are caught up in a Euthyphro-style dilemma when confronted with familiar cases of fission and fusion. Key to Bowers and Wallace’s dilemma are certain assumptions about the nature of metaphysical explanation and the explanatory commitments of belief in haecceities. However, I argue that the dilemma only arises due to a failure to distinguish between providing a metaphysical explanation of why a fact holds vs. a metaphysical explanation of what it is for a fact to hold. In the process, I also shed light on the explanatory commitments of belief in haecceities.
Many of my first students at Anzaldúa’s alma mater read Borderlands/La Frontera and concluded that Anzaldúa was not a philosopher. Hostile comments suggested that Anzaldúa’s intimately personal and poetic ways of writing were not philosophical. In response, I created “American Philosophy and Self-Culture” using backwards course design and taught variations of it in 2013, 2016, and 2018. Students spend nearly a month exploring Anzaldúa’s works, but only after reading three centuries of U.S.-American philosophers who wrote in deeply personal and literary ways about self-transformation, community-building, and world-changing. The sections of this chapter: 1) describe why my first students rejected Anzaldúa as a philosopher in terms of the discipline’s parochialism; 2) present Anzaldúa’s broader understanding of herself as a philosopher; 3) summarize my reconstructed Anzaldúa-inspired American Philosophy course and outline some assignments; 4) discuss how my students respond to Borderlands/La Frontera when we read it through the lens of self-culture; and 5) explain my attempt to shape the subdiscipline of American Philosophy by teaching Anzaldúa to specialists at the 2017 Summer Institute in American Philosophy.
The willful ignorance doctrine says defendants should sometimes be treated as if they know what they don't. This book provides a careful defense of this method of imputing mental states. Though the doctrine is only partly justified and requires reform, it also demonstrates that the criminal law needs more legal fictions of this kind. The resulting theory of when and why the criminal law can pretend we know what we don't has far-reaching implications for legal practice and reveals a pressing need for change.
To study the influence of divinity on the cosmos, Alexander uses the notions of ‘fate’ and ‘providence,’ which were common in the philosophy of his time. In this way, he provides an Aristotelian interpretation of the problems related to such concepts. In the context of this discussion, he offers a description of ‘nature’ different from the one that he usually regards as the standard Aristotelian notion of nature, i.e. the intrinsic principle of motion and rest. The newly coined concept is a ‘cosmic’ nature that can be identified with both ‘fate’ and ‘divine power,’ which are the immediate effect of providence upon the world. The paper shows how the conception of providence defended by Alexander entails a rejection of divine care for particulars, since the divinities are provident only for species. Several texts belonging to the Middle Platonic philosophers will convince us that such thinkers (and not directly Aristotle) are the origin of the thesis that will be understood as the conventional Aristotelian position, namely that divinity only orders species but not individuals.
I argue that Hubert Dreyfus’ work on embodied coping, the intentional arc, solicitations and the background as well as his anti-representationalism rest on introspection. I denote with ‘introspection’ the methodological malpractice of formulating ontological statements about the conditions of possibility of phenomena merely based on descriptions. In order to illustrate the insufficiencies of Dreyfus’ methodological strategy in particular and introspection in general, I show that Heidegger, to whom Dreyfus constantly refers as the foundation of his own work, derives ontological statements about the conditions of possibility of phenomena not merely from descriptions, but also from analyses. I further show that deriving ontological statements directly from descriptions entails implausible results. I do so by discussing representative cases. Based on these general methodological considerations, I show that Dreyfus’ work on action, skill and understanding is introspective. First, I demonstrate that Dreyfus’ influential claim that rules and representations do not govern skillful actions is the result of introspection, because it is merely founded on the absence of rules and representations in representative descriptions of skillful actions. Second, I show that Dreyfus’ work on embodied coping, the intentional arc, solicitations and the background is also based on introspection. These ontological structures are merely reifications of descriptions and are not further substantiated by analyses.