Is it correct to accept an anthropological dimension in Baruch Spinoza’s doctrine? Regardless of the answer we may suggest for this point, how could this be connected to the prevailing Humanism of the period immediately preceding that in which our author lived? Our proposal takes a positive stance on the presence of an anthropological perspective in Spinoza’s thought, a perspective that may be seen as a reaction to the kind of Renaissance humanism that sees the human being in Nature as a privileged kind. Therefore, in this paper we analyse the way the concept of man evolves from this humanism to the Spinozian formulation, as a foundation on which it becomes possible to accept the study of a certain Anthropology in his work.
This collection of essays explores the metaphysical thesis that the living world is not made up of substantial particles or things, as has often been assumed, but is rather constituted by processes. The biological domain is organised as an interdependent hierarchy of processes, which are stabilised and actively maintained at different timescales. Even entities that intuitively appear to be paradigms of things, such as organisms, are actually better understood as processes. Unlike previous attempts to articulate processual views of biology, which have tended to use Alfred North Whitehead’s panpsychist metaphysics as a foundation, this book takes a naturalistic approach to metaphysics. It submits that the main motivations for replacing an ontology of substances with one of processes are to be found in the empirical findings of science. Biology provides compelling reasons for thinking that the living realm is fundamentally dynamic, and that the existence of things is always conditional on the existence of processes. The phenomenon of life cries out for theories that prioritise processes over things, and it suggests that the central explanandum of biology is not change but rather stability, or more precisely, stability attained through constant change. This edited volume brings together philosophers of science and metaphysicians interested in exploring the consequences of a processual philosophy of biology. The contributors draw on an extremely wide range of biological case studies, and employ a process perspective to cast new light on a number of traditional philosophical problems, such as identity, persistence, and individuality.
The is-ought gap is Hume’s claim that we can’t get an ‘ought’ from just ‘is’s. Prior (“The Autonomy of Ethics,” 1960) showed that its most straightforward formulation, a staple of introductory philosophy classes, fails. Many authors attempt to resurrect the claim by restricting its domain syntactically or by reformulating it in terms of models of deontic logic. Those attempts prove to be complex, incomplete, or incorrect. I provide a simple reformulation of the is-ought gap that closely fits Hume’s description of it. My formulation of the gap avoids the proposed counterexamples from Prior and offers a natural explanation of why they seem compelling. Moreover, I show that my formulation of the gap is guaranteed by standard theories of the semantics of normative terms, and that provides a more general reason to accept it.
Change blindness is the striking failure to see large changes that normally would be noticed easily. Over the past decade this phenomenon has greatly contributed to our understanding of attention, perception, and even consciousness. The surprising extent of change blindness explains its broad appeal, but its counterintuitive nature has also engendered confusions about the kinds of inferences that legitimately follow from it. Here we discuss the legitimate and the erroneous inferences that have been drawn, and offer a set of requirements to help separate them. In doing so, we clarify the genuine contributions of change blindness research to our understanding of visual perception and awareness, and provide a glimpse of some ways in which change blindness might shape future research.
Philosophy of biology is often said to have emerged in the last third of the twentieth century. Prior to this time, it has been alleged that the only authors who engaged philosophically with the life sciences were either logical empiricists who sought to impose the explanatory ideals of the physical sciences onto biology, or vitalists who invoked mystical agencies in an attempt to ward off the threat of physicochemical reduction. These schools paid little attention to actual biological science, and as a result philosophy of biology languished in a state of futility for much of the twentieth century. The situation, we are told, only began to change in the late 1960s and early 1970s, when a new generation of researchers began to focus on problems internal to biology, leading to the consolidation of the discipline. In this paper we challenge this widely accepted narrative of the history of philosophy of biology. We do so by arguing that the most important tradition within early twentieth-century philosophy of biology was neither logical empiricism nor vitalism, but the organicist movement that flourished between the First and Second World Wars. We show that the organicist corpus is thematically and methodologically continuous with the contemporary literature in order to discredit the view that early work in the philosophy of biology was unproductive, and we emphasize the desirability of integrating the historical and contemporary conversations into a single, unified discourse.
Our recent opinion article [1] examined what change blindness can and cannot tell us about visual representations. Among other things, we argued that change blindness can tell us a lot about how visual representations can be used, but little about their extent. We and others found the ‘sparse representations’ view appealing (and still do), and initially made the overly strong claim that change blindness supports the conclusion of sparse representations [2,3]. We wrote our article because change blindness continues to be taken as evidence for sparse – or even absent – representations, and we used O’Regan and Noë’s influential paper [4] as an example. However, as has been noted for some time [5–8], this conclusion is logically flawed: lack of ability need not be caused by lack of representation.
The debate regarding the role of conscientious objection in healthcare has been protracted, with increasing demands for curbs on conscientious objection. There is a growing body of evidence indicating that in some cases, high rates of conscientious objection can affect access to legal medical services such as abortion—a major concern of critics of conscientious objection. Moreover, few solutions have been put forward that aim to satisfy both this concern and that of defenders of conscientious objection—namely, being expected to participate in the provision of services that compromise their moral integrity. Here we attempt to bring some resolution to the debate by proposing a pragmatic, long-term solution offering what we believe to be an acceptable compromise—a quota system for medical trainees in specialties where a conscientious objection can be exercised and is known to cause conflict. We envisage two main objectives of the quota system we propose. First, as a means to introduce conscientious objection into countries where this is not presently permitted. Second, to minimise or eliminate the effects of high rates of conscientious objection in countries such as Italy, where access to legal abortion provision can be negatively affected.
This paper argues that all of the standard theories about the divisions of space and time can benefit from, and may need to rely on, parsimony considerations. More specifically, whether spacetime is discrete, gunky or pointy, there are wildly unparsimonious rivals to standard accounts that need to be resisted by proponents of those accounts, and only parsimony considerations offer a natural way of doing that resisting. Furthermore, quantitative parsimony considerations appear to be needed in many of these cases.
Thought experiments are common where infinitely many entities acting in concert give rise to strange results. Some of these cases, however, can be generalized to yield almost omnipotent systems from limited materials. This paper discusses one of these cases, bringing out one aspect of what seems so troubling about "New Zeno" cases.

This paper is in memory of Josh Parsons.
Some experiences—like the experience of drinking a cool sip of water on a hot day—are good experiences to have. But when we try to explain why they are good, we encounter a clash of intuitions. First, we have an objectivist intuition: plausibly, the experience is non-derivatively good for me just because it feels the way that it does. It ‘feels good’. Thus, any experience of the same kind would be good for the person who has it. That experience would also ‘feel good’. Second, we have a subjectivist intuition: if a person were indifferent to that kind of experience, then it might fail to be good for that person. Third, we have a possibility intuition: for any kind of experience, possibly there is a subject who is indifferent to that kind of experience. The Pleasure Problem is the problem we face in reconciling these three claims. I explain the problem and I argue for a solution. I argue that we ought to reject the most common solutions: rejecting the objectivist or subjectivist intuitions. Instead we ought to follow Timothy Sprigge in rejecting the possibility claim. We should embrace the view that experiences bear necessary connections to our attitudes.
To what extent do true predications correspond to truthmakers in virtue of which those predications are true? One sort of predicate which is often thought to not be susceptible to an ontological treatment is a predicate for instantiation, or some corresponding predication (trope-similarity or set-membership, for example). This paper discusses this question, and argues that an "ontological" approach is possible here too: where this ontological approach goes beyond merely finding a truthmaker for claims about instantiation. Along the way a version of the problem of the regress of instantiation is posed and solved.
This essay offers an account of Kierkegaard’s view of the limits of thought and of what makes this view distinctive. With primary reference to Philosophical Fragments, and its putative representation of Christianity as unthinkable, I situate Kierkegaard’s engagement with the problem of the limits of thought, especially with respect to the views of Kant and Hegel. I argue that Kierkegaard builds in this regard on Hegel’s critique of Kant but that, against Hegel, he develops a radical distinction between two types of thinking and inquiry: the ‘aesthetic-intellectual’ and the ‘ethico-religious’. I clarify this distinction and show how it guides Kierkegaard’s conception of a form of philosophical practice that involves drawing limits to the proper sphere of disinterested contemplation. With reference to two rival interpretations of Kierkegaard’s approach to the limits of thought—which I call ‘bullet-biting’ and ‘relativizing’—I further show how my ‘disambiguating’ account can better explain how, and why, his work courts a form of self-referential incoherence, in which it appears that certain limits of thought are at once affirmed and violated.
This paper explores the relation of the theory of time and the theory of truth in Deleuze’s philosophy. According to Deleuze, a mutation in our conception of time occurred with Kant. In antiquity, time had been subordinated to movement, it was the measure or the “number of movement” (Aristotle). In Kant, this relation is inverted: time is no longer subordinated to movement but assumes an independence and autonomy of its own for the first time. In Deleuze’s phrasing, time becomes the pure and empty form of everything that moves and changes — not an eternal form (as in Plato), but precisely the form of what is not eternal. In turn, the theory of time is inextricably linked to the concept of truth, since to say that a proposition is true means that it is true “in all times and in all places.” Truth, in other words, is timeless, eternal, non-temporal. When the form of the true is confronted with the form of time, the concept of truth is necessarily put into crisis, and Deleuze’s argument is that time allows the power of the false to assume an autonomy of its own. The analysis will attempt to show how the liberation of time from movement (the pure and empty form of time) leads to a liberation of the false from the true (the power of the false).
On an ‘internalist’ picture, knowledge isn’t necessary for understanding the nature of perception and perceptual experience. This contrasts with the ‘knowledge first’ picture, according to which it’s essential to the nature of successful perceiving as a mental state that it’s a way of knowing. It’s often thought that naturalistic theorizing about the mind should adopt the internalist picture. However, I argue that a powerful, recently prominent framework for scientific study of the mind, ‘predictive processing,’ instead supports the knowledge first picture. Under predictive processing, it’s intrinsic to successful perceiving that it’s a state with the kind of modal robustness that’s distinctive of knowledge, which gives us reason to think of successful perceiving along knowledge first lines. Furthermore, I argue that the predictive processing framework encourages us to conceptualize experiences which don’t amount to knowledge along knowledge first lines, as states which by their nature fall short of knowledge.
Physicalism, the thesis that everything is physical, is one of the most controversial problems in philosophy. Its adherents argue that there is no more important doctrine in philosophy, whilst its opponents claim that its role is greatly exaggerated. In this superb introduction to the problem Daniel Stoljar focuses on three fundamental questions: the interpretation, truth and philosophical significance of physicalism. In answering these questions he covers the following key topics: (i) a brief history of physicalism and its definitions; (ii) what a physical property is and how physicalism meets challenges from empirical sciences; (iii) ‘Hempel’s dilemma’ and the relationship between physicalism and physics; (iv) physicalism and key debates in metaphysics and philosophy of mind, such as supervenience, identity and conceivability; and (v) physicalism and causality. Additional features include chapter summaries, annotated further reading and a glossary of technical terms, making Physicalism ideal for those coming to the problem for the first time.
This chapter sets out an optimistic view of philosophical progress. The key idea is that the historical record speaks in favor of there being progress, at least if we are clear about what philosophical problems are, and what it takes to solve them. I end by asking why so many people tend toward a pessimistic view of philosophical progress.
I defend, against its more recent critics, a literal, factual, and consistent interpretation of Timaeus’ creation of the cosmos and time. My main purpose is to clarify the assumptions under which a literal interpretation of Timaeus’ cosmology becomes philosophically attractive. I propose five exegetical principles that guide my interpretation. Unlike previous literalists, I argue that assuming a “pre-cosmic time” is a mistake. Instead, I challenge the exegetical assumptions scholars impose on the text and argue that for Timaeus, a mere succession of events and the relations derived from it (before, after, simultaneous with) imply no time, given the narrow definition of the term used in the dialogue. For Timaeus, I explain, time is measurable, regular, and dependent on the motion of the celestial bodies. A mere succession of events like the one needed to understand the creation story and the pre-cosmos requires none of these elements. Readers of Plato erroneously assume that a succession of events implies time, but that is to impose a conception of time absent in the text. The chapter offers a detailed reconstruction of the pre-cosmic stage under a literalist interpretation and argues that it is compatible with the immutable relationship between the Demiurge and the cosmos. This is an open access chapter distributed under the terms of the CC BY-NC 4.0 license.
According to a view that goes by “Humeanism,” causal facts supervene on patterns of worldly entities. The simplest form of Humeanism is the constant conjunction theory: a particular type-F thing causes a particular type-G thing iff (i) that type-F thing is conjoined with that type-G thing and (ii) all F’s are conjoined with G’s. The constant conjunction theory implies that all causation is extrinsic, in the following sense: for all positive causal facts pertaining to each possible region, it’s extrinsic to that region that those causal facts pertain to it. Actual Humeans don’t accept the constant conjunction theory; they accept more sophisticated versions of Humeanism. But I argue that they, too, are committed to the thesis that all causation is extrinsic. In arguing for this claim, I use a discussion from Brian Weatherson as a springboard. Weatherson argues that on a plausible Humean view, some regions are such that all of their possible duplicates have the same or similar natural laws. I show that this is false. If Humeanism is true, then for every possible region, there are possible duplicates of that region with utterly alien natural laws. As a consequence, no causal facts pertain intrinsically to any region.
This paper examines the moral force of exploitation in developing world research agreements. Taking for granted that some clinical research which is conducted in the developing world but funded by developed world sponsors is exploitative, it asks whether a third party would be morally justified in enforcing limits on research agreements in order to ensure more fair and less exploitative outcomes. This question is particularly relevant when such exploitative transactions are entered into voluntarily by all relevant parties, and both research sponsors and host communities benefit from the resulting agreements. I show that defenders of the claim that exploitation ought to be permitted rely on a mischaracterization of certain forms of interference as unjustly paternalistic and two dubious empirical assumptions about the results of regulation. The view I put forward is that by evaluating a system of constraints on international research agreements, rather than individual transaction-level interference, we can better assess the alternatives to permitting exploitative research agreements.
In Metaphysics A, Aristotle offers some objections to Plato’s theory of Forms to the effect that Plato’s Forms would not be explanatory in the right way, and seems to suggest that they might even make the explanatory project worse. One interesting historical puzzle is whether Aristotle can avoid these same objections to his own theory of universals. The concerns Aristotle raises are, I think, cousins of contemporary concerns about the usefulness and explanatoriness of abstract objects, some of which have recently been receiving attention in the philosophy of mathematics. After discussing Aristotle’s objections and their contemporary cousins, the paper discusses some of the main available lines of response to these sorts of challenges, before concluding with an examination of whether these responses could assist Plato or Aristotle in responding to these challenges.
This paper examines the question of whether recognition relations are based on trust. Theorists of recognition have acknowledged the ways in which recognition relations make us vulnerable to others but have largely neglected the underlying ‘webs of trust’ in which such relations are embedded. In this paper, I consider the ways in which the theories of recognition developed by Jürgen Habermas and Axel Honneth not only point to our mutual vulnerability but also implicitly rely upon mutual relations of trust. The paper first offers a novel examination of the relation between recognition, vulnerability and trust in Habermas’ account of communicative action, with the aim of arguing that such a consideration helps to elucidate important features of recognition. My claim is that a consideration of the dynamics of recognition and vulnerability in language-use leads to an acknowledgment of the forms of trust that underpin not only communicative action, but recognition more generally. I conclude by considering the elements that are underplayed in Habermas’ account by turning to an examination of Axel Honneth’s alternative affective theory of recognition, specifically considering the interrelation between vulnerability and recognition. In doing so, I also turn to a consideration of the kind of trust that must be assumed in Honneth’s account of mutual recognition and point to a recognitive notion of trust.
Moral debunking arguments are meant to show that, by realist lights, moral beliefs are not explained by moral facts, which in turn is meant to show that they lack some significant counterfactual connection to the moral facts (e.g., safety, sensitivity, reliability). The dominant, “minimalist” response to the arguments—sometimes defended under the heading of “third-factors” or “pre-established harmonies”—involves affirming that moral beliefs enjoy the relevant counterfactual connection while granting that these beliefs are not explained by the moral facts. We show that the minimalist gambit rests on a controversial thesis about epistemic priority: that explanatory concessions derive their epistemic import from what they reveal about counterfactual connections. We then challenge this epistemic priority thesis, which undermines the minimalist response to debunking arguments (in ethics and elsewhere).
Difference and Repetition might be said to have brought about a Deleuzian Revolution in philosophy comparable to Kant’s Copernican Revolution. Kant had denounced the three great terminal points of traditional metaphysics – self, world and God – as transcendent illusions, and Deleuze pushes Kant’s revolution to its limit by positing a transcendental field that excludes the coherence of the self, world and God in favour of an immanent and differential plane of impersonal individuations and pre-individual singularities. In the process, he introduces numerous conceptual innovations into philosophy: the becoming of concepts; a transformation of the form of the question; an insistence that philosophy must start in the middle; an attempt to think in terms of multiplicities; the development of a new logic and a new metaphysics based on a concept of difference; a new conception of space as intensive rather than extensive; a conception of time as a pure and empty form; and an understanding of philosophy as a system in heterogenesis – that is, a system that entails a perpetual genesis of the heterogeneous, an incessant production of the new.

Keywords: concepts, becoming, multiplicity, singularity, the middle [au milieu], difference, intensity, time, system, the new.
Choices confront us with questions. How we act depends on our answers to those questions. So the way our beliefs guide our choices is not just a function of their informational content, but also depends systematically on the questions those beliefs address. This paper gives a precise account of the interplay between choices, questions and beliefs, and harnesses this account to obtain a principled approach to the problem of deduction. The result is a novel theory of belief-guided action that explains and predicts the decisions of agents who, like ourselves, fail to be logically omniscient: that is, of agents whose beliefs may not be deductively closed, or even consistent. (Winner of the 2021 Isaac Levi Prize.)
Translation from German to English by Daniel Fidel Ferrer.

What Does it Mean to Orient Oneself in Thinking? German title: “Was heißt: sich im Denken orientieren?” Published October 1786, Königsberg in Prussia, Germany. By Immanuel Kant (born 1724, died 1804). Translated into English by Daniel Fidel Ferrer (March 17, 2014), the day of Holi in India in 2014.

From 1774 to about 1800, three intense philosophical and theological controversies were underway in Germany, namely the Fragments Controversy, the Pantheism Controversy, and the Atheism Controversy. Kant’s essay translated here is his response to the Pantheism Controversy. During this period (1770–1800) there was also the Sturm und Drang (Storm and Urge (stress)) movement, with thinkers like Johann Hamann, Johann Herder, Friedrich Schiller, and Johann Goethe, who were against the cultural movement of the Enlightenment (Aufklärung). Kant was on the side of the Enlightenment (see his An Answer to the Question: What is Enlightenment?, 1784).

What Does it Mean to Orient Oneself in Thinking? / By Immanuel Kant (1724–1804). [Was heißt: sich im Denken orientieren? English].
Supererogatory acts—good deeds “beyond the call of duty”—are a part of moral common sense, but conceptually puzzling. I propose a unified solution to three of the most infamous puzzles: the classic Paradox of Supererogation (if it’s so good, why isn’t it just obligatory?), Horton’s All or Nothing Problem, and Kamm’s Intransitivity Paradox. I conclude that supererogation makes sense if, and only if, the grounds of rightness are multi-dimensional and comparative.
We provide three innovations to recent debates about whether topological or “network” explanations are a species of mechanistic explanation. First, we more precisely characterize the requirement that all topological explanations are mechanistic explanations and show scientific practice to belie such a requirement. Second, we provide an account that unifies mechanistic and non-mechanistic topological explanations, thereby enriching both the mechanist and autonomist programs by highlighting when and where topological explanations are mechanistic. Third, we defend this view against some powerful mechanist objections. We conclude from this that topological explanations are autonomous from their mechanistic counterparts.
There are at least two threads in our thought and talk about rationality, both practical and theoretical. In one sense, to be rational is to respond correctly to the reasons one has. Call this substantive rationality. In another sense, to be rational is to be coherent, or to have the right structural relations hold between one’s mental states, independently of whether those attitudes are justified. Call this structural rationality. According to the standard view, structural rationality is associated with a distinctive set of requirements that mandate or prohibit certain combinations of attitudes, and it’s in virtue of violating these requirements that incoherent agents are irrational. I think the standard view is mistaken. The goal of this paper is to explain why, and to motivate an alternative account: rather than corresponding to a set of law-like requirements, structural rationality should be seen as corresponding to a distinctive kind of pro tanto rational pressure—i.e. something that comes in degrees, having both magnitude and direction. Something similar is standardly assumed to be true of substantive rationality. On the resulting picture, each dimension of rational evaluation is associated with a distinct kind of rational pressure—substantive rationality with (what I call) justificatory pressure and structural rationality with attitudinal pressure. The former is generated by one’s reasons while the latter is generated by one’s attitudes. Requirements turn out to be at best a footnote in the theory of rationality.
The Unified Foundational Ontology (UFO) was developed over the last two decades by consistently putting together theories from areas such as formal ontology in philosophy, cognitive science, linguistics, and philosophical logics. It comprises a number of micro-theories addressing fundamental conceptual modeling notions, including entity types and relationship types. The aim of this paper is to summarize the current state of UFO, presenting a formalization of the ontology, along with the analysis of a number of cases to illustrate the application of UFO and facilitate its comparison with other foundational ontologies in this special issue. (The cases originate from the First FOUST Workshop – the Foundational Stance, an international forum dedicated to Foundational Ontology research.)
Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been thoroughly enough explored in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices.

We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.
Over the last few decades, multiple studies have examined the understanding of participants in clinical research. They show variable and often poor understanding of key elements of disclosure, such as expected risks and the experimental nature of treatments. Did the participants in these studies give valid consent? According to the standard view of informed consent they did not. The standard view holds that the recipient of consent has a duty to disclose certain information to the profferer of consent because valid consent requires that information to be understood. The contents of the understanding and disclosure requirements are therefore conceptually linked. In this paper, we argue that the standard view is mistaken. The disclosure and understanding requirements have distinct grounds tied to two different ways in which a token of consent can be rendered invalid. Analysis of these grounds allows us to derive the contents of the two requirements. It also implies that it is sometimes permissible to enroll willing participants who have not understood everything that they ought to be told about their clinical trials.
Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers — first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.
When people combine concepts, the results are often characterised as “hybrid”, “impossible”, or “humorous”. However, considered simply in terms of extensional logic, a novel concept understood as a conjunctive concept will often lack meaning, having an empty extension (consider “a tooth that is a chair”, “a pet flower”, etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior ‘asymmetric’ control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
In this paper, I propose a general taxonomy of different forms of eliminativism. In order to do so, I begin by exploring eliminativism from a broad perspective, providing a comparative picture of eliminativist projects in different domains. This exploration shows that eliminativism is a label used for a family of related types of eliminativist arguments and claims. The proposed taxonomy is an attempt to systematise those arguments and claims.
Debunking arguments—also known as etiological arguments, genealogical arguments, access problems, isolation objections, and reliability challenges—arise in philosophical debates about a diverse range of topics, including causation, chance, color, consciousness, epistemic reasons, free will, grounding, laws of nature, logic, mathematics, modality, morality, natural kinds, ordinary objects, religion, and time. What unifies the arguments is the transition from a premise about what does or doesn't explain why we have certain mental states to a negative assessment of their epistemic status. I examine the common, underlying structure of the arguments and the different strategies for motivating and resisting the premises of debunking arguments.
The distinction between objective and subjective reasons plays an important role in both folk normative thought and many research programs in metaethics. But the relation between objective and subjective reasons is unclear. This paper explores problems related to the unity of objective and subjective reasons for actions and attitudes and then offers a novel objectivist account of subjective reasons.
We defend Uniqueness, the claim that given a body of total evidence, there is a unique doxastic state that it is rational for one to be in. Epistemic rationality doesn't give you any leeway in forming your beliefs. To this end, we bring in two metaepistemological pictures of the roles played by rational evaluations. Rational evaluative terms serve to guide our practices of deference to the opinions of others, and also to help us formulate contingency plans about what to believe in various situations. We argue that Uniqueness vindicates these two roles for rational evaluations, while Permissivism clashes with them.
Nihilism is the thesis that no composite objects exist. Some ontologists have advocated abandoning nihilism in favor of deep nihilism, the thesis that composites do not exist_O, where to exist_O is to be in the domain of the most fundamental quantifier. By shifting from an existential to an existential_O thesis, the deep nihilist seems to secure all the benefits of a composite-free ontology without running afoul of ordinary belief in the existence of composites. I argue that, while there are well-known reasons for accepting nihilism, there appears to be no reason at all to accept deep nihilism. In particular, deep nihilism draws no support either from the usual arguments for nihilism or from considerations of parsimony.
I present two puzzles about the metaphysics of stores, restaurants, and other such establishments. I defend a solution to the puzzles, according to which establishments are not material objects and are not constituted by the buildings that they occupy.
The concept of mechanism in biology has three distinct meanings. It may refer to a philosophical thesis about the nature of life and biology (‘mechanicism’), to the internal workings of a machine-like structure (‘machine mechanism’), or to the causal explanation of a particular phenomenon (‘causal mechanism’). In this paper I trace the conceptual evolution of ‘mechanism’ in the history of biology, and I examine how the three meanings of this term have come to be featured in the philosophy of biology, situating the new ‘mechanismic program’ in this context. I argue that the leading advocates of the mechanismic program (i.e., Craver, Darden, Bechtel, etc.) inadvertently conflate the different senses of ‘mechanism’. Specifically, they all inappropriately endow causal mechanisms with the ontic status of machine mechanisms, and this invariably results in problematic accounts of the role played by mechanism-talk in scientific practice. I suggest that for effective analyses of the concept of mechanism, causal mechanisms need to be distinguished from machine mechanisms, and the new mechanismic program in the philosophy of biology needs to be demarcated from the traditional concerns of mechanistic biology.
Consequentialists say we may always promote the good. Deontologists object: not if that means killing one to save five. “Consequentializers” reply: this act is wrong, but it is not for the best, since killing is worse than letting die. I argue that this reply undercuts the “compellingness” of consequentialism, which comes from an outcome-based view of action that collapses the distinction between killing and letting die.
It is commonly held that p is a reason for A to ϕ only if p explains why A ought to ϕ. I argue that this view must be rejected because there are reasons for A to ϕ that would be redundant in any explanation of why A ought to ϕ.
This paper defends the relevance of philosophy in the contemporary study of concepts. With the advent of cognitive science, naturalistic and interdisciplinary theorizing about concepts has gained momentum. In this context, it has recently been argued that philosophers’ theories of concepts are not aimed at answering the questions that psychologists are interested in, thereby dismissing such philosophical contributions as scientifically otiose. We present and discuss two cases in point suggesting otherwise, in an attempt to vindicate the crucial role of philosophy in the development of empirical theories of concepts.
This is a contribution to a symposium on Amie Thomasson’s Ontology Made Easy (2015). Thomasson defends two deflationary theses: that philosophical questions about the existence of numbers, tables, properties, and other disputed entities can all easily be answered, and that there is something wrong with prolonged debates about whether such objects exist. I argue that the first thesis (properly understood) does not by itself entail the second. Rather, the case for deflationary metaontology rests largely on a controversial doctrine about the possible meanings of ‘object’. I challenge Thomasson's argument for that doctrine, and I make a positive case for the availability of the contested, unrestricted use of ‘object’.
Rose and Schaffer (forthcoming) argue that teleological thinking has a substantial influence on folk intuitions about composition. They take this to show (i) that we should not rely on folk intuitions about composition and (ii) that we therefore should not reject theories of composition on the basis of intuitions about composition. We cast doubt on the teleological interpretation of folk judgments about composition; we show how their debunking argument can be resisted, even on the assumption that folk intuitions have a teleological source; and we argue that, even if folk intuitions about composition carry no weight, theories of composition can still be rejected on the basis of the intuitions of metaphysicians.
Our primary aim in this paper is to sketch a cognitive evolutionary approach for developing explanations of social change that is anchored on the psychological mechanisms underlying normative cognition and the transmission of social norms. We throw the relevant features of this approach into relief by comparing it with the self-fulfilling social expectations account developed by Bicchieri and colleagues. After describing both accounts, we argue that the two approaches are largely compatible, but that the cognitive evolutionary approach is well-suited to encompass much of the social expectations view, whose focus on a narrow range of norms comes at the expense of the breadth the cognitive evolutionary approach can provide.