Despite its short historical moment in the sun, behaviorism has become something akin to a theoria non grata, a position that dare not be explicitly endorsed. The reasons for this are complex, of course, and they include sociological factors which we cannot consider here, but to put it briefly: many have doubted the ambition to establish law-like relationships between mental states and behavior that dispense with any sort of mentalistic or intentional idiom, judging that explanations of intelligent behavior require reference to qualia and/or mental events. Today, when behaviorism is discussed at all, it is usually in a negative manner, either as an attempt to discredit an opponent's view via a reductio, or by enabling a position to distinguish its identity and positive claims by reference to what it is (allegedly) not. In this paper, however, we argue that the ghost of behaviorism is present in influential, contemporary work in the field of embodied and enactive cognition, and even in aspects of the phenomenological tradition that these theorists draw on. Rather than take this to be a problem for these views, as some have, we argue that once the behaviorist dimensions are clarified and distinguished from the straw-man version of the view, it is in fact an asset, one which will help with the task of setting forth a scientifically reputable version of enactivism and/or philosophical behaviorism that is nonetheless not brain-centric but behavior-centric. While this is a bit like an "enemy of my enemy is my friend" strategy, as Shaun Gallagher (2019) notes, with behaviorism and enactivism sharing the enemy of classical Cartesian views and/or orthodox cognitivism in its various guises, the task of this paper is to render this alliance philosophically plausible. DOI: 10.1007/s11229-019-02432-1.
Mark Eli Kalderon's book boldly positions itself as a work in speculative metaphysics. Its point of departure is the familiar distinction between presentational and representational philosophies of perception. Kalderon notes that the latter has been more popular of late, as it is more amenable to "an account" explicating causal or counterfactual conditions on perception; but he wishes to rehabilitate the former, at least in part. One widely perceived disadvantage of presentationalism has been the way that understanding perception merely as registering the presence of things might seem to leave us vulnerable to error about the nature of what is presented. Kalderon seeks to remedy this not by dealing at length with various disjunctivist positions concerning perception which may be friendly to his position, nor by spending much time criticising opposing views, but by explicating presentationalist perception through a series of tactile metaphors, thereby providing a radically new philosophical view. He claims that we do not just 'stand before' reality, we grasp it (the metaphor survives tellingly in ordinary language), and he thereby seeks to defend a form of realism which is robust though, he admits, "pre-modern". He draws on a remarkably rich variety of thinkers to defend this position, including pre-modern and modern thinkers, and various figures from both analytic and continental philosophy. However, although there is plenty of solid scholarship here, the book is aimed at metaphysics more than the history of ideas.
This paper will seek firstly to understand Deleuze's main challenges to phenomenology, particularly as they are expressed in The Logic of Sense and What is Philosophy?, although reference will also be made to Pure Immanence and Difference and Repetition. We will then turn to a discussion of one of the few passages in which Deleuze directly engages with Merleau-Ponty, which occurs in the chapter on art in What is Philosophy? In this text, he and Guattari offer a critique of what they call the "final avatar" of phenomenology – that is, the "fleshism" that Merleau-Ponty proposes in his unfinished but justly famous work, The Visible and the Invisible. It will be argued that both Deleuze's basic criticisms of phenomenology and his and Guattari's problems with the concept of the flesh fail to adequately come to grips with Merleau-Ponty's later philosophy. Merleau-Ponty is not obviously partisan to what Deleuze finds problematic in this tradition, despite continuing to identify himself as a phenomenologist, and he is working within a surprisingly similar framework in certain key respects. In fact, in the more positive part of this paper, we will compare Merleau-Ponty's notion of flesh and Deleuze's equally infamous univocity of being, as a means to consider the broader question of the ways in which the two philosophers consider ontological thought, its meaning and its conditions. It is our belief that through properly understanding both positions, a rapprochement, or at least the foundation for one, can be established between these two important thinkers.
Phenomenology has been described as a "non-argumentocentric" way of doing philosophy, reflecting that the philosophical focus is on generating adequate descriptions of experience. But it should not be described as an argument-free zone, regardless of whether this is intended as a descriptive claim about the work of the "usual suspects" or a normative claim about how phenomenology ought to be properly practiced. If phenomenology is always at least partly in the business of arguments, then it is worth giving further attention to the role and form of phenomenological argumentation, how it interacts with its more strictly descriptive component, and the status of phenomenological claims regarding conditions for various kinds of experience. I contend that different versions of phenomenological reasoning encroach upon argument forms that are commonly thought to be antithetical to phenomenology, notably abductive reasoning, understood in terms of its role in both hypothesis generation and justification. This paper identifies two main steps to making this case. The first step takes seriously the consequences of the intrinsically dialectical aspect of phenomenology in its intersection with other modes of philosophy, the natural attitude, and non-philosophy. The second step focuses on transcendental reflection and on arguments about the conditions/structures it reveals. Together, these two steps aim to rescue phenomenology from the objection that it has an "ostrich epistemology" with regard to the ostensible purity of description, the intuition of essences, or the "conditions" ascertained through transcendental reflection.
Most readers of Sartre focus only on the works written at the peak of his influence as a public intellectual in the 1940s, notably "Being and Nothingness". "Jean-Paul Sartre: Key Concepts" aims to reassess Sartre and to introduce readers to the full breadth of his philosophy. Bringing together leading international scholars, the book examines concepts from across Sartre's career, from his initial views on the "inner life" of conscious experience, to his later conceptions of hope as the binding agent for a common humanity. The book will be invaluable to readers looking for a comprehensive assessment of Sartre's thinking - from his early influences to the development of his key concepts, to his legacy.
As neither a classical naturalist nor a non-naturalist, Merleau-Ponty appears to be a moderate or liberal naturalist. But can a phenomenologist really be a naturalist, even a liberal one? A lot hinges on how we tease this out, both as to whether it is plausible to claim Merleau-Ponty as a liberal naturalist (I argue it is), and as to whether liberal naturalism is an attractive and coherent position. Indeed, despite its important challenges to orthodox naturalism, there are arguably two traps for it to avoid. If it becomes too liberal, we get dualism, or an ontological pluralism that is difficult to distinguish from constructivism; or, in seeking to sidestep that metaphysical dilemma, there is sometimes an insistence on an overly neat methodological separation between description/understanding and explanation that is belied in practice (both scientific and philosophical). It is doubtful that such positions can legitimately claim to be naturalist in orientation, liberal or not. Merleau-Ponty's philosophy avoids these traps, however, and it is thus a useful resource for contemporary work trying to navigate between scientific naturalism and non-naturalism.
The role of the body in cognition is acknowledged across a variety of disciplines, even if the precise nature and scope of that contribution remain contentious. As a result, most philosophers working on embodiment—e.g. those in embodied cognition, enactivism, and ‘4e’ cognition—interact with the life sciences as part of their interdisciplinary agenda. Despite this, a detailed engagement with emerging findings in epigenetics and post-genomic biology has been missing from proponents of this embodied turn. Surveying this research provides an opportunity to rethink the relationship between embodiment and genetics, and we argue that the balance of current epigenetic research favours the extension of an enactivist approach to mind and life, rather than the extended functionalist view of embodied cognition associated with Andy Clark and Mike Wheeler, which is more substrate neutral.
This volume celebrates the various facets of Alan Turing (1912–1954), the British mathematician and computing pioneer, widely considered the father of computer science. It is aimed at the general reader, with additional notes and references for those who wish to explore the life and work of Turing more deeply.

The book is divided into eight parts, covering different aspects of Turing's life and work.

Part I presents various biographical aspects of Turing, some from a personal point of view.

Part II presents Turing's universal machine (now known as a Turing machine), which provides a theoretical framework for reasoning about computation. His 1936 paper on this subject is widely seen as providing the starting point for the field of theoretical computer science.

Part III presents Turing's work on codebreaking during World War II. While the War was a disastrous interlude for many, for Turing it provided a nationally important outlet for his creative genius. It is not an overstatement to say that without Turing, the War would probably have lasted longer, and may even have been lost by the Allies. The sensitive nature of Turing's wartime work meant that much of it has been revealed only relatively recently.

Part IV presents Turing's post-War work on computing, both at the National Physical Laboratory and at the University of Manchester. He made contributions to both hardware design, through the ACE computer at the NPL, and software, especially at Manchester.

Part V covers Turing's contribution to machine intelligence (now known as Artificial Intelligence or AI). Although Turing did not coin the term, he can be considered a founder of this field, which is still active today, having authored a seminal paper in 1950.

Part VI covers morphogenesis, Turing's last major scientific contribution, on the generation of seemingly random patterns in biology and on the mathematics behind such patterns. Interest in this area has increased rapidly in recent times in the field of bioinformatics, with Turing's 1952 paper on this subject being frequently cited.

Part VII presents some of Turing's mathematical influences and achievements. Turing was remarkably free of external influences, with few co-authors – Max Newman was an exception and acted as a mathematical mentor in both Cambridge and Manchester.

Part VIII considers Turing in a wider context, including his influence and legacy in science and in the public consciousness.

Reflecting Turing's wide influence, the book includes contributions by authors from a wide variety of backgrounds. Contemporaries provide reminiscences, while there are perspectives by philosophers, mathematicians, computer scientists, historians of science, and museum curators. Some of the contributors gave presentations at Turing Centenary meetings in 2012 at Bletchley Park, King's College Cambridge, and Oxford University, and several of the chapters in this volume are based on those presentations – some through transcription of the original talks, especially for Turing's contemporaries, now aged in their 90s. Sadly, some contributors died before the publication of this book, hence its dedication to them.

For those interested in personal recollections, Chapters 2, 3, 11, 12, 16, 17, and 36 will be of interest. For philosophical aspects of Turing's work, see Chapters 6, 7, 26–31, and 41. Mathematical perspectives can be found in Chapters 35 and 37–39. Historical perspectives can be found in Chapters 4, 8, 9, 10, 13–15, 18, 19, 21–25, 34, and 40. With respect to Turing's body of work, the treatment in Parts II–VI is broadly chronological. We have attempted to be comprehensive with respect to all the important aspects of Turing's achievements, and the book can be read cover to cover, or the chapters can be tackled individually if desired. There are cross-references between chapters where appropriate, and some chapters will inevitably overlap.

We hope that you enjoy this volume as part of your library and that you will dip into it whenever you wish to enter the multifaceted world of Alan Turing.
What is the phenomenology of hope? A common view is that hope has a generally positive and pleasant affective tone. This rosy depiction, however, has recently been challenged. Certain hopes, it has been objected, are such that they are either entirely negative in valence or neutral in tone. In this paper, I argue that this challenge has only limited success. In particular, I show that it only applies to one sense of hope but leaves another sense—one that is implicitly but widely employed in the hope literature—untouched. Moreover, I argue that hope construed in this latter sense is inherently positively valenced. The paper concludes by discussing some of the implications of this defense of hope's positive phenomenology, including the ontological question of whether hope is an emotion.
I argue that the social dimension of alienation, as discussed by Williams and Railton, has been underappreciated. The lesson typically drawn from their exchange is that moral theory poses a threat to the internal integrity of the agent, but there is a parallel risk that moral theory will implicitly construe agents as constitutively alienated from one another. I argue that a satisfying account of agency will need to make room for what I call ‘genuine ethical contact’ with others, both as concrete objects in the world external to ourselves and as subjects who can recognize us reciprocally.
In order to better understand the topic of hope, this paper argues that two separate theories are needed: one for hoping, and the other for hopefulness. This bifurcated approach is warranted by the observation that the word 'hope' is polysemous: it is sometimes used to refer to hoping and sometimes to feeling or being hopeful. Moreover, these two senses of 'hope' are distinct, as a person can hope for some outcome yet not simultaneously feel hopeful about it. I argue that this distinction between hoping and hopefulness is not always observed or fully appreciated in the literature, and that its neglect has caused much confusion. This paper then sketches what theorizing about hope looks like in light of this clarification and discusses some of its implications.
A natural and increasingly popular account of how to revise our logical beliefs treats the revision of logic analogously to the revision of scientific theories. I investigate this approach and argue that simple applications of abductive methodology to logic result in revision-cycles, developing a detailed case study of an actual dispute with this property. This is problematic if we take abductive methodology to provide justification for revising our logical framework. I then generalize the case study, pointing to similarities with more recent and popular heterodox logics such as naïve logics of truth. I use this discussion to motivate a constraint—logical partisanhood—on the uses of such methodology: roughly, both the proposed alternative and our actual background logic must be able to agree that moving to the alternative logic is no worse than staying put.
Iris Murdoch's The Sovereignty of Good—especially the first essay, "The Idea of Perfection"—is often associated with a critique of a certain picture of agency and its proper place in ethical thought. Implicit in this critique, however, is an alternative, much richer picture. I propose a reading of Murdochian agency in terms of the continuous activity of cultivating and refining a distinctive practical standpoint, and I apply this reading to her account of moral progress. For Murdoch, moral progress depends on transcending egoism and achieving clear perception of a normatively saturated reality, but it would be a mistake to think of egoism in terms of selfishness, or clarity in terms of altruism. Rather, I argue, Murdochian moral progress requires overcoming socially conditioned and often ideological forms of alienation, and making the social conditions that inform our practical standpoints self-conscious.
This paper develops an argument against causal decision theory. I formulate a principle of preference, which I call the Guaranteed Principle. I argue that the preferences of rational agents satisfy the Guaranteed Principle, that the preferences of agents who embody causal decision theory do not, and hence that causal decision theory is false.
Etiquette and other merely formal normative standards like legality, honor, and rules of games are taken less seriously than they should be. While these standards are not intrinsically reason-providing in the way morality is often taken to be, they nonetheless play an important role in our practical lives: we collectively treat them as important for assessing the behavior of ourselves and others and as licensing particular forms of sanction for violations. This chapter develops a novel account of the normativity of formal standards on which the role they play in our practical lives explains a distinctive kind of reason to obey them. We have this kind of reason to be polite because etiquette is important to us. We also have this kind of reason to be moral because morality is important to us. This parallel suggests that the importance we assign to morality is insufficient to justify its being substantive.
This paper proposes a new framework for thinking about hope, with certain unexpected consequences. Specifically, I argue that a shift in focus from locutions like “x hopes that” and “x is hoping that” to “x is hopeful that” and “x has hope that” can improve our understanding of hope. This approach, which emphasizes hopefulness as the central concept, turns out to be more revealing and fruitful in tackling some of the issues that philosophers have raised about hope, such as the question of how hope can be distinguished from despair or how people can have differing strengths in hope. It also allows us to see that many current accounts of hope, far from being rivals, are actually compatible with one another.
Sometimes a fact can play a role in a grounding explanation, but the particular content of that fact makes no difference to the explanation—any fact would do in its place. I call these facts vacuous grounds. I show that applying the distinction between vacuous and non-vacuous grounds allows us to give a principled solution to Kit Fine and Stephen Kramer's paradox of ground. This paradox shows that on minimal assumptions about grounding and minimal assumptions about logic, grounding turns out to be reflexive, contra its intuitive character. I argue that we should never have accepted that grounding is irreflexive in the first place; the intuitions that support irreflexivity plausibly only require that grounding be non-vacuously irreflexive. Fine and Kramer's paradox relies, essentially, on a case of vacuous grounding and is thus no problem for this account.
This paper argues that epistemic errors rooted in group- or identity-based biases, especially those pertaining to disability, are undertheorized in the literature on medical error. After sketching dominant taxonomies of medical error, we turn to the field of social epistemology to understand the role that epistemic schemas play in contributing to medical errors that disproportionately affect patients from marginalized social groups. We examine the effects of this unequal distribution through a detailed case study of ableism. There are four primary mechanisms through which the epistemic schema of ableism distorts communication between nondisabled physicians and disabled patients: testimonial injustice, epistemic overconfidence, epistemic erasure, and epistemic derailing. Measures against epistemic injustices in general and against schema-based medical errors in particular are ultimately issues of justice that must be better addressed at all levels of health care practice.
I argue that metaethicists should be concerned with two kinds of alienation that can result from theories of normativity: alienation between an agent and her reasons, and alienation between an agent and the concrete others with whom morality is principally concerned. A theory that cannot avoid alienation risks failing to make sense of central features of our experience of being agents, in whose lives normativity plays an important role. The twin threats of alienation establish two desiderata for theories of normativity; however, I argue that they are difficult to jointly satisfy.
I distinguish two ways of developing anti-exceptionalist approaches to logical revision. The first emphasizes comparing the theoretical virtuousness of developed bodies of logical theories, such as classical and intuitionistic logic. I'll call this whole theory comparison. The second attempts local repairs to problematic bits of our logical theories, such as dropping excluded middle to deal with intuitions about vagueness. I'll call this the piecemeal approach. I then briefly discuss a problem I've developed elsewhere for comparisons of logical theories. Essentially, the problem is that a pair of logics may each evaluate the alternative as superior to themselves, resulting in oscillation between logical options. The piecemeal approach offers a way out of this problem and might thereby seem preferable to whole theory comparisons. I go on to show that reflective equilibrium, the best known piecemeal method, has deep problems of its own when applied to logic.
Among medieval Aristotelians, William of Ockham defends a minimalist account of artifacts, assigning to statues and houses and beds a unity that is merely spatial or locational rather than metaphysical. Thus, in contrast to his predecessors, Thomas Aquinas and Duns Scotus, he denies that artifacts become such by means of an advening ‘artificial form’ or ‘form of the whole’ or any change that might tempt us to say that we are dealing with a new thing (res). Rather, he understands artifacts as per accidens composites of parts that differ, but not so much that only divine power could unite them, as in the matter and form of a proper substance. For Ockham, artifacts are essentially rearrangements, via human agency, of already existing things, like the clay shaped by a sculptor into a statue or the stick and bristles and string one might fashion into a broom. Ockham does not think that a new thing is thereby created, although his emphasis on the contribution of human artisans seems to leave questions about the ontological status of their agency open. In any case, there are no such things as natural statues, any more than substances created by human artifice.
I argue that certain species of belief, such as mathematical, logical, and normative beliefs, are insulated from a form of Harman-style debunking argument whereas moral beliefs, the primary target of such arguments, are not. Harman-style arguments have been misunderstood as attempts to directly undermine our moral beliefs. They are rather best given as burden-shifting arguments, concluding that we need additional reasons to maintain our moral beliefs. If we understand them this way, then we can see why moral beliefs are vulnerable to such arguments while mathematical, logical, and normative beliefs are not—the very construction of Harman-style skeptical arguments requires the truth of significant fragments of our mathematical, logical, and normative beliefs, but requires no such thing of our moral beliefs. Given this property, Harman-style skeptical arguments against logical, mathematical, and normative beliefs are self-effacing; doubting these beliefs on the basis of such arguments results in the loss of our reasons for doubt. But we can cleanly doubt the truth of morality.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when, and only when, it increases the reliability of perception.
Philosophical debates about the metaphysics of time typically revolve around two contrasting views of time. On the A-theory, time is something that itself undergoes change, as captured by the idea of the passage of time; on the B-theory, all there is to time is events standing in before/after or simultaneity relations to each other, and these temporal relations are unchanging. Philosophers typically regard the A-theory as being supported by our experience of time, and they take it that the B-theory clashes with how we experience time and therefore faces the burden of having to explain away that clash. In this paper, we investigate empirically whether these intuitions about the experience of time are shared by the general public. We asked directly for people’s subjective reports of their experience of time—in particular, whether they believe themselves to have a phenomenology as of time’s passing—and we probed their understanding of what time’s passage in fact is. We find that a majority of participants do share the aforementioned intuitions, but interestingly a minority do not.
The conventional wisdom regarding the aims and shortcomings of Kantian constructivism is mistaken. The aim of metaethical constructivism is not to provide a naturalistic account of the objectivity of normative facts by deriving substantive morality from a conception of agency so thin as to be uncontroversial (a task at which it is generally regarded to have failed). Its aim is to explain the “grip” that normative facts have on us—to avoid what I call the problem of normative alienation. So understood, Kantian constructivism faces two problems: that determinate normative facts cannot be derived from agency and that its individualistic conception of agency cannot account for the sociality of morality. I propose and elaborate a social conception of agency that is better able to address the latter problem while still avoiding normative alienation, and evaluate two different strategies for responding to the former problem.
In this paper, I address the question of whether metaphysics and theology are or can become science. After examining the qualities of contemporary science, which evolved from an earlier historic concept of any body of literature into a formal method for obtaining empirical knowledge, I apply that standard to metaphysics and theology. I argue that neither metaphysics nor theology practices a scientific method or generates scientific knowledge. Worse, I conclude that both metaphysics and theology are at best purely cultural projects—exercises in exegesis of local cultural and religious ideas and language—and, therefore, that other cultures have produced or would produce radically different schemes of metaphysics or theology. At its worst, metaphysics is speculation about the unknowable, while theology is rumination about the imaginary.
This paper develops a form of moral actualism that can explain the procreative asymmetry. Along the way, it defends and explains the attractive asymmetry: the claim that although an impermissible option can be self-conditionally permissible, a permissible option cannot be self-conditionally impermissible.
This is an opinionated overview of the Frege-Geach problem, in both its historical and contemporary guises. Covers Higher-order Attitude approaches, Tree-tying, Gibbard-style solutions, and Schroeder's recent A-type expressivist solution.
Consequentialists often assume rational monism: the thesis that options are always made rationally permissible by the maximization of the selfsame quantity. This essay argues that consequentialists should reject rational monism and instead accept rational pluralism: the thesis that, on different occasions, options are made rationally permissible by the maximization of different quantities. The essay then develops a systematic form of rational pluralism which, unlike its rivals, is capable of handling both the Newcomb problems that challenge evidential decision theory and the unstable problems that challenge causal decision theory.
Expressivists explain the expression relation which obtains between sincere moral assertion and the conative or affective attitude thereby expressed by appeal to the relation which obtains between sincere assertion and belief. In fact, they often explicitly take the relation between moral assertion and their favored conative or affective attitude to be exactly the same as the relation between assertion and the belief thereby expressed. If this is correct, then we can use the identity of the expression relation in the two cases to test the expressivist account as a descriptive or hermeneutic account of moral discourse. I formulate one such test, drawing on a standard explanation of Moore's paradox. I show that if expressivism is correct as a descriptive account of moral discourse, then we should expect versions of Moore's paradox where we explicitly deny that we possess certain affective or conative attitudes. I then argue that the constructions that mirror Moore's paradox are not incoherent. It follows that expressivism is either incorrect as a hermeneutic account of moral discourse or that the expression relation which holds between sincere moral assertion and affective or conative attitudes is not identical to the relation which holds between sincere non-moral assertion and belief. A number of objections are canvassed and rejected.
Introductory and advanced textbooks in bioethics focus almost entirely on issues that disproportionately affect disabled people and that centrally deal with becoming or being disabled. However, such textbooks typically omit critical philosophical reflection on disability, lack engagement with decades of empirical and theoretical scholarship spanning the social sciences and humanities in the multidisciplinary field of disability studies, and avoid serious consideration of the history of disability activism in shaping social, legal, political, and medical understandings of disability over the last fifty years. For example, longstanding discussions on topics such as euthanasia, physician aid-in-dying, pre-implantation genetic diagnosis, prenatal testing, selective abortion, enhancement, patient autonomy, beneficence, non-maleficence, and health care rationing all tend to be premised on shared and implicit assumptions regarding disability, especially in relation to quality of life, yet with too little recognition of the way that “disability” is itself a topic of substantial research and scholarly disagreement across multiple fields. This is not merely a concern for academic and medical education; as an applied field tied to one of the largest economic sectors of most industrialized nations, bioethics has a direct impact on healthcare education, practice, policy, and, thereby, the health outcomes of existing and future populations. It is in light of these pressing issues that the Disability Bioethics Reader is the first reader to introduce students to core bioethical issues and concepts through the lens of critical disability studies and philosophy of disability. The Disability Bioethics Reader will include over thirty-five chapters covering key areas such as: critical histories and state-of-the-field analyses of modern medicine, bioethics, disability studies, and philosophy of medicine; methods in bioethics; concerns at the edge- and end-of-life; enhancement; disability, quality of life, and well-being; prenatal testing and abortion; invisible disabilities; chronic illness; healthcare justice; genetics and genomics; intellectual disability and neurodiversity; ethics and diagnosis; and epistemic injustice in healthcare.
I investigate syntactic notions of theoretical equivalence between logical theories and a recent objection thereto. I show that this recent criticism of syntactic accounts, as extensionally inadequate, is unwarranted by developing an account which is plausibly extensionally adequate and more philosophically motivated. This is important for recent anti-exceptionalist treatments of logic since syntactic accounts require less theoretical baggage than semantic accounts.
This paper develops a view on which: (a) all fundamental facts are absolute, (b) some facts do not supervene on the fundamental facts, and (c) only relative facts fail to supervene on the fundamental facts.
The paper offers a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is given by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings of folk psychology, and from much of the epistemology literature. But it is principled and empirically grounded, and shows good prospects for yielding the desired epistemological verdicts. The paper articulates and elaborates the theory, drawing out some of its consequences. Toward the end, the fleshed-out theory is applied to two important case studies: hallucination and cognitive penetration of perception.
Seeking a decision theory that can handle both the Newcomb problems that challenge evidential decision theory and the unstable problems that challenge causal decision theory, some philosophers recently have turned to ‘graded ratifiability’. The graded ratifiability approach to decision theory is, however, despite its virtues, unsatisfactory; for it conflicts with the platitude that it is always rationally permissible for an agent to knowingly choose their best option.
Why do promises give rise to reasons? I consider a quadruple of possibilities which I think will not work, and then sketch the explanation of the normativity of promising I find more plausible—that it is constitutive of the practice of promising that promise-breaking implies liability for blame and that we take liability for blame to be a bad thing. This effects a reduction of the normativity of promising to conventionalism about liability together with instrumental normativity and desire-based reasons. This is important for a number of reasons, but the most important reason is that this style of account can be extended to account for nearly all normativity—one notable exception being instrumental normativity itself. Success in the case of promises suggests a general reduction of normativity to conventions and instrumental normativity. But success in the case of promises is already quite interesting and does not depend essentially on the general claim about normativity.
The New Evil Demon Problem is supposed to show that straightforward versions of reliabilism are false: reliability is not necessary for justification after all. I argue that it does no such thing. The reliabilist can count a number of beliefs as justified even in demon worlds, others as unjustified but having positive epistemic status nonetheless. The remaining beliefs—primarily perceptual beliefs—are not, on further reflection, intuitively justified after all. The reliabilist is right to count these beliefs as unjustified in demon worlds, and it is a challenge for the internalist to be able to do so as well.
Logical Indefinites. Jack Woods - 2014 - Logique et Analyse 227 (Special Issue edited by Julien Murzi and Massimiliano Carrara): 277-307.
I argue that we can and should extend Tarski's model-theoretic criterion of logicality to cover indefinite expressions like Hilbert's ɛ operator, Russell's indefinite description operator η, and abstraction operators like 'the number of'. I draw on this extension to discuss the logical status of both abstraction operators and abstraction principles.
In this paper, I argue that for the purposes of ordinary reasoning, sentences about properties of concrete objects can be replaced with sentences concerning how things in our universe would be related to inscriptions were there a pluriverse. Speaking loosely, pluriverses are composites of universes that collectively realize every way a universe could possibly be. As such, pluriverses exhaust all possible meanings that inscriptions could take. Moreover, because universes necessarily do not influence one another, our universe would not be any different intrinsically if there were a pluriverse. These two facts enable anti-realists about abstract objects to replace, e.g. talk of anatomical features with talk of the inscriptions concerning anatomical structure that would exist were there a pluriverse. The availability of such replacements enables anti-realists to carry out essential ordinary reasoning without referring to properties, thereby making room for a consistent anti-realist worldview. The inscriptions of the would-be pluriverse are so numerous and varied that sentences about them can play the roles in ordinary reasoning served by simple sentences about properties of concrete objects.
Philosophical arguments usually are, and nearly always should be, abductive. Across many areas, philosophers are starting to recognize that often the best we can do in theorizing about some phenomenon is to put forward our best overall account of it, warts and all. This is especially true in esoteric areas like logic, aesthetics, mathematics, and morality, where the data to be explained are often based in our stubborn intuitions.

While this methodological shift is welcome, it's not without problems. Abductive arguments involve significant theoretical resources which themselves can be part of what's being disputed. This means that we will sometimes find otherwise good arguments which suggest their own grounds are problematic. In particular, sometimes revising our beliefs on the basis of such an argument can undermine the very justification we used in that argument.

This feature, which I'll call self-effacingness, occurs most dramatically in arguments against our standing views on the esoteric subject matters mentioned above: logic, mathematics, aesthetics, and morality. This is because these subject matters all play a role in how we reason abductively. This isn't an idle fact; we can resist some challenges to our standing beliefs about these subject matters exactly because the challenges are self-effacing. The self-effacing character of certain arguments is thus both a benefit and a limitation of the abductive turn and deserves serious attention. I aim to give it the attention it deserves.
Can beliefs that are not consciously formulated serve as part of an agent's evidence for other beliefs? A common view says no: any belief that is psychologically immediate is also epistemically immediate. I argue that some unconscious beliefs can serve as evidence, but other unconscious beliefs cannot. Person-level beliefs can serve as evidence, but subpersonal beliefs cannot. I try to clarify the nature of the personal/subpersonal distinction and to show how my proposal illuminates various epistemological problems and provides a principled framework for solving other problems.
I defend normative subjectivism against the charge that believing in it undermines the functional role of normative judgment. In particular, I defend it against the claim that believing that our reasons change from context to context is problematic for our use of normative judgments. To do so, I distinguish two senses of normative universality and normative reasons—evaluative universality and reasons, and ontic universality and reasons. The former captures how even subjectivists can evaluate the actions of those subscribing to other conventions; the latter explicates how their reasons differ from ours. I then show that four aspects of the functional role of normativity—evaluation of our own and others' actions and reasons, normative communication, hypothetical planning, and evaluating counternormative conditionals—require at most that our normative systems be evaluatively universal. Yet reasonable subjectivist positions need not deny evaluative universality.
It is regrettably common for theorists to attempt to characterize the Humean dictum that one can’t get an ‘ought’ from an ‘is’ just in broadly logical terms. We here address an important new class of such approaches which appeal to model-theoretic machinery. Our complaint about these recent attempts is that they interfere with substantive debates about the nature of the ethical. This problem, developed in detail for Daniel Singer’s and Gillian Russell and Greg Restall’s accounts of Hume’s dictum, is of a general type arising for the use of model-theoretic structures in cashing out substantive philosophical claims: the question of whether an abstract model-theoretic structure successfully interprets something often involves taking a stand on non-trivial issues surrounding the thing. In the particular case of Hume’s dictum, given reasonable conceptual or metaphysical claims about the ethical, Singer’s and Russell and Restall’s accounts treat obviously ethical claims as descriptive and vice versa. Consequently, their model-theoretic characterizations of Hume’s dictum are not metaethically neutral. This encourages skepticism about whether model-theoretic machinery suffices to provide an illuminating distinction between the ethical and the descriptive.
Cognitive penetration of perception is the idea that what we see is influenced by such states as beliefs, expectations, and so on. A perceptual belief that results from cognitive penetration may be less justified than a nonpenetrated one. Inferentialism is a kind of internalist view that tries to account for this by claiming that some experiences are epistemically evaluable, on the basis of why the perceiver has that experience, and the familiar canons of good inference provide the appropriate standards by which experiences are evaluated. I examine recent defenses of inferentialism by Susanna Siegel, Peter Markie, and Matthew McGrath and argue that the prospects for inferentialism are dim.
In this book Alan Haworth tends to sneer at libertarians. However, there are, I believe, a few sound criticisms. I have always held similar opinions of Murray Rothbard's and Friedrich Hayek's definitions of liberty and coercion, Robert Nozick's account of natural rights, and Hayek's spontaneous-order arguments. I urge believers of these positions to read Haworth. But I don't personally know many libertarians who believe them (or who regard Hayek as a libertarian).
This paper puts forward an argument for a systematic, technical approach to formulation in verbal interaction. I see this as a kind of expansion of Sacks’ membership categorization analysis, and as something that is not offered (at least not in a fully developed form) by sequential analysis, the currently dominant form of conversation analysis. In particular, I suggest a technique for the study of “occasioned semantics,” that is, the study of structures of meaningful expressions in actual occasions of conversation. I propose that meaning and rhetoric be approached through consideration of various dimensions or operations or properties, including, but not limited to, contrast and co-categorization, generalization and specification, scaling, and marking. As illustration, I consider a variety of cases, focused on generalization and specification. The paper can be seen as a return to some classical concerns with meaning, as illuminated by more recent insights into indexicality, social action, and interaction in recorded talk.
Much of the intuitive appeal of evidentialism results from conflating two importantly different conceptions of evidence. This is most clear in the case of perceptual justification, where experience is able to provide evidence in one sense of the term, although not in the sense that the evidentialist requires. I argue this, in part, by relying on a reading of the Sellarsian dilemma that differs from the version standardly encountered in contemporary epistemology, one that is aimed initially at the epistemology of introspection but which generalizes to theories of perceptual justification as well.
In this open peer commentary, we concur with the three target articles' analyses of, and positions on, abortion in the special issue on Roe v. Wade as the exercise of reproductive liberty essential to the bioethical commitment to patient autonomy and self-determination. Our commentary augments that analysis by explicating more fully the concept of fetal personhood, which is crucial to Roe. We explain that the development and use of predictive reproductive technologies over the fifty years since Roe has changed the literal image, and thereby the epistemological landscape, through which a prospective parent comes to know the fetus. The logic of Roe required a legal and ethical denial of fetal personhood in order to prioritize maternal autonomy over claims to fetal moral personhood. Our claim is that such a denial may be more complicated today. The fetal person that genetic testing and reproductive imaging now present to prospective parents is an increasingly individualized, distinct, medicalized picture of a developing person with which a parent can either identify or from which a parent can differentiate. In contrast, the fetal person of Roe was an abstract and vague figure stripped of most human particulars, a pregnancy rather than the specific individualized human entity reproductive technology now presents as a person to prospective parents. We discuss the implications of this shift and call for a more capacious analysis of reproductive ethics that works towards both reproductive and disability justice.
This provocative study presents philological, philosophical, and historical arguments that with the Greek term καθῆκον and its Latin equivalent officium the ancient Stoics invented a new concept that anticipated the modern notion of moral duty, for example, Pflicht in Kant. Scholars began to shift from translating kathēkon as "duty" to translating it as "appropriate or fitting action" in the late 1800s, according to Visnjic. The usage of the verb kathēkein in Greek literature prior to the Stoics suggests to him that it described something prescribed by law, tradition, or decree. Visnjic believes the etymology of kathēkon offered by Zeno, the founder of the Stoa, was meant to reveal the...