The paper points out that the modern formulation of Bohm's quantum theory known as Bohmian mechanics is committed only to particles' positions and a law of motion. We explain how this view can avoid the open questions facing the traditional view, according to which Bohm's theory is committed to a wave-function that is a physical entity over and above the particles, albeit one defined on configuration space instead of three-dimensional space. We then enquire into the status of the law of motion, elaborating on how the main philosophical options for grounding a law of motion, namely Humeanism and dispositionalism, can be applied to Bohmian mechanics. In conclusion, we sketch out how these options apply to primitive ontology approaches to quantum mechanics in general.
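As a point of reference, the law of motion at issue is standardly the guiding equation of Bohmian mechanics; a minimal statement, assuming the usual non-relativistic N-particle formulation, is

\[ \frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\frac{\nabla_k \psi}{\psi}\,(Q_1,\ldots,Q_N), \qquad k = 1,\ldots,N, \]

where the Q_k are the actual particle positions, the m_k their masses, and \psi the wave-function on configuration space (evolving by the Schrödinger equation); the particles' velocities are thus fixed by \psi evaluated at the actual configuration.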
Following neo-Aristotelians Alasdair MacIntyre and Martha Nussbaum, we claim that humans are story-telling animals who learn from the stories of diverse others. Moral agents use rational emotions, such as compassion which is our focus here, to imaginatively reconstruct others' thoughts, feelings and goals. In turn, this imaginative reconstruction plays a crucial role in deliberating and discerning how to act. A body of literature has developed in support of the role narrative artworks (i.e. novels and films) can play in allowing us the opportunity to engage imaginatively and sympathetically with diverse characters and scenarios in a safe protected space that is created by the fictional world. By practising what Nussbaum calls a 'loving attitude', her version of ethical attention, we can form virtuous habits that lead to phronesis (practical wisdom). In this paper, and taking compassion as an illustrative focus, we examine the ways that students' moral education might usefully develop from engaging with narrative artworks through Philosophy for Children (P4C), where philosophy is a praxis, conducted in a classroom setting using a Community of Inquiry (CoI). We argue that narrative artworks provide useful stimulus material to engage students, generate student questions, and motivate philosophical dialogue and the formation of good habits which, in turn, supports the argument for philosophy to be taught in schools.
Despite being assailed for decades by disability activists and disability studies scholars spanning the humanities and social sciences, the medical model of disability—which conceptualizes disability as an individual tragedy or misfortune due to genetic or environmental insult—still today structures many cases of patient–practitioner communication. Synthesizing and recasting work done across critical disability studies and philosophy of disability, I argue that the reason the medical model of disability remains so gallingly entrenched is what I call the "ableist conflation" of disability with pain and suffering. In an effort to better equip healthcare practitioners and those invested in health communication to challenge disability stigma, discrimination, and oppression, I lay out the logic of the ableist conflation and interrogate examples of its use. I argue that insofar as the semiosis of pain and suffering is structured by the lived experience of unwelcome bodily transition or variation, experiences of pain inform the ableist conflation by preemptively tying such variability and its attendant disequilibrium to disability. I conclude by discussing how philosophy of disability and critical disability studies might better inform health communication concerning disability, offering a number of conceptual distinctions toward that end.
This is the first chapter of our edited collection of essays on the nature and ethics of blame. In this chapter we introduce the reader to contemporary discussions about blame and its relationship to other issues (e.g. free will and moral responsibility), and we situate the essays in this volume with respect to those discussions.
Nelson Goodman's distinction between autographic and allographic arts is appealing, we suggest, because it promises to resolve several prima facie puzzles. We consider and rebut a recent argument that alleges that digital images explode the autographic/allographic distinction. Regardless, there is another familiar problem with the distinction, especially as Goodman formulates it: it seems to entirely ignore an important sense in which all artworks are historical. We note in reply that some artworks can be considered both as historical products and as formal structures. Talk about such works is ambiguous between the two conceptions. This allows us to recover Goodman's distinction: art forms that are ambiguous in this way are allographic. With that formulation settled, we argue that digital images are allographic. We conclude by considering the objection that digital photographs, unlike other digital images, would count as autographic by our criterion; we reply that this points to the vexed nature of photography rather than any problem with the distinction.
Heidegger, Art, and Postmodernity offers a radical new interpretation of Heidegger's later philosophy, developing his argument that art can help lead humanity beyond the nihilistic ontotheology of the modern age. Providing pathbreaking readings of Heidegger's 'The Origin of the Work of Art' and his notoriously difficult Contributions to Philosophy, this book explains precisely what postmodernity meant for Heidegger, the greatest philosophical critic of modernity, and what it could still mean for us today. Exploring these issues, Iain D. Thomson examines several postmodern works of art, including music, literature, painting and even comic books, from a post-Heideggerian perspective. Clearly written and accessible, this book will help readers gain a deeper understanding of Heidegger and his relation to postmodern theory, popular culture and art.
According to a widespread view in metaphysics and philosophy of science, all explanations involve relations of ontic dependence between the items appearing in the explanandum and the items appearing in the explanans. I argue that a family of mathematical cases, which I call "viewing-as explanations", are incompatible with the Dependence Thesis. These cases, I claim, feature genuine explanations that aren't supported by ontic dependence relations. Hence the thesis isn't true in general. The first part of the paper defends this claim and discusses its significance. The second part of the paper considers whether viewing-as explanations occur in the empirical sciences, focusing on the case of so-called fictional models. It's sometimes suggested that fictional models can be explanatory even though they fail to represent actual worldly dependence relations. Whether or not such models explain, I suggest, depends on whether we think scientific explanations necessarily give information relevant to intervention and control. Finally, I argue that counterfactual approaches to explanation also have trouble accommodating viewing-as cases.
Clark and Chalmers (1998) defend the hypothesis of an 'Extended Mind', maintaining that beliefs and other paradigmatic mental states can be implemented outside the central nervous system or body. Aspects of the problem of 'language acquisition' are considered in the light of the extended mind hypothesis. Rather than 'language' as typically understood, the object of study is something called 'utterance-activity', a term of art intended to refer to the full range of kinetic and prosodic features of the on-line behaviour of interacting humans. It is argued that utterance-activity is plausibly regarded as jointly controlled by the embodied activity of interacting people, and that it contributes to the control of their behaviour. By means of specific examples it is suggested that this complex joint control facilitates easier learning of at least some features of language. This in turn suggests a striking form of the extended mind, in which infants' cognitive powers are augmented by those of the people with whom they interact.
Open peer commentary on the article “Sensorimotor Direct Realism: How We Enact Our World” by Michael Beaton. Upshot: In light of the construal of sensorimotor theory offered by the target article, this commentary examines the role the theory should admit for internal representation.
The problem of truth in fiction concerns how to tell whether a given proposition is true in a given fiction. Thus far, the nearly universal consensus has been that some propositions are 'implicitly true' in some fictions: such propositions are not expressed by any explicit statements in the relevant work, but are nevertheless held to be true in those works on the basis of some other set of criteria. I call this family of views 'implicitism'. I argue that implicitism faces serious problems, whereas the opposite view is much more plausible than has previously been thought. After mounting a limited defence of explicitism, I explore a difficult problem for the view and discuss some possible responses.
This article defends the Doomsday Argument, the Halfer Position in Sleeping Beauty, the Fine-Tuning Argument, and the applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics. It argues that all four problems have the same structure, and it gives a unified treatment that uses simple models of the cases and no controversial assumptions about confirmation or self-locating evidence. The article argues that the troublesome feature of all these cases is not self-location but selection effects.
Although understanding plays an important role in epistemology, philosophy of science, and more recently in moral philosophy and aesthetics, its nature is still much contested. One attractive framework attempts to reduce understanding to other familiar epistemic states. This paper explores and develops a methodology for testing such reductionist theories before offering a counterexample to a recently defended variant on which understanding reduces to what an agent knows.
Gauss's quadratic reciprocity theorem is among the most important results in the history of number theory. It's also among the most mysterious: since its discovery in the late 18th century, mathematicians have regarded reciprocity as a deeply surprising fact in need of explanation. Intriguingly, though, there's little agreement on how the theorem is best explained. Two quite different kinds of proof are most often praised as explanatory: an elementary argument that gives the theorem an intuitive geometric interpretation, due to Gauss and Eisenstein, and a sophisticated proof using algebraic number theory, due to Hilbert. Philosophers have yet to look carefully at such explanatory disagreements in mathematics. I do so here. According to the view I defend, there are two important explanatory virtues—depth and transparency—which different proofs (and other potential explanations) possess to different degrees. Although not mutually exclusive in principle, the packages of features associated with the two stand in some tension with one another, so that very deep explanations are rarely transparent, and vice versa. After developing the theory of depth and transparency and applying it to the case of quadratic reciprocity, I draw some morals about the nature of mathematical explanation.
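For reference, the theorem states that for distinct odd primes p and q, with \left(\frac{\cdot}{\cdot}\right) the Legendre symbol,

\[ \left(\frac{p}{q}\right)\left(\frac{q}{p}\right) \;=\; (-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}}, \]

so p is a quadratic residue modulo q exactly when q is a quadratic residue modulo p, except when both primes are congruent to 3 modulo 4, in which case exactly one of the two congruences is solvable.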
Philosophers of science since Nagel have been interested in the links between intertheoretic reduction and explanation, understanding and other forms of epistemic progress. Although intertheoretic reduction is widely agreed to occur in pure mathematics as well as empirical science, the relationship between reduction and explanation in the mathematical setting has rarely been investigated in a similarly serious way. This paper examines an important particular case: the reduction of arithmetic to set theory. I claim that the reduction is unexplanatory. In defense of this claim, I offer evidence from mathematical practice, and I respond to contrary suggestions due to Steinhart, Maddy, Kitcher and Quine. I then show how, even if set-theoretic reductions are generally not explanatory, set theory can nevertheless serve as a legitimate foundation for mathematics. Finally, some implications of my thesis for philosophy of mathematics and philosophy of science are discussed. In particular, I suggest that some reductions in mathematics are probably explanatory, and I propose that differing standards of theory acceptance might account for the apparent lack of unexplanatory reductions in the empirical sciences.
In response to historical challenges, advocates of a sophisticated variant of scientific realism emphasize that theoretical systems can be divided into numerous constituents. Setting aside any epistemic commitment to the systems themselves, they maintain that we can justifiably believe those specific constituents that are deployed in key successful predictions. Stathis Psillos articulates an explicit criterion for discerning exactly which theoretical constituents qualify. I critique Psillos's criterion in detail. I then test the more general deployment realist intuition against a set of well-known historical cases, whose significance has, I contend, been overlooked. I conclude that this sophisticated form of realism remains threatened by the historical argument that prompted it. Outline: A criterion for scientific realism; Assessing the criterion; A return to the crucial insight: responsibility; A few case studies; Assessing deployment realism.
In this paper, I develop and defend a new adverbial theory of perception. I first present a semantics for direct-object perceptual reports that treats their object positions as supplying adverbial modifiers, and I show how this semantics definitively solves the many-property problem for adverbialism. My solution is distinctive in that it articulates adverbialism from within a well-established formal semantic framework and ties adverbialism to a plausible semantics for perceptual reports in English. I then go on to present adverbialism as a theory of the metaphysics of perception. The metaphysics I develop treats adverbial perception as a directed activity: it is an activity with success conditions. When perception is successful, the agent bears a relation to a concrete particular, but perception need not be successful; this allows perception to be fundamentally non-relational. The result is a novel formulation of adverbialism that eliminates the need for representational contents, but also treats successful and unsuccessful perceptual events as having a fundamental common factor.
We maintain that in many contexts promising to try is expressive of responsibility as a promiser. This morally significant application of promising to try speaks in favor of the view that responsible promisers favor evidentialism about promises. Contra Berislav Marušić, we contend that responsible promisers typically withdraw from promising to act and instead promise to try, in circumstances in which they recognize that there is a significant chance that they will not succeed.
David Stove reviews Selwyn Grave's History of Philosophy in Australia, and praises philosophers for thinking harder about the bases of science, mathematics and medicine than the practitioners in the field. The review is reprinted as an appendix to James Franklin's Corrupting the Youth: A History of Philosophy in Australia.
Traditionally, Aristotle is held to believe that philosophical contemplation is valuable for its own sake, but ultimately useless. In this volume, Matthew D. Walker offers a fresh, systematic account of Aristotle's views on contemplation's place in the human good. The book situates Aristotle's views against the background of his wider philosophy, and examines the complete range of available textual evidence. On this basis, Walker argues that contemplation also benefits humans as perishable living organisms by actively guiding human life activity, including human self-maintenance. Aristotle's views on contemplation's place in the human good thus cohere with his broader thinking about how living organisms live well. A novel exploration of Aristotle's views on theory and practice, this volume will interest scholars and students of both ancient Greek ethics and natural philosophy. It will also appeal to those working in other disciplines including classics, ethics, and political theory.
This paper engages critically with anti-representationalist arguments pressed by prominent enactivists and their allies. The arguments in question are meant to show that the "as-such" and "job-description" problems constitute insurmountable challenges to causal-informational theories of mental content. In response to these challenges, a positive account of what makes a physical or computational structure a mental representation is proposed; the positive account is inspired partly by Dretske's views about content and partly by the role of mental representations in contemporary cognitive scientific modeling.
Recent iterations of Alvin Plantinga's "evolutionary argument against naturalism" bear a surprising resemblance to a famous argument in Descartes's Third Meditation. Both arguments conclude that theists have an epistemic advantage over atheists/naturalists vis-à-vis the question whether or not our cognitive faculties are reliable. In this paper, I show how these arguments bear an even deeper resemblance to each other. After bringing the problem of evil to bear negatively on Descartes's argument, I argue that, given these similarities, atheists can wield a recent solution to the problem of evil against theism in much the way Plantinga wields the details of evolutionary theory against naturalism. I conclude that Plantinga and Descartes give us insufficient reason for thinking theists are in a better epistemic position than atheists and naturalists vis-à-vis the question whether or not our cognitive faculties are reliable.
Recent years have seen fresh impetus brought to debates about the proper role of statistical evidence in the law. Recent work largely centres on a set of puzzles known as the 'proof paradox'. While these puzzles may initially seem academic, they have important ramifications for the law, raising key conceptual questions about legal proof and practical questions about DNA evidence. This article introduces the proof paradox, explains why we should care about it, and surveys new work attempting to resolve it.
This paper evaluates the Natural-Kinds Argument for cognitive extension, which purports to show that the kinds presupposed by our best cognitive science have instances external to the human organism. Various interpretations of the argument are articulated and evaluated, using the overarching categories of memory and cognition as test cases. Particular emphasis is placed on criteria for the scientific legitimacy of generic kinds, that is, kinds characterized in very broad terms rather than in terms of their fine-grained causal roles. Given the current state of cognitive science, I conclude that we have no reason to think memory or cognition are generic natural kinds that can ground an argument for cognitive extension.
The relationship between economics and the philosophy of natural science has changed substantially during the last few years. What was once exclusively a one-way relationship from philosophy to economics now seems to be much closer to bilateral exchange. The purpose of this paper is to examine this new relationship. First, I document the change. Second, I examine the situation within contemporary philosophy of science in order to explain why economics might have its current appeal. Third, I consider some of the issues that might jeopardize the success of this philosophical project.
The notion of function is indispensable to our understanding of distinctions such as that between being broken and being in working order (for artifacts) and between being diseased and being healthy (for organisms). A clear account of the ontology of functions and functioning is thus an important desideratum for any top-level ontology intended for application to domains such as engineering or medicine. The benefit of using top-level ontologies in applied ontology can only be realized when each of the categories identified and defined by a top-level ontology is integrated with the others in a coherent fashion. Basic Formal Ontology (BFO) has from the beginning included function as one of its categories, exploiting a version of the etiological account of function that is framed at a level of generality sufficient to accommodate both biological and artifactual functions. This account has been subjected to a series of criticisms and refinements. We here articulate BFO's account of function, provide some reasons for favoring it over competing views, and defend it against objections.
Practical wisdom is the intellectual virtue that enables a person to make reliably good decisions about how, all-things-considered, to live. As such, it is a lofty and important ideal to strive for. It is precisely this loftiness and importance that gives rise to important questions about wisdom: Can real people develop it? If so, how? What is the nature of wisdom as it manifests itself in real people? I argue that we can make headway answering these questions by modeling wisdom on expert skill. Presenting the main argument for this expert skill model of wisdom is the focus of this paper. More specifically, I argue that wisdom is primarily the same kind of epistemic achievement as expert decision-making skill in areas such as firefighting. Acknowledging this helps us see that, and how, real people can develop wisdom. It also helps to resolve philosophical debates about the nature of wisdom. For example, philosophers, including those who think virtue should be modeled on skills, disagree about the extent to which wise people make decisions using intuitions or principled deliberation and reflection. The expert skill model resolves this debate by showing that wisdom includes substantial intuitive, deliberative, and reflective abilities.
Mathematicians distinguish between proofs that explain their results and those that merely prove. This paper explores the nature of explanatory proofs, their role in mathematical practice, and some of the reasons why philosophers should care about them. Among the questions addressed are the following: what kinds of proofs are generally explanatory (or not)? What makes a proof explanatory? Do all mathematical explanations involve proof in an essential way? Are there really such things as explanatory proofs, and if so, how do they relate to the sorts of explanation encountered in philosophy of science and metaphysics?
In the paper, I defend the skeptical view that no one is ever morally responsible in the basic desert sense since luck universally undermines responsibility-level control. I begin in Section 1 by defining a number of different varieties of luck and examining their relevance to moral responsibility. I then turn, in Section 2, to outlining and defending what I consider to be the best argument for the skeptical view--the luck pincer (Levy 2011). I conclude in Section 3 by addressing Robert Hartman's (2017) numerous objections to the luck pincer. I argue that the luck pincer emerges unscathed and the pervasiveness of luck (still) undermines moral responsibility.
Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist, and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and space travel. Negative effects were found for military affairs (specifically rogue actor violence) and AI. The net effect for surveillance was ambiguous. The effects for the environment, military affairs, and AI appear to be the largest, with the environment perhaps being the largest of these, suggesting that APM would be net beneficial to society. However, these factors are not well quantified and no definitive conclusion can be made. One conclusion that can be reached is that if APM R&D is pursued, it should go hand-in-hand with effective governance strategies to increase the benefits and reduce the harms.
Rationalization in the sense of biased self-justification is very familiar. It's not cheating because everyone else is doing it too. I didn't report the abuse because it wasn't my place. I understated my income this year because I paid too much in tax last year. I'm only a social smoker, so I won't get cancer. The mental mechanisms subserving rationalization have been studied closely by psychologists. However, when viewed against the backdrop of philosophical accounts of the regulative role of truth in doxastic deliberation, rationalization can look very puzzling. Almost all contemporary philosophers endorse a version of the thesis of deliberative exclusivity—a thinker cannot in full consciousness decide whether to believe that p in a way that issues directly in forming a belief by adducing anything other than considerations that he or she regards as relevant to the truth of p. But, as I argue, rationalization involves the weighing of considerations that the thinker kn…
Kyle Stanford has recently claimed to offer a new challenge to scientific realism. Taking his inspiration from the familiar Pessimistic Induction (PI), Stanford proposes a New Induction (NI). Contra Anjan Chakravartty's suggestion that the NI is a 'red herring', I argue that it reveals something deep and important about science. The Problem of Unconceived Alternatives, which lies at the heart of the NI, yields a richer anti-realism than the PI. It explains why science falls short when it falls short, and so it might figure in the most coherent account of scientific practice. However, this best account will be antirealist in some respects and about some theories. It will not be a sweeping antirealism about all or most of science.
In this paper, I claim that extant empirical data do not support a radically embodied understanding of the mind but, instead, suggest (along with a variety of other results) a massively representational view. According to this massively representational view, the brain is rife with representations that possess overlapping and redundant content, and many of these represent other mental representations or derive their content from them. Moreover, many behavioral phenomena associated with attention and consciousness are best explained by the coordinated activity of units with redundant content. I finish by arguing that this massively representational picture challenges the reliability of a priori theorizing about consciousness.
In this paper I distinguish the category of “rationalization” from various forms of epistemic irrationality. I maintain that only if we model rationalizers as pretenders can we make sense of the rationalizer's distinctive relationship to the evidence in her possession. I contrast the cognitive attitude of the rationalizer with that of believers whose relationship to the evidence I describe as “waffling” or “intransigent”. In the final section of the paper, I compare the rationalizer to the Frankfurtian bullshitter.
The conspicuous similarities between interpretive strategies in classical statistical mechanics and in quantum mechanics may be grounded on their employment of common implementations of probability. The objective probabilities which represent the underlying stochasticity of these theories can be naturally associated with three of their common formal features: initial conditions, dynamics, and observables. Various well-known interpretations of the two theories line up with particular choices among these three ways of implementing probability. This perspective has significant application to debates on primitive ontology and to the quantum measurement problem.
In A Theory of Justice, Rawls attempts to ground intergenerational justice by "virtual representation" through a thickening of the veil of ignorance: contractors do not know to what generation they belong. This approach is flawed and will not result in the just savings principle Rawls hopes to justify. The project of grounding intergenerational duties on a social contractarian foundation is misconceived, for non-overlapping generations do not stand in the relation to one another that is central to the contractarian approach.