The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions.
The OBI Consortium maintains a web resource providing details on the people, policies, and issues being addressed in association with OBI.
Ontology is one strategy for promoting interoperability of heterogeneous data through consistent tagging. An ontology is a controlled structured vocabulary consisting of general terms (such as “cell” or “image” or “tissue” or “microscope”) that form the basis for such tagging. These terms are designed to represent the types of entities in the domain of reality that the ontology has been devised to capture; the terms are provided with logical definitions, thereby also supporting reasoning over the tagged data. Aim: This paper provides a survey of the biomedical imaging ontologies that have been developed thus far. It outlines the challenges, particularly faced by ontologies in the fields of histopathological imaging and image analysis, and suggests a strategy for addressing these challenges in the example domain of quantitative histopathology imaging. The ultimate goal is to support the multiscale understanding of disease that comes from using interoperable ontologies to integrate imaging data with clinical and genomics data.
Interoperability across data sets is a key challenge for quantitative histopathological imaging. There is a need for an ontology that can support effective merging of pathological image data with associated clinical and demographic data. To foster organized, cross-disciplinary, information-driven collaborations in the pathological imaging field, we propose to develop an ontology to represent imaging data and methods used in pathological imaging and analysis, and call it Quantitative Histopathological Imaging Ontology – QHIO. We apply QHIO to breast cancer hot-spot detection with the goal of enhancing reliability of detection by promoting the sharing of data between image analysts.
ABSTRACT: Central to both James’s earlier psychology and his later philosophical views was a recurring distinction between percepts and concepts. The distinction evolved and remained fundamental to his thinking throughout his career as he sought to come to grips with its fundamental nature and significance. In this chapter, I focus initially on James’s early attempt to articulate the distinction in his 1885 article “The Function of Cognition.” This will highlight a key problem to which James continued to return throughout his later philosophical work on the nature of our cognition, including in his famous “radical empiricist” metaphysics of “pure experience” around the turn of the century. We shall find that James grappled insightfully but ambivalently with the perceptual and conceptual dimensions of the “knowledge relation” or the “cognitive relation,” as he called it—or what, following Franz Brentano, philosophers would later call our object-directed thought or intentionality more generally. Some philosophers have once again returned to James’s work for crucial insights on this pivotal topic, while others continue to find certain aspects of his account to be problematic. What is beyond dispute is that James’s inquiries in this domain were both innovative and of lasting significance.
Reification is to abstraction as disease is to health. Whereas abstraction is singling out, symbolizing, and systematizing, reification is neglecting abstractive context, especially functional, historical, and analytical-level context. William James and John Dewey provide similar and nuanced arguments regarding the perils and promises of abstraction. They share an abstraction-reification account. The stages of abstraction and the concepts of “vicious abstractionism,” “the psychologist’s fallacy,” and “the philosophic fallacy” in the works of these pragmatists are here analyzed in detail. For instance, in 1896 Dewey exposes various fallacies associated with reifying dualistic reflex arc theory. The conclusion prescribes treatments (pluralism and assumption archaeology) for de-reifying ill models (i.e., universalized, narrowed, and ontologized models) in contemporary scientific fields such as cognitive science and biology.
The default theory of aesthetic value combines hedonism about aesthetic value with strict perceptual formalism about aesthetic value, holding the aesthetic value of an object to be the value it has in virtue of the pleasure it gives strictly in virtue of its perceptual properties. A standard theory of aesthetic value is any theory of aesthetic value that takes the default theory as its theoretical point of departure. This paper argues that standard theories fail because they theorize from the default theory.
Critics and defenders of William James both acknowledge serious tensions in his thought, tensions perhaps nowhere more vexing to readers than in regard to his claim about an individual’s intellectual right to their “faith ventures.” Focusing especially on “Pragmatism and Religion,” the final lecture in Pragmatism, this chapter will explore certain problems with James’ pragmatic pluralism. Some of these problems are theoretical, but others concern the real-world upshot of adopting James’ permissive ethics of belief. Although Jamesian permissivism is qualified in certain ways in this paper, I largely defend James in showing how permissivism has philosophical advantages over the non-permissivist position associated with evidentialism. These advantages include not having to treat disagreement as a sign of error or irrationality, and mutual support relations between permissivism and what John Rawls calls the “reasonable pluralism” at the heart of political liberalism.
Recent debates between representational and relational theories of perceptual experience sometimes fail to clarify in what respect the two views differ. In this essay, I explain that the relational view rejects two related claims endorsed by most representationalists: the claim that perceptual experiences can be erroneous, and the claim that having the same representational content is what explains the indiscriminability of veridical perceptions and phenomenally matching illusions or hallucinations. I then show how the relational view can claim that errors associated with perception should be explained in terms of false judgments, and develop a theory of illusions based on the idea that appearances are properties of objects in the surrounding environment. I provide an account of why appearances are sometimes misleading, and conclude by showing how the availability of this view undermines one of the most common ways of motivating representationalist theories of perception.
A polemical account of Australian philosophy up to 2003, emphasising its unique aspects (such as commitment to realism) and the connections between philosophers' views and their lives. Topics include early idealism, the dominance of John Anderson in Sydney, the Orr case, Catholic scholasticism, Melbourne Wittgensteinianism, philosophy of science, the Sydney disturbances of the 1970s, Francofeminism, environmental philosophy, the philosophy of law and Mabo, ethics and Peter Singer. Realist theories especially praised are David Armstrong's on universals, David Stove's on logical probability and the ethical realism of Rai Gaita and Catholic philosophers. In addition to strict philosophy, the book treats non-religious moral traditions to train virtue, such as Freemasonry, civics education and the Greek and Roman classics.
As Thomas Uebel has recently argued, some early logical positivists saw American pragmatism as a kindred form of scientific philosophy. They associated pragmatism with William James, whom they rightly saw as allied with Ernst Mach. But what apparently blocked sympathetic positivists from pursuing commonalities with American pragmatism was the concern that James advocated some form of psychologism, a view they thought could not do justice to the a priori. This paper argues that positivists were wrong to read James as offering a psychologistic account of the a priori. They had encountered James by reading Pragmatism as translated by the unabashedly psychologistic Wilhelm Jerusalem. But in more technical works, James had actually developed a form of conventionalism that anticipated the so-called “relativized” a priori positivists themselves would independently develop. While positivists arrived at conventionalism largely through reflection on the exact sciences, though, James’s account of the a priori grew from his reflections on the biological evolution of cognition, particularly in the context of his Darwin-inspired critique of Herbert Spencer.
Subject-sensitive invariantism posits surprising connections between a person’s knowledge and features of her environment that are not paradigmatically epistemic features. But which features of a person’s environment have this distinctive connection to knowledge? Traditional defenses of subject-sensitive invariantism emphasize features that matter to the subject of the knowledge-attribution. Call this pragmatic encroachment. A more radical thesis usually goes ignored: knowledge is sensitive to moral facts, whether or not those moral facts matter to the subject. Call this moral encroachment. This paper argues that, insofar as there are good arguments for pragmatic encroachment, there are also good arguments for moral encroachment.
We report the results of a study that investigated the views of researchers working in seven scientific disciplines and in history and philosophy of science in regard to four hypothesized dimensions of scientific realism. Among other things, we found that natural scientists tended to express more strongly realist views than social scientists, that history and philosophy of science scholars tended to express more antirealist views than natural scientists, that van Fraassen’s characterization of scientific realism failed to cluster with more standard characterizations, and that those who endorsed the pessimistic induction were no more or less likely to endorse antirealism.
According to the view that there is moral encroachment in epistemology, whether a person has knowledge of p sometimes depends on moral considerations, including moral considerations that do not bear on the truth or likelihood of p. Defenders of moral encroachment face a central challenge: they must explain why the moral considerations they cite, unlike moral bribes for belief, are reasons of the right kind for belief (or withheld belief). This paper distinguishes between a moderate and a radical version of moral encroachment. It shows that, while defenders of moderate moral encroachment are well-placed to meet the central challenge, defenders of radical moral encroachment are not. The problem for radical moral encroachment is that it cannot, without taking on unacceptable costs, forge the right sort of connection between the moral badness of a belief and that belief’s chance of being false.
Many writers have recently argued that there is something distinctively problematic about sustaining moral beliefs on the basis of others’ moral views. Call this claim pessimism about moral deference. Pessimism about moral deference, if true, seems to provide an attractive way to argue for a bold conclusion about moral disagreement: moral disagreement generally does not require belief revision. Call this claim steadfastness about moral disagreement. Perhaps the most prominent recent discussion of the connection between moral deference and moral disagreement, due to Alison Hills, uses pessimism about the former to argue for steadfastness about the latter. This paper reveals that this line of thinking, and others like it, are unsuccessful. There is no way to argue from a compelling version of pessimism about moral deference to the conclusion of steadfastness about moral disagreement. The most plausible versions of pessimism about moral deference have only very limited implications for moral disagreement.
Experimental philosophy brings empirical methods to philosophy. These methods are used to probe how people think about philosophically interesting things such as knowledge, morality, and freedom. This paper explores the contribution that qualitative methods have to make in this enterprise. I argue that qualitative methods have the potential to make a much greater contribution than they have so far. Along the way, I acknowledge a few types of resistance that proponents of qualitative methods in experimental philosophy might encounter, and provide reasons to think they are ill-founded.
How were reliable predictions made before Pascal and Fermat's discovery of the mathematics of probability in 1654? What methods in law, science, commerce, philosophy, and logic helped us to get at the truth in cases where certainty was not attainable? The book examines how judges, witch inquisitors, and juries evaluated evidence; how scientists weighed reasons for and against scientific theories; and how merchants counted shipwrecks to determine insurance rates. Also included are the problem of induction before Hume, design arguments for the existence of God, and theories on how to evaluate scientific and historical hypotheses. It is explained how Pascal and Fermat's work on chance arose out of legal thought on aleatory contracts. The book interprets pre-Pascalian unquantified probability in a generally objective Bayesian or logical probabilist sense.
In the past decade, a number of empirical researchers have suggested that laypeople have compatibilist intuitions. In a recent paper, Feltz and Millan have challenged this conclusion by claiming that most laypeople are only compatibilists in appearance and are in fact willing to attribute free will to people no matter what. As evidence for this claim, they have shown that an important proportion of laypeople still attribute free will to agents in fatalistic universes. In this paper, we first argue that Feltz and Millan’s error-theory rests on a conceptual confusion: it is perfectly acceptable for a certain brand of compatibilist to judge free will and fatalism to be compatible, as long as fatalism does not prevent agents from being the source of their actions. We then present the results of two studies showing that laypeople’s intuitions are best understood as following a certain brand of source compatibilism rather than a “free-will-no-matter-what” strategy.
We cannot imagine two straight lines intersecting at two points even though they may do so. In this case our abilities to imagine depend upon our abilities to visualise.
The present article reports a series of experiments designed to extend the empirical investigation of folk metaethical intuitions by examining how different kinds of ethical disagreement can impact attributions of objectivity to ethical claims.
Goetz outlined legal models of identical entities that include natural persons who are identical to a coregency and natural persons who are identical to a general partnership. Those entities cohere with the formal logic of relative identity. This essay outlines the coexistence of relative identity and numerical identity in the models of identical legal entities, which is impure relative identity. These models support the synthesis of Relative Trinitarianism and Social Trinitarianism, which I call Relative-Social Trinitarianism.
Recent decades have seen a surge in interest in metaphilosophy. In particular there has been an interest in philosophical methodology. Various questions have been asked about philosophical methods. Are our methods any good? Can we improve upon them? Prior to such evaluative and ameliorative concerns, however, is the matter of what methods philosophers actually use. Worryingly, our understanding of philosophical methodology is impoverished in various respects. This article considers one particular respect in which we seem to be missing an important part of the picture. While it is received wisdom that the word “intuition” has exploded across analytic philosophy in recent decades, the article presents evidence that the explosion is apparent across a broad swathe of academia. It notes various implications for current methodological debates about the role of intuitions in philosophy.
Various studies have reported that moral intuitions about the permissibility of acts are subject to framing effects. This paper reports the results of a series of experiments which further examine the susceptibility of moral intuitions to framing effects. The main aim was to test recent speculation that intuitions about the moral relevance of certain properties of cases might be relatively resistant to framing effects. If correct, this would provide a certain type of moral intuitionist with the resources to resist challenges to the reliability of moral intuitions based on such framing effects. And, fortunately for such intuitionists, although the results can’t be used to mount a strident defence of intuitionism, the results do serve to shift the burden of proof onto those who would claim that intuitions about moral relevance are problematically sensitive to framing effects.
If asked about the Darwinian influence on William James, some might mention his pragmatic position that ideas are “mental modes of adaptation,” and that our stock of ideas evolves to meet our changing needs. However, while this is not obviously wrong, it fails to capture what James deems most important about Darwinian theory: the notion that there are independent cycles of causation in nature. Versions of this idea undergird everything from his campaign against empiricist psychologies to his theories of mind and knowledge to his pluralistic worldview; and all of this together undergirds his attempts to challenge determinism and defend free will. I begin this paper by arguing that James uses Darwinian thinking to bridge empiricism and rationalism, and that this merger undermines environmental determinism. I then discuss how Darwinism informs his concept of pluralism; how his concept challenges visions of a causally welded “block universe”; and how it also casts doubt on the project of reducing all reality to physical reality, and therewith the wisdom of dismissing consciousness as an inert by-product of physiology. I conclude by considering how Darwinism helps him justify the pragmatic grounds upon which he defends free will.
Naïve realism, often overlooked among philosophical theories of perception, has in recent years attracted a surge of interest. Broadly speaking, the central commitment of naïve realism is that mind-independent objects are essential to the fundamental analysis of perceptual experience. Since the claims of naïve realism concern the essential metaphysical structure of conscious perception, its truth or falsity is of central importance to a wide range of topics, including the explanation of semantic reference and representational content, the nature of phenomenal consciousness, and the basis of perceptual justification and knowledge. One of the greatest difficulties surrounding discussions of naïve realism, however, has been lack of clarity concerning exactly what affirming or denying it entails. In particular, it is sometimes unclear how naïve realism is related to the claim that perceptual experience is in some sense direct or unmediated, and also to what extent the view is compatible with another widely discussed thesis in the philosophy of perception, the claim that perceptual experiences are states with representational content. In this essay, I discuss how recent work on these issues helps to clarify both the central commitments of naïve realism, as well as its relation to representationalist theories of perception. Along the way, I will attempt to shed light on the different ways in which each approach tries to address the various theoretical challenges facing a philosophical theory of perception, and also to assess the prospects for views that attempt to combine features of each approach.
In What Science Knows, the Australian philosopher and mathematician James Franklin explains in captivating and straightforward prose how science works its magic. It offers a semipopular introduction to an objective Bayesian/logical probabilist account of scientific reasoning, arguing that inductive reasoning is logically justified (though actually existing science sometimes falls short). Its account of mathematics is Aristotelian realist.
In some cases, a group of people can bring about a morally bad outcome despite each person’s individual act making no difference with respect to bringing that outcome about. Since each person’s act makes no difference, it seems the effects of the act cannot provide a reason not to perform it. This is problematic, because if each person acts in accordance with their reasons, each will presumably perform the act—and thus, the bad outcome will be brought about. Recently, Julia Nefsky has argued that this problem is solved by rejecting the assumption that if an act makes no difference with respect to an outcome, then the act cannot do anything non-superfluous toward bringing that outcome about. Nefsky suggests that, even if an act makes no difference, the act may nevertheless help: it may make a non-superfluous causal contribution. If this is right, it means that the potential effects of an act may give us a reason to perform the act, even if the act wouldn’t make a difference. In this paper, I offer some reasons to be wary of Nefsky’s approach. I first argue that her account generates problematic results in a certain range of cases, and thus that we may have no reason to help in any case. I then argue that, even if we do sometimes have a reason to act when it seems we cannot make a difference, this reason cannot be the one that Nefsky identifies.
Are philosophers’ intuitions more reliable than philosophical novices’? Are we entitled to assume the superiority of philosophers’ intuitions just as we assume that experts in other domains have more reliable intuitions than novices? Ryberg raises some doubts and his arguments promise to undermine the expertise defence of intuition-use in philosophy once and for all. In this paper, I raise a number of objections to these arguments. I argue that philosophers receive sufficient feedback about the quality of their intuitions and that philosophers’ experience in philosophy plausibly affects their intuitions. Consequently, the type of argument Ryberg offers fails to undermine the expertise defence of intuition-use in philosophy.
Do philosophers use intuitions? Should philosophers use intuitions? Can philosophical methods (where intuitions are concerned) be improved upon? In order to answer these questions we need to have some idea of how we should go about answering them. I defend a way of going about methodology of intuitions: a metamethodology. I claim the following: (i) we should approach methodological questions about intuitions with a thin conception of intuitions in mind; (ii) we should carve intuitions finely; and, (iii) we should carve to a grain to which we are sensitive in our everyday philosophising. The reason is that, unless we do so, we don’t get what we want from philosophical methodology. I argue that what we want is information that will aid us in formulating practical advice concerning how to do philosophy responsibly/well/better.
Few address the extent to which William James regards the neo-Lamarckian account of “direct adaptation” as a biological extension of British empiricism. Consequently few recognize the instrumental role that the Darwinian idea of “indirect adaptation” plays in his lifelong efforts to undermine the empiricist view that sense experience molds the mind. This article examines how James uses Darwinian thinking, first, to argue that mental content can arise independently of sense experience; and, second, to show that empiricists advance a hopelessly skeptical position when they insist that beliefs are legitimate only insofar as they directly correspond to the observable world. Using his attacks on materialism and his defense of spiritualism as examples, I particularly consider how Darwinian thinking enables him to keep his empiricist commitments while simultaneously developing a pragmatic alternative to empiricistic skepticism. I conclude by comparing his theory of beliefs to the remarkably similar theory of “memes” that Richard Dawkins uses to attack spiritualistic belief—an attack that James anticipates and counters with his pragmatic alternative.
This paper presents a challenge to conciliationist views of disagreement. I argue that conciliationists cannot satisfactorily explain why we need not revise our beliefs in response to certain moral disagreements. Conciliationists can attempt to meet this challenge in one of two ways. First, they can individuate disputes narrowly. This allows them to argue that we have dispute-independent reason to distrust our opponents’ moral judgment. This approach threatens to license objectionable dogmatism. It also inappropriately gives deep epistemic significance to superficial questions about how to think about the subject matter of a dispute. Second, conciliationists can individuate disputes widely. This allows them to argue that we lack dispute-independent reason to trust our opponents’ moral judgment. But such arguments fail; our background of generally shared moral beliefs gives us good reason to trust the moral judgment of our opponents, even after we set quite a bit of our reasoning aside. On either approach, then, conciliationists should acknowledge that we have dispute-independent reason to trust the judgment of those who reject our moral beliefs. Given a conciliationist view of disagreement’s epistemic role, this has the unattractive result that we are epistemically required to revise some of our most intuitively secure moral beliefs.
In Vagueness and Contradiction (2001), Roy Sorensen defends and extends his epistemic account of vagueness. In the process, he appeals to connections between vagueness and semantic paradox. These appeals come mainly in Chapter 11, where Sorensen offers a solution to what he calls the no-no paradox—a “neglected cousin” of the more famous liar—and attempts to use this solution as a precedent for an epistemic account of the sorites paradox. This strategy is problematic for Sorensen’s project, however, since, as we establish, he fails to resolve the semantic pathology of the no-no paradox.
A ‘duality’ is a formal mapping between the spaces of solutions of two empirically equivalent theories. In recent times, dualities have been found to be pervasive in string theory and quantum field theory. Naïvely interpreted, duality-related theories appear to make very different ontological claims about the world—differing in e.g. space-time structure, fundamental ontology, and mereological structure. In light of this, duality-related theories raise questions familiar from discussions of underdetermination in the philosophy of science: in the presence of dual theories, what is one to say about the ontology of the world? In this paper, we undertake a comprehensive and non-technical survey of the landscape of possible ontological interpretations of duality-related theories. We provide a significantly enriched and clarified taxonomy of options—several of which are novel to the literature.
Various studies show moral intuitions to be susceptible to framing effects. Many have argued that this susceptibility is a sign of unreliability and that this poses a methodological challenge for moral philosophy. Recently, doubt has been cast on this idea. It has been argued that extant evidence of framing effects does not show that moral intuitions have an unreliability problem. I argue that, even if the extant evidence suggests that moral intuitions are fairly stable with respect to what intuitions we have, the effect of framing on the strength of those intuitions still needs to be taken into account. I argue that this by itself poses a methodological challenge for moral philosophy.
William James presents a preference-sensitive and future-directed notion of truth that has struck many as wildly revisionary. This paper argues that such a reaction usually results from failing to see how his accounts of truth and intentionality are intertwined. James' forward-looking account of intentionality (or "knowing") compares favorably with the 'causal' and 'resemblance-driven' accounts that have been popular since his day, and it is only when his remarks about truth are placed in the context of his account of intentionality that they come to seem as plausible as they manifestly did to James.
My argument proceeds in two stages. In §I, I sum up the intuitions of a popular argument for 'satisfaction accounts' of Purgatory that I label TAP. I then offer an argument, taken from a few standard orthodox Christian beliefs and one axiom of Christian theology, to show that TAP is unsound. In the same section, I entertain some plausible responses to my argument that are prima facie consistent with these beliefs and this axiom. I find these responses wanting. In §II, I offer a sorites problem for TAP, given the orthodox Christian understanding of Christ’s parousia, showing that TAP and the intuitions driving it are faulty. To attempt something of a corrective, I end by offering some modest theological suggestions for thinking through “the logic of total transformation.”
Immoralists hold that in at least some cases, moral flaws in artworks can increase their aesthetic value. They deny what I call the valence constraint: the view that any effect that an artwork’s moral value has on its aesthetic merit must have the same valence. The immoralist offers three arguments against the valence constraint. In this paper I argue that these arguments fail, and that this failure reveals something deep and interesting about the relationship between cognitive and moral value. In the final section I offer a positive argument for the valence constraint.
This paper argues for the following disjunction: either we do not live in a world with a branching temporal structure, or backwards time travel is nomologically impossible, given the initial state of the universe, or backwards time travel to our space-time location is impossible given large-scale facts about space and time. A fortiori, if backwards time travel to our location is possible, we do not live in a branching universe.
The rise of experimental philosophy has placed metaphilosophical questions, particularly those concerning concepts, at the center of philosophical attention. X-phi offers empirically rigorous methods for identifying conceptual content, but what exactly it contributes towards evaluating conceptual content remains unclear. We show how x-phi complements Rudolf Carnap's underappreciated methodology for concept determination, explication. This clarifies and extends x-phi's positive philosophical import, and also exhibits explication's broad appeal. But there is a potential problem: Carnap's account of explication was limited to empirical and logical concepts, but many concepts of interest to philosophers are essentially normative. With formal epistemology as a case study, we show how x-phi-assisted explication can apply to normative domains.
Throughout history, almost all mathematicians, physicists and philosophers have been of the opinion that space and time are infinitely divisible. That is, it is usually believed that space and time do not consist of atoms, but that any piece of space and time of non-zero size, however small, can itself be divided into still smaller parts. This assumption is included in geometry, as in Euclid, and also in the Euclidean and non-Euclidean geometries used in modern physics. Of the few who have denied that space and time are infinitely divisible, the most notable are the ancient atomists, and Berkeley and Hume. All of these assert not only that space and time might be atomic, but that they must be. Infinite divisibility is, they say, impossible on purely conceptual grounds.
Traversing the genres of philosophy and literature, this book elaborates Deleuze's notion of difference, conceives certain individuals as embodying difference, and applies these conceptions to their writings.
Direct epistemic consequentialism is the idea that X is epistemically permissible iff X maximizes epistemic value. It has received lots of attention in recent years and is widely accepted by philosophers to have counterintuitive implications. There are various reasons one might suspect that the relevant intuitions will not be widely shared among non-philosophers. This paper presents an initial empirical study of ordinary intuitions. The results of two experiments demonstrate that the counterintuitiveness of epistemic consequentialism is more than a philosophers' worry: the folk seem to agree!
This paper critically examines currently influential transparency accounts of our knowledge of our own beliefs that say that self-ascriptions of belief typically are arrived at by "looking outward" onto the world. For example, one version of the transparency account says that one self-ascribes beliefs via an inference from a premise to the conclusion that one believes that premise. This rule of inference reliably yields accurate self-ascriptions because you cannot infer a conclusion from a premise without believing the premise, and so you cannot infer from a premise that you believe the premise unless you do believe it. I argue that this procedure cannot be a source of justification, however, because one can be justified in inferring from p that q only if p amounts to strong evidence that q is true. This is incompatible with the transparency account because p often is not very strong evidence that you believe that p. For example, unless you are a weather expert, the fact that it will rain is not very strong evidence that you believe it will rain. After showing how this intuitive problem can be made precise, I conclude with a broader lesson about the nature of inferential justification: that beliefs, when justified, must be underwritten by evidential relationships between the facts or propositions which those beliefs represent.
This paper has three aims: to define autonomism clearly and charitably, to offer a positive argument in its favour, and to defend a larger view about what is at stake in the debate between autonomism and its critics. Autonomism is here understood as the claim that a valuer does not make an error in failing to bring her moral and aesthetic judgements together, unless she herself values doing so. The paper goes on to argue that reason does not require the valuer to make coherent her aesthetic and moral evaluations. Finally, the paper shows that the denial of autonomism has realist commitments that autonomism does not have, and concludes that issues of value realism and irrealism are relevant to the debates about autonomism in ways that have not hitherto been recognized.
_Selfhood and Appearing_ explores how, as embodied subjects, we are in the very world that we consciously internalize. Employing the insights of Merleau-Ponty and Patočka, this volume examines how the intertwining of both senses of “being-in” constitutes our reality.
Modal epistemologists parse modal conditions on knowledge in terms of metaphysical possibilities or ways the world might have been. This is problematic. Understanding modal conditions on knowledge this way has made modal epistemology, as currently worked out, unable to account for epistemic luck in the case of necessary truths, and unable to characterise widely discussed issues such as the problem of religious diversity and the perceived epistemological problem with knowledge of abstract objects. Moreover, there is reason to think that this is a congenital defect of orthodox modal epistemology. This way of characterising modal epistemology is however optional. It is shown that one can non-circularly characterise modal conditions on knowledge in terms of epistemic possibilities, or ways the world might be for the target agent. Characterising the anti-luck condition in terms of epistemic possibilities removes the impediment to understanding epistemic luck in the case of necessary truths and opens the door to using these conditions to shed new light on some longstanding epistemological problems.
In this paper we offer a new argument for the existence of God. We contend that the laws of logic are metaphysically dependent on the existence of God, understood as a necessarily existent, personal, spiritual being; thus anyone who grants that there are laws of logic should also accept that there is a God. We argue that if our most natural intuitions about them are correct, and if they are to play the role in our intellectual activities that we take them to play, then the laws of logic are best construed as necessarily existent thoughts -- more specifically, as divine thoughts about divine thoughts. We conclude by highlighting some implications for both theistic arguments and antitheistic arguments.