In 2012, the Geological Time Scale, which sets the temporal framework for studying the timing and tempo of all major geological, biological, and climatic events in Earth’s history, had one-quarter of its boundaries moved in a widespread revision of radiometric dates. The philosophy of metrology helps us understand this episode, and it, in turn, elucidates the notions of calibration, coherence, and consilience. I argue that coherence testing is a distinct activity preceding calibration and consilience, and I highlight the value of discordant evidence and trade-offs scientists face in calibration. The iterative nature of calibration, moreover, raises the problem of legacy data.
The coherence of independent reports provides a strong reason to believe that the reports are true. This plausible claim has come under attack from recent work in Bayesian epistemology. This work shows that, under certain probabilistic conditions, coherence cannot increase the probability of the target claim. These theorems are taken to demonstrate that epistemic coherentism is untenable. To date no one has investigated how these results bear on different conceptions of coherence. I investigate this situation using Thagard’s ECHO model of explanatory coherence. Thagard’s ECHO model provides a natural representation of the evidential significance of multiple independent reports.
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as usual, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is “transmitted” to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
There are at least two different aspects of our rational evaluation of agents’ doxastic attitudes. First, we evaluate these attitudes according to whether they are supported by one’s evidence (substantive rationality). Second, we evaluate these attitudes according to how well they cohere with one another (structural rationality). In previous work, I’ve argued that substantive and structural rationality really are distinct, sui generis, kinds of rationality – call this view ‘dualism’, as opposed to ‘monism’, about rationality – by arguing that the requirements of substantive and structural rationality can come into conflict. In this paper, I push the dialectic on this issue forward in two main ways. First, I argue that the most promising ways of resisting the diagnosis of my cases as conflicts still end up undermining monism in different ways. Second, supposing for the sake of argument that we should understand the cases as conflicts, I address the question of what we should do when such conflicts arise. I argue that, at least in a prominent kind of conflict case, the coherence requirements take precedence over the evidential requirements.
Putnam (1975) infers from the success of a scientific theory to its approximate truth and the reference of its key term. Laudan (1981) objects that some past theories were successful, and yet their key terms did not refer, so they were not even approximately true. Kitcher (1993) replies that the past theories are approximately true because their working posits are true, although their idle posits are false. In contrast, I argue that successful theories which cohere with each other are approximately true, and that their key terms refer. My position is immune to Laudan’s counterexamples to Putnam’s inference and yields a solution to a problem with Kitcher’s position.
It is a widespread intuition that the coherence of independent reports provides a powerful reason to believe that the reports are true. Formal results by Huemer, M. 1997. “Probability and Coherence Justification.” Southern Journal of Philosophy 35: 463–72; Olsson, E. 2002. “What is the Problem of Coherence and Truth?” Journal of Philosophy XCIX: 246–72; Olsson, E. 2005. Against Coherence: Truth, Probability, and Justification. Oxford University Press; and Bovens, L., and S. Hartmann. 2003. Bayesian Epistemology. Oxford University Press, prove that, under certain conditions, coherence cannot increase the probability of the target claim. These formal results, known as ‘the impossibility theorems’, have been widely discussed in the literature. They are taken to have significant epistemic upshot. In particular, they are taken to show that reports must first individually confirm the target claim before the coherence of multiple reports offers any positive confirmation. In this paper, I dispute this epistemic interpretation. The impossibility theorems are consistent with the idea that the coherence of independent reports provides a powerful reason to believe that the reports are true even if the reports do not individually confirm prior to coherence. Once we see that the formal discoveries do not have this implication, we can recover a model of coherence justification consistent with Bayesianism and these results. This paper, thus, seeks to turn the tide of the negative findings for coherence reasoning by defending coherence as a unique source of confirmation.
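The structure of the witness models behind these impossibility results can be sketched numerically. The following is a minimal sketch in the spirit of Bovens and Hartmann’s setup; the function name and parameter values are illustrative assumptions, not taken from the works cited:

```python
# Minimal Bayesian witness model. Each of n independent witnesses asserts
# hypothesis H. A witness is reliable with probability r (and then reports
# the truth) or is a randomizer that asserts H with probability a
# regardless of the truth.

def posterior(prior, n, r, a):
    p_report_given_h = r + (1 - r) * a      # reliable witness, or lucky randomizer
    p_report_given_not_h = (1 - r) * a      # only a randomizer asserts a falsehood
    joint_h = prior * p_report_given_h ** n
    joint_not_h = (1 - prior) * p_report_given_not_h ** n
    return joint_h / (joint_h + joint_not_h)
```

On these assumed numbers, one report of H with prior 0.1 (and r = a = 0.5) leaves H improbable, while three agreeing reports make it probable; the impossibility theorems concern how such comparisons behave as coherence and reliability vary.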
According to many philosophers, rationality is, at least in part, a matter of one’s attitudes cohering with one another. Theorists who endorse this idea have devoted much attention to formulating various coherence requirements. Surprisingly, they have said very little about what it takes for a set of attitudes to be coherent in general. We articulate and defend a general account on which a set of attitudes is coherent just in case and because it is logically possible for the attitudes to be jointly satisfied in the sense of jointly fitting the world. In addition, we show how the account can help adjudicate debates about how to formulate various rational requirements.
Evolutionary theory coheres with its neighboring theories, such as the theory of plate tectonics, molecular biology, electromagnetic theory, and the germ theory of disease. These neighboring theories were previously unconceived, but they were later conceived, and then they cohered with evolutionary theory. Since evolutionary theory has been strengthened by its several neighboring theories that were previously unconceived, it will be strengthened by infinitely many hitherto unconceived neighboring theories. This argument for evolutionary theory echoes the problem of unconceived alternatives. Ironically, however, the former recommends that we take the realist attitude toward evolutionary theory, while the latter recommends that we take the antirealist attitude toward it.
The recent literature on causality has seen the introduction of several distinctions within causality, which are thought to be important for understanding the widespread scientific practice of focusing causal explanations on a subset of the factors that are causally relevant for a phenomenon. Concepts used to draw such distinctions include, among others, stability, specificity, proportionality, or actual-difference making. In this contribution, I propose a new distinction that picks out an explanatorily salient class of causes in biological systems. Some select causes in complex biological systems, I argue, have the property of enabling coherent causal control of these systems. Examples of such control variables include hormones and other signaling molecules, e.g., TOR (target of rapamycin), morphogens or the products of homeotic selector genes in embryonic pattern formation. I propose an analysis of this notion based on concepts borrowed from causal graph theory.
Recently there have been several attempts in formal epistemology to develop an adequate probabilistic measure of coherence. There is much to recommend probabilistic measures of coherence. They are quantitative and render formally precise a notion—coherence—notorious for its elusiveness. Further, some of them do very well, intuitively, on a variety of test cases. Siebel, however, argues that there can be no adequate probabilistic measure of coherence. Take some set of propositions A, some probabilistic measure of coherence, and a probability distribution such that all the probabilities on which A’s degree of coherence depends (according to the measure in question) are defined. Then, the argument goes, the degree to which A is coherent depends solely on the details of the distribution in question and not at all on the explanatory relations, if any, standing between the propositions in A. This is problematic, the argument continues, because, first, explanation matters for coherence, and, second, explanation cannot be adequately captured solely in terms of probability. We argue that Siebel’s argument falls short.
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the “Problem of Justified Inconsistent Beliefs.”
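Two of the measures named in these accounts are simple enough to state directly. The following is an illustrative two-proposition sketch using the standard textbook forms of Shogenji’s ratio measure and Olsson’s overlap measure; it is not code from any of the cited works:

```python
# Shogenji's ratio measure: how far the joint probability of A and B
# exceeds what independence would predict (1.0 = probabilistically
# independent, > 1.0 = mutually supporting).
def shogenji(p_a, p_b, p_ab):
    return p_ab / (p_a * p_b)

# Olsson's overlap measure: the share of the propositions' combined
# probability mass on which they agree, P(A & B) / P(A or B).
def olsson(p_a, p_b, p_ab):
    return p_ab / (p_a + p_b - p_ab)
```

For instance, two propositions each of probability 0.5 score Shogenji coherence 1 when independent (joint probability 0.25) and 1.6 when positively relevant (joint probability 0.4); the measures can disagree in their orderings, which is one source of the debates between these accounts.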
Coherentism maintains that coherent beliefs are more likely to be true than incoherent beliefs, and that coherent evidence provides more confirmation of a hypothesis when the evidence is made coherent by the explanation provided by that hypothesis. Although probabilistic models of credence ought to be well-suited to justifying such claims, negative results from Bayesian epistemology have suggested otherwise. In this essay we argue that the connection between coherence and confirmation should be understood as a relation mediated by the causal relationships among the evidence and a hypothesis, and we offer a framework for doing so by fitting together probabilistic models of coherence, confirmation, and causation. We show that the causal structure among the evidence and hypothesis is sometimes enough to determine whether the coherence of the evidence boosts confirmation of the hypothesis, makes no difference to it, or even reduces it. We also show that, ceteris paribus, it is not the coherence of the evidence that boosts confirmation, but rather the ratio of the coherence of the evidence to the coherence of the evidence conditional on a hypothesis. (shrink)
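The role causal structure plays here can be illustrated with a toy common-cause model (the parameter values and the use of a Shogenji-style ratio measure are my assumptions for illustration): a hypothesis H that screens off two evidence reports makes them coherent unconditionally, while their coherence conditional on H collapses to 1.

```python
# Common-cause structure: H -> E1 and H -> E2, with E1 and E2 independent
# given H. Assumed likelihoods are the same for both pieces of evidence.
p_h = 0.5
p_e_given_h, p_e_given_not_h = 0.8, 0.3

# Marginal and joint probabilities of the evidence.
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
p_e1e2 = p_h * p_e_given_h ** 2 + (1 - p_h) * p_e_given_not_h ** 2

# Shogenji-style coherence of the evidence, unconditional vs. given H.
coherence_unconditional = p_e1e2 / (p_e * p_e)                       # > 1
coherence_given_h = p_e_given_h ** 2 / (p_e_given_h * p_e_given_h)   # = 1 (screening off)
```

On these numbers the unconditional coherence is about 1.2, so the ratio of unconditional to conditional coherence exceeds 1; a quantity of this kind is what the essay’s final claim turns on.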
Reliabilism is an intuitive and attractive view about epistemic justification. However, it has many well-known problems. I offer a novel condition on reliabilist theories of justification. This method coherence condition requires that a method be appropriately tested by appeal to a subject’s other belief-forming methods. Adding this condition to reliabilism provides a solution to epistemic circularity worries, including the bootstrapping problem.
Locke has been accused of failing to have a coherent understanding of consciousness, since it can be identical neither to reflection nor to ordinary perception without contradicting other important commitments. I argue that the account of consciousness is coherent once we see that, for Locke, perceptions of ideas are complex mental acts and that consciousness can be seen as a special kind of self-referential mental state internal to any perception of an idea.
Coherence and correspondence are classical contenders as theories of truth. In this paper we examine them instead as interacting factors in the dynamics of belief across epistemic networks. We construct an agent-based model of network contact in which agents are characterized not in terms of single beliefs but in terms of internal belief suites. Individuals update elements of their belief suites on input from other agents in order both to maximize internal belief coherence and to incorporate ‘trickled in’ elements of truth as correspondence. Results, though often intuitive, prove more complex than in simpler models (Hegselmann & Krause 2002, 2006; Grim, Singer, Reade & Fisher 2015). The optimistic finding is that pressures toward internal coherence can exploit and expand on small elements of truth as correspondence is introduced into epistemic networks. Less optimistic results show that pressures for coherence can also work directly against the incorporation of truth, particularly when coherence is established first and new data is introduced later.
It is obvious that we would not want to demand that an agent’s beliefs at different times exhibit the same sort of consistency that we demand from an agent’s simultaneous beliefs; there’s nothing irrational about believing P at one time and not-P at another. Nevertheless, many have thought that some sort of coherence or stability of beliefs over time is an important component of epistemic rationality.
Why should we make our beliefs consistent or, more generally, probabilistically coherent? That it will prevent sure losses in betting and that it will maximize one’s chances of having accurate beliefs are popular answers. However, these justifications are self-centered, focused on the consequences of our coherence for ourselves. I argue that incoherence has consequences for others because it is liable to mislead them into false beliefs about one’s beliefs and false expectations about one’s behavior. I argue that the moral obligation of truthfulness thus constrains us either to conform to the logic our audience assumes we use, to educate them in a new logic, or to give notice that we will do neither. This does not show that probabilistic coherence is uniquely suited to making truthful communication possible, but I argue that classical probabilistic coherence is superior to other logics for maximizing efficiency in communication.
Why should we avoid incoherence? An influential view tells us that incoherent combinations of attitudes are such that it is impossible for all of those attitudes to be simultaneously vindicated by the evidence. But it is not clear whether this view successfully explains what is wrong with certain akratic doxastic states. In this paper I flesh out an alternative response to that question, one according to which the problem with incoherent combinations of attitudes is that it is impossible for all of those attitudes to be simultaneously knowledgeable. This alternative response explains what is wrong with akratic combinations of attitudes using commonly accepted epistemological theses. The paper also shows how this proposal is able to explain the badness of incoherent combinations involving the absence of attitudes, suspended judgment and credence. These explanations deploy the notions of knowledge and being in a position to know, instead of the notion of responding properly to the evidence. Finally, I suggest that this picture can be generalized to the realm of practical rationality as well.
Recent work on rationality has been increasingly attentive to “coherence requirements”, with heated debates about both the content of such requirements and their normative status (e.g., whether there is necessarily reason to comply with them). Yet there is little to no work on the metanormative status of coherence requirements. Metaphysically: what is it for two or more mental states to be jointly incoherent, such that they are banned by a coherence requirement? In virtue of what are some putative requirements genuine and others not? Epistemologically: how are we to know which of the requirements are genuine and which aren’t? This paper tries to offer an account that answers these questions. On my account, the incoherence of a set of attitudinal mental states is a matter of its being (partially) constitutive of the mental states in question that, for any agent that holds these attitudes jointly, the agent is disposed, when conditions of full transparency are met, to give up at least one of the attitudes.
Arguments pro and contra convergent realism (underdetermination of theory by observational evidence and pessimistic meta-induction from past falsity) are considered. It is argued that, to meet the challenge of the counter-arguments, convergent realism should be considerably changed, with the help of modifications to the propositions of this meta-programme’s “hard core” and “protective belt”. Maybe one of the ways out is to turn to the coherence theory of truth. Some of the works of Hegel (as interpreted by Merab Mamardashvili and Alexandre Kojev), Husserl and Heidegger can help to dig still deeper into the background of this theory. Key words: Husserl, Heidegger, Hegel, convergent realism, internal realism, coherence theory of truth.
An education for cultural coherence tends to the child’s well-being through identity construction and maintenance. Critics charge that this sort of education will not bode well for the future autonomy of children. I will argue that culturally coherent education, provided there is no coercion, can lend itself to eventual autonomy and may assist minority children in countering the negative stereotypes and discrimination they face in the larger society. Further, I will argue that few individuals actually possess an entirely coherent identity; rather, most of us possess hybrid identities that lend themselves to multiple, not necessarily conflicting allegiances.
We study whether it is possible to generalise Seidenfeld et al.’s representation result for coherent choice functions in terms of sets of probability/utility pairs when we let go of Archimedeanity. We show that the convexity property is necessary but not sufficient for a choice function to be an infimum of a class of lexicographic ones. For the special case of two-dimensional option spaces, we determine the necessary and sufficient conditions by weakening the Archimedean axiom.
We construct a probabilistic coherence measure for information sets which determines a partial coherence ordering. This measure is applied in constructing a criterion for expanding our beliefs in the face of new information. A number of idealizations are made, which can be relaxed by appeal to Bayesian networks.
In a recent article, Peter Gärdenfors (1992) has suggested that the AGM (Alchourrón, Gärdenfors, and Makinson) theory of belief revision can be given an epistemic basis by interpreting the revision postulates of that theory in terms of a version of the coherence theory of justification. To accomplish this goal Gärdenfors suggests that the AGM revision postulates concerning the conservative nature of belief revision can be interpreted in terms of a concept of epistemic entrenchment and that there are good empirical reasons to adopt this view as opposed to some form of foundationalist account of the justification of our beliefs. In this paper I argue that Gärdenfors’ attempt to underwrite the AGM theory of belief revision by appealing to a form of coherentism is seriously inadequate for several reasons.
Let me first state that I like Antti Revonsuo’s discussion of the various methodological and interpretational problems in neuroscience. It shows how careful and methodologically reflected scientists have to proceed in this fascinating field of research. I have nothing to add here. Furthermore, I am very sympathetic towards Revonsuo’s general proposal to call for a Philosophy of Neuroscience that stresses foundational issues, but also focuses on methodological and explanatory strategies. In a footnote of his paper, Revonsuo complains – as many others do today – about what is sometimes called “physics imperialism”. This is the view that physics dominates the philosophy of science. I am not sure if this is still the case nowadays, but it is certainly historically correct that almost all work in the field of methodology centered around cases from physics. Although this has been changing, there are still plenty of special sciences philosophers did not worry about much. Admittedly, I am myself a trained physicist and not a neuroscientist and will therefore probably be biased negatively. As it is, I will discuss some examples from physics in order to illustrate my points.
I argue that bounded agents face a systematic accuracy-coherence tradeoff in cognition. Agents must choose whether to structure their cognition in ways likely to promote coherence or accuracy. I illustrate the accuracy-coherence tradeoff by showing how it arises out of at least two component tradeoffs: a coherence-complexity tradeoff between coherence and cognitive complexity, and a coherence-variety tradeoff between coherence and strategic variety. These tradeoffs give rise to an accuracy-coherence tradeoff because privileging coherence over complexity or strategic variety often leads to a corresponding reduction in accuracy. I conclude with a discussion of two normative consequences for the study of bounded rationality: the importance of procedural rationality and the role of coherence in theories of bounded rationality.
This paper examines the interplay of semantics and pragmatics within the domain of film. Films are made up of individual shots strung together in sequences over time. Though each shot is disconnected from the next, combinations of shots still convey coherent stories that take place in continuous space and time. How is this possible? The semantic view of film holds that film coherence is achieved in part through a kind of film language, a set of conventions which govern the relationships between shots. In this paper, we develop and defend a new version of the semantic view. We articulate it for a pair of conventions that govern spatial relations between viewpoints. One such rule is already well-known; sometimes called the "180° Rule," we term it the X-Constraint; to this we add a previously unrecorded rule, the T-Constraint. As we show, both have the effect, in different ways, of limiting the way that viewpoint can shift through space from shot to shot over the course of a film sequence. Such constraints, we contend, are analogous to relations of discourse coherence that are widely recognized in the linguistic domain. If film is to have a language, it is a language made up of rules like these.
There is a well-known equivalence between avoiding accuracy dominance and having probabilistically coherent credences (see, e.g., de Finetti 1974, Joyce 2009, Predd et al. 2009, Pettigrew 2016). However, this equivalence has been established only when the set of propositions on which credence functions are defined is finite. In this paper, I establish connections between accuracy dominance and coherence when credence functions are defined on an infinite set of propositions. In particular, I establish the necessary results to extend the classic accuracy argument for probabilism to certain classes of infinite sets of propositions including countably infinite partitions.
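The finite case is easy to exhibit concretely. A minimal sketch of the de Finetti-style dominance result over a two-cell partition follows; the particular credence values are assumed for illustration:

```python
# Brier score: squared distance between a credence function and the
# truth-values at a world. Lower is more accurate.
def brier(credences, world):
    return sum((t - c) ** 2 for c, t in zip(credences, world))

incoherent = (0.6, 0.6)   # credences in P and not-P; violates additivity
coherent = (0.5, 0.5)     # its projection onto the probabilistic credences

# The coherent function is strictly more accurate at every world.
for world in [(1, 0), (0, 1)]:   # P true; P false
    assert brier(coherent, world) < brier(incoherent, world)
```

In the infinite setting it is exactly this projection-style argument that needs re-establishing, since the relevant dominance and convexity facts are no longer automatic.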
It is usually accepted that deductions are non-informative and monotonic, inductions are informative and nonmonotonic, abductions create hypotheses but are epistemically irrelevant, and both deductions and inductions can’t provide new insights. In this article, I attempt to provide a more cohesive view of the subject with the following hypotheses: (1) the paradigmatic examples of deductions, such as modus ponens and hypothetical syllogism, are not inferential forms, but coherence requirements for inferences; (2) since any reasoner aims to be coherent, any inference must be deductive; (3) a coherent inference is an intuitive process where the premises should be taken as sufficient evidence for the conclusion, which in turn should be viewed as necessary evidence for the premises in some modal range; (4) inductions, properly understood, are abductions, but there are no abductions beyond the fact that in any inference the conclusion should be regarded as necessary evidence for the premises; (5) monotonicity is not only compatible with the retraction of past inferences given new information, but is a requirement for it; (6) this explanation of inferences holds true for discovery processes, predictions and trivial inferences.
Rationalists often argue that empiricism is incoherent and conclude, on that basis, that some knowledge is a priori. I contend that such arguments against empiricism cannot be parlayed into an argument in support of the a priori since rationalism is open to the same arguments. I go on to offer an alternative strategy. The leading idea is that, instead of offering a priori arguments against empiricism, rationalists should marshal empirical support for their position.
Several authors suggest that understanding and epistemic coherence are tightly connected. Using an account of understanding that makes no appeal to coherence, I explain away the intuitions that motivate this position. I then show that the leading coherentist epistemologies only place plausible constraints on understanding insofar as they replicate my own account’s requirements. I conclude that understanding is only superficially coherent.
According to a popular account, rationality is a kind of coherence of an agent’s mental states and, more specifically, a matter of fulfilling norms of coherence. For example, in order to be rational, an agent is required to intend to do what they judge they ought to and can do. This norm has been called ‘Enkrasia’. Another norm requires that, ceteris paribus, an agent retain their intention over time. This has been called ‘Persistence of Intention’. This paper argues that, thus understood, norms of rationality may at times conflict. More specifically, Enkrasia and Persistence of Intention may place demands on the agent that are impossible to fulfil. In these cases, the framework of requirements does not provide us with norms that make us rational. A rival account, according to which rationality is a kind of responsiveness to one’s available reasons, can overcome the problem.
Saul Smilansky’s theory of free will and moral responsibility consists of two parts: dualism and illusionism. Dualism is the thesis that both compatibilism and hard determinism are partly true, and has puzzled many philosophers. I argue that Smilansky’s dualism can be given an unquestionably coherent and comprehensible interpretation if we reformulate it in terms of pro tanto reasons. Dualism so understood is the thesis that respect for persons gives us pro tanto reasons to blame wrongdoers, and also pro tanto reasons not to blame them. These reasons must be weighed against each other (and against relevant consequentialist reasons) in order to find out what we all things considered ought to do.
Throughout the biological and biomedical sciences, prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such checklists are usually developed independently by groups working within particular biologically or technologically delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult, inhibiting their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations): a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.
Divine simplicity is central to Thomas Aquinas’s philosophy of God. Most important for Aquinas is his view that God’s existence (esse) is identical to God’s essence; for everything other than God, there is a distinction between existence and essence. However, recent developments in analytic philosophy about the nature of existence threaten to undermine what Aquinas thought regarding divine simplicity. In the first chapter of this dissertation, I trace Aquinas’s thinking on divine simplicity through the various texts he wrote regarding the matter. I establish that it is crucial for Aquinas that God is identical to his existence. But, is it even coherent to talk about “a thing’s existence?” In Chapter Two I summarize the arguments of C.J.F. Williams that existence is not a real property that individuals have and that predicating the word “exists” after the name of an individual produces linguistic gibberish. After considering, and rejecting, in Chapter Three attempts by some philosophers to refute Williams’s arguments, I turn, in Chapter Four, to a more detailed account of what Aquinas means by esse, the Latin word often rendered in English as “existence.” The conclusion that I come to is that Aquinas does not mean by esse what contemporary philosophers have usually meant by “existence.” Rather, he understands esse to be the act by which anything can be anything at all, instead of there being nothing whatsoever. In the final chapter, I turn to implications of this for Aquinas’s views of divine simplicity, showing that rather than being incoherent, Aquinas’s thinking on esse points toward the unfathomable mystery that is God.
Cases of severe and enduring Anorexia Nervosa (SEAN) rightly raise a great deal of concern around assessing capacity to refuse treatment (including artificial feeding). Commentators worry that the Court of Protection in England & Wales strays perilously close to a presumption of incapacity in such cases (Cave and Tan 2017, 16), with some especially bold (one might even say reckless) observers suggesting that the ordinary presumption in favor of capacity ought to be reversed in such cases (Ip 2019). Those of us who worry that such trends and proposals amount to (or at least pose a serious threat of) wrongful discrimination on the grounds of disability nevertheless feel the pull of judging many SEAN service users to be incapacitous with respect to decisions regarding (perhaps amongst other things) treatments involving feeding, artificial or otherwise. But it is difficult to get to grips with exactly what their incapacity consists in. Many such service users seem able to understand, retain, use and weigh information, and express a decision (i.e. they seem, prima facie, to satisfy the ‘MacArthur’ criteria). At the very least, they do not, generally.
Hobbs (2010) introduced ‘clause-internal coherence’ (CIC) to describe inferences in, e.g., ‘A jogger was hit by a car,’ where the jogging is understood to have led to the car-hitting. Cohen & Kehler (2021) argue that well-known pragmatic tools cannot account for CIC, motivating an enrichment account familiar from discourse coherence research. An outstanding question is how to compositionally derive CIC from coherence relations. This paper takes strides in answering this question. It first provides experimental support for the existence of CIC via offline evidence that attributive (non-)deverbal adjectives can trigger the same causal inferences within clauses that their predicative counterparts can trigger across clauses, albeit more weakly. To explain the experimental results, we use tools in Segmented Discourse Representation Theory (SDRT), which allows us to show that causal inferences can be derived in various ways, depending on whether deverbal adjectives are used attributively or predicatively. If the former, they are presupposition triggers and the coherence relations Elaboration/Continuation compete with Background; if the latter, Explanation/Result compete with Background. These different competitions -- cashed out in terms of interaction between default axioms -- correlate with the difference in the relative salience of the causal inferences.
Putnam and Davidson both defended coherence theories of justification from the early 1980s onward. There are interesting similarities between these theories, and Putnam’s philosophical development led to further convergence in the 1990s. The most conspicuous difference between Putnam’s and Davidson’s theories is that they appear to fundamentally disagree on the role and nature of conceptual schemes, but a closer look reveals that they are not as far apart on this issue as usually assumed. The veridicality of perceptual beliefs is a cornerstone of both Davidson’s and Putnam’s later (but not earlier) coherentism. However, this thesis introduces a form of weak foundationalism into their theories, and consequently, they are, strictly speaking, not pure coherence theories, but hybrids between coherentism and foundationalism.
This article shows that in 1.4.2.15-24 of the Treatise of Human Nature, Hume presents his own position on objects, which is to be distinguished from both the vulgar and philosophical conception of objects. Here, Hume argues that objects that are effectively imagined to have a “perfect identity” are imagined due to the constancy and coherence of our perceptions (what we may call ‘level 1 constancy and coherence’). In particular, we imagine that objects cause such perceptions, via what I call ‘indirect causation.’ In virtue of imagining ideas of objects that have a perfect identity, our perceptions seem to be even more constant and coherent (what we may call ‘level 2 constancy and coherence’). Thus, in addition to seeing that Hume is presenting his own position on objects in this section of the Treatise, we see that he is working with a previously unrecognized kind of causation, i.e., indirect causation, and that he has two kinds of constancy and coherence in mind: level 1 and level 2.
I examine three common beliefs about love: constancy, exclusivity, and the claim that love is a response to the properties of the beloved. Following a discussion of their relative consistency, I argue that neither the constancy nor the exclusivity of love is saved by the contrary belief, that love is not (entirely) a response to the properties of the beloved.
According to Quine, Charles Parsons, Mark Steiner, and others, Russell’s logicist project is important because, if successful, it would show that mathematical theorems possess desirable epistemic properties often attributed to logical theorems, such as aprioricity, necessity, and certainty. Unfortunately, Russell never attributed such importance to logicism, and such a thesis contradicts Russell’s explicitly stated views on the relationship between logic and mathematics. This raises the question: what did Russell understand to be the philosophical importance of logicism? Building on recent work by Andrew Irvine and Martin Godwyn, I argue that Russell thought a systematic reduction of mathematics increases the certainty of known mathematical theorems (even basic arithmetical facts) by showing mathematical knowledge to be coherently organized. The paper outlines Russell’s theory of coherence, and discusses its relevance to logicism and the certainty attributed to mathematics.
Coherentism in epistemology has long suffered from a lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
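The belief-individuation problem described here can be made concrete with Shogenji’s measure (the probability of the conjunction divided by the product of the marginals). The sketch below is illustrative only: the toy distribution and the helper names (`worlds`, `shogenji`) are my own assumptions, not taken from the paper. It packages the same total information, A and B, as two logically equivalent belief sets and gets two different coherence degrees.

```python
from math import prod

# Toy joint distribution over two atoms A, B (illustrative numbers only;
# any positively correlated A and B would make the same point).
worlds = {
    ("A", "B"): 0.4,  # both true
    ("A",): 0.1,      # only A true
    ("B",): 0.1,      # only B true
    (): 0.4,          # both false
}

def p(prop):
    """Probability of a proposition, given as the set of worlds where it holds."""
    return sum(q for w, q in worlds.items() if w in prop)

# Propositions as sets of worlds in which they are true.
A = {("A", "B"), ("A",)}
B = {("A", "B"), ("B",)}
A_and_B = {("A", "B")}

def shogenji(*props):
    """Shogenji's coherence measure: P(conjunction) / product of marginals."""
    conj = set.intersection(*props)
    return p(conj) / prod(p(x) for x in props)

# Two logically equivalent packagings of the same total information A & B:
c1 = shogenji(A, B)        # the set {A, B}
c2 = shogenji(A, A_and_B)  # the set {A, A & B}
print(c1, c2)  # prints 1.6 2.0: same information, different coherence degree
```

Purely logical re-individuation of the beliefs (replacing B with A & B) changes the assigned degree, which is exactly the instability the paper targets.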
Bayesians standardly claim that there is rational pressure for agents’ credences to cohere across time because they face bad (epistemic or practical) consequences if they fail to diachronically cohere. But as David Christensen has pointed out, groups of individual agents also face bad consequences if they fail to interpersonally cohere, and there is no general rational pressure for one agent's credences to cohere with another’s. So it seems that standard Bayesian arguments may prove too much. Here, we agree with Christensen that there is no general rational pressure to diachronically cohere, but we argue that there are particular cases in which there is rational pressure to diachronically cohere, as well as particular cases in which interpersonal probabilistic coherence is rationally required. More generally, we suggest that Bayesian arguments for coherence apply whenever a collection (of agents or time slices) has a shared dimension of value and an ability to coordinate their actions in a range of cases relevant to that value. Typically, this shared value and ability to coordinate is very strong across the time slices of one human being, and very weak across different human beings, but there are special cases where these can switch—i.e., some groups of humans will have as much reason for their beliefs to cohere across a particular range of cases as the time slices of one human usually do, but some time slices of a human will have as much freedom to differ in their beliefs from the others as the members of a group usually do.
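The “bad consequences” that standard Bayesian arguments invoke here are typically diachronic Dutch books: a bookie trades with the agent at prices the agent itself deems fair at each time, yet guarantees the agent a loss. A minimal sketch follows; the `settle` helper and the 0.5-to-0.3 numbers are illustrative assumptions, not from the paper.

```python
def settle(credence_t1, credence_t2, stake=1.0):
    """Agent's net payoff when a bookie exploits a diachronic credence shift.

    At t1 the agent buys a bet paying `stake` if E, at the price it deems
    fair (credence_t1 * stake). At t2, with no new evidence, its credence
    has changed, and it sells the same bet back at the new fair price.
    The bet's payoff cancels out, so the net is the same whether E occurs.
    """
    payoffs = []
    for e_occurs in (True, False):
        held = stake if e_occurs else 0.0
        # Buy at t1 (pay credence_t1*stake), collect `held`,
        # then sell back at t2 (receive credence_t2*stake) and forgo `held`.
        net = -credence_t1 * stake + held + credence_t2 * stake - held
        payoffs.append(round(net, 10))
    return payoffs

# Credence in E drops from 0.5 to 0.3 between t1 and t2:
print(settle(0.5, 0.3))  # prints [-0.2, -0.2]: a sure loss either way
```

The same arithmetic applies unchanged if the two prices come from two different agents pooling their money, which is the observation driving the paper’s shared-value-and-coordination diagnosis.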
We deepen the study of conjoined and disjoined conditional events in the setting of coherence. Unlike in other approaches, these objects are defined in the framework of conditional random quantities. We show that some well-known properties, valid in the case of unconditional events, still hold in our approach to logical operations among conditional events. In particular, we prove a decomposition formula and a related additive property. Then, we introduce the set of conditional constituents generated by $n$ conditional events and we show that they satisfy the basic properties valid in the case of unconditional events. We obtain a generalized inclusion-exclusion formula, which can be interpreted by introducing a suitable distributive property. Moreover, under logical independence of the basic unconditional events, we give two necessary and sufficient coherence conditions. The first gives a geometrical characterization for the coherence of prevision assessments on a family $F$ constituted by $n$ conditional events and all possible conjunctions among them. The second characterizes the coherence of prevision assessments defined on $F\cup K$, where $K$ is the set of conditional constituents associated with the conditional events in $F$. Then, we give some further theoretical results and we examine some examples and counterexamples. Finally, we make a comparison with other approaches and illustrate some theoretical aspects and applications.
It can be argued (cf. Dizadji‑Bahmani et al. 2010) that an increase in coherence is one goal that drives reductionist enterprises. Consequently, the question whether, or how well, this goal is achieved can serve as an epistemic criterion for evaluating both a concrete case of a purported reduction and our model of reduction: what conditions on the model allow for an increase in coherence? In order to answer this question, I provide an analysis of the relation between the reduction and the coherence of two theories. The underlying model of reduction is a (generalised) Nagelian model (cf. Nagel 1970, Schaffner 1974, Dizadji‑Bahmani et al. 2010). For coherence, different measures have been put forward (e.g. Shogenji 1999, Olsson 2002, Fitelson 2003, Bovens & Hartmann 2003). However, since there are counterexamples to each proposed coherence measure, we should be careful that the analysis be sufficiently stable (in a sense to be specified). It will turn out that this can be done.
Polus admires orators for the tyrannical power they have. However, Socrates argues that orators and tyrants lack power worth having: the ability to satisfy one's wishes or wants (boulēseis). He distinguishes wanting from thinking best, and grants that orators and tyrants do what they think best while denying that they do what they want. His account is often thought to involve two conflicting requirements: wants must be attributable to the wanter from their own perspective (to count as their desires), but wants must also be directed at objects that are genuinely good (in order for failure to satisfy them to matter). We offer an account of wanting as reflective, coherent desire, which allows Socrates to satisfy both desiderata. We then explain why he thinks that orators and tyrants want to act justly, though they do greater injustices than anyone else and so frustrate their own wants more than anyone else.
Martha Nussbaum has provided a sustained critique of Cicero’s De officiis (or On Duties), concerning what she claims is Cicero’s incoherent distinction between duties of justice, which are strict, cosmopolitan, and impartial, and duties of material aid, which are elastic, weighted towards those who are near and dear, and partial. No doubt, from Nussbaum’s cosmopolitan perspective, Cicero’s distinction between justice and beneficence seems problematic and lies at the root of modern moral failures to conceptualize adequately our obligations in situations of famine and global inequality. And yet a careful reading of Cicero’s On Duties shows many duties that appear to be partial. It is hard to believe that Cicero’s discussion of these “partial duties” was a mere oversight or omission on his part. Rather, Cicero seemed to have no problem endorsing what I will call the “partial coherence” of duties in the work—namely, the claim that asserting that duties are exhaustively partial or impartial is a false dichotomy. With the help of conceptual analysis by Richard Kraut, my paper aims to establish and defend Cicero’s “partial coherence” against Nussbaum’s criticisms.
This article addresses the question of the coherence of enactivism as a research perspective by making a case study of enactivism in mathematics education research. Main theoretical directions in mathematics education are reviewed and the history of adoption of concepts from enactivism is described. It is concluded that enactivism offers a ‘grand theory’ that can be brought to bear on most of the phenomena of interest in mathematics education research, and so it provides a sufficient theoretical framework. It has particular strength in describing interactions between cognitive systems, including human beings, human conversations and larger human social systems. Some apparent incoherencies of enactivism in mathematics education and in other fields come from the adoption of parts of enactivism that are then grafted onto incompatible theories. However, another significant source of incoherence is the inadequacy of Maturana’s definition of a social system and the lack of a generally agreed upon alternative.
Whether collective agency is a coherent concept depends on the theory of agency that we choose to adopt. We argue that the enactive theory of agency developed by Barandiaran, Di Paolo and Rohde (2009) provides a principled way of grounding agency in biological organisms. However, the importance of biological embodiment for the enactive approach might lead one to be skeptical as to whether artificial systems or collectives of individuals could instantiate genuine agency. To explore this issue we contrast the concept of collective agency with multi-agent systems and multi-system agents, and argue that genuinely collective agents instantiate agency at both the collective level and at the level of the component parts. Developing the enactive model, we propose understanding agency – both at the level of the individual and of the collective – as spectra that are constituted by dimensions that vary across time. Finally, we consider whether collectives that are not merely metaphorically ‘agents’ but rather are genuinely agentive also instantiate subjectivity at the collective level. We propose that investigations using the perceptual crossing paradigm suggest that a shared lived perspective can indeed emerge but this should not be conflated with a collective first-person perspective, for which material integration in a living body may be required.