Smith argues that, unlike other forms of evidence, naked statistical evidence fails to satisfy normic support. This is his solution to the puzzles of statistical evidence in legal proof. This paper focuses on Smith’s claim that DNA evidence in cold-hit cases does not satisfy normic support. I argue that if this claim is correct, virtually no other form of evidence used at trial can satisfy normic support. This is troublesome. I discuss a few ways in which Smith can respond.
Why can testimony alone be enough for findings of liability, while statistical evidence alone cannot? These questions underpin the “Proof Paradox” (Redmayne 2008, Enoch et al. 2012). Many epistemologists have attempted to explain this paradox from a purely epistemic perspective. I call this the “Epistemic Project”. In this paper, I take a step back from this recent trend. Stemming from considerations about the nature and role of standards of proof, I define three requirements that any successful account in line with the Epistemic Project should meet. I then consider three recent epistemic accounts on which the standard is met when the evidence rules out modal risk (Pritchard 2018), normic risk (Ebert et al. 2020), or relevant alternatives (Gardiner 2019, 2020). I argue that none of these accounts meets all the requirements. Finally, I offer reasons to be pessimistic about the prospects of having a successful epistemic explanation of the paradox. I suggest the discussion on the proof paradox would benefit from undergoing a ‘value-turn’.
Recent years have seen fresh impetus brought to debates about the proper role of statistical evidence in the law. Recent work largely centres on a set of puzzles known as the ‘proof paradox’. While these puzzles may initially seem academic, they have important ramifications for the law: raising key conceptual questions about legal proof, and practical questions about DNA evidence. This article introduces the proof paradox, why we should care about it, and new work attempting to resolve it.
This essay introduces the ‘she said, he said’ paradox for Title IX investigations. ‘She said, he said’ cases are accusations of rape, followed by denials, with no further significant case-specific evidence available to the evaluator. In such cases, usually the accusation is true. Title IX investigations adjudicate sexual misconduct accusations in US educational institutions; I address whether they should be governed by the ‘preponderance of the evidence’ standard of proof or the higher ‘clear and convincing evidence’ standard. Orthodoxy holds that the ‘preponderance’ standard is satisfied if the evidence adduced renders the litigated claim more likely than not. On this view, I argue, ‘she said, he said’ cases satisfy the ‘preponderance’ standard. But this consequence conflicts with plausible liberal and feminist claims. In this essay I contrast the ‘she said, he said’ paradox with legal epistemology’s proof paradox. I explain how both paradoxes arise from the distinction between individualised and non-individualised evidence, and I critically evaluate responses to the ‘she said, he said’ paradox.
This paper defends the heretical view that, at least in some cases, we ought to assign legal liability based on purely statistical evidence. The argument draws on prominent civil law litigation concerning pharmaceutical negligence and asbestos-poisoning. The overall aim is to illustrate moral pitfalls that result from supposing that it is never appropriate to rely on bare statistics when settling a legal dispute.
In this dissertation, we shall investigate whether Tennant's criterion for paradoxicality (TCP) can be a correct criterion for genuine paradoxes and whether the requirement of a normal derivation (RND) can be a proof-theoretic solution to the paradoxes. Tennant’s criterion has two types of counterexamples. One is a case which raises the problem of undergeneration: TCP makes a paradoxical derivation count as non-paradoxical. The other generates the problem of overgeneration: TCP renders a non-paradoxical derivation paradoxical. Chapter 2 deals with the problem of undergeneration and Chapter 3 concerns the problem of overgeneration. Chapter 2 argues that Tennant’s diagnosis of the counterexample which applies the CR-rule and causes the undergeneration problem is not correct, and presents a solution to the problem of undergeneration. Chapter 3 argues that Tennant’s diagnosis of the counterexample raising the overgeneration problem is wrong and provides a solution to the problem. Finally, Chapter 4 addresses what should be explicated in order for RND to be a proof-theoretic solution to the paradoxes.
This paper uses a paradox inherent in any solution to the Hard Problem of Consciousness to argue for God’s existence. The paper assumes we are “thought machines”, reading the state of a relevant physical medium and then outputting corresponding thoughts. However, the existence of such a thought machine is impossible, since it needs an infinite number of point-representing sensors to map the physical world to conscious thought. This paper shows that these sensors cannot exist, and thus thought cannot come solely from our physical world. The only possible explanation is something outside, argued to be God.
This article attempts to elucidate the phenomenon of time and its relationship to consciousness. It defends the idea that time exists both as a psychological or illusory experience, and as an ontological property of spacetime that actually exists independently of human experience.
Wittgenstein's paradoxical theses that unproved propositions are meaningless, proofs form new concepts and rules, and contradictions are of limited concern have led to a variety of interpretations, most of them centered on rule-following skepticism. We argue, with the help of C. S. Peirce's distinction between corollarial and theorematic proofs, that his intuitions are better explained by resistance to what we call conceptual omniscience, treating meaning as fixed content specified in advance. We interpret the distinction in the context of modern epistemic logic and semantic information theory, and show how removing conceptual omniscience helps resolve Wittgenstein's paradoxes and explain the puzzle of deduction, its ability to generate new knowledge and meaning.
Interesting as they are by themselves in philosophy and mathematics, paradoxes can be made even more fascinating when turned into proofs and theorems. For example, Russell’s paradox, which overthrew Frege’s logical edifice, is now a classical theorem in set theory, to the effect that no set contains all sets. Paradoxes can be used in proofs of some other theorems—thus the Liar paradox has been used in the classical proof of Tarski’s theorem on the undefinability of truth in sufficiently rich languages. This paradox (as well as Richard’s paradox) appears implicitly in Gödel’s proof of his celebrated first incompleteness theorem. In this paper, we study Yablo’s paradox from the viewpoint of first- and second-order logics. We prove that a formalization of Yablo’s paradox (which is second order in nature) is non-first-orderizable in the sense of George Boolos (1984).
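For readers unfamiliar with it, Yablo's paradox concerns an infinite sequence of sentences, each saying that all later sentences are untrue. The schematic rendering below is the standard one in the literature; it is not necessarily the exact formalization studied in the paper.

```latex
% Yablo's sequence: each S_n says that every later sentence is untrue.
\[
  S_n \;\leftrightarrow\; \forall k \,\bigl(k > n \to \neg T(S_k)\bigr)
  \qquad (n = 0, 1, 2, \dots)
\]
% If some S_n were true, all later sentences would be untrue; but then S_{n+1}'s
% claim about its own successors would hold, making S_{n+1} true: contradiction.
% So every S_n is untrue; but then each S_n's claim holds, making it true: contradiction.
```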
In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests standards of proof should not be interpreted numerically. These ‘proof paradoxes’ illustrate that purely statistical evidence can warrant high credence in a disputed fact without satisfying the relevant legal standard. In this essay I evaluate three influential attempts to explain why merely statistical evidence cannot satisfy legal standards.
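The tension described here is usually motivated with the classic gatecrasher case due to L. J. Cohen; the minimal numerical version below is my illustration and is not drawn from the essay itself.

```latex
% Gatecrasher case: 100 spectators, 99 gatecrashed, 1 paid;
% no further evidence distinguishes the defendant from the rest.
\[
  P(\text{defendant gatecrashed} \mid \text{evidence}) \;=\; \frac{99}{100} \;=\; 0.99
\]
% This exceeds the usual numerical glosses (0.5 preponderance, 0.9--0.95
% beyond reasonable doubt), yet a liability verdict on these bare
% statistics alone strikes most people as impermissible.
```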
The impossibility results in judgement aggregation show a clash between fair aggregation procedures and rational collective outcomes. In this paper, we are interested in analysing the notion of rational outcome by proposing a proof-theoretical understanding of collective rationality. In particular, we use the analysis of proofs and inferences provided by linear logic in order to define a fine-grained notion of group reasoning that allows for studying collective rationality with respect to a number of logics. We analyse the well-known paradoxes (...) in judgement aggregation and we pinpoint the reasoning steps that trigger the inconsistencies. Moreover, we extend the map of possibility and impossibility results in judgement aggregation by discussing the case of substructural logics. In particular, we show that there exist fragments of linear logic for which general possibility results can be obtained. (shrink)
A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: ‘Is it ever appropriate to base a legal verdict on statistical evidence alone?’ Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple sources of incriminating statistical evidence. As we conjoin together different types of statistical evidence, it becomes increasingly incredible to suppose that a positive verdict would be impermissible. I suggest that none of the dominant views in the literature can easily accommodate such cases, and close by offering a diagnosis of my own.
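To make the pressure vivid with numbers: under an idealizing independence assumption (mine, not the paper's), conjoining statistical evidence drives the probability of error down multiplicatively.

```latex
% Two independent sources of statistical evidence, each wrong with probability 0.05:
\[
  P(\text{both mislead}) \;=\; 0.05 \times 0.05 \;=\; 0.0025
\]
% With k independent sources of error rate p, the chance that all mislead is p^k,
% so withholding a verdict becomes ever harder to motivate as k grows.
```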
According to the “paradox of knowability”, the moderate thesis that all truths are knowable – ∀p(p → ◇Kp) – implies the seemingly preposterous claim that all truths are actually known – ∀p(p → Kp) – i.e. that we are omniscient. If Fitch’s argument were successful, it would amount to a knockdown rebuttal of anti-realism by reductio. In the paper I defend the nowadays rather neglected strategy of intuitionistic revisionism. Employing only intuitionistically acceptable rules of inference, the conclusion of the argument is, firstly, not ∀p(p → Kp), but ∀p(p → ¬¬Kp). Secondly, even if there were an intuitionistically acceptable proof of ∀p(p → Kp), i.e. an argument based on a different set of premises, the conclusion would have to be interpreted in accordance with Heyting semantics, and read in this way, the apparently preposterous conclusion would be true on conceptual grounds and acceptable even from a realist point of view. Fitch’s argument, understood as an immanent critique of verificationism, fails because in a debate dealing with the justification of deduction there can be no interpreted formal language on which realists and anti-realists could agree. Thus, the underlying problem is that a satisfactory solution to the “problem of shared content” is not available. I conclude with some remarks on the proposals by J. Salerno and N. Tennant to reconstruct certain arguments in the debate on anti-realism by establishing aporias.
In his "Ontological proof", Kurt Gödel introduces the notion of a second-order value property, the positive property P. The second axiom of the proof states that for any property φ: If φ is positive, its negation is not positive, and vice versa. I put forward that this concept of positiveness leads into a paradox when we apply it to the following self-reflexive sentences: (A) The truth value of A is not positive; (B) The truth value of B (...) is positive. Given axiom 2, sentences A and B paradoxically cannot be both true or both false, and it is also impossible that one of the sentences is true whereas the other is false. (shrink)
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard-Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
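As a concrete illustration of the two rules compared in this abstract, here is a minimal sketch (a hypothetical judge profile of my own, not taken from the paper) of the classic discursive-dilemma agenda with premises p, q and conclusion p AND q.

```python
# Minimal sketch of premise-based vs conclusion-based majority voting
# on an agenda with premises p, q and conclusion c = p AND q.

def majority(votes):
    """True iff a strict majority of the boolean votes are True."""
    return sum(votes) > len(votes) / 2

# Hypothetical three-judge profile (the classic discursive dilemma):
# each judge is individually consistent (their c equals their p AND q).
judges = [
    {"p": True,  "q": True},   # judge 1: accepts the conclusion
    {"p": True,  "q": False},  # judge 2: rejects the conclusion
    {"p": False, "q": True},   # judge 3: rejects the conclusion
]

# Premise-based voting: aggregate p and q by majority, then infer c.
p_maj = majority([j["p"] for j in judges])
q_maj = majority([j["q"] for j in judges])
premise_based = p_maj and q_maj

# Conclusion-based voting: aggregate the judges' own verdicts on c directly.
conclusion_based = majority([j["p"] and j["q"] for j in judges])

print(premise_based)     # True  (p and q each command a majority)
print(conclusion_based)  # False (only one judge accepts c)
```

The divergence of the two outputs on the same profile is what opens the door to the strategic behaviour the paper analyses.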
This paper gives a definition of self-reference on the basis of the dependence relation given by Leitgeb (2005), and the dependence digraph by Beringer & Schindler (2015). Unlike the usual discussion about self-reference of paradoxes centering around Yablo's paradox and its variants, I focus on the paradoxes of finitary characteristic, which are given again by use of Leitgeb's dependence relation. They are called 'locally finite paradoxes', satisfying that any sentence in these paradoxes can depend on finitely many sentences. I prove that all locally finite paradoxes are self-referential in the sense that there is a directed cycle in their dependence digraphs. This paper also studies the 'circularity dependence' of paradoxes, which was introduced by Hsiung (2014). I prove that the locally finite paradoxes have circularity dependence in the sense that they are paradoxical only in the digraph containing a proper cycle. The proofs of the two results are based directly on König's infinity lemma. In contrast, this paper also shows that Yablo's paradox and its nested variant are non-self-referential, and neither McGee's paradox nor the omega-cycle liar paradox has circularity dependence.
In a recent paper by Tranchini (Topoi, 2019), an introduction rule for the paradoxical proposition ρ∗ that can be simultaneously proven and disproven is discussed. This rule is formalized in Martin-Löf’s constructive type theory (CTT) and supplemented with an inferential explanation in the style of Brouwer-Heyting-Kolmogorov semantics. I will, however, argue that the provided formalization is problematic because what is paradoxical about ρ∗ from the viewpoint of CTT is not its provability, but whether it is a proposition at all.
In this paper we first develop a Dialetheic Logic with Exclusive Assumptions and Conclusions, DLEAC. We adopt the semantics of the logic of paradox (LP) extended with a notion of model suitable for DLEAC, and we modify its proof theory by refining the notions of assumption and conclusion, which are understood as speech acts. We introduce a new paradox – the rejectability paradox – first informally, then formally. We then provide its derivation in an extension of DLEAC containing the rejectability predicate.
Many philosophers are sceptical about the power of philosophy to refute commonsensical claims. They look at the famous attempts and judge them inconclusive. I prove that, even if those famous attempts are failures, there are alternative successful philosophical proofs against commonsensical claims. After presenting the proofs I briefly comment on their significance.
Cantor’s proof that the powerset of the set of all natural numbers is uncountable yields a version of Richard’s paradox when restricted to the full definable universe, that is, to the universe containing all objects that can be defined not just in one formal language but by means of the full expressive power of natural language: this universe seems to be countable on one account and uncountable on another. We argue that the claim that definitional contexts impose restrictions on the scope of quantifiers reveals a natural way out.
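For context, the Cantor argument the abstract builds on takes only a few lines; this is the textbook diagonal form, not the paper's restricted definable-universe variant.

```latex
% Cantor's diagonal argument: no function maps N onto its powerset.
% Given any f : N -> P(N), form the diagonal set
\[
  D \;=\; \{\, n \in \mathbb{N} : n \notin f(n) \,\}.
\]
% For each n, n \in D iff n \notin f(n), so D differs from every f(n);
% hence D is not in the range of f, and P(N) is uncountable.
```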
One of Bell's assumptions in the original derivation of his inequalities was the hypothesis of locality, i.e., the absence of the influence of two remote measuring instruments on one another. That is why violations of these inequalities observed in experiments are often interpreted as a manifestation of the nonlocal nature of quantum mechanics, or a refutation of local realism. It is well known that Bell's inequality was derived in its traditional form, without resorting to the hypothesis of locality and without the introduction of hidden variables, the only assumption being that the probability distributions are nonnegative. This can therefore be regarded as a rigorous proof that the hypothesis of locality and the hypothesis of the existence of hidden variables are not relevant to violations of Bell's inequalities. The physical meaning of the obtained results is examined. The physical nature of the violation of the Bell inequalities is explained under a new EPR-B nonlocality postulate. We show that the correlations of the observables involved in the Bohm-Bell type experiments can be expressed as correlations of classical random variables. The revisited Bell-type inequality in canonical notation reads ⟨AB⟩ + ⟨A′B⟩ + ⟨AB′⟩ − ⟨A′B′⟩ ≤ 6.
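For reference alongside the 'revisited' bound quoted above (and not as a claim of this paper), the standard CHSH form of Bell's inequality bounds the same combination of correlations by 2 for local hidden-variable models, while quantum mechanics can reach Tsirelson's bound.

```latex
% Standard CHSH inequality for local hidden-variable models:
\[
  \bigl|\langle AB\rangle + \langle AB'\rangle + \langle A'B\rangle - \langle A'B'\rangle\bigr| \;\le\; 2
\]
% Quantum mechanics violates this up to Tsirelson's bound:
\[
  \bigl|\langle AB\rangle + \langle AB'\rangle + \langle A'B\rangle - \langle A'B'\rangle\bigr|_{\mathrm{QM}} \;\le\; 2\sqrt{2} \approx 2.83
\]
```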
Many philosophers think that common sense knowledge survives sophisticated philosophical proofs against it. Recently, however, Bryan Frances (forthcoming) has advanced a philosophical proof that he thinks common sense can’t survive. Exploiting philosophical paradoxes like the Sorites, Frances attempts to show how common sense leads to paradox and therefore that common sense methodology is unstable. In this paper, we show how Frances’s proof fails and then present Frances with a dilemma.
This paper contends that Stoic logic (i.e. Stoic analysis) deserves more attention from contemporary logicians. It sets out how, compared with contemporary propositional calculi, Stoic analysis is closest to methods of backward proof search for Gentzen-inspired substructural sequent logics, as they have been developed in logic programming and structural proof theory, and produces its proof search calculus in tree form. It shows how multiple similarities to Gentzen sequent systems combine with intriguing dissimilarities that may enrich contemporary discussion. Much of Stoic logic appears surprisingly modern: a recursively formulated syntax with some truth-functional propositional operators; analogues to cut rules, axiom schemata and Gentzen’s negation-introduction rules; an implicit variable-sharing principle and deliberate rejection of Thinning and avoidance of paradoxes of implication. These latter features mark the system out as a relevance logic, where the absence of duals for its left and right introduction rules puts it in the vicinity of McCall’s connexive logic. Methodologically, the choice of meticulously formulated meta-logical rules in lieu of axiom and inference schemata absorbs some structural rules and results in an economical, precise and elegant system that values decidability over completeness.
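As background for the comparison with sequent systems, the five Stoic indemonstrables (the basic argument forms of Stoic analysis) can be displayed in modern notation; the sequent-style rendering below is mine, not the article's.

```latex
% The five Stoic indemonstrables in modern sequent-style notation:
\begin{align*}
  &(1)\quad p \to q,\; p \;\vdash\; q            &&\text{(modus ponens)}\\
  &(2)\quad p \to q,\; \neg q \;\vdash\; \neg p  &&\text{(modus tollens)}\\
  &(3)\quad \neg(p \wedge q),\; p \;\vdash\; \neg q\\
  &(4)\quad p \vee q,\; p \;\vdash\; \neg q      &&\text{(exclusive disjunction)}\\
  &(5)\quad p \vee q,\; \neg p \;\vdash\; q
\end{align*}
```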
DEFINING OUR TERMS
A “paradox” is an argumentation that appears to deduce a conclusion believed to be false from premises believed to be true. An “inconsistency proof for a theory” is an argumentation that actually deduces a negation of a theorem of the theory from premises that are all theorems of the theory. An “indirect proof of the negation of a hypothesis” is an argumentation that actually deduces a conclusion known to be false from the hypothesis alone or, more commonly, from the hypothesis augmented by a set of premises known to be true. A “direct proof of a hypothesis” is an argumentation that actually deduces the hypothesis itself from premises known to be true. Since ‘appears’, ‘believes’ and ‘knows’ all make elliptical reference to a participant, it is clear that ‘paradox’, ‘indirect proof’ and ‘direct proof’ are all participant-relative.

PARTICIPANT RELATIVITY
In normal mathematical writing the participant is presumed to be “the community of mathematicians” or some more or less well-defined subcommunity and, therefore, omission of explicit reference to the participant is often warranted. However, in historical, critical, or philosophical writing focused on emerging branches of mathematics such omission often invites confusion. One and the same argumentation has been a paradox for one mathematician, an inconsistency proof for another, and an indirect proof to a third. One and the same argumentation-text can appear to one mathematician to express an indirect proof while appearing to another mathematician to express a direct proof.

WHAT IS A PARADOX’S SOLUTION?
Of the above four sorts of argumentation only the paradox invites “solution” or “resolution”, and ordinarily this is to be accomplished either by discovering a logical fallacy in the “reasoning” of the argumentation or by discovering that the conclusion is not really false or by discovering that one of the premises is not really true. Resolution of a paradox by a participant amounts to reclassifying a formerly paradoxical argumentation either as a “fallacy”, as a direct proof of its conclusion, as an indirect proof of the negation of one of its premises, as an inconsistency proof, or as something else depending on the participant's state of knowledge or belief. This illustrates why an argumentation which is a paradox to a given mathematician at a given time may well not be a paradox to the same mathematician at a later time.

The present article considers several set-theoretic argumentations that appeared in the period 1903-1908. The year 1903 saw the publication of B. Russell's Principles of mathematics [Cambridge Univ. Press, Cambridge, 1903; Jbuch 34, 62]. The year 1908 saw the publication of Russell's article on type theory as well as Ernst Zermelo's two watershed articles on the axiom of choice and the foundations of set theory. The argumentations discussed concern “the largest cardinal”, “the largest ordinal”, the well-ordering principle, “the well-ordering of the continuum”, denumerability of ordinals and denumerability of reals. The article shows that these argumentations were variously classified by various mathematicians and that the surrounding atmosphere was one of confusion and misunderstanding, partly as a result of failure to make or to heed distinctions similar to those made above. The article implies that historians have made the situation worse by not observing or not analysing the nature of the confusion.
RECOMMENDATION
This well-written and well-documented article exemplifies the fact that clarification of history can be achieved through articulation of distinctions that had not been articulated (or were not being heeded) at the time. The article presupposes extensive knowledge of the history of mathematics, of mathematics itself (especially set theory) and of philosophy. It is therefore not to be recommended for casual reading.

AFTERWORD
This review was written at the same time Corcoran was writing his signature “Argumentations and logic” [249], which covers much of the same ground in much more detail. https://www.academia.edu/14089432/Argumentations_and_Logic
In recent years there has been a revitalised interest in non-classical solutions to the semantic paradoxes. In this paper I show that a number of logics are susceptible to a strengthened version of Curry's paradox. This can be adapted to provide a proof-theoretic analysis of the omega-inconsistency in Łukasiewicz's continuum-valued logic, allowing us to better evaluate which logics are suitable for a naïve truth theory. On this basis I identify two natural subsystems of Łukasiewicz logic which individually, but not jointly, lack the problematic feature.
A resolution to the Russell Paradox is presented that is similar to Russell's “theory of types” method but is instead based on the definition of why a thing exists as described in previous work by this author. In that work, it was proposed that a thing exists if it is a grouping tying "stuff" together into a new unit whole. In tying stuff together, this grouping defines what is contained within the new existent entity. A corollary is that a thing, such as a set, does not exist until after the stuff is tied together, or said another way, until what is contained within is completely defined. A second corollary is that after a grouping defining what is contained within is present and the thing exists, if one then alters what is tied together (e.g., alters what is contained within), the first existent entity is destroyed and a different existent entity is created. A third corollary is that a thing exists only where and when its grouping exists. Based on this, the Russell Paradox's set R of all sets that aren't members of themselves does not even exist until after the list of the elements it contains (e.g., the list of all sets that aren't members of themselves) is defined. Once this list of elements is completely defined, R then springs into existence. Therefore, because it doesn't exist until after its list of elements is defined, R obviously can't be in this list of elements and, thus, cannot be a member of itself; so, the paradox is resolved. This same type of reasoning is then applied to Gödel's first Incompleteness Theorem. Briefly, while writing a Gödel sentence, one makes reference to a future, not yet completed and not yet existent sentence, G, that claims its unprovability. However, only once the sentence is finished does it become a new unit whole and existent entity called sentence G. If one then goes back in and replaces the reference to the future sentence with the future sentence itself, a totally different sentence, G1, is created. This new sentence G1 does not assert its unprovability. An objection might be that all the possibly infinite number of possible G-type sentences or their corresponding Gödel numbers already exist somehow, so one doesn't have to worry about references to future sentences and springing into existence. But, if so, where do they exist? If they exist in a Platonic realm, where is this realm? If they exist pre-formed in the mind, this would seem to require a possibly infinite-sized brain to hold all these sentences. This is not the case. What does exist in the mind is the system for creating G-type sentences and their corresponding numbers. This mental system for making a G-type sentence is not the same as the G-type sentence itself just as an assembly line is not the same as a finished car. In conclusion, a new resolution of the Russell Paradox and some issues with proofs of Gödel's First Incompleteness Theorem are described.
A principle, according to which any scientific theory can be mathematized, is investigated. That theory is presupposed to be a consistent text, which can be exhaustively represented by a certain mathematical structure constructively. As thus used, the term “theory” includes all hypotheses, whether as yet unconfirmed or already rejected. The investigation of the sketch of a possible proof of the principle demonstrates that it should be accepted rather as a metamathematical axiom about the relation of mathematics and reality. Its investigation needs philosophical means. Husserl’s phenomenology is what is used, and the conception of “bracketing reality” is then modelled to generalize Peano arithmetic in its relation to set theory in the foundation of mathematics. The obtained model is equivalent to the generalization of Peano arithmetic by means of replacing the axiom of induction with that of transfinite induction. A comparison to Mach’s doctrine is used to reveal the fundamental and philosophical reductionism of Husserl’s phenomenology, leading to a kind of Pythagoreanism in the final analysis. Accepting or rejecting the principle, two kinds of mathematics appear, differing from each other by their relation to reality. Accepting the principle, mathematics has to include reality within itself in a kind of Pythagoreanism. These two kinds are called in the paper, correspondingly, Hilbert mathematics and Gödel mathematics. The sketch of the proof of the principle demonstrates that the generalization of Peano arithmetic as above can be interpreted as a model of Hilbert mathematics within Gödel mathematics, therefore showing that the former is not less consistent than the latter, and the principle is an independent axiom. An information interpretation of Hilbert mathematics is involved. It is a kind of ontology of information. Thus the problem of which of the two mathematics is more relevant to our being is discussed. An information interpretation of the Schrödinger equation is involved to illustrate the above problem.
A sorites argument is a symptom of the vagueness of the predicate with which it is constructed. A vague predicate admits of at least one dimension of variation (and typically more than one) in its intended range along which we are at a loss when to say the predicate ceases to apply, though we start out confident that it does. It is this feature of them that the sorites arguments exploit. Exactly how is part of the subject of this paper. The majority of philosophers writing on vagueness take it to be a kind of semantic phenomenon. If we are right, they are correct in this assumption, which is surely the default position, but they have not so far provided a satisfactory account of the implications of this or a satisfactory diagnosis of the sorites arguments. Other philosophers have urged more exotic responses, which range from the view that the fault lies not in our language, but in the world, which they propose to be populated with vague objects which our semantics precisely reflects, to the view that the world and language are both perfectly in order, but that the fault lies with our knowledge of the properties of the words we use (epistemicism). In contrast to the exotica to which some philosophers have found themselves driven in an attempt to respond to the sorites puzzles, we undertake a defense of the commonsense view that vague terms are semantically vague. Our strategy is to take a fresh look at the phenomenon of vagueness. Rather than attempting to adjudicate between different extant theories, we begin with certain pre-theoretic intuitions about vague terms, and a default position on classical logic. The aim is to see whether (i) a natural story can be told which will explain the vagueness phenomenon and the puzzling nature of soritical arguments, and, in the course of this, to see whether (ii) there arises any compelling pressure to give up the natural stance. We conclude that there is a simple and natural story to be told, and we tell it, and that there is no good reason to abandon our intuitively compelling starting point. The importance of the strategy lies in its dialectical structure. Not all positions on vagueness are on a par. Some are so incredible that even their defenders think of them as positions of last resort, positions to which we must be driven by the power of philosophical argument. We aim to show that there is no pressure to adopt these incredible positions, obviating the need to respond to them directly. If we are right, semantic vagueness is neither surprising, nor threatening. It provides no reason to suppose that the logic of natural languages is not classical or to give up any independently plausible principle of bivalence. Properly understood, it provides us with a satisfying diagnosis of the sorites argumentation. It would be rash to claim to have any completely novel view about a topic so well worked as vagueness. But we believe that the subject, though ancient, still retains its power to inform and challenge us. In particular, we will argue that taking seriously the central phenomenon of predicate vagueness—the “boundarylessness” of vague predicates—on the commonsense assumption that vagueness is semantic, leads ineluctably to the view that no sentences containing vague expressions (henceforth ‘vague sentences’) are truth-evaluable.
This runs counter to much of the literature on vagueness, which commonly assumes that, though some applications of vague predicates to objects fail to be truth-evaluable, in clear positive and negative cases vague sentences are unproblematically true or false. It is clarity on this, and related points, that removes the puzzles associated with vagueness, and helps us to a satisfying diagnosis of why the sorites arguments both seem compelling and yet so obviously a bit of trickery. We give a proof that semantically vague predicates neither apply nor fail-to-apply to anything, and that consequently it is a mistake to diagnose sorites arguments, as is commonly done, by attempting to locate in them a false premise. Sorites arguments are not sound, but not unsound either. We offer an explanation of their appeal, and defend our position against a variety of worries that might arise about it. The plan of the paper is as follows. We first introduce an important distinction in terms of which we characterize what has gone wrong with vague predicates. We characterize what we believe to be our natural starting point in thinking about the phenomenon of vagueness, from which only a powerful argument should move us, and then trace out the consequences of accepting this starting point. We consider the charge that among the consequences of semantic vagueness are that we must give up classical logic and the principle of bivalence, which has figured prominently in arguments for epistemicism. We argue there are no such consequences of our view: neither the view that the logic of natural languages is classical, nor any plausible principle of bivalence, need be given up. Next, we offer a diagnosis of what has gone wrong in sorites arguments on the basis of our account. We then present an argument to show that our account must be accepted on pain of embracing (in one way or another) the epistemic view of “vagueness”, i.e., of denying that there are any semantically vague terms at all. Next, we discuss some worries that may arise about the intelligibility of our linguistic practices if our account is correct. We argue none of these worries should force us from our intuitive starting point. Finally, we cast a quick glance at other forms of semantic incompleteness.
According to a common conception of legal proof, satisfying a legal burden requires establishing a claim to a numerical threshold. Beyond reasonable doubt, for example, is often glossed as 90% or 95% likelihood given the evidence. Preponderance of evidence is interpreted as meaning at least 50% likelihood given the evidence. In light of problems with the common conception, I propose a new ‘relevant alternatives’ framework for legal standards of proof. Relevant alternatives accounts of knowledge state that a person knows a proposition when their evidence rules out all relevant error possibilities. I adapt this framework to model three legal standards of proof—the preponderance of evidence, clear and convincing evidence, and beyond reasonable doubt standards. I describe virtues of this framework. I argue that, by eschewing numerical thresholds, the relevant alternatives framework avoids problems inherent to rival models. I conclude by articulating aspects of legal normativity and practice illuminated by the relevant alternatives framework.
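To make the contrast concrete, here is a toy sketch (my illustration, not the paper's model) of the two ways of checking a standard of proof described above: a fixed numerical threshold versus ruling out all relevant alternatives. The thresholds and the named alternative are invented for the example.

```python
# Toy contrast between two models of legal standards of proof.
# Entirely illustrative; thresholds and 'alternatives' are made up.

THRESHOLDS = {
    "preponderance": 0.5,          # common numerical gloss
    "clear_and_convincing": 0.75,  # one conventional gloss
    "beyond_reasonable_doubt": 0.9,
}

def threshold_model(standard: str, probability: float) -> bool:
    """Standard is met iff the probability of the claim clears a fixed number."""
    return probability > THRESHOLDS[standard]

def relevant_alternatives_model(relevant_alternatives: set,
                                ruled_out: set) -> bool:
    """Standard is met iff the evidence rules out every relevant error possibility."""
    return relevant_alternatives <= ruled_out

# A bare-statistics case: high probability, but the key alternative
# ('the defendant is one of the innocent minority') is not ruled out.
print(threshold_model("preponderance", 0.99))                    # True
print(relevant_alternatives_model({"innocent_member"}, set()))   # False
```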
An interpretation of Wittgenstein’s much criticized remarks on Gödel’s First Incompleteness Theorem is provided in the light of paraconsistent arithmetic: in taking Gödel’s proof as a paradoxical derivation, Wittgenstein was drawing the consequences of his deliberate rejection of the standard distinction between theory and metatheory. The reasoning behind the proof of the truth of the Gödel sentence is then performed within the formal system itself, which turns out to be inconsistent. It is shown that the features of paraconsistent arithmetics match with some intuitions underlying Wittgenstein’s philosophy of mathematics, such as its strict finitism and the insistence on the decidability of any mathematical question.
I present and discuss three previously unpublished manuscripts written by Bertrand Russell in 1903, not included with similar manuscripts in Volume 4 of his Collected Papers. One is a one-page list of basic principles for his “functional theory” of May 1903, in which Russell partly anticipated the later Lambda Calculus. The next, catalogued under the title “Proof That No Function Takes All Values”, largely explores the status of Cantor’s proof that there is no greatest cardinal number in the variation of the functional theory holding that only some but not all complexes can be analyzed into function and argument. The final manuscript, “Meaning and Denotation”, examines how his pre-1905 distinction between meaning and denotation is to be understood with respect to functions and their arguments. In them, Russell seems to endorse an extensional view of functions not endorsed in other works prior to the 1920s. All three manuscripts illustrate the close connection between his work on the logical paradoxes and his work on the theory of meaning.
I have read many recent discussions of the limits of computation and the universe as computer, hoping to find some comments on the amazing work of polymath physicist and decision theorist David Wolpert, but have not found a single citation, and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008; see arxiv.org) on the limits to inference (computation) that are so general they are independent of the device doing the computation, and even independent of the laws of physics, so they apply across computers, physics, and human behavior. They make use of Cantor's diagonalization, the liar paradox and worldlines to provide what may be the ultimate theorem in Turing Machine Theory, and seemingly provide insights into impossibility, incompleteness, the limits of computation, and the universe as computer, in all possible universes and all beings or mechanisms, generating, among other things, a non-quantum mechanical uncertainty principle and a proof of monotheism.
I have read many recent discussions of the limits of computation and the universe as computer, hoping to find some comments on the amazing work of polymath physicist and decision theorist David Wolpert, but have not found a single citation, and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008; see arxiv.org) on the limits to inference (computation) that are so general they are independent of the device doing the computation, and even independent of the laws of physics, so they apply across computers, physics, and human behavior. They make use of Cantor's diagonalization, the liar paradox and worldlines to provide what may be the ultimate theorem in Turing Machine Theory, and seemingly provide insights into impossibility, incompleteness, the limits of computation, and the universe as computer, in all possible universes and all beings or mechanisms, generating, among other things, a non-quantum mechanical uncertainty principle and a proof of monotheism. There are obvious connections to the classic work of Chaitin, Solomonoff, Kolmogorov and Wittgenstein and to the notion that no program (and thus no device) can generate a sequence (or device) with greater complexity than it possesses. One might say this body of work implies atheism since there cannot be any entity more complex than the physical universe, and from the Wittgensteinian viewpoint ‘more complex’ is meaningless (has no conditions of satisfaction, i.e., truth-maker or test). Even a ‘God’ (i.e., a ‘device’ with limitless time/space and energy) cannot determine whether a given ‘number’ is ‘random’, nor find a certain way to show that a given ‘formula’, ‘theorem’ or ‘sentence’ or ‘device’ (all these being complex language games) is part of a particular ‘system’. Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet. Articles and Reviews 2006-2019’ 2nd ed (2019) and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
This note discusses three issues that Allen and Pardo believe to be especially problematic for a probabilistic interpretation of standards of proof: (1) the subjectivity of probability assignments; (2) the conjunction paradox; and (3) the non-comparative nature of probabilistic standards. I offer a reading of probabilistic standards that avoids these criticisms.
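The conjunction paradox mentioned as issue (2) is easy to state numerically; the worked example below is the standard illustration from this literature, not taken from the note itself.

```latex
% Conjunction paradox: a civil claim with two independent elements A and B,
% each established to probability 0.6, i.e. above the 0.5 preponderance gloss:
\[
  P(A) = P(B) = 0.6, \qquad P(A \wedge B) = 0.6 \times 0.6 = 0.36 < 0.5
\]
% Element by element the standard is met, yet the conjoined claim falls below it.
```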
In the 1951 Gibbs lecture, Gödel asserted his famous dichotomy, where the notion of informal proof is at work. G. Priest developed an argument, grounded on the notion of naïve proof, to the effect that Gödel’s first incompleteness theorem suggests the presence of dialetheias. In this paper, we adopt a plausible ideal notion of naïve proof, in agreement with Gödel’s conception, superseding the criticisms against the usual notion of naïve proof used by real working mathematicians. We explore the connection between Gödel’s theorem and naïve proof so understood, both from a classical and a dialetheic perspective.
The paper is a continuation of another paper published as Part I. Now, the case of “n=3” is inferred as a corollary from the Kochen and Specker theorem (1967): the eventual solutions of Fermat’s equation for “n=3” would correspond to an admissible disjunctive division of a qubit into two absolutely independent parts, contrary to the contextuality of any qubit implied by the Kochen-Specker theorem. Incommensurability (implied by the absence of hidden variables) is considered as dual to quantum contextuality. The relevant mathematical structure is Hilbert arithmetic in a wide sense, in the framework of which Hilbert arithmetic in a narrow sense and the qubit Hilbert space are dual to each other. A few cases involving set theory are possible: (1) only within the case “n=3” and, implicitly, within any next level of “n” in Fermat’s equation; (2) the identification of the case “n=3” and the general case utilizing the axiom of choice rather than the axiom of induction. If the former is the case, the application of set theory and arithmetic can remain disjunctively divided: set theory, “locally”, within any level; and arithmetic, “globally”, to all levels. If the latter is the case, the proof is thoroughly within set theory. Thus, the relevance of Yablo’s paradox to the statement of Fermat’s last theorem is avoided in both cases. The idea of “arithmetic mechanics” is sketched: it might deduce the basic physical dimensions of mechanics (mass, time, distance) from the axioms of arithmetic after a relevant generalization. Furthermore, a future Part III of the paper is suggested: FLT by mediation of Hilbert arithmetic in a wide sense can be considered as another expression of Gleason’s theorem in quantum mechanics: the exclusions of the cases n = 1, 2 in both theorems, as well as the validity for all the rest of the values of “n”, can be unified after the theory of quantum information. The availability (respectively, non-availability) of solutions of Fermat’s equation can be proved as equivalent to the non-availability (respectively, availability) of a single probabilistic measure as to Gleason’s theorem.
The “four-color” theorem seems to be generalizable as follows. The four-letter alphabet is sufficient to encode unambiguously any set of well-orderings, including a geographical map or the “map” of any logic, and thus that of all logics, or the DNA plan of any living being. Then the corresponding maximally generalizing conjecture would state: anything in the universe or mind can be encoded unambiguously by four letters. That admits of formulation as a “four-letter theorem”, and thus one can search for a properly mathematical proof of the statement. It would imply the “four colour theorem”, whose proof many philosophers and mathematicians believe not to be entirely satisfactory, since it is not a “human proof” but one unavoidably mediated by computers, the necessary calculations fundamentally exceeding human capabilities. It is furthermore rather unsatisfactory because it consists in enumerating and proving all cases one by one. Sometimes a more general theorem turns out to be much easier to prove by a general “human” method, with the particular theorem, too difficult to prove directly, implied as a corollary under certain simple conditions. The same approach will be followed as to the four colour theorem, i.e. it is to be deduced more or less trivially from the “four-letter theorem” if the latter is proved. References are only to classical and thus very well-known papers: their complete bibliographic description is omitted.
Georg Cantor's absolute infinity, the paradoxical Burali-Forti class Ω of all ordinals, is a monstrous non-entity for which being called a "class" is an undeserved dignity. This must be the ultimate vexation for mathematical philosophers who hold on to some residual sense of realism in set theory. By careful use of Ω, we can rescue Georg Cantor's 1899 "proof" sketch of the Well-Ordering Theorem (being generous, considering his declining health). We take the contrapositive of Cantor's suggestion and add Zermelo's choice function. This results in a concise and uncomplicated proof of the Well-Ordering Theorem.
Martin Smith has recently proposed, in this journal, a novel and intriguing approach to puzzles and paradoxes in evidence law arising from the evidential standard of the Preponderance of the Evidence. According to Smith, the relation of normic support provides us with an elegant solution to those puzzles. In this paper I develop a counterexample to Smith’s approach and argue that normic support can neither account for our reluctance to base affirmative verdicts on bare statistical evidence nor resolve the pertinent paradoxes. Normic support is, as a consequence, not a successful epistemic anti-luck condition.
This essay is an accessible introduction to the proof paradox in legal epistemology. In 1902 the Supreme Judicial Court of Maine filed an influential legal verdict. The judge claimed that in order to find a defendant culpable, the plaintiff “must adduce evidence other than a majority of chances”. The judge thereby claimed that bare statistical evidence does not suffice for legal proof. In this essay I first motivate the claim that bare statistical evidence does not suffice for legal proof. I then introduce and motivate a knowledge-centred explanation of this fact. The knowledge-centred explanation rests on two premises. The first is that legal proof requires knowledge of culpability. The second is that one cannot attain knowledge that p from bare statistical evidence that p. To motivate the second premise, I suggest that beliefs based on bare statistical evidence fail to be safe—they could easily be wrong—and bare statistical evidence cannot eliminate relevant alternatives. I then cast doubt on the first premise; I argue that legal proof does not require knowledge. I thereby dispute the knowledge-centred explanation of the inadequacy of bare statistical evidence for legal proof. Instead of appealing to the nature of knowledge, I suggest we should seek a more direct explanation by appealing to those more foundational epistemic properties, such as safety or eliminating relevant alternatives.
Recently, the practice of deciding legal cases on purely statistical evidence has been widely criticised. Many feel uncomfortable with finding someone guilty on the basis of bare probabilities, even though the chance of error might be stupendously small. This is an important issue: with the rise of DNA profiling, courts are increasingly faced with purely statistical evidence. A prominent line of argument—endorsed by Blome-Tillmann 2017, Smith 2018, and Littlejohn 2018—rejects the use of such evidence by appealing to epistemic norms that apply to individual inquirers. My aim in this paper is to rehabilitate purely statistical evidence by arguing that, given the broader aims of legal systems, there are scenarios in which relying on such evidence is appropriate. Along the way I explain why popular arguments appealing to individual epistemic norms to reject legal reliance on bare statistics are unconvincing, by showing that courts and individuals face different epistemic predicaments (in short, individuals can hedge when confronted with statistical evidence, whilst legal tribunals cannot). I also correct some misconceptions about legal practice that have found their way into the recent literature.
Over almost a half-century, evidence law scholars and philosophers have contended with what have come to be called the “Proof Paradoxes.” In brief, the following sort of paradox arises: Factfinders in criminal and civil trials are charged with reaching a verdict if the evidence presented meets a particular standard of proof—beyond a reasonable doubt, in criminal cases, and preponderance of the evidence, in civil trials. It seems that purely statistical evidence can suffice for just such a level of certainty in a variety of cases where our intuition is that it would nonetheless be wrong to convict the defendant, or find in favor of the plaintiff, on merely statistical evidence. So, we either have to convict with statistical evidence, in spite of an intuition that this is unsettling, or else explain what (dispositive) deficiency statistical evidence has. Most scholars have tried to justify the resistance to relying on merely statistical evidence: by relying on epistemic deficiencies in this kind of evidence; by relying on court practice; and also by reference to the psychological literature. In fact, I argue, the epistemic deficiencies philosophers and legal scholars allege are suspect. And, I argue, while scholars often discuss unfairness to civil defendants, they ignore a long history of relying on statistical evidence in a variety of civil matters, including employment discrimination, toxic torts, and market share liability cases. Were the dominant arguments in the literature to prevail, it would be extremely difficult for plaintiffs to recover in a variety of cases. The various considerations I advance lead to the conclusion that when it comes to naked statistical evidence, philosophers and legal scholars who argue for its insufficiency have been caught with their pants down.
Argumentations are at the heart of the deductive and the hypothetico-deductive methods, which are involved in attempts to reduce currently open problems to problems already solved. These two methods span the entire spectrum of problem-oriented reasoning from the simplest and most practical to the most complex and most theoretical, thereby uniting all objective thought whether ancient or contemporary, whether humanistic or scientific, whether normative or descriptive, whether concrete or abstract. Analysis, synthesis, evaluation, and function of argumentations are described. Perennial philosophic problems, epistemic and ontic, related to argumentations are put in perspective. So much of what has been regarded as logic is seen to be involved in the study of argumentations that logic may be usefully defined as the systematic study of argumentations, which is virtually identical to the quest of objective understanding of objectivity. KEY WORDS: hypothesis, theorem, argumentation, proof, deduction, premise-conclusion argument, valid, inference, implication, epistemic, ontic, cogent, fallacious, paradox, formal, validation.
The problem analysed in this paper is whether we can gain knowledge by using valid inferences, and how we can explain this process from a model-theoretic perspective. According to the paradox of inference (Cohen & Nagel 1936/1998, 173), it is logically impossible for an inference to be both valid and for its conclusion to possess novelty with respect to the premises. I argue in this paper that valid inference has an epistemic significance, i.e., it can be used by an agent to enlarge his knowledge, and that this significance can be accounted for in model-theoretic terms. I will argue first that the paradox is based on an equivocation, namely, it arises because logical containment, i.e., logical implication, is identified with epistemological containment, i.e., the knowledge of the premises entailing the knowledge of the conclusion. Second, I will argue that a truth-conditional theory of meaning has the necessary resources to explain the epistemic significance of valid inferences. I will explain this epistemic significance starting from Carnap’s semantic theory of meaning and Tarski’s notion of satisfaction. In this way I will counter Prawitz’s (2012b) claim that a truth-conditional theory of meaning is not able to account for the legitimacy of valid inferences, i.e., their epistemic significance.
Legal epistemology has been an area of great philosophical growth since the turn of the century. But recently, a number of philosophers have argued the entire project is misguided, claiming that it relies on an illicit transposition of the norms of individual epistemology to the legal arena. This paper uses these objections as a foil to consider the foundations of legal epistemology, particularly as it applies to the criminal law. The aim is to clarify the fundamental commitments of legal epistemology and suggest a way to vindicate it.
In some cases, there appears to be an asymmetry in the evidential value of statistical and more individualized evidence. For example, while I may accept that Alex is guilty based on eyewitness testimony that is 80% likely to be accurate, it does not seem permissible to do so based on the fact that 80% of a group that Alex is a member of are guilty. In this paper I suggest that rather than reflecting a deep defect in statistical evidence, this asymmetry might arise from a general constraint on rational inquiry. Plausibly the degree of evidential support needed to justify taking a proposition to be true depends on the stakes of error. While relying on statistical evidence plausibly raises the stakes by introducing new kinds of risk to members of the reference class, paradigmatically ‘individualized’ evidence (evidence tracing back to Alex’s voluntary behavior) can lower the stakes. The net result explains the apparent evidential asymmetry without positing a deep difference in the brute justificatory power of different types of evidence.
Curry's paradox for "if... then..." concerns the paradoxical features of sentences of the form "If this very sentence is true, then 2+2=5". Standard inference principles lead us to the conclusion that such conditionals have true consequents: so, for example, 2+2=5 after all. There has been a lot of technical work done on formal options for blocking Curry paradoxes while only compromising a little on the various central principles of logic and meaning that are under threat. Once we have a sense of the technical options, though, a philosophical choice remains. When dealing with puzzles in the logic of conditionals, a natural place to turn is independently motivated semantic theories of the behaviour of "if... then...". This paper argues that the closest-worlds approach outlined in Nolan 1997 offers a philosophically satisfying reason to deny conditional proof and so block the paradoxical Curry reasoning, and can give the verdict that standard Curry conditionals are false, along with related "contraction conditionals".
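To see why conditional proof is the natural culprit, here is the standard Curry derivation in outline; this is my reconstruction of the familiar reasoning, not a quotation from the paper.

```latex
% Curry's paradox, with C the sentence "if C is true then 2+2=5".
% Write T(C) for "C is true"; the T-schema gives  T(C) <-> (T(C) -> 2+2=5).
\begin{align*}
  &1.\ T(C)                  &&\text{assumption, for conditional proof}\\
  &2.\ T(C) \to (2+2=5)      &&\text{from 1 by the T-schema}\\
  &3.\ 2+2=5                 &&\text{from 1, 2 by modus ponens}\\
  &4.\ T(C) \to (2+2=5)      &&\text{conditional proof, discharging 1}\\
  &5.\ T(C)                  &&\text{from 4 by the T-schema}\\
  &6.\ 2+2=5                 &&\text{from 4, 5 by modus ponens}
\end{align*}
% Rejecting conditional proof (step 4), as the closest-worlds approach does,
% halts the derivation; note the assumption at 1 is used twice (contraction).
```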