The traditional view of evidence in mathematics is that evidence is just proof and proof is just derivation. There are good reasons for thinking that this view should be rejected: it misrepresents both historical and current mathematical practice. Nonetheless, evidence, proof, and derivation are closely intertwined. This paper seeks to tease these concepts apart. It emphasizes the role of argumentation as a context shared by evidence, proofs, and derivations. The utility of argumentation theory, in general, and argumentation schemes, in particular, as a methodology for the study of mathematical practice is thereby demonstrated. Argumentation schemes represent an almost untapped resource for mathematics education. Notably, they provide a consistent treatment of rigorous and non-rigorous argumentation, thereby working to exhibit the continuity of reasoning in mathematics with reasoning in other areas. Moreover, since argumentation schemes are a comparatively mature methodology, there is a substantial body of existing work to draw upon, including some increasingly sophisticated software tools. Such tools have significant potential for the analysis and evaluation of mathematical argumentation. The first four sections of the paper address the relationships of evidence to proof, proof to derivation, argument to proof, and argument to evidence, respectively. The final section directly addresses some of the educational implications of an argumentation scheme account of mathematical reasoning.
Smith argues that, unlike other forms of evidence, naked statistical evidence fails to satisfy normic support. This is his solution to the puzzles of statistical evidence in legal proof. This paper focuses on Smith’s claim that DNA evidence in cold-hit cases does not satisfy normic support. I argue that if this claim is correct, virtually no other form of evidence used at trial can satisfy normic support. This is troublesome. I discuss a few ways in which Smith can respond.
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard--Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
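To make the contrast between the two rules concrete, consider the classic discursive-dilemma profile from the judgment-aggregation literature. The sketch below is illustrative only (the profile and the majority rule are textbook material, not this paper's formal framework): three judges decide the premises p and q, and the conclusion is the conjunction p ∧ q.

```python
# Illustrative sketch: premise-based vs. conclusion-based voting on the
# classic discursive-dilemma profile (not code from the paper).

judges = [
    {"p": True,  "q": True},   # judge 1 accepts both premises
    {"p": True,  "q": False},  # judge 2 accepts only p
    {"p": False, "q": True},   # judge 3 accepts only q
]

def majority(votes):
    """Strict majority over a list of booleans."""
    return sum(votes) > len(votes) / 2

# Premise-based voting: aggregate each premise by majority, then
# derive the conclusion logically from the collective premises.
premise_based = majority([j["p"] for j in judges]) and \
                majority([j["q"] for j in judges])

# Conclusion-based voting: each judge derives the conclusion privately;
# only the conclusion votes are aggregated.
conclusion_based = majority([j["p"] and j["q"] for j in judges])

print(premise_based)     # True:  collective premises entail p & q
print(conclusion_based)  # False: a majority rejects p & q
```

The two rules deliver opposite collective verdicts on the same profile, which is the wedge that strategic voters can exploit.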
This paper discusses proof-theoretic semantics, the project of specifying the meanings of the logical constants in terms of rules of inference governing them. I concentrate on Michael Dummett’s and Dag Prawitz’ philosophical motivations and give precise characterisations of the crucial notions of harmony and stability, placed in the context of proving normalisation results in systems of natural deduction. I point out a problem for defining the meaning of negation in this framework and prospects for an account of the meanings of modal operators in terms of rules of inference.
Prawitz conjectured that proof-theoretic validity offers a semantics for intuitionistic logic. This conjecture has recently been proven false by Piecha and Schroeder-Heister. This article resolves one of the questions left open by this recent result by showing the extensional alignment of proof-theoretic validity and general inquisitive logic. General inquisitive logic is a generalisation of inquisitive semantics, a uniform semantics for questions and assertions. The paper further defines a notion of quasi-proof-theoretic validity by restricting proof-theoretic validity to allow double negation elimination for atomic formulas and proves the extensional alignment of quasi-proof-theoretic validity and inquisitive logic.
A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: ‘Is it ever appropriate to base a legal verdict on statistical evidence alone?’ Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple sources of incriminating statistical evidence. As we conjoin together different types of statistical evidence, it becomes increasingly incredible to suppose that a positive verdict would be impermissible. I suggest that none of the dominant views in the literature can easily accommodate such cases, and close by offering a diagnosis of my own.
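A toy calculation makes the conjunction point vivid; the numbers and the independence idealisation below are mine, not the paper's:

```latex
% Toy arithmetic, with made-up figures: suppose three independent items of
% statistical evidence each leave a 1-in-100 probability of innocence.
% Under the independence idealisation, the conjoined error probability is
\[
  P(\text{innocent} \mid E_1 \wedge E_2 \wedge E_3)
  \;\approx\; 0.01^3 \;=\; 10^{-6},
\]
% far lower than the error rates tolerated for eyewitness testimony,
% which is why refusing a positive verdict here looks incredible.
```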
This essay addresses one of the open questions of proof-theoretic semantics: how to understand the semantic values of atomic sentences. I embed a revised version of the explanatory proof system of Millson and Straßer (2019) into the proof-theoretic semantics of Francez (2015) and show how to specify (part of) the intended interpretation of atomic sentences on the basis of their occurrences in the premises and conclusions of inferences to and from best explanations.
Kurt Gödel wrote (1964, p. 272), after he had read Husserl, that the notion of objectivity raises a question: “the question of the objective existence of the objects of mathematical intuition (which, incidentally, is an exact replica of the question of the objective existence of the outer world)”. This “exact replica” brings to mind the close analogy Husserl saw between our intuition of essences in Wesensschau and of physical objects in perception. What is it like to experience a mathematical proving process? What is the ontological status of a mathematical proof? Can computer assisted provers output a proof? Taking a naturalized world account, I will assess the relationship between mathematics, the physical world and consciousness by introducing a significant conceptual distinction between proving and proof. I will propose that proving is a phenomenological conscious experience. This experience involves a combination of what Kurt Gödel called intuition, and what Husserl called intentionality. In contrast, proof is a function of that process — the mathematical phenomenon — that objectively self-presents a property in the world, and that results from a spatiotemporal unity being subject to the exact laws of nature. In this essay, I apply phenomenology to mathematical proving as a performance of consciousness, that is, a lived experience expressed and formalized in language, in which there is the possibility of formulating intersubjectively shareable meanings.
Proofs of God in Early Modern Europe offers a fascinating window into early modern efforts to prove God’s existence. Assembled here are twenty-two key texts, many translated into English for the first time, which illustrate the variety of arguments that philosophers of the seventeenth and eighteenth centuries offered for God. These selections feature traditional proofs—such as various ontological, cosmological, and design arguments—but also introduce more exotic proofs, such as the argument from eternal truths, the argument from universal aseity, and the argument ex consensu gentium. Drawn from the work of eighteen philosophers, this book includes both canonical figures (such as Descartes, Spinoza, Newton, Leibniz, Locke, and Berkeley) and noncanonical thinkers (such as Norris, Fontenelle, Voltaire, Wolff, Du Châtelet, and Maupertuis). Lloyd Strickland provides fresh translations of all selections not originally written in English and updates the spelling and grammar of those that were. Each selection is prefaced by a lengthy headnote, giving a biographical account of its author, an analysis of the main argument(s), and important details about the historical context. Strickland’s introductory essay provides further context, focusing on the various reasons that led so many thinkers of early modernity to develop proofs of God’s existence. Proofs of God is perfect for both students and scholars of early modern philosophy and philosophy of religion.
In this paper, I propose that applying the methods of data science to “the problem of whether mathematical explanations occur within mathematics itself” (Mancosu 2018) might be a fruitful way to shed new light on the problem. By carefully selecting indicator words for explanation and justification, and then systematically searching for these indicators in databases of scholarly works in mathematics, we can get an idea of how mathematicians use these terms in mathematical practice and with what frequency. The results of this empirical study suggest that mathematical explanations do occur in research articles published in mathematics journals, as indicated by the occurrence of explanation indicators. When compared with the use of justification indicators, however, the data suggest that justifications occur much more frequently than explanations in scholarly mathematical practice. The results also suggest that justificatory proofs occur much more frequently than explanatory proofs, thus suggesting that proof may be playing a larger justificatory role than an explanatory role in scholarly mathematical practice.
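A minimal sketch of the kind of indicator search the paper describes might look as follows; the indicator lists, corpus, and counts here are hypothetical stand-ins, not the study's actual search terms or data.

```python
# Hypothetical sketch of an indicator-word search over a corpus of
# mathematics articles (illustrative terms, not the study's own lists).
import re
from collections import Counter

EXPLANATION_INDICATORS = ["explain", "explains", "explanation", "explanatory"]
JUSTIFICATION_INDICATORS = ["justify", "justifies", "justification", "justificatory"]

def count_indicators(text, indicators):
    """Count whole-word occurrences of any indicator in the text."""
    text = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(word) + r"\b", text))
               for word in indicators)

# Stand-in corpus; in the study this would be article full texts
# retrieved from databases of scholarly mathematics journals.
corpus = ["This proof explains why the identity holds...",
          "The lemma justifies the final step of the argument..."]

totals = Counter()
for article in corpus:
    totals["explanation"] += count_indicators(article, EXPLANATION_INDICATORS)
    totals["justification"] += count_indicators(article, JUSTIFICATION_INDICATORS)

print(totals)  # relative frequencies are then compared across indicator types
```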
The proof theory of many-valued systems has not been investigated to an extent comparable to the work done on the axiomatizability of many-valued logics. Proof theory requires appropriate formalisms, such as sequent calculus, natural deduction, and tableaux for classical (and intuitionistic) logic. One particular method for systematically obtaining calculi for all finite-valued logics was invented independently by several researchers, with slight variations in design and presentation. The main aim of this report is to develop the proof theory of finite-valued first order logics in a general way, and to present some of the more important results in this area. Systems covered are the resolution calculus, sequent calculus, tableaux, and natural deduction. This report is actually a template, from which all results can be specialized to particular logics.
In this dissertation, we shall investigate whether Tennant's criterion for paradoxicality (TCP) can be a correct criterion for genuine paradoxes and whether the requirement of a normal derivation (RND) can be a proof-theoretic solution to the paradoxes. Tennant's criterion has two types of counterexamples. One raises the problem of overgeneration, in which TCP renders a non-paradoxical derivation paradoxical. The other raises the problem of undergeneration, in which TCP makes a paradoxical derivation non-paradoxical. Chapter 2 deals with the problem of undergeneration and Chapter 3 concerns the problem of overgeneration. Chapter 2 argues that Tennant's diagnosis of the counterexample which applies the CR-rule and causes the undergeneration problem is not correct, and presents a solution to the problem of undergeneration. Chapter 3 argues that Tennant's diagnosis of the counterexample raising the overgeneration problem is wrong and provides a solution to the problem. Finally, Chapter 4 addresses what should be explicated in order for RND to be a proof-theoretic solution to the paradoxes.
Researchers often pursue proof of concept research, but criteria for evaluating such research remain poorly specified. This article proposes a general framework for proof of concept research that k...
The paper briefly surveys the sentential proof-theoretic semantics for a fragment of English. Then, appealing to a version of Frege’s context-principle (specified to fit type-logical grammar), a method is presented for deriving proof-theoretic meanings for sub-sentential phrases, down to lexical units (words). The sentential meaning is decomposed according to the function-argument structure as determined by the type-logical grammar. In doing so, the paper presents a novel proof-theoretic interpretation of simple types, replacing Montague’s model-theoretic type interpretation (in arbitrary Henkin models). The domains of derivations are collections of derivations in the associated “dedicated” natural-deduction proof-system, and functions therein (with no appeal to models, truth-values and elements of a domain). The compositionality of the semantics is analyzed.
In mathematics, any form of probabilistic proof obtained through the application of a probabilistic method is not considered a legitimate way of gaining mathematical knowledge. In a series of papers, Don Fallis has defended the thesis that there are no epistemic reasons justifying mathematicians’ rejection of probabilistic proofs. This paper identifies such an epistemic reason. More specifically, it is argued here that if one adopts a conception of mathematical knowledge in which an epistemic subject can know a mathematical proposition based solely on a probabilistic proof, one is then forced to admit that such an epistemic subject can know several lottery propositions based solely on probabilistic evidence. Insofar as knowledge of lottery propositions on the basis of probabilistic evidence alone is denied by the vast majority of epistemologists, it is concluded that this constitutes an epistemic reason for rejecting probabilistic proofs as a means of acquiring mathematical knowledge.
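For concreteness, a canonical example of the kind of probabilistic method at issue (the paper itself does not single one out) is Miller-Rabin primality testing: each round can only shrink, never eliminate, the chance that a composite number slips through, so the verdict is lottery-like statistical evidence rather than a derivation.

```python
# Miller-Rabin probabilistic primality test: a standard example of a
# "probabilistic proof" (illustrative here; not drawn from the paper).
import random

def miller_rabin(n, rounds=40):
    """Return False if n is certainly composite; True if n is probably
    prime, with error probability below 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: n is certainly composite
    return True  # no witness in `rounds` trials: n is probably prime

print(miller_rabin(2**61 - 1))  # True: a Mersenne prime, but only "probably"
```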
Many philosophers are sceptical about the power of philosophy to refute commonsensical claims. They look at the famous attempts and judge them inconclusive. I prove that, even if those famous attempts are failures, there are alternative successful philosophical proofs against commonsensical claims. After presenting the proofs I briefly comment on their significance.
Recent years have seen fresh impetus brought to debates about the proper role of statistical evidence in the law. Recent work largely centres on a set of puzzles known as the ‘proof paradox’. While these puzzles may initially seem academic, they have important ramifications for the law: raising key conceptual questions about legal proof, and practical questions about DNA evidence. This article introduces the proof paradox, why we should care about it, and new work attempting to resolve it.
In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests standards of proof should not be interpreted numerically. These ‘proof paradoxes’ illustrate that purely statistical evidence can warrant high credence in a disputed fact without satisfying the relevant legal standard. In this essay I evaluate three influential attempts to explain why merely statistical evidence cannot satisfy legal standards.
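For reference, one textbook way such figures are derived (a standard decision-theoretic derivation, not this essay's own argument) sets the conviction threshold by expected utility:

```latex
% Let p be the credence that the defendant is guilty, with utilities
% c_g, c_i for convicting the guilty/innocent and a_g, a_i for acquitting
% them. Conviction maximises expected utility exactly when
\[
  p\,c_g + (1-p)\,c_i \;>\; p\,a_g + (1-p)\,a_i
  \quad\Longleftrightarrow\quad
  \frac{p}{1-p} \;>\; \frac{a_i - c_i}{c_g - a_g}.
\]
% On the Blackstonian assumption that convicting an innocent person is ten
% times as bad as acquitting a guilty one, the threshold is p > 10/11,
% roughly 0.91 -- squarely within the 90 to 95% range mentioned above.
```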
The Four-Colour Theorem (4CT) proof, presented to the mathematical community in a pair of papers by Appel and Haken in the late 1970s, provoked a series of philosophical debates. Many conceptual points of these disputes still require some elucidation. After a brief presentation of the main ideas of Appel and Haken’s procedure for the proof and a reconstruction of Thomas Tymoczko’s argument for the novelty of 4CT’s proof, we shall formulate some questions regarding the connections between the points raised by Tymoczko and some Wittgensteinian topics in the philosophy of mathematics, such as the importance of surveyability as a criterion for distinguishing mathematical proofs from empirical experiments. Our aim is to show that the “characteristic Wittgensteinian invention” (Mühlhölzer 2006) – the strong distinction between proofs and experiments – can shed some light on the conceptual confusions surrounding the Four-Colour Theorem.
This paper presents a proof that Buss's S^2_2 can prove the consistency of a fragment of Cook and Urquhart's PV from which induction has been removed but substitution has been retained. This result improves Beckmann's result, which proves the consistency of such a system without substitution in the bounded arithmetic S^1_2. Our proof relies on the notion of "computation" of the terms of PV. In our work, we first prove that, in the system under consideration, if an equation is proved and either its left- or right-hand side is computed, then there is a corresponding computation for its right- or left-hand side, respectively. By carefully computing the bound of the size of the computation, the proof of this theorem inside a bounded arithmetic is obtained, from which the consistency of the system is readily proven. This result apparently implies the separation of bounded arithmetic because Buss and Ignjatović stated that it is not possible to prove the consistency of a fragment of PV without induction but with substitution in Buss's S^1_2. However, their proof actually shows that it is not possible to prove the consistency of the system which is obtained by the addition of propositional logic and other axioms to a system such as ours. On the other hand, the system that we have considered is strictly equational, which is a property on which our proof relies.
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective. Against this, I argue that all truly deflationist philosophical theories lead to arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a reference of the pre-formal mathematics, and the expansion of Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. From the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleenean logic will also be shown to fail in a similar fashion.
How is the burden of proof to be distributed among individuals who are involved in resolving a particular issue? Under what conditions should the burden of proof be distributed unevenly? We distinguish attitudinal from dialectical burdens and argue that these questions should be answered differently, depending on which is in play. One has an attitudinal burden with respect to some proposition when one is required to possess sufficient evidence for it. One has a dialectical burden with respect to some proposition when one is required to provide supporting arguments for it as part of a deliberative process. We show that the attitudinal burden with respect to certain propositions is unevenly distributed in some deliberative contexts, but in all of these contexts, establishing the degree of support for the proposition is merely a means to some other deliberative end, such as action guidance, or persuasion. By contrast, uneven distributions of the dialectical burden regularly further the aims of deliberation, even in contexts where the quest for truth is the sole deliberative aim, rather than merely a means to some different deliberative end. We argue that our distinction between these two burdens resolves puzzles about unevenness that have been raised in the literature.
The impossibility results in judgement aggregation show a clash between fair aggregation procedures and rational collective outcomes. In this paper, we are interested in analysing the notion of rational outcome by proposing a proof-theoretical understanding of collective rationality. In particular, we use the analysis of proofs and inferences provided by linear logic in order to define a fine-grained notion of group reasoning that allows for studying collective rationality with respect to a number of logics. We analyse the well-known paradoxes in judgement aggregation and we pinpoint the reasoning steps that trigger the inconsistencies. Moreover, we extend the map of possibility and impossibility results in judgement aggregation by discussing the case of substructural logics. In particular, we show that there exist fragments of linear logic for which general possibility results can be obtained.
A textbook on proof in mathematics, inspired by an Aristotelian point of view on mathematics and proof. The book expounds the traditional view of proof as deduction of theorems from evident premises via obviously valid steps. It deals with the proof of "all" statements, "some" statements, multiple quantifiers and mathematical induction.
It is shown how the schema of equivalence can be used to obtain short proofs of tautologies A, where the depth of proofs is linear in the number of variables in A.
By the middle of the seventeenth century we find that algebra is able to offer proofs in its own right. That is, by that time algebraic argument had achieved the status of proof. How did this transformation come about?
We introduce translations between display calculus proofs and labeled calculus proofs in the context of tense logics. First, we show that every derivation in the display calculus for the minimal tense logic Kt extended with general path axioms can be effectively transformed into a derivation in the corresponding labeled calculus. Concerning the converse translation, we show that for Kt extended with path axioms, every derivation in the corresponding labeled calculus can be put into a special form that is translatable to a derivation in the associated display calculus. A key insight in this converse translation is a canonical representation of display sequents as labeled polytrees. Labeled polytrees, which represent equivalence classes of display sequents modulo display postulates, also shed light on related correspondence results for tense logics.
Mathematicians distinguish between proofs that explain their results and those that merely prove. This paper explores the nature of explanatory proofs, their role in mathematical practice, and some of the reasons why philosophers should care about them. Among the questions addressed are the following: what kinds of proofs are generally explanatory (or not)? What makes a proof explanatory? Do all mathematical explanations involve proof in an essential way? Are there really such things as explanatory proofs, and if so, how do they relate to the sorts of explanation encountered in philosophy of science and metaphysics?
“Proof of concept” is a phrase frequently used in descriptions of research sought in program announcements, in experimental studies, and in the marketing of new technologies. It is often coupled with either a short definition or none at all, its meaning assumed to be fully understood. This is problematic. As a phrase with potential implications for research and technology, its assumed meaning requires some analysis to avoid it becoming a descriptive category that refers to all things scientifically exciting. I provide a short analysis of proof of concept research and offer an example of it within synthetic biology. I suggest that not only are there activities that circumscribe new epistemological categories but there are also associated normative ethical categories or principles linked to the research. I examine these and provide an outline for an alternative ethical account to describe these activities that I refer to as “extended agency ethics”. This view is used to explain how the type of research described as proof of concept also provides an attendant proof of principle that is the result of decision-making that extends across practitioners, their tools, techniques, and the problem solving activities of other research groups.
This work provides proof-search algorithms and automated counter-model extraction for a class of STIT logics. With this, we answer an open problem concerning syntactic decision procedures and cut-free calculi for STIT logics. A new class of cut-free complete labelled sequent calculi G3Ldm^m_n, for multi-agent STIT with at most n-many choices, is introduced. We refine the calculi G3Ldm^m_n through the use of propagation rules and demonstrate the admissibility of their structural rules, resulting in auxiliary calculi Ldm^m_nL. In the single-agent case, we show that the refined calculi Ldm^m_nL derive theorems within a restricted class of (forestlike) sequents, allowing us to provide proof-search algorithms that decide single-agent STIT logics. We prove that the proof-search algorithms are correct and terminate.
A proof P of a theorem T is transferable when a typical expert can become convinced of T solely on the basis of their prior knowledge and the information contained in P. Easwaran has argued that transferability is a constraint on acceptable proof. Meanwhile, a proof P is fixable when it’s possible for other experts to correct any mistakes P contains without having to develop significant new mathematics. Habgood-Coote and Tanswell have observed that some acceptable proofs are both fixable and in need of fixing, in the sense that they contain nontrivial mistakes. The claim that acceptable proofs must be transferable seems quite plausible. The claim that some acceptable proofs need fixing seems plausible too. Unfortunately, these attractive suggestions stand in tension with one another. I argue that the transferability requirement is the problem. Acceptable proofs need only satisfy a weaker requirement I call “corrigibility”. I explain why, despite appearances, the corrigibility standard is preferable to stricter alternatives.
Kant's A-Edition objective deduction is naturally (and has traditionally been) divided into two arguments: an "argument from above" and one that proceeds "von unten auf." This would suggest a picture of Kant's procedure in the objective deduction as first descending and ascending the same ladder, the better, perhaps, to test its durability or to thoroughly convince the reader of its soundness. There are obvious obstacles to such a reading, however; and in this chapter I will argue that the arguments from above and below constitute different, albeit importantly inter-related, proofs. Rather than drawing on the differences in their premises, however, I will highlight what I take to be the different concerns addressed and, correspondingly, the distinct conclusions reached by each. In particular, I will show that both arguments can be understood to address distinct specters, with the argument from above addressing an internal concern generated by Kant’s own transcendental idealism, and the argument from below seeking to dispel a more traditional, broadly Humean challenge to the understanding’s role in experience. These distinct concerns also imply that these arguments yield distinct conclusions, though I will show that they are in fact complementary.
Introduction to the Scientific Proof of the Natural Moral Law. This paper proves that Aquinas has a means of demonstrating and deriving both moral goodness and the natural moral law from human nature alone. Aquinas scientifically proves the existence of the natural moral law as the natural rule of human operations from human nature alone. The distinction between moral goodness and transcendental goodness is affirmed. This provides the intellectual tools to refute G.E. Moore's (Principia Ethica) attack against the natural law as committing a "naturalistic fallacy". This article proves that Moore instead commits the fallacy of equivocation between moral goodness and transcendental goodness in his very assertion of a "naturalistic fallacy" by the proponents of the natural moral law. In the process, the new deontological/Kantian theory of natural law articulated by John Finnis, Robert George, and Germain Grisez is shown to be false both historically and philosophically. Ethical naturalism is affirmed as a result.
We introduce an effective translation from proofs in the display calculus to proofs in the labelled calculus in the context of tense logics. We identify the labelled calculus proofs in the image of this translation as those built from labelled sequents whose underlying directed graph possesses certain properties. For the basic normal tense logic Kt, the image is shown to be the set of all proofs in the labelled calculus G3Kt.
Should we use the same standard of proof to adjudicate guilt for murder and petty theft? Why not tailor the standard of proof to the crime? These relatively neglected questions cut to the heart of central issues in the philosophy of law. This paper scrutinises whether we ought to use the same standard for all criminal cases, in contrast with a flexible approach that uses different standards for different crimes. I reject consequentialist arguments for a radically flexible standard of proof, instead defending a modestly flexible approach on non-consequentialist grounds. The system I defend is one on which we should impose a higher standard of proof for crimes that attract more severe punishments. This proposal, although apparently revisionary, accords with a plausible theory concerning the epistemology of legal judgments and the role they play in society.
An alternative consensus algorithm to both proof-of-work and proof-of-stake, proof-of-loss addresses all their deficiencies, including the lack of an organic block size limit, the risks of mining centralization, and the "nothing at stake" problem.
Why can testimony alone be enough for findings of liability? Why can't statistical evidence alone be enough? These questions underpin the “Proof Paradox” (Redmayne 2008, Enoch et al. 2012). Many epistemologists have attempted to explain this paradox from a purely epistemic perspective. I call this the “Epistemic Project”. In this paper, I take a step back from this recent trend. Stemming from considerations about the nature and role of standards of proof, I define three requirements that any successful account in line with the Epistemic Project should meet. I then consider three recent epistemic accounts on which the standard is met when the evidence rules out modal risk (Pritchard 2018), normic risk (Ebert et al. 2020), or relevant alternatives (Gardiner 2019, 2020). I argue that none of these accounts meets all the requirements. Finally, I offer reasons to be pessimistic about the prospects of having a successful epistemic explanation of the paradox. I suggest the discussion on the proof paradox would benefit from undergoing a ‘value-turn’.
I explore, from a proof-theoretic perspective, the hierarchy of classical and paraconsistent logics introduced by Barrio, Pailos and Szmuc in (Journal of Philosophical Logic, 49, 93-120, 2021). First, I provide sequent rules and axioms for all the logics in the hierarchy, for all inferential levels, and establish soundness and completeness results. Second, I show how to extend those systems with a corresponding hierarchy of validity predicates, each one of which is meant to capture “validity” at a different inferential level. Then, I point out two potential philosophical implications of these results. (i) Since the logics in the hierarchy differ from one another on the rules, I argue that each such logic maintains its own distinct identity (contrary to arguments like the one given by Dicher and Paoli in 2019). (ii) Each validity predicate need not capture “validity” at more than one metainferential level. Hence, there are reasons to deny the thesis (put forward in Barrio, E., Rosenblatt, L. & Tajer, D. (Synthese, 2016)) that the validity predicate introduced by Beall and Murzi in (Journal of Philosophy, 110(3), 143–165, 2013) has to express facts not only about what follows from what, but also about the metarules, etc.
According to Jim Pryor’s dogmatism, if you have an experience as if P, you acquire immediate prima facie justification for believing P. Pryor contends that dogmatism validates Moore’s infamous proof of a material world. Against Pryor, I argue that if dogmatism is true, Moore’s proof turns out to be non-transmissive of justification according to one of the senses of non-transmissivity defined by Crispin Wright. This type of non-transmissivity doesn’t deprive dogmatism of its apparent antisceptical bite.
This paper contends that Stoic logic (i.e. Stoic analysis) deserves more attention from contemporary logicians. It sets out how, compared with contemporary propositional calculi, Stoic analysis is closest to methods of backward proof search for Gentzen-inspired substructural sequent logics, as they have been developed in logic programming and structural proof theory, and produces its proof search calculus in tree form. It shows how multiple similarities to Gentzen sequent systems combine with intriguing dissimilarities that may enrich contemporary discussion. Much of Stoic logic appears surprisingly modern: a recursively formulated syntax with some truth-functional propositional operators; analogues to cut rules, axiom schemata and Gentzen’s negation-introduction rules; an implicit variable-sharing principle and deliberate rejection of Thinning and avoidance of paradoxes of implication. These latter features mark the system out as a relevance logic, where the absence of duals for its left and right introduction rules puts it in the vicinity of McCall’s connexive logic. Methodologically, the choice of meticulously formulated meta-logical rules in lieu of axiom and inference schemata absorbs some structural rules and results in an economical, precise and elegant system that values decidability over completeness.
This paper considers proof-theoretic semantics for necessity within Dummett's and Prawitz's framework. Inspired by a system of Pfenning's and Davies's, the language of intuitionist logic is extended by a higher order operator which captures a notion of validity. A notion of relative necessity is defined in terms of it, which expresses a necessary connection between the assumptions and the conclusion of a deduction.
This paper presents a sequent calculus and a dual domain semantics for a theory of definite descriptions in which these expressions are formalised in the context of complete sentences by a binary quantifier I. I forms a formula from two formulas. Ix[F, G] means ‘The F is G’. This approach has the advantage of incorporating scope distinctions directly into the notation. Cut elimination is proved for a system of classical positive free logic with I and it is shown to be sound and complete for the semantics. The system has a number of novel features and is briefly compared to the usual approach of formalising ‘the F’ by a term forming operator. It does not coincide with Hintikka’s and Lambert’s preferred theories, but the divergence is well-motivated and attractive.
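To illustrate the scope point (schematic letters as in the abstract; the example sentence is mine), the notation separates the two readings of ‘The F is not G’:

```latex
% Internal negation: the F exists and is non-G.
\[
  Ix[F,\, \neg G]
\]
% External negation: it is not the case that the F is G. In a positive
% free logic the two readings come apart when there is no (unique) F.
\[
  \neg\, Ix[F,\, G]
\]
```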
Restall set forth a "consecution" calculus in his "An Introduction to Substructural Logics." This is a natural deduction type sequent calculus where the structural rules play an important role. This paper looks at different ways of extending Restall's calculus. It is shown that Restall's weak soundness and completeness result with regard to a Hilbert calculus can be extended to a strong one so as to encompass what Restall calls proofs from assumptions. It is also shown how to extend the calculus so as to validate the metainferential rule of reasoning by cases, as well as certain theory-dependent rules.
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century. For my money, Gentzen’s natural deduction and Church’s lambda calculus are on a par with Einstein’s relativity and Dirac’s quantum physics for elegance and insight. And the maths are a lot simpler. I want to show you the essence of these ideas. I’ll need a few symbols, but not too many, and I’ll explain as I go along. To simplify, I’ll present the story as we understand it now, with some asides to fill in the history. First, I’ll introduce Gentzen’s natural deduction, a formalism for proofs. Next, I’ll introduce Church’s lambda calculus, a formalism for programs. Then I’ll explain why proofs and programs are really the same thing, and how simplifying a proof corresponds to executing a program. Finally, I’ll conclude with a look at how these principles are being applied to design a new generation of programming languages, particularly mobile code for the Internet.
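A minimal illustration of the proofs-as-programs idea, in Python rather than the lambda-calculus notation the essay goes on to use (the names and types here are mine): under the Curry-Howard reading, a conjunction is a pair type, so a proof of A ∧ B → B ∧ A is a function that swaps a pair.

```python
# Curry-Howard in miniature (illustrative; the essay works in Gentzen's
# natural deduction and Church's lambda calculus rather than Python).
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def swap(pair: Tuple[A, B]) -> Tuple[B, A]:
    """A proof term for A & B -> B & A: conjunction read as a pair type."""
    a, b = pair
    return (b, a)

# "Simplifying a proof corresponds to executing a program": applying the
# proof term to a concrete pair is a normalisation (execution) step.
print(swap((1, "two")))  # ('two', 1)
```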
I explain why model theory is unsatisfactory as a semantic theory and has drawbacks as a tool for proofs on logic systems. I then motivate and develop an alternative, truth-valuational substitutional approach (TVS), and prove with it the soundness and completeness of the first order Predicate Calculus with identity and of Modal Propositional Calculus. Modal logic is developed without recourse to possible worlds. Along the way I answer a variety of difficulties that have been raised against TVS and show that, as applied to several central questions, model-theoretic semantics can be considered TVS in disguise. The conclusion is that the truth-valuational substitutional approach is an adequate tool for many of our logic inquiries, conceptually preferable over model-theoretic semantics. Another conclusion is that formal logic is independent of semantics, apart from its use of the notion of truth, but that even with respect to it its assumptions are minimal.
Most human actions are complex, but some of them are basic. Which are these? In this paper, I address this question by invoking slips, a common kind of mistake. The proposal is this: an action is basic if and only if it is not possible to slip in performing it. The argument discusses some well-established results from the psychology of language production in the context of a philosophical theory of action. In the end, the proposed criterion is applied to discuss some well-known theories of basic actions.
In this article, I argue that it is impossible to complete infinitely many tasks in a finite time. A key premise in my argument is that the only way to get to 0 tasks remaining is from 1 task remaining, when tasks are done 1-by-1. I suggest that the only way to deny this premise is by begging the question, that is, by assuming that supertasks are possible. I go on to present one reason why this conclusion (that supertasks are impossible) is important, namely that it implies a new verdict on a decision puzzle propounded by Jeffrey Barrett and Frank Arntzenius.