This paper offers a reconstruction of Wittgenstein's discussion of inductive proofs. An "algebraic version" of these indirect proofs is offered and contrasted with the usual ones, in which an infinite sequence of modus ponens steps is projected.
I set up two axiomatic theories of inductive support within the framework of Kolmogorovian probability theory. I call these theories ‘Popperian theories of inductive support’ because I think that their specific axioms express the core meaning of the word ‘inductive support’ as used by Popper (and, presumably, by many others, including some inductivists). As is to be expected from Popperian theories of inductive support, the main theorem of each of them is an anti-induction theorem, the stronger of the two saying, in fact, that the relation of inductive support is identical with the empty relation. It seems to me that an axiomatic treatment of the idea(s) of inductive support within orthodox probability theory could be worthwhile for at least three reasons. Firstly, an axiomatic treatment demands that the builder of a theory of inductive support state clearly, in the form of specific axioms, what he means by ‘inductive support’. Perhaps the discussion of the new anti-induction proofs of Karl Popper and David Miller would have been more fruitful if they had given an explicit definition of what inductive support is or should be. Secondly, an axiomatic treatment of the idea(s) of inductive support within Kolmogorovian probability theory might be accommodating to those philosophers who do not completely trust Popperian probability theory because it has theorems which orthodox Kolmogorovian probability theory lacks; a transparent derivation of anti-induction theorems within a Kolmogorovian frame might bring additional persuasive power to the original anti-induction proofs of Popper and Miller, developed within the framework of Popperian probability theory. Thirdly, one of the main advantages of the axiomatic method is that it facilitates criticism of its products: the axiomatic theories. On the one hand, it is much easier than usual to check whether those statements which have been distinguished as theorems really are theorems of the theory under examination.
On the other hand, after we have convinced ourselves that these statements are indeed theorems, we can take a critical look at the axioms—especially if we have a negative attitude towards one of the theorems. Since anti-induction theorems are not popular at all, the adequacy of some of the axioms they are derived from will certainly be doubted. If doubt should lead to a search for alternative axioms, sheer negative attitudes might develop into constructive criticism and even lead to new discoveries.

I proceed as follows. In section 1, I start with a small but sufficiently strong axiomatic theory of deductive dependence, closely following Popper and Miller (1987). In section 2, I extend that starting theory to an elementary Kolmogorovian theory of unconditional probability, which I extend, in section 3, to an elementary Kolmogorovian theory of conditional probability, which in its turn gets extended, in section 4, to a standard theory of probabilistic dependence, which also gets extended, in section 5, to a standard theory of probabilistic support, the main theorem of which will be a theorem about the incompatibility of probabilistic support and deductive independence. In section 6, I extend the theory of probabilistic support to a weak Popperian theory of inductive support, which I extend, in section 7, to a strong Popperian theory of inductive support. In section 8, I reconsider Popper's anti-inductivist theses in the light of the anti-induction theorems. I conclude the paper with a short discussion of possible objections to our anti-induction theorems, paying special attention to the topic of deductive relevance, which has so far been neglected in the discussion of the anti-induction proofs of Popper and Miller.
Gödel's incompleteness theorems establish the stunning result that mathematics cannot be fully formalized and, further, that any formal system containing a modicum of number or set theory cannot establish its own consistency. Wilfried Sieg and Clinton Field, in their paper Automated Search for Gödel's Proofs, presented automated proofs of Gödel's theorems at an abstract axiomatic level; they used an appropriate expansion of the strategic considerations that guide the search of the automated theorem prover AProS. The representability conditions that allow the syntactic notions of the metalanguage to be represented inside the object language were taken as axioms in the automated proofs. The concrete task I am taking on in this project is to extend the search by formally verifying these conditions. Using a formal metatheory defined in the language of binary trees, the syntactic objects of the metatheory lend themselves naturally to a direct encoding in Zermelo's theory of sets. The metatheoretic notions can then be inductively defined and shown to be representable in the object-theory using appropriate inductive arguments. Formal verification of the representability conditions is the first step towards an automated proof thereof which, in turn, brings the automated verification of Gödel's theorems one step closer to completion.
Interpreters of Hume on causation consider that an advantage of the ‘quasi-realist’ reading is that it does not commit him to scepticism or to an error theory about causal reasoning. It is unique to quasi-realism that it maintains this positive epistemic result together with a rejection of metaphysical realism about causation: the quasi-realist supplies an appropriate semantic theory in order to justify the practice of talking ‘as if’ there were causal powers in the world. In this paper, I problematise the quasi-realist reading of Hume on causation by showing how quasi-realism does not speak to inductive scepticism. I also offer evidence that Hume takes inductive scepticism to result from his theory of causation, and that his scepticism is tied to his rejection of metaphysical causal realism.
This paper formulates some paradoxes of inductive knowledge. Two responses in particular are explored: According to the first sort of theory, one is able to know in advance that certain observations will not be made unless a law exists. According to the other, this sort of knowledge is not available until after the observations have been made. Certain natural assumptions, such as the idea that the observations are just as informative as each other, the idea that they are independent, and that they increase your knowledge monotonically (among others), are given precise formulations. Some surprising consequences of these assumptions are drawn, and their ramifications for the two theories examined. Finally, a simple model of inductive knowledge is offered, and independently derived from other principles concerning the interaction of knowledge and counterfactuals.
The definitions of ‘deduction’ found in virtually every introductory logic textbook would encourage us to believe that the inductive/deductive distinction is a distinction among kinds of arguments and that the extension of ‘deduction’ is a determinate class of arguments. In this paper, we argue that this approach is mistaken. Specifically, we defend the claim that typical definitions of ‘deduction’ operative in attempts to get at the induction/deduction distinction are either too narrow or insufficiently precise. We conclude by presenting a deflationary understanding of the inductive/deductive distinction; in our view, its content is nothing over and above the answers to two fundamental sorts of questions central to critical thinking.
An argument, different from the Newman objection, against the view that the cognitive content of a theory is exhausted by its Ramsey sentence is reviewed. The crux of the argument is that Ramsification may ruin inductive systematization between theory and observation. The argument also has some implications concerning the issue of underdetermination.
Many theorists have proposed that we can use the principle of indifference to defeat the inductive sceptic. But any such theorist must confront the objection that different ways of applying the principle of indifference lead to incompatible probability assignments. Huemer offers the explanatory priority proviso as a strategy for overcoming this objection. With this proposal, Huemer claims that we can defend induction in a way that is not question-begging against the sceptic. But in this article, I argue that the opposite is true: if anything, Huemer’s use of the principle of indifference supports the rationality of inductive scepticism.
The aim of this paper is to apply inductive logic to a field that, presumably, Carnap never expected: legal causation. Legal causation is expressible in the form of singular causal statements; but it is distinguished from the customary concept of scientific causation, because it is subjective. We try to express this subjectivity within the system of inductive logic. Further, by semantic complement, we compensate for a defect found in our application, namely the impossibility of two-place predicates (for causal relationships) in inductive logic.
In this paper, I discuss how Newton’s inductive argument of the Principia can be defended against criticisms levelled against it by Duhem, Popper and myself. I argue that Duhem’s and Popper’s criticisms can be countered, but mine cannot. It requires that we reconsider, not just Newton’s inductive argument in the Principia, but also the nature of science more generally. The methods of science, whether conceived along inductivist or hypothetico-deductivist lines, make implicit metaphysical presuppositions which rigour requires we make explicit within science so that they can be critically assessed, alternatives being developed and assessed, in the hope that they can be improved. Despite claiming to derive his law of gravitation by induction from phenomena without recourse to hypotheses, Newton does nevertheless acknowledge in the Principia that his rules of reasoning make metaphysical presuppositions. To this extent, Newton has a more enlightened view of scientific method than most 20th and 21st century scientists and historians and philosophers of science.
This paper contains five observations concerning the intended meaning of the intuitionistic logical constants: (1) if the explanations of this meaning are to be based on a non-decidable concept, that concept should not be that of 'proof'; (2) Kreisel's explanations using extra clauses can be significantly simplified; (3) the impredicativity of the definition of → can be easily and safely ameliorated; (4) the definition of → in terms of 'proofs from premises' results in a loss of the inductive character of the definitions of ∨ and ∃; and (5) the same occurs with the definition of ∀ in terms of 'proofs with free variables'.
Although we often see references to Carnap’s inductive logic even in the modern literature, its confusing style has seemingly long obstructed its correct understanding. So instead of Carnap, in this paper, I devote myself to its necessary and sufficient commentary. In the beginning part (Sections 2-5), I explain why Carnap began the study of inductive logic and how he related it to our thought on probability (Sections 2-4). Therein, I trace Carnap’s thought back to Wittgenstein’s Tractatus as well (Section 5). In the succeeding sections, I attempt the simplest exhibition of Carnap’s earlier system, where his original thought was thoroughly provided. For this purpose, minor concepts to which researchers have not paid attention are highlighted, for example, m-function (Section 8), in-correlation (Section 10), C-correlate (Section 10), statistical distribution (Section 12), and fitting sequence (Section 17). The climax of this paper is the proof of theorem (56). Through this theorem, we will be able to overview Carnap’s whole system.
This paper takes on two tasks. The first is to elaborate on the relationship of inductive logic with decision theory, to which the later Carnap planned to apply his system (§§1-7); this is the surveying side of this article. The other is to reveal a property of our predictions of the future, namely subjectivity (§§8-11); this is its philosophical aspect. They are both discussed under the name of belief in causation. Belief in causation is a kind of “degree of belief” formed about the causal effect of an action. As such, it admits of analysis by inductive logic.
We extend the framework of Inductive Logic to Second Order languages and introduce Wilmers' Principle, a rational principle for probability functions on Second Order languages. We derive a representation theorem for functions satisfying this principle and investigate its relationship to the first order principles of Regularity and Super Regularity.
An extract from Williams' The Ground of Induction (1947): "The sober amateur who takes the time to follow recent philosophical discussion will hardly resist the impression that much of it, in its dread of superstition and dogmatic reaction, has been oriented purposely toward skepticism: that a conclusion is admired in proportion as it is skeptical; that a jejune argument for skepticism will be admitted where a scrupulous defense of knowledge is derided or ignored; that an affirmative theory is a mere annoyance to be stamped down as quickly as possible to a normal level of denial and defeat. It is an age which most admires the man who, as somebody has said, 'has a difficulty for every solution'. Whether or not this judgment is fair, however, it is safe to say, with Whitehead, that 'the theory of induction is the despair of philosophy - and yet all our activities are based upon it'. [A.N. Whitehead, Science and the Modern World (New York, 1925), p. 35] So prodigious a theoretical contretemps cannot remain a tempest in the professors' teapot. The news that no foundation is discoverable for the procedures of empirical intelligence, and still more the proclaimed discovery that there is no foundation, and still more the complacency which recommends that we reconcile ourselves to the lack, condemn the problem as a 'pseudoproblem', and proceed by irrational faith or pragmatic postulate, will slowly shatter civilized life and thought, to a degree which will make the modernist's loss of confidence in Christian supernaturalism, so often cited as the ultimate in spiritual cataclysms, seem a minor vicissitude. The demand that rational man adjust himself to a somewhat bleaker universe than he once hoped for is only one large and picturesque instance of the sort of re-orientation which inductive intelligence, in its very nature, continually imposes, and well within the proved capacities of human reason and good-will."
My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I, first, specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design problems from current work in history of engineering and technology studies. I then model Carnap’s practice based on those lessons and uncover ways in which Carnap’s technical work in inductive logic takes some of these lessons on board. This shows ways in which Carnap’s philosophical project subtly changes right through his late work on induction, providing an important corrective to interpretations that ignore the work on inductive logic. Specifically, I show that paying attention to the historical details of Carnap’s attempt to apply his work in inductive logic to decision theory and theoretical statistics in the 1950s and 1960s helps us understand how Carnap develops and rearticulates the philosophical point of the practical/theoretical distinction in his late work, offering thus a new interpretation of Carnap’s technical work within the broader context of philosophy of science and analytical philosophy in general.
Proofs of God in Early Modern Europe offers a fascinating window into early modern efforts to prove God’s existence. Assembled here are twenty-two key texts, many translated into English for the first time, which illustrate the variety of arguments that philosophers of the seventeenth and eighteenth centuries offered for God. These selections feature traditional proofs—such as various ontological, cosmological, and design arguments—but also introduce more exotic proofs, such as the argument from eternal truths, the argument from universal aseity, and the argument ex consensu gentium. Drawn from the work of eighteen philosophers, this book includes both canonical figures (such as Descartes, Spinoza, Newton, Leibniz, Locke, and Berkeley) and noncanonical thinkers (such as Norris, Fontenelle, Voltaire, Wolff, Du Châtelet, and Maupertuis).

Lloyd Strickland provides fresh translations of all selections not originally written in English and updates the spelling and grammar of those that were. Each selection is prefaced by a lengthy headnote, giving a biographical account of its author, an analysis of the main argument(s), and important details about the historical context. Strickland’s introductory essay provides further context, focusing on the various reasons that led so many thinkers of early modernity to develop proofs of God’s existence.

Proofs of God is perfect for both students and scholars of early modern philosophy and philosophy of religion.
Many philosophers think that common sense knowledge survives sophisticated philosophical proofs against it. Recently, however, Bryan Frances (forthcoming) has advanced a philosophical proof that he thinks common sense can’t survive. Exploiting philosophical paradoxes like the Sorites, Frances attempts to show how common sense leads to paradox and therefore that common sense methodology is unstable. In this paper, we show how Frances’s proof fails and then issue Frances a dilemma.
Many philosophers are sceptical about the power of philosophy to refute commonsensical claims. They look at the famous attempts and judge them inconclusive. I prove that even if those famous attempts are failures, there are alternative successful philosophical proofs against commonsensical claims. After presenting the proofs I briefly comment on their significance.
This chapter presents a typology of the different kinds of inductive inferences we can draw from our evidence, based on the explanatory relationship between evidence and conclusion. Drawing on the literature on graphical models of explanation, I divide inductive inferences into (a) downwards inferences, which proceed from cause to effect, (b) upwards inferences, which proceed from effect to cause, and (c) sideways inferences, which proceed first from effect to cause and then from that cause to an additional effect. I further distinguish between direct and indirect forms of downwards and upwards inferences. I then show how we can subsume canonical forms of inductive inference mentioned in the literature, such as inference to the best explanation, enumerative induction, and analogical inference, under this typology. Along the way, I explore connections with probability and confirmation, epistemic defeat, the relation between abduction and enumerative induction, the compatibility of IBE and Bayesianism, and theories of epistemic justification.
I argue that Graham Oppy’s attempt to redefend his charge that all modal theistic arguments “must be question-begging” is unsuccessful. Oppy’s attempt to show that theism and modal concretism are compatible is not only tangential for his purposes, it is marred by a misunderstanding of theism, and vulnerable to a counterexample that actually demonstrates incompatibility. Moreover, the notion of begging the question employed by Oppy against the theist is seen to be far too permissive.
I would like to assume that Reichenbach's distinction between Justification and Discovery lives on, and to seek arguments in his texts that would justify its relevance in this field. The persuasive force of these arguments transcends the contingent circumstances apart from which their genesis and local transmission cannot be made understandable. I shall begin by characterizing the context distinction as employed by Reichenbach in "Experience and Prediction" to differentiate between epistemology and science (1). Following Thomas Nickles and Kevin T. Kelly, one can distinguish two meanings of the context distinction in Reichenbach's work. One meaning, which is primarily to be found in the earlier writings, conceives of scientific discoveries as potential objects of epistemological justification. The other meaning, typical of the later writings, removes scientific discoveries from the possible domain of epistemology. The genesis of both meanings, which demonstrates the complexity of the relationships obtaining between epistemology and science, can be made understandable by appealing to the historical context (2). Both meanings present Reichenbach with the task of establishing the autonomy of epistemology through the justification of induction. Finally, I shall expound this justification and address some of its elements of rationality characterizing philosophy of science (3).
Though not focused on the history of classical logic, this book discusses and quotes central passages on its origins and development, namely from a philosophical perspective. Though not a book in mathematical logic, it takes formal logic from an essentially mathematical perspective. Biased towards a computational approach, with SAT and VAL as its backbone, this is an introduction to logic that covers essential aspects of the three branches of logic, to wit, philosophical, mathematical, and computational.
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century.

For my money, Gentzen’s natural deduction and Church’s lambda calculus are on a par with Einstein’s relativity and Dirac’s quantum physics for elegance and insight. And the maths are a lot simpler. I want to show you the essence of these ideas. I’ll need a few symbols, but not too many, and I’ll explain as I go along.

To simplify, I’ll present the story as we understand it now, with some asides to fill in the history. First, I’ll introduce Gentzen’s natural deduction, a formalism for proofs. Next, I’ll introduce Church’s lambda calculus, a formalism for programs. Then I’ll explain why proofs and programs are really the same thing, and how simplifying a proof corresponds to executing a program. Finally, I’ll conclude with a look at how these principles are being applied to design a new generation of programming languages, particularly mobile code for the Internet.
In The Rationality of Induction, David Stove presents an argument against scepticism about inductive inference—where, for Stove, inductive inference is inference from the observed to the unobserved. Let U be a finite collection of n particulars such that each member of U either has property F-ness or does not. If s is a natural number less than n, define an s-fold sample of U as s observations of distinct members of U, each either having F-ness or not having F-ness. Let pU denote the proportion of members of U that are Fs and, if S is an s-fold sample of U, let pS denote the proportion of members of S that are Fs. Call S representative if and only if |pS – pU| < 0.01. Stove's argument against inductive scepticism is built on the following statistical fact:

As s gets larger, the proportion of all possible s-fold samples of U that are representative gets closer to 1 (regardless of the size of U or of the value of pU).

In this essay I subject Stove's argument to thorough scrutiny. I show that the argument – as it stands – is incomplete, and I illuminate the issues involved in trying to fill the gaps. Along the way I demonstrate that one of the commonest objections to Stove's argument misses the point.
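The statistical fact Stove relies on can be checked numerically. The following sketch is not from the essay: the population size, the proportion of Fs, and the Monte Carlo estimation are my own illustrative choices. It estimates, for a population U of 10,000 particulars of which 30% are Fs, the fraction of random s-fold samples that are representative in Stove's sense (|pS – pU| < 0.01):

```python
import random

def fraction_representative(n, p_u, s, trials=2000, tol=0.01, seed=0):
    """Monte Carlo estimate of the proportion of s-fold samples of a
    population of size n (proportion p_u of Fs) that are representative,
    i.e. whose sample proportion of Fs is within tol of p_u."""
    rng = random.Random(seed)
    # The population: int(n * p_u) members with F-ness, the rest without.
    population = [True] * int(n * p_u) + [False] * (n - int(n * p_u))
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, s)  # s distinct members, no replacement
        if abs(sum(sample) / s - p_u) < tol:
            hits += 1
    return hits / trials

# As s grows, the estimated fraction of representative samples
# climbs towards 1, regardless of n or p_u:
for s in (100, 1000, 5000):
    print(s, fraction_representative(10000, 0.3, s))
```

The monotone climb towards 1 is exactly the hypergeometric concentration that Stove's argument invokes; the simulation merely illustrates it for one choice of n and pU.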
Mathematicians distinguish between proofs that explain their results and those that merely prove. This paper explores the nature of explanatory proofs, their role in mathematical practice, and some of the reasons why philosophers should care about them. Among the questions addressed are the following: what kinds of proofs are generally explanatory (or not)? What makes a proof explanatory? Do all mathematical explanations involve proof in an essential way? Are there really such things as explanatory proofs, and if so, how do they relate to the sorts of explanation encountered in philosophy of science and metaphysics?
When modeling informal proofs like that of Euclid’s Elements using a sound logical system, we go from proofs seen as somewhat unrigorous – even having gaps to be filled – to rigorous proofs. However, metalogic grounds the soundness of our logical system, and proofs in metalogic are not like formal proofs and look suspiciously like the informal proofs. This brings about what I am calling here the groundedness problem: how can we decide with certainty that our metalogical proofs are rigorous and sustain our logical system? In this paper, I will set out this problem. I will not try to solve it here.
I prove the nonexistence of gods. The proof is based on three axioms: Ockham’s razor (OR), that religiosity is endogenous in humans, and that there are no miracles. The OR is formulated operationally, to remove improper postulates, such that it yields not only a plausible argument but truth. The validity of the second and the third axiom is established empirically by inductive reasoning relying on a thorough analysis of the psychiatric literature and skeptical publications. With these axioms I prove that gods are not necessary for our universe. Applying OR yields that gods do not exist. The implications of this article are enormous. Mankind’s understanding of the world is elevated to a higher level, to a unified view of the world as nature, with mankind a part of it.
This paper discusses ethical issues surrounding Integrated Assessment Models (IAMs) of the economic effects of climate change, and how climate economists acting as policy advisors ought to represent the uncertain possibility of catastrophe. Some climate economists, especially Martin Weitzman, have argued for a precautionary approach where avoiding catastrophe should structure climate economists’ welfare analysis. This paper details ethical arguments that justify this approach, showing how Weitzman’s “fat tail” probabilities of climate catastrophe pose ethical problems for widely used IAMs. The main claim is that economists who ignore or downplay catastrophic risks in their representations of uncertainty likely fall afoul of ethical constraints on scientists acting as policy advisors. Such scientists have duties to honestly articulate uncertainties and manage (some) inductive risks, or the risks of being wrong in different ways.
The paper explores the handling of singular analogy in quantitative inductive logics. It concentrates on two analogical patterns coextensive with the traditional argument from analogy: perfect and imperfect analogy. Each is examined within Carnap’s λ-continuum, Carnap’s and Stegmüller’s λ-η continuum, Carnap’s Basic System, Hintikka’s α-λ continuum, and Hintikka’s and Niiniluoto’s K-dimensional system. It is argued that these logics handle perfect analogies with ease, and that imperfect analogies, while unmanageable in some logics, are quite manageable in others. The paper concludes with a modification of the K-dimensional system that synthesizes independent proposals by Kuipers and Niiniluoto.
Considered in light of the reader's expectation of a thoroughgoing criticism of the pretensions of the rational psychologist, and of the wealth of discussions available in the broader 18th-century context, which includes a variety of proofs that do not explicitly turn on the identification of the soul as a simple substance, Kant's discussion of immortality in the Paralogisms falls lamentably short. However, outside of the Paralogisms (and the published works generally), Kant had much more to say about the arguments for the soul's immortality, as he devoted considerable time to the topic throughout his career in his lectures on metaphysics. In fact, as I show in this paper, the student lecture notes prove to be an indispensable supplement to the treatment in the Paralogisms, not only for illuminating Kant's criticism of the rational psychologists' views on the immortality of the soul, but also in reconciling this criticism with Kant's own positive claims regarding certain theoretical proofs of immortality.
The traditional view of evidence in mathematics is that evidence is just proof and proof is just derivation. There are good reasons for thinking that this view should be rejected: it misrepresents both historical and current mathematical practice. Nonetheless, evidence, proof, and derivation are closely intertwined. This paper seeks to tease these concepts apart. It emphasizes the role of argumentation as a context shared by evidence, proofs, and derivations. The utility of argumentation theory, in general, and argumentation schemes, in particular, as a methodology for the study of mathematical practice is thereby demonstrated. Argumentation schemes represent an almost untapped resource for mathematics education. Notably, they provide a consistent treatment of rigorous and non-rigorous argumentation, thereby working to exhibit the continuity of reasoning in mathematics with reasoning in other areas. Moreover, since argumentation schemes are a comparatively mature methodology, there is a substantial body of existing work to draw upon, including some increasingly sophisticated software tools. Such tools have significant potential for the analysis and evaluation of mathematical argumentation. The first four sections of the paper address the relationships of evidence to proof, proof to derivation, argument to proof, and argument to evidence, respectively. The final section directly addresses some of the educational implications of an argumentation scheme account of mathematical reasoning.
Review of John Stillwell, Reverse Mathematics: Proofs from the Inside Out. Princeton, NJ: Princeton University Press, 2018, pp. 200. ISBN 978-0-69-117717-5 (hbk), 978-0-69-119641-1 (pbk), 978-1-40-088903-7 (e-book).
The Four-Colour Theorem (4CT) proof, presented to the mathematical community in a pair of papers by Appel and Haken in the late 1970s, provoked a series of philosophical debates. Many conceptual points of these disputes still require some elucidation. After a brief presentation of the main ideas of Appel and Haken’s procedure for the proof and a reconstruction of Thomas Tymoczko’s argument for the novelty of 4CT’s proof, we shall formulate some questions regarding the connections between the points raised by Tymoczko and some Wittgensteinian topics in the philosophy of mathematics, such as the importance of surveyability as a criterion for distinguishing mathematical proofs from empirical experiments. Our aim is to show that the “characteristic Wittgensteinian invention” (Mühlhölzer 2006) – the strong distinction between proofs and experiments – can shed some light on the conceptual confusions surrounding the Four-Colour Theorem.
This paper discusses critically what simulation models of the evolution of cooperation can possibly prove by examining Axelrod’s “Evolution of Cooperation” (1984) and the modeling tradition it has inspired. Hardly any of the many simulation models in this tradition have been applicable empirically. Axelrod’s role model suggested a research design that seemingly allowed one to draw general conclusions from simulation models even if the mechanisms that drive the simulation could not be identified empirically. But this research design was fundamentally flawed. At best such simulations can claim to prove logical possibilities, i.e. they prove that certain phenomena are possible as the consequence of the modeling assumptions built into the simulation, but not that they are possible or can be expected to occur in reality. I suggest several requirements under which proofs of logical possibilities can nevertheless be considered useful. Sadly, most Axelrod-style simulations do not meet these requirements. It would be better not to use this kind of simulation at all.
In this chapter we introduce concepts for analyzing proofs, and for analyzing undergraduate and beginning graduate mathematics students’ proving abilities. We discuss how coordination of these two analyses can be used to improve students’ ability to construct proofs. For this purpose, we need a richer framework for keeping track of students’ progress than the everyday one used by mathematicians. We need to know more than that a particular student can, or cannot, prove theorems by induction or contradiction, or can, or cannot, prove certain theorems in beginning set theory or analysis. It is more useful to describe a student’s work in terms of a finer-grained framework that includes various smaller abilities that contribute to proving and that can be learned in differing ways and at differing periods of a student’s development.
Corcoran reviews Boute’s 2013 paper “How to calculate proofs”. There are tricky aspects to classifying occurrences of variables: is an occurrence of ‘x’ free as in ‘x + 1’, is it bound as in ‘{x: x = 1}’, or is it orthographic as in ‘extra’? The trickiness is compounded by failure to employ conventions to separate use of expressions from their mention. The variable occurrence is free in the term ‘x + 1’ but it is orthographic in that term’s quotes name ‘‘{x: x = 1}’’. The term has no quotes, the term’s name has one set of quotes, and the name of the term’s name has two sets of quotes. The trickiness is further compounded by failure to explicitly distinguish a variable’s values from its substituents. The variable ranges over its values, but its occurrences are replaced by occurrences of its substituents. In arithmetic the values are numbers, not numerals, but the substituents are numerals, not numbers. See https://www.academia.edu/s/1eddee0c62?source=link Raymond Boute tries to criticize Daniel Velleman for mistakes in this area. However, Corcoran finds mistakes in Boute’s handling of the material. The reader is invited to find mistakes in Corcoran’s handling of this tricky material. The paper and the review treat other issues as well. Acknowledgements: Raymond Boute, Joaquin Miller, Daniel Velleman, George Weaver, and others.
This paper considers logics which are formally dual to intuitionistic logic in order to investigate a co-constructive logic for proofs and refutations. This is philosophically motivated by a set of problems regarding the nature of constructive truth and its relation to falsity. It is well known both that intuitionism cannot deal constructively with negative information, and that defining falsity by means of intuitionistic negation leads, under widely held assumptions, to a justification of bivalence. For example, we do not want to equate falsity with the non-existence of a proof, since this would render a statement such as “pi is transcendental” false prior to 1882. In addition, the intuitionist account of negation as shorthand for the derivation of absurdity is inadequate, particularly outside of purely mathematical contexts. To deal with these issues, I investigate the dual of intuitionistic logic, co-intuitionistic logic, as a logic of refutation, alongside the intuitionistic logic of proofs. Direct proof and refutation are dual to each other, and are constructive, whilst there also exist syntactic, weak negations within both logics. In this respect, the logic of refutation is weakly paraconsistent in the sense that it allows for statements for which neither they, nor their negation, are refuted. I provide a proof theory for the co-constructive logic, a formal dualizing map between the logics, and a Kripke-style semantics. This is given an intuitive philosophical rendering in a re-interpretation of Kolmogorov’s logic of problems.
Transfinite ordinal numbers enter mathematical practice mainly via the method of definition by transfinite recursion. Outside of axiomatic set theory, there is a significant mathematical tradition of works recasting proofs by transfinite recursion in other terms, mostly with the intention of eliminating the ordinals from the proofs. Leaving aside the different motivations behind each specific case, we investigate the mathematics of this practice of proof transformation and address the problem of formalising the philosophical notion of elimination which characterises this move.
This paper discusses critically what simulation models of the evolution of cooperation can possibly prove by examining Axelrod’s “Evolution of Cooperation” and the modeling tradition it has inspired. Hardly any of the many simulation models of the evolution of cooperation in this tradition have been applicable empirically. Axelrod’s role model suggested a research design that seemingly allowed one to draw general conclusions from simulation models even if the mechanisms that drive the simulation could not be identified empirically. But this research design was fundamentally flawed, because it is not possible to draw general empirical conclusions from theoretical simulations. At best such simulations can claim to prove logical possibilities, i.e. they prove that certain phenomena are possible as the consequence of the modeling assumptions built into the simulation, but not that they are possible or can be expected to occur in reality. I suggest several requirements under which proofs of logical possibilities can nevertheless be considered useful. Sadly, most Axelrod-style simulations do not meet these requirements. I contrast this with Schelling’s neighborhood segregation model, the core mechanism of which can be retraced empirically.
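The Axelrod-style research design criticised in the two abstracts above can be sketched in a few lines of Python. The strategies and payoff matrix below are the standard ones from Axelrod's tournament setting; the code is an illustrative reconstruction of a round-robin tournament, not Axelrod's own program, and a run of it establishes at most a logical possibility in the abstracts' sense (an outcome consistent with the built-in assumptions), not an empirical claim:

```python
import itertools

# Two canonical strategies: each receives the opponent's past moves and
# returns "C" (cooperate) or "D" (defect).
def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Standard prisoner's dilemma payoffs, read as (my move, their move) -> my score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_match(s1, s2, rounds=200):
    """Iterated prisoner's dilemma between two strategies; returns both totals."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)  # each strategy sees the opponent's history
        score1 += PAYOFF[(m1, m2)]
        score2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return score1, score2

def tournament(strategies, rounds=200):
    """Round-robin tournament: every strategy meets every other once."""
    totals = {name: 0 for name in strategies}
    for (n1, s1), (n2, s2) in itertools.combinations(strategies.items(), 2):
        sc1, sc2 = play_match(s1, s2, rounds)
        totals[n1] += sc1
        totals[n2] += sc2
    return totals

totals = tournament({"tit_for_tat": tit_for_tat, "always_defect": always_defect})
```

Nothing in such a run identifies any empirical mechanism; changing the payoff table or the strategy pool changes the "result", which is exactly the abstracts' point about conclusions being consequences of the modeling assumptions.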
The problem of algorithmic structuring of proofs in the sequent calculi LK and LKB (LK where blocks of quantifiers can be introduced in one step) is investigated, where a distinction is made between linear proofs and proofs in tree form. In this framework, structuring coincides with the introduction of cuts into a proof. The algorithmic solvability of this problem can be reduced to the question of k-l-compressibility: "Given a proof of length k, and l ≤ k: Is there a proof of length ≤ l?" When restricted to proofs with universal or existential cuts, this problem is shown to be (1) undecidable for linear or tree-like LK-proofs (corresponding to the undecidability of second-order unification), (2) undecidable for linear LKB-proofs (corresponding to the undecidability of semi-unification), and (3) decidable for tree-like LKB-proofs (corresponding to a decidable subproblem of semi-unification).
It is shown how the schema of equivalence can be used to obtain short proofs of tautologies A, where the depth of proofs is linear in the number of variables in A.
Medical diagnosis has been traditionally recognized as a privileged field of application for so-called probabilistic induction. Consequently, Bayes' theorem, which mathematically formalizes this form of inference, has been seen as the most adequate tool for quantifying the uncertainty surrounding the diagnosis by providing probabilities of different diagnostic hypotheses, given symptomatic or laboratory data. On the other hand, it has also been remarked that differential diagnosis rather works by exclusion, e.g. by modus tollens, i.e. deductively. By drawing on a case history, this paper aims at clarifying some points on the issue. Namely: 1) Medical diagnosis does not represent, strictly speaking, a form of induction, but a type of what in Peircean terms should be called ‘abduction’ (identifying a case as the token of a specific type); 2) in performing the single diagnostic steps, however, different inferential methods of both inductive and deductive nature are used: modus tollens, the hypothetical-deductive method, abduction; 3) Bayes’ theorem is a probabilized form of abduction which uses mathematics in order to justify the degree of confidence which can be entertained in a hypothesis given the available evidence; 4) although theoretically irreconcilable, in practice both the hypothetical-deductive method and the Bayesian one are used in the same diagnosis with no serious compromise to its correctness; 5) Medical diagnosis, especially differential diagnosis, also uses a kind of “probabilistic modus tollens”, in that signs (symptoms or laboratory data) are taken as strong evidence for a given hypothesis not to be true: the focus is not on hypothesis confirmation, but on its refutation [Pr(¬H | E1, E2, …, En)]. Especially at the beginning of a complicated case, odds are between the hypothesis that is potentially being excluded and a vague “other”.
This procedure has the advantage of providing a clue as to what evidence to look for, and of eventually reducing the set of candidate hypotheses if conclusive negative evidence is found. 6) Bayes’ theorem in the hypothesis-confirmation form can more faithfully, although idealistically, represent the medical diagnosis when the diagnostic itinerary has come to a reduced set of plausible hypotheses after a process of progressive elimination of candidate hypotheses; 7) Bayes’ theorem is, however, indispensable in the case of litigation in order to assess a doctor’s responsibility for medical error by taking into account the weight of the evidence at his disposal.
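The "probabilistic modus tollens" of point 5 above amounts to reading Bayes' theorem for the weight of evidence against a hypothesis rather than for it. A minimal sketch, in which the hypothesis labels, priors, and likelihoods are invented for illustration and not taken from the paper's case history:

```python
def posterior(prior, likelihoods):
    """Bayes' theorem over mutually exclusive, exhaustive hypotheses.

    prior: dict hypothesis -> P(H)
    likelihoods: dict hypothesis -> P(E | H) for the observed evidence E
    Returns dict hypothesis -> P(H | E).
    """
    joint = {h: prior[h] * likelihoods[h] for h in prior}
    evidence = sum(joint.values())  # P(E), by the law of total probability
    return {h: joint[h] / evidence for h in joint}

# Hypothetical numbers: two candidate diagnoses plus the vague catch-all
# "other" that the abstract mentions, updated on one piece of evidence E.
prior = {"H1": 0.3, "H2": 0.1, "other": 0.6}
likelihoods = {"H1": 0.05, "H2": 0.8, "other": 0.2}  # P(E | H)

post = posterior(prior, likelihoods)
# The refutational reading: E carries most of its weight against H1,
# i.e. P(not-H1 | E) is large, so H1 is a candidate for exclusion.
refutation_weight = 1 - post["H1"]
```

With these invented numbers the evidence barely fits H1, so the posterior mass shifts away from it; the diagnostic use of the computation is the exclusion step, not the confirmation of any single hypothesis.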
Hegel endorsed proofs of the existence of God, and also believed God to be a person. Some of his interpreters ignore these apparently retrograde tendencies, shunning them in favor of the philosopher's more forward-looking contributions. Others embrace Hegel's religious thought, but attempt to recast his views as less reactionary than they appear to be. Robert Williams's latest monograph belongs to a third category: he argues that Hegel's positions in philosophical theology are central to his philosophy writ large. The book is diligently researched, and marshals an impressive amount of textual evidence concerning Hegel's view of the proofs, his theory of personhood, and his views on religious community.
Inductive Logic is a ‘thematic compilation’ by Avi Sion. It collects in one volume many (though not all) of the essays that he has written on this subject over a period of some 23 years, all of which demonstrate the possibility and conditions of validity of human knowledge, and the utility and reliability of human cognitive means when properly used, contrary to the skeptical assumptions that are nowadays fashionable.
Resolving the main problem of quantum mechanics, namely how a quantum leap and a smooth motion can be uniformly described, also resolves the problem of how a distribution of reliable data and a sequence of deductive conclusions can be uniformly described, by means of a relevant wave function “Ψdata”.
We introduce an effective translation from proofs in the display calculus to proofs in the labelled calculus in the context of tense logics. We identify the labelled calculus proofs in the image of this translation as those built from labelled sequents whose underlying directed graph possesses certain properties. For the basic normal tense logic Kt, the image is shown to be the set of all proofs in the labelled calculus G3Kt.