This paper studies the order of medical advances throughout history. It investigates changing human beliefs concerning the causes of disease, the development of modern surgery, improved methods of diagnosis, and the use of medical statistics. Human beliefs about the causes of disease followed a logical progression from supernatural causes, such as the wrath of the gods, to natural causes, involving imbalances within the human body. The invention of the microscope led to the discovery of microorganisms, which were eventually identified as the cause of infectious diseases. Identification of the particular microorganism causing a disease led to immunization against the disease. Modern surgery developed only after the end of the taboo against human dissection, the discovery of modern anesthesia, and the discovery of the need for antiseptic practices. Modern diagnostic practices began with the discovery of x-rays and the invention of medical scanners. Improved mathematics, especially in probability theory, led to statistical studies, which in turn led to a much greater ability to identify the causes of disease and to evaluate the effectiveness of treatments. These discoveries all occurred in a necessary and inevitable order, the easiest discoveries being made first and the harder discoveries later. The order of discovery determined the course of the history of medicine and is an example of how social and cultural history has to follow a particular course determined by the structure of the world around us.
We propose a way to explain the diversification of branches of mathematics, distinguishing the different approaches by which mathematical objects can be studied. In our philosophy of mathematics, there is a base object, the abstract multiplicity that comes from our empirical experience. Due to our human condition, however, the analysis of such multiplicity is shaped by other empirical cognitive attitudes (approaches), diversifying the ways in which it can be conceived and consequently giving rise to different mathematical disciplines. This diversity of approaches is founded on the manifold categories that we find in physical reality. Grounded on this idea, we also propose the use of Aristotelian categories as a first model for this division, generating from it a classification of mathematical branches. Finally, we review the history of mathematics to show that our classification is consistent with the historical appearance of its different branches.
Standard histories of mathematics and of analytic philosophy contend that work on the foundations of mathematics was motivated by a crisis such as the discovery of paradoxes in set theory or the discovery of non-Euclidean geometries. Recent scholarship, however, casts doubt on the standard histories, opening the way for consideration of an alternative motive for the study of the foundations of mathematics—unification. Work on foundations has shown that diverse mathematical practices could be integrated into a single framework of axiomatic systems and that much of mathematics could be expressed in a single language. The new framework was the product of an interdisciplinary coalition whose ideas resemble those later adopted by the Vienna Circle and logical empiricists.
A new hypothesis on the basic features characterising the Foundations of Mathematics is suggested. By means of these features, the entire historical development of Mathematics before the 20th Century is summarised in a table. The several programs on the Foundations of Mathematics launched around the year 1900 are likewise characterised by a corresponding table. The major difficulty these programs met was to recognize an alternative to the basic feature of the deductive organization of a theory - more precisely, to Hilbert’s main tenet. Ironically, already half a century before the birth of these programs that alternative organization had been substantially represented by Lobachevsky's theory of parallel lines. Moreover, although each program’s founder recognised the basic features only partially, together these programs represented precisely the four possible foundational approaches.
The concept of similar systems arose in physics, and appears to have originated with Newton in the seventeenth century. This chapter provides a critical history of the concept of physically similar systems, the twentieth century concept into which it developed. The concept was used in the nineteenth century in various fields of engineering, theoretical physics and theoretical and experimental hydrodynamics. In 1914, it was articulated in terms of ideas developed in the eighteenth century and used in nineteenth century mathematics and mechanics: equations, functions and dimensional analysis. The terminology "physically similar systems" was proposed for this new characterization of similar systems by the physicist Edgar Buckingham. Related work by Vaschy, Bertrand, and Riabouchinsky had appeared by then. The concept is very powerful in studying physical phenomena both theoretically and experimentally. As it is not currently part of the core curricula of STEM disciplines or philosophy of science, it is not as well known as it ought to be.
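As an illustration of the dimensional analysis the chapter discusses, consider the simple pendulum (an example chosen here for illustration, not taken from the chapter): a dimensionless product of period, length, and gravitational acceleration can be found mechanically. A minimal sketch in Python:

```python
# Dimensional analysis sketch for the simple pendulum. Each quantity is
# represented by its (length, time) dimension exponents; we search by
# brute force for integer exponents (a, b, c) making T^a * L^b * g^c
# dimensionless.
from itertools import product

DIMS = {            # exponents of (length, time)
    "T": (0, 1),    # period: time
    "L": (1, 0),    # pendulum length: length
    "g": (1, -2),   # gravitational acceleration: length / time^2
}

def dimension(exps):
    """Total (length, time) exponents of T^a * L^b * g^c."""
    a, b, c = exps
    length = 0 * a + 1 * b + 1 * c
    time = 1 * a + 0 * b - 2 * c
    return (length, time)

# Nontrivial integer exponent triples yielding a dimensionless product.
groups = [e for e in product(range(-2, 3), repeat=3)
          if e != (0, 0, 0) and dimension(e) == (0, 0)]

print(groups[0])  # → (-2, 1, -1): L / (T^2 g) is dimensionless, so T ∝ sqrt(L/g)
```

The brute-force search stands in for the linear algebra of the Buckingham π theorem: the exponent triples found span the null space of the dimension matrix, and each one names a dimensionless group.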
According to Steiner (1998), in contemporary physics new important discoveries are often obtained by means of strategies which rely on purely formal mathematical considerations. In such discoveries, mathematics seems to have a peculiar and controversial role, which apparently cannot be accounted for by means of standard methodological criteria. M. Gell-Mann and Y. Ne'eman's prediction of the Ω− particle is usually considered a typical example of application of this kind of strategy. According to Bangu (2008), this prediction is apparently based on the employment of a highly controversial principle—what he calls the "reification principle". Bangu himself takes this principle to be methodologically unjustifiable, but still indispensable to make the prediction logically sound. In the present paper I will offer a new reconstruction of the reasoning that led to this prediction. By means of this reconstruction, I will show that we do not need to postulate any "reificatory" role of mathematics in contemporary physics and I will contextually clarify the representative and heuristic role of mathematics in science.
This proposal serves to enhance scientific and technological literacy by promoting STEM (Science, Technology, Engineering, and Mathematics) education, with particular reference to contemporary physics. The study is presented in the form of a repertoire, giving the reader a glimpse of the conceptual structure and development of quantum theory along a rational line of thought, whose understanding might be the key to introducing young generations of students to physics.
The need for revolution in modern physics is a well-known and often broached subject; however, the precision and success of current models narrow the possible changes to such a degree that no major change appears possible. We provide herein a first step toward a possible solution to this paradox via reinterpretation of the conceptual-theoretical framework, while still preserving the modern art and tools in unaltered form. This redivision of concepts and redistribution of the data can revolutionize expectations of new experimental outcomes. This major change within finely tuned constraints is made possible by the fact that numerous mathematically equivalent theories were direct precursors to, and contemporaneous with, the modern interpretations. In this first of a series of papers, historical investigation of the conceptual lineage of modern theory reveals points of exact overlap in physical theories which, while now considered cross-discipline, originally split from a common source and can be reintegrated into a single science again. This revival of an older associative hierarchy, combined with modern insights, can open new avenues for investigation. This reintegration of cross-disciplinary theories and tools is defined as the "Neoclassical Interpretation."
In this paper, I present the case of the discovery of complex numbers by Girolamo Cardano. Cardano acquires the concepts of (specific) complex numbers, complex addition, and complex multiplication. His understanding of these concepts is incomplete. I show that his acquisition of these concepts cannot be explained on the basis of Christopher Peacocke’s Conceptual Role Theory of concept possession. I argue that Strong Conceptual Role Theories that are committed to specifying a set of transitions that is both necessary and sufficient for possession of mathematical concepts will always face counterexamples of the kind illustrated by Cardano. I close by suggesting that we should rely more heavily on resources of Anti-Individualism as a framework for understanding the acquisition and possession of concepts of abstract subject matters.
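The specific numbers at issue are those of Cardano's famous Ars Magna problem: divide 10 into two parts whose product is 40, which forces the "sophistic" solutions 5 ± √-15. A quick verification in Python (the check is added here for illustration, not drawn from the paper):

```python
# Cardano's Ars Magna problem: split 10 into two parts whose product
# is 40. The quadratic x * (10 - x) = 40 has no real roots; the formal
# solutions involve the "impossible" quantity sqrt(-15).
import cmath

root = cmath.sqrt(-15)   # a purely imaginary square root
a = 5 + root
b = 5 - root

# Complex addition and multiplication recover exactly the constraints
# Cardano set, even though he lacked a general concept of complex numbers.
print(a + b)   # sum is 10
print(a * b)   # product is 40
```

The point of the check is that the two operations Cardano performed on these specific quantities behave exactly as modern complex addition and multiplication, which is what makes his partial grasp of the concepts philosophically interesting.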
From antiquity well into the beginning of the 20th century, geometry was a central topic for philosophy. Since then, however, most philosophers of science, if they took notice of topology at all, considered it an abstruse subdiscipline of mathematics lacking philosophical interest. Here it is argued that this neglect of topology by philosophy may be conceived of as the sign of a conceptual sea-change in philosophy of science that expelled geometry, and, more generally, mathematics, from the central position it used to have in philosophy of science and placed logic at center stage in 20th century philosophy of science. Only in recent decades has logic begun to lose its monopoly, giving geometry and topology a new chance to find a place in philosophy of science.
In the early 1900s, Russell began to recognize that he, and many other mathematicians, had been using assertions like the Axiom of Choice implicitly, and without explicitly proving them. In working with the Axioms of Choice, Infinity, and Reducibility, and his and Whitehead’s Multiplicative Axiom, Russell came to take the position that some axioms are necessary to recovering certain results of mathematics, but may not be proven to be true absolutely. The essay traces historical roots of, and motivations for, Russell’s method of analysis, which are intended to shed light on his view about the status of mathematical axioms. I describe the position Russell develops in consequence as “immanent logicism,” in contrast to what Irving (1989) describes as “epistemic logicism.” Immanent logicism allows Russell to avoid the logocentric predicament, and to propose a method for discovering structural relationships of dependence within mathematical theories.
We argue that the mathematization of science should be understood as a normative activity of advocating for a particular methodology with its own criteria for evaluating good research. As a case study, we examine the mathematization of taxonomic classification in systematic biology. We show how mathematization is a normative activity by contrasting its distinctive features in numerical taxonomy in the 1960s with an earlier reform advocated by Ernst Mayr starting in the 1940s. Both Mayr and the numerical taxonomists sought to formalize the work of classification, but Mayr introduced a qualitative formalism based on human judgment for determining the taxonomic rank of populations, while the numerical taxonomists introduced a quantitative formalism based on automated procedures for computing classifications. The key contrast between Mayr and the numerical taxonomists is how they conceptualized the temporal structure of the workflow of classification, specifically where they allowed meta-level discourse about difficulties in producing the classification.
This is not a mathematics book, but a book about mathematics, which addresses both student and teacher, with a goal as practical as possible, namely to initiate and smooth the way toward the student’s full understanding of the mathematics taught in school. The customary procedural-formal approach to teaching mathematics has resulted in students’ distorted vision of mathematics as a merely formal, instrumental, and computational discipline. Without the conceptual base of mathematics, students develop over time a “mathematical anxiety” and abandon any effort to understand mathematics, which becomes their “traditional enemy” in school. This work materializes the results of the inter- and trans-disciplinary research aimed toward the understanding of mathematics, which concluded that the fields with the potential to contribute to mathematics education in this respect, by unifying the procedural and conceptual approaches, are epistemology and philosophy of mathematics and science, as well as fundamentals and history of mathematics. These results argue that students’ fear of mathematics can be annulled through a conceptual approach, and a student with a good conceptual understanding will be a better problem solver. The author has identified those zones and concepts from the above disciplines that can be adapted and processed for familiarizing the student with this type of knowledge, which should accompany the traditional content of school mathematics. The work was organized so as to create for the reader a unificatory image of the complex nature of mathematics, as well as a conceptual perspective ultimately necessary to the holistic understanding of school mathematics. The author talks about mathematics to convince readers that to understand mathematics means first to understand it as a whole, but also as part of a whole.
The nature of mathematics, its primary concepts (like numbers and sets), its structures, language, methods, roles, and applicability are all presented in their essential content, and the explanation of non-mathematical concepts is done in accessible language with many relevant examples.
K. Marx’s 200th jubilee coincides with the 85th anniversary of the first publication of his “Mathematical Manuscripts” in 1933. Its editor, Sofia Alexandrovna Yanovskaya (1896–1966), was a renowned Soviet mathematician, whose significant studies on the foundations of mathematics and mathematical logic, as well as on the history and philosophy of mathematics, are unduly neglected nowadays. Yanovskaya, as a militant Marxist, was actively engaged in the ideological confrontation with idealism and its influence on modern mathematics and its interpretation. Concomitantly, she was one of the pioneers of mathematical logic in the Soviet Union, in an era of fierce disputes on its compatibility with Marxist philosophy. Yanovskaya managed to embrace in an originally Marxist spirit the contemporary level of logico-philosophical research of her time. Due to her highly esteemed status within Soviet academia, she became one of the most significant pillars for the culmination of modern mathematics in the Soviet Union. In this paper, I attempt to trace the influence of the complex socio-cultural context of the first decades of the Soviet Union on Yanovskaya’s work. Among the several issues I discuss, her encounter with L. Wittgenstein is striking.
We discuss central aspects of the history of the concept of an affine differentiable manifold, as a proposal confirming the need to use certain quantitative methods (drawn from elementary Model Theory) in Mathematical Historiography. In particular, we prove that this geometric structure is a syntactic rigid designator in the sense of Kripke-Putnam.
DEFINING OUR TERMS A “paradox” is an argumentation that appears to deduce a conclusion believed to be false from premises believed to be true. An “inconsistency proof for a theory” is an argumentation that actually deduces a negation of a theorem of the theory from premises that are all theorems of the theory. An “indirect proof of the negation of a hypothesis” is an argumentation that actually deduces a conclusion known to be false from the hypothesis alone or, more commonly, from the hypothesis augmented by a set of premises known to be true. A “direct proof of a hypothesis” is an argumentation that actually deduces the hypothesis itself from premises known to be true. Since ‘appears’, ‘believes’ and ‘knows’ all make elliptical reference to a participant, it is clear that ‘paradox’, ‘indirect proof’ and ‘direct proof’ are all participant-relative. PARTICIPANT RELATIVITY In normal mathematical writing the participant is presumed to be “the community of mathematicians” or some more or less well-defined subcommunity and, therefore, omission of explicit reference to the participant is often warranted. However, in historical, critical, or philosophical writing focused on emerging branches of mathematics such omission often invites confusion. One and the same argumentation has been a paradox for one mathematician, an inconsistency proof for another, and an indirect proof to a third. One and the same argumentation-text can appear to one mathematician to express an indirect proof while appearing to another mathematician to express a direct proof. WHAT IS A PARADOX’S SOLUTION? Of the above four sorts of argumentation only the paradox invites “solution” or “resolution”, and ordinarily this is to be accomplished either by discovering a logical fallacy in the “reasoning” of the argumentation or by discovering that the conclusion is not really false or by discovering that one of the premises is not really true.
Resolution of a paradox by a participant amounts to reclassifying a formerly paradoxical argumentation either as a “fallacy”, as a direct proof of its conclusion, as an indirect proof of the negation of one of its premises, as an inconsistency proof, or as something else depending on the participant's state of knowledge or belief. This illustrates why an argumentation which is a paradox to a given mathematician at a given time may well not be a paradox to the same mathematician at a later time.

The present article considers several set-theoretic argumentations that appeared in the period 1903-1908. The year 1903 saw the publication of B. Russell's Principles of mathematics [Cambridge Univ. Press, Cambridge, 1903; Jbuch 34, 62]. The year 1908 saw the publication of Russell's article on type theory as well as Ernst Zermelo's two watershed articles on the axiom of choice and the foundations of set theory. The argumentations discussed concern “the largest cardinal”, “the largest ordinal”, the well-ordering principle, “the well-ordering of the continuum”, denumerability of ordinals and denumerability of reals. The article shows that these argumentations were variously classified by various mathematicians and that the surrounding atmosphere was one of confusion and misunderstanding, partly as a result of failure to make or to heed distinctions similar to those made above. The article implies that historians have made the situation worse by not observing or not analysing the nature of the confusion.

RECOMMENDATION This well-written and well-documented article exemplifies the fact that clarification of history can be achieved through articulation of distinctions that had not been articulated (or were not being heeded) at the time. The article presupposes extensive knowledge of the history of mathematics, of mathematics itself (especially set theory) and of philosophy. It is therefore not to be recommended for casual reading.
AFTERWORD: This review was written at the same time Corcoran was writing his signature “Argumentations and logic” [249], which covers much of the same ground in much more detail. https://www.academia.edu/14089432/Argumentations_and_Logic
This paper argues that the principle of continuity that underlies Benjamin’s understanding of what makes the reality of a thing thinkable, which in the Kantian context implies a process of “filling time” with an anticipatory structure oriented to the subject, is of a different order than that of infinitesimal calculus—and that a “discontinuity” constitutive of the continuity of experience and (merely) counterposed to the image of actuality as an infinite gradation of ultimately thetic acts cannot be the principle on which Benjamin bases the structure of becoming. Tracking the transformation of the process of “filling time” from its logical to its historical iteration, or from what Cohen called the “fundamental acts of time” in Logik der reinen Erkenntnis to Benjamin’s image of a language of language (qua language touching itself), the paper will suggest that for Benjamin, moving from 0 to 1 is anything but paradoxical, and instead relies on the possibility for a mathematical function to capture the nature of historical occurrence beyond paradoxes of language or phenomenality.
The paper follows the track of a previous paper, “Natural cybernetics of time”, in asking how history might be mathematized despite being a descriptive humanitarian science that investigates unique events and thus rejects any repeatability. The pathway of classical experimental science, mathematized gradually and smoothly by ever more relevant mathematical models, seems to be inapplicable. Quantum mechanics, however, suggests another pathway for mathematization: considering the historical reality as dual or “complementary” to its model. The historical reality by itself can be seen as mathematical if one considers it in Hegel’s manner as a specific interpretation of the totality in permanent self-movement due to being just the totality, i.e. by means of the “speculative dialectics” of history, realized however as a theory both mathematical and empirical and thus falsifiable both by logical contradictions within itself and by empirical discrepancies with facts. No less, a Husserlian kind of “historical phenomenology” is possible along with Hegel’s historical dialectics, sharing the postulate of the totality (and thus that of transcendentalism). One could then suggest the transcendental counterpart: an “eternal”, i.e. atemporal and aspatial, history alongside the usual descriptive temporal history, equating the real course of history both with its alternative, actually realized branches in the regions of the world and with merely imaginable, counterfactual histories. That universal and transcendental history is properly mathematical by itself, even in a neo-Pythagorean model. It is only represented on the temporal screen of standard historiography as a discrete series of unique events. An analogy to the readings of the apparatus in quantum mechanics can be useful.
Even more, that analogy is considered rigorously and logically as implied by the mathematical transcendental history, sharing with it the same quantity of information as an invariant of all possible alternative or counterfactual histories. One can invoke a hypothetical external viewpoint on history (as if outside of history, or from “God’s viewpoint” on it), from which all alternative or counterfactual histories can be regarded as a class of equivalence sharing the same information (i.e. the same number of choices, realized in different sequences or with redundant ones added in each branch), similar and even mathematically isomorphic to Feynman trajectories in quantum mechanics. In particular, a fundamental law of mathematical history, the law of least choice of the real historical pathway, is deducible from the same approach. Its counterpart in physics is the well-known and confirmed law of least action, insofar as the quantity of action corresponds to the quantity of information, or to the number of elementary historical choices.
We have reached the peculiar situation where the advance of mainstream science has required us to dismiss as unreal our own existence as free, creative agents, the very condition of there being science at all. Efforts to free science from this dead-end and to give a place to creative becoming in the world have been hampered by unexamined assumptions about what science should be, assumptions which presuppose that if creative becoming is explained, it will be explained away as an illusion. In this paper it is shown that this problem has permeated the whole of European civilization from the Ancient Greeks onwards, leading to a radical disjunction between cosmology, which aims at a grasp of the universe through mathematics, and history, which aims to comprehend human action through stories. By going back to the Ancient Greeks and tracing the evolution of the denial of creative becoming, I trace the layers of assumptions that must in some way be transcended if we are to develop a truly post-Egyptian science consistent with the forms of understanding and explanation that have evolved within history.
The article evaluates the Domain Postulate of the Classical Model of Science and the related Aristotelian prohibition rule on kind-crossing as interpretative tools in the history of the development of mathematics into a general science of quantities. Special reference is made to Proclus’ commentary on the first book of Euclid’s Elements, to the sixteenth-century translations of Euclid’s work into Latin, and to the works of Stevin, Wallis, Viète and Descartes. The prohibition rule on kind-crossing formulated by Aristotle in the Posterior Analytics is used to distinguish between conceptions that share the same name but are substantively different: for example, the search for a broader genus including all mathematical objects; the search for a common character of different species of mathematical objects; and the effort to treat magnitudes as numbers.
Epstein and Carnielli's fine textbook on logic and computability is now in its second edition. The readers of this journal might be particularly interested in the timeline `Computability and Undecidability' added in this edition, and the included wall-poster of the same title. The text itself, however, has some aspects which are worth commenting on.
In 1901 Russell had envisaged the new analytic philosophy as uniquely systematic, borrowing the methods of science and mathematics. A century later, have Russell’s hopes become reality? David Lewis is often celebrated as a great systematic metaphysician, his influence proof that we live in a heyday of systematic philosophy. But, we argue, this common belief is misguided: Lewis was not a systematic philosopher, and he didn’t want to be. Although some aspects of his philosophy are systematic, mainly his pluriverse of possible worlds and its many applications, that systematicity was due to the influence of his teacher Quine, who really was an heir to Russell. Drawing upon Lewis’s posthumous papers and his correspondence as well as the published record, we show that Lewis’s non-Quinean influences, including G.E. Moore and D.M. Armstrong, led Lewis to an anti-systematic methodology which leaves each philosopher’s views and starting points to his or her own personal conscience.
The present yearbook (which is the fourth in the series) is subtitled Trends & Cycles. It is devoted to cyclical and trend dynamics in society and nature; special attention is paid to economic and demographic aspects, in particular to the mathematical modeling of the Malthusian and post-Malthusian traps' dynamics. An increasingly important role is played by new directions in historical research that study long-term dynamic processes and quantitative changes. This kind of history can hardly develop without the application of mathematical methods. There is a tendency to study history as a system of various processes, within which one can detect waves and cycles of different lengths – from a few years to several centuries, or even millennia. The contributions to this yearbook present a qualitative and quantitative analysis of global historical, political, economic and demographic processes, as well as their mathematical models. This issue of the yearbook consists of three main sections: (I) Long-Term Trends in Nature and Society; (II) Cyclical Processes in Pre-industrial Societies; (III) Contemporary History and Processes. We hope that this issue of the yearbook will be interesting and useful both for historians and mathematicians, as well as for all those dealing with various social and natural sciences.
Bertrand Russell’s _Principles of Mathematics_ (1903) gives rise to several interpretational challenges, especially concerning the theory of denoting concepts. Only relatively recently, for instance, has it been properly realised that Russell accepted denoting concepts that do not denote anything. Such empty denoting concepts are sometimes thought to enable Russell, whether he was aware of it or not, to avoid commitment to some of the problematic non-existent entities he seems to accept, such as the Homeric gods and chimeras. In this paper, I argue first that the theory of denoting concepts in _Principles of Mathematics_ has been generally misunderstood. According to the interpretation I defend, if a denoting concept shifts what a proposition is about, then the aggregate of the denoted terms will also be a constituent of the proposition. I then show that Russell therefore could not have avoided commitment to the Homeric gods and chimeras by appealing to empty denoting concepts. Finally, I develop what I think is the best understanding of the ontology of _Principles of Mathematics_ by interpreting some difficult passages.
This 4-page review-essay—which is entirely reportorial and philosophically neutral, as are my other contributions to MATHEMATICAL REVIEWS—starts with a short introduction to the philosophy known as mathematical structuralism. The history of structuralism traces back to George Boole (1815–1864). By reference to a recent article, various features of structuralism are discussed, with special attention to ambiguity and other terminological issues. The review-essay includes a description of the recent article. The article’s 4-sentence summary is quoted in full and then analyzed. The point of the quotation is to make clear how murky, incompetent, and badly written the paper is. There is no way to determine from the article whether the editor or referees suggested improvements.
Girolamo Saccheri (1667--1733) was an Italian Jesuit priest, scholastic philosopher, and mathematician. He earned a permanent place in the history of mathematics by discovering and rigorously deducing an elaborate chain of consequences of an axiom-set for what is now known as hyperbolic (or Lobachevskian) plane geometry. Reviewer's remarks: (1) On two pages of this book Saccheri refers to his previous and equally original book Logica demonstrativa (Turin, 1697) to which 14 of the 16 pages of the editor's "Introduction" (...) are devoted. At the time of the first edition, 1920, the editor was apparently not acquainted with the secondary literature on Logica demonstrativa which continued to grow in the period preceding the second edition \ref[see D. J. Struik, in Dictionary of scientific biography, Vol. 12, 55--57, Scribner's, New York, 1975]. Of special interest in this connection is a series of three articles by A. F. Emch [Scripta Math. 3 (1935), 51--60; Zbl 10, 386; ibid. 3 (1935), 143--152; Zbl 11, 193; ibid. 3 (1935), 221--333; Zbl 12, 98]. (2) It seems curious that modern writers believe that demonstration of the "nondeducibility" of the parallel postulate vindicates Euclid whereas at first Saccheri seems to have thought that demonstration of its "deducibility" is what would vindicate Euclid. Saccheri is perfectly clear in his commitment to the ancient (and now discredited) view that it is wrong to take as an "axiom" a proposition which is not a "primal verity", which is not "known through itself". So it would seem that Saccheri should think that he was convicting Euclid of error by deducing the parallel postulate. The resolution of this confusion is that Saccheri thought that he had proved, not merely that the parallel postulate was true, but that it was a "primal verity" and, thus, that Euclid was correct in taking it as an "axiom". As implausible as this claim about Saccheri may seem, the passage on p. 
237, lines 3--15, seems to admit of no other interpretation. Indeed, Emch takes it this way. (3) As has been noted by many others, Saccheri was fascinated, if not obsessed, by what may be called "reflexive indirect deductions", indirect deductions which show that a conclusion follows from given premises by a chain of reasoning beginning with the given premises augmented by the denial of the desired conclusion and ending with the conclusion itself. It is obvious, of course, that this is simply a species of ordinary indirect deduction; a conclusion follows from given premises if a contradiction is deducible from those given premises augmented by the denial of the conclusion---and it is immaterial whether the contradiction involves one of the premises, the denial of the conclusion, or even, as often happens, intermediate propositions distinct from the given premises and the denial of the conclusion. Saccheri seemed to think that a proposition proved in this way was deduced from its own denial and, thus, that its denial was self-contradictory (p. 207). Inference from this mistake to the idea that propositions proved in this way are "primal verities" would involve yet another confusion. The reviewer gratefully acknowledges extensive communication with his former doctoral students J. Gasser and M. Scanlan. ADDED March 14, 2015: (1) Wikipedia reports that many of Saccheri's ideas have a precedent in the 11th-century Persian polymath Omar Khayyám's Discussion of Difficulties in Euclid, a fact ignored in most Western sources until recently. It is unclear whether Saccheri had access to this work in translation, or developed his ideas independently. (2) This book is another exemplification of the huge difference between indirect deduction and indirect reduction. Indirect deduction requires making an assumption that is inconsistent with the premises previously adopted. This means that the reasoner must perform a certain mental act of assuming a certain proposition. 
In case the premises are all known truths, indirect deduction—which would then be indirect proof—requires the reasoner to assume a falsehood. This fact has been noted by several prominent mathematicians including Hardy, Hilbert, and Tarski. Indirect reduction requires no new assumption. Indirect reduction is simply a transformation of an argument in one form into another argument in a different form. In an indirect reduction one proposition in the old premise set is replaced by the contradictory opposite of the old conclusion and the new conclusion becomes the contradictory opposite of the replaced premise. Roughly and schematically, P,Q/R becomes P,~R/~Q or ~R,Q/~P. Saccheri’s work involved indirect deduction, not indirect reduction. (3) The distinction between indirect deduction and indirect reduction has largely slipped through the cracks, the cracks between medieval-oriented logic and modern-oriented logic. The medievalists have a heavy investment in reduction and, though they have heard of deduction, they think that deduction is a form of reduction, or vice versa, or in some cases they think that the word ‘deduction’ is the modern way of referring to reduction. The modernists have no interest in reduction, i.e. in the process of transforming one argument into another having exactly the same number of premises. Modern logicians, like Aristotle, are concerned with deducing a single proposition from a set of propositions. Some focus on deducing a single proposition from the null set—something difficult to relate to reduction.
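The contrast drawn above can be made precise in a proof assistant. The following minimal sketch in Lean 4 (the theorem names are my own labels, not standard terminology) shows that indirect reduction—the schematic transformation P,Q/R into P,~R/~Q—is a mere rearrangement of a given derivation requiring no new assumption, whereas indirect deduction genuinely introduces the assumption ¬R:

```lean
-- Indirect reduction as argument transformation: if R follows from
-- premises P and Q, then ¬Q follows from premises P and ¬R.
-- (Schematically: P, Q / R becomes P, ¬R / ¬Q.)
theorem indirect_reduction {P Q R : Prop} (h : P → Q → R) :
    P → ¬R → ¬Q :=
  fun hp hnr hq => hnr (h hp hq)

-- Indirect deduction (reductio): to conclude R from P and Q, one assumes
-- ¬R and derives a contradiction -- here a new assumption ¬R is introduced
-- and then discharged classically.
theorem indirect_deduction {P Q R : Prop}
    (h : P → Q → ¬R → False) : P → Q → R :=
  fun hp hq => Classical.byContradiction (fun hnr => h hp hq hnr)
```

Note that the first proof term simply reorders the material already present, while the second invokes classical reasoning to discharge the added assumption; this mirrors the reviewer's point that the two procedures are different in kind.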
Book Review for Reading Natural Philosophy: Essays in the History and Philosophy of Science and Mathematics, La Salle, IL: Open Court, 2002. Edited by David Malament. This volume includes thirteen original essays by Howard Stein, spanning a range of topics that Stein has written about with characteristic passion and insight. This review focuses on the essays devoted to history and philosophy of physics.
Contemporary natural-language semantics began with the assumption that the meaning of a sentence could be modeled by a single truth condition, or by an entity with a truth-condition. But with the recent explosion of dynamic semantics and pragmatics and of work on non-truth-conditional dimensions of linguistic meaning, we are now in the midst of a shift away from a truth-condition-centric view and toward the idea that a sentence’s meaning must be spelled out in terms of its various roles in conversation. This communicative turn in semantics raises historical questions: Why was truth-conditional semantics dominant in the first place, and why were the phenomena now driving the communicative turn initially ignored or misunderstood by truth-conditional semanticists? I offer a historical answer to both questions. The history of natural-language semantics—springing from the work of Donald Davidson and Richard Montague—began with a methodological toolkit that Frege, Tarski, Carnap, and others had created to better understand artificial languages. For them, the study of linguistic meaning was subservient to other explanatory goals in logic, philosophy, and the foundations of mathematics, and this subservience was reflected in the fact that they idealized away from all aspects of meaning that get in the way of a one-to-one correspondence between sentences and truth-conditions. The truth-conditional beginnings of natural-language semantics are best explained by the fact that, upon turning their attention to the empirical study of natural language, Davidson and Montague adopted the methodological toolkit assembled by Frege, Tarski, and Carnap and, along with it, their idealization away from non-truth-conditional semantic phenomena. But this pivot in explanatory priorities toward natural language itself rendered the adoption of the truth-conditional idealization inappropriate. 
Lifting the truth-conditional idealization has forced semanticists to upend the conception of linguistic meaning that was originally embodied in their methodology.
The continuing interest in S. Hawking's book A Brief History of Time makes a philosophical evaluation of its content highly desirable. As will be shown, the genre of this work can be identified with a speciality in philosophy, namely the proof of the existence of God. In this study an attempt is made to unveil the philosophical concepts and steps that lead to the final conclusions, without discussing in detail the remarkable review of modern physical theories. In order to clarify these concepts, the classical Aristotelian-Thomistic proof of the existence of God is presented and compared with Hawking's approach. For his argumentation he uses a concept of causality which, in contrast to classical philosophy, completely neglects ontological dependence and is reduced to temporal aspects only. On the basis of this temporal causality and modern physical theories and speculations, Hawking arrives at his conclusions about a very restricted role for a possible creator. It is shown that, from neither the philosophical nor the scientific point of view, are his conclusions about the existence of God strictly convincing, a limitation Hawking himself seems to be aware of.
The syllogistic figures and moods can be taken to be argument schemata, as can the rules of the Stoic propositional logic. Sentence schemata have been used in axiomatizations of logic only since the landmark 1927 von Neumann paper [31]. Modern philosophers know the role of schemata in explications of the semantic conception of truth through Tarski’s 1933 Convention T [42]. Mathematical logicians recognize the role of schemata in first-order number theory where Peano’s second-order Induction Axiom is approximated by Herbrand’s Induction-Axiom Schema [23]. Similarly, in first-order set theory, Zermelo’s second-order Separation Axiom is approximated by Fraenkel’s first-order Separation Schema [17]. In some of several closely related senses, a schema is a complex system having multiple components one of which is a template-text or scheme-template, a syntactic string composed of one or more “blanks” and also possibly significant words and/or symbols. In accordance with a side condition the template-text of a schema is used as a “template” to specify a multitude, often infinite, of linguistic expressions such as phrases, sentences, or argument-texts, called instances of the schema. The side condition is a second component. The collection of instances may but need not be regarded as a third component. The instances are almost always considered to come from a previously identified language (whether formal or natural), which is often considered to be another component. This article reviews the often-conflicting uses of the expressions ‘schema’ and ‘scheme’ in the literature of logic. It discusses the different definitions presupposed by those uses. And it examines the ontological and epistemic presuppositions circumvented or mooted by the use of schemata, as well as the ontological and epistemic presuppositions engendered by their use. In short, this paper is an introduction to the history and philosophy of schemata.
In this paper we apply social epistemology to mathematical proofs and their role in mathematical knowledge. The most famous modern collaborative mathematical proof effort is the Classification of Finite Simple Groups. The history and sociology of this proof have been well-documented by Alma Steingart (2012), who highlights a number of surprising and unusual features of this collaborative endeavour that set it apart from smaller-scale pieces of mathematics. These features raise a number of interesting philosophical issues, but have received very little attention. In this paper, we will consider the philosophical tensions that Steingart uncovers, and use them to argue that the best account of the epistemic status of the Classification Theorem will be essentially and ineliminably social. This forms part of the broader argument that in order to understand mathematical proofs, we must appreciate their social aspects.
A Mathematical Review by John Corcoran, SUNY/Buffalo

Macbeth, Danielle. Diagrammatic reasoning in Frege's Begriffsschrift. Synthese 186 (2012), no. 1, 289–314. ABSTRACT This review begins with two quotations from the paper: its abstract and the first paragraph of the conclusion. The point of the quotations is to make clear by the “give-them-enough-rope” strategy how murky, incompetent, and badly written the paper is. I know I am asking a lot, but I have to ask you to read the quoted passages—aloud if possible. Don’t miss the silly attempt to recycle Kant’s quip “Concepts without intuitions are empty; intuitions without concepts are blind”. What the paper was aiming at includes the absurdity: “Proofs without definitions are empty; definitions without proofs are, if not blind, then dumb.” But the author even bollixed this. The editor didn’t even notice. The copy-editor missed it. And the author’s proof-reading did not catch it. In order not to torment you I will quote the sentence as it appears: “In a slogan: proofs without definitions are empty, merely the aimless manipulation of signs according to rules; and definitions without proofs are, if no blind, then dumb.”[sic] The rest of my review discusses the paper’s astounding misattribution to contemporary logicians of the information-theoretic approach. This approach was cruelly trashed by Quine in his 1970 Philosophy of Logic, and thereafter ignored by every text I know of. The paper under review attributes generally to modern philosophers and logicians views that were never espoused by any of the prominent logicians—such as Hilbert, Gödel, Tarski, Church, and Quine—apparently in an attempt to distance them from Frege: the focus of the article. On page 310 we find the following paragraph. “In our logics it is assumed that inference potential is given by truth-conditions. Hence, we think, deduction can be nothing more than a matter of making explicit information that is already contained in one’s premises. 
If the deduction is valid then the information contained in the conclusion must be contained already in the premises; if that information is not contained already in the premises […], then the argument cannot be valid.” Although the paper is meticulous in citing supporting literature for less questionable points, no references are given for this. In fact, the view that deduction is the making explicit of information that is only implicit in premises has not been espoused by any standard symbolic logic books. It has only recently been articulated by a small number of philosophical logicians from a younger generation, for example, in the prize-winning essay by J. Sagüillo, Methodological practice and complementary concepts of logical consequence: Tarski’s model-theoretic consequence and Corcoran’s information-theoretic consequence, History and Philosophy of Logic, 30 (2009), pp. 21–48. The paper omits definitions of key terms including ‘ampliative’, ‘explicatory’, ‘inference potential’, ‘truth-condition’, and ‘information’. The definition of prime number on page 292 is as follows: “To say that a number is prime is to say that it is not divisible without remainder by another number”. This would make one the only prime number. The paper being reviewed had the benefit of two anonymous referees who contributed “very helpful comments on an earlier draft”. Could these anonymous referees have read the paper?

J. Corcoran, U of Buffalo, SUNY

PS By the way, if anyone has a paper that has been turned down by other journals, any journal that would publish something like this might be worth trying.
Austrian-born Kurt Gödel is widely considered the greatest logician of modern times. It is above all his celebrated incompleteness theorems—rigorous mathematical results about the necessary limits...
CORCORAN RECOMMENDS COCCHIARELLA ON TYPE THEORY. The 1983 review in Mathematical Reviews 83e:03005 of: Cocchiarella, Nino “The development of the theory of logical types and the notion of a logical subject in Russell's early philosophy: Bertrand Russell's early philosophy, Part I”. Synthese 45 (1980), no. 1, 71–115.
Research into ancient physical structures, some having been known as the seven wonders of the ancient world, inspired new developments in the early history of mathematics. At the other end of this spectrum of inquiry, the research is concerned with the minimum of observations from physical data, as exemplified by Eddington's Principle. Current discussions of the interplay between physics and mathematics revive some of this early history of mathematics and offer insight into the fine-structure constant. Arthur Eddington's work leads to a new calculation of the inverse fine-structure constant, giving the same approximate value as ancient geometry combined with the golden ratio structure of the hydrogen atom. The hyperbolic function suggested by Alfred Landé leads to another result, involving the Laplace limit of Kepler's equation, with the same approximate value and related to the aforementioned results. The accuracy of these results is consistent with the standard reference. Relationships between the four fundamental coupling constants are also found.
In this paper, we wish to highlight, within the general cultural context, some possible elementary computational psychoanalysis formalizations concerning Matte Blanco’s bi-logic components, through certain very elementary mathematical tools and notions drawn from theoretical physics and algebra. NOTE: This is the corrected version of the paper; an incorrect version was wrongly uploaded in the related published proceedings.
In the first part of this article we survey general similarities and differences between biological and social macroevolution. In the second (and main) part, we consider a concrete mathematical model capable of describing important features of both biological and social macroevolution. In mathematical models of historical macrodynamics, a hyperbolic pattern of world population growth arises from non-linear, second-order positive feedback between demographic growth and technological development. Based on diverse paleontological data and an analogy with macrosociological models, we suggest that the hyperbolic character of biodiversity growth can be similarly accounted for by non-linear, second-order positive feedback between diversity growth and the complexity of community structure. We discuss how such positive feedback mechanisms can be modelled mathematically.
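The hyperbolic pattern mentioned in this abstract is standardly generated by making the growth rate proportional to the square of the system size, dN/dt = aN², whose solution N(t) = N₀/(1 − aN₀t) blows up in finite time. A minimal numerical sketch under that standard assumption (the parameter values here are purely illustrative, not taken from the article):

```python
# Hyperbolic growth from second-order positive feedback: dN/dt = a * N**2.
# Closed-form solution: N(t) = N0 / (1 - a * N0 * t),
# which diverges at the finite time t* = 1 / (a * N0).

def hyperbolic(n0: float, a: float, t: float) -> float:
    """Exact solution of dN/dt = a*N^2 with N(0) = n0 (valid for t < t*)."""
    return n0 / (1.0 - a * n0 * t)

def euler(n0: float, a: float, t_end: float, steps: int) -> float:
    """Forward-Euler integration of the same ODE, for comparison."""
    n, dt = n0, t_end / steps
    for _ in range(steps):
        n += a * n * n * dt
    return n

if __name__ == "__main__":
    n0, a = 1.0, 0.1   # illustrative values; singularity at t* = 10
    t = 5.0            # evaluate well before the singularity
    print(hyperbolic(n0, a, t))       # exact value: 2.0
    print(euler(n0, a, t, 100_000))   # numerical value, close to 2.0
```

The finite-time divergence is the signature of the second-order feedback: unlike exponential growth (dN/dt = aN), the doubling time itself shrinks as the system grows.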
The concept of ‘ideas’ plays a central role in philosophy. This research analyzes the genesis of the idea of continuity and its essential role in intellectual history. The main question of this research is how the idea of continuity entered the human cognitive system. In this context, we analyze the epistemological function of this idea. In intellectual history, the idea of continuity was first introduced by Leibniz. After him, this idea, as a paradigm, formed the base of several fundamental scientific conceptions. This idea also allowed mathematicians to justify the nature of the real numbers, which was one of the central questions and intellectual discussions in the history of mathematics. For this reason, we analyze how Dedekind’s idea of continuity was used for this justification. As a result, it can be said that several fundamental conceptions in intellectual history, philosophy and mathematics could not have arisen without the idea of continuity. However, this idea is neither a purely philosophical nor a purely mathematical idea; it is an interdisciplinary concept. For this reason, we classify it as a mathematical and philosophical invariance.
intro to Part 1

Most people disliked mathematics when they were at school, and they were absolutely correct to do so. This is because maths as we know it is severely incomplete. No matter how elaborate and complicated mathematical equations become, in today's world they're based on 1+1=2. This certainly conforms to the world our physical senses perceive and to the world scientific instruments detect. It has been of immeasurable value to all knowledge throughout history and has elevated science to the lofty status it enjoys. Science is now striving towards Unification - where the subatomic realm, all matter, energy, forces, space and time will be seen as entangled parts of one universe. While 1+1=2 has been vital in getting humanity to this point, it's time to suppress our attachments to the past and realize that whereas 1+1 will always equal 2, it's also capable of equalling the 1 which represents unification.

intro to Part 2

b) Division by zero is accepted, in Newtonian maths, to be impossible. But we can regard division by zero as division by nothing, i.e. division that has no effect. In this case, 1 divided by 0 is 1. However, to a physicist there is no such thing as nothing (even empty space contains energy). What could the something called 0 actually be? It could be a binary digit. If we use the base of ten (for simplicity) and attach one and zero to it as exponents, we get 10^1 divided by 10^0 = 10^1. If we then cancel 10 from each factor in the expression, we get 1 divided by 0 = 1. At the start of the paragraph, this was referred to as division by nothing. Then 0 was called a binary digit and division by nothing became division by something. The 1 that the division equals is the unified field of space-time. Division by 0 is impossible in Newtonian maths because the result can be infinity. But the word “infinity” can, as the last section of this book shows, apply to the unified field of spacetime. 
So division by zero is not impossible because it results in the universe, which is obviously possible … a possibility that has always been, and always will be, realized.

intro to Part 3

If quantum entanglement had existed in the entire universe forever, everything would be everywhere and everywhen. Space, time and 5th-dimensional hyperspace would not be restricted to certain parts of the Mobius Universe but would exist in every particle. Past, present and future would not exist as the distinct periods which everyday life assumes. All instants of all periods would exist eternally, permitting time travel to any point in the past and to any point in the future. Entanglement may be created by simply zipping along at close to the speed of light - “Quantum entanglement of moving bodies” by Robert M. Gingrich and Christoph Adami in Physical Review Letters 89, 270402 (issue of 30 December 2002) - which might be achieved, according to this book, by warping space so it’s either a fraction of the 90 degrees allowing instantaneous travel or almost at 270 degrees to space as we know it.
The Mathematical Imagination focuses on the role of mathematics and digital technologies in the critical theory of culture. The book belongs to the history of ideas rather than to the history of mathematics proper, since it treats mathematics on a metaphorical level, to express phenomena of silence or discontinuity. To make the book more accessible to non-specialist readers, I first present the essential concepts, background, and objectives of the book...
What were the reasons for the Copernican Revolution? How did modern science (created by a handful of ambitious intellectuals) manage to force out the old one, created by Aristotle and Ptolemy, rooted in millennial traditions and strongly supported by the Church? What deep internal causes and strong social movements took part in the genesis, development and victory of modern science? The author arrives at a new picture of the Copernican Revolution on the basis of an elaborated model of scientific revolutions that takes into account some recent advances in the philosophy, sociology and history of science. The model was initially invented to describe Einstein’s Revolution at the beginning of the 20th century. The model considers the growth of knowledge as the interaction, interpenetration and unification of research programmes springing out of different cultural traditions. Thus, the Copernican Revolution appears as a result of the revelation and (partial) resolution of a dualism, of the gap between Ptolemy’s mathematical astronomy and Aristotelian qualitative physics. The works of Copernicus, Galileo, Kepler and Newton were all stages in the descent of mathematics from the heavens to earth and the reciprocal extrapolation of terrestrial physics to the heavens. The elaborated model makes it possible to reassess the role of some social factors crucial for the scientific revolution. It is argued that modern science was initially a result of the development of the Christian Weltanschauung. Later the main support came from the absolute monarchies. In the long run the creators of modern science turned out to be the “apparatchiks” of a “regime of truth” built into the state machine. Natural science became a part of the ideological state apparatus, providing not only scientific education but also the internalization of values crucial for the functioning of the state.
Recent experimental evidence from developmental psychology and cognitive neuroscience indicates that humans are equipped with unlearned elementary mathematical skills. However, formal mathematics has properties that cannot be reduced to these elementary cognitive capacities. The question then arises how human beings cognitively deal with more advanced mathematical ideas. This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment. In this view, mathematical symbols are not only used to express mathematical concepts—they are constitutive of the mathematical concepts themselves. Mathematical symbols are epistemic actions, because they enable us to represent concepts that are literally unthinkable with our bare brains. Using case-studies from the history of mathematics and from educational psychology, we argue for an intimate relationship between mathematical symbols and mathematical cognition.
In this paper, we look at Bourbaki’s work as a case study for the notion of mathematical style. We argue that indeed Bourbaki exemplifies a mathematical style, namely the structuralist style.
In my dissertation, I present Hermann Cohen's foundation for the history and philosophy of science. My investigation begins with Cohen's formulation of a neo-Kantian epistemology. I analyze Cohen's early work, especially his contributions to 19th century debates about the theory of knowledge. I conclude by examining Cohen's mature theory of science in two works, The Principle of the Infinitesimal Method and its History of 1883, and Cohen's extensive 1914 Introduction to Friedrich Lange's History of Materialism. In the former, Cohen gives an historical and philosophical analysis of the foundations of the infinitesimal method in mathematics. In the latter, Cohen presents a detailed account of Heinrich Hertz's Principles of Mechanics of 1894. Hertz considers a series of possible foundations for mechanics, in the interest of finding a secure conceptual basis for mechanical theories. Cohen argues that Hertz's analysis can be completed, and his goal achieved, by means of a philosophical examination of the role of mathematical principles and fundamental concepts in scientific theories.
Philosophy of history is the conceptual and technical study of the relation between philosophy and history. This paper analyzes and examines the nature of the philosophy of history, its methodology and its development. In it I try to set out the limits of knowledge in order to give a specific account of Hegel’s idealist view of the philosophy of history. I also use philosophical methodology and philosophical inquiry, question and hypothesis to discuss Hegel’s idealist concept of the philosophy of history. The paper also examines and demonstrates the views of other idealist philosophers such as Socrates, Plato and Aristotle. It shows how the history of mathematics is complementary to idealism, since most of the philosophers who were idealists were also great mathematicians. We investigate epistemological, logical and metaphysical approaches to studying the nature of history, the meaning of history and the structure of history.
In “What Makes a Scientific Explanation Distinctively Mathematical?” (2013b), Lange uses several compelling examples to argue that certain explanations for natural phenomena appeal primarily to mathematical, rather than natural, facts. In such explanations, the core explanatory facts are modally stronger than facts about causation, regularity, and other natural relations. We show that Lange's account of distinctively mathematical explanation is flawed in that it fails to account for the implicit directionality in each of his examples. This inadequacy is remediable in each case by appeal to ontic facts that account for why the explanation is acceptable in one direction and unacceptable in the other direction. The mathematics involved in these examples cannot play this crucial normative role. While Lange's examples fail to demonstrate the existence of distinctively mathematical explanations, they help to emphasize that many superficially natural scientific explanations rely for their explanatory force on relations of stronger-than-natural necessity. These are not opposing kinds of scientific explanations; they are different aspects of scientific explanation.