This paper examines Helmholtz's attempt to use empirical psychology to refute certain of Kant's epistemological positions. In particular, Helmholtz believed that his work in the psychology of visual perception showed Kant's doctrine of the a priori character of spatial intuition to be in error. Some of Helmholtz's arguments are effective, but their effectiveness derives from his arguments for the possibility of obtaining evidence that the structure of physical space is non-Euclidean, and these arguments do not depend on his theory of vision. Helmholtz's general attempt to provide an empirical account of the "inferences" of perception is regarded as a failure.
David Hyder. The Determinate World: Kant and Helmholtz on the Physical Meaning of Geometry. viii + 229 pp., bibl., index. Berlin/New York: Walter de Gruyter, 2009.
Girolamo Saccheri (1667--1733) was an Italian Jesuit priest, scholastic philosopher, and mathematician. He earned a permanent place in the history of mathematics by discovering and rigorously deducing an elaborate chain of consequences of an axiom-set for what is now known as hyperbolic (or Lobachevskian) plane geometry. Reviewer's remarks: (1) On two pages of this book Saccheri refers to his previous and equally original book Logica demonstrativa (Turin, 1697), to which 14 of the 16 pages of the editor's "Introduction" are devoted. At the time of the first edition, 1920, the editor was apparently not acquainted with the secondary literature on Logica demonstrativa, which continued to grow in the period preceding the second edition [see D. J. Struik, in Dictionary of Scientific Biography, Vol. 12, 55--57, Scribner's, New York, 1975]. Of special interest in this connection is a series of three articles by A. F. Emch [Scripta Math. 3 (1935), 51--60; Zbl 10, 386; ibid. 3 (1935), 143--152; Zbl 11, 193; ibid. 3 (1935), 221--333; Zbl 12, 98]. (2) It seems curious that modern writers believe that demonstration of the "nondeducibility" of the parallel postulate vindicates Euclid, whereas at first Saccheri seems to have thought that demonstration of its "deducibility" is what would vindicate Euclid. Saccheri is perfectly clear in his commitment to the ancient (and now discredited) view that it is wrong to take as an "axiom" a proposition which is not a "primal verity", which is not "known through itself". So it would seem that Saccheri should think that he was convicting Euclid of error by deducing the parallel postulate. The resolution of this confusion is that Saccheri thought that he had proved, not merely that the parallel postulate was true, but that it was a "primal verity" and, thus, that Euclid was correct in taking it as an "axiom". As implausible as this claim about Saccheri may seem, the passage on p. 237, lines 3--15, seems to admit of no other interpretation. Indeed, Emch takes it this way. (3) As has been noted by many others, Saccheri was fascinated, if not obsessed, by what may be called "reflexive indirect deductions": indirect deductions which show that a conclusion follows from given premises by a chain of reasoning beginning with the given premises augmented by the denial of the desired conclusion and ending with the conclusion itself. It is obvious, of course, that this is simply a species of ordinary indirect deduction; a conclusion follows from given premises if a contradiction is deducible from those given premises augmented by the denial of the conclusion---and it is immaterial whether the contradiction involves one of the premises, the denial of the conclusion, or even, as often happens, intermediate propositions distinct from the given premises and the denial of the conclusion. Saccheri seemed to think that a proposition proved in this way was deduced from its own denial and, thus, that its denial was self-contradictory (p. 207). Inference from this mistake to the idea that propositions proved in this way are "primal verities" would involve yet another confusion. The reviewer gratefully acknowledges extensive communication with his former doctoral students J. Gasser and M. Scanlan. ADDED March 14, 2015: (1) Wikipedia reports that many of Saccheri's ideas have a precedent in the 11th-century Persian polymath Omar Khayyám's Discussion of Difficulties in Euclid, a fact ignored in most Western sources until recently.
It is unclear whether Saccheri had access to this work in translation, or developed his ideas independently. (2) This book is another exemplification of the huge difference between indirect deduction and indirect reduction. Indirect deduction requires making an assumption that is inconsistent with the premises previously adopted. This means that the reasoner must perform a certain mental act of assuming a certain proposition. In case the premises are all known truths, indirect deduction—which would then be indirect proof—requires the reasoner to assume a falsehood. This fact has been noted by several prominent mathematicians including Hardy, Hilbert, and Tarski. Indirect reduction requires no new assumption. Indirect reduction is simply a transformation of an argument in one form into another argument in a different form. In an indirect reduction one proposition in the old premise set is replaced by the contradictory opposite of the old conclusion and the new conclusion becomes the contradictory opposite of the replaced premise. Roughly and schematically, P,Q/R becomes P,~R/~Q or ~R,Q/~P. Saccheri’s work involved indirect deduction, not indirect reduction. (3) The distinction between indirect deduction and indirect reduction has largely slipped through the cracks, the cracks between medieval-oriented logic and modern-oriented logic. The medievalists have a heavy investment in reduction and, though they have heard of deduction, they think that deduction is a form of reduction, or vice versa, or in some cases they think that the word ‘deduction’ is the modern way of referring to reduction. The modernists have no interest in reduction, i.e. in the process of transforming one argument into another having exactly the same number of premises. Modern logicians, like Aristotle, are concerned with deducing a single proposition from a set of propositions. Some focus on deducing a single proposition from the null set—something difficult to relate to reduction.
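Editorial addition: the contrast just described can be displayed compactly. The following LaTeX rendering is a sketch keyed to the schematic in the text (P, Q premises, R conclusion, with the tilde written as negation); it is not part of the original review.

\[
\text{Indirect deduction:}\quad P,\; Q,\; \neg R \vdash \bot \;\;\Longrightarrow\;\; P,\; Q \vdash R
\]
\[
\text{Indirect reduction:}\quad P,\; Q \,/\, R \;\;\text{becomes}\;\; P,\; \neg R \,/\, \neg Q \quad\text{or}\quad \neg R,\; Q \,/\, \neg P
\]

The first pattern requires the reasoner to assume the negation of R; the second merely rearranges premises and conclusion of an existing argument, which is the sense in which it requires no new assumption.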
Kant argued that Euclidean geometry is synthesized on the basis of an a priori intuition of space. This proposal inspired much behavioral research probing whether spatial navigation in humans and animals conforms to the predictions of Euclidean geometry. However, Euclidean geometry also includes concepts that transcend the perceptible, such as objects that are infinitely small or infinitely large, or statements of necessity and impossibility. We tested the hypothesis that certain aspects of nonperceptible Euclidean geometry map onto intuitions of space that are present in all humans, even in the absence of formal mathematical education. Our tests probed intuitions of points, lines, and surfaces in participants from an indigene group in the Amazon, the Mundurucu, as well as adults and age-matched child controls from the United States and France and younger US children without education in geometry. The responses of Mundurucu adults and children converged with those of mathematically educated adults and children and revealed an intuitive understanding of essential properties of Euclidean geometry. For instance, on a surface described to them as perfectly planar, the Mundurucu's estimations of the internal angles of triangles added up to ∼180 degrees, and when asked explicitly, they stated that there exists a single parallel line to any given line through a given point. These intuitions were also partially in place in the group of younger US participants. We conclude that, during childhood, humans develop geometrical intuitions that spontaneously accord with the principles of Euclidean geometry, even in the absence of training in mathematics.
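For reference only (an editorial gloss, not part of the abstract), the two Euclidean properties probed here are, in textbook form, the planar triangle angle sum

\[
\alpha + \beta + \gamma = 180^{\circ}
\]

and Playfair's version of the parallel postulate: for any line \( \ell \) and any point \( P \) not on \( \ell \), there is exactly one line through \( P \) parallel to \( \ell \).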
John Corcoran and George Boger. Aristotelian logic and Euclidean geometry. Bulletin of Symbolic Logic 20 (2014) 131. By an Aristotelian logic we mean any system of direct and indirect deductions, chains of reasoning linking conclusions to premises—complete syllogisms, to use Aristotle’s phrase—1) intended to show that their conclusions follow logically from their respective premises and 2) resembling those in Aristotle’s Prior Analytics. Such systems presuppose the existence of cases where it is not obvious that the conclusion follows from the premises: there must be something deductions can show. Corcoran calls a proposition that follows from given premises a hidden consequence of those premises if it is not obvious that the proposition follows from those premises. By a Euclidean geometry we mean an extended discourse beginning with basic premises—axioms, postulates, definitions—1) treating a universe of geometrical figures and 2) resembling Euclid’s Elements. There were Euclidean geometries before Euclid (fl. 300 BCE), even before Aristotle (384–322 BCE). Bochenski, Lukasiewicz, Patzig and others never knew this, or if they did, they found it inconvenient to mention. Euclid shows no awareness of Aristotle. It is obvious today—as it should have been obvious in Euclid’s time, if anyone knew both—that Aristotle’s logic was insufficient for Euclid’s geometry: few if any geometrical theorems can be deduced from Euclid’s premises by means of Aristotle’s deductions. Aristotle’s writings don’t say whether his logic is sufficient for Euclidean geometry. But there is not even one fully-presented example. However, Aristotle’s writings do make clear that he endorsed the goal of a sufficient system. Nevertheless, incredible as this is today, many logicians after Aristotle claimed that Aristotelian logics are sufficient for Euclidean geometries. This paper reviews and analyses such claims by Mill, Boole, De Morgan, Russell, Poincaré, and others. It also examines early contrary statements by Hintikka, Mueller, Smith, and others. Special attention is given to the argumentations pro or con and especially to their logical, epistemic, and ontological presuppositions. What methodology is necessary or sufficient to show that a given logic is adequate or inadequate to serve as the underlying logic of a given science?
In his doctoral dissertation On the Principle of Sufficient Reason, Arthur Schopenhauer outlines a critique of Euclidean geometry on the basis of the changing nature of mathematics, and hence of demonstration, as a result of Kantian idealism. According to Schopenhauer, Euclid treats geometry synthetically, proceeding from the simple to the complex, from the known to the unknown, “synthesizing” later proofs on the basis of earlier ones. Such a method, although proving the case logically, nevertheless fails to attain the raison d’être of the entity. In order to obtain this, a separate method is required, which Schopenhauer refers to as “analysis,” thus echoing a method already in practice among the early Greek geometers, with however some significant differences. In this essay, I discuss Schopenhauer’s criticism of synthesis in Euclid’s Elements, and the nature and relevance of his own method of analysis.
In this work, Einstein’s view of geometry as physical geometry is taken into account in the analysis of diverse issues related to the notions of inertial motion and inertial reference frame. Einstein’s physical geometry enables a non-conventional view on Euclidean geometry (as the geometry associated with inertial motion and inertial reference frames) and on uniform time. Also, by taking into account the implications of the view of geometry as physical geometry, a critical reassessment is presented of the so-called boostability assumption (implicit, according to Einstein, in the formulation of the theory) and also of ‘alternative’ derivations of the Lorentz transformations that do not take into account the so-called ‘light postulate’. Finally, the issue of the possible conventionality of the one-way speed of light, or, what is the same, the conventionality of distant simultaneity (within the same inertial reference frame), is addressed. It turns out that it is possible to see the (possible) conventionality of distant simultaneity as a case of conventionality of geometry (in Einstein’s reinterpretation of Poincaré’s views). By taking into account synchronization procedures that do not make reference to light propagation (which is necessary in the derivation of the Lorentz transformations without the ‘light postulate’), it can be shown that the synchronization of distant clocks does not need any conventional element. This implies that the whole of chronogeometry (and because of this the physical part of the theory) does not have any conventional element in it, and it is a physical chronogeometry.
In this paper I will offer a novel understanding of a priori knowledge. My claim is that the sharp distinction that is usually made between a priori and a posteriori knowledge is groundless. It will be argued that a plausible understanding of a priori and a posteriori knowledge has to acknowledge that they are in a constant bootstrapping relationship. It is also crucial that we distinguish between a priori propositions that hold in the actual world and merely possible, non-actual a priori propositions, as we will see when considering cases like Euclidean geometry. Furthermore, contrary to what Kripke seems to suggest, a priori knowledge is intimately connected with metaphysical modality, indeed, grounded in it. The task of a priori reasoning, according to this account, is to delimit the space of metaphysically possible worlds in order for us to be able to determine what is actual.
Throughout history, almost all mathematicians, physicists and philosophers have been of the opinion that space and time are infinitely divisible. That is, it is usually believed that space and time do not consist of atoms, but that any piece of space and time of non-zero size, however small, can itself be divided into still smaller parts. This assumption is included in geometry, as in Euclid, and also in the Euclidean and non-Euclidean geometries used in modern physics. Of the few who have denied that space and time are infinitely divisible, the most notable are the ancient atomists, and Berkeley and Hume. All of these assert not only that space and time might be atomic, but that they must be. Infinite divisibility is, they say, impossible on purely conceptual grounds.
While I was working on some basic physical phenomena, I discovered some geometric relations that are also of interest to mathematics. In this work, I applied the rules I have proven to the P=NP? problem, by way of the impossibility of perpendicularity in the universe. It also brings out extremely interesting results, for example concerning the imaginary numbers, which are currently known as real numbers. It also seems that Euclidean geometry is impossible; the actual geometry is Riemannian geometry, and the complex numbers are real.
We examine some of Connes’ criticisms of Robinson’s infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes’ own earlier work in functional analysis. Connes described the hyperreals as both a “virtual theory” and a “chimera”, yet acknowledged that his argument relies on the transfer principle. We analyze Connes’ “dart-throwing” thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being “virtual” if it is not definable in a suitable model of ZFC. If so, Connes’ claim that a theory of the hyperreals is “virtual” is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren’t definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes’ criticism of virtuality. We analyze the philosophical underpinnings of Connes’ argument based on Gödel’s incompleteness theorem, and detect an apparent circularity in Connes’ logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes’ magnum opus) and the Hahn–Banach theorem, in Connes’ own framework. We also note an inaccuracy in Machover’s critique of infinitesimal-based pedagogy.
This paper argues that Frege's notoriously long commitment to Kant's thesis that Euclidean geometry is synthetic _a priori_ is best explained by realizing that Frege uses ‘intuition’ in two senses. Frege sometimes adopts the usage presented in Hermann Helmholtz's sign theory of perception. However, when using ‘intuition’ to denote the source of geometric knowledge, he is appealing to Hermann Cohen's use of Kantian terminology. We will see that Cohen reinterpreted Kantian notions, stripping them of any psychological connotation. Cohen's defense of his modified Kantian thesis on the unique status of the Euclidean axioms presents Frege's own views in a much more favorable light.
Geometry, etymologically the “science of measuring the Earth”, is a mathematical formalization of space. Just as formal concepts of number may be rooted in an evolutionarily ancient system for perceiving numerical quantity, the fathers of geometry may have been inspired by their perception of space. Is the spatial content of formal Euclidean geometry universally present in the way humans perceive space, or is Euclidean geometry a mental construction, specific to those who have received appropriate instruction? The spatial content of the formal theories of geometry may depart from spatial perception for two reasons: first, because in geometry, only some of the features of spatial figures are theoretically relevant; and second, because some geometric concepts go beyond any possible perceptual experience. Focusing in turn on these two aspects of geometry, we will present several lines of research on US adults and children from the age of three years, and participants from an Amazonian culture, the Mundurucu. Almost all the aspects of geometry tested proved to be shared between these two cultures. Nevertheless, some aspects involve a process of mental construction where explicit instruction seems to play a role in the US, but that can still take place in the absence of instruction in geometry.
REVIEW OF: Automated Development of Fundamental Mathematical Theories by Art Quaife. (1992: Kluwer Academic Publishers) 271pp. Using the theorem prover OTTER, Art Quaife has proved four hundred theorems of von Neumann-Bernays-Gödel set theory; twelve hundred theorems and definitions of elementary number theory; dozens of Euclidean geometry theorems; and Gödel's incompleteness theorems. It is an impressive achievement. To gauge its significance and to see what prospects it offers, this review looks closely at the book and the proofs it presents.
Does geometry constitute a core set of intuitions present in all humans, regardless of their language or schooling? We used two non-verbal tests to probe the conceptual primitives of geometry in the Munduruku, an isolated Amazonian indigene group. Our results provide evidence for geometrical intuitions in the absence of schooling, experience with graphic symbols or maps, or a rich language of geometrical terms.
This paper examines explanations that turn on non-local geometrical facts about the space of possible configurations a system can occupy. I argue that it makes sense to contrast such explanations from "geometry of motion" with causal explanations. I also explore how my analysis of these explanations cuts across the distinction between kinematics and dynamics.
Abstract. Let REL(O*E) be the relation algebra of binary relations defined on the Boolean algebra O*E of regular open regions of the Euclidean plane E. The aim of this paper is to prove that the canonical contact relation C of O*E generates a subalgebra REL(O*E, C) of REL(O*E) that has infinitely many elements. More precisely, REL(O*E, C) contains an infinite family {SPPn, n ≥ 1} of relations generated by the relation SPP (Separable Proper Part). This relation can be used to define a point-free concept of connectedness that, for the regular open regions of E, coincides with the standard topological notion of connectedness, i.e., a region of the plane E is connected in the sense of topology if and only if it has no separable proper part. Moreover, it is shown that the contact relation algebra REL(O*E, C) and the relation algebra REL(O*E, NTPP), generated by the non-tangential proper parthood relation NTPP, coincide. This entails that the allegedly purely topological notion of connectedness can be defined in mereological terms.
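As an editorial gloss on the definition summarized above, and on the assumption that SPP(y, x) is read "y is a separable proper part of x", the point-free notion of connectedness can be displayed as

\[
\mathrm{Conn}(x) \;\overset{\mathrm{df}}{\Longleftrightarrow}\; \neg \exists y\; \mathrm{SPP}(y, x),
\]

so that a regular open region of the plane is connected in the topological sense exactly when it has no separable proper part, as the abstract states.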
This article analyzes the value of geometric models for understanding matter, with the examples of the Platonic model for the primary four elements (fire, air, water, and earth) and the models of carbon atomic structures in the new science of crystallography. How the geometry of these models is built in order to discover the properties of matter is explained: movement and stability for the primary elements, and hardness, softness and elasticity for the carbon atoms. These geometric models appear to have a double quality: firstly, they exhibit visually the scientific properties of matter, and secondly, they make it possible to visualize its whole nature. Geometrical models appear to be the expression of the mind in the understanding of physical matter.
In the brain, the relations between free neurons and conditioned ones establish the constraints for the informational neural processes. These constraints reflect the system-environment state, i.e. the dynamics of homeocognitive activities. The constraints allow us to define the cost function in the phase space of free neurons so as to trace the trajectories of the possible configurations at minimal cost while respecting the constraints imposed. Since the space of the free states is a manifold, or a non-orthogonal space, the minimum distance is not a straight line but a geodesic. The minimum condition is expressed by a set of ordinary differential equations (ODEs) that in general are not linear. In the brain there is no algorithm or physical field that regulates the computation, so we must consider an emergent process coming out of the neural collective behavior triggered by synaptic variability. We define neural computation as the study of the classes of trajectories on a manifold geometry defined under suitable constraints. The cost function supervises pseudo-equilibrium thermodynamic effects that manage the computational activities from beginning to end and realizes an optimal control through constraints and geodesics. The task of this work is to establish a connection between the geometry of neural computation and cost functions. To illustrate the essential mathematical aspects we will use as a toy model a Network Resistor with Adaptive Memory (memristors). The information geometry defined here is an analog computation; therefore it does not suffer the limits of Turing computation, and it seems to respond to the demand for greater biological plausibility. The model of brain optimal control proposed here can be a good foundation for implementing the concept of "intentionality", according to the suggestion of W. Freeman. Indeed, geodesics in the brain's state space can produce suitable behavior to realize wanted functions and invariants as neural expressions of cognitive intentions.
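Purely as an editorial reference point (not drawn from the paper itself): in the standard differential-geometric formulation, minimal-cost trajectories on a manifold are solutions of the geodesic equation

\[
\ddot{x}^{k} + \Gamma^{k}_{ij}\,\dot{x}^{i}\dot{x}^{j} = 0,
\]

a system of ordinary differential equations that is non-linear whenever the connection coefficients \( \Gamma^{k}_{ij} \) do not vanish, which is the sense in which minimum-cost paths on a curved state space are not straight lines.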
Evolution and geometry generate complexity in similar ways. Evolution drives natural selection while geometry may capture the logic of this selection and express it visually, in terms of specific generic properties representing some kind of advantage. Geometry is ideally suited for expressing the logic of evolutionary selection for symmetry, which is found in the shape curves of vein systems and other natural objects such as leaves, cell membranes, or tunnel systems built by ants. The topology and geometry of symmetry is controlled by numerical parameters, which act in analogy with a biological organism’s DNA. The introductory part of this paper reviews findings from experiments illustrating the critical role of two-dimensional (2D) design parameters, affine geometry and shape symmetry for visual or tactile shape sensation and perception-based decision making in populations of experts and non-experts. It will be shown that 2D fractal symmetry, referred to herein as the “symmetry of things in a thing”, results from principles very similar to those of affine projection. Results from experiments on aesthetic and visual preference judgments in response to 2D fractal trees with varying degrees of asymmetry are presented. In a first experiment (psychophysical scaling procedure), non-expert observers had to rate (on a scale from 0 to 10) the perceived beauty of a random series of 2D fractal trees with varying degrees of fractal symmetry. In a second experiment (two-alternative forced choice procedure), they had to express their preference for one of two shapes from the series. The shape pairs were presented successively in random order. Results show that the smallest possible fractal deviation from the “symmetry of things in a thing” significantly reduces the perceived attractiveness of such shapes. The potential of future studies where different levels of complexity of fractal patterns are weighed against different degrees of symmetry is pointed out in the conclusion.
In standard probability theory, probability zero is not the same as impossibility. But many have suggested that only impossible events should have probability zero. This can be arranged if we allow infinitesimal probabilities, but infinitesimals do not solve all of the problems. We will see that regular probabilities are not invariant over rigid transformations, even for simple, bounded, countable, constructive, and disjoint sets. Hence, regular chances cannot be determined by space-time invariant physical laws, and regular credences cannot satisfy seemingly reasonable symmetry principles. Moreover, the examples here are immune to the objections against Williamson’s infinite coin flips.
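For orientation (an editorial addition, not part of the abstract), the regularity principle at issue is the requirement that only the impossible event receive probability zero:

\[
P(A) = 0 \;\Longleftrightarrow\; A = \varnothing ,
\]

equivalently, every non-empty (possible) event must receive strictly positive, possibly infinitesimal, probability.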
Berkeley in his Introduction to the Principles of Human Knowledge uses geometrical examples to illustrate a way of generating “universal ideas,” which allegedly account for the existence of general terms. In doing proofs we might, for example, selectively attend to the triangular shape of a diagram. Presumably what we prove using just that property applies to all triangles. I contend, rather, that given Berkeley’s view of extension, no Euclidean triangles exist to attend to. Rather, proof, as Berkeley would normally assume, requires idealizing diagrams, treating them as if they obeyed Euclidean constraints. This convention solves the problem of representative generalization. (Richard J. Brook, “Berkeley and Proof in Geometry,” Volume 51, Issue 3. DOI: https://doi.org/10.1017/S0012217312000686.)
Let us start with some general definitions of the concept of complexity. We take a complex system to be one composed of a large number of parts, and whose properties are not fully explained by an understanding of its component parts. Studies of complex systems recognized the importance of “wholeness”, defined as problems of organization (and of regulation), phenomena not resolvable into local events, dynamic interactions in the difference of behaviour of parts when isolated or in higher configuration, etc.; in short, systems of various orders (or levels) not understandable by investigation of their respective parts in isolation. In a complex system it is essential to distinguish between ‘global’ and ‘local’ properties. Theoretical physicists in the last two decades have discovered that the collective behaviour of a macro-system, i.e. a system composed of many objects, does not change qualitatively when the behaviour of single components is modified slightly. Conversely, it has also been found that the behaviour of single components does change when the overall behaviour of the system is modified. There are many universality classes which describe the collective behaviour of the system, and each class has its own characteristics; the universality classes do not change when we perturb the system. The most interesting and rewarding work consists in finding these universality classes and in spelling out their properties. This conception has been followed in studies done in the last twenty years on second-order phase transitions. The objective, which has been mostly achieved, was to classify all possible types of phase transitions in different universality classes and to compute the parameters that control the behaviour of the system near the transition (or critical or bifurcation) point as a function of the universality class. This point of view is not very different from the one expressed by Thom in the introduction of Structural Stability and Morphogenesis (1975). It differs from Thom’s program because there is no a priori idea of the mathematical framework which should be used. Indeed Thom considers only a restricted class of models (ordinary differential equations in low-dimensional spaces) while we do not have any prejudice regarding which models should be accepted. One of the most interesting and surprising results obtained by studying complex systems is the possibility of classifying the configurations of the system taxonomically. It is well known that a well-founded taxonomy is possible only if the objects we want to classify have some unique properties, i.e. species may be introduced in an objective way only if it is impossible to go continuously from one species to another; in a more mathematical language, we say that objects must have the property of ultrametricity (the defining inequality is displayed below). More precisely, it was discovered that there are conditions under which a class of complex systems may only exist in configurations that have the ultrametricity property and consequently they can be classified in a hierarchical way. Indeed, it has been found that precisely this ultrametricity property is shared by the near-optimal solutions of many optimization problems of complex functions, i.e. corrugated landscapes in Kauffman’s language. These results are derived from the study of spin glass models, but they have wider implications. It is possible that the kind of structures that arise in these cases is present in many other apparently unrelated problems.
Before going on with our considerations, we have to keep in mind two main complementary ideas about complexity. (i) According to the prevalent and usual point of view, the essence of complex systems lies in the emergence of complex structures from the non-linear interaction of many simple elements that obey simple rules. Typically, these rules consist of 0–1 alternatives selected in response to the input received, as in many prototypes like cellular automata, Boolean networks, spin systems, etc. Quite intricate patterns and structures can occur in such systems. However, what can also be said is that these are toy systems, and the systems occurring in reality rather consist of elements that individually are quite complex themselves. (ii) This brings in a new aspect that seems essential and indispensable to the emergence and functioning of complex systems, namely the coordination of individual agents or elements that themselves are complex at their own scale of operation. This coordination dramatically reduces the degrees of freedom of the participating agents. Even the constituents of molecules, i.e. the atoms, are rather complicated conglomerations of subatomic particles, perhaps ultimately excitations of patterns of superstrings. Genes, the elementary biochemical coding units, are very complex macromolecular strings, as are the metabolic units, the proteins. Neurons, the basic elements of cognitive networks, themselves are cells. In these and in other complex systems, it is an important feature that the potential complexity of the behaviour of the individual agents gets dramatically simplified through the global interactions within the system. The individual degrees of freedom are drastically reduced, or, in a more formal terminology, the factual space of the system is much smaller than the product of the state spaces of the individual elements. That is one key aspect. The other is that on this basis, that is, utilizing the coordination between the activities of its members, the system then becomes able to develop and express a coherent structure at a higher level, that is, an emergent behaviour (and emergent properties) that transcends what each element is individually capable of.
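The ultrametricity property invoked above has a standard formulation (added here for clarity; it is not spelled out in the text): a distance d on the space of configurations is an ultrametric when it satisfies the strengthened triangle inequality

\[
d(x, z) \;\le\; \max\{\, d(x, y),\; d(y, z) \,\} ,
\]

which forces every triangle of configurations to be isosceles with its two longest sides equal, and it is this property that makes a hierarchical, taxonomic classification of configurations possible.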
When national borders in the modern sense first began to be established in early modern Europe, non-contiguous and perforated nations were a commonplace. According to the conception of the shapes of nations that is currently preferred, however, nations must conform to the topological model of circularity; their borders must guarantee contiguity and simple connectedness, and such borders must as far as possible conform to existing topographical features on the ground. The striving to conform to this model can be seen at work today in Quebec and in Ireland; it underpins much of the rhetoric of the P.L.O., and was certainly to some degree involved as a motivating factor in much of the ethnic cleansing which took place in Bosnia in recent times. The question to be addressed in what follows is: to what extent could inter-group disputes be more peacefully resolved, and ethnic cleansing avoided, if political leaders, diplomats and others involved in the resolution of such disputes could be brought to accept weaker geometrical constraints on the shapes of nations? A number of associated questions then present themselves: What sorts of administrative and logistical problems have been encountered by existing non-contiguous nations and by perforated nations, and by other nations deviating in different ways from the received geometrical ideal? To what degree is the desire for contiguity and simple connectedness a rational desire, and to what degree does it rest on species of political rhetoric which might be countered by, for example, philosophical argument? These and a series of related questions will form the subject-matter of the present essay.
The content of the manuscript represents a bold system of ideas; it goes beyond the boundaries of all existing knowledge, but the method of reasoning and logic is also very strict and scientific. The purpose of the manuscript is to unify the natural categories (natural philosophy, natural geometry, quantum mechanics, astronomy, …), and to open a new direction for most other sciences. Abstract of the manuscript: About Philosophy: • Proved the existence of time and non-dilation. • Proved that matter is always motionless in space. • Concluded that space is energy. • … About Mathematics: • Solved the squaring-the-circle problem (a millennium problem). • Used the equation to calculate the speed of light as 471.000.000 m/s. • … About Physics: • Explained the nature of gravity. • Explained dark energy and the spin value of the nucleon. • Explained matter and antimatter. • …
The concept of time is examined using the second law of thermodynamics, which was recently formulated as an equation of motion. According to the statistical notion of increasing entropy, flows of energy diminish differences between the energy densities that form space. The flow of energy is identified with the flow of time. The non-Euclidean energy landscape, i.e. the curved space–time, is in evolution when energy is flowing down along gradients and levelling the density differences. The flows along the steepest descents, i.e. geodesics, are obtained from the principle of least action for mechanics, electrodynamics and quantum mechanics. The arrow of time, associated with the expansion of the Universe, is identified with the grand dispersal of energy when high energy densities transform by various mechanisms to lower densities in energy and eventually to ever-diluting electromagnetic radiation. Likewise, time in a quantum system takes an increment forwards in the detection-associated dissipative transformation when the stationary-state system begins to evolve, pictured as the wave-function collapse. The energy dispersal is understood to underlie causality, so that an energy gradient is a cause and the resulting energy flow is an effect. The account of causality in terms of the concepts of physics does not imply determinism; on the contrary, the evolution of space–time as a causal chain of events is non-deterministic.
I argue first that attention is a (maybe the) paradigmatic case of an object-directed, non-propositional intentional mental episode. In addition, attention cannot be reduced to any other (propositional or non-propositional) mental episodes. Yet, second, attention is not a non-propositional mental attitude. It might appear puzzling how one could hold both of these claims. I show how to combine them, and how that combination shows how propositionality and non-propositionality can co-exist in a mental life. The crucial move is one away from an atomistic, building-block picture to a more holistic, structural picture.
Philippe Huneman has recently questioned the widespread application of mechanistic models of scientific explanation based on the existence of structural explanations, i.e. explanations that account for the phenomenon to be explained in virtue of the mathematical properties of the system where the phenomenon obtains, rather than in terms of the mechanisms that causally produce the phenomenon. Structural explanations are very diverse, including cases like explanations in terms of bowtie structures, in terms of the topological properties of the system, or in terms of equilibrium. The role of mathematics in bowtie-structured systems and in topologically constrained systems has recently been examined in different papers. However, the specific role that mathematical properties play in equilibrium explanations requires further examination, as different authors defend different interpretations, some of them closer to the new-mechanistic approach than to the structural model advocated by Huneman. In this paper, we fill this gap by investigating the explanatory role that mathematics plays in Blaser and Kirschner’s nested equilibrium model of the stability of persistent long-term human-microbe associations. We argue that their model is explanatory because: (i) it provides a mathematical structure in the form of a set of differential equations that together satisfy an ESS (evolutionarily stable strategy); (ii) the nested nature of the ESSs makes the explanation of host-microbe persistent associations robust to any perturbation; (iii) this is so because the properties of the ESS directly mirror the properties of the biological system in a non-causal way. The combination of these three theses makes equilibrium explanations look more similar to structural explanations than to causal-mechanistic explanations.
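Editorial note for readers unfamiliar with the abbreviation: in the standard game-theoretic formulation (not quoted from the paper), a strategy S is an evolutionarily stable strategy when, for every alternative strategy T distinct from S, with E(X, Y) the payoff to X against Y,

\[
E(S, S) > E(T, S) \quad\text{or}\quad \bigl( E(S, S) = E(T, S) \ \text{and}\ E(S, T) > E(T, T) \bigr),
\]

so that no rare mutant strategy can invade a population playing S.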
This article presents modal versions of resource-conscious logics. We concentrate on extensions of variants of linear logic with one minimal non-normal modality. In earlier work, where we investigated agency in multi-agent systems, we have shown that the results scale up to logics with multiple non-minimal modalities. Here, we start with the language of propositional intuitionistic linear logic without the additive disjunction, to which we add a modality. We provide an interpretation of this language on a class of Kripke resource models extended with a neighbourhood function: modal Kripke resource models. We propose a Hilbert-style axiomatisation and a Gentzen-style sequent calculus. We show that the proof theories are sound and complete with respect to the class of modal Kripke resource models. We show that the sequent calculus admits cut elimination and that proof-search is in PSPACE. We then show how to extend the results when non-commutative connectives are added to the language. Finally, we put the l..
This work contributes to the theory of judgement aggregation by discussing a number of significant non-classical logics. After adapting the standard framework of judgement aggregation to cope with non-classical logics, we discuss in particular results for the case of Intuitionistic Logic, the Lambek calculus, Linear Logic and Relevant Logics. The motivation for studying judgement aggregation in non-classical logics is that they offer a number of modelling choices to represent agents’ reasoning in aggregation problems. By studying judgement aggregation in logics that are weaker than classical logic, we investigate whether some well-known impossibility results, that were tailored for classical logic, still apply to those weak systems.
The human attempts to access, measure and organize physical phenomena have led to a manifold construction of mathematical and physical spaces. We will survey the evolution of geometries from Euclid to the Algebraic Geometry of the 20th century. The role of Persian/Arabic Algebra in this transition and its Western symbolic development is emphasized. In this connection, we will also discuss changes in the ontological attitudes toward mathematics and its applications. Historically, the encounter of geometric and algebraic perspectives enriched the mathematical practices and their foundations. Yet, the collapse of Euclidean certitudes, of over 2300 years, and the crisis in the mathematical analysis of the 19th century, led to the exclusion of “geometric judgments” from the foundations of Mathematics. After the success and the limits of the logico-formal analysis, it is necessary to broaden our foundational tools and re-examine the interactions with natural sciences. In particular, the way the geometric and algebraic approaches organize knowledge is analyzed as a cross-disciplinary and cross-cultural issue and will be examined in Mathematical Physics and Biology. We finally discuss how the current notions of mathematical (phase) “space” should be revisited for the purposes of life sciences.
It is widely accepted that the ethical supervenes on the natural, where this is roughly the claim that it is impossible for two circumstances to be identical in all natural respects, but different in their ethical respects. This chapter refines and defends the traditional thought that this fact poses a significant challenge to ethical non-naturalism, a view on which ethical properties are fundamentally different in kind from natural properties. The challenge can be encapsulated in three core claims which the chapter defends: that a defensible non-naturalism is committed to the supervenience of the ethical, that this commits the non-naturalist to a brute necessary connection between properties of distinct kinds, and that commitment to such brute connections counts against the non-naturalist’s view. Each of these claims has recently been challenged. Against Nicholas Sturgeon’s recent doubts about the dialectical force of supervenience, this chapter defends a supervenience thesis as deserving to be common ground among ethical realists. It is then argued that attempts to explain supervenience on behalf of the non-naturalist either fail as explanations, generate near-identical explanatory burdens elsewhere, or appeal to commitments that are inconsistent with core motivations for non-naturalism. The chapter concludes that, suitably refined, the traditional argument against non-naturalism from supervenience is alive and well.
According to non-conceptualist interpretations, Kant held that the application of concepts is not necessary for perceptual experience. Some have motivated non-conceptualism by noting the affinities between Kant's account of perception and contemporary relational theories of perception. In this paper I argue (i) that non-conceptualism cannot provide an account of the Transcendental Deduction and thus ought to be rejected; and (ii) that this has no bearing on the issue of whether Kant endorsed a relational account of perceptual experience.
This paper focuses on a particular kind of non-naturalism: moral non-naturalism. Our primary aim is to argue that the moral non-naturalist places herself in an invidious position if she simply accepts that the non-natural moral facts that she posits are not explanatory. This has, hitherto, been the route that moral non-naturalists have taken. They have attempted to make their position more palatable by pointing out that there is reason to be suspicious of the explanatory criterion of ontological commitment. That is because other perfectly respectable views fall foul of that criterion, most notably: mathematical realism. Since we don’t want to rule out mathematical realism, we should jettison the explanatory criterion of ontological commitment. Against this manoeuvre, we argue that many contemporary mathematical realists accept the explanatory criterion and provide an account of how mathematical objects are indeed indispensable to our best explanations. Thus, the moral non-naturalist will be left in an awkward dialectical position if she accepts that non-natural moral properties play no such explanatory role.
Multialgebras (or hyperalgebras or non-deterministic algebras) have been much studied in mathematics and in computer science. In 2016 Carnielli and Coniglio introduced a class of multialgebras called swap structures, as a semantic framework for dealing with several Logics of Formal Inconsistency (or LFIs) that cannot be semantically characterized by a single finite matrix. In particular, these LFIs are not algebraizable by the standard tools of abstract algebraic logic. In this paper, the first steps towards a theory of non-deterministic algebraization of logics by swap structures are given. Specifically, a formal study of swap structures for LFIs is developed, by adapting concepts of universal algebra to multialgebras in a suitable way. A decomposition theorem similar to Birkhoff's representation theorem is obtained for each class of swap structures. Moreover, when applied to the 3-valued algebraizable logics J3 and Ciore, their classes of algebraic models are retrieved, and the swap structures semantics become twist structures semantics (as independently introduced by M. Fidel and D. Vakarelov). This fact, together with the existence of a functor from the category of Boolean algebras to the category of swap structures for each LFI (which is closely connected with Kalman's functor), suggests that swap structures can be seen as non-deterministic twist structures. This opens new avenues for dealing with non-algebraizable logics by the more general methodology of multialgebraic semantics.
A driving force behind much of the literature on the non-identity problem is the widely shared intuition that actions or policies that change who comes into existence don't, as a result, lose their morally problematic features. We hypothesize that this intuition isn’t entirely shared by the general public, which might have widespread implications concerning how best to motivate public support for large-scale, identity-affecting policies like those involved in climate change mitigation. To test our hypothesis, we ran a behavioural economic experiment, a version of the well-known dictator game, designed to mimic the public's morally loaded behaviour in identity-affecting choice problems. As predicted, we found that the public does seem to behave more selfishly when making identity-affecting choices. We further hypothesised that one possible mechanism involved in this change is the notion of harm that plays a role in the public’s normatively loaded decision making. So, during our study, we also solicited subjects’ attitudes about harm, in particular about whether the “dictators” had done harm through their choices. The data suggest that substantial portions of the population each employ distinct notions of harm in their normative thinking, which raises some puzzling features about the public’s normative thinking that call out for further empirical examination.
Moral non-cognitivists hope to explain the nature of moral agreement and disagreement as agreement and disagreement in non-cognitive attitudes. In doing so, they take on the task of identifying the relevant attitudes, distinguishing the non-cognitive attitudes corresponding to judgements of moral wrongness, for example, from attitudes involved in aesthetic disapproval or the sports fan’s disapproval of her team’s performance. We begin this paper by showing that there is a simple recipe for generating apparent counterexamples to any informative specification of the moral attitudes. This may appear to be a lethal objection to non-cognitivism, but a similar recipe challenges attempts by non-cognitivism’s competitors to specify the conditions underwriting the contrast between genuine and merely apparent moral disagreement. Because of its generality, this specification problem requires a systematic response, which, we argue, is most easily available for the non-cognitivist. Building on premisses congenial to the non-cognitivist tradition, we make the following claims: (1) In paradigmatic cases, wrongness-judgements constitute a certain complex but functionally unified state, and paradigmatic wrongness-judgements form a functional kind, preserved by homeostatic mechanisms. (2) Because of the practical function of such judgements, we should expect judges’ intuitive understanding of agreement and disagreement to be accommodating, treating states departing from the paradigm in various ways as wrongness-judgements. (3) This explains the intuitive judgements required by the counterexample-generating recipe, and more generally why various kinds of amoralists are seen as making genuine wrongness-judgements.
Teleological theories of reason and value, on which all reasons are fundamentally reasons to realize states of affairs that are in some respect best, cannot account for the intuition that victims in non-identity cases have been wronged. Many philosophers, however, reject such theories in favor of alternatives that recognize fundamentally non-teleological reasons, second-personal reasons that reflect a moral significance each person has that is not grounded in the teleologist’s appeal to outcomes. Such deontological accounts appear to be better positioned to identify the wrong committed against non-identity victims because a person wrongs another on such accounts if she violates his second-personal claims; overall benefit to victims presents no obstacle to the identification of second-personal wrongdoing. Derek Parfit argues that non-identity is a problem for these deontological theories as well because the alleged victims are properly understood as consenting to the action in question, thereby waiving any such second-personal claim. But his arguments misrepresent the role of consent on such theories by articulating it through appeal to the very teleological theory of reasons that their advocates dismiss as inadequate. Properly understood, Parfit’s appeal to consent understood as retroactive endorsement only provides the answer on such deontological accounts to the question of whether, given that the non-identity victim is second-personally wronged, he is nonetheless better off existing. Indeed, it becomes clear that it is teleological theories for which non-identity poses a particular problem: they cannot, while their deontological counterparts can, account for the intuition that non-identity victims have been wronged.
According to Philip Pettit, we should understand republican liberty, freedom as ‘non-domination,’ as a ‘supreme political value.’ It is its commitment to freedom as non-domination, Pettit claims, that distinguishes republicanism from various forms of liberal egalitarianism, including the political liberalism of John Rawls. I explain that Rawlsian political liberalism is committed to a form of non-domination, namely, a ‘political’ conception, which is: (a) limited in its scope to the ‘basic structure of society,’ and (b) ‘freestanding’ in nature (that is, compatible with the ‘fact of reasonable pluralism’). I show that the political conception of non-domination is an integral part of political liberalism through an exploration of the kind of citizenship education that political liberalism mandates for all students. Such an education would impart to future citizens the skills and knowledge necessary for them to realize republican freedom vis-à-vis their political institutions, their workplaces, and, by means of an enforceable ‘right of exit,’ the various associations to which they might belong.
The goals of this paper are two-fold: I wish to clarify the Aristotelian conception of the law of non-contradiction as a metaphysical rather than a semantic or logical principle, and to defend the truth of the principle in this sense. First I will explain what it in fact means to say that the law of non-contradiction is a metaphysical principle. The core idea is that the law of non-contradiction is a general principle derived from how things are in the world. For example, there are certain constraints on what kinds of properties an object can have, and in particular, some of these properties are mutually exclusive. Given this characterisation, I will proceed to examine what kinds of challenges the law of non-contradiction faces; the main opponent here is Graham Priest. I will consider these challenges and conclude that they do not threaten the truth of the law of non-contradiction understood as a metaphysical principle.
In this chapter I seek to examine the credibility of Finnis’s basic stance on Aquinas: that while many neo-Thomists are meta-ethically naturalistic in their understanding of natural law theory (for example, Heinrich Rommen, Henry Veatch, Ralph McInerny, Russell Hittinger, Benedict Ashley and Anthony Lisska), Aquinas’s own meta-ethical framework avoids the “pitfall” of naturalism. On examination, the short of it is that I find Finnis’s account (while adroit) wanting in the interpretation stakes vis-à-vis other accounts of Aquinas’s meta-ethical foundationalism. I think that the neo-Thomists are basically right to argue that for Aquinas we cannot really understand objective truths about moral standards unless we derive them from our intellective knowledge of natural facts as given to us by the essential human nature that we have. While I find Finnis’s interpretative position on Aquinas wanting, I go on to argue that his own attachment to non-naturalism is justified and should not be jettisoned. Because I think non-naturalism important to the future tenability of a viable natural law ethics (an ethics that is both cognitive and objectivist), I argue that Finnis should, so to speak, “beef up” his “fundamental option” for non-naturalism and more fully avail himself of certain argumentative strategies available in its defense, argumentative strategies that are inspired by the analytical philosophy of G.E. Moore.
This chapter identifies a novel family of metaethical theories that are non-descriptive and that aim to explain the action-guiding qualities of normative thought and language. The general strategy is to consider different relations language might bear to a given content, where we locate descriptivity (or lack of it) in these relations, rather than locating it in a theory that begins with the expression of states of mind, or locating it in a special kind of content that is not way-things-might-be content. One such view is sketched, which posits two different content-fixing cognitive roles for bits of language. One role fixes a descriptive relation to content and another role fixes a non-descriptive relation to content. In addition to non-descriptivity and action guidance, the chapter briefly considers the appearance of mind-independent authoritative force, disagreement, and Frege–Geach concerns.
This chapter focuses on Daoist praxeology and language in order to build something of a moral realist position (the contours of which may differ from most western versions insofar as it need not commit to moral cognitivism) that hinges on the seemingly paradoxical notions of ineffable moral truths and non-transferable moral skill.
The global relation between logical empiricism and American pragmatism is one of the more difficult problems in the history of philosophy. In this paper I take a local perspective and concentrate on the details of the vicissitudes of a philosopher who played an important role in the encounter of logical empiricism and American pragmatism, namely, Ernest Nagel. In particular, I want to explore some aspects of Nagel’s changing attitude towards the then “new” logical-empiricist philosophy. In the beginning Nagel welcomed logical empiricism whole-heartedly. This early enthusiasm did not last. By the end of his philosophical career, the positive attitude towards logical empiricism that Nagel had shown in the 1930s had been replaced by a much more reserved one. Nagel’s growing dissatisfaction with the Carnapian version of logical empiricist philosophy was clearly expressed in his criticism of Carnap’s inductive logic and, more generally, in his last book, Teleology Revisited and Other Essays in the Philosophy and History of Science. There he harshly criticized Carnap’s philosophy of science in general as ahistorical and non-pragmatist. One of the distinctive features of Nagel’s philosophy of science is the emphasis that he put on the role of the history of science for the philosophy of science. Compelling evidence of this attitude is provided by his works on the history and philosophy of geometry and algebra. One may say that Carnap and Nagel represented opposed conceptions of how the profession of a philosopher of science could be understood: Carnap, as a “conceptual engineer”, was engaged in the task of inventing the conceptual tools for a better theoretical understanding of science, while Nagel is better considered a “public intellectual” engaged in the project of realizing a more rational and enlightened society.
One of the central debates in contemporary Kant scholarship concerns whether Kant endorses a “conceptualist” account of the nature of sensory experience. Understanding the debate is crucial for getting a full grasp of Kant's theory of mind, cognition, perception, and epistemology. This paper situates the debate in the context of Kant's broader theory of cognition and surveys some of the major arguments for conceptualist and non-conceptualist interpretations of his critical philosophy.
Ladyman and Ross (LR) argue that quantum objects are not individuals and use this idea to ground their metaphysical view, ontic structural realism, according to which relational structures are primary to things. LR acknowledge that there is a version of quantum theory, namely the Bohm theory (BT), according to which particles do have definite trajectories at all times. However, LR interpret the research by Brown et al. as implying that "raw stuff" or haecceities are needed for the individuality of particles in BT, and LR dismiss this as idle metaphysics. In this paper we note that Brown et al.'s research does not imply that haecceities are needed. Thus BT remains a genuine option for those who seek to understand quantum particles as individuals. However, we go on to discuss some problems with BT which led Bohm and Hiley to modify it. This modified version underlines that, due to features such as context-dependence and non-locality, Bohmian particles have a very limited autonomy in situations where quantum effects are non-negligible. So while BT restores the possibility of quantum individuals, it also underlines the primacy of the whole over the autonomy of the parts. The later sections of the paper also examine the Bohm theory in the general mathematical context of symplectic geometry. This provides yet another way of understanding the subtle, holistic and dynamic nature of Bohmian individuals. We finally briefly consider Bohm's other main line of research, the "implicate order", which is in some ways similar to LR's structural realism.
Are the special sciences autonomous from physics? Those who say they are need to explain how dependent special science properties could feature in irreducible causal explanations, but that’s no easy task. The demands of a broadly physicalist worldview require that such properties are not only dependent on the physical, but also physically realized. Realized properties are derivative, so it’s natural to suppose that they have derivative causal powers. Correspondingly, philosophical orthodoxy has it that if we want special science properties to bestow genuinely new causal powers, we must reject physical realization and embrace a form of emergentism, in which such properties arise from the physical by mysterious brute determination. In this paper, I argue that contrary to this orthodoxy, there are physically realized properties that bestow new causal powers in relation to their realizers. The key to my proposal is to reject causal-functional accounts of realization and embrace a broader account that allows for the realization of shapes and patterns. Unlike functional properties, such properties are defined by qualitative, non-causal specifications, so realizing them does not consist in bestowing causal powers. This, I argue, allows for causal novelty of the strongest kind. I argue that the molecular geometry of H2O -- a qualitative, multiply realizable property -- plays an irreducible role in explaining its dipole moment, and thereby bestows novel powers. On my proposal, special science properties can have the kind of causal novelty traditionally associated with strong emergence, without any of the mystery.
This paper is a survey of the supervenience challenge to non-naturalist moral realism. I formulate a version of the challenge, consider the most promising non-naturalist replies to it, and suggest that no fully effective reply has yet been given.
A number of authors have objected to the application of non-classical logic to problems in philosophy on the basis that these non-classical logics are usually characterised by a classical metatheory. In many cases the problem amounts to more than just a discrepancy; the very phenomena responsible for non-classicality occur in the field of semantics as much as they do elsewhere. The phenomena of higher order vagueness and the revenge liar are just two such examples. The aim of this paper is to show that a large class of non-classical logics are strong enough to formulate their own model theory in a corresponding non-classical set theory. Specifically, I show that adequate definitions of validity can be given for the propositional calculus in such a way that the metatheory proves, in the specified logic, that every theorem of the propositional fragment of that logic is validated. It is shown that in some cases it may fail to be a classical matter whether a given sentence is valid or not. One surprising conclusion for non-classical accounts of vagueness is drawn: there can be no axiomatic, and therefore precise, system which is determinately sound and complete.
Intuitively, the knowledge of one’s own intentional actions is different from the knowledge of actions of other sorts, including those of other people and unintentional actions of one's own. But how are we to understand this phenomenon? Does it pertain to all actions, under every description under which they are known? If so, then how is this possible? If not, then how should we think about cases that are exceptions to this principle? This paper is a critical survey of recent attempts to answer these questions. I consider views under three headings: "special source" views, which hold that the knowledge of one's intentional actions has a non-perceptual source; "special domain" views, which hold that some but not all aspects of one's intentional actions are known in a special way; and "special character" views, which hold that the knowledge of intentional actions is special not because of where it comes from, but because of some other respect in which it is different in kind from the knowledge of other things.