Does geometry constitute a core set of intuitions present in all humans, regardless of their language or schooling? We used two nonverbal tests to probe the conceptual primitives of geometry in the Munduruku, an isolated Amazonian indigene group. Our results provide evidence for geometrical intuitions in the absence of schooling, experience with graphic symbols or maps, or a rich language of geometrical terms.
Kant argued that Euclidean geometry is synthesized on the basis of an a priori intuition of space. This proposal inspired much behavioral research probing whether spatial navigation in humans and animals conforms to the predictions of Euclidean geometry. However, Euclidean geometry also includes concepts that transcend the perceptible, such as objects that are infinitely small or infinitely large, or statements of necessity and impossibility. We tested the hypothesis that certain aspects of nonperceptible Euclidean geometry map onto intuitions of space that are present in all humans, even in the absence of formal mathematical education. Our tests probed intuitions of points, lines, and surfaces in participants from an indigene group in the Amazon, the Mundurucu, as well as adult and age-matched child controls from the United States and France, and younger US children without education in geometry. The responses of Mundurucu adults and children converged with those of mathematically educated adults and children and revealed an intuitive understanding of essential properties of Euclidean geometry. For instance, on a surface described to them as perfectly planar, the Mundurucu's estimations of the internal angles of triangles added up to ∼180 degrees, and when asked explicitly, they stated that there exists one single parallel line to any given line through a given point. These intuitions were also partially in place in the group of younger US participants. We conclude that, during childhood, humans develop geometrical intuitions that spontaneously accord with the principles of Euclidean geometry, even in the absence of training in mathematics.
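The planarity claim in the abstract above rests on a basic Euclidean fact: on a flat surface, the interior angles of any triangle sum to 180 degrees. A minimal numerical sketch of that fact (illustrative only; the vertex coordinates and function names are mine, not the study's):

```python
import math

def interior_angles(a, b, c):
    """Interior angles, in degrees, of the triangle with vertices a, b, c."""
    def angle(p, q, r):
        # Angle at vertex p between edges p->q and p->r.
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return [angle(a, b, c), angle(b, a, c), angle(c, a, b)]

angles = interior_angles((0, 0), (4, 0), (1, 3))
print(sum(angles))  # ~180.0 on the Euclidean plane
```

On a curved surface such as a sphere the same sum would exceed 180 degrees, which is what makes the participants' estimates on a surface described as planar informative.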
Against Thomas Mormann's argument that differential topology does not support Carnap's conventionalism in geometry, we show their compatibility. However, Mormann's emphasis on the entanglement that characterizes topology and its associated metrics is not misplaced. It poses questions about the limits of empirical inquiry. For Carnap, to pose a question is to give a statement with the task of deciding its truth. Mormann's point forces us to introduce more clarity to what it means to specify the task that decides between competing hypotheses and in what way such a task may be, both in practice and/or in principle, impossible to carry out.
This paper argues that Frege's notoriously long commitment to Kant's thesis that Euclidean geometry is synthetic _a priori_ is best explained by realizing that Frege uses ‘intuition’ in two senses. Frege sometimes adopts the usage presented in Hermann Helmholtz's sign theory of perception. However, when using ‘intuition’ to denote the source of geometric knowledge, he is appealing to Hermann Cohen's use of Kantian terminology. We will see that Cohen reinterpreted Kantian notions, stripping them of any psychological connotation. Cohen's defense of his modified Kantian thesis on the unique status of the Euclidean axioms presents Frege's own views in a much more favorable light.
Geometry, etymologically the “science of measuring the Earth”, is a mathematical formalization of space. Just as formal concepts of number may be rooted in an evolutionarily ancient system for perceiving numerical quantity, the fathers of geometry may have been inspired by their perception of space. Is the spatial content of formal Euclidean geometry universally present in the way humans perceive space, or is Euclidean geometry a mental construction, specific to those who have received appropriate instruction? The spatial content of the formal theories of geometry may depart from spatial perception for two reasons: first, because in geometry only some of the features of spatial figures are theoretically relevant; and second, because some geometric concepts go beyond any possible perceptual experience. Focusing in turn on these two aspects of geometry, we will present several lines of research on US adults and children from the age of three years, and on participants from an Amazonian culture, the Mundurucu. Almost all the aspects of geometry tested proved to be shared between these two cultures. Nevertheless, some aspects involve a process of mental construction in which explicit instruction seems to play a role in the US, but which can still take place in the absence of instruction in geometry.
Explications of the reconstruction of Leibniz’s metaphysics that Deleuze undertakes in 'The Fold: Leibniz and the Baroque' focus predominantly on the role of the infinitesimal calculus developed by Leibniz. While not underestimating the importance of the infinitesimal calculus and the law of continuity as reflected in the calculus of infinite series to any understanding of Leibniz’s metaphysics and to Deleuze’s reconstruction of it in The Fold, what I propose to examine in this paper is the role played by other developments in mathematics that Deleuze draws upon, including those made by a number of Leibniz’s near contemporaries – the projective geometry that has its roots in the work of Desargues (1591–1661) and the ‘proto-topology’ that appears in the work of Dürer (1471–1528) – and a number of the subsequent developments in these fields of mathematics. Deleuze brings this elaborate conjunction of material together in order to set up a mathematical idealization of the system that he considers to be implicit in Leibniz’s work. The result is a thoroughly mathematical explication of the structure of Leibniz’s metaphysics. What is provided in this paper is an exposition of the very mathematical underpinnings of this Deleuzian account of the structure of Leibniz’s metaphysics, which, I maintain, subtends the entire text of The Fold.
John Corcoran and George Boger. Aristotelian logic and Euclidean geometry. Bulletin of Symbolic Logic. 20 (2014) 131. -/- By an Aristotelian logic we mean any system of direct and indirect deductions, chains of reasoning linking conclusions to premises—complete syllogisms, to use Aristotle’s phrase—1) intended to show that their conclusions follow logically from their respective premises and 2) resembling those in Aristotle’s Prior Analytics. Such systems presuppose the existence of cases where it is not obvious that the conclusion follows from the premises: there must be something deductions can show. Corcoran calls a proposition that follows from given premises a hidden consequence of those premises if it is not obvious that the proposition follows from those premises. By a Euclidean geometry we mean an extended discourse beginning with basic premises—axioms, postulates, definitions—1) treating a universe of geometrical figures and 2) resembling Euclid’s Elements. There were Euclidean geometries before Euclid (fl. 300 BCE), even before Aristotle (384–322 BCE). Bochenski, Lukasiewicz, Patzig, and others never knew this, or if they did they found it inconvenient to mention. Euclid shows no awareness of Aristotle. It is obvious today—as it should have been obvious in Euclid’s time, if anyone knew both—that Aristotle’s logic was insufficient for Euclid’s geometry: few if any geometrical theorems can be deduced from Euclid’s premises by means of Aristotle’s deductions. Aristotle’s writings don’t say whether his logic is sufficient for Euclidean geometry, and there is not even one fully-presented example. However, Aristotle’s writings do make clear that he endorsed the goal of a sufficient system. Nevertheless, incredible as this is today, many logicians after Aristotle claimed that Aristotelian logics are sufficient for Euclidean geometries. This paper reviews and analyses such claims by Mill, Boole, De Morgan, Russell, Poincaré, and others.
It also examines early contrary statements by Hintikka, Mueller, Smith, and others. Special attention is given to the argumentations pro or con and especially to their logical, epistemic, and ontological presuppositions. What methodology is necessary or sufficient to show that a given logic is adequate or inadequate to serve as the underlying logic of a given science?
After sketching the historical development of “emergence” and noting several recent problems relating to “emergent properties”, this essay proposes that properties may be either “emergent” or “mergent” and either “intrinsic” or “extrinsic”. These two distinctions define four basic types of change: stagnation, permanence, flux, and evolution. To illustrate how emergence can operate in a purely logical system, the Geometry of Logic is introduced. This new method of analyzing conceptual systems involves the mapping of logical relations onto geometrical figures, following either an analytic or a synthetic pattern (or both together). Evolution is portrayed as a form of discontinuous change characterized by emergent properties that take on an intrinsic quality with respect to the object(s) or proposition(s) involved. Causal leaps, not continuous development, characterize the evolution of human life in a developing foetus, of a thought out of certain brain states, of a new idea (or insight) out of ordinary thoughts, and of a great person out of a set of historical experiences. The tendency to assume that understanding evolutionary change requires a step-by-step explanation of the historical development that led to the appearance of a certain emergent property is thereby discredited.
This paper examines Hobbes’s criticisms of Robert Boyle’s air-pump experiments in light of Hobbes’s account in _De Corpore_ and _De Homine_ of the relationship of natural philosophy to geometry. I argue that Hobbes’s criticisms rely upon his understanding of what counts as “true physics.” Instead of seeing Hobbes as defending natural philosophy as “a causal enterprise … [that] as such, secured total and irrevocable assent,” I argue that, in his disagreement with Boyle, Hobbes relied upon his understanding of natural philosophy as a mixed mathematical science. In a mixed mathematical science one can mix facts from experience with causal principles borrowed from geometry. Hobbes’s harsh criticisms of Boyle’s philosophy, especially in the _Dialogus Physicus, sive De natura aeris_, should thus be understood as Hobbes advancing his view of the proper relationship of natural philosophy to geometry in terms of mixing principles from geometry with facts from experience. Understood in this light, Hobbes need not be taken to reject or diminish the importance of experiment/experience; nor should Hobbes’s criticisms in _Dialogus Physicus_ be understood as rejecting experimenting as ignoble and not befitting a philosopher. Instead, Hobbes’s viewpoint is that experiment/experience must be understood within its proper place – it establishes the ‘that’ for a mixed mathematical science explanation.
David Hyder. The Determinate World: Kant and Helmholtz on the Physical Meaning of Geometry. viii + 229 pp., bibl., index. Berlin/New York: Walter de Gruyter, 2009.
During language processing, humans form complex embedded representations from sequential inputs. Here, we ask whether a “geometrical language” with recursive embedding also underlies the human ability to encode sequences of spatial locations. We introduce a novel paradigm in which subjects are exposed to a sequence of spatial locations on an octagon and are asked to predict future locations. The sequences vary in complexity according to a well-defined language comprising elementary primitives and recursive rules. A detailed analysis of error patterns indicates that primitives of symmetry and rotation are spontaneously detected and used by adults, preschoolers, and adult members of an indigene group in the Amazon, the Munduruku, who have a restricted numerical and geometrical lexicon and limited access to schooling. Furthermore, subjects readily combine these geometrical primitives into hierarchically organized expressions. By evaluating a large set of such combinations, we obtained a first view of the language needed to account for the representation of visuospatial sequences in humans, and conclude that they encode visuospatial sequences by minimizing the complexity of the structured expressions that capture them.
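The rotation and symmetry primitives described above can be modeled as arithmetic on octagon vertex indices; a sketch under the assumption that the eight locations are numbered 0-7 around the octagon (the paper's exact encoding of primitives may differ):

```python
# Eight locations on the octagon, numbered 0..7 counterclockwise.
def rotate(pos, step):
    """Rotation primitive: move `step` vertices around the octagon."""
    return (pos + step) % 8

def reflect(pos, axis):
    """Symmetry primitive: mirror a location across an axis (in half-steps)."""
    return (axis - pos) % 8

def unfold(start, steps):
    """Expand a sequence of rotation steps into concrete locations."""
    seq = [start]
    for s in steps:
        seq.append(rotate(seq[-1], s))
    return seq

first = unfold(0, [1, 1, 1])               # [0, 1, 2, 3]
mirrored = [reflect(p, 7) for p in first]  # [7, 6, 5, 4]
print(first, mirrored)
```

A hierarchically organized expression in this spirit would be "repeat a +1 rotation, then apply the mirror image of the whole pattern", which is the kind of embedded structure the error analysis probes.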
We show how an epistemology informed by cognitive science promises to shed light on an ancient problem in the philosophy of mathematics: the problem of exactness. The problem of exactness arises because geometrical knowledge is thought to concern perfect geometrical forms, whereas the embodiment of such forms in the natural world may be imperfect. There thus arises an apparent mismatch between mathematical concepts and physical reality. We propose that the problem can be solved by emphasizing the ways in which the brain can transform and organize its perceptual intake. It is not necessary for a geometrical form to be perfectly instantiated in order for perception of such a form to be the basis of a geometrical concept.
This paper examines Helmholtz's attempt to use empirical psychology to refute certain of Kant's epistemological positions. In particular, Helmholtz believed that his work in the psychology of visual perception showed Kant's doctrine of the a priori character of spatial intuition to be in error. Some of Helmholtz's arguments are effective, but this effectiveness derives from his arguments to show the possibility of obtaining evidence that the structure of physical space is non-Euclidean, and these arguments do not depend on his theory of vision. Helmholtz's general attempt to provide an empirical account of the "inferences" of perception is regarded as a failure.
At ordinary scales, the ontological model proposed by the Ontology of Knowledge (OK) does not call into question the representation of the world elaborated by common sense or science. What the OK challenges is not the world as it appears to us and as science describes it, but the way it appears to the knowing subject and to science. In spite of the efforts made to separate scientific reasoning from metaphysical considerations, and in spite of the rigorous construction of mathematics, these are not, in their very foundations, independent of modalities, of laws of representation of the world. The OK shows that logical facts Exist neither more nor less than the facts of the World, which are Facts of Knowledge. Mathematical facts are facts of representation. Indeed, experimental proof establishes the persistence/consistency only of the laws of representation, because what science predicts and verifies with precision is not the facts of the world but the facts of the representation of the world. Beyond the laws of representation, nothing proves to us that there are laws of the world. Remember, however, that mathematics "is worth itself" and cannot be called into question "for itself" by an ontology. The only question is the process of creating meaning that provides mathematics with its a priori intuitions. The first objective of this article will therefore be to identify and clarify which ruptures proposed by the OK could affect the a priori intuitions that found mathematics, and also could explain the remarkable ability of mathematics to represent the world. For this, three major intuitions of form will be analyzed, namely: the intuition of the One, the intuition of time, and the intuition of space. Then, considering mathematics in two major classes, {logic, arithmetic, set theory, ...} on the one hand and geometry on the other, we will ask: How does the OK affect their premises and rules of inference? In case of incompatibility, under what conditions can such a mathematical theory be made compatible with the OK? Can we deduce a possible extension of the theory?
After a brief review of the golden ratio in history and our previous exposition of the fine-structure constant and equations with the exponential function, the fine-structure constant is studied in the context of other research calculating the fine-structure constant from the golden ratio geometry of the hydrogen atom. This research is extended and the fine-structure constant is then calculated in powers of the golden ratio to an accuracy consistent with the most recent publications. The mathematical constants associated with the golden ratio are also involved in both the calculation of the fine-structure constant and the proton-electron mass ratio. These constants are included in symbolic geometry of historical relevance in the science of the ancients.
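As a purely numerical illustration of the kind of golden-ratio arithmetic involved (the specific expression below is a commonly cited approximation and my assumption, not the paper's derivation):

```python
# Golden ratio and a rough golden-ratio estimate of the inverse
# fine-structure constant; the measured value is 1/alpha ≈ 137.036.
phi = (1 + 5 ** 0.5) / 2             # ≈ 1.6180339887
alpha_inv_estimate = 360 / phi ** 2  # ≈ 137.5078
print(phi, alpha_inv_estimate)
```

The gap between such simple estimates and the measured value is what calculations "in powers of the golden ratio" aim to close; any specific formula should be checked against the paper itself.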
This paper is about Poincaré’s view of the foundations of geometry. According to the established view, which has been inherited from the logical positivists, Poincaré, like Hilbert, held that axioms in geometry are schemata that provide implicit definitions of geometric terms, a view he expresses by stating that the axioms of geometry are “definitions in disguise.” I argue that this view does not accord well with Poincaré’s core commitment in the philosophy of geometry: the view that geometry is the study of groups of operations. In place of the established view I offer a revised view, according to which Poincaré held that axioms in geometry are in fact assertions about invariants of groups. Groups, as forms of the understanding, are prior in conception to the objects of geometry and afford the proper definition of those objects, according to Poincaré. Poincaré’s view therefore contrasts sharply with Kant’s foundation of geometry in a unique form of sensibility. According to my interpretation, axioms are not definitions in disguise because they themselves implicitly define their terms, but rather because they disguise the definitions which imply them.
The purpose of this note is to show how an 'AB-series' interpretation of time leads, somewhat surprisingly, to AdS_5 geometry. This is not a theory of 2 time dimensions. Rather, it is a theory of 1 time dimension that has both A-series and B-series characteristics. To summarize the result: a spacetime in terms of (1) the earlier-to-later aspect of time, (2) the (related) future-present-past aspect of time, and (3) 3-d space would seem to give us the AdS_5 geometry.
In the brain, the relations between free neurons and conditioned ones establish the constraints for the informational neural processes. These constraints reflect the system-environment state, i.e. the dynamics of homeocognitive activities. The constraints allow us to define the cost function in the phase space of free neurons so as to trace the trajectories of the possible configurations at minimal cost while respecting the constraints imposed. Since the space of the free states is a manifold, or a non-orthogonal space, the minimum distance is not a straight line but a geodesic. The minimum condition is expressed by a set of ordinary differential equations (ODEs) that in general are not linear. In the brain there is no algorithm or physical field that regulates the computation, so we must consider an emergent process coming out of the neural collective behavior triggered by synaptic variability. We define neural computation as the study of the classes of trajectories on a manifold geometry defined under suitable constraints. The cost function supervises pseudo-equilibrium thermodynamic effects that manage the computational activities from beginning to end and realizes an optimal control through constraints and geodesics. The task of this work is to establish a connection between the geometry of neural computation and cost functions. To illustrate the essential mathematical aspects we will use as a toy model a network resistor with adaptive memory (memristors). The information geometry defined here is an analog computation; therefore it does not suffer the limits of Turing computation, and it seems to respond to the demand for greater biological plausibility. The model of brain optimal control proposed here can be a good foundation for implementing the concept of "intentionality", according to the suggestion of W. Freeman. Indeed, geodesics in the brain states can produce suitable behavior to realize wanted functions and invariants as neural expressions of cognitive intentions.
In this work, Einstein’s view of geometry as physical geometry is taken into account in the analysis of diverse issues related to the notions of inertial motion and inertial reference frame. Einstein’s physical geometry enables a non-conventional view of Euclidean geometry (as the geometry associated with inertial motion and inertial reference frames) and of uniform time. Also, by taking into account the implications of the view of geometry as a physical geometry, a critical reassessment is presented of the so-called boostability assumption (implicit, according to Einstein, in the formulation of the theory) and also of ‘alternative’ derivations of the Lorentz transformations that do not take into account the so-called ‘light postulate’. Finally, the issue is addressed of the eventual conventionality of the one-way speed of light or, what is the same, the conventionality of distant simultaneity (within the same inertial reference frame). It turns out that it is possible to see the (possible) conventionality of distant simultaneity as a case of conventionality of geometry (in Einstein’s reinterpretation of Poincaré’s views). By taking into account synchronization procedures that do not make reference to light propagation (which is necessary in the derivation of the Lorentz transformations without the ‘light postulate’), it can be shown that the synchronization of distant clocks does not need any conventional element. This implies that the whole of chronogeometry (and because of this the physical part of the theory) does not have any conventional element in it, and it is a physical chronogeometry.
My goal here is to provide a detailed analysis of the methods of inference that are employed in De prospectiva pingendi. For this purpose, a method of natural deduction is proposed. The treatise by Piero della Francesca is a manifestation of a union between the fine arts and the mathematical sciences of arithmetic and geometry. He defines painting as a part of perspective and, speaking precisely, as a branch of geometry, which is why we find advanced geometrical exercises here.
This paper examines explanations that turn on non-local geometrical facts about the space of possible configurations a system can occupy. I argue that it makes sense to contrast such explanations from "geometry of motion" with causal explanations. I also explore how my analysis of these explanations cuts across the distinction between kinematics and dynamics.
Traditional geometry concerns itself with planimetric and stereometric considerations, which are at the root of the division between plane and solid geometry. To raise the issue of the relation between these two areas brings with it a host of different problems that pertain to mathematical practice, epistemology, semantics, ontology, methodology, and logic. In addition, issues of psychology and pedagogy are also important here. To our knowledge there is no single contribution that studies in detail even one of the aforementioned areas.
In his doctoral dissertation On the Principle of Sufficient Reason, Arthur Schopenhauer outlines a critique of Euclidean geometry on the basis of the changing nature of mathematics, and hence of demonstration, as a result of Kantian idealism. According to Schopenhauer, Euclid treats geometry synthetically, proceeding from the simple to the complex, from the known to the unknown, “synthesizing” later proofs on the basis of earlier ones. Such a method, although proving the case logically, nevertheless fails to attain the raison d’être of the entity. In order to obtain this, a separate method is required, which Schopenhauer refers to as “analysis,” thus echoing a method already in practice among the early Greek geometers, with however some significant differences. In this essay, I discuss Schopenhauer’s criticism of synthesis in Euclid’s Elements, and the nature and relevance of his own method of analysis.
I explore some ways in which one might base an account of the fundamental metaphysics of geometry on the mathematical theory of Linear Structures recently developed by Tim Maudlin (2010). Having considered some of the challenges facing this approach, I develop an alternative approach, according to which the fundamental ontology includes concrete entities structurally isomorphic to functions from space-time points to real numbers.
Evolution and geometry generate complexity in similar ways. Evolution drives natural selection, while geometry may capture the logic of this selection and express it visually, in terms of specific generic properties representing some kind of advantage. Geometry is ideally suited for expressing the logic of evolutionary selection for symmetry, which is found in the shape curves of vein systems and other natural objects such as leaves, cell membranes, or tunnel systems built by ants. The topology and geometry of symmetry is controlled by numerical parameters, which act in analogy with a biological organism’s DNA. The introductory part of this paper reviews findings from experiments illustrating the critical role of two-dimensional (2D) design parameters, affine geometry and shape symmetry, for visual or tactile shape sensation and perception-based decision making in populations of experts and non-experts. It will be shown that 2D fractal symmetry, referred to herein as the “symmetry of things in a thing”, results from principles very similar to those of affine projection. Results from experiments on aesthetic and visual preference judgments in response to 2D fractal trees with varying degrees of asymmetry are presented. In a first experiment (psychophysical scaling procedure), non-expert observers had to rate (on a scale from 0 to 10) the perceived beauty of a random series of 2D fractal trees with varying degrees of fractal symmetry. In a second experiment (two-alternative forced choice procedure), they had to express their preference for one of two shapes from the series. The shape pairs were presented successively in random order. Results show that the smallest possible fractal deviation from “symmetry of things in a thing” significantly reduces the perceived attractiveness of such shapes.
The potential of future studies where different levels of complexity of fractal patterns are weighed against different degrees of symmetry is pointed out in the conclusion.
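The stimuli described above, 2D fractal trees with a controlled deviation from "symmetry of things in a thing", can be sketched generatively. A minimal toy version, not the actual experimental stimuli, in which a single `asymmetry` parameter skews the left branching angle:

```python
import math

def fractal_tree(depth, angle=25.0, asymmetry=0.0, length=1.0):
    """Line segments of a 2D binary fractal tree; asymmetry=0 gives
    the fully self-similar ('symmetry of things in a thing') shape."""
    segments = []
    def grow(x, y, heading, length, depth):
        if depth == 0:
            return
        x2 = x + length * math.sin(math.radians(heading))
        y2 = y + length * math.cos(math.radians(heading))
        segments.append(((x, y), (x2, y2)))
        grow(x2, y2, heading - angle - asymmetry, length * 0.7, depth - 1)  # left branch, skewed
        grow(x2, y2, heading + angle, length * 0.7, depth - 1)              # right branch
    grow(0.0, 0.0, 0.0, length, depth)
    return segments

symmetric = fractal_tree(depth=5)
skewed = fractal_tree(depth=5, asymmetry=5.0)
print(len(symmetric))  # 2**5 - 1 = 31 segments
```

Varying `asymmetry` in small increments would produce the graded series of shapes needed for rating and two-alternative forced-choice procedures.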
The physical singularity of life phenomena is analyzed by means of comparison with the driving concepts of theories of the inert. We outline conceptual analogies, transferals of methodologies and theoretical instruments between physics and biology, in addition to indicating significant differences and sometimes logical dualities. In order to make biological phenomenalities intelligible, we introduce theoretical extensions to certain physical theories. In this synthetic paper, we summarize and propose a unified conceptual framework for the main conclusions drawn from work spanning a book and several articles, quoted throughout.
This paper proposes an abstract mathematical frame for describing some features of biological time. The key point is that the usual physical (linear) representation of time is insufficient, in our view, for understanding key phenomena of life, such as rhythms, both physical (circadian, seasonal, …) and properly biological (heart beating, respiration, metabolic, …). In particular, the role of biological rhythms does not seem to have any counterpart in the mathematical formalization of physical clocks, which are based on frequencies along the usual (possibly thermodynamical, thus oriented) time. We then suggest a functional representation of biological time by a 2-dimensional manifold as a mathematical frame for accommodating autonomous biological rhythms. The “visual” representation of rhythms so obtained, in particular heartbeats, will provide, by a few examples, hints towards possible applications of our approach to the understanding of interspecific differences or intraspecific pathologies. The 3-dimensional embedding space, needed for purely mathematical reasons, allows us to introduce a suitable extra dimension for “representation time”, with a cognitive significance.
Some recently popular accounts of perception explain the phenomenal character of perceptual experience in terms of the qualities of objects. My concern in this paper is with naturalistic versions of such a phenomenal externalist view. Focusing on visual spatial perception, I argue that naturalistic phenomenal externalism conflicts with a number of scientific facts about the geometrical characteristics of visual spatial experience.
The golden ratio is found to be related to the fine-structure constant, which determines the strength of the electromagnetic interaction. The golden ratio and classical harmonic proportions with quartic equations give an approximate value for the inverse fine-structure constant, the same as that discovered previously in the geometry of the hydrogen atom. With the former golden ratio results, relationships are also shown between the four fundamental forces of nature: electromagnetism, the weak force, the strong force, and the force of gravitation.
We cannot imagine two straight lines intersecting at two points even though they may do so. In this case our abilities to imagine depend upon our abilities to visualise.
Planar geometry was exploited for the computation of symmetric visual curves in the image plane, with consistent variations in local parameters such as sagitta, chord length, and the curves’ height-to-width ratio, an indicator of the visual area covered by the curve, also called aspect ratio. Image representations of single curves (no local image context) were presented to human observers to measure their visual sensation of curvature magnitude elicited by a given curve. Nonlinear regression analysis was performed on both the individual and the average data using two types of model: (1) a power function where …
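The local parameters named above are linked by elementary circle geometry: an arc with chord length c and sagitta s lies on a circle of radius R = (c^2/4 + s^2)/(2s), and s/c gives the height-to-width (aspect) ratio. A sketch of these relations (the function names are mine, not the paper's):

```python
def arc_radius(chord, sagitta):
    """Radius of the circular arc with the given chord length and sagitta."""
    return (chord ** 2 / 4 + sagitta ** 2) / (2 * sagitta)

def aspect_ratio(chord, sagitta):
    """Height-to-width ratio of the arc, an index of covered visual area."""
    return sagitta / chord

r = arc_radius(8.0, 2.0)
print(r, 1 / r, aspect_ratio(8.0, 2.0))  # radius 5.0, curvature 0.2, aspect 0.25
```

Curvature magnitude (1/R) can thus be computed directly from the sagitta and chord length of each displayed curve.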
Berkeley in his Introduction to the Principles of Human Knowledge uses geometrical examples to illustrate a way of generating “universal ideas,” which allegedly account for the existence of general terms. In doing proofs we might, for example, selectively attend to the triangular shape of a diagram. Presumably what we prove using just that property applies to all triangles. I contend, rather, that given Berkeley’s view of extension, no Euclidean triangles exist to attend to. Rather, proof, as Berkeley would normally assume, requires idealizing diagrams, treating them as if they obeyed Euclidean constraints. This convention solves the problem of representative generalization. Berkeley and Proof in Geometry, Volume 51, Issue 3. Richard J. Brook. DOI: https://doi.org/10.1017/S0012217312000686
Let us start with some general definitions of the concept of complexity. We take a complex system to be one composed of a large number of parts, and whose properties are not fully explained by an understanding of its component parts. Studies of complex systems recognized the importance of "wholeness", defined as problems of organization (and of regulation), phenomena not resolvable into local events, dynamic interactions revealed in the different behaviour of parts when isolated or in a higher configuration, etc.; in short, systems of various orders (or levels) not understandable by investigation of their respective parts in isolation. In a complex system it is essential to distinguish between 'global' and 'local' properties. Theoretical physicists in the last two decades have discovered that the collective behaviour of a macro-system, i.e. a system composed of many objects, does not change qualitatively when the behaviour of single components is modified slightly. Conversely, it has also been found that the behaviour of single components does change when the overall behaviour of the system is modified. There are many universality classes which describe the collective behaviour of the system, and each class has its own characteristics; the universality classes do not change when we perturb the system. The most interesting and rewarding work consists in finding these universality classes and in spelling out their properties. This conception has been followed in studies done in the last twenty years on second-order phase transitions. The objective, which has been mostly achieved, was to classify all possible types of phase transitions in different universality classes and to compute the parameters that control the behaviour of the system near the transition (or critical, or bifurcation) point as a function of the universality class. This point of view is not very different from the one expressed by Thom in the introduction of Structural Stability and Morphogenesis (1975).
It differs from Thom’s program because there is no a priori idea of the mathematical framework which should be used. Indeed, Thom considers only a restricted class of models (ordinary differential equations in low-dimensional spaces), while we have no prejudice regarding which models should be accepted. One of the most interesting and surprising results obtained by studying complex systems is the possibility of classifying the configurations of the system taxonomically. It is well known that a well-founded taxonomy is possible only if the objects we want to classify have some unique properties, i.e. species may be introduced in an objective way only if it is impossible to go continuously from one species to another; in a more mathematical language, we say that the objects must have the property of ultrametricity. More precisely, it was discovered that there are conditions under which a class of complex systems may only exist in configurations that have the ultrametricity property, and consequently they can be classified in a hierarchical way. Indeed, it has been found that this ultrametricity property is shared by the near-optimal solutions of many optimization problems of complex functions, i.e. corrugated landscapes in Kauffman’s language. These results are derived from the study of spin-glass models, but they have wider implications. It is possible that the kind of structures that arise in these cases is present in many other apparently unrelated problems. Before going on with our considerations, we should keep in mind two main complementary ideas about complexity. (i) According to the prevalent and usual point of view, the essence of complex systems lies in the emergence of complex structures from the non-linear interaction of many simple elements that obey simple rules. Typically, these rules consist of 0–1 alternatives selected in response to the input received, as in many prototypes like cellular automata, Boolean networks, spin systems, etc.
Quite intricate patterns and structures can occur in such systems. However, these are toy systems, and the systems occurring in reality rather consist of elements that individually are quite complex themselves. (ii) This brings in a new aspect that seems essential and indispensable to the emergence and functioning of complex systems, namely the coordination of individual agents or elements that themselves are complex at their own scale of operation. This coordination dramatically reduces the degrees of freedom of the participating agents. Even the constituents of molecules, i.e. the atoms, are rather complicated conglomerations of subatomic particles, perhaps ultimately excitations of patterns of superstrings. Genes, the elementary biochemical coding units, are very complex macromolecular strings, as are the metabolic units, the proteins. Neurons, the basic elements of cognitive networks, are themselves cells. In these and other complex systems, it is an important feature that the potential complexity of the behaviour of the individual agents gets dramatically simplified through the global interactions within the system. The individual degrees of freedom are drastically reduced; or, in a more formal terminology, the effective state space of the system is much smaller than the product of the state spaces of the individual elements. That is one key aspect. The other is that on this basis, that is, by utilizing the coordination between the activities of its members, the system then becomes able to develop and express a coherent structure at a higher level, that is, an emergent behaviour (and emergent properties) that transcends what each element is individually capable of.
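The 0–1 prototype systems cited under (i) can be made concrete with a minimal sketch (illustrative, not from the essay): an elementary cellular automaton whose one-line local rule already generates intricate global structure from a single seed.

```python
# Illustrative sketch, not from the essay: an elementary cellular automaton,
# one of the 0-1 prototype systems mentioned above. Rule 90 (new cell =
# left XOR right) grows a Sierpinski-like pattern from a single seed cell.
def step(cells, rule=90):
    n = len(cells)
    out = []
    for i in range(n):
        # encode the (left, centre, right) neighbourhood as a 3-bit number
        code = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # the rule number's binary digits give the next state for each code
        out.append((rule >> code) & 1)
    return out

cells = [0] * 31
cells[15] = 1  # single seed cell
for _ in range(15):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

The point of the passage survives the sketch: nothing in the three-cell rule hints at the self-similar triangle the whole line produces, which is exactly the global-versus-local contrast drawn above.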
In this small book, logician and mathematician Jens Erik Fenstad addresses some of the most important foundational questions of linguistics: What should a theory of meaning look like, and how might we provide the missing link between meaning theory and our knowledge of how the brain works? The author's answer is twofold. On the one hand, he suggests that logical semantics in the Montague tradition and other broadly conceived symbolic approaches do not suffice. On the other hand, he does not argue that the logical approach should be discarded; instead, he opts for a methodological pluralism in which symbolic approaches to meaning are combined with geometric ones such as Conceptual Spaces [9], and discusses ways in which these geometric accounts could be hooked up with connectionist frameworks and dynamic-systems approaches in neurophysiology.
When national borders in the modern sense first began to be established in early modern Europe, non-contiguous and perforated nations were commonplace. According to the currently preferred conception of the shapes of nations, however, nations must conform to the topological model of circularity: their borders must guarantee contiguity and simple connectedness, and such borders must as far as possible conform to existing topographical features on the ground. The striving to conform to this model can be seen at work today in Quebec and in Ireland; it underpins much of the rhetoric of the P.L.O., and it was certainly to some degree involved as a motivating factor in much of the ethnic cleansing which took place in Bosnia in recent times. The question to be addressed in what follows is: to what extent could inter-group disputes be more peacefully resolved, and ethnic cleansing avoided, if political leaders, diplomats and others involved in the resolution of such disputes could be brought to accept weaker geometrical constraints on the shapes of nations? A number of associated questions then present themselves: What sorts of administrative and logistical problems have been encountered by existing non-contiguous nations, by perforated nations, and by other nations deviating in different ways from the received geometrical ideal? To what degree is the desire for contiguity and simple connectedness a rational desire, and to what degree does it rest on species of political rhetoric which might be countered by, for example, philosophical argument? These and a series of related questions form the subject-matter of the present essay.
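The geometrical constraint of contiguity invoked above has a precise, checkable reading. As a hedged sketch (ours, not the essay's; the toy map is invented), a territory modelled as grid cells is contiguous exactly when flood fill finds one connected component:

```python
# Hedged sketch, ours and not the essay's: contiguity as one 4-connected
# component of a toy raster territory; a detached exclave breaks it.
from collections import deque

def components(cells):
    """Count 4-connected components of a set of (row, col) cells."""
    cells, seen, count = set(cells), set(), 0
    for start in cells:
        if start in seen:
            continue
        count += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            r, c = queue.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in cells and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
    return count

block = [(r, c) for r in range(3) for c in range(3)]  # a contiguous 3x3 nation
print(components(block))               # 1: contiguous
print(components(block + [(10, 10)]))  # 2: an exclave breaks contiguity
```

Simple connectedness is the further demand that the territory's complement also be in one piece, which rules out enclaves perforating it; the same flood fill applied to the complement would test it.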
The purpose of this note is to show how an 'AB-series' interpretation of time, given in a companion paper, leads, surprisingly, directly to the physicists' important AdS5 geometry. This is not a theory of 2 time dimensions. Rather, it is a theory of 1 time dimension that has both B-series and A-series characteristics. To summarize the result: a spacetime defined in terms of (1) the earlier-to-later aspect of time, (2) the related future-present-past aspect of time, and (3) 3-d space automatically gives us AdS5.
In this paper I present critical evaluations of Ayer's and Putnam's views on the analyticity of geometry. By drawing on the historico-philosophical work of Michael Friedman on the relativized a priori and of Roberto Torretti on the foundations of geometry, I show how we can make sense of the assertion that pure geometry is analytic in Carnap's sense.
Proof by refutation of a geometry theorem that is not universally true produces a Gröbner basis whose elements, called side polynomials, may be used to give inequations that can be added to the hypotheses to give a valid theorem. We show that (in a certain sense) all possible subsidiary conditions are implied by those obtained from the basis; that what we call the kind of truth of the theorem may be derived from the basis; and that the side polynomials may be classified in a useful way. We analyse the relationship between side polynomials and kinds of truth, and we give a unified algorithmic treatment of side polynomials, with examples generated by an implementation.
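The refutation mechanism can be sketched on a toy statement of our own (this is not the paper's implementation, and the example and all names are assumptions): a point equidistant from B = (1, 0) and C = (u, 0) lies on the bisector x = (u + 1)/2, except in the degenerate case B = C, i.e. u = 1, which surfaces as a side polynomial.

```python
# Illustrative sketch, not the paper's implementation: toy theorem and
# names are ours. SymPy computes the Groebner basis of the refutation.
from sympy import symbols, groebner

x, z, w, u = symbols('x z w u')

h = (x - 1)**2 - (x - u)**2   # hypothesis: equidistance (the y-terms cancel)
g = 2*x - (u + 1)             # conclusion: x lies on the perpendicular bisector

# Proof by refutation (Rabinowitsch trick): adjoin z*g - 1. For a universally
# true statement the basis would collapse to [1]; here it does not.
G = groebner([h, z*g - 1], z, x, u, order='lex')
print(G.contains(u - 1))      # True: u - 1 appears as a side polynomial

# Adjoining the inequation u != 1, encoded as w*(u - 1) - 1, yields a
# valid theorem: the basis becomes trivial.
G2 = groebner([h, z*g - 1, w*(u - 1) - 1], z, w, x, u, order='lex')
print(G2.exprs)               # [1]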
Einstein structured the theoretical frame of his work on gravity under the Special Relativity and Minkowski´s spacetime using three guide principles: The strong principle of equivalence establishes that acceleration and gravity are equivalents. Mach´s principle explains the inertia of the bodies and particles as completely determined by the total mass existent in the universe. And, general covariance searches to extend the principle of relativity from inertial motion to accelerated motion. Mach´s principle was abandoned quickly, general covariance resulted mathematical property of (...) the tensors and principle of equivalence inconsistent and it can only apply to punctual gravity, no to extended gravity. Also, the basic principle of Special Relativity, i.e., the constancy of the speed of the electromagnetic wave in the vacuum was abandoned, static Minkowski´s spacetime was replaced to dynamic Lorentz´s manifold and the main conceptual fundament of the theory, i.e. spacetime is not known what is. Of other hand, gravity never was conceptually defined; neither answers what is the law of gravity in general. However, the predictions arise of Einstein equations are rigorously exacts. Thus, the conclusion is that on gravity, it has only the equations. In this work it shows that principle of equivalence applies really to punctual and extended gravity, gravity is defined as effect of change of coordinates although in the case of the extended gravity with change of geometry from Minkowski´s spacetime to Lorentz´s manifold; and the gravitational motion is the geodesic motion that well it can declare as the general law of gravity. (shrink)
In this manuscript, published here for the first time, Tarski explores the concept of logical notion. He draws on Klein's Erlanger Programm to locate the logical notions of ordinary geometry as those invariant under all transformations of space. Generalizing, he explicates the concept of logical notion of an arbitrary discipline.
We examine some of Connes’ criticisms of Robinson’s infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes’ own earlier work in functional analysis. Connes described the hyperreals as both a “virtual theory” and a “chimera”, yet acknowledged that his argument relies on the transfer principle. We analyze Connes’ “dart-throwing” thought experiment, but reach an opposite conclusion. In S , all definable sets of reals are (...) Lebesgue measurable, suggesting that Connes views a theory as being “virtual” if it is not definable in a suitable model of ZFC. If so, Connes’ claim that a theory of the hyperreals is “virtual” is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren’t definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes’ criticism of virtuality. We analyze the philosophical underpinnings of Connes’ argument based on Gödel’s incompleteness theorem, and detect an apparent circularity in Connes’ logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes’ magnum opus) and the Hahn–Banach theorem, in Connes’ own framework. We also note an inaccuracy in Machover’s critique of infinitesimal-based pedagogy. (shrink)
I offer an alternative account of the relationship of Hobbesian geometry to natural philosophy by arguing that mixed mathematics provided Hobbes with a model for thinking about it. In mixed mathematics, one may borrow causal principles from one science and use them in another science without there being a deductive relationship between those two sciences. Natural philosophy for Hobbes is mixed because an explanation may combine observations from experience (the ‘that’) with causal principles from geometry (the ‘why’). My (...) argument shows that Hobbesian natural philosophy relies upon suppositions that bodies plausibly behave according to these borrowed causal principles from geometry, acknowledging that bodies in the world may not actually behave this way. First, I consider Hobbes's relation to Aristotelian mixed mathematics and to Isaac Barrow's broadening of mixed mathematics in Mathematical Lectures (1683). I show that for Hobbes maker's knowledge from geometry provides the ‘why’ in mixed-mathematical explanations. Next, I examine two explanations from De corpore Part IV: (1) the explanation of sense in De corpore 25.1-2; and (2) the explanation of the swelling of parts of the body when they become warm in De corpore 27.3. In both explanations, I show Hobbes borrowing and citing geometrical principles and mixing these principles with appeals to experience. (shrink)
Luminance and color are strong and self-sufficient cues to pictorial depth in visual scenes and images. The present study investigates the conditions Under which luminance or color either strengthens or overrides geometric depth cues. We investigated how luminance contrasts associated with color contrast interact with relative height in the visual field, partial occlusion, and interposition in determining the probability that a given figure is perceived as ‘‘nearer’’ than another. Latencies of ‘‘near’’ responses were analyzed to test for effects of attentional (...) selection. Figures in a pair were supported by luminance contrast or isoluminant color contrast and combined with one of the three geometric cues. The results of Experiment 1 show that luminance contrasts associated with hue, when it does not interact with other hues, produces the same effects as achromatic luminance contrasts: The probability of‘‘near’’ increases with luminance contrast while the latencies for ‘‘near’’ responses decrease. Partial occlusion is found to be a strong enough pictorial cue to support a weaker red luminance contrast. Interposition cues lose out against cues of spatial position and partial occlusion. The results of Experiment 2, with isoluminant displays of varying color contrast, reveal that red color contrast on a light background supported by any of the three geometric cues wins over green or white supported by any of the three geometric cues. On a dark background, red color contrast supported by the interposition cue loses out against green or white color contrast supported by partial occlusion. These findings reveal that color is not an independent depth cue, but is strongly influenced by luminance contrast and stimulus geometry. Systematically shorter response latencies for stronger ‘‘near’’ percepts demonstrate that selective visual attention reliably detects the most likely depth cue combination in a given configuration. (shrink)
Create an account to enable off-campus access through your institution's proxy server.
Monitor this page
Be alerted of all new items appearing on this page. Choose how you want to monitor it:
Email
RSS feed
About us
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.