The notions of conservation and relativity lie at the heart of classical mechanics, and were critical to its early development. However, in Newton’s theory of mechanics, these symmetry principles were eclipsed by domain-specific laws. In view of the importance of symmetry principles in elucidating the structure of physical theories, it is natural to ask to what extent conservation and relativity determine the structure of mechanics. In this paper, we address this question by deriving classical mechanics—both nonrelativistic and relativistic—using relativity and conservation as the primary guiding principles. The derivation proceeds in three distinct steps. First, conservation and relativity are used to derive the asymptotically conserved quantities of motion. Second, in order that energy and momentum be continuously conserved, the mechanical system is embedded in a larger energetic framework containing a massless component that is capable of bearing energy. Imposition of conservation and relativity then results, in the nonrelativistic case, in the conservation of mass and in the frame-invariance of massless energy; and, in the relativistic case, in the rules for transforming massless energy and momentum between frames. Third, a force framework for handling continuously interacting particles is established, wherein Newton’s second law is derived on the basis of relativity and a staccato model of motion-change. Finally, in light of the derivation, we elucidate the structure of mechanics by classifying the principles and assumptions that have been employed according to their explanatory role, distinguishing between symmetry principles and other types of principles that are needed to build up the theoretical edifice.
Do scientific theories limit human knowledge? In other words, are there physical variables hidden by essence forever? We argue for negative answers and illustrate our point on chaotic classical dynamical systems. We emphasize parallels with quantum theory and conclude that the common real numbers are, de facto, the hidden variables of classical physics. Consequently, real numbers should not be considered as “physically real” and classical mechanics, like quantum physics, is indeterministic.
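The role the abstract above assigns to real-number initial conditions can be illustrated with a minimal sketch (mine, not the authors'): under the chaotic shift map x → 2x mod 1, each time step exposes one further binary digit of the initial condition, so the coarse-grained outcome eventually depends on digits that no finite-precision measurement could have fixed. In that sense the deep bits of the real number act as hidden variables.

```python
# Illustrative sketch (not from the paper): the shift map x -> 2x mod 1
# "reads out" one binary digit of the initial condition per step, so after n
# steps the coarse-grained outcome depends on the n-th bit of x(0) --
# information no finite measurement of x(0) could have supplied.

def shift_map_orbit(x0: float, steps: int) -> list:
    """Iterate x -> 2x mod 1, recording which half of [0, 1) the state is in."""
    x, symbols = x0, []
    for _ in range(steps):
        symbols.append(0 if x < 0.5 else 1)
        x = (2 * x) % 1.0
    return symbols

# Two initial conditions agreeing to roughly 15 decimal places...
a = shift_map_orbit(0.123456789012345, 60)
b = shift_map_orbit(0.123456789012346, 60)

# ...yield identical coarse-grained histories at first, then diverge once the
# differing deep bits surface:
first_disagreement = next(i for i, (s, t) in enumerate(zip(a, b)) if s != t)
print(first_disagreement)
```

The early symbols agree because they depend only on the leading bits; the eventual disagreement shows the dynamics amplifying arbitrarily fine details of the initial real number into macroscopic differences.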
Currently there are at least four sizeable projects going on to establish the gravitational acceleration of massive antiparticles on earth. While general relativity and modern quantum theories strictly forbid any repulsive gravity, it has not yet been established experimentally that gravity is attraction only. With that in mind, the Elementary Process Theory (EPT) is a rather abstract theory that has been developed from the hypothesis that massive antiparticles are repulsed by the gravitational field of a body of ordinary matter: the EPT essentially describes the elementary processes by which the smallest massive systems have to interact with their environments for repulsive gravity to exist. In this paper we model a non-relativistic, one-component massive system that evolves in time by the processes as described by the EPT in an environment described by classical fields: the main result is a semi-classical model of a process at the Planck scale by which a non-relativistic one-component system interacts with its environment, such that the interaction has both gravitational and electromagnetic aspects. Some worked-out examples are provided, among which the repulsion of an antineutron by the gravitational field of the earth. The general conclusion is that the semi-classical model of the EPT corresponds to non-relativistic classical mechanics. Further research is aimed at demonstrating that the EPT has a model that reproduces the successful predictions of general relativity.
In this article, it is suggested that a pedagogical point of departure in the teaching of classical mechanics is the Liouville theorem. The theorem is interpreted as defining the condition that describes the conservation of information in classical mechanics. The Hamilton equations and the Hamilton principle of least action are derived from the Liouville theorem.
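The conservation expressed by the Liouville theorem can be checked numerically with a small sketch of mine (not from the article): for the harmonic oscillator with H = (p² + q²)/2, Hamilton's equations dq/dt = p, dp/dt = −q generate a rotation of the (q, p) plane, so the phase-space area of any region, here a small square tracked via the shoelace formula, is preserved under the flow.

```python
# Minimal numerical check (an illustration, not the article's derivation):
# the exact flow of H = (p^2 + q^2)/2 is a rotation in phase space, and the
# Liouville theorem says phase-space area is conserved along the flow.

import math

def flow(q: float, p: float, t: float) -> tuple:
    """Exact harmonic-oscillator flow: rotation by angle t in the (q, p) plane."""
    c, s = math.cos(t), math.sin(t)
    return c * q + s * p, -s * q + c * p

def polygon_area(points) -> float:
    """Shoelace formula for the area of a polygon given as (q, p) vertices."""
    n = len(points)
    return 0.5 * abs(sum(points[i][0] * points[(i + 1) % n][1]
                         - points[(i + 1) % n][0] * points[i][1]
                         for i in range(n)))

square = [(1.0, 0.0), (1.1, 0.0), (1.1, 0.1), (1.0, 0.1)]  # area 0.01
evolved = [flow(q, p, t=2.7) for q, p in square]

print(polygon_area(square), polygon_area(evolved))  # both ~0.01
```

The evolved square is rotated and displaced but its area is unchanged, which is the information-conservation condition the article builds on.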
It is usual to identify initial conditions of classical dynamical systems with mathematical real numbers. However, almost all real numbers contain an infinite amount of information. I argue that a finite volume of space can’t contain more than a finite amount of information, hence that the mathematical real numbers are not physically relevant. Moreover, a better terminology for the so-called real numbers is “random numbers”, as their series of bits are truly random. I propose an alternative classical mechanics, which is empirically equivalent to classical mechanics, but uses only finite-information numbers. This alternative classical mechanics is non-deterministic, despite the use of deterministic equations, in a way similar to quantum theory. Interestingly, both alternative classical mechanics and quantum theories can be supplemented by additional variables in such a way that the supplemented theory is deterministic. Most physicists straightforwardly supplement classical theory with real numbers to which they attribute physical existence, while most physicists reject Bohmian mechanics as supplemented quantum theory, arguing that Bohmian positions have no physical reality.
Contrary to Bell’s theorem, it is demonstrated that the quantum correlation can be approximated with the use of classical probability theory. Hence, one may not conclude from experiment that all local hidden variable theories are ruled out by a violation of the inequality.
This paper examines an insoluble Cartesian problem for classical AI, namely, how linguistic understanding involves knowledge and awareness of u’s meaning, a cognitive process that is irreducible to algorithms. As analyzed, Descartes’ view about reason and intelligence has paradoxically encouraged certain classical AI researchers to suppose that linguistic understanding suffices for machine intelligence. Several advocates of the Turing Test, for example, assume that linguistic understanding only comprises computational processes which can be recursively decomposed into algorithmic mechanisms. Against this background, in the first section, I explain Descartes’ view about language and mind. To show that Turing bites the bullet with his imitation game, in the second section I analyze this method to assess intelligence. Then, in the third section, I elaborate on Schank and Abelson’s Script Applier Mechanism (SAM, hereafter), which supposedly casts doubt on Descartes’ denial that machines can think. Finally, in the fourth section, I explore a challenge that any algorithmic decomposition of linguistic understanding faces. This challenge, I argue, is the core of the Cartesian problem: knowledge and awareness of meaning require a first-person viewpoint which is irreducible to the decomposition of algorithmic mechanisms.
In my dissertation (Rutgers, 2007) I developed the proposal that one can establish that material quantum objects behave classically just in case there is a “local plane wave” regime, which naturally corresponds to the suppression of all quantum interference.
THE PRINCIPLE OF SUPERPOSITION. The need for a quantum theory. Classical mechanics has been developed continuously from the time of Newton and applied to an ...
In this paper I investigate, within the framework of realistic interpretations of the wave function in nonrelativistic quantum mechanics, the mathematical and physical nature of the wave function. I argue against the view that mathematically the wave function is a two-component scalar field on configuration space. First, I review how this view makes quantum mechanics non-Galilei invariant and yields the wrong classical limit. Moreover, I argue that if, in agreement with many physicists, the wave function is interpreted as a ray, Galilei invariance is preserved. In addition, I discuss how the wave function behaves more similarly to a gauge potential than to a field. Finally, I show how this favors a nomological rather than an ontological view of the wave function.
This paper elaborates on relationalism about space and time as motivated by a minimalist ontology of the physical world: there are only matter points that are individuated by the distance relations among them, with these relations changing. We assess two strategies to combine this ontology with physics, using classical mechanics as an example: the Humean strategy adopts the standard, non-relationalist physical theories as they stand and interprets their formal apparatus as the means of bookkeeping of the change of the distance relations instead of committing us to additional elements of the ontology. The alternative theory strategy seeks to combine the relationalist ontology with a relationalist physical theory that reproduces the predictions of the standard theory in the domain where these are empirically tested. We show that, as things stand, this strategy cannot be accomplished without compromising a minimalist relationalist ontology.
It has been argued that the transition from classical to quantum mechanics is an example of a Kuhnian scientific revolution, in which there is a shift from the simple, intuitive, straightforward classical paradigm, to the quantum, convoluted, counterintuitive, amazing new quantum paradigm. In this paper, after having clarified what these quantum paradigms are supposed to be, I analyze whether they constitute a radical departure from the classical paradigm. Contrary to what is commonly maintained, I argue that, in addition to radical quantum paradigms, there are also legitimate ways of understanding the quantum world that do not require any substantial change to the classical paradigm.
This book argues that the Enlightenment was a golden age for the philosophy of body, and for efforts to integrate coherently a philosophical concept of body with a mathematized theory of mechanics. Thereby, it articulates a new framing for the history of 18th-century philosophy and science. It explains why, more than a century after Newton, physics broke away from philosophy to become an autonomous domain. And, it casts fresh light on the structure and foundations of classical mechanics. Among the figures studied are Malebranche, Leibniz, Du Châtelet, Boscovich, and Kant, alongside d’Alembert, Euler, Lagrange, Laplace and Cauchy.
Indeterminism of quantum mechanics is considered as an immediate corollary of the theorems about the absence of hidden variables in it, first of all the Kochen-Specker theorem. The basic postulate of quantum mechanics formulated by Niels Bohr, that it studies the system of an investigated microscopic quantum entity and a macroscopic apparatus, described by the smooth equations of classical mechanics, through the readings of the latter, implies, as a necessary condition of quantum mechanics, the absence of hidden variables, and thus quantum indeterminism. Consequently, the objectivity of quantum mechanics, and even its possibility and ability to study its objects as they are by themselves, implies quantum indeterminism. The so-called free-will theorems in quantum mechanics elucidate that the “valuable commodity” of free will is not a privilege of experimenters and human beings, but is shared by anything in the physical universe once the experimenter is granted to possess free will. The analogous idea, that e.g. an electron might possess free will to “decide” what to do, so scandalized Einstein that he exclaimed (in a letter to Max Born) that he would rather be a shoemaker or croupier than a physicist if this were true. Anyway, many experiments have confirmed the absence of hidden variables, and thus quantum indeterminism, in virtue of the objectivity and completeness of quantum mechanics. Once quantum mechanics is complete and thus an objective science, one can ask what this would mean in relation to classical physics and its objectivity. In fact, it divides disjunctively what possesses free will from what does not. Properly, all physical objects belong to the latter area according to it, and their “behavior” is necessary and deterministic. All possible decisions, on the contrary, are concentrated in the experimenters (or human beings in general), i.e. in the former domain, which does not intersect the latter.
One may say that the cost of the determinism and unambiguous laws of classical physics is the indeterminism and free will of the experimenters and researchers (human beings), who therefore necessarily fall outside the scope and objectivity of classical physics. This is meant as the “deterministic subjectivity of classical physics”, opposed to the “indeterminist objectivity of quantum mechanics”.
Statistical mechanics is often taken to be the paradigm of a successful inter-theoretic reduction, which explains the high-level phenomena (primarily those described by thermodynamics) by using the fundamental theories of physics together with some auxiliary hypotheses. In my view, the scope of statistical mechanics is wider since it is the type-identity physicalist account of all the special sciences. But in this chapter, I focus on the more traditional and less controversial domain of this theory, namely, that of explaining the thermodynamic phenomena. What are the fundamental theories that are taken to explain the thermodynamic phenomena? The lively research into the foundations of classical statistical mechanics suggests that using classical mechanics to explain the thermodynamic phenomena is fruitful. Strictly speaking, in contemporary physics, classical mechanics is considered to be false. Since classical mechanics preserves certain explanatory and predictive aspects of the true fundamental theories, it can be successfully applied in certain cases. In other circumstances, classical mechanics has to be replaced by quantum mechanics. In this chapter I ask the following two questions: I) How does quantum statistical mechanics differ from classical statistical mechanics? How are the well-known differences between the two fundamental theories reflected in the statistical mechanical account of high-level phenomena? II) How does quantum statistical mechanics differ from quantum mechanics simpliciter? To make our main points I need only consider non-relativistic quantum mechanics. Most of the ideas described and addressed in this chapter hold irrespective of the choice of a (so-called) interpretation of quantum mechanics, and so I will mention interpretations only when the differences between them are important to the matter discussed.
I examine here whether Kant’s metaphysics of matter can support any late-modern versions of classical mechanics. I argue that in principle it can, by two different routes. I assess the interpretive costs of each approach, and recommend the most promising strategy: a mass-point approach.
This paper shows how the classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or toy model of quantum mechanics over sets (QM/sets). There have been several previous attempts to develop a quantum-like model with the base field of ℂ replaced by ℤ₂. Since there are no inner products on vector spaces over finite fields, the problem is to define the Dirac brackets and the probability calculus. The previous attempts all required the brackets to take values in ℤ₂. But the usual QM brackets <ψ|ϕ> give the "overlap" between states ψ and ϕ, so for subsets S,T⊆U, the natural definition is <S|T>=|S∩T| (taking values in the natural numbers). This allows QM/sets to be developed with a full probability calculus that turns out to be a non-commutative extension of classical Laplace-Boole finite probability theory. The pedagogical model is illustrated by giving simple treatments of the indeterminacy principle, the double-slit experiment, Bell's Theorem, and identical particles in QM/sets. A more technical appendix explains the mathematics behind carrying some vector space structures between QM over ℂ and QM/sets over ℤ₂.
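The set-valued brackets described above are simple enough to compute directly; the following sketch (my illustration, using only the abstract's definition <S|T> = |S∩T|) shows how they reproduce Laplace-Boole equiprobable finite probability.

```python
# Illustrative sketch of the abstract's definition: for subsets S, T of a
# finite universe U, the bracket <S|T> = |S ∩ T| takes natural-number values,
# and the induced probability of outcome u when "measuring" the state S in
# the U-basis is <{u}|S> / <S|S> = |S ∩ {u}| / |S|.

def bracket(S: set, T: set) -> int:
    """<S|T> = size of the overlap, a natural-number-valued 'inner product'."""
    return len(S & T)

def outcome_probability(S: set, u) -> float:
    """Born-rule analogue in QM/sets: Prob({u} given S) = <{u}|S> / <S|S>."""
    return bracket({u}, S) / bracket(S, S)

U = {1, 2, 3, 4, 5, 6}
S = {1, 2, 3}          # a 'state' prepared as a subset of U
print(bracket(S, {2, 3, 4}))          # overlap of size 2
print(outcome_probability(S, 2))      # 1/3 under equiprobable outcomes
```

Each outcome in S gets probability 1/|S|, i.e. the classical equiprobable rule, which is the sense in which the QM/sets calculus extends Laplace-Boole finite probability.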
We study the question of how to decompose Hilbert space into a preferred tensor-product factorization without any pre-existing structure other than a Hamiltonian operator, in particular the case of a bipartite decomposition into "system" and "environment." Such a decomposition can be defined by looking for subsystems that exhibit quasi-classical behavior. The correct decomposition is one in which pointer states of the system are relatively robust against environmental monitoring (their entanglement with the environment does not continually and dramatically increase) and remain localized around approximately-classical trajectories. We present an in-principle algorithm for finding such a decomposition by minimizing a combination of entanglement growth and internal spreading of the system. Both of these properties are related to locality in different ways. This formalism could be relevant to the emergence of spacetime from quantum entanglement.
Is quantum mechanics about ‘states’? Or is it basically another kind of probability theory? It is argued that the elementary formalism of quantum mechanics operates as a well-justified alternative to ‘classical’ instantiations of a probability calculus. Its providing a general framework for prediction accounts for its distinctive traits, which one should be careful not to mistake for reflections of any strange ontology. The suggestion is also made that quantum theory unwittingly emerged, in Schrödinger’s formulation, as a ‘lossy’ by-product of a quantum-mechanical variant of the Hamilton-Jacobi equation. As it turns out, the effectiveness of quantum theory qua predictive algorithm makes up for the computational impracticability of that master equation.
A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we argue that the temptation should be resisted. Applying lessons from this analysis, we demonstrate (using methods similar to those of Zurek's envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In doing so, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.
Cyclic mechanics is intended as a suitable generalization of both quantum mechanics and general relativity, apt to unify them. It is founded on a few principles, which can be enumerated approximately as follows: 1. Actual infinity, or the universe, can be considered as a physical and experimentally verifiable entity. It allows mechanical motion to exist. 2. A new law of conservation has to be involved to generalize and comprise the separate laws of conservation of classical and relativistic mechanics, and especially that of conservation of energy: this is the conservation of action or information. 3. Time is not a uniformly flowing time in general. It can have some speed, acceleration, more than one dimension, or be discrete. 4. The following principle of cyclicity: the universe returns in any point of it. The return can be only kinematic, i.e. per a unit of energy (or mass), and thermodynamic, i.e. considering the universe as a thermodynamic whole. 5. The kinematic return, which is per a unit of energy (or mass), is the counterpart of conservation of energy, which can be interpreted as the particular case of conservation of action “per a unit of time”. The kinematic return per a unit of energy (or mass) can be interpreted in turn as another particular law of conservation in the framework of conservation of action (or information), namely conservation of wave period (or time). These two counterpart laws of conservation correspond exactly to the particle “half” and to the wave “half” of wave-particle duality. 6. The principle of quantum invariance is introduced. It means that all physical laws have to be invariant under discrete and continuous (smooth) morphisms (motions), or mathematically, under the axiom of choice. The list is not intended to be exhaustive or disjunctive, but only to give an introductory idea.
In this paper I propose an interpretation of classical statistical mechanics that centers on taking seriously the idea that probability measures represent complete states of statistical mechanical systems. I show how this leads naturally to the idea that the stochasticity of statistical mechanics is associated directly with the observables of the theory rather than with the microstates (as traditional accounts would have it). The usual assumption that microstates are representationally significant in the theory is therefore dispensable, a consequence which suggests interesting possibilities for developing non-equilibrium statistical mechanics and investigating inter-theoretic answers to the foundational questions of statistical mechanics.
The aim of this study is to justify the belief that there are biological normative mechanisms that fulfill non-trivial causal roles in the explanations (as formulated by researchers) of actions and behaviors present in specific systems. One example of such mechanisms is the predictive mechanisms described and explained by predictive processing (hereinafter PP), which (1) guide actions and (2) shape causal transitions between states that have specific content and fulfillment conditions (e.g. mental states). Therefore, I am guided by a specific theoretical goal associated with the need to indicate those conditions that should be met by the non-trivial theory of normative mechanisms and the specific models proposed by PP supporters. In this work, I use classical philosophical methods, such as conceptual analysis and critical reflection. I also analyze selected studies in the field of cognitive science, cognitive psychology, neurology, information theory and biology in terms of the methodology, argumentation and language used, in accordance with their theoretical importance for the issues discussed in this study. In this sense, the research presented here is interdisciplinary. My research framework is informed by the mechanistic model of explanation, which defines the necessary and sufficient conditions for explaining a given phenomenon. The research methods I chose are therefore related to the problems that I intend to solve. In the introductory chapter, “The concept of predictive processing”, I discuss the nature of PP as well as its main assumptions and theses. I also highlight the key concepts and distinctions for this research framework. Many authors argue that PP is a contemporary version of Kantianism and is exposed to objections similar to those made against the approach of Immanuel Kant. I discuss this thesis and show that it is only in a very general sense that the PP framework is neo-Kantian.
Here we are not dealing with transcendental deduction nor with the application of transcendental arguments. I argue that PP is based on reverse engineering and abductive inferences. In the second part of this chapter, I respond to the objection formulated by Dan Zahavi, who directly accuses this research framework of anti-realistic consequences. I demonstrate that the position of internalism, present in the so-called conservative PP, does not imply anti-realism, and that, due to the explanatory role played in it by structural representations directed at real patterns, it is justified to claim that PP is realistic. In this way, I show that PP is a non-trivial research framework, having its subject, specific methods and its own epistemic status. Finally, I discuss positions classified as the so-called radical PP. In the chapter “Predictive processing as a Bayesian explanatory model” I justify the thesis according to which PP offers Bayesian modeling. Many researchers claim that the brain is an implemented statistical probabilistic network that is an approximation of Bayes' rule. In practice, this means that all cognitive processes are to apply Bayes' rule and can be described in terms of probability distributions. Such a solution arouses objections among many researchers and is the subject of wide criticism. The purpose of this chapter is to justify the thesis that Bayesian PP is a non-trivial research framework. For this purpose, I argue that it explains certain phenomena not only at the computational level described by David Marr, but also at the level of algorithms and implementation. Later in this chapter I demonstrate that PP is normative modeling. Proponents of the use of Bayesian models in psychology or decision theory argue that they are normative because they allow the formulation of formal rules of action that show what needs to be done to make a given action optimal.
Critics of this approach emphasize that such thinking about the normativity of Bayesian modeling is unjustified and that science should shift from prescriptive to descriptive positions. In a polemic with Shira Elqayam and Jonathan Evans (2011), I show that the division they propose into prescriptivism and Bayesian descriptivism is merely apparent, because, as I argue, there are two forms of prescriptivism, i.e. the weak and the strong. I argue that the weak version is epistemic and can lead to anti-realism, while the strong version is ontic and allows one to justify realism in relation to Bayesian models. I argue that a weak version of prescriptivism is valid for PP. It allows us to adopt anti-realism in relation to PP. In practice, this means that one can explain phenomena using Bayes' rule. This does not, however, imply that they are Bayesian in nature. However, the full justification of realism in relation to the Bayesian PP presupposes the adoption of strong prescriptivism. This position assumes that phenomena are explained by Bayes' rule because they are Bayesian as such. If they are Bayesian in nature, then they should be explained using Bayesian modeling. This thesis will be substantiated in the chapters “Normative functions and mechanisms in the context of predictive processing” and “Normative mechanisms and actions in predictive processing”. In the chapter “The Free Energy Principle in predictive processing”, I discuss the Free Energy Principle (hereinafter FEP) formulated by Karl Friston and some of its implications. According to this principle, all biological systems (defined in terms of Markov blankets) minimize the free energy of their internal states in order to maintain homeostasis. Some researchers believe that PP is a special case of applying this principle to cognition, and that predictive mechanisms are homeostatic mechanisms that minimize free energy.
The discussion of FEP is important because some authors consider it to be explanatorily significant and normative. If this is the case, then FEP turns out to be crucial in explaining normative predictive mechanisms and, in general, any normative biological mechanisms. To define the explanatory possibilities of this principle, I refer to the discussion of its supporters on the issue they define as the problem of continuity and discontinuity between life and mind. A critical analysis of this discussion and the additional arguments I have formulated have allowed me to revise the explanatory ambitions of FEP. I also reject the belief that this principle is necessary to explain the nature of predictive mechanisms. I argue that the principle formulated and defended by Friston is an important research heuristic for PP analysis. In the chapter “Normative functions and mechanisms in predictive processing”, I begin my analyses by formulating an answer to the question about the normative nature of homeostatic mechanisms. I demonstrate that predictive mechanisms are not homeostatic. I defend the view that a full explanation of normative mechanisms presupposes an explanation of normative functions. I discuss the most important proposals for understanding the normativity of a function, both from a systemic and teleosemantic perspective. I conclude that the non-trivial concept of a function must meet two requirements which I define as explanatory and normative. I show that none of the theories I have invoked satisfactorily meets both of these requirements. Instead, I propose a model of normativity based on Bickhard's account, but supplemented by a mechanistic perspective.
I argue that a function is normative when: (1) it allows one to explain the dysfunction of a given mechanism; (2) it contributes to the maintenance of the organism's stability by shaping and limiting possible relations, processes and behaviors of a given system; and when (3) (according to the representational and predictive functions) it enables explaining the attribution of logical values of certain representations/predictions. In such an approach, a mechanism is normative when it performs certain normative functions and when it is constitutive for a specific action or behavior, despite the fact that for some reason it cannot realize it either currently or in the long term. Such an understanding of the normativity of mechanisms presupposes the acceptance of the epistemic hypothesis. I argue that this hypothesis is not cognitively satisfactory, and therefore the ontic hypothesis should be justified, which is directly related to adopting the position of ontic prescriptivism. For this reason, referring to the mechanistic theory of scientific explanations, I formulate an ontic interpretation of the concept of a normative mechanism. According to this approach, a mechanism or a function is normative when they perform such and such causal roles in explaining certain actions and behaviors. With regard to the normative properties of predictive mechanisms and functions, this means that they are the causes of specific actions an organism carries out in the environment. In this way, I justify the necessity of accepting the ontic hypothesis and rejecting the epistemic hypothesis. The fifth chapter, “Normative mechanisms and actions in predictive processing”, is devoted to the dark room problem and the related exploration-exploitation trade-off. A dark room is the state that an agent could be in if it minimized the sum of all potential prediction errors.
I demonstrate that, in accordance with the basic assumption of PP about the need for continuous and long-term minimization of prediction errors, such a state should be desirable for the agent. Is it really so? Many authors believe it is not. I argue that a test of the value of PP is the possibility of a non-trivial solution to this problem, which can be reduced to the choice between active, uncertainty-increasing exploration and safe, easily predictable exploitation. I show that the solutions proposed by PP supporters in the literature do not provide a fully satisfactory explanation of this dilemma. I then defend the position according to which a full explanation of normative mechanisms, and consequently a solution to the exploration-exploitation dilemma, requires reference to constraints present in the environment. The constraints include elements of the environment that make a given mechanism not only causal but also normative. They are therefore key to explaining predictive mechanisms. They do not merely play the role of the context in which the mechanism is implemented but are, above all, its constitutive component. I argue that a full explanation of the role of constraints in normative predictive mechanisms presupposes the integration of individual models of specific cognitive phenomena, because only the mechanistic integration of PP with other models allows for a non-trivial explanation of the nature of normative predictive mechanisms with strong explanatory value. The explanatory monism present in many approaches to PP makes it impossible to solve the problem of the dark room. Later in this chapter, I argue that Bayesian PP is normative not because it enables the formulation of such-and-such rules of action, but because the predictive mechanisms themselves are normative. They are normative because they condition agents' choices of such-and-such actions.
In this way, I justify the hypothesis that normative mechanisms make it possible to explain the phenomenon of agent motivation, which is crucial for solving the dark room problem. In the last part of the chapter, I formulate the hypothesis of distributed normativity, which assumes that the normative character of certain mechanisms, functions, or objects is determined by the relations into which these mechanisms, functions, or objects enter. This means that what is normative (in the primary sense) is the relational structure that constitutes the normativity of the specific items included in it. I suggest that this hypothesis opens up many areas of research and makes it possible to rethink many problems. In the “Conclusion”, I summarize the results of my research and indicate further research perspectives.
Wigner’s quantum-mechanical classification of particle types in terms of irreducible representations of the Poincaré group has a classical analogue, which we extend in this paper. We study the compactness properties of the resulting phase spaces at fixed energy, and show that in order for a classical massless particle to be physically sensible, its phase space must feature a classical-particle counterpart of electromagnetic gauge invariance. By examining the connection between massless and massive particles in the massless limit, we also derive a classical-particle version of the Higgs mechanism.
This paper examines the origin, range, and meaning of the Principle of Action and Reaction in Kant’s mechanics. On the received view, it is a version of Newton’s Third Law. I argue that Kant meant his principle as a foundation for a Leibnizian mechanics. To find a ‘Newtonian’ law of action and reaction, we must look to Kant’s ‘dynamics’, or theory of matter. I begin, in Part I, by noting marked differences between Newton’s and Kant’s laws of action and reaction. I argue that these are explainable by Kant’s allegiance to a Leibnizian mechanics. I show (in Part II) that Leibniz too had a model of action and reaction, at odds with Newton’s. I then reconstruct how Jakob Hermann and Christian Wolff received Leibniz’s model. I present (in Part III) Kant’s early law of action and reaction for mechanics. I show that he devised it so as to solve extant problems in the Hermann-Wolff account. I reconstruct Kant’s views on ‘mechanical’ action and reaction in the 1780s, and highlight strong continuities with his earlier, pre-Critical stance. I use these continuities, and Kant’s earlier engagement with the post-Leibnizians, to explain the un-Newtonian features of his law of action and reaction.
The problem of indeterminism in quantum mechanics, usually considered a generalization of the determinism of classical mechanics and physics to the case of discrete (quantum) changes, is interpreted as a purely mathematical problem concerning the relation of a set of independent choices to a well-ordered series, and is therefore governed by the equivalence of the axiom of choice and the well-ordering “theorem”. The former corresponds to quantum indeterminism, the latter to classical determinism. No premises beyond this purely mathematical equivalence are necessary to explain how the probabilistic causation of quantum mechanics relates to the unambiguous determinism of classical physics. The same equivalence underlies the mathematical formalism of quantum mechanics: it merged the well-ordered components of the vectors of Heisenberg’s matrix mechanics with the non-ordered members of the wave functions of Schrödinger’s undulatory mechanics. The mathematical condition of that merging is precisely the equivalence of the axiom of choice and the well-ordering theorem, which in turn implies Max Born’s probabilistic interpretation of quantum mechanics. In particular, energy conservation is justified differently than in classical physics: it is due to the equivalence at issue rather than to the principle of least action. One may then distinguish two forms of energy conservation, corresponding either to the smooth changes of classical physics or to the discrete changes of quantum mechanics. Further, both kinds of changes can be equated to each other under a unified energy conservation, and the conditions for the violation of energy conservation can be investigated, pointing toward a certain generalization of energy conservation.
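For reference, the equivalence this abstract turns on is a standard theorem of set theory; the compact statement below is my own formulation in conventional notation, not taken from the paper itself:

```latex
% Axiom of Choice (AC): every family of nonempty sets admits a choice function.
\textbf{AC:}\quad \forall \mathcal{F}\;\bigl(\varnothing \notin \mathcal{F}
  \;\Rightarrow\; \exists f\colon \mathcal{F}\to \textstyle\bigcup\mathcal{F}
  \ \text{with}\ f(S)\in S \ \text{for all}\ S\in\mathcal{F}\bigr).

% Well-Ordering Theorem (WO): every set can be well-ordered.
\textbf{WO:}\quad \forall X\;\exists{\leq}\;\bigl({\leq}\ \text{well-orders}\ X\bigr).

% Zermelo (1904): over ZF, the two are equivalent.
\textbf{Theorem (Zermelo).}\quad \mathrm{ZF} \vdash \mathrm{AC} \leftrightarrow \mathrm{WO}.
```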
This chapter draws an analogy between computing mechanisms and autopoietic systems, focusing on the non-representational status of both kinds of system (computational and autopoietic). It will be argued that the role played by input and output components in a computing mechanism closely resembles the relationship between an autopoietic system and its environment, and in this sense differs from the classical understanding of inputs and outputs. The analogy helps to make sense of why we should think of computing mechanisms as non-representational, and might also facilitate reconciliation between computational and autopoietic/enactive approaches to the study of cognition.
The conspicuous similarities between interpretive strategies in classical statistical mechanics and in quantum mechanics may be grounded on their employment of common implementations of probability. The objective probabilities which represent the underlying stochasticity of these theories can be naturally associated with three of their common formal features: initial conditions, dynamics, and observables. Various well-known interpretations of the two theories line up with particular choices among these three ways of implementing probability. This perspective has significant application to debates on primitive ontology and to the quantum measurement problem.
Bohmian mechanics is a realistic interpretation of quantum theory. It shares the ontology of classical mechanics: particles following continuous trajectories in space through time. Given this ontological continuity, it seems a good candidate for recovering the classical limit of quantum theory. Indeed, in a Bohmian framework, the issue of the classical limit reduces to showing how classical trajectories can emerge from Bohmian ones under specific classicality assumptions. In this paper, we focus on a technical problem that arises from the dynamics of a Bohmian system in bounded regions, and we suggest that a possible solution is supplied by the action of environmental decoherence. However, we show that, in order to implement decoherence in a Bohmian framework, a condition stronger than the usual one is required.
The present paper discusses the problem of quantum-mechanical properties of a subject’s consciousness. The model of generalized economic measurements is used for the analysis. Two types of such measurements are analyzed: transactions and technologies. Algebraic relations between the technology-type measurements allow an analogy with slit experiments in physics. It is shown that the results of such measurements can be described both in the classical and in the quantum formalism for calculating probabilities. Thus, the quantum-mechanical formalism for describing states appears as a result of an idealization of the selection mechanism in the proprietor's consciousness.
With the advent of quantum mechanics in the early 20th century, a great revolution took place in science. The philosophical foundations of classical physics collapsed, and controversial conceptual issues arose: can the quantum-mechanical description of physical reality be considered complete? Are the objects of nature inseparable? Do objects lack a definite location before measurement, and are there non-causal quantum jumps? As time passed, not only did the controversies not diminish but, with the decline of positivism, they received more attention. This book, written in Persian, attempts to explain these issues and controversies and their philosophical foundations as simply and critically as possible for students interested in the philosophical foundations of quantum mechanics.
Definitions I presented in a previous article as part of a semantic approach in epistemology assumed that the concept of derivability from standard logic holds across all mathematical and scientific disciplines. The present article argues that this assumption is not true for quantum mechanics (QM), by showing that concepts of validity applicable to proofs in mathematics and in classical mechanics are inapplicable to proofs in QM. Because semantic epistemology must include this important theory, revision is necessary. The revision I propose also extends semantic epistemology beyond the ‘hard’ sciences. The article ends by presenting, and then refuting, some responses QM theorists might make to my arguments.
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counterexample. Since the Second Law is taken by many to provide an arrow of time, the threat to its universality threatens the account of temporal directionality as well. Various attempts to “exorcise” the Demon, by proving it impossible for one reason or another, have been made throughout the years, but none has been successful. We have shown (in a number of publications), by a general state-space argument, that Maxwell’s Demon is compatible with classical mechanics, and that the most recent solutions, based on Landauer’s thesis, are not general. In this paper we demonstrate that Maxwell’s Demon is also compatible with quantum mechanics. We do so by analyzing a particular (though highly idealized) experimental setup and proving that it violates the Second Law. Our discussion is in the framework of standard quantum mechanics; we give two separate arguments, in the frameworks of quantum mechanics with and without the projection postulate. In our analysis we address the connection between measurement and erasure interactions, and we show how these notions apply within the microscopic quantum-mechanical structure. We discuss what might be the quantum-mechanical counterpart of the classical notion of “macrostates”, thus explaining why our Quantum Demon setup works not only at the micro level but also at the macro level, properly understood. One implication of our analysis is that the Second Law cannot provide a universal lawlike basis for an account of the arrow of time; this account has to be sought elsewhere.
Why does classical equilibrium statistical mechanics work? Malament and Zabell (1980) noticed that, for ergodic dynamical systems, the unique absolutely continuous invariant probability measure is the microcanonical one. Earman and Rédei (1996) replied that the systems of interest are very probably not ergodic, so that absolutely continuous invariant probability measures very distant from the microcanonical exist. In response, I define the generalized properties of epsilon-ergodicity and epsilon-continuity, I review computational evidence indicating that systems of interest are epsilon-ergodic, I adapt Malament and Zabell’s defense of absolute continuity to support epsilon-continuity, and I prove that, for epsilon-ergodic systems, every epsilon-continuous invariant probability measure is very close to the microcanonical one.
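The ergodic behavior underlying this debate can be illustrated numerically. The sketch below is my own illustration, not from the paper; the choice of map, initial conditions, and tolerances are all mine. It checks the signature property of an ergodic system: long-run time averages of an observable agree with the phase average and are independent of the initial condition. For the fully chaotic logistic map x ↦ 4x(1−x), the invariant density is 1/(π√(x(1−x))), whose mean is exactly 1/2.

```python
def time_average(x0, n=200_000, burn_in=1_000):
    """Time average of the observable f(x) = x along an orbit of the
    fully chaotic logistic map x -> 4x(1-x)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

# Ergodicity in action: time averages match the phase average (1/2 for
# this map) and do not depend on where the orbit starts.
for x0 in (0.1234, 0.6789):
    print(f"x0 = {x0}: time average = {time_average(x0):.4f} (phase average = 0.5)")
```

Note the caveat built into the debate: agreement of time and phase averages for one observable on one orbit is evidence of, not proof of, ergodicity, which is exactly why epsilon-scale weakenings of the property are worth defining.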
This chapter looks at Euler’s relation to Newton, and at his role in the rise of ‘Newtonian’ mechanics. It aims to give a sense of Newton’s complicated legacy for Enlightenment science, and to raise awareness that some key ‘Newtonian’ results really come from Euler.
I review a widely accepted argument to the conclusion that the contents of our beliefs, desires, and other mental states cannot be causally efficacious in a classical computational model of the mind. I reply that this argument rests essentially on an assumption about the nature of neural structure that we have no good scientific reason to accept. I conclude that computationalism is compatible with wide semantic causal efficacy, and suggest how the computational model might be modified to accommodate this possibility.
The paper interprets the concept of an “operator in the separable complex Hilbert space” (particularly a “Hermitian operator”, as “quantity” is defined in “classical” quantum mechanics) by means of the concept of “quantum information”. Since the wave function is the characteristic function of the probability (density) distribution over all possible values of a certain quantity to be measured, the definition of quantity in quantum mechanics covers any unitary change of the probability (density) distribution. It can be represented as a particular case of “unitary” qubits. The converse interpretation of qubits as referring to a certain physical quantity implies its generalization to non-Hermitian operators, which are thus neither unitary nor energy-conserving. Their physical sense, loosely speaking, consists in exchanging temporal moments, and they are therefore implemented outside the space-time “screen”. “Dark matter” and “dark energy” can be explained by the same generalization of “quantity” to non-Hermitian operators, only secondarily projected onto the pseudo-Riemannian space-time “screen” of general relativity according to Einstein's “Mach’s principle” and his field equation.
This paper shows how the classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or "toy" model of quantum mechanics over sets (QM/sets). There are two parts. The notion of an "event" is reinterpreted from being an epistemological state of indefiniteness to being an objective state of indefiniteness. And the mathematical framework of finite probability theory is recast as the quantum probability calculus for QM/sets. The point is not to clarify finite probability theory but to elucidate quantum mechanics itself by seeing some of its quantum features in a classical setting.
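The starting point here can be made concrete: with equiprobable outcomes, the conditional probability of one event given another is just the normalized overlap of the two subsets, Pr(S|T) = |S∩T|/|T|. The sketch below is my own minimal illustration of that classical calculus (the die-roll universe and the events are hypothetical examples, not taken from the paper):

```python
from fractions import Fraction

def prob(event, given):
    """Pr(event | given) with equiprobable outcomes: |S ∩ T| / |T|."""
    return Fraction(len(event & given), len(given))

# Universe of a single die roll; events are plain subsets of outcomes.
U = frozenset(range(1, 7))
even = frozenset({2, 4, 6})
high = frozenset({4, 5, 6})

print(prob(even, U))      # unconditional: 3/6, i.e. 1/2
print(prob(even, high))   # conditioned on "high": |{4,6}| / |{4,5,6}| = 2/3
```

Exact rational arithmetic (`Fraction`) keeps the set-counting character of the calculus visible, which is the feature the QM/sets reinterpretation trades on.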
The objective of this report is twofold. In the first place, it aims to demonstrate that a four-dimensional, local U(1) gauge-invariant, relativistic quantum-mechanical Dirac-type equation is derivable from the equations for the classical electromagnetic field. In the second place, the transformational consequences of this local U(1) invariance are used to obtain solutions of different Maxwell equations.
Quantum mechanics makes some very significant observations about nature. Unfortunately, these observations remain a mystery because they do not fit into, and cannot be explained through, classical mechanics. However, we can still explore the philosophical and practical implications of these observations. This article aims to explain the philosophical and practical implications of one of the most important observations of quantum mechanics: uncertainty, or the arbitrariness in the behavior of particles.
Following the pioneering studies of the receptive field (RF), the concept gained further significance for visual perception through the discovery of input effects from beyond the classical RF. These studies demonstrated that neuronal responses can be modulated by stimuli outside their RFs, consistent with the perception of induced brightness, color, orientation, and motion. Lesion scotomata are similarly modulated perceptually from the surround by RFs that have migrated from the interior to the outer edge of the scotoma and in this way provide filling-in of the void. Large RFs are advantageous for this task. In higher visual areas, such as the middle temporal and inferotemporal cortex, RFs increase in size and lose most of their retinotopic organization while encoding increasingly complex features. Whereas lower-level RFs mediate perceptual filling-in, contour integration, and figure-ground segregation, RFs at higher levels serve the perception of grouping by common fate, biological motion, and other biologically relevant stimuli, such as faces. Studies in alert monkeys freely viewing natural scenes showed that classical and nonclassical RFs cooperate in forming representations of the visual world. Today, our understanding of the mechanisms underlying the RF is undergoing a quantum leap. What started out as a hierarchical feedforward concept for simple stimuli, such as spots, lines, and bars, now refers to mechanisms involving ascending, descending, and lateral signal flow. By extension of the bottom-up paradigm, RFs are nowadays understood as adaptive processors, enabling the predictive coding of complex scenes. Top-down effects guiding attention and tuned to task-relevant information complement the bottom-up analysis.
The explicit history of the “hidden variables” problem is well known and established; the main events of its chronology are traced here. An implicit context of that history is suggested: it links the problem with the “conservation of energy conservation” in quantum mechanics. Bohr, Kramers, and Slater (1924) admitted its violation as being due to the “fourth Heisenberg uncertainty”, that of energy in relation to time. Wolfgang Pauli rejected the conjecture and even forecast the existence of a then-unknown elementary particle, the neutrino, on the grounds of energy conservation in quantum mechanics, a prediction afterwards confirmed experimentally. Bohr recognized his defeat and Pauli’s truth: the paradigm of elementary particles (which furthermore underlies the Standard Model) dominates nowadays. However, the reason for energy conservation in quantum mechanics is quite different from that in classical mechanics (the Lie group of all translations in time); indeed, if the reason were the latter, Bohr, Kramers, and Slater’s argument would be valid. The link between the “conservation of energy conservation” and the problem of hidden variables is the following: the former is equivalent to their absence. The same can be verified historically through the unification of Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics in contemporary quantum mechanics by means of the separable complex Hilbert space. The Heisenberg version relies on the vector interpretation of Hilbert space, and the Schrödinger one on the wave-function interpretation. However, the two are equivalent to each other only under the additional condition that a certain well-ordering is equivalent to the corresponding ordinal number (as in von Neumann’s definition of “ordinal number”). The same condition, interpreted in the proper terms of quantum mechanics, means its “unitarity”, and therefore the “conservation of energy conservation”.
In other words, the “conservation of energy conservation” is postulated in the foundations of quantum mechanics by means of the concept of the separable complex Hilbert space, which is furthermore equivalent to postulating the absence of hidden variables in quantum mechanics (directly deducible from the properties of that Hilbert space). Further, the lesson of that unification (of Heisenberg’s approach and Schrödinger’s version) can be directly interpreted in terms of the unification of general relativity and quantum mechanics in the cherished “quantum gravity”, as well as serving as a “manual” for how one can do this by considering them as isomorphic to each other in a new mathematical structure corresponding to quantum information. Even more, the condition of the unification is analogous to that in the historical precedent of the unifying mathematical structure (namely the separable complex Hilbert space of quantum mechanics) and consists in the class of equivalence of any smooth deformations of the pseudo-Riemannian space of general relativity: each element of that class is a wave function, and vice versa. Thus, quantum mechanics can be considered a “thermodynamic version” of general relativity, on which the universe is observed as if from “outside” (similarly to a phenomenological thermodynamic system, observable only from “outside” as a whole). The statistical approach to that “phenomenological thermodynamics” of quantum mechanics implies Gibbs classes of equivalence of all states of the universe, furthermore representable in Boltzmann’s manner, implying general relativity properly. The meta-lesson is that the historical lesson can serve for future discoveries.
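For orientation, the textbook fact connecting unitarity and energy conservation that these abstracts lean on can be stated compactly; this is standard quantum mechanics, not the paper's own formalism. For a time-independent Hamiltonian, the unitary evolution it generates leaves the expectation value of the energy constant:

```latex
% Unitary evolution generated by a time-independent Hamiltonian H:
U(t) = e^{-iHt/\hbar}, \qquad U(t)^{\dagger}\,U(t) = \mathbb{1}.

% Since H commutes with U(t), the energy expectation is conserved:
\langle H \rangle_t
  = \langle \psi_0 |\, U(t)^{\dagger} H\, U(t) \,| \psi_0 \rangle
  = \langle \psi_0 | H | \psi_0 \rangle
  = \langle H \rangle_0 .
```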
In his classic Film as Art, Rudolf Arnheim sets out to refute the claim that “Film cannot be art, for it does nothing but reproduce reality mechanically”.1 The usual argument in favor of that claim, he explains, contrasts film with realist painting, and goes something like this: There’s no doubt that what appears on the canvas depends on the way the painter sees the world, on her particular technique, on the colors she’s using, and so on. It is elements like these that justify aesthetic appreciation. What appears on celluloid, on the other hand, is the result of a purely mechanical process of light rays collecting and transforming into an image. In other words, a camera is merely a mechanical recording device, and for this very reason film cannot be art.
Distinctions in fundamentality between different levels of description are central to the viability of contemporary decoherence-based Everettian quantum mechanics (EQM). This approach to quantum theory characteristically combines a determinate fundamental reality (one universal wave function) with an indeterminate emergent reality (multiple decoherent worlds). In this chapter I explore how the Everettian appeal to fundamentality and emergence can be understood within existing metaphysical frameworks, identify grounding and concept fundamentality as promising theoretical tools, and use them to characterize a system of explanatory levels (with associated laws of nature) for EQM. This Everettian level structure encompasses and extends the ‘classical’ levels structure. The ‘classical’ levels of physics, chemistry, biology, etc. are recovered, but they are emergent in character and potentially variable across Everett worlds. EQM invokes an additional fundamental level, not present in the classical levels picture, and a novel potential role for self-location in interlevel metaphysics. When given a modal realist interpretation, EQM also makes trouble for supervenience-based approaches to levels.
A case study of quantum mechanics is investigated in the framework of the philosophical opposition “mathematical model versus reality”. All classical science obeys the postulate of a fundamental difference between model and reality, thus distinguishing epistemology from ontology fundamentally. The theorems on the absence of hidden variables in quantum mechanics imply that it is “complete” (contrary to Einstein’s opinion). That consistent completeness (unlike that of arithmetic relative to set theory in the foundations of mathematics, in Gödel’s opinion) can furthermore be interpreted as the coincidence of model and reality. The paper discusses the option and the fact of that coincidence at its base: the fundamental postulate formulated by Niels Bohr about what quantum mechanics studies (unlike all classical science). Quantum mechanics involves and develops further both the identification and the disjunctive distinction of the global space of the apparatus and the local space of the investigated quantum entity as complementary to each other. This results in an analogous complementarity of model and reality in quantum mechanics. The apparatus turns out to be both absolutely “transparent” and simultaneously identically coincident with the reflected quantum reality. Thus, the coincidence of model and reality is postulated as a necessary condition for cognition in quantum mechanics by Bohr’s postulate and is further embodied in its formalism of the separable complex Hilbert space, in turn implying the theorems of the absence of hidden variables (or, equivalently, the “conservation of energy conservation” in quantum mechanics). What the apparatus and the measured entity exchange cannot be energy (given their different exponents of energy), but quantum information (as a certain, unambiguously determined wave function), whence a generalized conservation law, of which the conservation of energy conservation is a corollary.
In particular, the local and the global space (rigorously justified in the Standard Model) share the complementarity isomorphic to that of model and reality in the foundation of quantum mechanics. On that background, one can think of the troubles of “quantum gravity” as fundamental, direct corollaries of the postulates of quantum mechanics. Gravity can be defined only as a relation, by a pair of non-orthogonal separable complex Hilbert spaces attachable either to two “parts” or to a whole and its parts. By contrast, all three fundamental interactions in the Standard Model are “flat” and are only “properties”: they need only a single separable complex Hilbert space to be defined.
We argue against claims that the classical ℏ → 0 limit is “singular” in a way that frustrates an eliminative reduction of classical to quantum physics. We show one precise sense in which quantum mechanics and scaling behavior can be used to recover classical mechanics exactly, without making prior reference to the classical theory. To do so, we use the tools of strict deformation quantization, which provides a rigorous way to capture the ℏ → 0 limit. We then use the tools of category theory to demonstrate one way in which this reduction is explanatory: it illustrates a sense in which the structure of quantum mechanics determines that of classical mechanics.
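A standard formal illustration of how classical structure sits inside the ℏ → 0 limit (the formal power-series picture, as opposed to the strict C*-algebraic machinery the paper itself uses) is the Moyal bracket on phase-space functions, which reduces to the Poisson bracket up to corrections of order ℏ²:

```latex
% Moyal bracket of phase-space functions f, g; {.,.} is the Poisson bracket.
\{f, g\}_{\star}
  = \frac{f \star g - g \star f}{i\hbar}
  = \{f, g\} + O(\hbar^{2})
  \;\xrightarrow[\;\hbar \to 0\;]{}\; \{f, g\},
\qquad
\{f, g\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial p}
         - \frac{\partial f}{\partial p}\frac{\partial g}{\partial x}.
```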
The paper takes up Bell's “Everett theory” and develops it further. The resulting theory is about the system of all particles in the universe, each located in ordinary, 3-dimensional space. This many-particle system as a whole performs random jumps through 3N-dimensional configuration space – hence “Tychistic Bohmian Mechanics”. The distribution of its spontaneous localisations in configuration space is given by the Born Rule probability measure for the universal wavefunction. Contra Bell, the theory is argued to satisfy the minimal desiderata for a Bohmian theory within the Primitive Ontology framework. TBM's formalism is that of ordinary Bohmian Mechanics, without the postulate of continuous particle trajectories and their deterministic dynamics. This “rump formalism” receives, however, a different interpretation. We defend TBM as an empirically adequate and coherent quantum theory. Objections voiced by Bell and Maudlin are rebutted. The “for all practical purposes”-classical, Everettian worlds exist sequentially in TBM. In a temporally coarse-grained sense, they quasi-persist. By contrast, the individual particles themselves cease to persist.
In this paper we employ set theory to study the formal aspects of quantum mechanics without explicitly making use of space-time. It is demonstrated that von Neumann and Zermelo numeral sets, previously effectively used in the explanation of Hardy’s paradox, follow a Heisenberg quantum form. Here monadic union plays the role of the time derivative, and the logical counterpart of monadic union plays the part of the Hamiltonian in the commutator. The use of numerals and monadic union in the classical-probability resolution of Hardy’s paradox [1] is supported by the present derivation of a commutator for sets.