This note is an initial foray into the question of how entropy changes with respect to both McTaggart’s A-series and his B-series. We outline a possible solution to the Past Hypothesis problem.
This paper assesses branching spacetime theories in light of metaphysical considerations concerning time. I present the A, B, and C series in terms of the temporal structure they impose on sets of events, and raise problems for two elements of extant branching spacetime theories—McCall’s ‘branch attrition’, and the ‘no backward branching’ feature of Belnap’s ‘branching space-time’—in terms of their respective A- and B-theoretic nature. I argue that McCall’s presentation of branch attrition can only be coherently formulated on a model with at least two temporal dimensions, and that this results in severing the link between branch attrition and the flow of time. I argue that ‘no backward branching’ prohibits Belnap’s theory from capturing the modal content of indeterministic physical theories, and results in it ascribing to the world a time-asymmetric modal structure that lacks physical justification.
This paper presents empirical findings from a set of reasoning and mock jury studies presented at the Experimental Psychology Oxford Seminar Series (2010) and the King's Bench Chambers KBW Barristers Seminar Series (2010). The presentation asks the following questions and presents empirical answers using the Lenses of Evidence Framework (Cowley & Colyer, 2010; see also van Koppen & Wagenaar, 1993):
Why is mental representation important for psychology?
Why is mental representation important for evidence law?
Lens 1: The self representation - Key findings
Lens 2: The expert representation - Key findings
Lens 3: The anchor representation - Key findings
Conclusions & Future directions.
This series of research explores how people represent evidence in mind and presents key findings now cited in the following literatures: Philosophy of Science, Cognitive Expertise, Behavioural Economics, Cognitive Science, Psychology and Public Policy, & Causation and the Law.
How can McTaggart's A-series notion of time be incorporated into physics while retaining the B-series notion? It may be that the A-series 'now' can be construed as ontologically private. How is that modeled? Could a definition of a combined AB-series entropy help with the Past Hypothesis problem? What if the increase in entropy as a system goes from earlier times to later times is canceled by the decrease in entropy as a system goes from future, to present, to past?
"Procedural Justice" offers a theory of procedural fairness for civil dispute resolution. The core idea behind the theory is the procedural legitimacy thesis: participation rights are essential for the legitimacy of adjudicatory procedures. The theory yields two principles of procedural justice: the accuracy principle and the participation principle. The two principles require a system of procedure to aim at accuracy and to afford reasonable rights of participation qualified by a practicability constraint. The Article begins in Part I, Introduction, with two observations. First, the function of procedure is to particularize general substantive norms so that they can guide action. Second, the hard problem of procedural justice corresponds to the following question: How can we regard ourselves as obligated by legitimate authority to comply with a judgment that we believe (or even know) to be in error with respect to the substantive merits? The theory of procedural justice is developed in several stages, beginning with some preliminary questions and problems. The first question - what is procedure? - is the most difficult and requires an extensive answer: Part II, Substance and Procedure, defines the subject of the inquiry by offering a new theory of the distinction between substance and procedure that acknowledges the entanglement of the action-guiding roles of substantive and procedural rules while preserving the distinction between two ideal types of rules. The key to the development of this account of the nature of procedure is a thought experiment, in which we imagine a world with the maximum possible acoustic separation between substance and procedure. Part III, The Foundations of Procedural Justice, lays out the premises of general jurisprudence that ground the theory and answers a series of objections to the notion that the search for a theory of procedural justice is a worthwhile enterprise. 
Sections II and III set the stage for the more difficult work of constructing a theory of procedural legitimacy. Part IV, Views of Procedural Justice, investigates the theories of procedural fairness found explicitly or implicitly in case law and commentary. After a preliminary inquiry that distinguishes procedural justice from other forms of justice, Part IV focuses on three models or theories. The first, the accuracy model, assumes that the aim of civil dispute resolution is correct application of the law to the facts. The second, the balancing model, assumes that the aim of civil procedure is to strike a fair balance between the costs and benefits of adjudication. The third, the participation model, assumes that the very idea of a correct outcome must be understood as a function of process that guarantees fair and equal participation. Part IV demonstrates that none of these models provides the basis for a fully adequate theory of procedural justice. In Part V, The Value of Participation, the lessons learned from analysis and critique of the three models are then applied to the question whether a right of participation can be justified for reasons that are not reducible to either its effect on the accuracy or its effect on the cost of adjudication. The most important result of Part V is the Participatory Legitimacy Thesis: it is (usually) a condition for the fairness of a procedure that those who are to be finally bound shall have a reasonable opportunity to participate in the proceedings. The central normative thrust of Procedural Justice is developed in Part VI, Principles of Procedural Justice. The first principle, the Participation Principle, stipulates a minimum (and minimal) right of participation, in the form of notice and an opportunity to be heard, that must be satisfied (if feasible) in order for a procedure to be considered fair. 
The second principle, the Accuracy Principle, specifies the achievement of legally correct outcomes as the criterion for measuring procedural fairness, subject to four provisos, each of which sets out circumstances under which a departure from the goal of accuracy is justified by procedural fairness itself. In Part VII, The Problem of Aggregation, the Participation Principle and the Accuracy Principle are applied to the central problem of contemporary civil procedure - the aggregation of claims in mass litigation. Part VIII offers some concluding observations about the point and significance of Procedural Justice.
"It is the purpose of this article to attempt to re-examine the account of Thrasymachus' doctrine in Plato's Republic, and to show how it can form a self-consistent whole. [...] In this paper it is maintained that Thrasymachus is holding a form of [natural right]." Note: Volume 40 = new series 9.
Much has been made of Deleuze’s Neo-Leibnizianism; however, not very much detailed work has been done on the specific nature of Deleuze’s critique of Leibniz that positions his work within the broader framework of Deleuze’s own philosophical project. The present chapter undertakes to redress this oversight by providing an account of the reconstruction of Leibniz’s metaphysics that Deleuze undertakes in The Fold. Deleuze provides a systematic account of the structure of Leibniz’s metaphysics in terms of its mathematical underpinnings. However, in doing so, Deleuze draws upon not only the mathematics developed by Leibniz – including the law of continuity as reflected in the calculus of infinite series and the infinitesimal calculus – but also the developments in mathematics made by a number of Leibniz’s contemporaries – including Newton’s method of fluxions – and a number of subsequent developments in mathematics, the rudiments of which can be more or less located in Leibniz’s own work – including the theory of functions and singularities, the theory of continuity and Poincaré’s theory of automorphic functions. Deleuze then retrospectively maps these developments back onto the structure of Leibniz’s metaphysics. While the theory of continuity serves to clarify Leibniz’s work, Poincaré’s theory of automorphic functions offers a solution to overcome and extend the limits that Deleuze identifies in Leibniz’s metaphysics. Deleuze brings this elaborate conjunction of material together in order to set up a mathematical idealization of the system that he considers to be implicit in Leibniz’s work. The result is a thoroughly mathematical explication of the structure of Leibniz’s metaphysics. What is provided in this chapter is an exposition of the very mathematical underpinnings of this Deleuzian account of the structure of Leibniz’s metaphysics, which, I maintain, subtends the entire text of The Fold.
In the chapter of Difference and Repetition entitled ‘Ideas and the synthesis of difference,’ Deleuze mobilizes mathematics to develop a ‘calculus of problems’ that is based on the mathematical philosophy of Albert Lautman. Deleuze explicates this process by referring to the operation of certain conceptual couples in the field of contemporary mathematics: most notably the continuous and the discontinuous, the infinite and the finite, and the global and the local. The two mathematical theories that Deleuze draws upon for this purpose are the differential calculus and the theory of dynamical systems, and Galois’ theory of polynomial equations. For the purposes of this paper I will only treat the first of these, which is based on the idea that the singularities of vector fields determine the local trajectories of solution curves, or their ‘topological behaviour’. These singularities can be described in terms of the given mathematical problematic, that is for example, how to solve two divergent series in the same field, and in terms of the solutions, as the trajectories of the solution curves to the problem. What actually counts as a solution to a problem is determined by the specific characteristics of the problem itself, typically by the singularities of this problem and the way in which they are distributed in a system. Deleuze understands the differential calculus essentially as a ‘calculus of problems’, and the theory of dynamical systems as the qualitative and topological theory of problems, which, when connected together, are determinative of the complex logic of different/ciation. (DR 209). Deleuze develops the concept of a problematic idea from the differential calculus, and following Lautman considers the concept of genesis in mathematics to ‘play the role of model ... with respect to all other domains of incarnation’. 
While Lautman explicated the philosophical logic of the actualization of ideas within the framework of mathematics, Deleuze (along with Guattari) follows Lautman’s suggestion and explicates the operation of this logic within the framework of a multiplicity of domains, including for example philosophy, science and art in What is Philosophy?, and the variety of domains which characterise the plateaus in A Thousand Plateaus. While for Lautman, a mathematical problem is resolved by the development of a new mathematical theory, for Deleuze, it is the construction of a concept that offers a solution to a philosophical problem; even if this newly constructed concept is characteristic of, or modelled on the new mathematical theory.
One of Bell's assumptions in the original derivation of his inequalities was the hypothesis of locality, i.e., the absence of the influence of two remote measuring instruments on one another. That is why violations of these inequalities observed in experiments are often interpreted as a manifestation of the nonlocal nature of quantum mechanics, or a refutation of local realism. It is well known that Bell's inequality can be derived in its traditional form without resorting to the hypothesis of locality and without the introduction of hidden variables, the only assumption being that the probability distributions are nonnegative. This can therefore be regarded as a rigorous proof that the hypothesis of locality and the hypothesis of the existence of hidden variables are not relevant to violations of Bell's inequalities. The physical meaning of the obtained results is examined. The physical nature of the violation of the Bell inequalities is explained under a new EPR-B nonlocality postulate. We show that the correlations of the observables involved in the Bohm–Bell type experiments can be expressed as correlations of classical random variables. The revisited Bell type inequality in canonical notations reads <AB>+<A′B>+<AB′>-<A′B′>≤6.
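As an illustrative aside (a sketch of my own, not the author's construction, with all names hypothetical): the claim that Bohm–Bell correlations can be expressed as correlations of classical random variables invites a toy check. For classical ±1-valued outcomes, the CHSH-style combination <AB>+<A′B>+<AB′>-<A′B′> is bounded by 2 in absolute value, sample-by-sample and hence on average:

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Empirical correlation <XY> of two equal-length ±1-valued sequences."""
    return sum(x * y for x, y in zip(xs, ys)) / len(xs)

# Classical ±1 random variables standing in for outcomes at the four settings.
n = 10_000
A  = [random.choice((-1, 1)) for _ in range(n)]
Ap = [random.choice((-1, 1)) for _ in range(n)]   # A'
B  = [random.choice((-1, 1)) for _ in range(n)]
Bp = [random.choice((-1, 1)) for _ in range(n)]   # B'

# CHSH-style combination <AB> + <A'B> + <AB'> - <A'B'>.
S = (correlation(A, B) + correlation(Ap, B)
     + correlation(A, Bp) - correlation(Ap, Bp))

# For any fixed sample, ab + a'b + ab' - a'b' = a(b + b') + a'(b - b') = ±2,
# so the average S always satisfies |S| <= 2 for classical variables.
print(abs(S) <= 2)  # → True
```

Quantum-mechanical correlations can push this combination up to 2√2; how the bound of 6 quoted in the abstract arises is left to the paper itself.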
This essay series explores the human costs and policy challenges associated with the displacement crises in the Mediterranean and Andaman Seas. The essays explore the myths or misconceptions that have pervaded discussions about these two crises, as well as the constraints or capacity deficiencies that have hampered the responses to them.
This summary note series outlines legal empirical approaches to the study of juries and jury decision-making behaviour for undergraduate students of sociology, criminology and legal systems, and forensic psychology. The note series is divided into two lectures. The first lecture attends to the background relevant to the historical rise of juries and socio-legal methodologies used to understand jury behaviour. The second lecture attends to questions surrounding jury competence, classic studies illustrative of juror bias, and a critical comparison of juries to legal alternatives not reliant on jury deliberation for judicial process. Where appropriate the note series indicates key readings relevant to each core component of the note series, for students to develop their understanding in self-study time.
In a series of recent publications, orofacial researchers have debated the question of how ‘bruxism’ should be defined for the purposes of accurate diagnosis and reliable clinical research. Following the principles of realism-based ontology, we performed an analysis of the arguments involved. This revealed that the disagreements rested primarily on inconsistent use of terms, so that issues of ontology were thus obfuscated by shortfalls in terminology. In this paper, we demonstrate how bruxism terminology can be improved by paying attention to the relationships between (1) particulars and types, and (2) continuants and occurrents.
Over the last decade, multi-agent systems have come to form one of the key technologies for software development. The Formal Approaches to Multi-Agent Systems (FAMAS) workshop series brings together researchers from the fields of logic, theoretical computer science and multi-agent systems in order to discuss formal techniques for specifying and verifying multi-agent systems. FAMAS addresses the issues of logics for multi-agent systems, formal methods for verification, for example model checking, and formal approaches to cooperation, multi-agent planning, communication, coordination, negotiation, games, and reasoning under uncertainty in a distributed environment. In 2007, the third FAMAS workshop, FAMAS'007, was one of the agent workshops gathered together under the umbrella of Multi-Agent Logics, Languages, and Organisations - Federated Workshops, MALLOW'007, taking place from 3 to 7 September 2007 in Durham. This current special issue of the Logic Journal of the IGPL gathers together the revised and updated versions of the five best FAMAS'007 contributions.
This is the first volume to offer a systematic consideration and comprehensive overview of Christianity’s long engagement with the Platonic philosophical tradition. The book offers a detailed consideration of the most fertile sources and concepts in Christian Platonism, a historical contextualization of its development, and a series of constructive engagements with central questions. Bringing together a range of leading scholars, the volume guides readers through each of these dimensions, uniquely investigating and explicating one of the most important, controversial, and often misunderstood elements of Christian intellectual history.
Russell’s second philosophy of time (1899–1913), which will be the subject of this paper, is of special interest for two reasons. (1) It was basic to his New Philosophy, later called the “philosophy of logical atomism”. In fact, this philosophy didn’t initially emerge in the period of 1914–1919, as many interpreters (e.g. A. J. Ayer) suggest, but with the introduction of Russell’s second philosophy of time (and space). The importance of Russell’s second philosophy of time for his early and middle philosophy can be seen from the fact that it survived the dramatic changes in his philosophy of August–December 1900, and of July 1905. There is of course no surprise about this point: it served as their foundation. (2) Russell’s second philosophy of time is a locus classicus of all so-called B-theories of time, which define time in terms of the relations of before, after and simultaneous between events or moments.
Intended and merely foreseen consequences: The psychology of the ‘cause or allow’ offence. A short report for the Socio-Legal Community on ESRC Grant RES-000-22-3114.
Similarity and difference, patterns of variation, consistency and coherence: these are the reference points of the philosopher. Understanding experience, exploring ideas through particular instantiations, novel and innovative thinking: these are the reference points of the artist. However, at certain points in the proceedings of our Symposium, titled Next to Nothing: Art as Performance, this characterisation of philosopher and artist respectively might have been construed the other way around. The commentator/philosophers referenced their philosophical interests through the particular examples/instantiations created by the artist and in virtue of which they were then able to engage with novel and innovative thinking. From the artists’ presentations, on the other hand, emerged a series of contrasts within which philosophical and artistic ideas resonated. This interface of philosopher-artist bore witness to the fact that just as art approaches philosophy in providing its own analysis, philosophy approaches art in being a co-creator of art’s meaning. In what follows, we discuss the conception of philosophy-art that emerged from the Symposium, and the methodological minimalism which we employed in order to achieve it. We conclude by drawing out an implication of the Symposium’s achievement, which is that a counterpoint to Institutional theories of art may well be the point from which future directions will take hold, if philosophy-art gains traction.
Girolamo Saccheri (1667--1733) was an Italian Jesuit priest, scholastic philosopher, and mathematician. He earned a permanent place in the history of mathematics by discovering and rigorously deducing an elaborate chain of consequences of an axiom-set for what is now known as hyperbolic (or Lobachevskian) plane geometry. Reviewer's remarks: (1) On two pages of this book Saccheri refers to his previous and equally original book Logica demonstrativa (Turin, 1697) to which 14 of the 16 pages of the editor's "Introduction" are devoted. At the time of the first edition, 1920, the editor was apparently not acquainted with the secondary literature on Logica demonstrativa which continued to grow in the period preceding the second edition \ref[see D. J. Struik, in Dictionary of scientific biography, Vol. 12, 55--57, Scribner's, New York, 1975]. Of special interest in this connection is a series of three articles by A. F. Emch [Scripta Math. 3 (1935), 51--60; Zbl 10, 386; ibid. 3 (1935), 143--152; Zbl 11, 193; ibid. 3 (1935), 221--333; Zbl 12, 98]. (2) It seems curious that modern writers believe that demonstration of the "nondeducibility" of the parallel postulate vindicates Euclid whereas at first Saccheri seems to have thought that demonstration of its "deducibility" is what would vindicate Euclid. Saccheri is perfectly clear in his commitment to the ancient (and now discredited) view that it is wrong to take as an "axiom" a proposition which is not a "primal verity", which is not "known through itself". So it would seem that Saccheri should think that he was convicting Euclid of error by deducing the parallel postulate. The resolution of this confusion is that Saccheri thought that he had proved, not merely that the parallel postulate was true, but that it was a "primal verity" and, thus, that Euclid was correct in taking it as an "axiom". As implausible as this claim about Saccheri may seem, the passage on p. 
237, lines 3--15, seems to admit of no other interpretation. Indeed, Emch takes it this way. (3) As has been noted by many others, Saccheri was fascinated, if not obsessed, by what may be called "reflexive indirect deductions", indirect deductions which show that a conclusion follows from given premises by a chain of reasoning beginning with the given premises augmented by the denial of the desired conclusion and ending with the conclusion itself. It is obvious, of course, that this is simply a species of ordinary indirect deduction; a conclusion follows from given premises if a contradiction is deducible from those given premises augmented by the denial of the conclusion---and it is immaterial whether the contradiction involves one of the premises, the denial of the conclusion, or even, as often happens, intermediate propositions distinct from the given premises and the denial of the conclusion. Saccheri seemed to think that a proposition proved in this way was deduced from its own denial and, thus, that its denial was self-contradictory (p. 207). Inference from this mistake to the idea that propositions proved in this way are "primal verities" would involve yet another confusion. The reviewer gratefully acknowledges extensive communication with his former doctoral students J. Gasser and M. Scanlan. ADDED 14 March 2015: (1) Wikipedia reports that many of Saccheri's ideas have a precedent in the 11th Century Persian polymath Omar Khayyám's Discussion of Difficulties in Euclid, a fact ignored in most Western sources until recently. It is unclear whether Saccheri had access to this work in translation, or developed his ideas independently. (2) This book is another exemplification of the huge difference between indirect deduction and indirect reduction. Indirect deduction requires making an assumption that is inconsistent with the premises previously adopted. This means that the reasoner must perform a certain mental act of assuming a certain proposition. 
In case the premises are all known truths, indirect deduction—which would then be indirect proof—requires the reasoner to assume a falsehood. This fact has been noted by several prominent mathematicians including Hardy, Hilbert, and Tarski. Indirect reduction requires no new assumption. Indirect reduction is simply a transformation of an argument in one form into another argument in a different form. In an indirect reduction one proposition in the old premise set is replaced by the contradictory opposite of the old conclusion and the new conclusion becomes the contradictory opposite of the replaced premise. Roughly and schematically, P,Q/R becomes P,~R/~Q or ~R, Q/~P. Saccheri’s work involved indirect deduction not indirect reduction. (3) The distinction between indirect deduction and indirect reduction has largely slipped through the cracks, the cracks between medieval-oriented logic and modern-oriented logic. The medievalists have a heavy investment in reduction and, though they have heard of deduction, they think that deduction is a form of reduction, or vice versa, or in some cases they think that the word ‘deduction’ is the modern way of referring to reduction. The modernists have no interest in reduction, i.e. in the process of transforming one argument into another having exactly the same number of premises. Modern logicians, like Aristotle, are concerned with deducing a single proposition from a set of propositions. Some focus on deducing a single proposition from the null set—something difficult to relate to reduction.
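As an aside, the reviewer's schematic claim that P,Q/R transforms into P,~R/~Q (and ~R,Q/~P) with validity preserved can be checked mechanically for truth-functional arguments. The following is a minimal sketch of my own, not the reviewer's, using modus ponens as the sample argument form:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff every assignment making all premises true
    also makes the conclusion true."""
    for p, q, r in product((True, False), repeat=3):
        env = {'P': p, 'Q': q, 'R': r}
        if all(prem(env) for prem in premises) and not conclusion(env):
            return False
    return True

# Sample argument form: P, Q / R with Q := (P -> R), i.e. modus ponens.
P    = lambda e: e['P']
Q    = lambda e: (not e['P']) or e['R']   # P -> R
R    = lambda e: e['R']
notP = lambda e: not P(e)
notQ = lambda e: not Q(e)
notR = lambda e: not R(e)

original  = valid([P, Q], R)        # P, Q / R
reduced_1 = valid([P, notR], notQ)  # P, ~R / ~Q
reduced_2 = valid([notR, Q], notP)  # ~R, Q / ~P
print(original, reduced_1, reduced_2)  # → True True True
```

Note that the reduction replaces one premise rather than deriving a contradiction from an added assumption, which matches the reviewer's distinction between indirect reduction and indirect deduction.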
There appears to be a temporal analogue to the Knowledge argument. If correct, it could be read as an argument that the B-theory is false: time is not completely described by McTaggart's B-series. We analyse the temporal knowledge argument in terms of Chalmers's 2-dimensional semantics. An adaptation of the most popular response to the Knowledge argument indicates that McTaggart's A-series and B-series have different modes of presentation.
The purpose of this latest version of this note is to make another attempt to show how an 'AB-series' interpretation of time, given in a companion paper, leads, surprisingly, to what appears to be the signature of the physicists' important AdS_5 geometry. This is not a theory of 2 time dimensions. Rather, it is a theory of 1 time dimension that has both A-series and B-series characteristics.
I accept that McTaggart's A-series and B-series are not inter-reducible and that both are needed for a complete temporal description of a physical system. I consider the Wigner's Friend thought experiment. An A-series is associated with each (quantum) system, and relativity is associated with the B-series. I consider temporal evolution through this 'hybrid' time. We may define the rate of temporal flow as 1 B-series second per A-series second.
Non-locality is one of the great mysteries of quantum mechanics (qm). There is a new realist interpretation of qm on the table whose notion of time incorporates both McTaggart's A-series and B-series. In this philosophically motivated interpretation there is no fact of the matter as to whether the 'now' of one system is the 'now' of another system, until measurement. But this reproduces the idea that the spins of a Bell pair of electrons do not become definite 'until' measurement. And this almost trivially allows for non-locality.
We motivate and develop a perspectival A-theory of time (future/present/past) and probe its implied interpretation of quantum mechanics. It will emerge that, as a first take, the time of relativity is a B-series (earlier-times to later-times) and the time of quantum mechanics is an A-series. There is philosophical motivation for the idea that mutual quantum measurement happens when and only when the systems’ A-series become one mutual A-series, as in the way qualia work in the Inverted Spectrum. This seems to account for certain quantum phenomena, including that the electrons of a Bell pair do not have definite spins until measurement, as ‘until’ here is a (perspectival) A-series notion. Various issues in the foundations of quantum mechanics are canvassed.
This paper proposes an interpretation of time that is an 'A-theory' in that it incorporates both McTaggart's A-series and his B-series. The A-series characteristics are supposed to be 'ontologically private', analogous to qualia in the problem of other minds, and are given a definition. The main idea is that the experimenter and the cat do not share the same A-series characteristics, e.g. the same 'now'. So there is no single time at which the cat gets ascribed different states. It is proposed that one may define a 'unit of becoming' that coordinatizes the future/present/past 'private' spectrum as well as allowing one to calculate the rates of becoming. Relativity is briefly considered.
An important theme running through D.H. Mellor’s work is his realism, or as I shall call it, his objectivism: the idea that reality as such is how it is, regardless of the way we represent it, and that philosophical error often arises from confusing aspects of our subjective representation of the world with aspects of the world itself. Thus central to Mellor’s work on time has been the claim that the temporal A-series (previously called ‘tense’) is unreal while the B-series (the series of ‘dates’) is real. The A-series is something which is a product of our representation of the world, but not a feature of reality itself. And in other, less central, areas of his work, this kind of theme has been repeated: ‘Objective decision making’ (1980) argues that the right way to understand decision theory is as a theory of what is the objectively correct decision, the one that will actually as a matter of fact achieve your intended goal, rather than the one that is justified purely in terms of what you believe, regardless of whether the belief is true or false. ‘I and now’ (1989) argues against a substantial subjective conception of the self, using analogies between subjective and objective ways of thinking about time and subjective and objective ways of thinking about the self. And in the paper which shall be the focus of my attention here, ‘Nothing like experience’ (1992), Mellor contests arguments which try to derive anti-physicalist conclusions from reflections on the subjective character of experience. A common injunction is detectable: when doing metaphysics, keep the subjective where it belongs: inside the subject’s representation of the world.
This paper develops a Fragmentalist theory of Presentism and shows how it can help to develop an interpretation of quantum mechanics. There are several fragmental interpretations of physics. In the interpretation of this paper, each quantum system forms a fragment, and fragment f1 makes a measurement on fragment f2 if and only if f2 makes a corresponding measurement on f1. The main idea is then that each fragment has its own present (or ‘now’) until a mutual quantum measurement—at which time they come (‘become’) to share the same ‘now’. The theory of time developed here will make use of both McTaggart’s A-series (in the form of future-present-past) and B-series (earlier-times to later-times). An example of an application is that a Bell pair of electrons does not take on definite spin values until measurement, because the measuring system and the Bell pair do not share the same present (‘now’) until mutual quantum measurement, i.e. until they come (‘become’) to share the same A-series. Before that point the ‘now’ of the opposing system is not in the reference system’s fragment. Relativistic no-signaling is preserved within each fragment, which will turn out to be sufficient for the general case. Several issues in the foundations of quantum mechanics are canvassed, including Schrödinger’s cat, the Born rule, modifications to Minkowski space that accommodate both the A-series and the B-series, and entropy.
McTaggart distinguished two conceptions of time: the A-series, according to which events are either past, present or future; and the B-series, according to which events are merely earlier or later than other events. Elsewhere, I have argued that these two views, ostensibly about the nature of time, need to be reinterpreted as two views about the nature of the universe. According to the so-called A-theory, the universe is three dimensional, with a past and future; according to the B-theory, the universe is four dimensional. Given special relativity (SR), we are obliged, it seems, to accept (a modified version of) the B-series, four dimensional view, and reject the A-series, three dimensional view, because SR denies that there is a privileged, instantaneous cosmic "now" which seems to be required by the A-theory. Whether this is correct or not, it is important to remember that the fundamental problem, here, is not "What does SR imply?", but rather "What is the best guess about the ultimate nature of the universe in the light of current theoretical knowledge in physics?". In order to know how to answer this question, we need to have some inkling as to how the correct theory of quantum gravity incorporates quantum theory, probability and time. This is, at present, an entirely open question. String theory, or M-theory, seems to evade the issue, and other approaches to quantum gravity seem equally evasive. However, if probabilism is a fundamental feature of ultimate physical reality, then it may well be that the A-theory, or rather a closely related doctrine I call “objectism”, is built into the ultimate constitution of things.
This paper proposes an interpretation of time that incorporates both McTaggart's A-series and his B-series, and attempts to cast it in a way that might be usable by physicists. This interpretation allows one to reconcile special relativity with temporal becoming when the latter is understood as 'ontologically private', a notion which is given a mathematical definition. This allows one to define a unit of becoming, as well as rates of becoming. This paper gives a picture of this interpretation and applies a rough outline of the concepts to several test cases.
This article proposes an interpretation of time that incorporates both McTaggart's A-series and his B-series, and tries to cast it in a role that could be useful to physicists. This AB-series allows one to reconcile special relativity with temporal becoming if the latter is understood as 'ontologically private', which is given a mathematical definition. This allows one to define a unit of becoming, as well as rates of becoming. This article gives a picture of this interpretation.
The purpose of this note is to show how an 'AB-series' interpretation of time, given in a companion paper, leads, surprisingly, directly to the physicists' important AdS_5 geometry. This is not a theory of 2 time dimensions. Rather, it is a theory of 1 time dimension that has both B-series and A-series characteristics. To summarize the result: a spacetime described in terms of (1) the earlier-to-later aspect of time, (2) the related future-present-past aspect of time, and (3) 3-d space automatically gives us AdS_5.
The purpose of this note is to show how an 'AB-series' interpretation of time, given in a companion paper, leads, surprisingly, to AdS_5 geometry. This is not a theory of 2 time dimensions. Rather, it is a theory of 1 time dimension that has both A-series and B-series characteristics.
The purpose of this note is to show how an 'AB-series' interpretation of time leads, surprisingly, to what appears to be AdS_5 geometry. This is not a theory of 2 time dimensions. Rather, it is a theory of 1 time dimension that has both A-series and B-series characteristics. To summarize the result: a spacetime described in terms of (1) the earlier-to-later aspect of time, (2) the (related) future-present-past aspect of time, and (3) 3-d space, it would seem, gives us the AdS_5 geometry.
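For reference, AdS_5 is standardly presented as a hyperboloid embedded in a flat six-dimensional space whose metric has two timelike directions. The following is the textbook form only; the identification of the two timelike embedding directions with the B-series and A-series aspects of time is the note's own proposal, not established physics:

```latex
% Standard embedding of AdS_5 in flat R^{2,4}
% (two timelike embedding coordinates, X_{-1} and X_0)
\[
  -X_{-1}^{2} - X_{0}^{2} + X_{1}^{2} + X_{2}^{2} + X_{3}^{2} + X_{4}^{2} = -R^{2},
\]
% with the metric induced from the flat ambient line element
\[
  ds^{2} = -dX_{-1}^{2} - dX_{0}^{2} + dX_{1}^{2} + dX_{2}^{2} + dX_{3}^{2} + dX_{4}^{2}.
\]
```

The intrinsic dimension of the hyperboloid is five, matching the count of 1 + 1 + 3 variables summarized above.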
The motivation comes from the analogy (equivalence?) of the A-series to ontologically private qualia in Dualism. This leads to the proposal that two quantum systems, no matter how small, mutually observe each other when and only when they come to share the same A-series. McTaggart's A-series and B-series can be varied independently, so they cannot be the same temporal variable.
This paper proposes an interpretation of time that is an 'A-theory' in that it incorporates both McTaggart's A-series and his B-series. The A-series characteristics are supposed to be 'ontologically private', analogous to qualia in the Inverted Spectrum thought experiment, and are given a definition. It is proposed that one may define a 'unit of becoming' that coordinatizes the future/present/past spectrum, as well as allowing one to calculate rates of becoming. We give a picture of this interpretation and discuss how it relates to the Schrödinger's Cat paradox.
We give a mathematical definition of the present, or 'what is real', and its duration on McTaggart's A-series (future/present/past). This is applicable to at least one conception of the block-world, to the growing-block, and to presentism.
Philosophical views about the logical structure of time are typically divided between proponents of A and B theories, based on McTaggart's A and B series. Drawing on Paul Ricoeur's hermeneutic phenomenology, I develop and defend McTaggart's thesis that the C series and the A series working together give a consistent description of temporal experience, provided that the two series are treated as distinct dimensions internal to time. In the proposed two-dimensional model, the C series expresses a nesting order of the constitutive states of a world, whereas ontological continuity and change are properties of the A series. This, I argue, allows for limited backward causation.
We give an apparently new possible explanation for why there might be something rather than nothing: the weakest assumptions coupled with a kind of perspectivalism. Within a Fragmentalist interpretation of quantum mechanics (each quantum-mechanical system forms a fragment), McTaggart’s A-series of time has this kind of perspectivalism. We then use the A-series and the B-series to differentiate between how far in the past the big bang was vs. how much earlier than now the big bang was. In one example model, the former goes to infinity while the latter stays finite. This implies the number of quantum interactions per unit 4-volume goes to infinity as we approach the big bang from the present epoch.
We start by asking the question of ‘why there is something rather than nothing’ and change this to the question of ‘what are the weakest assumptions for existence’ (Eagle [1]). Then we give a kind of Fragmental Perspectivalism. Within this Fragmentalist interpretation of quantum mechanics (each quantum-mechanical system forms a fragment; Merriam [2]), it turns out that McTaggart’s [3] A-series of time (the A-series runs from the future to the present to the past) has a kind of perspectivalism. We then use McTaggart’s A-series and B-series of time (the B-series runs from earlier times to later times) to differentiate between how far in the past the big bang was vs. how much earlier than now the big bang was. In one example model, the former goes infinitely far into the past while the latter stays finitely earlier-than. In this model the number of quantum interactions per unit 4-volume goes to infinity as the big bang is approached from the present epoch.
It is often thought that the relativity of simultaneity is inconsistent with presentism. This would be troubling, as it conflicts with common sense and, arguably, the empirical data. This note gives a novel fragmentalist-presentist theory that allows for the (non-trivial) relativity of simultaneity. A detailed account of the canonical moving-train argument is considered. Alice, standing at the train station, forms her own ontological fragment, in which Bob’s frame of reference, given by the moving train, is modified by the Lorentz transformations. On the other hand, Bob, in the train, forms his own ontological fragment, from which Alice’s space and time are modified by the corresponding Lorentz transformations. Each fragment accommodates a unique present moment but does not contain information about the unique present moment of another fragment. This allows for a ‘universal’ present moment that extends throughout space, but only from the perspective of each fragment. The relativity of simultaneity is, as it were, ‘relativised’ to each fragment. This is related to the idea that, roughly speaking, the time of relativity is McTaggart’s (1908) B-series (earlier times to later times) and the time of quantum mechanics is a (fragmentalist) A-series (future/present/past), where these two related series characterize one dimension of time.
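The Lorentz transformations invoked in the train example are the standard special-relativistic ones; for a boost with relative speed v along the x-axis (a textbook formula, not specific to the fragmentalist proposal):

```latex
\[
  t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right), \qquad
  x' = \gamma\,(x - v t), \qquad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\]
```

Within each fragment, the other observer's coordinates are obtained by applying this map; the relativity of simultaneity enters through the vx/c^2 term, which makes the transformed time depend on position.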
This paper proposes an interpretation of time that is an 'A-theory' in that it incorporates both McTaggart's A-series and his B-series. The A-series characteristics are supposed to be 'ontologically private', analogous to qualia in the problem of other minds (as in the Inverted Spectrum thought experiment), and are given a definition. The main idea is then that the experimenter and the cat do not, to some extent, share the same A-series characteristics, e.g. the same 'now'. So there is no single time at which the cat gets ascribed different states, one by the experimenter and one by the cat. It is also proposed that one may define an ontologically private 'unit of becoming' that coordinatizes the future/present/past A-series spectrum, as well as allowing one to calculate rates of becoming in seconds. The latter are taken to measure differences in B-series times.
We define and develop a notion of spacetime that incorporates both McTaggart's A-series and his B-series and that is consistent with special relativity. This 'McTaggartian spacetime' or 'AB-spacetime' requires *five*, not *four*, variables. The interface of two AB-spacetimes from different *ontological perspectives* is quantum mechanical. This note concentrates on the physics rather than the philosophy. It is an invitation to contribute to a theory that is a work in progress.
This paper proposes an interpretation of time that is an 'A-theory' in that it incorporates both McTaggart's A-series and his B-series. The A-series characteristics are supposed to be 'ontologically private', analogous to qualia in the Inverted Spectrum thought experiment, and are given a definition. The main idea is that the experimenter and the cat do not share the same A-series characteristics. So there is no single time at which the cat gets ascribed different states. It is proposed that one may define a 'unit of becoming' that coordinatizes the future/present/past spectrum, as well as allowing one to calculate rates of becoming. We give a picture of this interpretation and discuss how it relates to the Schrödinger's Cat 'paradox'. Relativity is also briefly considered.
A theory of time was proposed in "A theory of time", an early version of which is on PhilPapers. The idea was that the A-series features of a physical system are ontologically private, and this was given a mathematical definition. B-series features, by contrast, are ontologically public. This brief note is a detailed rumination on path integrals and Schrödinger's Cat within this theory.
We motivate and develop an A-theory of time and probe its implied interpretation of quantum mechanics. It will emerge that, as a first take, the time of relativity is a B-series and the time of quantum mechanics is an A-series. There is philosophical motivation for the idea that mutual quantum measurement happens when and only when the systems’ A-series become one mutual A-series. This accounts almost trivially for many quantum phenomena, including the fact that the electrons of a Bell pair do not have definite spins ‘until’ measurement, as ‘until’ here is an A-series notion. This version of the paper is in the process of being re-formatted for submission to a journal.
The intention of this critical review of McTaggart’s 1908 paper is to bring about a distinction between Time and Motion. This distinction is crucial to our understanding of both time and motion because so far they have been treated by all as one and the same. McTaggart, by at least recognizing two different “series”, which he calls the A-series and the B-series, has given us a starting point from which to further understand this distinction. In the process of establishing this distinction we will find ourselves encountering new concepts hitherto unrecognized in philosophy and physics, and eventually understand the basis of the numerous paradoxes that time and motion involve, dating all the way back to Zeno.
In a series of pre-registered studies, we explored (a) the difference between people’s intuitions about indeterministic scenarios and their intuitions about deterministic scenarios, (b) the difference between people’s intuitions about indeterministic scenarios and their intuitions about neurodeterministic scenarios (that is, scenarios where the determinism is described at the neurological level), (c) the difference between people’s intuitions about neutral scenarios (e.g., walking a dog in the park) and their intuitions about negatively valenced scenarios (e.g., murdering a stranger), and (d) the difference between people’s intuitions about free will and responsibility in response to first-person scenarios and third-person scenarios. We predicted that once we focused participants’ attention on the two different abilities to do otherwise available to agents in indeterministic and deterministic scenarios, their intuitions would support natural incompatibilism—the view that laypersons judge that free will and moral responsibility are incompatible with determinism. This prediction was borne out by our findings.
John Ellis McTaggart defended an idealistic view of time in the tradition of Hegel and Bradley. His famous paper makes two independent claims (McTaggart 1908): first, time is a complex conception with two different logical roots; second, time is unreal. To reject the second claim seems to commit one to the first, i.e., to a pluralistic account of time. We compare McTaggart's views to the most important concepts of time investigated in physics, neurobiology, and philosophical phenomenology. They indicate that a unique, reductionist account of time is far from plausible, even though too many conceptions of time may seem unsatisfactory from an ontological point of view.