A fundamental problem in science is how to make logical inferences from scientific data. Mere data does not suffice, since additional information is necessary to select a domain of models or hypotheses and thus determine the likelihood of each model or hypothesis. Thomas Bayes’ Theorem relates the data and prior information to posterior probabilities associated with differing models or hypotheses and thus is useful in identifying the roles played by the known data and the assumed prior information when making inferences. Scientists, philosophers, and theologians accumulate knowledge when analyzing different aspects of reality and search for particular hypotheses or models to fit their respective subject matters. Of course, a main goal is then to integrate all kinds of knowledge into an all-encompassing worldview that would describe the whole of reality. A generous description of the whole of reality would span, in order of complexity, from the purely physical to the supernatural. These two extreme aspects of reality are bridged by a nonphysical realm, which would include elements of life, man, consciousness, rationality, mental and mathematical abstractions, etc. An urgent problem in the theory of knowledge is what science is and what it is not. Albert Einstein’s notion of science in terms of sense perception is refined by defining operationally the data that make up the subject matter of science. It is shown, for instance, that the theological considerations included in the prior information assumed by Isaac Newton are irrelevant in relating the data logically to the model or hypothesis. In addition, the concepts of naturalism, intelligent design, and evolutionary theory are critically analyzed. Finally, Eugene P. Wigner’s suggestions concerning the nature of human consciousness, life, and the success of mathematics in the natural sciences are considered in the context of the creative power endowed in humans by God.
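For reference, the standard form of Bayes' theorem that this abstract appeals to, relating a hypothesis H, data D, and prior information I (a textbook identity, not anything specific to the paper), is:

```latex
P(H \mid D, I) = \frac{P(D \mid H, I)\, P(H \mid I)}{P(D \mid I)},
\qquad
P(D \mid I) = \sum_{k} P(D \mid H_k, I)\, P(H_k \mid I) .
```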
Medical diagnosis has traditionally been recognized as a privileged field of application for so-called probabilistic induction. Consequently, Bayes’ theorem, which mathematically formalizes this form of inference, has been seen as the most adequate tool for quantifying the uncertainty surrounding the diagnosis by providing probabilities of different diagnostic hypotheses, given symptomatic or laboratory data. On the other hand, it has also been remarked that differential diagnosis rather works by exclusion, e.g. by modus tollens, i.e. deductively. By drawing on a case history, this paper aims at clarifying some points on the issue. Namely: 1) medical diagnosis does not represent, strictly speaking, a form of induction, but a type of what in Peircean terms should be called ‘abduction’ (identifying a case as the token of a specific type); 2) in performing the single diagnostic steps, however, different inferential methods are used, of both inductive and deductive nature: modus tollens, the hypothetical-deductive method, abduction; 3) Bayes’ theorem is a probabilized form of abduction which uses mathematics in order to justify the degree of confidence which can be entertained in a hypothesis given the available evidence; 4) although theoretically irreconcilable, in practice both the hypothetical-deductive method and the Bayesian one are used in the same diagnosis with no serious compromise for its correctness; 5) medical diagnosis, especially differential diagnosis, also uses a kind of “probabilistic modus tollens”, in that signs (symptoms or laboratory data) are taken as strong evidence that a given hypothesis is not true: the focus is not on hypothesis confirmation, but instead on its refutation [Pr(¬H | E1, E2, …, En)]. Especially at the beginning of a complicated case, the odds are between the hypothesis that is potentially being excluded and a vague “other”. This procedure has the advantage of providing a clue as to what evidence to look for and of eventually reducing the set of candidate hypotheses if conclusive negative evidence is found; 6) Bayes’ theorem in the hypothesis-confirmation form can more faithfully, although idealistically, represent the medical diagnosis when the diagnostic itinerary has come to a reduced set of plausible hypotheses after a process of progressive elimination of candidate hypotheses; 7) Bayes’ theorem is however indispensable in the case of litigation, in order to assess a doctor’s responsibility for medical error by taking into account the weight of the evidence at his disposal.
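A minimal numerical sketch of the refutation-oriented use of Bayes' theorem described in point 5) above; the hypotheses, priors, and likelihoods are illustrative placeholders, not values from the paper's case history:

```python
# Hedged sketch: Bayesian updating used to refute a candidate diagnosis
# ("probabilistic modus tollens"). All numbers are hypothetical.

def update(priors, likelihoods):
    """Return posteriors P(H | E) from priors P(H) and likelihoods P(E | H)."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Candidate hypotheses: a specific disease D and a vague catch-all "other".
priors = {"D": 0.30, "other": 0.70}

# A negative laboratory finding E that would almost never occur if D were true,
# so E acts as strong evidence against D rather than as confirmation of anything.
likelihoods = {"D": 0.02, "other": 0.60}

posteriors = update(priors, likelihoods)
print(posteriors)             # P(D | E) collapses toward 0
print(1 - posteriors["D"])    # Pr(not-D | E), the refutation-focused quantity
```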
This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason’s sensitivity and selectivity to the claim, one’s prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but also show how these formal and informal normative approaches can in fact align.
In a recent article, David Kyle Johnson has claimed to have provided a ‘refutation’ of skeptical theism. Johnson’s refutation raises several interesting issues. But in this note, I focus on only one—an implicit principle Johnson uses in his refutation to update probabilities after receiving new evidence. I argue that this principle is false. Consequently, Johnson’s refutation, as it currently stands, is undermined.
This is a series of lectures on formal decision theory held at the University of Bayreuth during the summer terms 2008 and 2009. It largely follows the book by Michael D. Resnik: Choices. An Introduction to Decision Theory, 5th ed., Minneapolis/London 2000, and covers the topics: decisions under ignorance and risk; probability calculus (Kolmogorov axioms, Bayes' theorem); philosophical interpretations of probability (R. v. Mises, Ramsey–de Finetti); von Neumann–Morgenstern utility theory; introductory game theory; social choice theory (Sen's paradox of liberalism, Arrow's theorem).
I present a solution to the epistemological or characterisation problem of induction. In part I, Bayesian Confirmation Theory (BCT) is discussed as a good contender for such a solution but with a fundamental explanatory gap (along with other well discussed problems); useful assigned probabilities like priors require substantive degrees of belief about the world. I assert that one does not have such substantive information about the world. Consequently, an explanation is needed for how one can be licensed to act as if one has substantive information about the world when one does not. I sketch the outlines of a solution in part I, showing how it differs from others, with full details to follow in subsequent parts. The solution is pragmatic in sentiment (though it differs in specifics from arguments of, for example, William James); the conceptions we use to guide our actions are and should be at least partly determined by preferences. This is cashed out in a reformulation of decision theory motivated by a non-reductive formulation of hypotheses and logic. A distinction emerges between initial assumptions, which can be non-dogmatic, and effective assumptions that can simultaneously be substantive. An explanation is provided for the plausibility arguments used to explain assigned probabilities in BCT. In subsequent parts, logic is constructed from principles independent of language and mind. In particular, propositions are defined to not have form. Probabilities are logical and uniquely determined by assumptions. The problems considered fatal to logical probabilities, Goodman's 'grue' problem and the uniqueness-of-priors problem, are dissolved due to the particular formulation of logic used. Other problems such as the zero-prior problem are also solved. A universal theory of (non-linguistic) meaning is developed. Problems with counterfactual conditionals are solved by developing concepts of abstractions and corresponding pictures that make up hypotheses. Spaces of hypotheses and the version of Bayes' theorem that utilises them emerge from first principles. Theoretical virtues for hypotheses emerge from the theory. Explanatory force is explicated. The significance of effective assumptions is partly determined by combinatoric factors relating to the structure of hypotheses. I conjecture that this is the origin of simplicity.
We use Bayesian tools to assess Law’s skeptical argument against the historicity of Jesus. We clarify and endorse his sub-argument for the conclusion that there is good reason to be skeptical about the miracle claims of the New Testament. However, we dispute Law’s contamination principle that he claims entails that we should be skeptical about the existence of Jesus. There are problems with Law’s defense of his principle, and we show, more importantly, that it is not supported by Bayesian considerations. Finally, we show that Law’s principle is false in the specific case of Jesus and thereby show, contrary to the main conclusion of Law’s argument, that biblical historians are entitled to remain confident that Jesus existed.
Racial profiling has come under intense public scrutiny especially since the rise of the Black Lives Matter movement. This article discusses two questions: whether racial profiling is sometimes rational, and whether it can be morally permissible. It is argued that under certain circumstances the affirmative answer to both questions is justified.
A group is often construed as one agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated “Groupthink”, Russell et al. (2015) require group credences to undergo Bayesian revision whenever new information is learnt, i.e., whenever individual credences undergo Bayesian revision based on this information. To obtain a fully Bayesian group, one should often extend this requirement to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not representable by any event in the domain where credences are held). I propose a taxonomy of six types of ‘group Bayesianism’. They differ in the information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, etc. Six corresponding theorems establish how individual credences must (not) be aggregated to ensure group Bayesianism of any type, respectively. Aggregating through standard averaging is never permitted; instead, different forms of geometric averaging must be used. One theorem—that for public representable information—is essentially Russell et al.’s central result (with minor corrections). Another theorem—that for public non-representable information—fills a gap in the theory of externally Bayesian opinion pooling.
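The contrast between linear and geometric averaging that the abstract reports can be seen in a small two-state example; the sketch below (illustrative credences and likelihoods, not the paper's own framework) checks whether pooling commutes with Bayesian conditioning on public evidence:

```python
# Hedged sketch: on a two-state space, geometric pooling commutes with Bayesian
# conditioning, while linear (arithmetic) pooling generally does not.
# Credences and likelihoods are illustrative.

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def bayes(p, likelihood):
    """Bayesian conditioning: posterior proportional to prior times likelihood."""
    return normalize([pi * li for pi, li in zip(p, likelihood)])

def linear_pool(ps):
    return normalize([sum(p[i] for p in ps) / len(ps) for i in range(len(ps[0]))])

def geometric_pool(ps):
    n = len(ps)
    pooled = [1.0] * len(ps[0])
    for p in ps:
        pooled = [pooled[i] * p[i] ** (1 / n) for i in range(len(pooled))]
    return normalize(pooled)

agents = [[0.8, 0.2], [0.3, 0.7]]   # two individuals' credences over two states
likelihood = [0.9, 0.1]             # likelihood of some public evidence

for pool in (linear_pool, geometric_pool):
    pool_then_update = bayes(pool(agents), likelihood)
    update_then_pool = pool([bayes(p, likelihood) for p in agents])
    print(pool.__name__, pool_then_update, update_then_pool)
# The two orders agree for geometric pooling but disagree for linear pooling.
```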
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, BC, 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness—measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh between these two conflicting aspects in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, CG, 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in § 87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press). Section 6 addresses the question of justification that any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented with separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
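A minimal sketch of the contrast between the two rules on a two-cell partition {E, not-E}; the numbers are illustrative, and the code is not meant to reproduce the paper's axiomatic framework:

```python
# Hedged sketch: Jeffrey's rule on the partition {E, not-E}.
# Bayes's rule is recovered as the special case where E is learnt with certainty.

def jeffrey_update(p_h_given_e, p_h_given_not_e, new_p_e):
    """New P(H) after experience shifts the probability of E to new_p_e."""
    return p_h_given_e * new_p_e + p_h_given_not_e * (1 - new_p_e)

p_h_given_e, p_h_given_not_e = 0.9, 0.2   # illustrative conditional credences in H

print(jeffrey_update(p_h_given_e, p_h_given_not_e, 0.7))  # uncertain learning: P(E) shifts to 0.7
print(jeffrey_update(p_h_given_e, p_h_given_not_e, 1.0))  # Bayes's rule: E learnt with certainty
```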
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
Amalgamating evidence of different kinds for the same hypothesis into an overall confirmation is analogous, I argue, to amalgamating individuals’ preferences into a group preference. The latter faces well-known impossibility theorems, most famously “Arrow’s Theorem”. Once the analogy between amalgamating evidence and amalgamating preferences is tight, it is obvious that amalgamating evidence might face a theorem similar to Arrow’s. I prove that this is so, and end by discussing the plausibility of the axioms required for the theorem.
In this short survey article, I discuss Bell’s theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
In the context of EPR-Bohm type experiments and spin detections confined to spacelike hypersurfaces, a local, deterministic and realistic model within a Friedmann-Robertson-Walker spacetime with a constant spatial curvature (S^3) is presented that describes simultaneous measurements of the spins of two fermions emerging in a singlet state from the decay of a spinless boson. Exact agreement with the probabilistic predictions of quantum theory is achieved in the model without data rejection, remote contextuality, superdeterminism or backward causation. A singularity-free Clifford-algebraic representation of S^3 with vanishing spatial curvature and non-vanishing torsion is then employed to transform the model into a more elegant form. Several event-by-event numerical simulations of the model are presented, which confirm our analytical results with an accuracy of 4 parts in 10^4. Possible implications of our results for practical applications such as quantum security protocols and quantum computing are briefly discussed.
This paper deals with propositional calculi with strong negation (N-logics) in which the Craig interpolation theorem holds. N-logics are defined to be axiomatic strengthenings of the intuitionistic calculus enriched with a unary connective called strong negation. There exists a continuum of N-logics, but the Craig interpolation theorem holds only in 14 of them.
This paper generalises the classical Condorcet jury theorem from majority voting over two options to plurality voting over multiple options. The paper further discusses the debate between epistemic and procedural democracy and situates its formal results in that debate. The paper finally compares a number of different social choice procedures for many-option choices in terms of their epistemic merits. An appendix explores the implications of some of the present mathematical results for the question of how probable majority cycles (as in Condorcet's paradox) are in large electorates.
Following a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey and, above all, Harsanyi, under the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
In this paper, I present an argument for a rational norm involving a kind of credal attitude called a quantificational credence – the kind of attitude we can report by saying that Lucy thinks that each record in Schroeder’s collection is 5% likely to be scratched. I prove a result called a Dutch Book Theorem, which constitutes conditional support for the norm. Though Dutch Book Theorems exist for norms on ordinary and conditional credences, there is controversy about the epistemic significance of these results. So, my conclusion is that if Dutch Book Theorems do, in general, support norms on credal states, then we have support for the suggested norm on quantificational credences. Providing conditional support for this norm gives us a fuller picture of the normative landscape of credal states.
A proof of Fermat’s last theorem is demonstrated. It is very brief, simple, elementary, and absolutely arithmetical. The necessary premises for the proof are only: the three definitive properties of the relation of equality (identity, symmetry, and transitivity), modus tollens, axiom of induction, the proof of Fermat’s last theorem in the case of.
Condorcet's famous jury theorem reaches an optimistic conclusion on the correctness of majority decisions, based on two controversial premises about voters: they are competent and vote independently, in a technical sense. I carefully analyse these premises and show that: whether a premise is justified depends on the notion of probability considered; none of the notions renders both premises simultaneously justified. Under the perhaps most interesting notions, the independence assumption should be weakened.
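A small sketch of the quantity the two premises feed into: the probability that a simple majority of n independent voters, each correct with probability p, picks the correct one of two options (the standard binomial calculation behind the jury theorem; the particular numbers are illustrative):

```python
# Hedged sketch: P(majority correct) under the jury theorem's two premises,
# independence and equal competence p, for an odd number of voters n.
from math import comb

def majority_correct(n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.55), 4))
# With p > 1/2 the probability climbs toward 1 as n grows; with p < 1/2 it falls toward 0.
```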
In this article, a possible generalization of Löb’s theorem is considered. The main result is: let κ be an inaccessible cardinal; then ¬Con(ZFC + ∃κ).
REVIEW OF: Automated Development of Fundamental Mathematical Theories by Art Quaife. (1992: Kluwer Academic Publishers) 271pp. Using the theorem prover OTTER, Art Quaife has proved four hundred theorems of von Neumann-Bernays-Gödel set theory; twelve hundred theorems and definitions of elementary number theory; dozens of Euclidean geometry theorems; and Gödel's incompleteness theorems. It is an impressive achievement. To gauge its significance and to see what prospects it offers, this review looks closely at the book and the proofs it presents.
I argue that Composition as Identity blocks the plural version of Cantor's Theorem, and that therefore the plural version of Cantor's Theorem can no longer be uncritically appealed to. As an example, I show how this result blocks a recent argument by Hawthorne and Uzquiano.
We generalize and extend the class of Sahlqvist formulae in arbitrary polyadic modal languages to the class of so-called inductive formulae. To introduce them we use a representation of modal polyadic languages in a combinatorial style and thus, in particular, develop what we believe to be a better syntactic approach to elementary canonical formulae altogether. By generalizing the method of minimal valuations à la Sahlqvist–van Benthem and the topological approach of Sambin and Vaccaro we prove that all inductive formulae are elementary canonical and thus extend Sahlqvist’s theorem over them. In particular, we give a simple example of an inductive formula which is not frame-equivalent to any Sahlqvist formula. Then, after a deeper analysis of the inductive formulae as set-theoretic operators in descriptive and Kripke frames, we establish a somewhat stronger model-theoretic characterization of these formulae in terms of a suitable equivalence to syntactically simpler formulae in the extension of the language with reversive modalities. Lastly, we study and characterize the elementary canonical formulae in reversive languages with nominals, where the relevant notion of persistence is with respect to discrete frames.
It has been known for a few years that no more than Pi-1-1 comprehension is needed for the proof of "Frege's Theorem". One can at least imagine a view that would regard Pi-1-1 comprehension axioms as logical truths but deny that status to any that are more complex—a view that would, in particular, deny that full second-order logic deserves the name. Such a view would serve the purposes of neo-logicists. It is, in fact, no part of my view that, say, Delta-3-1 comprehension axioms are not logical truths. What I am going to suggest, however, is that there is a special case to be made on behalf of Pi-1-1 comprehension. Making the case involves investigating extensions of first-order logic that do not rely upon the presence of second-order quantifiers. A formal system for so-called "ancestral logic" is developed, and it is then extended to yield what I call "Arché logic".
Some conditions are given to ensure that for a jump homogeneous Markov process $\{X(t), t \ge 0\}$ the law of the integral functional of the process $T^{-1/2} \int_0^T \varphi(X(t))\,dt$ converges to the normal law $N(0, \sigma^2)$ as $T \to \infty$, where $\varphi$ is a mapping from the state space $E$ into $\mathbb{R}$.
The text is a continuation of the article of the same name published in the previous issue of Philosophical Alternatives. The philosophical interpretations of the Kochen-Specker theorem (1967) are considered. Einstein's principle regarding the "consubstantiality of inertia and gravity" (1918) allows a parallel between descriptions of a physical micro-entity in relation to the macro-apparatus on the one hand, and of physical macro-entities in relation to the astronomical mega-entities on the other. The Bohmian interpretation (1952) of quantum mechanics proposes that all quantum systems be interpreted as dissipative ones and that the theorem be thus understood. The conclusion is that the continual representation of a system, by force or (gravitational) field between parts interacting by means of it, is equivalent to their mutual entanglement if the representation is discrete. Gravity (force field) and entanglement are two different, correspondingly continual and discrete, images of a single common essence. General relativity can be interpreted as a superluminal generalization of special relativity. The postulate exists of an alleged obligatory difference between a model and reality in science and philosophy. It can also be deduced by interpreting a corollary of the theorem. On the other hand, quantum mechanics, on the basis of this theorem and of von Neumann's (1932), introduces the option that a model be entirely identified with the modeled reality and, therefore, that absolute reality be recognized: this is a non-standard hypothesis in the epistemology of science. Thus, the true reality begins to be understood mathematically, i.e. in a Pythagorean manner, through its identification with its mathematical model. A few linked problems are highlighted: the role of the axiom of choice for correctly interpreting the theorem; whether the theorem can be considered an axiom; whether the theorem can be considered equivalent to the negation of the axiom.
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
This note clarifies an error in the proof of the main theorem of “The Ricean Objection: An Analogue of Rice’s Theorem for First-Order Theories”, Logic Journal of the IGPL, 16(6): 585–590 (2008).
Bell inequalities are usually derived by assuming locality and realism, and therefore violations of the Bell-CHSH inequality are usually taken to imply violations of either locality or realism, or both. But, after reviewing an oversight by Bell, in the Corollary below we derive the Bell-CHSH inequality by assuming only that Bob can measure along vectors b and b' simultaneously while Alice measures along either a or a', and likewise Alice can measure along vectors a and a' simultaneously while Bob measures along either b or b', without assuming locality. The violations of the Bell-CHSH inequality therefore only mean the impossibility of measuring along b and b' simultaneously.
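For orientation, the sketch below evaluates the CHSH combination at the standard angles using the textbook singlet-state correlation E(x, y) = -cos(x - y); these are standard values and do not reproduce the abstract's own derivation or its reading of what the violation means:

```python
# Hedged sketch: CHSH value S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
# with the singlet-state correlation at the standard measurement angles.
from math import cos, pi, sqrt

def E(x, y):
    return -cos(x - y)

a, a_prime = 0.0, pi / 2
b, b_prime = pi / 4, 3 * pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(S, -2 * sqrt(2))   # |S| = 2*sqrt(2), exceeding the Bell-CHSH bound of 2
```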
My aim in this paper is to explain what Condorcet’s jury theorem is, and to examine its central assumptions, its significance to the epistemic theory of democracy and its connection with Rousseau’s theory of the general will. In the first part of the paper I will analyze an epistemic theory of democracy and explain how its connection with Condorcet’s jury theorem is twofold: the theorem is at the same time a contributing historical source and the model used by the authors to this day. In the second part I will specify the purposes of the theorem itself and examine its underlying assumptions. The third part will be about an interpretation of Rousseau’s theory, which is given by Grofman and Feld relying on Condorcet’s jury theorem, and about criticisms of such an interpretation. In the fourth, and last, part I will focus on one particular assumption of Condorcet’s theorem, which proves to be especially problematic if we would like to apply the theorem under real-life conditions; namely, the assumption that voters choose between two options only.
Many mathematicians have cited depth as an important value in their research. However, there is no single widely accepted account of mathematical depth. This article is an attempt to bridge this gap. The strategy is to begin with a discussion of Szemerédi's theorem, which says that each subset of the natural numbers that is sufficiently dense contains an arithmetical progression of arbitrary length. This theorem has been judged deep by many mathematicians, and so makes for a good case on which to focus in analyzing mathematical depth. After introducing the theorem, four accounts of mathematical depth will be considered.
The “four-color” theorem seems to be generalizable as follows. The four-letter alphabet is sufficient to encode unambiguously any set of well-orderings, including a geographical map or the “map” of any logic and thus that of all logics, or the DNA plan of any living being. The corresponding maximally generalizing conjecture would then state: anything in the universe or mind can be encoded unambiguously by four letters. This can be formulated as a “four-letter theorem”, and one can thus search for a properly mathematical proof of the statement. It would imply the “four-color theorem”, the proof of which many philosophers and mathematicians believe not to be entirely satisfactory, for it is not a “human proof” but is unavoidably mediated by computers, since the necessary calculations fundamentally exceed human capabilities. It is furthermore rather unsatisfactory because it consists in enumerating and proving all cases one by one. Sometimes a more general theorem turns out to be much easier to prove, including by a general “human” method, and the particular theorem that is too difficult to prove is then implied as a corollary under certain simple conditions. The same approach will be followed for the four-color theorem, i.e. it is to be deduced more or less trivially from the “four-letter theorem” if the latter is proved. References are only to classical and thus very well-known papers: their complete bibliographic description is omitted.
This paper critically engages Philip Mirowski's essay, "The scientific dimensions of social knowledge and their distant echoes in 20th-century American philosophy of science." It argues that although the cold war context of anti-democratic elitism best suited for making decisions about engaging in nuclear war may seem to be politically and ideologically motivated, in fact we need to carefully consider the arguments underlying the new rational-choice-based political philosophies of the post-WWII era typified by Arrow's impossibility theorem. A distrust of democratic decision-making principles may be developed by social scientists whose leanings may be toward the left or right side of the spectrum of political practices.
In this article, it is argued that, for a classical Hamiltonian system which is closed, the ergodic theorem emerges from the Gibbs-Liouville theorem in the limit that the system has evolved for an infinitely long period of time. In this limit, from the perspective of an ignorant observer, who does not have perfect knowledge about the complete set of degrees of freedom for the system, distinctions between the possible states of the system, i.e. the information content, are lost, leading to the notion of statistical equilibrium, where states are assigned equal probabilities. Finally, by linking the concept of entropy, which gives a measure for the amount of uncertainty, with the concept of information, the second law of thermodynamics is expressed in terms of the tendency of an observer to lose information over time.
Riker (1982) famously argued that Arrow’s impossibility theorem undermined the logical foundations of “populism”, the view that in a democracy, laws and policies ought to express “the will of the people”. In response, his critics have questioned the use of Arrow’s theorem on the grounds that not all configurations of preferences are likely to occur in practice; the critics allege, in particular, that majority preference cycles, whose possibility the theorem exploits, rarely happen. In this essay, I argue that the critics’ rejoinder to Riker misses the mark even if its factual claim about preferences is correct: Arrow’s theorem and related results threaten the populist’s principle of democratic legitimacy even if majority preference cycles never occur. In this particular context, the assumption of an unrestricted domain is justified irrespective of the preferences citizens are likely to have.
According to conciliatory views about the epistemology of disagreement, when epistemic peers have conflicting doxastic attitudes toward a proposition and fully disclose to one another the reasons for their attitudes toward that proposition (and neither has independent reason to believe the other to be mistaken), each peer should always change his attitude toward that proposition to one that is closer to the attitudes of those peers with which there is disagreement. According to pure higher-order evidence views, higher-order evidence for a proposition always suffices to determine the proper rational response to disagreement about that proposition within a group of epistemic peers. Using an analogue of Arrow's Impossibility Theorem, I shall argue that no conciliatory and pure higher-order evidence view about the epistemology of disagreement can provide a true and general answer to the question of what disagreeing epistemic peers should do after fully disclosing to each other the (first-order) reasons for their conflicting doxastic attitudes.
Non-commuting quantities and hidden parameters – Wave-corpuscular dualism and hidden parameters – Local or nonlocal hidden parameters – Phase space in quantum mechanics – Weyl, Wigner, and Moyal – Von Neumann’s theorem about the absence of hidden parameters in quantum mechanics and Hermann – Bell’s objection – Quantum-mechanical and mathematical incommeasurability – Kochen and Specker’s idea about their equivalence – The notion of partial algebra – Embeddability of a qubit into a bit – The quantum computer is not a Turing machine – Is continuality universal? – Diffeomorphism and velocity – Einstein’s general principle of relativity – “Mach’s principle” – The Skolemian relativity of the discrete and the continuous – The counterexample in § 6 of their paper – About the classical tautology which is untrue being replaced by the statements about commeasurable quantum-mechanical quantities – Logical hidden parameters – The undecidability of the hypothesis about hidden parameters – Wigner’s work and Weyl’s previous one – Lie groups, representations, and the psi-function – From a qualitative to a quantitative expression of relativity – The psi-function, or the discrete by the random – Bartlett’s approach – the psi-function as the characteristic function of a random quantity – Discrete and/or continual description – Quantity and its “digitalized projection” – The idea of “velocity-probability” – The notion of probability and the light speed postulate – Generalized probability and its physical interpretation – A quantum description of the macro-world – The period of the associated de Broglie wave and the length of now – Causality equivalently replaced by chance – The philosophy of quantum information and religion – Einstein’s thesis about “the consubstantiality of inertia and weight” – Again about the interpretation of complex velocity – The speed of time – Newton’s law of inertia and Lagrange’s formulation of mechanics – Force and effect – The theory of tachyons and general relativity – Riesz’s representation theorem – The notion of covariant world line – Encoding a world line by the psi-function – Spacetime and qubit – the psi-function by qubits – About the physical interpretation of both the complex axes of a qubit – The interpretation of the components of self-adjoint operators – The world line of an arbitrary quantity – The invariance of the physical laws towards quantum object and apparatus – Hilbert space and that of Minkowski – The relationship between the coefficients of the psi-function and the qubits – World line = psi-function + self-adjoint operator – Reality and description – Does a “curved” Hilbert space exist? – The axiom of choice, or when is a flattening of Hilbert space possible? – But why not also flatten pseudo-Riemannian space?
– The commutator of conjugate quantities – Relative mass – The strokes of self-movement and its philosophical interpretation – The self-perfection of the universe – The generalization of quantity in quantum physics – An analogy of the Feynman formalism – Feynman and the many-worlds interpretation – The psi-function of various objects – Countable and uncountable basis – Generalized continuum and arithmetization – Field and entanglement – Function as coding – The idea of a “curved” Cartesian product – The environment of a function – Another view of the notion of velocity-probability – Reality and description – Hilbert space as a model both of object and description – The notion of holistic logic – Physical quantity as the information about it – Cross-temporal correlations – The forecasting of the future – Description in separable and inseparable Hilbert space – “Forces” or “miracles” – Velocity or time – The notion of non-finite set – Dasein or Dazeit – The trajectory of the whole – Ontological and onto-theological difference – An analogy of the Feynman and many-worlds interpretation – The psi-function as physical quantity – Things in the world and instances in time – The generation of the physical by the mathematical – The generalized notion of observer – Subjective or objective probability – Energy as the change of probability per unit of time – The generalized principle of least action from a new viewpoint – The exception of two dimensions and Fermat’s last theorem.
In this article, it is argued that the Gibbs-Liouville theorem is a mathematical representation of the statement that closed classical systems evolve deterministically. From the perspective of an observer of the system, whose knowledge about the degrees of freedom of the system is complete, the statement of deterministic evolution is equivalent to the notion that the physical distinctions between the possible states of the system, or, in other words, the information possessed by the observer about the system, are never lost. Thus, it is proposed that the Gibbs-Liouville theorem is a statement about the dynamical evolution of a closed classical system valid in such situations where information about the system is conserved in time. Furthermore, in this article it is shown that the Hamilton equations and the Hamilton principle on phase space follow directly from the differential representation of the Gibbs-Liouville theorem, i.e. that the divergence of the Hamiltonian phase flow velocity vanishes. Thus, considering that the Lagrangian and Hamiltonian formulations of classical mechanics are related via the Legendre transformation, it follows that these two standard formulations are both logical consequences of the statement of deterministic evolution, or, equivalently, of information conservation.
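For reference, the standard one-degree-of-freedom identity connecting the Hamilton equations to the vanishing divergence of the phase-flow velocity, which is the relation the abstract invokes (a textbook statement, not the paper's own derivation):

```latex
\dot q = \frac{\partial H}{\partial p}, \qquad
\dot p = -\frac{\partial H}{\partial q}, \qquad
\nabla \cdot v
  = \frac{\partial \dot q}{\partial q} + \frac{\partial \dot p}{\partial p}
  = \frac{\partial^{2} H}{\partial q\,\partial p}
  - \frac{\partial^{2} H}{\partial p\,\partial q}
  = 0 .
```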
In this work we consider the problem of the approximate hedging of a contingent claim under the minimum mean square deviation criterion. A theorem on martingale representation in the case of discrete time and an application of the result to a semi-continuous market model are also given.
On the heels of Franzén's fine technical exposition of Gödel's incompleteness theorems and related topics (Franzén 2004) comes this survey of the incompleteness theorems aimed at a general audience. Gödel's Theorem: An Incomplete Guide to its Use and Abuse is an extended and self-contained exposition of the incompleteness theorems and a discussion of what informal consequences can, and in particular cannot, be drawn from them.
If the conclusion of the Tarski Undefinability Theorem were that some artificially constrained, limited notions of a formal system necessarily have undecidable sentences, then Tarski made no mistake within his assumptions. When we expand the scope of his investigation to other notions of formal systems, we reach an entirely different conclusion, showing that Tarski's assumptions were wrong.
Introduction to mathematical logic, part 2. Textbook for students in mathematical logic and foundations of mathematics. Platonism, Intuition, Formalism. Axiomatic set theory. Around the Continuum Problem. Axiom of Determinacy. Large Cardinal Axioms. Ackermann's Set Theory. First order arithmetic. Hilbert's 10th problem. Incompleteness theorems. Consequences. Connected results: double incompleteness theorem, unsolvability of reasoning, theorem on the size of proofs, diophantine incompleteness, Loeb's theorem, consistent universal statements are provable, Berry's paradox, incompleteness and Chaitin's theorem. Around Ramsey's theorem.