A fundamental problem in science is how to make logical inferences from scientific data. Mere data do not suffice, since additional information is necessary to select a domain of models or hypotheses and thus determine the likelihood of each model or hypothesis. Thomas Bayes' Theorem relates the data and prior information to posterior probabilities associated with differing models or hypotheses and thus is useful in identifying the roles played by the known data and the assumed prior information when making inferences. Scientists, philosophers, and theologians accumulate knowledge when analyzing different aspects of reality and search for particular hypotheses or models to fit their respective subject matters. Of course, a main goal is then to integrate all kinds of knowledge into an all-encompassing worldview that would describe the whole of reality. A generous description of the whole of reality would span, in order of complexity, from the purely physical to the supernatural. These two extreme aspects of reality are bridged by a nonphysical realm, which would include elements of life, man, consciousness, rationality, mental and mathematical abstractions, etc. An urgent problem in the theory of knowledge is what science is and what it is not. Albert Einstein's notion of science in terms of sense perception is refined by defining operationally the data that make up the subject matter of science. It is shown, for instance, that the theological considerations included in the prior information assumed by Isaac Newton are irrelevant in relating the data logically to the model or hypothesis. In addition, the concepts of naturalism, intelligent design, and evolutionary theory are critically analyzed. Finally, Eugene P. Wigner's suggestions concerning the nature of human consciousness, life, and the success of mathematics in the natural sciences are considered in the context of the creative power endowed in humans by God.
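The role the abstract assigns to data and prior information can be made concrete with a minimal numeric sketch of Bayes' theorem; the two rival hypotheses, the priors, and the likelihoods below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of Bayes' theorem: posterior ∝ likelihood × prior.
# All hypotheses and probability values here are invented for illustration.

def posterior(priors, likelihoods):
    """Return normalized posterior probabilities for each hypothesis."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# The same data D, but two different bodies of prior information.
priors_neutral = [0.5, 0.5]      # no prior preference between H1 and H2
priors_committed = [0.9, 0.1]    # strong prior commitment to H1
likelihoods = [0.2, 0.8]         # P(D | H1), P(D | H2)

print(posterior(priors_neutral, likelihoods))    # the data dominate
print(posterior(priors_committed, likelihoods))  # the prior dominates
```

With a neutral prior the data dominate the posterior; with a strongly committed prior the same data move the posterior far less, which is exactly the dependence on assumed prior information that the abstract highlights.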
Medical diagnosis has traditionally been recognized as a privileged field of application for so-called probabilistic induction. Consequently, Bayes' theorem, which mathematically formalizes this form of inference, has been seen as the most adequate tool for quantifying the uncertainty surrounding a diagnosis by providing probabilities of different diagnostic hypotheses, given symptomatic or laboratory data. On the other hand, it has also been remarked that differential diagnosis rather works by exclusion, e.g. by modus tollens, i.e. deductively. By drawing on a case history, this paper aims at clarifying some points on the issue. Namely: 1) medical diagnosis does not represent, strictly speaking, a form of induction, but a type of what in Peircean terms should be called 'abduction' (identifying a case as the token of a specific type); 2) in performing the single diagnostic steps, however, different inferential methods are used, of both inductive and deductive nature: modus tollens, the hypothetico-deductive method, abduction; 3) Bayes' theorem is a probabilized form of abduction which uses mathematics in order to justify the degree of confidence which can be entertained in a hypothesis given the available evidence; 4) although theoretically irreconcilable, in practice both the hypothetico-deductive method and the Bayesian one are used in the same diagnosis with no serious compromise for its correctness; 5) medical diagnosis, especially differential diagnosis, also uses a kind of "probabilistic modus tollens", in that signs (symptoms or laboratory data) are taken as strong evidence for a given hypothesis not to be true: the focus is not on hypothesis confirmation, but instead on its refutation [Pr(¬H | E1, E2, …, En)]. Especially at the beginning of a complicated case, the odds are between the hypothesis that is potentially being excluded and a vague "other".
This procedure has the advantage of providing a clue as to what evidence to look for, and of eventually reducing the set of candidate hypotheses if conclusive negative evidence is found; 6) Bayes' theorem in its hypothesis-confirmation form can more faithfully, although idealistically, represent medical diagnosis when the diagnostic itinerary has come to a reduced set of plausible hypotheses after a process of progressive elimination of candidate hypotheses; 7) Bayes' theorem is, however, indispensable in the case of litigation, in order to assess a doctor's responsibility for medical error by taking into account the weight of the evidence at their disposal.
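The "probabilistic modus tollens" of point 5 can be sketched as sequential Bayesian updating that drives Pr(H | E1, …, En) toward zero; the competence of the signs, the independence assumption, and all numbers below are illustrative, not taken from the case history.

```python
# Hedged sketch of "probabilistic modus tollens": each sign is far more
# likely if the hypothesis H is false, so conditioning drives P(H) toward 0,
# effectively excluding H from the differential. Evidence items are treated
# as conditionally independent -- an illustrative simplification.

def update(prior_h, likelihood_pairs):
    """Sequentially condition on each piece of evidence.
    likelihood_pairs: list of (P(Ei | H), P(Ei | not-H)) tuples."""
    p = prior_h
    for l_h, l_not_h in likelihood_pairs:
        num = l_h * p
        p = num / (num + l_not_h * (1 - p))
    return p

# Three signs, each better explained by "other" than by H (invented values).
evidence = [(0.1, 0.7), (0.2, 0.6), (0.1, 0.5)]
p_h = update(0.5, evidence)
print(f"P(H | E) = {p_h:.4f}, P(not-H | E) = {1 - p_h:.4f}")
```

The focus here is refutation, as in the abstract: the output of interest is how close P(¬H | E1, …, En) gets to 1, licensing exclusion of H.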
The paper argues that the two best-known formal logical fallacies, namely denying the antecedent (DA) and affirming the consequent (AC), are not just basic and simple errors which prove human irrationality, but rather informational shortcuts which may provide a quick and dirty way of extracting useful information from the environment. DA and AC are shown to be degraded versions of Bayes' theorem, once this is stripped of some of its probabilities. The less the probabilities count, the closer these fallacies come to a reasoning that is not only informationally useful but also logically valid.
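A minimal sketch of the paper's idea, under assumed numbers: reading "if H then E" as P(E | H) = 1, affirming the consequent becomes a Bayesian update whose reliability degrades gracefully as P(E | ¬H) grows, and which becomes logically valid in the limit P(E | ¬H) = 0.

```python
# Sketch: affirming the consequent ("if H then E; E; therefore H") viewed as
# a degraded Bayesian update. The conditional fixes P(E|H) = 1; what remains
# is how probable E is when H is false. All numbers are illustrative.

def posterior_h_given_e(prior_h, p_e_given_not_h, p_e_given_h=1.0):
    num = p_e_given_h * prior_h
    return num / (num + p_e_given_not_h * (1 - prior_h))

prior = 0.3
for p_e_not_h in (0.9, 0.5, 0.1, 0.0):
    # As P(E | not-H) shrinks, the "fallacy" approaches a valid inference.
    print(p_e_not_h, round(posterior_h_given_e(prior, p_e_not_h), 3))
```

In the limiting case P(E | ¬H) = 0 the posterior is exactly 1, i.e. the inference is deductively valid; the fallacy is the shortcut of acting as if that limit held.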
This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason's sensitivity and selectivity to the claim, one's prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but also show how these formal and informal normative approaches can in fact align.
According to a simple Bayesian argument from evil, the evil we observe is less likely given theism than given atheism, and therefore lowers the probability of theism. I consider the most common skeptical theist response to this argument, according to which our cognitive limitations make the probability of evil given theism inscrutable. I argue that if skeptical theists are right about this, then the probability of theism given evil is itself largely inscrutable, and that if this is so, we ought to be agnostic about whether God exists.
In a recent article, David Kyle Johnson has claimed to have provided a ‘refutation’ of skeptical theism. Johnson’s refutation raises several interesting issues. But in this note, I focus on only one—an implicit principle Johnson uses in his refutation to update probabilities after receiving new evidence. I argue that this principle is false. Consequently, Johnson’s refutation, as it currently stands, is undermined.
English abstract: This paper discusses the delicate relationship between traditional epistemology and the increasingly influential probabilistic (or 'Bayesian') approach to epistemology. The paper introduces some of the key ideas of probabilistic epistemology, including credences or degrees of belief, Bayes' theorem, conditionalization, and the Dutch Book argument. The tension between traditional and probabilistic epistemology is brought out by considering the lottery and preface paradoxes as they relate to rational (binary) belief and credence respectively. It is then argued that this tension can be alleviated by rejecting the requirement that rational (binary) beliefs must be consistent and closed under logical entailment. Instead, it is suggested that this logical requirement applies to a different type of binary propositional attitude, viz. acceptance.
I present a solution to the epistemological or characterisation problem of induction. In Part I, Bayesian Confirmation Theory (BCT) is discussed as a good contender for such a solution, but one with a fundamental explanatory gap (along with other well-discussed problems): useful assigned probabilities like priors require substantive degrees of belief about the world. I assert that one does not have such substantive information about the world. Consequently, an explanation is needed for how one can be licensed to act as if one has substantive information about the world when one does not. I sketch the outlines of a solution in Part I, showing how it differs from others, with full details to follow in subsequent parts. The solution is pragmatic in sentiment (though it differs in specifics from arguments by, for example, William James): the conceptions we use to guide our actions are, and should be, at least partly determined by preferences. This is cashed out in a reformulation of decision theory motivated by a non-reductive formulation of hypotheses and logic. A distinction emerges between initial assumptions, which can be non-dogmatic, and effective assumptions, which can simultaneously be substantive. An explanation is provided for the plausibility arguments used to explain assigned probabilities in BCT.

In subsequent parts, logic is constructed from principles independent of language and mind. In particular, propositions are defined to not have form. Probabilities are logical and uniquely determined by assumptions. The problems considered fatal to logical probabilities, Goodman's 'grue' problem and the uniqueness-of-priors problem, are dissolved due to the particular formulation of logic used. Other problems, such as the zero-prior problem, are also solved.

A universal theory of (non-linguistic) meaning is developed. Problems with counterfactual conditionals are solved by developing concepts of abstractions and corresponding pictures that make up hypotheses. Spaces of hypotheses, and the version of Bayes' theorem that utilises them, emerge from first principles.

Theoretical virtues for hypotheses emerge from the theory. Explanatory force is explicated. The significance of effective assumptions is partly determined by combinatoric factors relating to the structure of hypotheses. I conjecture that this is the origin of simplicity.
This is a series of lectures on formal decision theory held at the University of Bayreuth during the summer terms of 2008 and 2009. It largely follows the book by Michael D. Resnik: Choices. An Introduction to Decision Theory, 5th ed., Minneapolis/London 2000, and covers the topics: decisions under ignorance and risk; probability calculus (Kolmogorov axioms, Bayes' theorem); philosophical interpretations of probability (R. v. Mises, Ramsey-de Finetti); von Neumann-Morgenstern utility theory; introductory game theory; social choice theory (Sen's paradox of liberalism, Arrow's theorem).
Several scholars, including Martin Hengel, R. Alan Culpepper, and Richard Bauckham, have argued that Papias had knowledge of the Gospel of John on the grounds that Papias's prologue lists six of Jesus's disciples in the same order that they are named in the Gospel of John: Andrew, Peter, Philip, Thomas, James, and John. In "A Note on Papias's Knowledge of the Fourth Gospel" (JBL 129 [2010]: 793–794), Jake H. O'Connell presents a statistical analysis of this argument, according to which the probability of this correspondence occurring by chance is lower than 1%. O'Connell concludes that it is more than 99% probable that this correspondence is the result of Papias copying John, rather than chance. I show that O'Connell's analysis contains multiple mistakes, both substantive and mathematical: it ignores relevant evidence, overstates the correspondence between John and Papias, wrongly assumes that if Papias did not know John he ordered the disciples randomly, and conflates the probability of A given B with the probability of B given A. In discussing these errors, I aim to inform both Johannine scholarship and the use of probabilistic methods in historical reasoning.
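The last error, conflating the probability of A given B with the probability of B given A, can be sketched with Bayes' theorem; the prior and likelihood values below are invented for illustration and are not claims about Papias.

```python
# Sketch of why P(observed order | chance) < 1% does not entail
# P(copying | observed order) > 99%. The posterior also depends on the
# prior probability of copying. All numbers here are invented.

def p_copying_given_order(prior_copying, p_order_given_chance,
                          p_order_given_copying=1.0):
    num = p_order_given_copying * prior_copying
    den = num + p_order_given_chance * (1 - prior_copying)
    return num / den

# Even with P(order | chance) under 1%, a skeptical 1% prior on copying
# leaves the posterior far below 99%.
print(p_copying_given_order(prior_copying=0.01, p_order_given_chance=0.009))
```

The point is structural, not historical: a small likelihood under the chance hypothesis only yields a near-certain posterior when the prior for the rival hypothesis is not itself small.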
We use Bayesian tools to assess Law's skeptical argument against the historicity of Jesus. We clarify and endorse his sub-argument for the conclusion that there is good reason to be skeptical about the miracle claims of the New Testament. However, we dispute Law's contamination principle that he claims entails that we should be skeptical about the existence of Jesus. There are problems with Law's defense of his principle, and we show, more importantly, that it is not supported by Bayesian considerations. Finally, we show that Law's principle is false in the specific case of Jesus and thereby show, contrary to the main conclusion of Law's argument, that biblical historians are entitled to remain confident that Jesus existed.
A group is often construed as one agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated "Groupthink", Russell et al. (2015) require group credences to undergo Bayesian revision whenever new information is learnt, i.e., whenever individual credences undergo Bayesian revision based on this information. To obtain a fully Bayesian group, one should often extend this requirement to non-public or even private information (learnt by not all, or just one, individual), or to non-representable information (not representable by any event in the domain where credences are held). I propose a taxonomy of six types of 'group Bayesianism'. They differ in the information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, etc. Six corresponding theorems establish how individual credences must (not) be aggregated to ensure group Bayesianism of any type, respectively. Aggregating through standard averaging is never permitted; instead, different forms of geometric averaging must be used. One theorem, that for public representable information, is essentially Russell et al.'s central result (with minor corrections). Another theorem, that for public non-representable information, fills a gap in the theory of externally Bayesian opinion pooling.
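The contrast between standard (linear) averaging and geometric averaging can be sketched on two agents and two hypotheses; the credences and likelihoods below are illustrative, and the sketch shows only the "external Bayesianity" property (pooling commutes with Bayesian updating), not the paper's six theorems.

```python
# Sketch: geometric pooling commutes with Bayesian updating; linear pooling
# does not. Two agents, two hypotheses; all numbers are illustrative.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def linear_pool(a, b):
    return [(x + y) / 2 for x, y in zip(a, b)]

def geometric_pool(a, b):
    # Equal-weight geometric average, renormalized to sum to 1.
    return normalize([(x * y) ** 0.5 for x, y in zip(a, b)])

def bayes_update(cred, likelihoods):
    return normalize([c * l for c, l in zip(cred, likelihoods)])

a, b = [0.8, 0.2], [0.3, 0.7]
lik = [0.5, 0.9]

# Geometric: pooling then updating agrees with updating then pooling.
print(bayes_update(geometric_pool(a, b), lik))
print(geometric_pool(bayes_update(a, lik), bayes_update(b, lik)))

# Linear: the two orders disagree.
print(bayes_update(linear_pool(a, b), lik))
print(linear_pool(bayes_update(a, lik), bayes_update(b, lik)))
```

This order-invariance is what makes geometric pooling "externally Bayesian", and is one intuition behind the abstract's verdict against standard averaging.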
Racial profiling has come under intense public scrutiny especially since the rise of the Black Lives Matter movement. This article discusses two questions: whether racial profiling is sometimes rational, and whether it can be morally permissible. It is argued that under certain circumstances the affirmative answer to both questions is justified.
In the past couple of years, there has been an increase in the rate of playing computer or video games compared with the use of the e-learning content introduced for the safety of our children, and the effects of video game addiction range from musculoskeletal issues and vision problems to obesity. This paper introduces an intelligent tutoring system for both parents and their children to enhance the gaming experience, inform them about these health problems, and show how to solve them, with an easy user interface so that our children can be happy and excited about the information and their health.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Jury theorems are mathematical theorems about the ability of collectives to make correct decisions. Several jury theorems carry the optimistic message that, in suitable circumstances, 'crowds are wise': many individuals together (using, for instance, majority voting) tend to make good decisions, outperforming fewer or just one individual. Jury theorems form the technical core of epistemic arguments for democracy, and provide probabilistic tools for reasoning about the epistemic quality of collective decisions. The popularity of jury theorems spans across various disciplines such as economics, political science, philosophy, and computer science. This entry reviews and critically assesses a variety of jury theorems. It first discusses Condorcet's initial jury theorem, and then progressively introduces jury theorems with more appropriate premises and conclusions. It explains the philosophical foundations, and relates jury theorems to diversity, deliberation, shared evidence, shared perspectives, and other phenomena. It finally connects jury theorems to their historical background and to democratic theory, social epistemology, and social choice theory.
To counter a general belief that all the paradoxes stem from a kind of circularity (or involve some self-reference, or use a diagonal argument), Stephen Yablo designed a paradox in 1993 that seemingly avoided self-reference. We turn Yablo's paradox, the most challenging paradox of recent years, into a genuine mathematical theorem in Linear Temporal Logic (LTL). Indeed, Yablo's paradox comes in several varieties, and he showed in 2004 that there are other versions that are equally paradoxical. Formalizing these versions of Yablo's paradox, we prove some theorems in LTL. This is the first time that Yablo's paradoxes become newly discovered theorems in mathematics and logic.
Despite their popularity, relatively scant attention has been paid to the upshot of Bayesian and predictive processing models of cognition for views of overall cognitive architecture. Many of these models are hierarchical; they posit generative models at multiple distinct "levels," whose job is to predict the consequences of sensory input at lower levels. I articulate one possible position that could be implied by these models, namely, that there is a continuous hierarchy of perception, cognition, and action control comprising levels of generative models. I argue that this view is not entailed by a general Bayesian/predictive processing outlook. Bayesian approaches are compatible with distinct formats of mental representation. Focusing on Bayesian approaches to motor control, I argue that the junctures between different types of mental representation are places where the transitivity of hierarchical prediction may be broken, and I consider the upshot of this conclusion for broader discussions of cognitive architecture.
We give a review and critique of jury theorems from a social-epistemology perspective, covering Condorcet's (1785) classic theorem and several later refinements and departures. We assess the plausibility of the conclusions and premises featuring in jury theorems and evaluate the potential of such theorems to serve as formal arguments for the 'wisdom of crowds'. In particular, we argue (i) that there is a fundamental tension between voters' independence and voters' competence, hence between the two premises of most jury theorems; (ii) that the (asymptotic) conclusion that 'huge groups are infallible', reached by many jury theorems, is an artifact of unjustified premises; and (iii) that the (nonasymptotic) conclusion that 'larger groups are more reliable', also reached by many jury theorems, is not an artifact and should be regarded as the more adequate formal rendition of the 'wisdom of crowds'.
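Condorcet's classic conclusion can be checked directly from its idealized premises (independent voters of equal competence p > 1/2) with a binomial computation; the competence value used below is an arbitrary illustration, and the computation of course inherits the very premises the review criticizes.

```python
# Sketch of Condorcet's jury theorem: with n independent voters each correct
# with probability p, the chance that a majority is correct is a binomial
# tail sum. For p > 1/2 it grows with n; for p < 1/2 it shrinks.
from math import comb

def majority_correct(n, p):
    """P(majority of n voters is correct), n odd, equal competence p."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 4))
```

The same function also illustrates the theorem's dark twin: for p < 1/2 larger groups are *less* reliable, which is why the premises, not just the asymptotics, carry the epistemic weight.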
The problem addressed in this paper is "the main epistemic problem concerning science", viz. "the explication of how we compare and evaluate theories [...] in the light of the available evidence" (van Fraassen, BC, 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness—measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh between these two conflicting aspects in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel's conditions of adequacy for any relation of confirmation (Hempel, CG, 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in § 87 of his Logical Foundations of Probability (1962, Chicago: University of Chicago Press).
Section 6 addresses the question of justification that any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow's theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow's theorem, our main aim is to identify the analogue of Arrow's theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
In this short survey article, I discuss Bell's theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
Our conscious minds exist in the Universe; therefore they should be identified with physical states that are subject to physical laws. In classical theories of mind, mental states are identified with brain states that satisfy the deterministic laws of classical mechanics. This approach, however, leads to insurmountable paradoxes such as epiphenomenal minds and illusory free will. Alternatively, one may identify mental states with quantum states realized within the brain and try to resolve the above paradoxes using the standard Hilbert space formalism of quantum mechanics. In this essay, we first show that identification of mind states with quantum states within the brain is biologically feasible, and then, elaborating on the mathematical proofs of two quantum mechanical no-go theorems, we explain why quantum theory might have profound implications for the scientific understanding of one's mental states, self-identity, beliefs and free will.
Any intermediate propositional logic can be extended to a calculus with epsilon- and tau-operators and critical formulas. For classical logic, this results in Hilbert's $\varepsilon$-calculus. The first and second $\varepsilon$-theorems for classical logic establish conservativity of the $\varepsilon$-calculus over its classical base logic. It is well known that the second $\varepsilon$-theorem fails for the intuitionistic $\varepsilon$-calculus, as prenexation is impossible. The paper investigates the effect of adding critical $\varepsilon$- and $\tau$-formulas and using the translation of quantifiers into $\varepsilon$- and $\tau$-terms to intermediate logics. It is shown that conservativity over the propositional base logic also holds for such intermediate ${\varepsilon\tau}$-calculi. The "extended" first $\varepsilon$-theorem holds if the base logic is finite-valued Gödel–Dummett logic, and fails otherwise, but holds for certain provable formulas in infinite-valued Gödel logic. The second $\varepsilon$-theorem also holds for finite-valued first-order Gödel logics. The methods used to prove the extended first $\varepsilon$-theorem for infinite-valued Gödel logic suggest applications to theories of arithmetic.
Amalgamating evidence of different kinds for the same hypothesis into an overall confirmation is analogous, I argue, to amalgamating individuals’ preferences into a group preference. The latter faces well-known impossibility theorems, most famously “Arrow’s Theorem”. Once the analogy between amalgamating evidence and amalgamating preferences is tight, it is obvious that amalgamating evidence might face a theorem similar to Arrow’s. I prove that this is so, and end by discussing the plausibility of the axioms required for the theorem.
The standard representation theorem for expected utility theory tells us that if a subject's preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
In this paper a symmetry argument against quantity absolutism is amended. Rather than arguing against the fundamentality of intrinsic quantities on the basis of transformations of basic quantities, e.g. mass doubling, a class of symmetries defined by the Π-theorem are used. This theorem is a fundamental result of dimensional analysis and shows that all unit-invariant equations which adequately represent physical systems can be put into the form of a function of dimensionless quantities. Quantity transformations that leave those dimensionless quantities invariant are empirical and dynamical symmetries. The proposed symmetries of the original argument fail to be both dynamical and empirical symmetries and are open to counterexamples. The amendment of the original argument requires consideration of the relationships between quantity dimensions, particularly the constraint of dimensional homogeneity on our physical equations. The discussion raises a pertinent issue: what is the modal status of the constants of nature which figure in the laws? Two positions, constant necessitism and constant contingentism, are introduced and their relationships to absolutism and comparativism undergo preliminary investigation. It is argued that the absolutist can only reject the amended symmetry argument by accepting constant necessitism, which has a costly outcome: unit transformations are no longer symmetries.
Plagiarism detection is the process of finding similarities in electronic documents. This process has recently become highly necessary because of the large number of documents available on the internet and the ability to copy and paste the text of relevant documents with simple Ctrl+C and Ctrl+V commands. The proposed solution is to investigate and develop an easy, fast, multi-language plagiarism detector that checks a document for plagiarism with a single click. This is done with the support of an intelligent system that can learn, change, and adapt to the input document, perform a fast cross-search for the content in the local repository and the online repository, and link the content of the file with the matching content wherever it is found. The supported document types are Word files, text files, and in some cases PDF files, where the text can be extracted from them; this is made possible by using the DLL from the Word application that Microsoft provides with the OS. Using the DLL frees us from constraints on how to extract text from files; it lets us load the file in our Delphi project and, following our methodology, read the file word by word to guarantee the best working scenarios for the calculation. As a result, this process will help raise document quality, enhance the writer's experience of their work, and protect the copyright of the original author of the documents by providing a new, free alternative tool for the plagiarism detection problem, for easy and fast use by the institutions concerned.
The previous two parts of the paper demonstrate that the interpretation of Fermat's last theorem (FLT) in Hilbert arithmetic, meant both in a narrow sense and in a wide sense, can suggest a proof by induction in Part I and by means of the Kochen-Specker theorem in Part II. The same interpretation can serve also for a proof of FLT based on Gleason's theorem and partly similar to that in Part II. The concept of (probabilistic) measure of a subspace of Hilbert space, and especially its uniqueness, can be unambiguously linked to that of partial algebra or incommensurability, or interpreted as a relation of the two dual branches of Hilbert arithmetic in a wide sense. The investigation of the last relation allows for FLT and Gleason's theorem to be equated in a sense, as two dual counterparts, and the former to be inferred from the latter, as well as vice versa under an additional condition relevant to the Gödel incompleteness of arithmetic to set theory. The qubit Hilbert space itself in turn can be interpreted by the unity of FLT and Gleason's theorem. The proof of such a fundamental result in number theory as FLT by means of Hilbert arithmetic in a wide sense can be generalized to an idea about "quantum number theory". It is able to research mathematically the origin of Peano arithmetic from Hilbert arithmetic by mediation of the "nonstandard bijection" and its two dual branches inherently linking it to information theory. Then, infinitesimal analysis and its revolutionary application to physics can also be re-realized in that wider context, for example, as an exploration of the way for the physical quantity of time (respectively, the time derivative in any temporal process considered in physics) to appear at all. Finally, the result admits a philosophical reflection of how any hierarchy arises or changes itself only thanks to its dual and idempotent counterpart.
The aggregation of individual judgments over interrelated propositions is a newly arising field of social choice theory. I introduce several independence conditions on judgment aggregation rules, each of which protects against a specific type of manipulation by agenda setters or voters. I derive impossibility theorems whereby these independence conditions are incompatible with certain minimal requirements. Unlike earlier impossibility results, the main result here holds for any (non-trivial) agenda. However, independence conditions arguably undermine the logical structure of judgment aggregation. I therefore suggest restricting independence to premises, which leads to a generalised premise-based procedure. This procedure is proven to be possible if the premises are logically independent.
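The premise-based procedure the abstract proposes can be illustrated with a minimal sketch: take the majority verdict on each premise separately, then derive the conclusion from those majority premises rather than voting on the conclusion directly. The conjunctive agenda and the function names below are illustrative assumptions, not the paper's formal framework.

```python
def premise_based(judgments, conclusion):
    """Aggregate by majority on each premise, then derive the conclusion
    from the majority premises (instead of taking a direct majority on it).

    judgments: list of dicts mapping premise names to True/False.
    conclusion: function from a premise-valuation dict to True/False.
    """
    n = len(judgments)
    premises = judgments[0].keys()
    majority = {p: sum(j[p] for j in judgments) > n / 2 for p in premises}
    return majority, conclusion(majority)

# Classic "doctrinal paradox" profile: three voters judging premises p, q,
# with the conclusion defined as p AND q.
profile = [
    {'p': True,  'q': True},   # accepts the conclusion
    {'p': True,  'q': False},  # rejects it
    {'p': False, 'q': True},   # rejects it
]
```

On this profile a majority accepts each premise, so the premise-based procedure accepts the conclusion, even though only one of three voters accepts the conclusion directly; this divergence is exactly why unrestricted independence conditions clash with the logical structure of the agenda.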
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
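The contrast between the two rules can be shown numerically on a finite probability space: Bayes's rule conditions on the bare occurrence of an event, while Jeffrey's rule reassigns masses to the cells of a partition while keeping probabilities within each cell proportional. This is a minimal sketch under those standard textbook definitions, not the paper's general framework.

```python
def bayes_update(prior, event):
    """Condition a finite distribution (dict world -> prob) on an event,
    given as a set of worlds with positive prior probability."""
    total = sum(p for w, p in prior.items() if w in event)
    return {w: (p / total if w in event else 0.0) for w, p in prior.items()}

def jeffrey_update(prior, partition_weights):
    """Jeffrey conditioning: each (cell, new_mass) pair fixes the posterior
    probability of a partition cell; within a cell, relative probabilities
    are preserved (the 'conservativeness' of the revision)."""
    posterior = {}
    for cell, new_mass in partition_weights:
        old_mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = prior[w] * new_mass / old_mass
    return posterior
```

Jeffrey's rule with an extreme input (mass 1 on one cell, 0 elsewhere) reduces to Bayes's rule, which is why the same two axioms can characterize both.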
This paper begins with a puzzle regarding Lewis' theory of radical interpretation. On the one hand, Lewis convincingly argued that the facts about an agent's sensory evidence and choices will always underdetermine the facts about her beliefs and desires. On the other hand, we have several representation theorems—such as those of (Ramsey 1931) and (Savage 1954)—that are widely taken to show that if an agent's choices satisfy certain constraints, then those choices can suffice to determine her beliefs and desires. In this paper, I will argue that Lewis' conclusion is correct: choices radically underdetermine beliefs and desires, and representation theorems provide us with no good reasons to think otherwise. Any tension with those theorems is merely apparent, and relates ultimately to the difference between how 'choices' are understood within Lewis' theory and the problematic way that they're represented in the context of the representation theorems. For the purposes of radical interpretation, representation theorems like Ramsey's and Savage's just aren't very relevant after all.
A proof of Fermat’s last theorem is demonstrated. It is very brief, simple, elementary, and absolutely arithmetical. The necessary premises for the proof are only: the three definitive properties of the relation of equality (identity, symmetry, and transitivity), modus tollens, axiom of induction, the proof of Fermat’s last theorem in the case of.
Pettit (2012) presents a model of popular control over government, according to which it consists in the government being subject to those policy-making norms that everyone accepts. In this paper, I provide a formal statement of this interpretation of popular control, which illuminates its relationship to other interpretations of the idea with which it is easily conflated, and which gives rise to a theorem, similar to the famous Gibbard-Satterthwaite theorem. The theorem states that if government policy is subject to popular control, as Pettit interprets it, and policy responds positively to changes in citizens' normative attitudes, then there is a single individual whose normative attitudes unilaterally determine policy. I use the model and theorem as an illustrative example to discuss the role of mathematics in normative political theory.
This paper generalises the classical Condorcet jury theorem from majority voting over two options to plurality voting over multiple options. The paper further discusses the debate between epistemic and procedural democracy and situates its formal results in that debate. The paper finally compares a number of different social choice procedures for many-option choices in terms of their epistemic merits. An appendix explores the implications of some of the present mathematical results for the question of how probable majority cycles (as in Condorcet's paradox) are in large electorates.
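The classical two-option jury theorem that the paper generalises can be stated computationally: if n independent voters are each correct with probability p > 1/2, the probability that the majority is correct grows with n. The sketch below covers only that classical binary case, not the paper's plurality-voting generalisation.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent voters, each correct
    with probability p, delivers the correct verdict (n odd, two options).
    Sums the binomial tail P(more than n/2 voters correct)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

For example, with individual competence p = 0.6, a single voter is right 60% of the time, a three-member jury 64.8% of the time, and the probability keeps rising toward 1 as the jury grows, which is the epistemic-democracy reading of the theorem.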
This paper deals with propositional calculi with strong negation (N-logics) in which the Craig interpolation theorem holds. N-logics are defined to be axiomatic strengthenings of the intuitionistic calculus enriched with a unary connective called strong negation. There exists a continuum of N-logics, but the Craig interpolation theorem holds in only 14 of them.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
Peer review is often taken to be the main form of quality control on academic research. Usually journals carry this out. However, parts of maths and physics appear to have a parallel, crowd-sourced model of peer review, where papers are posted on the arXiv to be publicly discussed. In this paper we argue that crowd-sourced peer review is likely to do better than journal-solicited peer review at sorting papers by quality. Our argument rests on two key claims. First, crowd-sourced peer review will lead on average to more reviewers per paper than journal-solicited peer review. Second, due to the wisdom of the crowds, more reviewers will tend to make better judgments than fewer. We make the second claim precise by looking at the Condorcet Jury Theorem as well as two related jury theorems developed specifically to apply to peer review.
If the national economy is a living organism, then the financial system is the mechanism that creates, supplies, and circulates blood to every cell and organ; either a shortage or a surplus gives rise to problems that must be solved. Amid Vietnam's vigorous economic transition and international integration, continuously monitoring the market and promptly, accurately forecasting its signals and fluctuations, so as to build sound policies for regulating the financial market and the economy as a whole, is essential to ensuring sustainable, well-directed economic growth. Vietnam's financial system currently exhibits seven warning signs that deserve the attention of researchers and policy-makers.
According to a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey and, above all, Harsanyi, in the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
Textbook on Gödel’s incompleteness theorems and computability theory, based on the Open Logic Project. Covers recursive function theory, arithmetization of syntax, the first and second incompleteness theorem, models of arithmetic, second-order logic, and the lambda calculus.
It has been known for a few years that no more than Pi-1-1 comprehension is needed for the proof of "Frege's Theorem". One can at least imagine a view that would regard Pi-1-1 comprehension axioms as logical truths but deny that status to any that are more complex—a view that would, in particular, deny that full second-order logic deserves the name. Such a view would serve the purposes of neo-logicists. It is, in fact, no part of my view that, say, Delta-3-1 comprehension axioms are not logical truths. What I am going to suggest, however, is that there is a special case to be made on behalf of Pi-1-1 comprehension. Making the case involves investigating extensions of first-order logic that do not rely upon the presence of second-order quantifiers. A formal system for so-called "ancestral logic" is developed, and it is then extended to yield what I call "Arché logic".