I propose a new reading of Hegel’s discussion of modality in the ‘Actuality’ chapter of the Science of Logic. On this reading, the main purpose of the chapter is a critical engagement with Spinoza’s modal metaphysics. Hegel first reconstructs a rationalist line of thought — corresponding to the cosmological argument for the existence of God — that ultimately leads to Spinozist necessitarianism. He then presents a reductio argument against necessitarianism, contending that as a consequence of necessitarianism, no adequate explanatory accounts of facts about finite reality can be given.
Recently influential “rationalist” views of self-knowledge about our rational attitudes hold that such self-knowledge is essentially connected to rational agency, and therefore has to be particularly reliable, immediate, and distinct from third-personal access. This approach has been challenged by “theory theory” or “interpretationist” views of self-knowledge: on such views, self-knowledge is based on the interpretation of information about ourselves, and this interpretation involves the same mindreading mechanisms that we use to access other persons’ mental states. Interpretationist views are usually dismissed as implausible and unwarranted by advocates of rationalism. In this article, I argue that rationalists should revise their attitude towards interpretationism: they can, and ought to, themselves accept a form of interpretationism. First, I argue that interpretationism is correct at least for a substantive range of cases. These are cases in which we respond to a question ab…
This introduction is part of the special issue ‘Self-knowledge in perspective’, guest edited by Fleur Jongepier and Derek Strijbos. Papers included in the special issue: ‘Transparency, expression, and self-knowledge’ (Dorit Bar-On); ‘Self-knowledge and communication’ (Johannes Roessler); ‘First-person privilege, judgment, and avowal’ (Kateryna Samoilova); ‘Self-knowledge about attitudes: rationalism meets interpretation’ (Franz Knappik); ‘How do you know that you settled a question?’ (Tillmann Vierkant); ‘On knowing one’s own resistant beliefs’ (Cristina Borgoni); ‘Self-knowledge and imagination’ (Peter Langland-Hassan); ‘Transparent emotions? A critical analysis of Moran’s transparency claim’ (Naomi Kloosterboer); ‘Mind-making practices: the social infrastructure of self-knowing agency and responsibility’ (Victoria McGeer); ‘Pluralistic folk psychology and varieties of self-knowledge: an exploration’ (Kristin Andrews).
Franz C. Brentano, 'La psicología de Aristóteles, con especial atención a la doctrina del entendimiento agente. Seguida de un apéndice sobre la actividad del Dios aristotélico'. Translated and introduced by David Torrijos Castrillejo. Madrid: Ediciones Universidad San Dámaso, 2015, ISBN: 978-84-15027-81-2, xix + 344 pp. Original title: 'Die Psychologie des Aristoteles insbesondere seine Lehre vom ΝΟΥΣ ΠΟΙΗΤΙΚΟΣ. Nebst einer Beilage über das Wirken des Aristotelischen Gottes'. Mainz: Franz Kirchheim, 1867.
This paper presents the Spanish translation of the only two texts of Franz Brentano which deal specifically with St. Thomas Aquinas. The first text is a section on St. Albert the Great and Aquinas in an article published in Brentano’s youth, “The History of Ecclesiastical Sciences” (1867). The second text is an article, “Thomas Aquinas” (1908), written at the end of his life. Both texts reveal the immense value that Brentano saw in Aquinas. They also show that he regarded Aquinas mainly as an important interpreter of Aristotle rather than as a philosopher in his own right. Brentano’s approach here also gives us some insight into his own conception of philosophical hermeneutics. The differences between the two texts are evident; for instance, in the second one, Brentano manipulates Aquinas’s thought to justify his departure from the Catholic faith. The texts are preceded by a short introduction of my own. Original titles: «Geschichte der kirchlichen Wissenschaften», in: Johann Adam Möhler (ed.), 'Kirchengeschichte', Band 2 (Regensburg: Manz, 1867), pp. 550-556, and «Thomas von Aquin», 'Neue Freie Presse' 15683 (18/4/1908): 1-5.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, BC, 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness, measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh these two conflicting aspects against each other in such a way that any surplus in informativeness prevails if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, CG, 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in § 87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press).
Section 6 addresses the question of justification that any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented with separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
How can the propositional attitudes of several individuals be aggregated into overall collective propositional attitudes? Although there are large bodies of work on the aggregation of various special kinds of propositional attitudes, such as preferences, judgments, probabilities and utilities, the aggregation of propositional attitudes is seldom studied in full generality. In this paper, we seek to contribute to filling this gap in the literature. We sketch the ingredients of a general theory of propositional attitude aggregation and prove two new theorems. Our first theorem simultaneously characterizes some prominent aggregation rules in the cases of probability, judgment and preference aggregation, including linear opinion pooling and Arrovian dictatorships. Our second theorem abstracts even further from the specific kinds of attitudes in question and describes the properties of a large class of aggregation rules applicable to a variety of belief-like attitudes. Our approach integrates some previously disconnected areas of investigation.
This is a review article on Franz Brentano’s Descriptive Psychology published in 1982. We provide a detailed exposition of Brentano’s work on this topic, focusing on the unity of consciousness, the modes of connection and the types of part, including separable parts, distinctive parts, logical parts and what Brentano calls modificational quasi-parts. We also deal with Brentano’s account of the objects of sensation and the experience of time.
Democratic decision-making is often defended on grounds of the ‘wisdom of crowds’: decisions are more likely to be correct if they are based on many independent opinions, or so a typical argument in social epistemology goes. But what does it mean to have independent opinions? Opinions can be probabilistically dependent even if individuals form their opinion in causal isolation from each other. We distinguish four probabilistic notions of opinion independence. Which of them holds depends on how individuals are causally affected by environmental factors such as commonly perceived evidence. In a general theorem, we identify causal conditions guaranteeing each kind of opinion independence. These results have implications for whether and how ‘wisdom of crowds’ arguments are possible, and how truth-conducive institutions can be designed.
In the emerging literature on judgment aggregation over logically connected propositions, expert rights or liberal rights have not yet been investigated. A group making collective judgments may assign individual members or subgroups with expert knowledge on, or particularly affected by, certain propositions the right to determine the collective judgment on those propositions. We identify a problem that generalizes Sen's 'liberal paradox'. Under plausible conditions, the assignment of rights to two or more individuals or subgroups is inconsistent with the unanimity principle, whereby unanimously accepted propositions are collectively accepted. The inconsistency can be avoided if individual judgments or rights satisfy special conditions.
What is the relationship between degrees of belief and binary beliefs? Can the latter be expressed as a function of the former—a so-called “belief-binarization rule”—without running into difficulties such as the lottery paradox? We show that this problem can be usefully analyzed from the perspective of judgment-aggregation theory. Although some formal similarities between belief binarization and judgment aggregation have been noted before, the connection between the two problems has not yet been studied in full generality. In this paper, we seek to fill this gap. The paper is organized around a baseline impossibility theorem, which we use to map out the space of possible solutions to the belief-binarization problem. Our theorem shows that, except in limiting cases, there exists no belief-binarization rule satisfying four initially plausible desiderata. Surprisingly, this result is a direct corollary of the judgment-aggregation variant of Arrow’s classic impossibility theorem in social choice theory.
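The lottery paradox mentioned above is easy to make concrete. The following sketch (in Python, with made-up proposition labels) applies the simplest belief-binarization rule, a credence threshold, and shows it producing a jointly inconsistent belief set:

```python
def binarize(credences, threshold):
    # Threshold rule: believe exactly those propositions whose
    # credence meets or exceeds the threshold.
    return {p for p, c in credences.items() if c >= threshold}

# A three-ticket lottery: each ticket probably loses,
# but some ticket certainly wins.
credences = {
    "ticket 1 loses": 2 / 3,
    "ticket 2 loses": 2 / 3,
    "ticket 3 loses": 2 / 3,
    "some ticket wins": 1.0,
}

beliefs = binarize(credences, threshold=0.6)
# All four propositions are believed, yet they are jointly
# inconsistent: if every ticket loses, no ticket wins.
```

Raising the threshold high enough avoids the inconsistency in this example, but only at the cost of abandoning almost all beliefs; the impossibility theorem described above concerns precisely such limiting cases.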
I present a formal theory of the logic and aboutness of imagination. Aboutness is understood as the relation between meaningful items and what they concern, as per Yablo and Fine’s works on the notion. Imagination is understood as per Chalmers’ positive conceivability: the intentional state of a subject who conceives that p by imagining a situation—a configuration of objects and properties—verifying p. So far aboutness theory has been developed mainly for linguistic representation, but it is natural to extend it to intentional states. The proposed framework combines a modal semantics with a mereology of contents: imagination operators are understood as variably strict quantifiers over worlds with a content-preservation constraint.
Standard impossibility theorems on judgment aggregation over logically connected propositions either use a controversial systematicity condition or apply only to agendas of propositions with rich logical connections. Are there any serious impossibilities without these restrictions? We prove an impossibility theorem without requiring systematicity that applies to most standard agendas: Every judgment aggregation function (with rational inputs and outputs) satisfying a condition called unbiasedness is dictatorial (or effectively dictatorial if we remove one of the agenda conditions). Our agenda conditions are tight. When applied illustratively to (strict) preference aggregation represented in our model, the result implies that every unbiased social welfare function with universal domain is effectively dictatorial.
All existing impossibility theorems on judgment aggregation require individual and collective judgment sets to be consistent and complete, arguably a demanding rationality requirement. They do not carry over to aggregation functions mapping profiles of consistent individual judgment sets to consistent collective ones. We prove that, whenever the agenda of propositions under consideration exhibits mild interconnections, any such aggregation function that is "neutral" between the acceptance and rejection of each proposition is dictatorial. We relate this theorem to the literature.
Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified procedurally, but not epistemically, while the other two pooling methods can be justified epistemically. The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
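As a rough illustration of the three proposals, here is a minimal sketch for a single event and its complement, handling the renormalization step in the simplest possible way (the article's own definitions may differ in detail):

```python
import math

def linear_pool(probs, weights=None):
    # Weighted arithmetic average of the individual probabilities.
    weights = weights or [1 / len(probs)] * len(probs)
    return sum(w * p for w, p in zip(weights, probs))

def geometric_pool(probs, counter_probs, weights=None):
    # Weighted geometric average, renormalized over the event
    # and its complement.
    weights = weights or [1 / len(probs)] * len(probs)
    g = math.prod(p ** w for p, w in zip(probs, weights))
    g_bar = math.prod(q ** w for q, w in zip(counter_probs, weights))
    return g / (g + g_bar)

def multiplicative_pool(probs, counter_probs):
    # Probabilities are multiplied rather than averaged,
    # then renormalized.
    g = math.prod(probs)
    g_bar = math.prod(counter_probs)
    return g / (g + g_bar)

# Two experts assign 0.6 and 0.8 to rain (so 0.4 and 0.2 to no rain).
rain = [0.6, 0.8]
no_rain = [0.4, 0.2]
```

On this toy input, multiplicative pooling yields the most extreme collective probability and linear pooling the least extreme, which fits the idea that multiplying is appropriate when the experts' assignments rest on private, non-overlapping information.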
Behaviourism is the view that preferences, beliefs, and other mental states in social-scientific theories are nothing but constructs re-describing people's behaviour. Mentalism is the view that they capture real phenomena, on a par with the unobservables in science, such as electrons and electromagnetic fields. While behaviourism has gone out of fashion in psychology, it remains influential in economics, especially in ‘revealed preference’ theory. We defend mentalism in economics, construed as a positive science, and show that it best fits scientific practice. We distinguish mentalism from, and reject, the radical neuroeconomic view that behaviour should be explained in terms of brain processes, as distinct from mental states.
We present a new “reason-based” approach to the formal representation of moral theories, drawing on recent decision-theoretic work. We show that any moral theory within a very large class can be represented in terms of two parameters: a specification of which properties of the objects of moral choice matter in any given context, and a specification of how these properties matter. Reason-based representations provide a very general taxonomy of moral theories, as differences among theories can be attributed to differences in their two key parameters. We can thus formalize several distinctions, such as between consequentialist and non-consequentialist theories, between universalist and relativist theories, between agent-neutral and agent-relative theories, between monistic and pluralistic theories, between atomistic and holistic theories, and between theories with a teleological structure and those without. Reason-based representations also shed light on an important but under-appreciated phenomenon: the “underdetermination of moral theory by deontic content”.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
There is a surprising disconnect between formal rational choice theory and philosophical work on reasons. The one is silent on the role of reasons in rational choices, the other rarely engages with the formal models of decision problems used by social scientists. To bridge this gap, we propose a new, reason-based theory of rational choice. At its core is an account of preference formation, according to which an agent’s preferences are determined by his or her motivating reasons, together with a ‘weighing relation’ between different combinations of reasons. By explaining how someone’s preferences may vary with changes in his or her motivating reasons, our theory illuminates the relationship between deliberation about reasons and rational choices. Although primarily positive, the theory can also help us think about how those preferences and choices ought to respond to normative reasons.
We introduce a “reason-based” framework for explaining and predicting individual choices. It captures the idea that a decision-maker focuses on some but not all properties of the options and chooses an option whose motivationally salient properties he/she most prefers. Reason-based explanations allow us to distinguish between two kinds of context-dependent choice: the motivationally salient properties may (i) vary across choice contexts, and (ii) include not only “intrinsic” properties of the options, but also “context-related” properties. Our framework can accommodate boundedly rational and sophisticatedly rational choice. Since properties can be recombined in new ways, it also offers resources for predicting choices in unobserved contexts.
How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events -- the agenda -- is a sigma-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are sigma-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we compare our results with existing results on binary judgment aggregation and Arrovian preference aggregation. This paper is the first of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
The aggregation of individual judgments over interrelated propositions is a newly arising field of social choice theory. I introduce several independence conditions on judgment aggregation rules, each of which protects against a specific type of manipulation by agenda setters or voters. I derive impossibility theorems whereby these independence conditions are incompatible with certain minimal requirements. Unlike earlier impossibility results, the main result here holds for any (non-trivial) agenda. However, independence conditions arguably undermine the logical structure of judgment aggregation. I therefore suggest restricting independence to premises, which leads to a generalised premise-based procedure. This procedure is proven to be possible if the premises are logically independent.
The new field of judgment aggregation aims to merge many individual sets of judgments on logically interconnected propositions into a single collective set of judgments on these propositions. Judgment aggregation has commonly been studied using classical propositional logic, with a limited expressive power and a problematic representation of conditional statements ("if P then Q") as material conditionals. In this methodological paper, I present a simple unified model of judgment aggregation in general logics. I show how many realistic decision problems can be represented in it. This includes decision problems expressed in languages of classical propositional logic, predicate logic (e.g. preference aggregation problems), modal or conditional logics, and some multi-valued or fuzzy logics. I provide a list of simple tools for working with general logics, and I prove impossibility results that generalise earlier theorems.
John Broome has developed an account of rationality and reasoning which gives philosophical foundations for choice theory and the psychology of rational agents. We formalize his account into a model that differs from ordinary choice-theoretic models through focusing on psychology and the reasoning process. Within that model, we ask Broome’s central question of whether reasoning can make us more rational: whether it allows us to acquire transitive preferences, consistent beliefs, non-akratic intentions, and so on. We identify three structural types of rationality requirements: consistency requirements, completeness requirements, and closedness requirements. Many standard rationality requirements fall under this typology. Based on three theorems, we argue that reasoning is successful in achieving closedness requirements, but not in achieving consistency or completeness requirements. We assess how far our negative results reveal gaps in Broome's theory, or deficiencies in choice theory and behavioral economics.
The contemporary theory of epistemic democracy often draws on the Condorcet Jury Theorem to formally justify the ‘wisdom of crowds’. But this theorem is inapplicable in its current form, since one of its premises – voter independence – is notoriously violated. This premise carries responsibility for the theorem's misleading conclusion that ‘large crowds are infallible’. We prove a more useful jury theorem: under defensible premises, ‘large crowds are fallible but better than small groups’. This theorem rehabilitates the importance of deliberation and education, which appear inessential in the classical jury framework. Our theorem is related to Ladha's (1993) seminal jury theorem for interchangeable (‘indistinguishable’) voters based on de Finetti's Theorem. We also prove a more general and simpler such jury theorem.
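The classical conclusion criticized here can be reproduced numerically. A minimal sketch under Condorcet's own premises (independent voters with equal competence p > 1/2, odd jury size n):

```python
from math import comb

def majority_correct(n, p):
    # Probability that more than half of n independent voters,
    # each correct with probability p, vote for the correct
    # alternative (n odd, so ties are impossible).
    return sum(
        comb(n, k) * p ** k * (1 - p) ** (n - k)
        for k in range(n // 2 + 1, n + 1)
    )
```

With competence 0.6, a single voter is right 60% of the time, a three-member jury about 65% of the time, and a thousand-member jury is right almost certainly; it is this convergence to infallibility that the independence premise drives and that the paper argues is unrealistic.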
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard--Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
Public reason as a political ideal aims to reconcile reasonable disagreement; but is public reason itself an object of reasonable disagreement? Jonathan Quong, David Estlund, Andrew Lister, and some other philosophers maintain that public reason is beyond reasonable disagreement. I argue that this view is untenable. In addition, I briefly consider whether the two main versions of the public reason principle, namely the consensus version and the convergence version, need to satisfy their own requirements. My discussion has several important implications for the debate on public reason.
Choice-theoretic and philosophical accounts of rationality and reasoning address a multi-attitude psychology, including beliefs, desires, intentions, etc. By contrast, logicians traditionally focus on beliefs only. Yet there is 'logic' in multiple attitudes. We propose a generalization of the three standard logical requirements on beliefs -- consistency, completeness, and deductive closedness -- towards multiple attitudes. How do these three logical requirements relate to rational requirements, e.g., of transitive preferences or non-akratic intentions? We establish a systematic correspondence: each logical requirement (consistency, completeness, or closedness) is equivalent to a class of rational requirements. Loosely speaking, this correspondence connects the logical and rational approaches to psychology. Addressing John Broome's central question, we characterize the extent to which reasoning can help achieve consistent, complete, or closed attitudes, respectively.
This article shows that a slight variation of the argument in Milne 1996 yields the log‐likelihood ratio measure l rather than the log‐ratio measure r as “the one true measure of confirmation.” *Received December 2006; revised December 2007.
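For readers who want the two measures side by side, here is a small numerical sketch; the example probabilities are invented for illustration:

```python
from math import log

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    # P(H | E) via Bayes' theorem, with P(E) computed by
    # the law of total probability.
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
    return p_h * p_e_given_h / p_e

def measure_r(p_h, p_e_given_h, p_e_given_not_h):
    # Log-ratio measure: r(H, E) = log[ P(H|E) / P(H) ].
    return log(posterior(p_h, p_e_given_h, p_e_given_not_h) / p_h)

def measure_l(p_e_given_h, p_e_given_not_h):
    # Log-likelihood ratio measure: l(H, E) = log[ P(E|H) / P(E|not-H) ].
    return log(p_e_given_h / p_e_given_not_h)
```

One visible difference: l depends only on the two likelihoods, whereas r also varies with the prior P(H).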
A group is often construed as one agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated “Groupthink”, Russell et al. (2015) require group credences to undergo Bayesian revision whenever new information is learnt, i.e., whenever individual credences undergo Bayesian revision based on this information. To obtain a fully Bayesian group, one should often extend this requirement to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not representable by any event in the domain where credences are held). I propose a taxonomy of six types of ‘group Bayesianism’. They differ in the information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, etc. Six corresponding theorems establish how individual credences must (not) be aggregated to ensure group Bayesianism of any type, respectively. Aggregating through standard averaging is never permitted; instead, different forms of geometric averaging must be used. One theorem—that for public representable information—is essentially Russell et al.’s central result (with minor corrections). Another theorem—that for public non-representable information—fills a gap in the theory of externally Bayesian opinion pooling.
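The contrast between averaging methods can be checked on a toy example. The following sketch (with made-up three-world credence functions) illustrates why standard linear averaging fails to commute with Bayesian revision on public representable information, while equal-weight geometric averaging does:

```python
import math

def normalize(d):
    total = sum(d.values())
    return {w: v / total for w, v in d.items()}

def linear_pool(credences, weights):
    # Weighted arithmetic average, world by world.
    worlds = credences[0].keys()
    return normalize({w: sum(wt * c[w] for c, wt in zip(credences, weights))
                      for w in worlds})

def geometric_pool(credences, weights):
    # Weighted geometric average, world by world, renormalized.
    worlds = credences[0].keys()
    return normalize({w: math.prod(c[w] ** wt for c, wt in zip(credences, weights))
                      for w in worlds})

def revise(credence, event):
    # Bayesian conditionalization on a publicly learnt event (a set of worlds).
    return normalize({w: v for w, v in credence.items() if w in event})

p1 = {"a": 0.7, "b": 0.2, "c": 0.1}
p2 = {"a": 0.1, "b": 0.3, "c": 0.6}
weights = [0.5, 0.5]
event = {"a", "b"}

geo_pool_then_revise = revise(geometric_pool([p1, p2], weights), event)
geo_revise_then_pool = geometric_pool([revise(p1, event), revise(p2, event)], weights)

lin_pool_then_revise = revise(linear_pool([p1, p2], weights), event)
lin_revise_then_pool = linear_pool([revise(p1, event), revise(p2, event)], weights)
```

For the geometric pool, revising then pooling and pooling then revising agree exactly; for the linear pool they come apart, which is the sense in which standard averaging is "never permitted" for a fully Bayesian group.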
Some Confucian scholars have recently claimed that Confucian political meritocracy is superior to Western democracy. I have great reservations about such a view. In this article, I argue that so lo...
Several recent results on the aggregation of judgments over logically connected propositions show that, under certain conditions, dictatorships are the only propositionwise aggregation functions generating fully rational (i.e., complete and consistent) collective judgments. A frequently mentioned route to avoid dictatorships is to allow incomplete collective judgments. We show that this route does not lead very far: we obtain oligarchies rather than dictatorships if instead of full rationality we merely require that collective judgments be deductively closed, arguably a minimal condition of rationality, compatible even with empty judgment sets. We derive several characterizations of oligarchies and provide illustrative applications to Arrowian preference aggregation and Kasher and Rubinstein’s group identification problem.
I propose a relevance-based independence axiom on how to aggregate individual yes/no judgments on given propositions into collective judgments: the collective judgment on a proposition depends only on people’s judgments on propositions which are relevant to that proposition. This axiom contrasts with the classical independence axiom: the collective judgment on a proposition depends only on people’s judgments on the same proposition. I generalize the premise-based rule and the sequential-priority rule to an arbitrary priority order of the propositions, instead of a dichotomous premise/conclusion order or a linear priority order, respectively. I prove four impossibility theorems on relevance-based aggregation. One theorem simultaneously generalizes Arrow’s Theorem (in its general and indifference-free versions) and the well-known Arrow-like theorem in judgment aggregation.
Degrees of belief are familiar to all of us. Our confidence in the truth of some propositions is higher than our confidence in the truth of other propositions. We are pretty confident that our computers will boot when we push their power button, but we are much more confident that the sun will rise tomorrow. Degrees of belief formally represent the strength with which we believe the truth of various propositions. The higher an agent’s degree of belief for a particular proposition, the higher her confidence in the truth of that proposition. For instance, Sophia’s degree of belief that it will be sunny in Vienna tomorrow might be .52, whereas her degree of belief that the train will leave on time might be .23. The precise meaning of these statements depends, of course, on the underlying theory of degrees of belief. These theories offer a formal tool to measure degrees of belief, to investigate the relations between various degrees of belief in different propositions, and to normatively evaluate degrees of belief.
Confucian scholars should satisfy two conditions insofar as they think their theories enable Confucianism to make contributions to liberal politics and social policy. The liberal accommodation condition stipulates that the theory in question should accommodate as many reasonable conceptions of the good and religious doctrines as possible while the intelligibility condition stipulates that the theory must have a recognizable Confucian character. By and large, Joseph Chan’s Confucian perfectionism is able to satisfy the above two conditions. However, contrary to Chan and many other Confucian scholars, I argue that any active promotion of Confucianism will violate the liberal accommodation condition. I propose the “wide view of moderate perfectionism,” which enables Confucianism to shed light on a wide range of political and social issues without promoting Confucianism actively. Thus, I present a new approach to the long-standing question of how Confucianism may improve political and social development in a liberal society.
Under the independence and competence assumptions of Condorcet’s classical jury model, the probability of a correct majority decision converges to certainty as the jury size increases, a seemingly unrealistic result. Using Bayesian networks, we argue that the model’s independence assumption requires that the state of the world (guilty or not guilty) is the latest common cause of all jurors’ votes. But often – arguably in all courtroom cases and in many expert panels – the latest such common cause is a shared ‘body of evidence’ observed by the jurors. In the corresponding Bayesian network, the votes are direct descendants not of the state of the world, but of the body of evidence, which in turn is a direct descendant of the state of the world. We develop a model of jury decisions based on this Bayesian network. Our model permits the possibility of misleading evidence, even for a maximally competent observer, which cannot easily be accommodated in the classical model. We prove that (i) the probability of a correct majority verdict converges to the probability that the body of evidence is not misleading, a value typically below 1; (ii) depending on the required threshold of ‘no reasonable doubt’, it may be impossible, even in an arbitrarily large jury, to establish guilt of a defendant ‘beyond any reasonable doubt’.
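Result (i) can be illustrated numerically. In the following sketch, jurors track the shared evidence with some fixed competence, and the evidence itself is non-misleading with a given probability; the parameter names are mine, not the paper's:

```python
from math import comb

def majority_prob(n, p):
    # Probability that a majority of n (odd) conditionally independent
    # trials succeed, each with probability p.
    return sum(
        comb(n, k) * p ** k * (1 - p) ** (n - k)
        for k in range(n // 2 + 1, n + 1)
    )

def majority_correct_with_evidence(n, competence, p_evidence_reliable):
    # Jurors respond to a shared body of evidence, not to the state
    # of the world directly. If the evidence is misleading, a majority
    # that tracks it reaches the wrong verdict.
    track = majority_prob(n, competence)
    return (p_evidence_reliable * track
            + (1 - p_evidence_reliable) * (1 - track))
```

As the jury grows, the majority tracks the evidence almost surely, so the probability of a correct verdict converges to the reliability of the evidence (here 0.9) rather than to 1, in line with the theorem.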
The widely discussed "discursive dilemma" shows that majority voting in a group of individuals on logically connected propositions may produce irrational collective judgments. We generalize majority voting by considering quota rules, which accept each proposition if and only if the number of individuals accepting it exceeds a given threshold, where different thresholds may be used for different propositions. After characterizing quota rules, we prove necessary and sufficient conditions on the required thresholds for various collective rationality requirements. We also consider sequential quota rules, which ensure collective rationality by adjudicating propositions sequentially and letting earlier judgments constrain later ones. Sequential rules may be path-dependent and strategically manipulable. We characterize path-independence and prove its essential equivalence to strategy-proofness. Our results shed light on the rationality of simple-, super-, and sub-majoritarian decision-making.
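A toy sketch of a quota rule on the classic agenda {p, q, p∧q} (profile and thresholds invented for illustration): simple-majority thresholds reproduce the discursive dilemma, while stricter unanimity thresholds yield a consistent outcome on this profile.

```python
def quota_rule(profile, thresholds):
    """Accept proposition i iff at least thresholds[i] individuals accept it."""
    counts = [sum(judgment[i] for judgment in profile)
              for i in range(len(thresholds))]
    return [counts[i] >= thresholds[i] for i in range(len(thresholds))]

def consistent(judgment):
    """Collective rationality check for the toy agenda (p, q, p-and-q)."""
    p, q, p_and_q = judgment
    return p_and_q == (p and q)

# three individually consistent judgment sets on (p, q, p-and-q)
profile = [
    [True,  True,  True ],
    [True,  False, False],
    [False, True,  False],
]

majority = quota_rule(profile, [2, 2, 2])   # simple-majority thresholds
# majority accepts p and q but rejects their conjunction: the dilemma
strict = quota_rule(profile, [3, 3, 3])     # unanimity thresholds
```

The consistent outcome under unanimity thresholds comes at a price: on this profile everything is rejected, illustrating the trade-off between responsiveness and collective rationality that the threshold conditions make precise.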
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
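The two rules can be sketched over a finite set of worlds (world names and numbers invented for illustration). Jeffrey revision takes a new probability for each cell of a partition and rescales within each cell, so ratios inside a cell are preserved (conservativeness); with an extreme input that makes one cell certain, it reduces to Bayes.

```python
def bayes_revise(prob, event):
    """Bayes's rule: learn with certainty that the event occurred."""
    p_event = sum(prob[w] for w in event)
    return {w: (prob[w] / p_event if w in event else 0.0) for w in prob}

def jeffrey_revise(prob, partition_targets):
    """Jeffrey's rule: the input is a new probability for each cell of a
    partition; probabilities are rescaled within each cell, so ratios
    inside a cell are preserved (conservativeness)."""
    revised = {}
    for cell, target in partition_targets.items():
        p_cell = sum(prob[w] for w in cell)
        for w in cell:
            revised[w] = prob[w] / p_cell * target
    return revised

prior = {'w1': 0.25, 'w2': 0.25, 'w3': 0.25, 'w4': 0.25}
cell, rest = frozenset({'w1', 'w2'}), frozenset({'w3', 'w4'})

after_bayes = bayes_revise(prior, cell)
after_jeffrey = jeffrey_revise(prior, {cell: 0.7, rest: 0.3})
# with an extreme input (the cell becomes certain), Jeffrey reduces to Bayes
assert jeffrey_revise(prior, {cell: 1.0, rest: 0.0}) == after_bayes
```

Here `after_jeffrey` spreads 0.7 evenly over w1, w2 and 0.3 over w3, w4, whereas `after_bayes` zeroes out w3 and w4 entirely.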
In solving judgment aggregation problems, groups often face constraints. Many decision problems can be modelled in terms of the acceptance or rejection of certain propositions in a language, and constraints as propositions that the decisions should be consistent with. For example, court judgments in breach-of-contract cases should be consistent with the constraint that action and obligation are necessary and sufficient for liability; judgments on how to rank several options in an order of preference with the constraint of transitivity; and judgments on budget items with budgetary constraints. Often more or less demanding constraints on decisions are imaginable. For instance, in preference ranking problems, the transitivity constraint is often contrasted with the weaker acyclicity constraint. In this paper, we make constraints explicit in judgment aggregation by relativizing the rationality conditions of consistency and deductive closure to a constraint set, whose variation yields more or less strong notions of rationality. We review several general results on judgment aggregation in light of such constraints.
Rational choice theory analyzes how an agent can rationally act, given his or her preferences, but says little about where those preferences come from. Preferences are usually assumed to be fixed and exogenously given. Building on related work on reasons and rational choice, we describe a framework for conceptualizing preference formation and preference change. In our model, an agent's preferences are based on certain "motivationally salient" properties of the alternatives over which the preferences are held. Preferences may change as new properties of the alternatives become salient or previously salient properties cease to be salient. Our approach captures endogenous preferences in various contexts and helps to illuminate the distinction between formal and substantive concepts of rationality, as well as the role of perception in rational choice.
In the framework of judgment aggregation, we assume that some formulas of the agenda are singled out as premisses, and that both Independence (formula-wise aggregation) and Unanimity Preservation hold for them. Whether premiss-based aggregation thus defined is compatible with conclusion-based aggregation, as defined by Unanimity Preservation on the non-premisses, depends on how the premisses are logically connected, both among themselves and with other formulas. We state necessary and sufficient conditions under which the combination of both approaches leads to dictatorship (resp. oligarchy), either just on the premisses or on the whole agenda. This framework is inspired by the doctrinal paradox of legal theory and arguably relevant to this field as well as political science and political economy. When the set of premisses coincides with the whole agenda, a limiting case of our assumptions, we obtain several existing results in judgment aggregation theory.
In judgment aggregation, unlike preference aggregation, not much is known about domain restrictions that guarantee consistent majority outcomes. We introduce several conditions on individual judgments sufficient for consistent majority judgments. Some are based on global orders of propositions or individuals, others on local orders, still others not on orders at all. Some generalize classic social-choice-theoretic domain conditions, others have no counterpart. Our most general condition generalizes Sen’s triplewise value-restriction, itself the most general classic condition. We also prove a new characterization theorem: for a large class of domains, if there exists any aggregation function satisfying some democratic conditions, then majority voting is the unique such function. Taken together, our results provide new support for the robustness of majority rule.
This article addresses the recent reception of Franz Brentano's writings on consciousness. I am particularly interested in the connection established between Brentano's theory of consciousness and higher-order theories of consciousness and, more specifically, the theory proposed by David Rosenthal. My working hypothesis is that despite the many similarities that can be established with Rosenthal's philosophy of mind, Brentano's theory of consciousness differs in many respects from higher-order theories of consciousness and avoids most of the criticisms generally directed at them. This article is divided into eight parts. The first two sections expound the basic outline of Rosenthal's theory, and the third summarizes the principal objections that Rosenthal addresses to Brentano, which I then examine in sections 4 and 5. In sections 6 and 7, I discuss Brentano's principle of the unity of consciousness, and in section 8, I consider the scope of the changes that Brentano brings to his theory of consciousness in his later writings, which follow the 1874 publication of Psychology. I then draw the conclusion that Brentano's theory rests on a view of intransitive and intrinsic self-consciousness.
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as usual, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is “transmitted” to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
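Olsson’s measure mentioned above is P(A∧B)/P(A∨B). A small stipulated probability space (all numbers invented for illustration, not the paper’s formal result) shows the intended pattern: E confirms H1, H2 is highly coherent with H1 by Olsson’s measure, and confirmation carries over to H2.

```python
# worlds are (h1, h2, e) truth-value triples with stipulated probabilities
P = {
    (1, 1, 1): 0.30, (1, 1, 0): 0.20,
    (1, 0, 1): 0.02, (1, 0, 0): 0.03,
    (0, 1, 1): 0.02, (0, 1, 0): 0.03,
    (0, 0, 1): 0.05, (0, 0, 0): 0.35,
}

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for w, p in P.items() if pred(w))

def cond(pred, given):
    """Conditional probability P(pred | given)."""
    return prob(lambda w: pred(w) and given(w)) / prob(given)

def olsson(a, b):
    """Olsson's coherence measure: P(A and B) / P(A or B)."""
    return prob(lambda w: a(w) and b(w)) / prob(lambda w: a(w) or b(w))

H1 = lambda w: w[0] == 1
H2 = lambda w: w[1] == 1
E  = lambda w: w[2] == 1
```

In this space `olsson(H1, H2)` is 0.5/0.6 ≈ 0.83, and both `cond(H1, E)` and `cond(H2, E)` exceed the unconditional probabilities, so the confirmation E lends to H1 is also indirectly enjoyed by H2.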
If a group is modelled as a single Bayesian agent, what should its beliefs be? I propose an axiomatic model that connects group beliefs to beliefs of group members, who are themselves modelled as Bayesian agents, possibly with different priors and different information. Group beliefs are proven to take a simple multiplicative form if people’s information is independent, and a more complex form if information overlaps arbitrarily. This shows that group beliefs can incorporate all information spread over the individuals without the individuals having to communicate their (possibly complex and hard-to-describe) private information; communicating prior and posterior beliefs suffices. JEL classification: D70, D71.
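The multiplicative form can be illustrated in a two-state toy example (states, prior, and likelihoods invented; this is a sketch of the independent-information case, not the paper’s general model): with a common prior and conditionally independent private signals, multiplying the members’ posterior-to-prior ratios into the prior recovers the belief of a single Bayesian who saw all the signals, even though no signal is ever communicated.

```python
import math

prior = {'A': 0.5, 'B': 0.5}          # common prior over two states
likelihoods = [                        # each member's private signal, given the state
    {'A': 0.8, 'B': 0.3},
    {'A': 0.6, 'B': 0.9},
]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# each member's posterior from her own signal alone
posteriors = [normalize({s: prior[s] * lik[s] for s in prior})
              for lik in likelihoods]

# multiplicative pooling: prior times the product of posterior-to-prior ratios
pooled = normalize({s: prior[s] * math.prod(p[s] / prior[s] for p in posteriors)
                    for s in prior})

# benchmark: what one Bayesian knowing both private signals would believe
full_info = normalize({s: prior[s] * math.prod(lik[s] for lik in likelihoods)
                       for s in prior})
```

Only prior and posterior beliefs enter the pooling step; the likelihood tables play no role once each member has reported her posterior.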
Recent accounts of actual causation are stated in terms of extended causal models. These extended causal models contain two elements representing two seemingly distinct modalities. The first element is a set of structural equations, which represent the mechanisms of the model, just as in ordinary causal models. The second element is a ranking function, which represents normality or typicality. The aim of this paper is to show that these two modalities can be unified. I do so by formulating two constraints under which extended causal models with their two modalities can be subsumed under models which contain just one modality. These two constraints will be formally precise versions of Lewis’s “system of weights or priorities” governing overall similarity between possible worlds.
In the theory of judgment aggregation, it is known for which agendas of propositions it is possible to aggregate individual judgments into collective ones in accordance with the Arrow-inspired requirements of universal domain, collective rationality, unanimity preservation, non-dictatorship and propositionwise independence. But it is only partially known (e.g., only in the monotonic case) for which agendas it is possible to respect additional requirements, notably non-oligarchy, anonymity, no individual veto power, or implication preservation. We fully characterize the agendas for which there are such possibilities, thereby answering the most salient open questions about propositionwise judgment aggregation. Our results build on earlier results by Nehring and Puppe (2002), Nehring (2006), Dietrich and List (2007a) and Dokow and Holzman (2010a).
The paper provides an argument for the thesis that an agent’s degrees of disbelief should obey the ranking calculus. This Consistency Argument is based on the Consistency Theorem. The latter says that an agent’s belief set is and will always be consistent and deductively closed iff her degrees of entrenchment satisfy the ranking axioms and are updated according to the rank-theoretic update rules.
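A minimal sketch of a (negative) ranking function over four worlds (the world ranks are invented for illustration): the rank of a proposition is the minimum rank of its worlds, and a proposition is believed iff its negation is disbelieved to a positive degree. The characteristic minimum axiom then guarantees a consistent, deductively closed belief set.

```python
import math

# degrees of disbelief assigned to worlds (illustrative values)
world_rank = {'w1': 0, 'w2': 1, 'w3': 2, 'w4': 3}

def rank(prop):
    """kappa(A): the minimum rank of a world in A; the empty (impossible)
    proposition receives rank infinity."""
    return min((world_rank[w] for w in prop), default=math.inf)

def believed(prop):
    """A proposition is believed iff its negation is disbelieved to a
    positive degree, i.e. kappa(not-A) > 0."""
    return rank(set(world_rank) - set(prop)) > 0

# The minimum axiom holds by construction: kappa(A or B) = min(kappa(A), kappa(B)).
# Every believed proposition must contain the uniquely minimal world 'w1',
# so the set of believed propositions is consistent and deductively closed.
```

For instance, `{'w1', 'w2'}` is believed (its complement has rank 2), while `{'w2', 'w3'}` is not (its complement contains the rank-0 world w1).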