One reasons not just in beliefs, but also in intentions, preferences, and other attitudes. For instance, one forms preferences from preferences, or intentions from beliefs and preferences. Formal logic has proved useful for modelling reasoning in beliefs -- the formation of beliefs from beliefs. Can logic also model reasoning in multiple attitudes? We identify principled obstacles. Logic can model reasoning about attitudes. But this models the discovery of attitudes of (usually) others, not the formation of one's own attitudes. Beliefs are special in that reasoning in beliefs can follow logical entailment between belief contents. This makes beliefs the privileged target of logic, when applying logic to psychology.
Maximising expected value is the classic doctrine in choice theory under empirical uncertainty, and a prominent proposal in the emerging philosophical literature on normative uncertainty, i.e., uncertainty about the standard of evaluation. But how should Expectationalism be stated in general, when we can face both uncertainties simultaneously, as is common in life? Surprisingly, different possibilities arise, ranging from Ex-Ante to Ex-Post Expectationalism, with several hybrid versions. The difference lies in the perspective from which expectations are taken, or equivalently the amount of uncertainty packed into the prospect evaluated. Expectationalism thus faces the classic dilemma between ex-ante and ex-post approaches, familiar elsewhere in ethics and aggregation theory under uncertainty. We analyse the spectrum of expectational theories, showing that they reach diverging evaluations, use different modes of reasoning, take different attitudes to normative risk as well as empirical risk, but converge under an interesting (necessary and sufficient) condition.
Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified procedurally, but not epistemically, while the other two pooling methods can be justified epistemically. The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
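The three pooling rules named in this abstract can be sketched for a single event. The following is my own illustrative implementation, not code from the article; the choice of a uniform common prior of 0.5 in the multiplicative rule is an assumption made here for concreteness.

```python
# Three opinion-pooling rules for the probability of one event.
# Weights are assumed non-negative and summing to 1.

def linear_pool(probs, weights):
    """Weighted arithmetic average of individual probabilities."""
    return sum(w * p for w, p in zip(weights, probs))

def geometric_pool(probs, weights):
    """Weighted geometric average, renormalised against the complement event."""
    num, den = 1.0, 1.0
    for w, p in zip(weights, probs):
        num *= p ** w
        den *= (1 - p) ** w
    return num / (num + den)

def multiplicative_pool(probs, prior=0.5):
    """Multiply individual likelihood ratios (posterior odds over prior odds),
    appropriate when individuals hold independent private information."""
    num, den = prior, 1 - prior
    for p in probs:
        num *= p / prior
        den *= (1 - p) / (1 - prior)
    return num / (num + den)

probs, weights = [0.7, 0.9], [0.5, 0.5]
print(round(linear_pool(probs, weights), 3))         # 0.8
print(round(geometric_pool(probs, weights), 3))      # about 0.821
print(round(multiplicative_pool(probs), 3))          # about 0.955
```

Note how multiplicative pooling yields a probability more extreme than any individual's -- the behaviour one wants when the two opinions rest on independent evidence.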
Behaviourism is the view that preferences, beliefs, and other mental states in social-scientific theories are nothing but constructs re-describing people's behaviour. Mentalism is the view that they capture real phenomena, on a par with the unobservables in science, such as electrons and electromagnetic fields. While behaviourism has gone out of fashion in psychology, it remains influential in economics, especially in ‘revealed preference’ theory. We defend mentalism in economics, construed as a positive science, and show that it best fits scientific practice. We distinguish mentalism from, and reject, the radical neuroeconomic view that behaviour should be explained in terms of brain processes, as distinct from mental states.
Leonard Savage famously contravened his own theory when first confronting the Allais Paradox, but then convinced himself that he had made an error. We examine the formal structure of Savage’s ‘error-correcting’ reasoning in the light of (i) behavioural economists’ claims to identify the latent preferences of individuals who violate conventional rationality requirements and (ii) John Broome’s critique of arguments which presuppose that rationality requirements can be achieved through reasoning. We argue that Savage’s reasoning is not vulnerable to Broome’s critique, but does not provide support for the view that behavioural scientists can identify and counteract errors in people’s choices.
Choice-theoretic and philosophical accounts of rationality and reasoning address a multi-attitude psychology, including beliefs, desires, intentions, etc. By contrast, logicians traditionally focus on beliefs only. Yet there is 'logic' in multiple attitudes. We propose a generalization of the three standard logical requirements on beliefs -- consistency, completeness, and deductive closedness -- towards multiple attitudes. How do these three logical requirements relate to rational requirements, e.g., of transitive preferences or non-akratic intentions? We establish a systematic correspondence: each logical requirement (consistency, completeness, or closedness) is equivalent to a class of rational requirements. Loosely speaking, this correspondence connects the logical and rational approaches to psychology. Addressing John Broome's central question, we characterize the extent to which reasoning can help achieve consistent, complete, or closed attitudes, respectively.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL classification: D70, D71.
We present a new “reason-based” approach to the formal representation of moral theories, drawing on recent decision-theoretic work. We show that any moral theory within a very large class can be represented in terms of two parameters: a specification of which properties of the objects of moral choice matter in any given context, and a specification of how these properties matter. Reason-based representations provide a very general taxonomy of moral theories, as differences among theories can be attributed to differences in their two key parameters. We can thus formalize several distinctions, such as between consequentialist and non-consequentialist theories, between universalist and relativist theories, between agent-neutral and agent-relative theories, between monistic and pluralistic theories, between atomistic and holistic theories, and between theories with a teleological structure and those without. Reason-based representations also shed light on an important but under-appreciated phenomenon: the “underdetermination of moral theory by deontic content”.
How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events -- the agenda -- is a sigma-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are sigma-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we compare our results with existing results on binary judgment aggregation and Arrovian preference aggregation. This paper is the first of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
There is a surprising disconnect between formal rational choice theory and philosophical work on reasons. The one is silent on the role of reasons in rational choices, the other rarely engages with the formal models of decision problems used by social scientists. To bridge this gap, we propose a new, reason-based theory of rational choice. At its core is an account of preference formation, according to which an agent’s preferences are determined by his or her motivating reasons, together with a ‘weighing relation’ between different combinations of reasons. By explaining how someone’s preferences may vary with changes in his or her motivating reasons, our theory illuminates the relationship between deliberation about reasons and rational choices. Although primarily positive, the theory can also help us think about how those preferences and choices ought to respond to normative reasons.
The aggregation of individual judgments over interrelated propositions is a newly arising field of social choice theory. I introduce several independence conditions on judgment aggregation rules, each of which protects against a specific type of manipulation by agenda setters or voters. I derive impossibility theorems whereby these independence conditions are incompatible with certain minimal requirements. Unlike earlier impossibility results, the main result here holds for any (non-trivial) agenda. However, independence conditions arguably undermine the logical structure of judgment aggregation. I therefore suggest restricting independence to premises, which leads to a generalised premise-based procedure. This procedure is proven to be possible if the premises are logically independent.
We introduce a “reason-based” framework for explaining and predicting individual choices. It captures the idea that a decision-maker focuses on some but not all properties of the options and chooses an option whose motivationally salient properties he/she most prefers. Reason-based explanations allow us to distinguish between two kinds of context-dependent choice: the motivationally salient properties may (i) vary across choice contexts, and (ii) include not only “intrinsic” properties of the options, but also “context-related” properties. Our framework can accommodate boundedly rational and sophisticatedly rational choice. Since properties can be recombined in new ways, it also offers resources for predicting choices in unobserved contexts.
The new field of judgment aggregation aims to merge many individual sets of judgments on logically interconnected propositions into a single collective set of judgments on these propositions. Judgment aggregation has commonly been studied using classical propositional logic, with a limited expressive power and a problematic representation of conditional statements ("if P then Q") as material conditionals. In this methodological paper, I present a simple unified model of judgment aggregation in general logics. I show how many realistic decision problems can be represented in it. This includes decision problems expressed in languages of classical propositional logic, predicate logic (e.g. preference aggregation problems), modal or conditional logics, and some multi-valued or fuzzy logics. I provide a list of simple tools for working with general logics, and I prove impossibility results that generalise earlier theorems.
The contemporary theory of epistemic democracy often draws on the Condorcet Jury Theorem to formally justify the ‘wisdom of crowds’. But this theorem is inapplicable in its current form, since one of its premises – voter independence – is notoriously violated. This premise carries responsibility for the theorem's misleading conclusion that ‘large crowds are infallible’. We prove a more useful jury theorem: under defensible premises, ‘large crowds are fallible but better than small groups’. This theorem rehabilitates the importance of deliberation and education, which appear inessential in the classical jury framework. Our theorem is related to Ladha's (1993) seminal jury theorem for interchangeable (‘indistinguishable’) voters based on de Finetti's Theorem. We also prove a more general and simpler such jury theorem.
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard--Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
John Broome has developed an account of rationality and reasoning which gives philosophical foundations for choice theory and the psychology of rational agents. We formalize his account into a model that differs from ordinary choice-theoretic models through focusing on psychology and the reasoning process. Within that model, we ask Broome’s central question of whether reasoning can make us more rational: whether it allows us to acquire transitive preferences, consistent beliefs, non-akratic intentions, and so on. We identify three structural types of rationality requirements: consistency requirements, completeness requirements, and closedness requirements. Many standard rationality requirements fall under this typology. Based on three theorems, we argue that reasoning is successful in achieving closedness requirements, but not in achieving consistency or completeness requirements. We assess how far our negative results reveal gaps in Broome's theory, or deficiencies in choice theory and behavioral economics.
I propose a relevance-based independence axiom on how to aggregate individual yes/no judgments on given propositions into collective judgments: the collective judgment on a proposition depends only on people’s judgments on propositions which are relevant to that proposition. This axiom contrasts with the classical independence axiom: the collective judgment on a proposition depends only on people’s judgments on the same proposition. I generalize the premise-based rule and the sequential-priority rule to an arbitrary priority order of the propositions, instead of a dichotomous premise/conclusion order or a linear priority order, respectively. I prove four impossibility theorems on relevance-based aggregation. One theorem simultaneously generalizes Arrow’s Theorem (in its general and indifference-free versions) and the well-known Arrow-like theorem in judgment aggregation.
Democratic decision-making is often defended on grounds of the ‘wisdom of crowds’: decisions are more likely to be correct if they are based on many independent opinions -- so runs a typical argument in social epistemology. But what does it mean to have independent opinions? Opinions can be probabilistically dependent even if individuals form their opinion in causal isolation from each other. We distinguish four probabilistic notions of opinion independence. Which of them holds depends on how individuals are causally affected by environmental factors such as commonly perceived evidence. In a general theorem, we identify causal conditions guaranteeing each kind of opinion independence. These results have implications for whether and how ‘wisdom of crowds’ arguments are possible, and how truth-conducive institutions can be designed.
Condorcet's famous jury theorem reaches an optimistic conclusion on the correctness of majority decisions, based on two controversial premises about voters: they are competent and vote independently, in a technical sense. I carefully analyse these premises and show that: whether a premise is justified depends on the notion of probability considered; none of the notions renders both premises simultaneously justified. Under the perhaps most interesting notions, the independence assumption should be weakened.
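The optimistic conclusion at stake is easy to reproduce numerically. The sketch below is my own illustration of the classical theorem under its two premises (independence and a fixed competence p > 1/2), not code from the paper: the probability of a correct majority verdict is a binomial tail probability and grows towards 1 with the (odd) jury size.

```python
# Classical Condorcet jury computation: n independent voters, each correct
# with probability p; a strict majority decides.
from math import comb

def majority_correct(n, p):
    """P(strict majority of n independent voters is correct), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 4))
# With p = 0.6: roughly 0.6, 0.7535, and 0.9790 -- monotonically towards 1.
```

It is exactly this convergence to certainty that the abstract's analysis of the two premises calls into question.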
Several recent results on the aggregation of judgments over logically connected propositions show that, under certain conditions, dictatorships are the only propositionwise aggregation functions generating fully rational (i.e., complete and consistent) collective judgments. A frequently mentioned route to avoid dictatorships is to allow incomplete collective judgments. We show that this route does not lead very far: we obtain oligarchies rather than dictatorships if instead of full rationality we merely require that collective judgments be deductively closed, arguably a minimal condition of rationality, compatible even with empty judgment sets. We derive several characterizations of oligarchies and provide illustrative applications to Arrowian preference aggregation and Kasher and Rubinstein's group identification problem.
In the emerging literature on judgment aggregation over logically connected propositions, expert rights or liberal rights have not been investigated yet. A group making collective judgments may assign individual members or subgroups with expert knowledge on, or particularly affected by, certain propositions the right to determine the collective judgment on those propositions. We identify a problem that generalizes Sen's 'liberal paradox'. Under plausible conditions, the assignment of rights to two or more individuals or subgroups is inconsistent with the unanimity principle, whereby unanimously accepted propositions are collectively accepted. The inconsistency can be avoided if individual judgments or rights satisfy special conditions.
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
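The contrast between the two rules can be shown on a toy four-state space. This is my own minimal sketch, not the paper's framework; the state labels and the joint prior are illustrative assumptions.

```python
# States are pairs (E-status, F-status) with a joint prior over them.
prior = {('E', 'F'): 0.4, ('E', 'notF'): 0.2,
         ('notE', 'F'): 0.1, ('notE', 'notF'): 0.3}

def bayes(prior, event):
    """Bayes's rule: condition on the certain information that `event` occurred."""
    total = sum(p for s, p in prior.items() if s[0] == event)
    return {s: (p / total if s[0] == event else 0.0) for s, p in prior.items()}

def jeffrey(prior, new_prob_E):
    """Jeffrey's rule: the input is a new probability for E, not certainty.
    Probabilities conditional on E and on not-E are conserved."""
    pE = sum(p for s, p in prior.items() if s[0] == 'E')
    return {s: (new_prob_E * p / pE if s[0] == 'E'
                else (1 - new_prob_E) * p / (1 - pE))
            for s, p in prior.items()}

post_b = bayes(prior, 'E')      # learn E for certain: P(E) goes to 1
post_j = jeffrey(prior, 0.9)    # experience shifts P(E) from 0.6 to 0.9
print(round(post_b[('E', 'F')], 4))   # 0.4/0.6, about 0.6667
print(round(post_j[('E', 'F')], 4))   # 0.9 * 0.4/0.6 = 0.6
```

"Conservativeness" is visible in the Jeffrey case: the conditional probability of F given E stays at 2/3 before and after the revision; only the weight on E itself moves.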
The widely discussed "discursive dilemma" shows that majority voting in a group of individuals on logically connected propositions may produce irrational collective judgments. We generalize majority voting by considering quota rules, which accept each proposition if and only if the number of individuals accepting it exceeds a given threshold, where different thresholds may be used for different propositions. After characterizing quota rules, we prove necessary and sufficient conditions on the required thresholds for various collective rationality requirements. We also consider sequential quota rules, which ensure collective rationality by adjudicating propositions sequentially and letting earlier judgments constrain later ones. Sequential rules may be path-dependent and strategically manipulable. We characterize path-independence and prove its essential equivalence to strategy-proofness. Our results shed light on the rationality of simple-, super-, and sub-majoritarian decision-making.
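A quota rule, and the discursive dilemma it generalizes from, fit in a few lines. The sketch below is my own illustration on the standard agenda {p, q, p-and-q}; the particular profile and thresholds are assumptions chosen to reproduce the classic paradox, not examples from the paper.

```python
# A quota rule: accept each proposition iff the number of supporters meets
# that proposition's threshold. Majority voting is the special case where
# every threshold is a bare majority.

def quota_rule(judgments, thresholds):
    """judgments: list of dicts proposition -> bool;
    thresholds: dict proposition -> minimal number of supporters."""
    n_yes = {prop: sum(j[prop] for j in judgments) for prop in thresholds}
    return {prop: n_yes[prop] >= t for prop, t in thresholds.items()}

# Classic discursive-dilemma profile: each individual is consistent.
profile = [
    {'p': True,  'q': True,  'p&q': True},
    {'p': True,  'q': False, 'p&q': False},
    {'p': False, 'q': True,  'p&q': False},
]

majority = quota_rule(profile, {'p': 2, 'q': 2, 'p&q': 2})
print(majority)   # accepts p and q but rejects p&q -- logically inconsistent

# Raising the premise thresholds to unanimity restores consistency here.
strict = quota_rule(profile, {'p': 3, 'q': 3, 'p&q': 2})
print(strict)     # rejects all three propositions -- consistent
```

The paper's necessary-and-sufficient conditions characterize, in general, which threshold combinations rule out such inconsistent outputs.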
Agents are often assumed to have degrees of belief (“credences”) and also binary beliefs (“beliefs simpliciter”). How are these related to each other? A much-discussed answer asserts that it is rational to believe a proposition if and only if one has a high enough degree of belief in it. But this answer runs into the “lottery paradox”: the set of believed propositions may violate the key rationality conditions of consistency and deductive closure. In earlier work, we showed that this problem generalizes: there exists no local function from degrees of belief to binary beliefs that satisfies some minimal conditions of rationality and non-triviality. “Locality” means that the binary belief in each proposition depends only on the degree of belief in that proposition, not on the degrees of belief in others. One might think that the impossibility can be avoided by dropping the assumption that binary beliefs are a function of degrees of belief. We prove that, even if we drop the “functionality” restriction, there still exists no local relation between degrees of belief and binary beliefs that satisfies some minimal conditions. Thus functionality is not the source of the impossibility; its source is the condition of locality. If there is any non-trivial relation between degrees of belief and binary beliefs at all, it must be a “holistic” one. We explore several concrete forms this “holistic” relation could take.
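The lottery paradox driving this abstract can be made numerically explicit. This is my own toy illustration of a local threshold rule, with the ticket count and threshold chosen arbitrarily.

```python
# A fair lottery with 10 tickets and a threshold ("local") binarization rule:
# believe any proposition whose probability is at least 0.85.
n_tickets = 10
threshold = 0.85

# For each ticket i, P(ticket i loses) = 9/10 >= 0.85, so each is believed.
believes_ticket_loses = [(1 - 1 / n_tickets) >= threshold
                         for _ in range(n_tickets)]

# P(some ticket wins) = 1 >= 0.85, so this is believed too.
believes_some_ticket_wins = 1.0 >= threshold

print(all(believes_ticket_loses))    # believe of every ticket that it loses
print(believes_some_ticket_wins)     # yet believe that some ticket wins
# Jointly, these believed propositions are logically inconsistent.
```

Each individual belief passes the threshold, yet the believed set as a whole cannot be true -- the failure of consistency and closure that, per the abstract, survives even when functionality is dropped and only locality is retained.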
How can the propositional attitudes of several individuals be aggregated into overall collective propositional attitudes? Although there are large bodies of work on the aggregation of various special kinds of propositional attitudes, such as preferences, judgments, probabilities and utilities, the aggregation of propositional attitudes is seldom studied in full generality. In this paper, we seek to contribute to filling this gap in the literature. We sketch the ingredients of a general theory of propositional attitude aggregation and prove two new theorems. Our first theorem simultaneously characterizes some prominent aggregation rules in the cases of probability, judgment and preference aggregation, including linear opinion pooling and Arrovian dictatorships. Our second theorem abstracts even further from the specific kinds of attitudes in question and describes the properties of a large class of aggregation rules applicable to a variety of belief-like attitudes. Our approach integrates some previously disconnected areas of investigation.
Under the independence and competence assumptions of Condorcet’s classical jury model, the probability of a correct majority decision converges to certainty as the jury size increases, a seemingly unrealistic result. Using Bayesian networks, we argue that the model’s independence assumption requires that the state of the world (guilty or not guilty) is the latest common cause of all jurors’ votes. But often – arguably in all courtroom cases and in many expert panels – the latest such common cause is a shared ‘body of evidence’ observed by the jurors. In the corresponding Bayesian network, the votes are direct descendants not of the state of the world, but of the body of evidence, which in turn is a direct descendant of the state of the world. We develop a model of jury decisions based on this Bayesian network. Our model permits the possibility of misleading evidence, even for a maximally competent observer, which cannot easily be accommodated in the classical model. We prove that (i) the probability of a correct majority verdict converges to the probability that the body of evidence is not misleading, a value typically below 1; (ii) depending on the required threshold of ‘no reasonable doubt’, it may be impossible, even in an arbitrarily large jury, to establish guilt of a defendant ‘beyond any reasonable doubt’. (shrink)
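Result (i) can be reproduced in a deliberately simplified version of the evidence-mediated picture. The parameterization below is my own assumption, not the paper's model: the shared evidence points the right way with probability e, and, given the evidence, votes are conditionally independent and follow it with probability c > 1/2.

```python
# Evidence-mediated jury: the majority tracks the *evidence*, so the
# probability of a correct verdict converges to e, not to 1.
from math import comb

def follows_evidence(n, c):
    """P(strict majority of n jurors votes the way the evidence points)."""
    return sum(comb(n, k) * c**k * (1 - c)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def correct_majority(n, e, c):
    m = follows_evidence(n, c)
    # Correct iff evidence is right and the majority follows it, or evidence
    # is misleading and the majority happens to go against it.
    return e * m + (1 - e) * (1 - m)

for n in (1, 11, 101, 1001):
    print(n, round(correct_majority(n, e=0.9, c=0.7), 4))
# The values climb towards 0.9 = P(evidence not misleading), not towards 1.
```

Contrast this with the classical model, where the same conditional competence would drive the probability of a correct verdict all the way to certainty.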
In the framework of judgment aggregation, we assume that some formulas of the agenda are singled out as premisses, and that both Independence (formula-wise aggregation) and Unanimity Preservation hold for them. Whether premiss-based aggregation thus defined is compatible with conclusion-based aggregation, as defined by Unanimity Preservation on the non-premisses, depends on how the premisses are logically connected, both among themselves and with other formulas. We state necessary and sufficient conditions under which the combination of both approaches leads to dictatorship (resp. oligarchy), either just on the premisses or on the whole agenda. This framework is inspired by the doctrinal paradox of legal theory and arguably relevant to this field as well as political science and political economy. When the set of premisses coincides with the whole agenda, a limiting case of our assumptions, we obtain several existing results in judgment aggregation theory.
Can a group be a standard rational agent? This would require the group to hold aggregate preferences which maximise expected utility and change only by Bayesian updating. Group rationality is possible, but the only preference aggregation rules which support it (and are minimally Paretian and continuous) are the linear-geometric rules, which combine individual tastes linearly and individual beliefs geometrically.
Rational choice theory analyzes how an agent can rationally act, given his or her preferences, but says little about where those preferences come from. Preferences are usually assumed to be fixed and exogenously given. Building on related work on reasons and rational choice, we describe a framework for conceptualizing preference formation and preference change. In our model, an agent's preferences are based on certain "motivationally salient" properties of the alternatives over which the preferences are held. Preferences may change as new properties of the alternatives become salient or previously salient properties cease to be salient. Our approach captures endogenous preferences in various contexts and helps to illuminate the distinction between formal and substantive concepts of rationality, as well as the role of perception in rational choice.
In this paper, we identify a new and mathematically well-defined sense in which the coherence of a set of hypotheses can be truth-conducive. Our focus is not, as usual, on the probability but on the confirmation of a coherent set and its members. We show that, if evidence confirms a hypothesis, confirmation is “transmitted” to any hypotheses that are sufficiently coherent with the former hypothesis, according to some appropriate probabilistic coherence measure such as Olsson’s or Fitelson’s measure. Our findings have implications for scientific methodology, as they provide a formal rationale for the method of indirect confirmation and the method of confirming theories by confirming their parts.
If a group is modelled as a single Bayesian agent, what should its beliefs be? I propose an axiomatic model that connects group beliefs to beliefs of group members, who are themselves modelled as Bayesian agents, possibly with different priors and different information. Group beliefs are proven to take a simple multiplicative form if people’s information is independent, and a more complex form if information overlaps arbitrarily. This shows that group beliefs can incorporate all information spread over the individuals without the individuals having to communicate their (possibly complex and hard-to-describe) private information; communicating prior and posterior beliefs suffices. JEL classification: D70, D71.
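The "simple multiplicative form" for the independent-information case can be sketched in odds terms. This is my own illustration under a simplifying assumption not made in the paper -- a prior shared by all members: each member's likelihood ratio is recoverable from her prior and posterior alone, and the group's posterior odds are the prior odds times the product of these ratios.

```python
# Multiplicative group belief under independent private information and a
# shared prior. Only prior and posterior beliefs need to be communicated.

def group_belief(prior, posteriors):
    """Group probability of a hypothesis, given the shared prior and each
    member's posterior formed from independent private information."""
    prior_odds = prior / (1 - prior)
    odds = prior_odds
    for p in posteriors:
        # Member i's likelihood ratio = her posterior odds / the prior odds.
        odds *= (p / (1 - p)) / prior_odds
    return odds / (1 + odds)

# Two members, shared prior 0.5; each independently reaches 0.8.
print(round(group_belief(0.5, [0.8, 0.8]), 3))   # 16/17, about 0.941
```

The group ends up more confident than either member -- it has, in effect, pooled two independent pieces of evidence without anyone describing the evidence itself.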
In judgment aggregation, unlike preference aggregation, not much is known about domain restrictions that guarantee consistent majority outcomes. We introduce several conditions on individual judgments sufficient for consistent majority judgments. Some are based on global orders of propositions or individuals, others on local orders, still others not on orders at all. Some generalize classic social-choice-theoretic domain conditions, others have no counterpart. Our most general condition generalizes Sen’s triplewise value-restriction, itself the most general classic condition. We also prove a new characterization theorem: for a large class of domains, if there exists any aggregation function satisfying some democratic conditions, then majority voting is the unique such function. Taken together, our results provide new support for the robustness of majority rule.
Decision-making typically requires judgments about causal relations: we need to know the causal effects of our actions and the causal relevance of various environmental factors. We investigate how several individuals' causal judgments can be aggregated into collective causal judgments. First, we consider the aggregation of causal judgments via the aggregation of probabilistic judgments, and identify the limitations of this approach. We then explore the possibility of aggregating causal judgments independently of probabilistic ones. Formally, we introduce the problem of causal-network aggregation. Finally, we revisit the aggregation of probabilistic judgments when this is constrained by prior aggregation of qualitative causal judgments.
In the theory of judgment aggregation, it is known for which agendas of propositions it is possible to aggregate individual judgments into collective ones in accordance with the Arrow-inspired requirements of universal domain, collective rationality, unanimity preservation, non-dictatorship and propositionwise independence. But it is only partially known (e.g., only in the monotonic case) for which agendas it is possible to respect additional requirements, notably non-oligarchy, anonymity, no individual veto power, or implication preservation. We fully characterize the agendas for which there are such possibilities, thereby answering the most salient open questions about propositionwise judgment aggregation. Our results build on earlier results by Nehring and Puppe (2002), Nehring (2006), Dietrich and List (2007a) and Dokow and Holzman (2010a).
The new field of judgment aggregation aims to find collective judgments on logically interconnected propositions. Recent impossibility results establish limitations on the possibility to vote independently on the propositions. I show that, fortunately, the impossibility results do not apply to a wide class of realistic agendas once propositions like “if a then b” are adequately modelled, namely as subjunctive implications rather than material implications. For these agendas, consistent and complete collective judgments can be reached through appropriate quota rules (which decide propositions using acceptance thresholds). I characterise the class of these quota rules. I also prove an abstract result that characterises consistent aggregation for arbitrary agendas in a general logic.
What is the relationship between degrees of belief and binary beliefs? Can the latter be expressed as a function of the former—a so-called “belief-binarization rule”—without running into difficulties such as the lottery paradox? We show that this problem can be usefully analyzed from the perspective of judgment-aggregation theory. Although some formal similarities between belief binarization and judgment aggregation have been noted before, the connection between the two problems has not yet been studied in full generality. In this paper, we seek to fill this gap. The paper is organized around a baseline impossibility theorem, which we use to map out the space of possible solutions to the belief-binarization problem. Our theorem shows that, except in limiting cases, there exists no belief-binarization rule satisfying four initially plausible desiderata. Surprisingly, this result is a direct corollary of the judgment-aggregation variant of Arrow’s classic impossibility theorem in social choice theory.
Standard impossibility theorems on judgment aggregation over logically connected propositions either use a controversial systematicity condition or apply only to agendas of propositions with rich logical connections. Are there any serious impossibilities without these restrictions? We prove an impossibility theorem without requiring systematicity that applies to most standard agendas: Every judgment aggregation function (with rational inputs and outputs) satisfying a condition called unbiasedness is dictatorial (or effectively dictatorial if we remove one of the agenda conditions). Our agenda conditions are tight. (...) When applied illustratively to (strict) preference aggregation represented in our model, the result implies that every unbiased social welfare function with universal domain is effectively dictatorial. (shrink)
All existing impossibility theorems on judgment aggregation require individual and collective judgment sets to be consistent and complete, arguably a demanding rationality requirement. They do not carry over to aggregation functions mapping profiles of consistent individual judgment sets to consistent collective ones. We prove that, whenever the agenda of propositions under consideration exhibits mild interconnections, any such aggregation function that is "neutral" between the acceptance and rejection of each proposition is dictatorial. We relate this theorem to the literature.
Economic models describe individuals in terms of underlying characteristics, such as taste for some good, sympathy level for another player, time discount rate, risk attitude, and so on. In real life, such characteristics change through experiences: taste for Mozart changes through listening to it, sympathy for another player through observing his moves, and so on. Models typically ignore change, not just for simplicity but also because it is unclear how to incorporate change. I introduce a general axiomatic framework for defining, (...) analysing and comparing rival models of change. I show that seemingly basic postulates on modelling change together have strong implications, like irrelevance of the order in which someone has his experiences and ‘linearity’ of change. This is a step towards placing the modelling of change on solid axiomatic grounds and enabling non-arbitrary incorporation of change into economic models. (shrink)
Group decisions must often obey exogenous constraints. While in a preference aggregation problem constraints are modelled by restricting the set of feasible alternatives, this paper discusses the modelling of constraints when aggregating individual yes/no judgments on interconnected propositions. For example, court judgments in breach-of-contract cases should respect the constraint that action and obligation are necessary and sufficient for liability, and judgments on budget items should respect budgetary constraints. In this paper, we make constraints in judgment aggregation explicit by relativizing the (...) rationality conditions of consistency and deductive closure to a constraint set, whose variation yields more or less strong notions of rationality. This approach of modelling constraints explicitly contrasts with that of building constraints as axioms into the logic, which turns compliance with constraints into a matter of logical consistency and thereby conflates requirements of ordinary logical consistency and requirements dictated by the environment. We present some general impossibility results on constrained judgment aggregation; they are immediate corollaries of known results on judgment aggregation. (shrink)
A group is often construed as one agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated “Groupthink”, Russell et al. (2015) require group credences to undergo Bayesian revision whenever new information is learnt, i.e., whenever individual credences undergo Bayesian revision based on this information. To obtain a fully Bayesian group, one should often extend this requirement to non-public or even private information (learnt by not all or (...) just one individual), or to non-representable information (not representable by any event in the domain where credences are held). I propose a taxonomy of six types of ‘group Bayesianism’. They differ in the information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, etc. Six corresponding theorems establish how individual credences must (not) be aggregated to ensure group Bayesianism of any type, respectively. Aggregating through standard averaging is never permitted; instead, different forms of geometric averaging must be used. One theorem—that for public representable information—is essentially Russell et al.’s central result (with minor corrections). Another theorem—that for public non-representable information—fills a gap in the theory of externally Bayesian opinion pooling. (shrink)
According to standard rational choice theory, as commonly used in political science and economics, an agent's fundamental preferences are exogenously fixed, and any preference change over decision options is due to Bayesian information learning. Although elegant and parsimonious, such a model fails to account for preference change driven by experiences or psychological changes distinct from information learning. We develop a model of non-informational preference change. Alternatives are modelled as points in some multidimensional space, only some of whose dimensions play a (...) role in shaping the agent's preferences. Any change in these "motivationally salient" dimensions can change the agent's preferences. How it does so is described by a new representation theorem. Our model not only captures a wide range of frequently observed phenomena, but also generalizes some standard representations of preferences in political science and economics. (shrink)
Bayesian epistemology tells us with great precision how we should move from prior to posterior beliefs in light of new evidence or information, but says little about where our prior beliefs come from. It offers few resources to describe some prior beliefs as rational or well-justified, and others as irrational or unreasonable. A different strand of epistemology takes the central epistemological question to be not how to change one’s beliefs in light of new evidence, but what reasons justify a given (...) set of beliefs in the first place. We offer an account of rational belief formation that closes some of the gap between Bayesianism and its reason-based alternative, formalizing the idea that an agent can have reasons for his or her (prior) beliefs, in addition to evidence or information in the ordinary Bayesian sense. Our analysis of reasons for belief is part of a larger programme of research on the role of reasons in rational agency (Dietrich and List, Nous, 2012a, in press; Int J Game Theory, 2012b, in press). (shrink)
How can different individuals' probability functions on a given sigma-algebra of events be aggregated into a collective probability function? Classic approaches to this problem often require 'event-wise independence': the collective probability for each event should depend only on the individuals' probabilities for that event. In practice, however, some events may be 'basic' and others 'derivative', so that it makes sense first to aggregate the probabilities for the former and then to let these constrain the probabilities for the latter. We formalize (...) this idea by introducing a 'premise-based' approach to probabilistic opinion pooling, and show that, under a variety of assumptions, it leads to linear or neutral opinion pooling on the 'premises'. This paper is the second of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory. (shrink)
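The linear pooling on the 'premises' that this abstract arrives at can be illustrated with a minimal sketch. The probability values and equal weights below are illustrative assumptions, not taken from the paper: each individual's probability for a premise event is averaged with the chosen weights to give the collective probability for that premise.

```python
# A hedged sketch of linear opinion pooling on a single 'premise' event.
# The individual probabilities and the equal weights are illustrative
# assumptions; the paper's formal setting is richer (a sigma-algebra of
# events with basic and derivative members).

def linear_pool(probabilities, weights):
    """Weighted linear average of individual probabilities for one event."""
    return sum(w * p for w, p in zip(weights, probabilities))

# Three individuals' probabilities for a premise, aggregated with equal weights:
collective = linear_pool([0.2, 0.5, 0.8], [1/3, 1/3, 1/3])
print(collective)  # approximately 0.5
```

Derivative events would then receive whatever probabilities the pooled premises constrain them to have, rather than being pooled event-wise themselves.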
Assuming that votes are independent, the epistemically optimal procedure in a binary collective choice problem is known to be a weighted supermajority rule with weights given by personal log-likelihood-ratios. It is shown here that an analogous result holds in a much more general model. Firstly, the result follows from a more basic principle than expected-utility maximisation, namely from an axiom (Epistemic Monotonicity) which requires neither utilities nor prior probabilities of the ‘correctness’ of alternatives. Secondly, a person’s input need not be (...) a vote for an alternative, it may be any type of input, for instance a subjective degree of belief or probability of the correctness of one of the alternatives. The case of a proﬁle of subjective degrees of belief is particularly appealing, since here no parameters such as competence parameters need to be known. (shrink)
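The weighted supermajority rule with log-likelihood-ratio weights mentioned here can be sketched in a few lines. The competence values and the function names below are illustrative assumptions; in the simplest binary case with independent votes, a voter of competence p (probability of voting correctly) receives weight log(p/(1-p)):

```python
import math

# A minimal sketch of epistemically optimal weighted voting, assuming a
# binary choice and independent voters. Competence values are illustrative.

def optimal_weight(p):
    """Log-likelihood-ratio weight for a voter with competence p (0 < p < 1)."""
    return math.log(p / (1 - p))

def weighted_majority(votes, competences):
    """votes: list of +1/-1 for the two alternatives; returns the winner."""
    total = sum(v * optimal_weight(p) for v, p in zip(votes, competences))
    return 1 if total > 0 else -1

# One highly competent voter can outweigh three mediocre dissenters:
print(weighted_majority([1, -1, -1, -1], [0.95, 0.6, 0.6, 0.6]))
```

Note that a voter with p = 0.5 receives weight 0, and a voter with p < 0.5 receives negative weight, i.e., their vote counts against the alternative they choose.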
In solving judgment aggregation problems, groups often face constraints. Many decision problems can be modelled in terms of the acceptance or rejection of certain propositions in a language, and constraints as propositions that the decisions should be consistent with. For example, court judgments in breach-of-contract cases should be consistent with the constraint that action and obligation are necessary and sufficient for liability; judgments on how to rank several options in an order of preference with the constraint of transitivity; and judgments on (...) budget items with budgetary constraints. Often more or less demanding constraints on decisions are imaginable. For instance, in preference ranking problems, the transitivity constraint is often contrasted with the weaker acyclicity constraint. In this paper, we make constraints explicit in judgment aggregation by relativizing the rationality conditions of consistency and deductive closure to a constraint set, whose variation yields more or less strong notions of rationality. We review several general results on judgment aggregation in light of such constraints. (shrink)
There has been much discussion on the two-envelope paradox. Clark and Shackel (2000) have proposed a solution to the paradox, which has been refuted by Meacham and Weisberg (2003). Surprisingly, however, the literature still contains no axiomatic justification for the claim that one should be indifferent between the two envelopes before opening one of them. According to Meacham and Weisberg, "decision theory does not rank swapping against sticking [before opening any envelope]" (p. 686). To fill this gap in the literature, (...) we present a simple axiomatic justification for indifference, avoiding any expectation reasoning, which is often considered problematic in infinite cases. Although the two-envelope paradox assumes an expectation-maximizing agent, we show that analogous paradoxes arise for agents using different decision principles such as maximin and maximax, and that our justification for indifference before opening applies here too. (shrink)
Tough anti-terrorism policies are often defended by focusing on a fixed minority of the population who prefer violent outcomes, and arguing that toughness reduces the risk of terrorism from this group. This reasoning implicitly assumes that tough policies do not increase the group of 'potential terrorists', i.e., of people with violent preferences. Preferences and their level of violence are treated as stable, exogenously fixed features. To avoid this unrealistic assumption, I formulate a model in which policies can 'brutalise' or (...) 'appease' someone's personality, i.e., his preferences. This follows the endogenous preferences approach, popular elsewhere in political science and economics. I formally decompose the effect of toughness into a (desirable) deterrence effect and an (undesirable) provocation effect. Whether toughness is overall efficient depends on which effect outweighs the other. I show that neglecting provocation typically leads to toughness exaggeration. This suggests that some tough anti-terrorism policies observable in the present and past can be explained by a neglect of provocation. (shrink)
We give a review and critique of jury theorems from a social-epistemology perspective, covering Condorcet’s (1785) classic theorem and several later refinements and departures. We assess the plausibility of the conclusions and premises featuring in jury theorems and evaluate the potential of such theorems to serve as formal arguments for the ‘wisdom of crowds’. In particular, we argue (i) that there is a fundamental tension between voters’ independence and voters’ competence, hence between the two premises of most jury theorems; (ii) (...) that the (asymptotic) conclusion that ‘huge groups are infallible’, reached by many jury theorems, is an artifact of unjustified premises; and (iii) that the (nonasymptotic) conclusion that ‘larger groups are more reliable’, also reached by many jury theorems, is not an artifact and should be regarded as the more adequate formal rendition of the ‘wisdom of crowds’. (shrink)
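The non-asymptotic conclusion endorsed here, that larger groups are more reliable, can be checked directly in the simplest Condorcet setting. The competence value and group sizes below are illustrative assumptions: with independent voters of equal competence p > 1/2 and odd group size n (so no ties), the probability of a correct majority is an exact binomial sum.

```python
from math import comb

# A minimal sketch of the classic jury-theorem setting: independent voters,
# equal competence p, simple majority over odd n. The value p = 0.6 and the
# group sizes are illustrative assumptions.

def majority_correct(n, p):
    """Probability that a majority of n independent voters, each correct
    with probability p, picks the correct alternative (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Reliability grows with group size:
print([round(majority_correct(n, 0.6), 3) for n in (1, 11, 101)])
```

The review's critical points can also be read off this sketch: the monotone growth (and the asymptotic approach to certainty) depends entirely on the independence and fixed-competence premises built into the model.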
In a single framework, I address the question of the informational basis for evaluating social states. I particularly focus on information about individual welfare, individual preferences and individual (moral) judgments, but the model is also open to any other informational input deemed relevant, e.g. sources of welfare and motivations behind preferences. In addition to proving some possibility and impossibility results, I discuss objections against using information about only one aspect (e.g. using only preference information). These objections suggest a multi-aspect informational (...) basis for aggregation. However, the multi-aspect approach faces an impossibility result created by a lack of inter-aspect comparability. The impossibility could be overcome by measuring information on non-cardinal scales. (shrink)