The aggregation of individual judgments over interrelated propositions is a newly arising field of social choice theory. I introduce several independence conditions on judgment aggregation rules, each of which protects against a specific type of manipulation by agenda setters or voters. I derive impossibility theorems whereby these independence conditions are incompatible with certain minimal requirements. Unlike earlier impossibility results, the main result here holds for any (non-trivial) agenda. However, independence conditions arguably undermine the logical structure of judgment aggregation. I therefore suggest restricting independence to premises, which leads to a generalised premise-based procedure. This procedure is proven to be possible if the premises are logically independent.
Suppose that the members of a group each hold a rational set of judgments on some interconnected questions, and imagine that the group itself has to form a collective, rational set of judgments on those questions. How should it go about dealing with this task? We argue that the question raised is subject to a difficulty that has recently been noticed in discussion of the doctrinal paradox in jurisprudence. And we show that there is a general impossibility theorem that that difficulty illustrates. Our paper describes this impossibility result and provides an exploration of its significance. The result naturally invites comparison with Kenneth Arrow's famous theorem (Arrow, 1963 and 1984; Sen, 1970) and we elaborate that comparison in a companion paper (List and Pettit, 2002). The paper is in four sections. The first section documents the need for various groups to aggregate their members' judgments; the second presents the discursive paradox; the third gives an informal statement of the more general impossibility result; the formal proof is presented in an appendix. The fourth section, finally, discusses some escape routes from that impossibility.
This paper is about the role of interpersonal comparisons in Harsanyi's aggregation theorem. Harsanyi interpreted his theorem to show that a broadly utilitarian theory of distribution must be true even if there are no interpersonal comparisons of well-being. How is this possible? The orthodox view is that it is not. Some argue that the interpersonal comparability of well-being is hidden in Harsanyi's premises. Others argue that it is a surprising conclusion of Harsanyi's theorem, which is not presupposed by any one of the premises. I argue instead that Harsanyi was right: his theorem and its weighted-utilitarian conclusion do not require interpersonal comparisons of well-being. The key to making sense of this possibility is to treat Harsanyi's weights as dimensional constants rather than dimensionless numbers.
Several recent results on the aggregation of judgments over logically connected propositions show that, under certain conditions, dictatorships are the only propositionwise aggregation functions generating fully rational (i.e., complete and consistent) collective judgments. A frequently mentioned route to avoid dictatorships is to allow incomplete collective judgments. We show that this route does not lead very far: we obtain oligarchies rather than dictatorships if instead of full rationality we merely require that collective judgments be deductively closed, arguably a minimal condition of rationality, compatible even with empty judgment sets. We derive several characterizations of oligarchies and provide illustrative applications to Arrowian preference aggregation and Kasher and Rubinstein's group identification problem.
I propose a relevance-based independence axiom on how to aggregate individual yes/no judgments on given propositions into collective judgments: the collective judgment on a proposition depends only on people’s judgments on propositions which are relevant to that proposition. This axiom contrasts with the classical independence axiom: the collective judgment on a proposition depends only on people’s judgments on the same proposition. I generalize the premise-based rule and the sequential-priority rule to an arbitrary priority order of the propositions, instead of a dichotomous premise/conclusion order and a linear priority order, respectively. I prove four impossibility theorems on relevance-based aggregation. One theorem simultaneously generalizes Arrow’s Theorem (in its general and indifference-free versions) and the well-known Arrow-like theorem in judgment aggregation.
How can the propositional attitudes of several individuals be aggregated into overall collective propositional attitudes? Although there are large bodies of work on the aggregation of various special kinds of propositional attitudes, such as preferences, judgments, probabilities and utilities, the aggregation of propositional attitudes is seldom studied in full generality. In this paper, we seek to contribute to filling this gap in the literature. We sketch the ingredients of a general theory of propositional attitude aggregation and prove two new theorems. Our first theorem simultaneously characterizes some prominent aggregation rules in the cases of probability, judgment and preference aggregation, including linear opinion pooling and Arrovian dictatorships. Our second theorem abstracts even further from the specific kinds of attitudes in question and describes the properties of a large class of aggregation rules applicable to a variety of belief-like attitudes. Our approach integrates some previously disconnected areas of investigation.
For aggregative theories of moral value, it is a challenge to rank worlds that each contain infinitely many valuable events. And, although there are several existing proposals for doing so, few provide a cardinal measure of each world's value. This raises the even greater challenge of ranking lotteries over such worlds—without a cardinal value for each world, we cannot apply expected value theory. How then can we compare such lotteries? To date, we have just one method for doing so (proposed separately by Arntzenius, Bostrom, and Meacham), which is to compare the prospects for value at each individual location, and to then represent and compare lotteries by their expected values at each of those locations. But, as I show here, this approach violates several key principles of decision theory and generates some implausible verdicts. I propose an alternative—one which delivers plausible rankings of lotteries, which is implied by a plausible collection of axioms, and which can be applied alongside almost any ranking of infinite worlds.
Aggregative moral theories face a series of devastating problems when we apply them in a physically realistic setting. According to current physics, our universe is likely _infinitely large_, and will contain infinitely many morally valuable events. But standard aggregative theories are ill-equipped to compare outcomes containing infinite total value so, applied in a realistic setting, they cannot compare any outcomes a real-world agent must ever choose between. This problem has been discussed extensively, and non-standard aggregative theories proposed to overcome it. This paper addresses a further problem of similar severity. Physics tells us that, in our universe, how remotely in time an event occurs is _relative_. But our most promising aggregative theories, designed to compare outcomes containing infinitely many valuable events, are sensitive to how remote in time those events are. As I show, the evaluations of those theories are then relative too. But this is absurd; evaluations of outcomes must be absolute. So we must reject such theories. Is this objection fatal for all aggregative theories, at least in a relativistic universe like ours? I demonstrate here that, by further modifying these theories to fit with the physics, we can overcome it.
Deferential Monadic Panpsychism is a view that accepts that physical science is capable of discovering the basic structure of reality. However, it denies that reality is fully and exhaustively described purely in terms of physical science. Consciousness is missing from the physical description and cannot be reduced to it. DMP explores the idea that the physically fundamental features of the world possess some intrinsic mental aspect. It thereby faces a severe problem of understanding how more complex mental states emerge from the mental features of the fundamental features. Here I explore the idea that a new form of aggregative emergence, which I call 'combinatorial infusion', could shed light on this problem and bolster the prospects for this form of panpsychism.
Decision-making typically requires judgments about causal relations: we need to know the causal effects of our actions and the causal relevance of various environmental factors. We investigate how several individuals' causal judgments can be aggregated into collective causal judgments. First, we consider the aggregation of causal judgments via the aggregation of probabilistic judgments, and identify the limitations of this approach. We then explore the possibility of aggregating causal judgments independently of probabilistic ones. Formally, we introduce the problem of causal-network aggregation. Finally, we revisit the aggregation of probabilistic judgments when this is constrained by prior aggregation of qualitative causal judgments.
The ``doctrinal paradox'' or ``discursive dilemma'' shows that propositionwise majority voting over the judgments held by multiple individuals on some interconnected propositions can lead to inconsistent collective judgments on these propositions. List and Pettit (2002) have proved that this paradox illustrates a more general impossibility theorem showing that there exists no aggregation procedure that generally produces consistent collective judgments and satisfies certain minimal conditions. Although the paradox and the theorem concern the aggregation of judgments rather than preferences, they invite comparison with two established results on the aggregation of preferences: the Condorcet paradox and Arrow's impossibility theorem. We may ask whether the new impossibility theorem is a special case of Arrow's theorem, or whether there are interesting disanalogies between the two results. In this paper, we compare the two theorems, and show that they are not straightforward corollaries of each other. We further suggest that, while the framework of preference aggregation can be mapped into the framework of judgment aggregation, there exists no obvious reverse mapping. Finally, we address one particular minimal condition that is used in both theorems – an independence condition – and suggest that this condition points towards a unifying property underlying both impossibility results.
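The discursive dilemma described in this abstract can be made concrete with a minimal sketch. The three-judge profile below is a standard illustrative example, not taken from the paper itself; all names are hypothetical.

```python
# Hypothetical three-member profile over p, q, and (p and q).
# Each individual judgment set is internally consistent.
profile = [
    {"p": True,  "q": True,  "p_and_q": True},   # judge 1
    {"p": True,  "q": False, "p_and_q": False},  # judge 2
    {"p": False, "q": True,  "p_and_q": False},  # judge 3
]

def propositionwise_majority(profile):
    """Accept each proposition iff a strict majority accepts it."""
    n = len(profile)
    return {prop: sum(j[prop] for j in profile) > n / 2
            for prop in profile[0]}

collective = propositionwise_majority(profile)
print(collective)  # {'p': True, 'q': True, 'p_and_q': False}

# The collective set is inconsistent: p and q are accepted,
# yet their conjunction is rejected.
consistent = collective["p_and_q"] == (collective["p"] and collective["q"])
print(consistent)  # False
```

Each individual is rational, but the majority outputs an inconsistent judgment set, which is exactly the phenomenon the impossibility theorem generalizes.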
Weakly Aggregative Modal Logic (WAML) is a collection of disguised polyadic modal logics with n-ary modalities whose arguments are all the same. WAML has some interesting applications in epistemic logic and the logic of games, so we study some basic model-theoretic aspects of WAML in this paper. Specifically, we give a van Benthem–Rosen characterization theorem for WAML based on an intuitive notion of bisimulation and show that each basic WAML system Kn lacks Craig interpolation.
Many of us believe (1) Saving a life is more important than averting any number of headaches. But what about risky cases? Surely: (2) In a single choice, if the risk of death is low enough, and the number of headaches at stake high enough, one should avert the headaches rather than avert the risk of death. And yet, if we will face enough iterations of cases like that in (2), in the long run some of those small risks of serious harms will surely eventuate. And yet: (3) Isn't it still permissible for us to run these repeated risks, despite that knowledge? After all, if it were not, then many of the risky activities that we standardly think permissible would in fact be impermissible. Nobody has yet offered a principle that can accommodate all of (1)–(3). In this paper, I show that we can accommodate all of these judgements, by taking into account both ex ante and ex post perspectives. In doing so, I clear aside an important obstacle to a viable deontological decision theory.
Many believe that we ought to save a large number from being permanently bedridden rather than save one from death. Many also believe that we ought to save one from death rather than a multitude from a very minor harm, no matter how large this multitude. I argue that a principle I call “Aggregate Relevant Claims” satisfactorily explains these judgments. I offer a rationale for this principle and defend it against objections.
As the ongoing literature on the paradoxes of the Lottery and the Preface reminds us, the nature of the relation between probability and rational acceptability remains far from settled. This article provides a novel perspective on the matter by exploiting a recently noted structural parallel with the problem of judgment aggregation. After offering a number of general desiderata on the relation between finite probability models and sets of accepted sentences in a Boolean sentential language, it is noted that a number of these constraints will be satisfied if and only if acceptable sentences are true under all valuations in a distinguished non-empty set W. Drawing inspiration from distance-based aggregation procedures, various scoring rule based membership conditions for W are discussed and a possible point of contact with ranking theory is considered. The paper closes with various suggestions for further research.
In solving judgment aggregation problems, groups often face constraints. Many decision problems can be modelled in terms of the acceptance or rejection of certain propositions in a language, and constraints as propositions that the decisions should be consistent with. For example, court judgments in breach-of-contract cases should be consistent with the constraint that action and obligation are necessary and sufficient for liability; judgments on how to rank several options in an order of preference with the constraint of transitivity; and judgments on budget items with budgetary constraints. Often more or less demanding constraints on decisions are imaginable. For instance, in preference ranking problems, the transitivity constraint is often contrasted with the weaker acyclicity constraint. In this paper, we make constraints explicit in judgment aggregation by relativizing the rationality conditions of consistency and deductive closure to a constraint set, whose variation yields more or less strong notions of rationality. We review several general results on judgment aggregation in light of such constraints.
All existing impossibility theorems on judgment aggregation require individual and collective judgment sets to be consistent and complete, arguably a demanding rationality requirement. They do not carry over to aggregation functions mapping profiles of consistent individual judgment sets to consistent collective ones. We prove that, whenever the agenda of propositions under consideration exhibits mild interconnections, any such aggregation function that is "neutral" between the acceptance and rejection of each proposition is dictatorial. We relate this theorem to the literature.
The widely discussed "discursive dilemma" shows that majority voting in a group of individuals on logically connected propositions may produce irrational collective judgments. We generalize majority voting by considering quota rules, which accept each proposition if and only if the number of individuals accepting it exceeds a given threshold, where different thresholds may be used for different propositions. After characterizing quota rules, we prove necessary and sufficient conditions on the required thresholds for various collective rationality requirements. We also consider sequential quota rules, which ensure collective rationality by adjudicating propositions sequentially and letting earlier judgments constrain later ones. Sequential rules may be path-dependent and strategically manipulable. We characterize path-independence and prove its essential equivalence to strategy-proofness. Our results shed light on the rationality of simple-, super-, and sub-majoritarian decision-making.
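The quota rules this abstract characterizes admit a simple sketch: each proposition is accepted exactly when the number of supporters exceeds its own threshold. The profile and thresholds below are hypothetical, chosen only for illustration.

```python
def quota_rule(profile, thresholds):
    """Propositionwise quota rule: accept a proposition iff the number
    of individuals accepting it exceeds that proposition's threshold."""
    return {prop: sum(j[prop] for j in profile) > thresholds[prop]
            for prop in thresholds}

# Hypothetical five-member profile over two propositions.
profile = [
    {"p": True,  "q": True},
    {"p": True,  "q": False},
    {"p": True,  "q": False},
    {"p": False, "q": True},
    {"p": False, "q": True},
]

# A supermajority threshold for p (need more than 3 of 5 votes) and a
# simple-majority threshold for q (need more than 2 of 5 votes).
print(quota_rule(profile, {"p": 3, "q": 2}))  # {'p': False, 'q': True}
```

Three of five accept p, which does not exceed the supermajority threshold of 3, so p is rejected while q, with the same support but a lower threshold, is accepted; this per-proposition flexibility is what the paper's threshold conditions constrain.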
Judgment aggregation is naturally applied to the modeling of collective attitudes. In the individual case, we represent agents as having not just beliefs, but also as supporting them with reasons. Can judgment aggregation help model a concept of collective reason? I argue that the resources of the standard judgment aggregation framework are insufficiently general. I develop a generalization of the framework that improves along this dimension. In the new framework, new aggregation rules become available, as well as a natural account of collective reasons.
This work contributes to the theory of judgement aggregation by discussing a number of significant non-classical logics. After adapting the standard framework of judgement aggregation to cope with non-classical logics, we discuss in particular results for the case of Intuitionistic Logic, the Lambek calculus, Linear Logic and Relevant Logics. The motivation for studying judgement aggregation in non-classical logics is that they offer a number of modelling choices to represent agents’ reasoning in aggregation problems. By studying judgement aggregation in logics that are weaker than classical logic, we investigate whether some well-known impossibility results, that were tailored for classical logic, still apply to those weak systems.
An important objection to preference-satisfaction theories of well-being is that they cannot make sense of interpersonal comparisons. A tradition dating back to Harsanyi (1953) attempts to solve this problem by appeal to people’s so-called extended preferences. This paper presents a new problem for the extended preferences program, related to Arrow’s celebrated impossibility theorem. We consider three ways in which the extended-preference theorist might avoid this problem, and recommend that she pursue one: developing aggregation rules that violate Arrow’s Independence of Irrelevant Alternatives condition.
There are many reasons we might want to take the opinions of various individuals and pool them to give the opinions of the group they constitute. If all the individuals in the group have probabilistic opinions about the same propositions, there is a host of pooling functions we might deploy, such as linear or geometric pooling. However, there are also cases where different members of the group assign probabilities to different sets of propositions, which might overlap a lot, a little, or not at all. There are far fewer proposals for how to proceed in these cases, and those there are have undesirable features. I begin by considering four proposals and arguing that they don't work. Then I'll describe my own proposal, which is intended to cover the situation in which we want to pool the individual opinions in order to ascribe an opinion to the group considered as an agent in its own right.
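The two pooling functions mentioned for the shared-agenda case can be sketched as follows. The weights and credences below are hypothetical examples; the formulas are the standard textbook definitions, not this paper's own proposal.

```python
import math

def linear_pool(credences, weights):
    """Linear pooling: weighted arithmetic mean of the probabilities."""
    return sum(w * c for w, c in zip(weights, credences))

def geometric_pool(credences, weights):
    """Geometric pooling: weighted geometric mean, renormalized over
    the proposition and its negation."""
    num = math.prod(c ** w for w, c in zip(weights, credences))
    den = math.prod((1 - c) ** w for w, c in zip(weights, credences))
    return num / (num + den)

# Two equally weighted individuals assign 0.8 and 0.4 to one proposition.
creds, ws = [0.8, 0.4], [0.5, 0.5]
print(round(linear_pool(creds, ws), 3))     # 0.6
print(round(geometric_pool(creds, ws), 3))  # 0.62
```

Note that both functions presuppose that everyone has a credence in the same proposition; the paper's problem is precisely that this presupposition can fail.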
This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation and probability aggregation. The present illustrative rather than exhaustive review is intended to give readers new to the field of judgment aggregation a sense of this rapidly growing research area.
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard–Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
This paper addresses the problem of judgment aggregation in science. How should scientists decide which propositions to assert in a collaborative document? We distinguish the question of what to write in a collaborative document from the question of collective belief. We argue that recent objections to the application of the formal literature on judgment aggregation to the problem of judgment aggregation in science apply to the latter, not the former question. The formal literature has introduced various desiderata for an aggregation procedure. Proposition-wise majority voting emerges as a procedure that satisfies all desiderata which represent norms of science. An interesting consequence is that not all collaborating scientists need to endorse every proposition asserted in a collaborative document.
The new field of judgment aggregation aims to find collective judgments on logically interconnected propositions. Recent impossibility results establish limitations on the possibility to vote independently on the propositions. I show that, fortunately, the impossibility results do not apply to a wide class of realistic agendas once propositions like “if a then b” are adequately modelled, namely as subjunctive implications rather than material implications. For these agendas, consistent and complete collective judgments can be reached through appropriate quota rules (which decide propositions using acceptance thresholds). I characterise the class of these quota rules. I also prove an abstract result that characterises consistent aggregation for arbitrary agendas in a general logic.
Patient preference predictors aim to solve the moral problem of making treatment decisions on behalf of incapacitated patients. This commentary on a case of an unrepresented patient at the end of life considers 3 related problems of such predictors: the problem of restricting the scope of inputs to the models (the “scope” problem), the problem of weighing inputs against one another (the “weight” problem), and the problem of multiple reasonable solutions to the scope and weight problems (the “multiple reasonable models” problem). Each of these problems poses challenges to reliably implementing patient preference predictors in important, high-stakes health care decision making. This commentary also suggests a way forward.
We present an abstract social aggregation theorem. Society, and each individual, has a preorder that may be interpreted as expressing values or beliefs. The preorders are allowed to violate both completeness and continuity, and the population is allowed to be infinite. The preorders are only assumed to be represented by functions with values in partially ordered vector spaces, and whose product has convex range. This includes all preorders that satisfy strong independence. Any Pareto indifferent social preorder is then shown to be represented by a linear transformation of the representations of the individual preorders. Further Pareto conditions on the social preorder correspond to positivity conditions on the transformation. When all the Pareto conditions hold and the population is finite, the social preorder is represented by a sum of individual preorder representations. We provide two applications. The first yields an extremely general version of Harsanyi's social aggregation theorem. The second generalizes a classic result about linear opinion pooling.
It is plausible to think that it is wrong to cure many people’s headaches rather than save someone else’s life. On the other hand, it is plausible to think that it is not wrong to expose someone to a tiny risk of death when curing this person’s headache. I will argue that these claims are inconsistent. For if we keep taking this tiny risk then it is likely that one person dies, while many others’ headaches are cured. In light of this inconsistency, there is a conflict in our intuitions about beneficence and chance. This conflict is perplexing. And I have not been able to find a satisfactory way of resolving it. Perhaps you can do better?
Joe Horton argues that partial aggregation yields unacceptable verdicts in cases with risk and multiple decisions. I begin by showing that Horton’s challenge does not depend on risk, since exactly similar arguments apply to riskless cases. The underlying conflict Horton exposes is between partial aggregation and certain principles of diachronic choice. I then provide two arguments against these diachronic principles: they conflict with intuitions about parity, prerogatives, and cyclical preferences, and they rely on an odd assumption about diachronic choice. Finally, I offer an explanation, on behalf of partial aggregation, for why these diachronic principles fail.
The aim of this article is to introduce the theory of judgment aggregation, a growing interdisciplinary research area. The theory addresses the following question: How can a group of individuals make consistent collective judgments on a given set of propositions on the basis of the group members' individual judgments on them? I begin by explaining the observation that initially sparked the interest in judgment aggregation, the so-called "doctrinal" and "discursive paradoxes". I then introduce the basic formal model of judgment aggregation, which allows me to present some illustrative variants of a generic impossibility result. I subsequently turn to the question of how this impossibility result can be avoided, going through several possible escape routes. Finally, I relate the theory of judgment aggregation to other branches of aggregation theory. Rather than offering a comprehensive survey of the theory of judgment aggregation, I hope to introduce the theory in a succinct and pedagogical way, providing an illustrative rather than exhaustive coverage of some of its key ideas and results.
Judgment-aggregation theory has always focused on the attainment of rational collective judgments. But so far, rationality has been understood in static terms: as coherence of judgments at a given time, defined as consistency, completeness, and/or deductive closure. This paper asks whether collective judgments can be dynamically rational, so that they change rationally in response to new information. Formally, a judgment aggregation rule is dynamically rational with respect to a given revision operator if, whenever all individuals revise their judgments in light of some information (a learnt proposition), then the new aggregate judgments are the old ones revised in light of this information, i.e., aggregation and revision commute. We prove an impossibility theorem: if the propositions on the agenda are non-trivially connected, no judgment aggregation rule with standard properties is dynamically rational with respect to any revision operator satisfying some basic conditions on revision. Our theorem is the dynamic-rationality counterpart of some well-known impossibility theorems for static rationality. We also explore how dynamic rationality might be achieved by relaxing some of the conditions on the aggregation rule and/or the revision operator. Notably, premise-based aggregation rules are dynamically rational with respect to so-called premise-based revision operators.
We analyse the computational complexity of three problems in judgment aggregation: (1) computing a collective judgment from a profile of individual judgments (the winner determination problem); (2) deciding whether a given agent can influence the outcome of a judgment aggregation procedure in her favour by reporting insincere judgments (the strategic manipulation problem); and (3) deciding whether a given judgment aggregation scenario is guaranteed to result in a logically consistent outcome, independently from what the judgments supplied by the individuals are (the problem of the safety of the agenda). We provide results both for specific aggregation procedures (the quota rules, the premise-based procedure, and a distance-based procedure) and for classes of aggregation procedures characterised in terms of fundamental axioms.
In the theory of judgment aggregation, it is known for which agendas of propositions it is possible to aggregate individual judgments into collective ones in accordance with the Arrow-inspired requirements of universal domain, collective rationality, unanimity preservation, non-dictatorship and propositionwise independence. But it is only partially known (e.g., only in the monotonic case) for which agendas it is possible to respect additional requirements, notably non-oligarchy, anonymity, no individual veto power, or implication preservation. We fully characterize the agendas for which there are such possibilities, thereby answering the most salient open questions about propositionwise judgment aggregation. Our results build on earlier results by Nehring and Puppe (2002), Nehring (2006), Dietrich and List (2007a) and Dokow and Holzman (2010a).
With the rapidly growing amounts of information, visualization is becoming increasingly important, as it allows users to easily explore and understand large amounts of information. However, the field of information visualization currently lacks sufficient theoretical foundations. This article addresses foundational questions connecting information visualization with computing and philosophy studies. The idea of multiscale information granulation is described based on two fundamental concepts: information (structure) and computation (process). A new information processing paradigm of Granular Computing enables a stepwise increase of granulation/aggregation of information on different levels of resolution, which makes possible dynamical viewing of data. Information produced by Google Earth is an illustration of visualization based on clustering (granulation) of information on a succession of layers. Depending on the level, specific emergent properties become visible as a result of different ways of aggregation of data/information. As information visualization ultimately aims at amplifying cognition, we discuss the process of simulation and emulation in relation to cognition, and in particular visual cognition.
Interpersonal aggregation involves the combining and weighing of benefits and losses to multiple individuals in the course of determining what ought to be done. Most consequentialists embrace thoroughgoing interpersonal aggregation, the view that any large benefit to each of a few people can be morally outweighed by allocating any smaller benefit to each of many others, so long as this second group is sufficiently large. This would permit letting one person die in order to cure some number of mild headaches instead. Most non-consequentialists reject thoroughgoing interpersonal aggregation despite also believing it is permissible to let one person die in order to prevent many cases of paraplegia instead. Non-consequentialists defend this asymmetry largely on the basis of intuition, and some rely on the notion of relevance to formalize the grounding intuitions. This article seeks to clarify and strengthen the non-consequentialist notion of relevance by engaging with three objections to it.
The debate on the epistemology of disagreement has so far focused almost exclusively on cases of disagreement between individual persons. Yet, many social epistemologists agree that at least certain kinds of groups are equally capable of having beliefs that are open to epistemic evaluation. If so, we should expect a comprehensive epistemology of disagreement to accommodate cases of disagreement between group agents, such as juries, governments, companies, and the like. However, this raises a number of fundamental questions concerning what it means for groups to be epistemic peers and to disagree with each other. In this paper, we explore what group peer disagreement amounts to given that we think of group belief in terms of List and Pettit’s ‘belief aggregation model’. We then discuss how the so-called ‘equal weight view’ of peer disagreement is best accommodated within this framework. The account that seems most promising to us says, roughly, that the parties to a group peer disagreement should adopt the belief that results from applying the most suitable belief aggregation function for the combined group on all members of the combined group. To motivate this view, we test it against various intuitive cases, derive some of its notable implications, and discuss how it relates to the equal weight view of individual peer disagreement.
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
Buddhism originated and developed in an Indian cultural context that featured many first-person practices for producing and exploring states of consciousness through the systematic training of attention. In contrast, the dominant methods of investigating the mind in Western cognitive science have emphasized third-person observation of the brain and behavior. In this chapter, we explore how these two different projects might prove mutually beneficial. We lay the groundwork for a cross-cultural cognitive science by using one traditional Buddhist model of the mind – that of the five aggregates – as a lens for examining contemporary cognitive science conceptions of consciousness.
Is an outcome where many people are saved and one person dies better than an outcome where the one is saved and the many die? According to the standard utilitarian justification, the former is better because it has a greater sum total of well-being. This justification involves a controversial form of moral aggregation, because it is based on a comparison between aggregates of different people's well-being. Still, an alternative justification, the Argument for Best Outcomes, does not involve moral aggregation. I extend the Argument for Best Outcomes to show that any utilitarian evaluation can be justified without moral aggregation.
Why is there a specific problem with biological individuality? Because the living realm contains a wide range of exotic particular concrete entities that do not easily match our ordinary concept of an individual. Slime moulds, dandelions, and siphonophores are among the Odd Entities that excite the ontological zeal of the philosophers of biology. Most of these philosophers, however, seem to believe that these Odd Cases oblige us to refine or revise our common concept of an individual. They think, explicitly or tacitly, that to be a living, evolutionary entity is to be a living individual. In this paper, we explore an alternative proposal: the variety and oddity of the forms of the living realm might be ontologically regimented through an increase in the categorial complexity of the living realm, by admitting, beside living individuals, living non-individuals, or by acknowledging, more generally, that the evolutionary development of living forms is not necessarily a process of building individuals, that life is not necessarily individual-oriented. We claim that, from an ontological point of view, the spectacle of the living realm obliges us to take aggregativity seriously.
Many collective decision making problems have a combinatorial structure: the agents involved must decide on multiple issues and their preferences over one issue may depend on the choices adopted for some of the others. Voting is an attractive method for making collective decisions, but conducting a multi-issue election is challenging. On the one hand, requiring agents to vote by expressing their preferences over all combinations of issues is computationally infeasible; on the other, decomposing the problem into several elections on smaller sets of issues can lead to paradoxical outcomes. Any pragmatic method for running a multi-issue election will have to balance these two concerns. We identify and analyse the problem of generating an agenda for a given election, specifying which issues to vote on together in local elections and in which order to schedule those local elections.
The new field of judgment aggregation aims to merge many individual sets of judgments on logically interconnected propositions into a single collective set of judgments on these propositions. Judgment aggregation has commonly been studied using classical propositional logic, with a limited expressive power and a problematic representation of conditional statements ("if P then Q") as material conditionals. In this methodological paper, I present a simple unified model of judgment aggregation in general logics. I show how many realistic decision problems can be represented in it. This includes decision problems expressed in languages of classical propositional logic, predicate logic (e.g. preference aggregation problems), modal or conditional logics, and some multi-valued or fuzzy logics. I provide a list of simple tools for working with general logics, and I prove impossibility results that generalise earlier theorems.
Judgment aggregation theory, or rather, as we conceive of it here, logical aggregation theory generalizes social choice theory by having the aggregation rule bear on judgments of all kinds instead of merely preference judgments. It derives from Kornhauser and Sager’s doctrinal paradox and List and Pettit’s discursive dilemma, two problems that we distinguish emphatically here. The current theory has developed from the discursive dilemma, rather than the doctrinal paradox, and the final objective of the paper is to give the latter its own theoretical development along the line of recent work by Dietrich and Mongin. However, the paper also aims at reviewing logical aggregation theory as such, and it covers impossibility theorems by Dietrich, Dietrich and List, Dokow and Holzman, List and Pettit, Mongin, Nehring and Puppe, Pauly and van Hees, providing a uniform logical framework in which they can be compared with each other. The review goes through three historical stages: the initial paradox and dilemma, the scattered early results on the independence axiom, and the so-called canonical theorem, a collective achievement that provided the theory with its specific method of analysis. The paper goes some way towards philosophical logic, first by briefly connecting the aggregative framework of judgment with the modern philosophy of judgment, and second by thoroughly discussing and axiomatizing the ‘general logic’ built in this framework.
'Desire', 'preference', 'utility', and '(utility-aggregating) moral desirability' are terms that build on each other in this order. The article follows this definitional structure and presents these terms and their justifications. The aim is to present welfare-ethical criteria of the common good that define 'moral desirability' as an aggregation, e.g. addition, of individual utility: utilitarianism, utility egalitarianism, leximin, and prioritarianism.
In the framework of judgment aggregation, we assume that some formulas of the agenda are singled out as premisses, and that both Independence (formula-wise aggregation) and Unanimity Preservation hold for them. Whether premiss-based aggregation thus defined is compatible with conclusion-based aggregation, as defined by Unanimity Preservation on the non-premisses, depends on how the premisses are logically connected, both among themselves and with other formulas. We state necessary and sufficient conditions under which the combination of both approaches leads to dictatorship (resp. oligarchy), either just on the premisses or on the whole agenda. This framework is inspired by the doctrinal paradox of legal theory and arguably relevant to this field as well as political science and political economy. When the set of premisses coincides with the whole agenda, a limiting case of our assumptions, we obtain several existing results in judgment aggregation theory.
I propose a straightforward reconciliation of Leibniz’s conception of bodies as aggregates of simple substances (i.e., monads) with his doctrine that bodies are the phenomena of perceivers, without in the process saddling him with any equivocations. The reconciliation relies on the familiar idea that in Leibniz’s idiolect, an aggregate of Fs is that which immediately presupposes those Fs, or in other words, has those Fs as immediate requisites. But I take this idea in a new direction. Taking notice of the fact that Leibniz speaks of three respects in which one thing may immediately presuppose others (i.e., with respect to its being, its existence, and its reality), I argue that a phenomenon having its being in one perceiving substance (monad) can plausibly be understood to presuppose other perceiving substances (monads) in two of these respects. Accordingly, good sense can be made of both the claim that a phenomenon in one monad is an aggregate of other monads (in Leibniz’s technical sense of 'aggregate') and the (equivalent) claim that the latter monads are constituents of the phenomenon (in his technical sense of 'constituent'). So understood, the two conceptions of body are perfectly compatible, just as Leibniz seems to think.
The article proceeds upon the assumption that the beliefs and degrees of belief of rational agents satisfy a number of constraints, including: consistency and deductive closure for belief sets, conformity to the axioms of probability for degrees of belief, and the Lockean Thesis concerning the relationship between belief and degree of belief. Assuming that the beliefs and degrees of belief of both individuals and collectives satisfy the preceding three constraints, I discuss what further constraints may be imposed on the aggregation of beliefs and degrees of belief. Some possibility and impossibility results are presented. The possibility results suggest that the three proposed rationality constraints are compatible with reasonable aggregation procedures for belief and degree of belief.
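For readers unfamiliar with the Lockean Thesis invoked in this abstract, a standard textbook formulation is sketched below; the symbols $B$, $\mathrm{cr}$, and the threshold $t$ are the conventional presentation, not notation taken from the article itself.

```latex
% Lockean Thesis: rational belief tracks sufficiently high credence.
% B(p): the agent believes proposition p;
% cr(p): the agent's degree of belief (credence) in p.
B(p) \iff \mathrm{cr}(p) \geq t,
\qquad \text{for some fixed threshold } t \in \left(\tfrac{1}{2}, 1\right]
```

The choice of $t$ matters: deductive closure of the belief set can fail for any $t < 1$ (as the lottery paradox illustrates), which is precisely the kind of tension between the three constraints that the article's possibility and impossibility results address.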
Can a group be an orthodox rational agent? This requires the group's aggregate preferences to follow expected utility (static rationality) and to evolve by Bayesian updating (dynamic rationality). Group rationality is possible, but the only preference aggregation rules which achieve it (and are minimally Paretian and continuous) are the linear-geometric rules, which combine individual values linearly and combine individual beliefs geometrically. Linear-geometric preference aggregation contrasts with classic linear-linear preference aggregation, which combines both values and beliefs linearly, but achieves only static rationality. Our characterisation of linear-geometric preference aggregation has two corollaries: a characterisation of linear aggregation of values (Harsanyi's Theorem) and a characterisation of geometric aggregation of beliefs.
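The contrast between linear and geometric pooling of beliefs described in this abstract can be made concrete with the standard definitions; the weights $w_i$ and normalisation constant $c$ below follow the usual textbook presentation rather than the paper's own notation.

```latex
% Linear pooling: the group probability of a world \omega is a
% weighted arithmetic mean of the individual probabilities P_i.
P_{\mathrm{lin}}(\omega) = \sum_{i=1}^{n} w_i \, P_i(\omega),
\qquad w_i \geq 0, \ \sum_{i=1}^{n} w_i = 1

% Geometric pooling: a weighted geometric mean, renormalised so
% that the group probabilities sum to one.
P_{\mathrm{geo}}(\omega) = c \prod_{i=1}^{n} P_i(\omega)^{w_i},
\qquad c = \Bigg( \sum_{\omega'} \prod_{i=1}^{n} P_i(\omega')^{w_i} \Bigg)^{-1}
```

Geometric pooling commutes with Bayesian conditionalisation (updating each $P_i$ on evidence and then pooling gives the same result as pooling first and then updating), whereas linear pooling generally does not; this is the structural reason it delivers the dynamic rationality the abstract mentions.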