I formulate a counterfactual version of the notorious ‘Ramsey Test’. Even in a weak form, this makes counterfactuals subject to the very argument that Lewis used to persuade the majority of the philosophical community that indicative conditionals were in hot water. I outline two reactions: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
Suppose that the members of a group each hold a rational set of judgments on some interconnected questions, and imagine that the group itself has to form a collective, rational set of judgments on those questions. How should it go about dealing with this task? We argue that the question raised is subject to a difficulty that has recently been noticed in discussion of the doctrinal paradox in jurisprudence. And we show that there is a general impossibility theorem that that difficulty illustrates. Our paper describes this impossibility result and provides an exploration of its significance. The result naturally invites comparison with Kenneth Arrow's famous theorem (Arrow, 1963 and 1984; Sen, 1970), and we elaborate that comparison in a companion paper (List and Pettit, 2002). The paper is in four sections. The first section documents the need for various groups to aggregate their members' judgments; the second presents the discursive paradox; the third gives an informal statement of the more general impossibility result; the formal proof is presented in an appendix. The fourth section, finally, discusses some escape routes from that impossibility.
The “doctrinal paradox” or “discursive dilemma” shows that propositionwise majority voting over the judgments held by multiple individuals on some interconnected propositions can lead to inconsistent collective judgments on these propositions. List and Pettit (2002) have proved that this paradox illustrates a more general impossibility theorem showing that there exists no aggregation procedure that generally produces consistent collective judgments and satisfies certain minimal conditions. Although the paradox and the theorem concern the aggregation of judgments rather than preferences, they invite comparison with two established results on the aggregation of preferences: the Condorcet paradox and Arrow's impossibility theorem. We may ask whether the new impossibility theorem is a special case of Arrow's theorem, or whether there are interesting disanalogies between the two results. In this paper, we compare the two theorems, and show that they are not straightforward corollaries of each other. We further suggest that, while the framework of preference aggregation can be mapped into the framework of judgment aggregation, there exists no obvious reverse mapping. Finally, we address one particular minimal condition that is used in both theorems – an independence condition – and suggest that this condition points towards a unifying property underlying both impossibility results.
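To see the dilemma concretely, here is a minimal sketch in Python (the three-judge profile is the standard textbook illustration, not an example drawn from the paper itself):

```python
# Discursive dilemma: propositionwise majority voting on logically connected
# propositions (p, q, and their conjunction p & q) can produce an inconsistent
# collective judgment set even though every individual judge is consistent.

judges = {
    "A": {"p": True,  "q": True,  "p&q": True},
    "B": {"p": True,  "q": False, "p&q": False},
    "C": {"p": False, "q": True,  "p&q": False},
}

def majority(prop):
    votes = [judgments[prop] for judgments in judges.values()]
    return sum(votes) > len(votes) / 2

collective = {prop: majority(prop) for prop in ("p", "q", "p&q")}
print(collective)  # {'p': True, 'q': True, 'p&q': False}

# The group accepts p and accepts q yet rejects p & q: an inconsistent outcome,
# which is exactly the clash the impossibility theorem generalizes.
```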
In this paper, I investigate whether we can use a world-involving framework to model the epistemic states of non-ideal agents. The standard possible-world framework falters in this respect because of a commitment to logical omniscience. A familiar attempt to overcome this problem centers around the use of impossible worlds where the truths of logic can be false. As we shall see, if we admit impossible worlds where “anything goes” in modal space, it is easy to model extremely non-ideal agents that are incapable of performing even the most elementary logical deductions. A much harder, and considerably less investigated, challenge is to ensure that the resulting modal space can also be used to model moderately ideal agents that are not logically omniscient but nevertheless logically competent. Intuitively, while such agents may fail to rule out subtly impossible worlds that verify complex logical falsehoods, they are nevertheless able to rule out blatantly impossible worlds that verify obvious logical falsehoods. To model moderately ideal agents, I argue, the job is to construct a modal space that contains only possible and non-trivially impossible worlds where it is not the case that “anything goes”. But I prove that it is impossible to develop an impossible-world framework that can do this job and that satisfies certain standard conditions. Effectively, I show that attempts to model moderately ideal agents in a world-involving framework collapse to modeling either logically omniscient agents or extremely non-ideal agents.
While a large social-choice-theoretic literature discusses the aggregation of individual judgments into collective ones, there is much less formal work on the transformation of judgments in group communication. I develop a model of judgment transformation and prove a baseline impossibility theorem: Any judgment transformation function satisfying some initially plausible conditions is the identity function, under which no opinion change occurs. I identify escape routes from this impossibility and argue that the kind of group communication envisaged by deliberative democrats must be "holistic": It must focus on webs of connected propositions, not on one proposition at a time, which echoes the Duhem-Quine "holism thesis" on scientific theory testing. My approach provides a map of the logical space in which different possible group communication processes are located.
Theories of content are at the centre of philosophical semantics. The most successful general theory of content takes contents to be sets of possible worlds. But such contents are very coarse-grained, for they cannot distinguish between logically equivalent contents. They draw intensional but not hyperintensional distinctions. This is often remedied by including impossible as well as possible worlds in the theory of content. Yet it is often claimed that impossible worlds are metaphysically obscure; and it is sometimes claimed that their use results in a trivial theory of content. In this paper, I set out the need for impossible worlds in a theory of content; I briefly sketch a metaphysical account of their nature; I argue that worlds in general must be very fine-grained entities; and, finally, I argue that the resulting conception of impossible worlds is not a trivial one.
Standard impossibility theorems on judgment aggregation over logically connected propositions either use a controversial systematicity condition or apply only to agendas of propositions with rich logical connections. Are there any serious impossibilities without these restrictions? We prove an impossibility theorem without requiring systematicity that applies to most standard agendas: Every judgment aggregation function (with rational inputs and outputs) satisfying a condition called unbiasedness is dictatorial (or effectively dictatorial if we remove one of the agenda conditions). Our agenda conditions are tight. When applied illustratively to (strict) preference aggregation represented in our model, the result implies that every unbiased social welfare function with universal domain is effectively dictatorial.
The theory of possible worlds has permeated analytic philosophy in recent decades, and its best versions have a consequence which has gone largely unnoticed: in addition to the panoply of possible worlds, there are a great many impossible worlds. A uniform ontological method alone should bring the friends of possible worlds to adopt impossible worlds, I argue, but the theory's applications also provide strong incentives. In particular, the theory facilitates an account of counterfactuals which avoids several of the implausible results of David Lewis's account, and it paves the way for the analogues of Kripkean semantics for epistemic and relevant logics. On the theories of possible worlds as abstract objects, worlds bear a strong resemblance to propositions. I contend that if there are distinct necessarily false propositions, then there are likewise distinct impossible worlds. However, one who regards possible worlds as concrete objects must not recognize impossible worlds, in part because concrete worlds cannot misrepresent certain features of reality, as some impossible worlds must. Accordingly, I defend and develop a theory of impossible worlds as maximal impossible states of affairs. Impossible worlds perform admirably in the analysis of counterfactuals with impossible antecedents. I argue that, contrary to standard accounts, not all counterpossibles are trivially true, and I develop a Lewis-style semantics which allows this result. The point is crucial, since many views presuppose that some counterpossibles are substantive philosophical truths. Finally, I show that impossible worlds hold great promise for doxastic and relevant logics. Epistemic logic needs a domain of propositions which is not closed under strict implication to avoid the problem of logical omniscience, and relevant logic needs such a domain to avoid the famous paradoxes of implication. In sum, impossible world theory promises natural, elegant solutions to philosophical problems in numerous areas where possible worlds alone flounder. These solutions come to most possible world theorists at no cost, since the existence of impossible worlds is entailed by theses they already hold.
Modal knowledge accounts that are based on standard possible-worlds semantics face well-known problems when it comes to knowledge of necessities. Beliefs in necessities are trivially sensitive and safe and, therefore, trivially constitute knowledge according to these accounts. In this paper, I will first argue that existing solutions to this necessity problem, which accept standard possible-worlds semantics, are unsatisfactory. In order to solve the necessity problem, I will utilize an unorthodox account of counterfactuals, as proposed by Nolan, on which we also consider impossible worlds. Nolan's account for counterpossibles delivers the intuitively correct result for sensitivity, i.e., S's belief is sensitive in intuitive cases of knowledge of necessities and insensitive in intuitive cases of knowledge failure. However, we acquire the same plausible result for safety only if we reject his strangeness of impossibility condition and accept the modal closeness of impossible worlds. In this case, the necessity problem can be solved analogously for sensitivity and safety. For some, such non-moderate accounts might come at too high a cost. In this respect, sensitivity is better off than safety when it comes to knowing necessities.
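For orientation, the usual rough counterfactual glosses of the two conditions (my paraphrase, not the paper's own formulations) are:

```latex
% \Box\!\!\to stands for the counterfactual conditional; B is the belief operator.
\text{Sensitivity:}\quad \neg p \;\Box\!\!\to\; \neg Bp
\qquad\qquad
\text{Safety:}\quad Bp \;\Box\!\!\to\; p
```

If p is necessarily true, then on standard possible-worlds semantics there are no worlds at which p fails, so the sensitivity counterfactual is vacuously true, and safety is trivially satisfied because p holds at every nearby world. Admitting Nolan-style impossible worlds supplies non-vacuous closest worlds where p fails, which is what blocks the trivialization.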
Several theorists have been attracted to the idea that in order to account for counterpossibles, i.e. counterfactuals with impossible antecedents, we must appeal to impossible worlds. However, few have attempted to provide a detailed impossible worlds account of counterpossibles. Berit Brogaard and Joe Salerno's 'Remarks on Counterpossibles' is one of the few attempts to fill in this theoretical gap. In this article, I critically examine their account. I prove a number of unanticipated implications of their account that end up implying a counterintuitive result. I then examine a suggested revision and point out a surprising implication of the revision.
This paper argues for a view of free will that I will call the conceptual impossibility of the truth of free will error theory - the conceptual impossibility thesis. I will argue that given the concept of free will we in fact deploy, it is impossible for our free will judgements - judgements regarding whether some action is free or not - to be systematically false. Since we do judge many of our actions to be free, it follows from the conceptual impossibility thesis that many of our actions are in fact free. Hence it follows that free will error theory - the view that no judgement of the form ‘action A was performed freely’ is true - is false. I will show that taking seriously the conceptual impossibility thesis helps make good sense of some seemingly inconsistent results in recent experimental philosophy work on determinism and our concept of free will. Further, I will present some reasons why we should expect to find similar results for every other factor we might have thought was important for free will.
The most recent and, arguably, the most scientifically rigorous study of the healing power of intercessory prayer, the so-called “STEP” (“Study of the Therapeutic Effects of Prayer”) study, involved over 1,800 subjects and roughly a decade of study. Though the results did little, if anything, to lend support to the idea that prayers really can heal the sick, religious believers might remain optimistic. Two main reasons for this optimism stem from, first, a crucial missing (though practically unavoidable) study control and, second, the warning that studying the effectiveness of prayer amounts to an (improper) attempt at quantifying God’s effects in the world. Few serious religious believers will want to say that we could ever pin God down in that way; hence, in order to maintain the importance of faith in religion, God would have good reason to manipulate the results of an experiment on the effectiveness of prayer (even if prayer were very effective when not formally studied). But, then, the only evidence to which prayer proponents can appeal is anecdotal. The problems associated with anecdotal evidence as support for hypotheses are discussed here (generally, as well as with respect to intercessory prayer) and, I submit, the empirical case for the healing powers of prayer is weak.
Suppose you are at the gym trying to see some naked beauties by peeping through a hole in the wall. A policeman happens by; he asks you what you are doing, and you honestly tell him. He then arrests you for voyeurism. Are you guilty? We don’t know yet because there is one more fact to be considered: while you honestly thought that a locker room was on the other side of the wall, it was actually a squash court. Are you guilty now?

Probably. You might argue that your scopophiliac ambition was impossible to satisfy given that you were peeping into a squash court, not a locker room. But this “Impossibility Defense” would fail because most jurisdictions follow the very influential Model Penal Code (MPC), which says that what is important about attempt is not the likelihood of success but rather what was going on in your head. You tried to peer into a locker room with the intention of seeing some nudity; that is enough for culpability. The fact that you were mistaken about the location does not exonerate you.

But now suppose that the particular jurisdiction you are in does not criminalize voyeurism. While most people think that voyeurism is just plain wrong, if not disgusting, the legislature just never got around to drafting a statute against it. Are you guilty now? The answer is no. But you might just be out of luck and convicted anyway.

The reason for this strange conclusion is that most jurisdictions have followed the Model Penal Code in yet another respect: along with the MPC’s “subjectivist” emphasis on what is in your head, they have followed the MPC’s lead in abolishing the Impossibility Defense entirely. As a result, people who believe that they are breaking laws when they really are not may still be subject to arrest, prosecution, and conviction respectively by police, prosecutors, and judges/juries merely if all three parties regard their conduct — especially their trying to violate a law that they mistakenly believed existed — as morally reprehensible. The best, if not only, defense against this charge is the Impossibility Defense, but — again — most jurisdictions have decided to make this defense unavailable to defendants.

Depriving eligible defendants of the Impossibility Defense is unjust. It violates one of the most basic principles of criminal justice: the legality principle. The legality principle says that there cannot be just punishment without a crime, and there should not be a crime without an explicit law designating it as such. So you cannot be charged with, and convicted of, attempted voyeurism if voyeurism, reprehensible as it may be, was not explicitly prohibited at the time that you made the attempt.

If we believe in the legality principle, then we must restore the Impossibility Defense. Without the latter, too many defendants are being — and will continue to be — punished for attempts to perform acts that were not themselves illegal but which various parties in the criminal justice system (except the legislature) thought should be illegal based on their extralegal, moral prejudices.

In addition to the MPC, the principal obstacle to resurrecting the Impossibility Defense is a good deal of conceptual confusion that permeates relevant cases and scholarship. Too many courts and academics have conflated “factual impossibility” with “legal impossibility” and have fallaciously inferred “hybrid impossibility” from “hybrid mistakes” (that is, legal mistakes that derive from factual mistakes).
One of the principal goals of this Article, then, is to clear up all of this confusion. I will explicate in the simplest possible terms (a) the difference between factual impossibility and legal impossibility, (b) why only legal impossibility qualifies as exculpatory, and (c) why hybrid impossibility simply does not exist.
The author compares the world-view attitudes of oligarchy and capitalism on the basis of an analysis of Ludwig von Mises' writings. The results of this comparison allow us to maintain that there is neither a market economy nor competition, and hence no capitalism, in Ukraine. The world-view basis of capitalism is the philosophy of liberalism, with its principles of equality, freedom, inviolability of private property, and cooperation for the benefit of the whole society. By contrast, oligarchy, based on the strong desire for infinite enrichment and exploitation, has no philosophical basis.
The coherence of independent reports provides a strong reason to believe that the reports are true. This plausible claim has come under attack from recent work in Bayesian epistemology. This work shows that, under certain probabilistic conditions, coherence cannot increase the probability of the target claim. These theorems are taken to demonstrate that epistemic coherentism is untenable. To date no one has investigated how these results bear on different conceptions of coherence. I investigate this situation using Thagard’s ECHO model of explanatory coherence. Thagard’s ECHO model provides a natural representation of the evidential significance of multiple independent reports.
It is a widespread intuition that the coherence of independent reports provides a powerful reason to believe that the reports are true. Formal results by Huemer, M. 1997. “Probability and Coherence Justification.” Southern Journal of Philosophy 35: 463–72; Olsson, E. 2002. “What is the Problem of Coherence and Truth?” Journal of Philosophy XCIX: 246–72; Olsson, E. 2005. Against Coherence: Truth, Probability, and Justification. Oxford University Press; and Bovens, L., and S. Hartmann. 2003. Bayesian Epistemology. Oxford University Press, prove that, under certain conditions, coherence cannot increase the probability of the target claim. These formal results, known as ‘the impossibility theorems’, have been widely discussed in the literature. They are taken to have significant epistemic upshot. In particular, they are taken to show that reports must first individually confirm the target claim before the coherence of multiple reports offers any positive confirmation. In this paper, I dispute this epistemic interpretation. The impossibility theorems are consistent with the idea that the coherence of independent reports provides a powerful reason to believe that the reports are true even if the reports do not individually confirm prior to coherence. Once we see that the formal discoveries do not have this implication, we can recover a model of coherence justification consistent with Bayesianism and these results. This paper, thus, seeks to turn the tide of the negative findings for coherence reasoning by defending coherence as a unique source of confirmation.
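As a back-of-the-envelope illustration of the intuition at stake (toy numbers of my own, not the cited models), agreement between two conditionally independent reports can confirm a claim far more strongly than either report does alone:

```python
# Posterior probability of hypothesis H after n agreeing, conditionally
# independent reports, each with the same likelihoods (illustrative numbers).

prior = 0.1
p_report_given_h = 0.7        # chance a witness reports R if H is true
p_report_given_not_h = 0.3    # chance a witness reports R if H is false

def posterior(n_reports):
    likelihood_h = p_report_given_h ** n_reports
    likelihood_not_h = p_report_given_not_h ** n_reports
    joint_h = prior * likelihood_h
    return joint_h / (joint_h + (1 - prior) * likelihood_not_h)

print(round(posterior(1), 3))  # ~0.206 after a single report
print(round(posterior(2), 3))  # ~0.377 after two coherent, independent reports
```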
During the 13th century, several logicians in the Latin medieval tradition showed a special interest in the nature of impossibility, and in the different kinds or ‘degrees’ of impossibility that could be distinguished. This discussion resulted in an analysis of the modal concept with a fineness of grain unprecedented in earlier modal accounts. Of the several divisions of the term ‘impossible’ that were offered, one became particularly relevant in connection with the debate on ars obligatoria and positio impossibilis: the distinction between ‘intelligible’ and ‘unintelligible’ impossibilities. In this article, I consider some 13th-century tracts on obligations that provide an account of the relation between impossibility and intelligibility and discuss the inferential principles that are permissible when we reason from an impossible – but intelligible – premise. I also explore the way in which the 13th-century reflection on this topic survives, in a revised form, in some early 14th-century accounts of positio, namely, those of William of Ockham, Roger Swineshead and Thomas Bradwardine.
Reber and Alcock have recently made a sharp attack on the entire psi literature, and in particular a recent overview by Cardeña of the meta-analyses across various categories of psi. They claim the data are inherently flawed because of their disconnect with our current understanding of the world. As a result, they ignore the data and identify key scientific principles that they argue clash with psi. In this Commentary, I argue that these key principles are difficult to apply in areas where our understanding remains poor, especially quantum mechanics and consciousness. I also explore how the psi data may fit within these two domains.
Fictionalists maintain that possible worlds, numbers or composite objects exist only according to theories which are useful but false. Hale, Divers and Woodward have provided arguments which threaten to show that fictionalists must be prepared to regard the theories in question as contingently, rather than necessarily, false. If warranted, this conclusion would significantly limit the appeal of the fictionalist strategy, rendering it unavailable to anyone antecedently convinced that mathematics and metaphysics concern non-contingent matters. I try to show that their arguments can be resisted by developing and defending a strategy suggested by Rosen, Nolan and Dorr, according to which the fiction-operator is to be analysed in terms of a counterfactual that admits of non-trivial truth-values even when the antecedent is impossible.
Mereological nihilism is the philosophical position that there are no items that have parts. If there are no items with parts, then the only items that exist are partless fundamental particles, such as the true atoms (also called philosophical atoms) theorized to exist by some ancient philosophers, some contemporary physicists, and some contemporary philosophers. With several novel arguments I show that mereological nihilism is the correct theory of reality. I will also discuss strong similarities that mereological nihilism has with empirical results in quantum physics. And I will discuss how mereological nihilism vindicates a few other theories, such as a very specific theory of philosophical atomism, which I will call quantum abstract atomism. I will show that mereological nihilism also is an interpretation of quantum mechanics that avoids the problems of other interpretations, such as the widely known, metaphysically generated quantum paradoxes of quantum physics, which ironically are typically accepted as facts about reality. I will also show why it is very surprising that mereological nihilism is not a widely held theory, and not the premier theory in philosophy.
Covid-19 presents itself as a strange catastrophe. It has neither destroyed the planet nor has it erased humanity… but it has, in many ways, served to upend and alter what was previously considered ‘normal.’ As a result, what is perhaps the most notable characteristic of the Covid catastrophe is the very way it endures. Beyond any notion of catastrophic shock, the Covid catastrophe continues; indeed, it lingers in daily news cycles, changes to working environments and restrictions on travel. It is an enduring presence, from which any determination of its ‘end’ is either nullified by an unending stream of Covid reports, or worse, ignored altogether. On this basis alone, is it even possible to discern an ‘end’ to Covid?
The impossibility results in judgement aggregation show a clash between fair aggregation procedures and rational collective outcomes. In this paper, we are interested in analysing the notion of rational outcome by proposing a proof-theoretical understanding of collective rationality. In particular, we use the analysis of proofs and inferences provided by linear logic in order to define a fine-grained notion of group reasoning that allows for studying collective rationality with respect to a number of logics. We analyse the well-known paradoxes in judgement aggregation and we pinpoint the reasoning steps that trigger the inconsistencies. Moreover, we extend the map of possibility and impossibility results in judgement aggregation by discussing the case of substructural logics. In particular, we show that there exist fragments of linear logic for which general possibility results can be obtained.
Population axiology concerns how to evaluate populations in terms of their moral goodness, that is, how to order populations by the relations “is better than” and “is as good as”. The task has been to find an adequate theory about the moral value of states of affairs where the number of people, the quality of their lives, and their identities may vary. So far, this field has largely ignored issues about uncertainty, and the conditions that have been discussed mostly pertain to the ranking of risk-free outcomes. Most public policy choices, however, are decisions under uncertainty, including policy choices that affect the size of a population. Here, we shall address the question of how to rank population prospects—that is, alternatives that contain uncertainty as to which population they will bring about—by the relations “is better than” and “is as good as”. We start by illustrating how well-known population axiologies can be extended to population prospect axiologies. And we show that new problems arise when extending population axiologies to prospects. In particular, traditional population axiologies lead to prospect-versions of the problems that they are praised for avoiding in the risk-free settings. Finally, we identify an intuitive adequacy condition that, we contend, should be satisfied by any population prospect axiology, and show how given this condition, the impossibility theorems in population axiology can be extended to (non-trivial) impossibility theorems for population prospect axiology.
The aggregation of individual judgments over interrelated propositions is a newly arising field of social choice theory. I introduce several independence conditions on judgment aggregation rules, each of which protects against a specific type of manipulation by agenda setters or voters. I derive impossibility theorems whereby these independence conditions are incompatible with certain minimal requirements. Unlike earlier impossibility results, the main result here holds for any (non-trivial) agenda. However, independence conditions arguably undermine the logical structure of judgment aggregation. I therefore suggest restricting independence to premises, which leads to a generalised premise-based procedure. This procedure is proven to be possible if the premises are logically independent.
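A minimal sketch of the premise-based idea in Python (a simplified illustration, not the generalised procedure developed in the paper):

```python
# Premise-based procedure: vote by majority only on the logically independent
# premises p and q, then let logic settle the conclusion p & q.

judges = [
    {"p": True,  "q": True},
    {"p": True,  "q": False},
    {"p": False, "q": True},
]

def majority(prop):
    return sum(j[prop] for j in judges) > len(judges) / 2

p, q = majority("p"), majority("q")
print({"p": p, "q": q, "p&q": p and q})
# -> {'p': True, 'q': True, 'p&q': True}: consistent by construction, whereas
# propositionwise voting on p & q itself would have rejected the conclusion.
```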
The new field of judgment aggregation aims to merge many individual sets of judgments on logically interconnected propositions into a single collective set of judgments on these propositions. Judgment aggregation has commonly been studied using classical propositional logic, with a limited expressive power and a problematic representation of conditional statements ("if P then Q") as material conditionals. In this methodological paper, I present a simple unified model of judgment aggregation in general logics. I show how many realistic decision problems can be represented in it. This includes decision problems expressed in languages of classical propositional logic, predicate logic (e.g. preference aggregation problems), modal or conditional logics, and some multi-valued or fuzzy logics. I provide a list of simple tools for working with general logics, and I prove impossibility results that generalise earlier theorems.
In this paper we distinguish between various kinds of doxastic theories. One distinction is between informal and formal doxastic theories. AGM-type theories of belief change are of the former kind, while Hintikka’s logic of knowledge and belief is of the latter. Then we distinguish between static theories that study the unchanging beliefs of a certain agent and dynamic theories that investigate not only the constraints that can reasonably be imposed on the doxastic states of a rational agent but also rationality constraints on the changes of doxastic state that may occur in such agents. An additional distinction is that between non-introspective theories and introspective ones. Non-introspective theories investigate agents that have opinions about the external world but no higher-order opinions about their own doxastic states. Standard AGM-type theories as well as the currently existing versions of Segerberg’s dynamic doxastic logic (DDL) are non-introspective. Hintikka-style doxastic logic is of course introspective but it is a static theory. Thus, the challenge remains to devise doxastic theories that are both dynamic and introspective. We outline the semantics for truly introspective dynamic doxastic logic, i.e., a dynamic doxastic logic that allows us to describe agents who have both the ability to form higher-order beliefs and to reflect upon and change their minds about their own (higher-order) beliefs. This extension of DDL demands that we give up the Preservation condition on revision. We make some suggestions as to how such a non-preservative revision operation can be constructed. We also consider extending DDL with conditionals satisfying the Ramsey test and show that Gärdenfors’ well-known impossibility result applies to such a framework. Also in this case, Preservation has to be given up.
Several recent results on the aggregation of judgments over logically connected propositions show that, under certain conditions, dictatorships are the only propositionwise aggregation functions generating fully rational (i.e., complete and consistent) collective judgments. A frequently mentioned route to avoid dictatorships is to allow incomplete collective judgments. We show that this route does not lead very far: we obtain oligarchies rather than dictatorships if instead of full rationality we merely require that collective judgments be deductively closed, arguably a minimal condition of rationality, compatible even with empty judgment sets. We derive several characterizations of oligarchies and provide illustrative applications to Arrowian preference aggregation and Kasher and Rubinstein's group identification problem.
In a single framework, I address the question of the informational basis for evaluating social states. I particularly focus on information about individual welfare, individual preferences and individual (moral) judgments, but the model is also open to any other informational input deemed relevant, e.g. sources of welfare and motivations behind preferences. In addition to proving some possibility and impossibility results, I discuss objections against using information about only one aspect (e.g. using only preference information). These objections suggest a multi-aspect informational basis for aggregation. However, the multi-aspect approach faces an impossibility result created by a lack of inter-aspect comparability. The impossibility could be overcome by measuring information on non-cardinal scales.
Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a reinterpretation of impossibility results, and directions for future research.
This paper develops a semantic solution to the puzzle of Free Choice permission. The paper begins with a battery of impossibility results showing that Free Choice is in tension with a variety of classical principles, including Disjunction Introduction and the Law of Excluded Middle. Most interestingly, Free Choice appears incompatible with a principle concerning the behavior of Free Choice under negation, Double Prohibition, which says that Mary can’t have soup or salad implies Mary can’t have soup and Mary can’t have salad. Alonso-Ovalle 2006 and others have appealed to Double Prohibition to motivate pragmatic accounts of Free Choice. Aher 2012, Aloni 2018, and others have developed semantic accounts of Free Choice that also explain Double Prohibition.

This paper offers a new semantic analysis of Free Choice designed to handle the full range of impossibility results involved in Free Choice. The paper develops the hypothesis that Free Choice is a homogeneity effect. The claim possibly A or B is defined only when A and B are homogeneous with respect to their modal status, either both possible or both impossible. Paired with a notion of entailment that is sensitive to definedness conditions, this theory validates Free Choice while retaining a wide variety of classical principles except for the transitivity of entailment. The homogeneity hypothesis is implemented in two different ways, homogeneous alternative semantics and homogeneous dynamic semantics, with interestingly different consequences.
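Schematically, with ◇ read as "may"/"it is permitted that" (my notation, not necessarily the paper's):

```latex
\text{Free Choice:}\qquad \Diamond(A \lor B) \models \Diamond A \land \Diamond B
\qquad\qquad
\text{Double Prohibition:}\qquad \neg\Diamond(A \lor B) \models \neg\Diamond A \land \neg\Diamond B
```

The tension is easy to see in rough outline: Disjunction Introduction gives ◇A ⊨ ◇(A ∨ B), so with Free Choice and transitivity of entailment, ◇A would entail ◇B for arbitrary B, which is absurd; giving up transitivity is precisely the classical principle the homogeneity account sacrifices.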
A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed to Kleinberg et al. (2016), Chouldechova (2017), and Corbett-Davies et al. (2017), which show that classification parity and calibration are often incompatible. This paper aims to argue that classification parity, calibration, and a newer, interesting measure called counterfactual fairness are unsatisfactory measures of fairness, offer a general diagnosis of the failure of these measures, and sketch an alternative approach to understanding fairness in machine learning.
The Precautionary Principle (PP) is an influential principle of risk management. It has been widely introduced into environmental legislation, and it plays an important role in most international environmental agreements. Yet, there is little consensus on precisely how to understand and formulate the principle. In this paper I prove some impossibility results for two plausible formulations of the PP as a decision-rule. These results illustrate the difficulty in making the PP consistent with the acceptance of any trade-offs between catastrophic risks and more ordinary goods.
This short paper has two parts. First, we prove a generalisation of Aumann’s surprising impossibility result in the context of rational decision making. We then move, in the second part, to discuss the interpretational meaning of some formal setups of epistemic models, and we do so by means of presenting an interesting puzzle in epistemic logic. The aim is to highlight certain problematic aspects of these epistemic systems concerning first/third-person asymmetry which underlies both parts of the story. This asymmetry, we argue, reveals certain limits of what epistemic models can be.
In normative political theory, it is widely accepted that democracy cannot be reduced to voting alone, but that it requires deliberation. In formal social choice theory, by contrast, the study of democracy has focused primarily on the aggregation of individual opinions into collective decisions, typically through voting. While the literature on deliberation has an optimistic flavour, the literature on social choice is more mixed. It is centred around several paradoxes and impossibility results identifying conflicts between different intuitively plausible desiderata. In recent years, there has been a growing dialogue between the two literatures. This paper discusses the connections between them. Important insights are that (i) deliberation can complement aggregation and open up an escape route from some of its negative results; and (ii) the formal models of social choice theory can shed light on some aspects of deliberation, such as the nature of deliberation-induced opinion change.
The new field of judgment aggregation aims to find collective judgments on logically interconnected propositions. Recent impossibility results establish limitations on the possibility to vote independently on the propositions. I show that, fortunately, the impossibility results do not apply to a wide class of realistic agendas once propositions like “if a then b” are adequately modelled, namely as subjunctive implications rather than material implications. For these agendas, consistent and complete collective judgments can be reached through appropriate quota rules (which decide propositions using acceptance thresholds). I characterise the class of these quota rules. I also prove an abstract result that characterises consistent aggregation for arbitrary agendas in a general logic.
In the face of an impossibility result, some assumption must be relaxed. The Mere Addition Paradox is an impossibility result in population ethics. Here, I explore substantially weakening the decision-theoretic assumptions involved. The central finding is that the Mere Addition Paradox persists even in the general framework of choice functions when we assume Path Independence as a minimal decision-theoretic constraint. Choice functions can be thought of either as generalizing the standard axiological assumption of a binary “betterness” relation, or as providing a general framework for a normative (rather than axiological) theory of population ethics. Path Independence, a weaker assumption than typically (implicitly) made in population ethics, expresses the idea that, in making a choice from a set of alternatives, the order in which options are assessed or considered is ethically arbitrary and should not affect the final choice. Since the result establishes a conflict between the relevant ethical principles and even very weak decision-theoretic principles, we have more reason to doubt the ethical principles.
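In its standard Plott-style formulation (the paper's exact statement may differ), Path Independence requires of a choice function C that:

```latex
C(A \cup B) \;=\; C\bigl(C(A) \cup C(B)\bigr) \qquad \text{for all menus } A, B
```

That is, first narrowing each sub-menu down to its chosen options and then choosing among the survivors must yield the same result as choosing from the whole menu at once, so the order in which alternatives are considered cannot affect the outcome.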
Epistemic rationality is typically taken to be immodest at least in this sense: a rational epistemic state should always take itself to be doing at least as well, epistemically and by its own lights, as any alternative epistemic state. If epistemic states are probability functions and their alternatives are other probability functions defined over the same collection of propositions, we can capture the relevant sense of immodesty by claiming that epistemic utility functions are (strictly) proper. In this paper I examine what happens if we allow for the alternatives to an epistemic state to include probability functions with different domains. I first prove an impossibility result: on minimal assumptions, I show that there is no way of vindicating strong immodesty principles to the effect that any probability function should take itself to be doing at least as well as any alternative probability function, regardless of its domain. I then consider alternative, weaker generalizations of the traditional immodesty principle and prove some characterization results for some classes of epistemic utility functions satisfying each of the relevant principles.
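For reference, the (strict) propriety constraint mentioned here is standardly stated as follows, where U(q, w) is the epistemic utility of holding credence function q at world w (a schematic form; the paper's official statement may differ in detail):

```latex
\sum_{w} p(w)\,U(p, w) \;>\; \sum_{w} p(w)\,U(q, w)
\qquad \text{for all probability functions } q \neq p
```

Each probability function expects itself to do strictly better, epistemically, than any rival defined over the same domain; the paper's question is what survives of this when the rivals' domains are allowed to vary.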
Possible worlds models of belief have difficulties accounting for unawareness, the inability to entertain (and hence believe) certain propositions. Accommodating unawareness is important for adequately modelling epistemic states, and representing the informational content to which agents have in principle access given their explicit beliefs. In this paper, I develop a model of explicit belief, awareness, and informational content, along with a sound and complete axiomatisation. I furthermore defend the model against the seminal impossibility result of Dekel, Lipman and Rustichini, according to which three intuitive conditions preclude non-trivial unawareness on any possible worlds model of belief.
Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this is impossible, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
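A small simulation makes the base-rate point vivid (the construction and all numbers are illustrative assumptions of mine, not any cited author's): a score that is perfectly calibrated in both groups and thresholded identically still produces unequal false-positive rates whenever base rates differ.

```python
import random
random.seed(0)

def simulate_fpr(base_rate, n=100_000):
    """False-positive rate of a calibrated risk score thresholded at 0.5."""
    # Each person gets a risk score of 0.2 or 0.8; the mix matches the group's
    # base rate, and outcomes are drawn so that P(outcome | score) == score,
    # i.e. the score is calibrated within the group by construction.
    high_share = (base_rate - 0.2) / 0.6
    false_pos = negatives = 0
    for _ in range(n):
        score = 0.8 if random.random() < high_share else 0.2
        outcome = random.random() < score   # calibration by construction
        predicted = score >= 0.5            # one threshold for everyone
        if not outcome:
            negatives += 1
            false_pos += predicted
    return false_pos / negatives

print("FPR at base rate 0.3:", round(simulate_fpr(0.3), 3))  # ~0.05
print("FPR at base rate 0.6:", round(simulate_fpr(0.6), 3))  # ~0.33
# Same calibrated score, same threshold, very different false-positive rates:
# equalizing both criteria is impossible unless the base rates coincide.
```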
The aim of this article is to introduce the theory of judgment aggregation, a growing interdisciplinary research area. The theory addresses the following question: How can a group of individuals make consistent collective judgments on a given set of propositions on the basis of the group members' individual judgments on them? I begin by explaining the observation that initially sparked the interest in judgment aggregation, the so-called "doctrinal" and "discursive paradoxes". I then introduce the basic formal model of judgment aggregation, which allows me to present some illustrative variants of a generic impossibility result. I subsequently turn to the question of how this impossibility result can be avoided, going through several possible escape routes. Finally, I relate the theory of judgment aggregation to other branches of aggregation theory. Rather than offering a comprehensive survey of the theory of judgment aggregation, I hope to introduce the theory in a succinct and pedagogical way, providing an illustrative rather than exhaustive coverage of some of its key ideas and results.
Fine (2017) proposes a new logic of vagueness, CL, that promises to provide both a solution to the sorites paradox and a way to avoid the impossibility result from Fine (2008). The present paper presents a challenge to his new theory of vagueness. I argue that the possibility theorem stated in Fine (2017), as well as his solution to the sorites paradox, fails in certain reasonable extensions of the language of CL. More specifically, I show that if we extend the language with any negation operator that obeys reductio ad absurdum, we can prove a new impossibility result that makes the kind of indeterminacy that Fine takes to be a hallmark of vagueness impossible. I show that such negation operators can be conservatively added to CL and examine some of the philosophical consequences of this result. Moreover, I demonstrate that we can define a particular negation operator that behaves exactly like intuitionistic negation in a natural and unobjectionable propositionally quantified extension of CL. Since intuitionistic negation obeys reductio, the new impossibility result holds in this propositionally quantified extension of CL. In addition, the sorites paradox resurfaces for the new negation.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
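To illustrate the contrast between the two kinds of averaging (a three-world toy example of my own, not taken from the paper): geometric averaging commutes with conditionalization, while arithmetic averaging generally does not.

```python
# Compare pooling-then-updating with updating-then-pooling for two credence
# functions, under geometric and arithmetic (linear) averaging.

worlds = ("w1", "w2", "w3")
alice = {"w1": 0.6, "w2": 0.3, "w3": 0.1}
bob   = {"w1": 0.2, "w2": 0.2, "w3": 0.6}
evidence = {"w1", "w2"}  # the agents learn that w3 is ruled out

def normalize(p):
    total = sum(p.values())
    return {w: v / total for w, v in p.items()}

def conditionalize(p, e):
    return normalize({w: (v if w in e else 0.0) for w, v in p.items()})

def geometric(p, q):
    return normalize({w: (p[w] * q[w]) ** 0.5 for w in worlds})

def linear(p, q):
    return {w: 0.5 * p[w] + 0.5 * q[w] for w in worlds}

for pool in (geometric, linear):
    pool_then_update = conditionalize(pool(alice, bob), evidence)
    update_then_pool = pool(conditionalize(alice, evidence),
                            conditionalize(bob, evidence))
    agrees = all(abs(pool_then_update[w] - update_then_pool[w]) < 1e-9
                 for w in worlds)
    print(f"{pool.__name__} commutes with conditionalization: {agrees}")
# geometric commutes with conditionalization: True
# linear commutes with conditionalization: False
```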
It has been claimed that deliberation is capable of overcoming social choice theory impossibility results, by bringing about single-peakedness. Our aim is to better understand the relationship between single-peakedness and collective justifications of preferences.
Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and that, for the decisions they could explain, people would not understand some of those explanations.
The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
The naive theory of properties states that for every condition there is a property instantiated by exactly the things which satisfy that condition. The naive theory of properties is inconsistent in classical logic, but there are many ways to obtain consistent naive theories of properties in nonclassical logics. The naive theory of classes adds to the naive theory of properties an extensionality rule or axiom, which states roughly that if two classes have exactly the same members, they are identical. In this paper we examine the prospects for obtaining a satisfactory naive theory of classes. We start from a result by Ross Brady, which demonstrates the consistency of something resembling a naive theory of classes. We generalize Brady’s result somewhat and extend it to a recent system developed by Andrew Bacon. All of the theories we prove consistent contain an extensionality rule or axiom. But we argue that given the background logics, the relevant extensionality principles are too weak. For example, in some of these theories, there are universal classes which are not declared coextensive. We elucidate some very modest demands on extensionality, designed to rule out this kind of pathology. But we close by proving that even these modest demands cannot be jointly satisfied. In light of this new impossibility result, the prospects for a naive theory of classes are bleak.
The article proceeds upon the assumption that the beliefs and degrees of belief of rational agents satisfy a number of constraints, including: consistency and deductive closure for belief sets, conformity to the axioms of probability for degrees of belief, and the Lockean Thesis concerning the relationship between belief and degree of belief. Assuming that the beliefs and degrees of belief of both individuals and collectives satisfy the preceding three constraints, I discuss what further constraints may be imposed on the aggregation of beliefs and degrees of belief. Some possibility and impossibility results are presented. The possibility results suggest that the three proposed rationality constraints are compatible with reasonable aggregation procedures for belief and degree of belief.
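The Lockean Thesis referred to here is standardly put schematically as follows, for a threshold t (typically 1/2 < t ≤ 1):

```latex
B(p) \;\Longleftrightarrow\; \mathrm{cr}(p) \geq t
```

An agent (or collective) counts as believing p just in case its degree of belief in p meets the threshold; the article's question is which aggregation procedures preserve this together with consistency and deductive closure.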
This work contributes to the theory of judgement aggregation by discussing a number of significant non-classical logics. After adapting the standard framework of judgement aggregation to cope with non-classical logics, we discuss in particular results for the case of Intuitionistic Logic, the Lambek calculus, Linear Logic and Relevant Logics. The motivation for studying judgement aggregation in non-classical logics is that they offer a number of modelling choices to represent agents’ reasoning in aggregation problems. By studying judgement aggregation in logics that are weaker than classical logic, we investigate whether some well-known impossibility results, which were tailored for classical logic, still apply to those weak systems.
In contemporary mathematics, a Colombeau algebra of Colombeau generalized functions is an algebra of a certain kind containing the space of Schwartz distributions. While in classical distribution theory a general multiplication of distributions is not possible, Colombeau algebras provide a rigorous framework for this. Remark 1.1.1. Such a multiplication of distributions was long mistakenly believed to be impossible because of Schwartz’ impossibility result, which basically states that there cannot be a differential algebra containing the space of distributions and preserving the product of continuous functions. However, if one only wants to preserve the product of smooth functions instead, such a construction becomes possible, as demonstrated first by J. F. Colombeau [1], [2]. As a mathematical tool, Colombeau algebras can be said to combine a treatment of singularities, differentiation and nonlinear operations in one framework, lifting the limitations of distribution theory. These algebras have found numerous applications in the fields of partial differential equations, geophysics, microlocal analysis and general relativity so far.
This paper explores the interaction of well-motivated (if controversial) principles governing the probabilities of conditionals with accounts of what it is for a sentence to be indefinite. The conclusion can be played in a variety of ways. It could be regarded as a new reason to be suspicious of the intuitive data about the probability of conditionals; or, holding fixed the data, it could be used to gain traction on the philosophical analysis of a contentious notion—indefiniteness. The paper outlines the various options, and shows that ‘rejectionist’ theories of indefiniteness are incompatible with the results. Rejectionist theories include popular accounts such as supervaluationism, non-classical truth-value gap theories, and accounts of indeterminacy that centre on rejecting the law of excluded middle. An appendix compares the results obtained here with the ‘impossibility’ results descending from Lewis (1976).