The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound, and complete mathematical model of a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective, and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in the JAIR paper (Veness et al., 2011). This practical breakthrough has resulted in some impressive applications, finally muting the earlier critique that UAI is only a theory. For the first time, without being given any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being told the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
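For readers who want the formal core behind this informal overview, the AIXI action-selection rule is commonly written as follows (a standard textbook form, lightly simplified here, rather than this article's own notation):

$$a_t \;=\; \arg\max_{a_t}\sum_{o_t r_t}\;\cdots\;\max_{a_m}\sum_{o_m r_m}\,\bigl[r_t+\cdots+r_m\bigr]\sum_{q\,:\,U(q,\,a_{1:m})\,=\,o_{1:m}r_{1:m}} 2^{-\ell(q)},$$

where $U$ is a universal (monotone) Turing machine, $q$ ranges over candidate environment programs, $\ell(q)$ is the length of $q$, and $m$ is the planning horizon: the agent maximizes expected future reward under a Solomonoff-style mixture that weights each consistent environment program by $2^{-\ell(q)}$.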
Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values. The main technical problem studied in this paper is the following: given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among other things, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit, and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements for a prior probability and show that probabilities satisfying all our criteria exist. We also give explicit constructions and several general characterizations of probabilities that satisfy some or all of the criteria, and various (counter)examples. We also derive necessary and sufficient conditions for extending beliefs about finitely many sentences to suitable probabilities over all sentences, and in particular least dogmatic or least biased ones. We conclude with a brief outlook on how the developed theory might be used and approximated in autonomous reasoning agents. Our theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
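To illustrate what requirements such as (v) and (vi) amount to formally (this is the standard Gaifman-style condition from inductive logic, given here for orientation, not necessarily the authors' exact formulation), the probability of a quantified sentence can be tied to its instances via

$$P\bigl(\exists x\,\phi(x)\bigr) \;=\; \sup_{n}\, P\bigl(\phi(t_1)\vee\cdots\vee\phi(t_n)\bigr),$$

and, dually, a universal hypothesis can only be confirmed by its instances if the prior is not dogmatic about it, i.e. $P(\forall x\,\phi(x)) > 0$, so that Bayesian conditioning on observed instances $\phi(t_1),\ldots,\phi(t_n)$ can drive its posterior probability upward.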
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
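As a rough sketch of the election idea (our own toy illustration, not the paper's formal definitions), one member of such a family of comparators might tally weighted votes from environments as follows; `expected_reward`, the environment names, and the weights are placeholders:

```python
from typing import Callable, Dict

def compare_agents(agent_a: str,
                   agent_b: str,
                   env_weights: Dict[str, float],
                   expected_reward: Callable[[str, str], float]) -> int:
    """Let each environment 'vote' for the agent that earns more expected reward in it.

    Returns +1 if agent_a wins the weighted election, -1 if agent_b wins, 0 on a tie.
    """
    tally = 0.0
    for env, weight in env_weights.items():
        ra = expected_reward(agent_a, env)
        rb = expected_reward(agent_b, env)
        if ra > rb:        # this environment votes for agent_a
            tally += weight
        elif rb > ra:      # this environment votes for agent_b
            tally -= weight
        # equal rewards: the environment abstains
    return (tally > 0) - (tally < 0)

# Toy usage with made-up expected rewards over three equally weighted environments.
rewards = {("A", "maze"): 0.9, ("B", "maze"): 0.4,
           ("A", "bandit"): 0.6, ("B", "bandit"): 0.5,
           ("A", "pong"): 0.2, ("B", "pong"): 0.7}
print(compare_agents("A", "B",
                     {"maze": 1.0, "bandit": 1.0, "pong": 1.0},
                     lambda agent, env: rewards[(agent, env)]))  # -> 1 (agent A wins 2 of 3 votes)
```

Varying how environments are weighted, and how ties and abstentions are handled, yields different comparators within the abstract family the paper studies.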
Across two studies, the hypotheses were tested that stressful situations affect both leaders' ethical acting and their recognition of ethical dilemmas. In the studies, decision makers recruited from 3 sites of a Swedish multinational civil engineering company provided personal data on stressful situations, made ethical decisions, and answered stress-outcome questions. Stressful situations were observed to have a greater impact on ethical acting than on the recognition of ethical dilemmas. This was particularly true for situations involving punishment and lack of rewards. The results are important for the Corporate Social Responsibility (CSR) of an organization, especially with regard to the analysis of the stressors influencing managerial work and its implications for ethical behavior.
I claim that there is pro tanto moral reason for parents to not raise their child on a vegan diet because a vegan diet bears a risk of harm to both the physical and the social well-being of children. After giving the empirical evidence from nutrition science and sociology that supports this claim, I turn to the question of how vegan parents should take this moral reason into account. Since many different moral frameworks have been used to argue for veganism, this is a complex question. I suggest that, on some of these moral frameworks, the moral reason that some parents have for not raising their child on a vegan diet on account of this risk is plausibly as strong as the reason they have for raising their child on a vegan diet. In other words, the moral reason I outline is weighty enough to justify some vegan parents in plausibly finding it permissible to not raise their child on a vegan diet.
Consider the following three claims. (i) There are no truths of the form ‘p and ~p’. (ii) No one holds a belief of the form ‘p and ~p’. (iii) No one holds any pairs of beliefs of the form {p, ~p}. Irad Kimhi has recently argued, in effect, that each of these claims holds and holds with metaphysical necessity. Furthermore, he maintains that they are ultimately not distinct claims at all, but the same claim formulated in different ways. I find his argument suggestive, if not entirely transparent. I do think there is at least an important kernel of truth even in (iii), and that (i) ultimately explains what’s right about the other two. Consciousness of an impossibility makes belief in the obtaining of the corresponding state of affairs an impossibility. Interestingly, an appreciation of this fact brings into view a novel conception of inference, according to which it consists in the consciousness of necessity. This essay outlines and defends this position. A central element of the defense is that it reveals how reasoners satisfy what Paul Boghossian calls the Taking Condition and do so without engendering regress.
This paper shows that several live philosophical and scientific hypotheses – including the holographic principle and multiverse theory in quantum physics, and eternalism and mind-body dualism in philosophy – jointly imply an audacious new theory of free will. This new theory, "Libertarian Compatibilism", holds that the physical world is an eternally existing array of two-dimensional information – a vast number of possible pasts, presents, and futures – and the mind a nonphysical entity or set of properties that "reads" that physical information off to subjective conscious awareness (in much the same way that a song written on an ordinary compact disc is only played when read by an outside medium, i.e. a CD player). According to this theory, every possible physical “timeline” in the multiverse may be fully physically deterministic or physically-causally closed, but each person’s consciousness is still entirely free to choose, ex nihilo, outside of the physical order, which physically-closed timeline is experienced by conscious observers. Although Libertarian Compatibilism is admittedly fantastic, I show not only that it follows from several live scientific and philosophical hypotheses, but also that it (A) is a far more explanatorily powerful model of quantum mechanics than more traditional interpretations (e.g. the Copenhagen, Everett, and Bohmian interpretations), (B) makes determinate, testable empirical predictions in quantum theory, and finally, (C) predicts and explains the very existence of a number of philosophical debates and positions in the philosophy of mind, time, personal identity, and free will. First, I show that whereas traditional interpretations of quantum mechanics are all philosophically problematic and roughly as ontologically “extravagant” as Libertarian Compatibilism – in that they all posit “unseen” processes – Libertarian Compatibilism is nearly identical in structure to the only working simulation that human beings have ever constructed capable of reproducing (and so explaining) every general feature of quantum mechanics we perceive: namely, massively multiplayer online role-playing videogames (or MMORPGs). Although I am not the first to suggest that our world is akin to a computer simulation, I show that existing MMORPGs (online simulations we have already created) actually reproduce every general feature of quantum mechanics within their simulated-world reference-frames. Second, I show that existing MMORPGs also replicate (and so explain) many philosophical problems we face in the philosophy of mind, time, personal identity, and free will – all while conforming to the Libertarian Compatibilist model of reality. I conclude that, as fantastic and metaphysically extravagant as Libertarian Compatibilism may initially seem, it may well be true. It explains a number of features of our reality that no other physical or metaphysical theory does.
The effect of violent video games is among the most widely discussed topics in media studies, and for good reason. These games are immensely popular, but many seem morally objectionable. Critics attack them for a number of reasons ranging from their capacity to teach players weapons skills to their ability to directly cause violent actions. This essay shows that many of these criticisms are misguided. Theoretical and empirical arguments against violent video games often suffer from a number of significant shortcomings that make them ineffective. This essay argues that video games are defensible from the perspective of Kantian, Aristotelian, and utilitarian moral theories.
Theorists have long debated whether John Rawls’ conception of justice as fairness can be extended to nonideal (i.e. unjust) social and political conditions, and if so, what the proper way of extending it is. This paper argues that in order to properly extend justice as fairness to nonideal conditions, Rawls’ most famous innovation – the original position – must be reconceived in the form of a “nonideal original position.” I begin by providing a new analysis of the ideal/nonideal theory distinction within Rawls’ theoretical framework. I then systematically construct a nonideal original position, showing that although its parties must have Rawls’ principles of ideal justice and priority relations as background aims, the parties should be entirely free to weigh those aims against whatever burdens and benefits they might face under nonideal conditions. Next, I show that the parties ought to aim to secure for themselves a special class of nonideal primary goods: all-purpose goods similar to Rawls’ original primary goods, but which in this case are all-purpose goods individuals might use to (A) promote Rawlsian ideals under nonideal conditions, (B) weigh Rawls’ principles of ideal justice and priority relations against whatever burdens and benefits they might face under nonideal conditions, and (C) effectively pursue their most favored weighting thereof. I then defend a provisional list of nonideal primary goods which include opportunities to participate effectively in equitable and inclusive grassroots reform movements guided by a series of substantive aims. Finally, I briefly speculate on how the parties to the nonideal original position might deliberate to principles of nonideal justice for distributing nonideal primary goods, suggesting that those goods should be distributed in proportion to unjust disadvantage.
We argue that honesty in assertion requires non-empirical knowledge that what one asserts is what one believes. Our argument proceeds from the thought that to assert honestly, one must follow and not merely conform to the norm ‘Assert that p only if you believe that p’. Furthermore, careful consideration of cases shows that the sort of doxastic self-knowledge required for following this norm cannot be acquired on the basis of observation, inference, or any other form of detection of one’s own doxastic states. It is, as we put it, transparent rather than empirical self-knowledge.
According to one interpretation of Aristotle’s famous thesis, to say that action is the conclusion of practical reasoning is to say that action is itself a judgment about what to do. A central motivation for the thesis is that it suggests a path for understanding the non-observational character of practical knowledge. If actions are judgments, then whatever explains an agent’s knowledge of the relevant judgment can explain her knowledge of the action. I call the approach to action that accepts Aristotle’s thesis so understood Normativism. There are many reasons to doubt Normativism. My aim in this paper is to defend Normativism from a pair of arguments that purport to show that a normative judgment could not constitute an event in material reality and also the knowledge of such a happening. Both highlight a putative mismatch between the natures of, on the one hand, an agent’s knowledge of her normative judgment and, on the other, her knowledge of her own action. According to these objections, knowledge of action includes (a) perceptual knowledge and (b) knowledge of what one has already done. But knowledge of a normative judgment includes neither. Hence knowledge of action cannot simply be knowledge of a normative judgment.
This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. It then illustrates the trilemma using a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I demonstrate how mental time-travel psychology leads us to resolve the semantic trilemma through a six-step process of interpersonal negotiation and renegotiation, and then conclude by showing how comparative advantages in processing power would plausibly cause AI to use similar processes to solve the semantic trilemma more reliably than we do, leading AI to make better moral-semantic choices than humans do by our very own lights.
This chapter derives and refines a novel normative moral theory and descriptive theory of moral psychology--Rightness as Fairness--from the theory of prudence defended in Chapter 2. It briefly summarizes Chapter 2’s finding that prudent agents typically internalize ‘moral risk-aversion’. It then outlines how this prudential psychology leads prudent agents to want to know how to act in ways they will not regret in morally salient cases, as well as to regard moral actions as the only types of actions that satisfy this prudential interest. It then uses these findings to defend a new derivation of my (2016) theory of morality, Rightness as Fairness, showing how the derivation successfully defends Rightness as Fairness against a variety of objections. The chapter also details how this book’s theory helps to substantiate the claim that Rightness as Fairness unifies a variety of competing moral frameworks: deontology, consequentialism, contractualism, and virtue ethics. Finally, the chapter shows how Chapter 2’s theory of prudence entails some revisions to Rightness as Fairness, including the adoption of a series of Rawlsian original positions to settle moral and social-political issues under ideal and nonideal circumstances—thus entailing a unified normative and descriptive psychological framework for prudence, morality, and justice.
Most agree that believing a proposition normally or ideally results in believing that one believes it, at least if one considers the question of whether one believes it. I defend a much stronger thesis. It is impossible to believe without knowledge of one's belief. I argue, roughly, as follows. Believing that p entails that one is able to honestly assert that p. But anyone who is able to honestly assert that p is also able to just say – i.e., authoritatively, yet not on the basis of evidence – that she believes that p. And anyone who is able to just say that she believes that p is able to act in light of the fact that she holds that belief. This ability to act, in turn, constitutes knowledge of the psychological fact. However, without a broader theory of belief to help us make sense of this result, this conclusion will be hard to accept. Why should being in a particular mental state by itself necessitate an awareness of being in that state? I sketch a theory that helps to answer this question: believing is a matter of viewing a proposition as what one ought to believe. I show how this theory explains the thesis that to believe is to know that you believe.
Are the circumstances in which moral testimony serves as evidence that our judgement-forming processes are unreliable the same circumstances in which mundane testimony serves as evidence that our mundane judgement-forming processes are unreliable? In answering this question, we distinguish two possible roles for testimony: (i) providing a legitimate basis for a judgement, (ii) providing (‘higher-order’) evidence that a judgement-forming process is unreliable. We explore the possibilities for a view according to which moral testimony does not, in contrast to mundane testimony, play role (i), but can play role (ii). We argue that standard motivations for rejecting this hybrid position are unpersuasive but suggest that a more compelling reason might be found in considering the social nature of morality.
In a widely discussed forthcoming article, “What you can't expect when you're expecting,” L. A. Paul challenges culturally and philosophically traditional views about how to rationally make major life-decisions, most specifically the decision of whether to have children. The present paper argues that because major life-decisions are transformative, the only rational way to approach them is to become resilient people: people who do not “over-plan” their lives or expect their lives to play out “according to plan”—people who understand that beyond a certain limit, life cannot be rationally planned and must be accepted as it comes. I show that this focus on resilience—on self-mastery—stands in direct opposition to culturally dominant attitudes toward decision-making, which focus not on self-mastery but on control and mastery over one's surroundings. In short, I argue that if Paul's general point about transformative experiences is correct, it follows that we rationally ought to adopt a very different approach to life choices, self-development, and the moral education of our children than currently-dominant cultural norms and practices suggest.
This study examined correlations between moral value judgments on a 17-item Moral Intuition Survey (MIS), and participant scores on the Short-D3 “Dark Triad” Personality Inventory—a measure of three related “dark and socially destructive” personality traits: Machiavellianism, Narcissism, and Psychopathy. Five hundred sixty-seven participants (302 male, 257 female, 2 transgendered; median age 28) were recruited online through Amazon Mechanical Turk and Yale Experiment Month web advertisements. Different responses to MIS items were initially hypothesized to be “conservative” or “liberal” in line with traditional public divides. Our demographic data confirmed all of these hypothesized categorizations. We then tested two broad, exploratory hypotheses: (H1) the hypothesis that there would be “many” significant correlations between conservative MIS judgments and the Dark Triad, and (H2) the hypothesis that there would be no significant correlations between liberal MIS judgments and Machiavellianism or Psychopathy, but “some” significant correlations between liberal MIS judgments and Narcissism. Because our hypotheses were exploratory and we ran a large number of statistical tests (62 total), we utilized a Bonferroni Correction to set a very high threshold for significance (p = .0008). Our results broadly supported our two hypotheses. We found eleven significant correlations between conservative MIS judgments and the Dark Triad—all at a significance level of p < .00001—but no significant correlations between the Dark Triad and liberal MIS judgments. We believe that these results raise provocative moral questions about the personality bases of moral judgments. In particular, we propose that because the Short-D3 measures three “dark and antisocial” personality traits, our results raise some prima facie worries about the moral justification of some conservative moral judgments.
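For readers unfamiliar with the correction, the reported threshold is simply the family-wise significance level divided by the number of tests (our reconstruction of the arithmetic behind the figure the abstract reports):

$$\alpha_{\text{per test}} \;=\; \frac{\alpha_{\text{family-wise}}}{\text{number of tests}} \;=\; \frac{0.05}{62} \;\approx\; 0.0008.$$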
Chapter 1 of this book argued that moral philosophy should be based on seven principles of theory selection adapted from the sciences. Chapter 2 argued that these principles support basing normative moral philosophy on a particular problem of diachronic instrumental rationality: the ‘problem of possible future selves.’ Chapter 3 argued that a new moral principle, the Categorical-Instrumental Imperative, is the rational solution to this problem. Chapter 4 argued that the Categorical-Instrumental Imperative has three equivalent formulations akin to but superior to Kant’s formulations of the Categorical Imperative. Chapter 5 argued that my principle’s three formulations make it rational to adopt a Moral Original Position to derive moral principles. The present chapter derives Four Principles of Fairness from the Moral Original Position—principles of coercion minimization, mutual assistance, fair negotiation, and virtue—and unifies them into a single principle of rightness: Rightness as Fairness. Finally, this chapter argues that Rightness as Fairness entails a novel approach to applied ethics called ‘principled fair negotiation’, illustrating how the theory provides a plausible new framework for addressing applied cases including lying, suicide, trolleys, torture, distribution of scarce resources, poverty, and the ethical treatment of animals.
In their thought-provoking paper, Legg and Hutter consider a certain abstraction of an intelligent agent, and define a universal intelligence measure, which assigns every such agent a numerical intelligence rating. We will briefly summarize Legg and Hutter’s paper, and then give a tongue-in-cheek argument that if one’s goal is to become more intelligent by cultivating music appreciation, then it is better to use classical music (such as Bach, Mozart, and Beethoven) than to use more recent pop music. The same argument could be adapted to other media: books beat films, card games beat first-person shooters, parables beat dissertations, etc. We leave it to the reader to decide whether this argument tells us something about classical music, something about Legg-Hutter intelligence, or something about both.
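The universal intelligence measure being summarized is usually written as follows (the standard Legg-Hutter formulation, reproduced here for orientation rather than taken from this paper):

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi},$$

where $E$ is the class of computable, reward-bounded environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ is the expected total reward agent $\pi$ obtains in $\mu$; simpler environments thus carry more weight in the agent's rating.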
We argue that many intuitions do not have conscious propositional contents. In particular, many of the intuitions had in response to philosophical thought experiments, like Gettier cases, do not have such contents. They are more like hunches, urgings, murky feelings, and twinges. Our view thus goes against the received view of intuitions in philosophy, which we call Mainstream Propositionalism. Our positive view is that many thought-experimental intuitions are conscious, spontaneous, non-theoretical, non-propositional psychological states that often motivate belief revision, but they require interpretation, in light of background beliefs, before a subject can form a propositional judgment as a consequence of them. We call our view Interpretationalism. We argue (i) that Interpretationalism avoids the problems that beset Mainstream Propositionalism and (ii) that our view meshes well with empirical results in contemporary cognitive science.
Many critics, Descartes himself included, have seen Hobbes as uncharitable or even incoherent in his Objections to the Meditations on First Philosophy. I argue that when understood within the wider context of his views of the late 1630s and early 1640s, Hobbes's Objections are coherent and reflect his goal of providing an epistemology consistent with a mechanical philosophy. I demonstrate the importance of this epistemology for understanding his Fourth Objection concerning the nature of the wax and contend that Hobbes's brief claims in that Objection are best understood as a summary of the mechanism for scientific knowledge found in his broader work. Far from displaying his confusion, Hobbes's Fourth Objection in fact pinpoints a key weakness of Descartes's faculty psychology: its unintelligibility within a mechanical philosophy.
Several recent commentators argue that Thomas Hobbes’s account of the nature of science is conventionalist. Engaging in scientific practice on a conventionalist account is more a matter of making sure one connects one term to another properly rather than checking one’s claims, e.g., by experiment. In this paper, I argue that the conventionalist interpretation of Hobbesian science accords neither with Hobbes’s theoretical account in De corpore and Leviathan nor with Hobbes’s scientific practice in De homine and elsewhere. Closely tied to the conventionalist interpretation is the deductivist interpretation, on which it is claimed that Hobbes believed sciences such as optics are deduced from geometry. I argue that Hobbesian science places simplest conceptions as the foundation for geometry and the sciences in which we use geometry, which provides strong evidence against both the conventionalist and deductivist interpretations.
In a recent study appearing in Neuroethics, I reported observing 11 significant correlations between the “Dark Triad” personality traits – Machiavellianism, Narcissism, and Psychopathy – and “conservative” judgments on a 17-item Moral Intuition Survey. Surprisingly, I observed no significant correlations between the Dark Triad and “liberal” judgments. In order to determine whether these results were an artifact of the particular issues I selected, I ran a follow-up study testing the Dark Triad against conservative and liberal judgments on 15 additional moral issues. The new issues examined include illegal immigration, abortion, the teaching of “intelligent design” in public schools, the use of waterboarding and other “enhanced interrogation techniques” in the war on terrorism, laws defining marriage as the union of one man and one woman, and environmentalism. 1,154 participants (680 male, 472 female; median age 29), recruited online through Amazon Mechanical Turk, completed three surveys: a 15-item Moral Intuition Survey (MIS), the 28-item Short Dark Triad personality inventory, and a five-item demographic survey. The results strongly reinforce my earlier findings. Twenty-two significant correlations were observed between “conservative” judgments and the Dark Triad (all of which were significant past a Bonferroni-corrected significance threshold of p = .0008), compared to seven significant correlations between Dark Triad and “liberal” judgments (only one of which was significant past p = .0008). This article concludes by developing a novel research proposal for determining whether the results of my two studies are “bad news” for conservatives or liberals.
I offer an alternative account of the relationship of Hobbesian geometry to natural philosophy by arguing that mixed mathematics provided Hobbes with a model for thinking about it. In mixed mathematics, one may borrow causal principles from one science and use them in another science without there being a deductive relationship between those two sciences. Natural philosophy for Hobbes is mixed because an explanation may combine observations from experience (the ‘that’) with causal principles from geometry (the ‘why’). My argument shows that Hobbesian natural philosophy relies upon suppositions that bodies plausibly behave according to these borrowed causal principles from geometry, acknowledging that bodies in the world may not actually behave this way. First, I consider Hobbes's relation to Aristotelian mixed mathematics and to Isaac Barrow's broadening of mixed mathematics in Mathematical Lectures (1683). I show that for Hobbes maker's knowledge from geometry provides the ‘why’ in mixed-mathematical explanations. Next, I examine two explanations from De corpore Part IV: (1) the explanation of sense in De corpore 25.1-2; and (2) the explanation of the swelling of parts of the body when they become warm in De corpore 27.3. In both explanations, I show Hobbes borrowing and citing geometrical principles and mixing these principles with appeals to experience.
Many of Margaret Cavendish’s criticisms of Thomas Hobbes in the Philosophical Letters (1664) relate to the disorder and damage that she holds would result if Hobbesian pressure were the cause of visual perception. In this paper, I argue that her “two men” thought experiment in Letter IV is aimed at a different goal: to show the explanatory potency of her account. First, I connect Cavendish’s view of visual perception as “patterning” to the “two men” thought experiment in Letter IV. Second, I provide a potential reply on Hobbes’s behalf that appeals to physiological differences between perceivers’ sense organs, drawing upon Hobbes’s optics in De homine. Third, I argue that such a reply would misunderstand Cavendish’s objective of showing the limited explanatory resources available in understanding visual perception as pressing when compared to her view of visual perception as patterning.
In my 2013 article, “A New Theory of Free Will”, I argued that several serious hypotheses in philosophy and modern physics jointly entail that our reality is structurally identical to a peer-to-peer (P2P) networked computer simulation. The present paper outlines how quantum phenomena emerge naturally from the computational structure of a P2P simulation. §1 explains the P2P Hypothesis. §2 then sketches how the structure of any P2P simulation realizes quantum superposition and wave-function collapse (§2.1.), quantum indeterminacy (§2.2.), wave-particle duality (§2.3.), and quantum entanglement (§2.4.). Finally, §3 argues that although this is by no means a philosophical proof that our reality is a P2P simulation, it provides ample reasons to investigate the hypothesis further using the methods of computer science, physics, philosophy, and mathematics.
The thesis that mental states are physical states enjoys widespread popularity. After the abandonment of type-identity theories, however, this thesis has typically been framed in terms of state tokens. I argue that token states are a philosopher’s fiction, and that debates about the identity of mental and physical state tokens thus rest on a mistake.
We present evidence from a pre-registered experiment indicating that a philosophical argument – a type of rational appeal – can persuade people to make charitable donations. The rational appeal we used follows Singer’s well-known “shallow pond” argument (1972), while incorporating an evolutionary debunking argument (Paxton, Ungar, & Greene 2012) against favoring nearby victims over distant ones. The effectiveness of this rational appeal did not differ significantly from that of a well-tested emotional appeal involving an image of a single child in need (Small, Loewenstein, and Slovic 2007). This is a surprising result, given evidence that emotions are the primary drivers of moral action, a view that has been very influential in the work of development organizations. We did not find support for our pre-registered hypothesis that combining our rational and emotional appeals would have a significantly stronger effect than either appeal in isolation. However, our finding that both kinds of appeal can increase charitable donations is cause for optimism, especially concerning the potential efficacy of well-designed rational appeals. We consider the significance of these findings for moral psychology, ethics, and the work of organizations aiming to alleviate severe poverty.
We argue that the aesthetic domain falls inside the scope of rationality, but does so in its own way. Aesthetic judgment is a stance neither on whether a proposition is to be believed nor on whether an action is to be done, but on whether an object is to be appreciated. Aesthetic judgment is simply appreciation. Correlatively, reasons supporting theoretical, practical and aesthetic judgments operate in fundamentally different ways. The irreducibility of the aesthetic domain is due to the fact that aesthetic judgment is a sensory-affective disclosure of, and responsiveness to, merit: it is a feeling that presents an object, and is responsive to it, as worthy of being liked. Aesthetic judgment is thus shown to be, on the one hand, first personal and non-transferable; and, on the other hand, a presentation of reality. We thereby capture what is right in both subjectivist and objectivist conceptions of aesthetic judgment.
This book argues that moral philosophy should be based on seven scientific principles of theory selection. It then argues that a new moral theory—Rightness as Fairness—satisfies those principles more successfully than existing theories. Chapter 1 explicates the seven principles of theory-selection, arguing that moral philosophy must conform to them to be truth-apt. Chapter 2 argues those principles jointly support founding moral philosophy in known facts of empirical moral psychology: specifically, our capacities for mental time-travel and modal imagination. Chapter 2 then shows that these capacities present human decisionmakers with a problem of diachronic rationality that includes but generalizes beyond L.A. Paul’s problem of transformative experience: a problem that I call “the problem of possible future selves.” Chapter 3 then argues that a new principle of rationality—the Categorical-Instrumental Imperative—is the only rational solution to this problem, as it requires our present and future selves to forge and uphold a recursive, bi-directional contract with each other given mutual recognition of the problem. Chapter 4 then shows that the Categorical-Instrumental Imperative has three identical formulations analogous to but superior to Immanuel Kant’s various formulations of his ‘categorical imperative.’ Chapter 5 shows that these unified formulas jointly entail a particular test of moral principles: a Moral Original Position similar to John Rawls’ famous ‘original position’, but which avoids a variety of problems with Rawls' model. Chapter 6 then shows that the Moral Original Position generates Four Principles of Fairness, which can then be combined into a single principle of moral rightness: Rightness as Fairness. This new conception of rightness is shown to reconcile four dominant moral frameworks (deontology, consequentialism, virtue ethics, and contractualism), as well as entail a new method of moral decisionmaking for applied ethics: a method of “principled fair negotiation” according to which applied ethical issues cannot be wholly resolved through principled debate, but must instead be resolved by actual negotiation and compromise. This method is then argued to generate novel, nuanced analyses of a variety of applied moral issues, including trolley cases, torture, and the ethical treatment of nonhuman animals. Chapter 7 then shows that Rightness as Fairness reconciles three leading political frameworks—libertarianism, egalitarianism, and communitarianism—showing how all three embody legitimate moral ideals that can, and should, be fairly negotiated against each other to settle the scope, and nature, of domestic, international, and global justice on an ongoing, iterated basis. Finally, Chapter 8 argues that Rightness as Fairness satisfies all seven of the principles of theory selection defended in Chapter 1 more successfully than rival theories.
I examine how co-parents should handle differing commitments about how to raise their child. Via thought experiment and the examination of our practices and affective reactions, I argue for a thesis about the locus of parental authority: that parental authority is invested in full in each individual parent, meaning that the command of one parent is sufficient to bind the child to act in obedience. If this full-authority thesis is true, then for co-parents to command different things would be for them to contest one another’s authority. The only course that respects the authority of both parents is for co-parents to agree to command the same thing. Further, what is commanded must not result from a ‘capitulation’ by one co-parent; rather, it should result from a compromise. Parental authority involves a duty to deliberate about which commands it is best to give the child. If a command results from a capitulation, one parent will rightly think of themselves as not having fulfilled their parental duty. Parental compromises are not best understood as bargains or conflicts, but by the metaphor of gifts given by each parent out of respect for the other’s authority.
We offer a new argument for the claim that there can be non-degenerate objective chance (“true randomness”) in a deterministic world. Using a formal model of the relationship between different levels of description of a system, we show how objective chance at a higher level can coexist with its absence at a lower level. Unlike previous arguments for the level-specificity of chance, our argument shows, in a precise sense, that higher-level chance does not collapse into epistemic probability, despite higher-level properties supervening on lower-level ones. We show that the distinction between objective chance and epistemic probability can be drawn, and operationalized, at every level of description. There is, therefore, not a single distinction between objective and epistemic probability, but a family of such distinctions.
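A toy illustration of the basic phenomenon the paper formalizes (our own example, not the authors' formal model): a perfectly deterministic micro-dynamics can induce non-trivial transition "chances" at a coarse-grained level of description.

```python
from collections import Counter

def micro_step(x: int) -> int:
    """Deterministic micro-dynamics on the states 0..7 (a fixed permutation)."""
    return (5 * x + 3) % 8

def macro(x: int) -> str:
    """Coarse-graining: the macro-state only records whether x is 'low' or 'high'."""
    return "low" if x < 4 else "high"

# Follow one micro-trajectory and record the induced macro-level transitions.
x = 1
counts = Counter()
for _ in range(10_000):
    nxt = micro_step(x)
    counts[(macro(x), macro(nxt))] += 1
    x = nxt

# Micro level: every step is fully determined.  Macro level: the transition
# frequencies behave like a non-trivial stochastic process (here ~0.75 / ~0.25).
totals = Counter()
for (a, _), n in counts.items():
    totals[a] += n
for (a, b), n in sorted(counts.items()):
    print(f"P({b} | {a}) ~ {n / totals[a]:.2f}")
```

Whether such coarse-grained frequencies count as genuinely objective chance rather than mere epistemic probability is, of course, exactly the question the paper's formal framework is built to answer; the sketch only shows why the question arises.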
Human rights theory and practice have long been stuck in a rut. Although disagreement is the norm in philosophy and social-political practice, the sheer depth and breadth of disagreement about human rights is truly unusual. Human rights theorists and practitioners disagree – wildly in many cases – over just about every issue: what human rights are, what they are for, how many of them there are, how they are justified, what human interests or capacities they are supposed to protect, what they require of persons and institutions, etc. Disagreement about human rights is so profound, in fact, that several prominent theorists have remarked that the very concept of a “human right” appears nearly criterionless. In my 2012 article, “Reconceptualizing Human Rights”, I diagnosed the root cause of these problems. Theorists and practitioners have falsely supposed that the concept of “human right” picks out a single, unified class of moral entitlements. However, the concept actually refers to two fundamentally different types of moral entitlements: (A) international human rights, which are universal human moral entitlements to coercive international protections, and (B) domestic human rights, which are universal human moral entitlements to coercive domestic protections. Accordingly, I argue, an adequate “theory of human rights” must be a dual theory. The present paper provides the first such theory. First, I show that almost every justificatory ground given for “human rights” in the literature – such as the notion of a “minimally decent human life”, “urgent human interests”, and “human needs” – faces at least one of two fatal problems. Second, I show that after some revisions, James Griffin’s conception of “personhood” provides a compelling justificatory ground for international human rights. Third, I show that the account entails that there are very few international human rights – far fewer than existing human rights theories and practices suggest. Fourth, I show that there are reasons to find my very short list of international human rights compelling: “human rights justifications” for coercive international and foreign policy actions over the past several decades have consistently overstepped what can be morally justified, and my account reveals precisely how existing human rights theories and practices have failed to adequately grapple with these moral hazards. Finally, I outline an account of domestic human rights which fits well with many existing human rights beliefs and practices, vindicating those beliefs and practices, but only at a domestic level.
This paper demonstrates something that Kant notoriously claimed to be possible, but which Kant scholars today widely believe to be impossible: unification of all three formulations of the Categorical Imperative. Part 1 of this paper tells a broad-brush story of how I understand Kant’s theory of practical reason and morality, showing how the three formulations of the Categorical Imperative appear to be unified. Part 2 then provides clear textual support for each premise in the argument for my interpretation.
Research on preference reversals has demonstrated a disproportionate influence of outcome probability on choices between monetary gambles. The aim was to investigate the hypothesis that this is a prominence effect originally demonstrated for riskless choice. Another aim was to test the structure compatibility hypothesis as an explanation of the effect. The hypothesis implies that probability should be the prominent attribute when compared with value attributes both in a choice and a preference rating procedure. In Experiment 1, two groups of undergraduates were presented with medical treatments described by two value attributes (effectiveness and pain-relief). All participants performed both a matching task and made preference ratings. In the latter task, outcome probabilities were added to the descriptions of the medical treatments for one of the groups. In line with the hypothesis, this reduced the prominence effect on the preference ratings observed for effectiveness. In Experiment 2, a matching task was used to demonstrate that probability was considered more important by a group of participating undergraduates than the value attributes. Furthermore, in both choices and preference ratings the expected prominence effect was found for probability.
In Ethics for a Broken World: Imagining Philosophy after Catastrophe, Tim Mulgan applies a number of influential moral and political theories to a “broken world”: a world of environmental catastrophe in which resources are insufficient to meet everyone’s basic needs. This paper shows that John Rawls’ conception of justice as fairness has very different implications for a broken world than Mulgan suggests it does. §1 briefly summarizes Rawls’ conception of justice, including how Rawls uses a hypothetical model – the “original position” – to argue for principles of justice. §2 explains how Mulgan uses a variation of Rawls’ original position – a broken original position – to argue that justice as fairness requires a “fair survival lottery” in a broken world. §3 shows that the parties to a broken original position have reasons not to agree to such a survival lottery. §4 then shows that Mulgan’s argument hangs upon a false assumption: that there are no viable options to adopt in a broken world besides some kind of survival lottery. Finally, §5 shows that the parties to a broken original position would instead rationally agree to a scheme of equal rights and opportunities to earn or forfeit shares of scarce resources on the basis of each person’s comparative contribution to human survival.
This paper defends several highly revisionary theses about human rights. Section 1 shows that the phrase 'human rights' refers to two distinct types of moral claims. Sections 2 and 3 argue that several longstanding problems in human rights theory and practice can be solved if, and only if, the concept of a human right is replaced by two more exact concepts: (A) International human rights, which are moral claims sufficient to warrant coercive domestic and international social protection; and (B) Domestic human rights, which are moral claims sufficient to warrant coercive domestic social protection but only non-coercive international action. Section 3 then argues that because coercion is central to both types of human rights, and coercion is a matter of justice, the traditional view of human rights -- that they are normative entitlements prior to and independent of substantive theories of justice -- is incorrect. Human rights must instead be seen as emerging from substantive theories of domestic and international justice. Finally, Section 4 uses this reconceptualization to show that only a few very minimal claims about international human rights are presently warranted. Because international human rights are rights of international justice, but theorists of international justice disagree widely about the demands of international justice, much more research on international justice is needed -- and much greater agreement about international justice should be reached -- before anything more than a very minimal list of international human rights can be justified.
This dissertation defends a “non-ideal theory” of justice: a systematic theory of how to respond justly to injustice. Chapter 1 argues that contemporary political philosophy lacks a non-ideal theory of justice, and defends a variation of John Rawls’ famous original position – a Non-Ideal Original Position – as a method with which to construct such a theory. Chapter 1 then uses the Non-Ideal Original Position to argue for a Fundamental Principle of Non-Ideal Theory: a principle that requires injustices to be dealt with in whichever way will best satisfy the preferences of all relevant individuals, provided those individuals are all rational, adequately informed, broadly moral, and accept the correct “ideal theory” of fully just conditions. Chapter 2 then argues for the Principle of Application – an epistemic principle that represents the Fundamental Principle’s satisfaction conditions in terms of the aims of actual or hypothetical reformist groups. Chapters 3-5 then use these two principles to argue for substantive views regarding global/international justice. Chapter 3 argues that the two principles establish a higher-order human right for all other human rights to be promoted and protected in accordance with the two principles of non-ideal theory. Chapter 4 argues that the two principles defeasibly require the international community to tolerate unjust societies, provided those societies respect the most basic rights of individuals. Finally, Chapter 5 argues that the two principles imply a duty of the international community to ameliorate the most severe forms of global poverty, as well as a duty to pursue “fair trade” in international economics.
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance. The paper complies with the standard economic methodology of basing probability and utility representations on preference axioms, but for the sake of completeness, also considers a construal of objective uncertainty based on the assumption of an exogenously given probability measure. JEL classification: D70; D81.
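As a rough illustration of the two criteria being contrasted (schematic forms in our own notation, not the paper's axiomatization): with welfare weights $\lambda_i$, an ex ante criterion aggregates individual expected utilities, while an ex post criterion takes the expectation of aggregated realized utilities,

$$W_{\text{ex ante}}(f) \;=\; \sum_i \lambda_i \sum_{s,t} \pi(s)\,q_i(t)\,u_i\bigl(f(s,t)\bigr),
\qquad
W_{\text{ex post}}(f) \;=\; \sum_{s,t} p(s,t) \sum_i \lambda_i\,u_i\bigl(f(s,t)\bigr),$$

where $s$ and $t$ index the objective and subjective sources of uncertainty, $\pi(s)$ is a probability on the objective source shared by all individuals, $q_i(t)$ is individual $i$'s probability on the subjective source, and $p(s,t)$ is a social probability. When individuals' probabilities disagree, the two criteria can come apart, which is the conflict the paper analyzes.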
In a couple of classical studies, Keeney proposed two sets of variables labelled value-focused thinking (VFT) and alternative-focused thinking (AFT). Value-focused thinking (VFT), he argued, is a creative method that centres on the different decision objectives and how as many alternatives as possible may be generated from them. Alternative-focused thinking (AFT), on the other hand, is a method in which the decision maker takes notice of all the available alternatives and then makes a choice that seems to fit the problem best. The impact of these two methods on idea generation was measured using a sample of employees. The results revealed that employees in the value-focused thinking condition (VFT) produced fewer ideas. Thus, value-focused thinking (VFT) is not only able to facilitate ideation fluency but also to constrain it. Factors such as cognitive effort and motivation may play a part here. However, the quality of the ideas was judged to be higher in terms of creativity and innovativeness. Hence, value-focused thinking (VFT) seems to have a positive impact on the quality of ideas in terms of creativity and innovativeness regardless of ideation fluency. Implications for the design of idea management systems are discussed.
This article evaluates the effects of two types of rewards (performance-contingent versus engagement-contingent) on self-regulation, intrinsic motivation and creativity. Forty-two undergraduate students were randomly assigned to three conditions: a performance-contingent reward group, an engagement-contingent reward group, and a control group. Results provide little support for the negative effects of performance rewards on motivational components. However, they do indicate that participants in the engagement-contingent reward group and the control group achieved higher rated creativity than participants in the performance-contingent reward group. Alternative explanations for this finding are discussed.
The use of different response modes has been found to influence how subjects evaluate pairs of alternatives described by two attributes. It has been suggested that judgments and choices evoke different kinds of cognitive processes, leading to an overweighting of the prominent attribute in choice (Tversky, Sattath, & Slovic, 1988; Fischer & Hawkins, 1993). Four experiments were conducted to compare alternative cognitive explanations of this so-called prominence effect in judgment and choice. The explanations investigated were the structure compatibility hypothesis and the restructuring hypothesis. According to the structure compatibility hypothesis, it was assumed that the prominence effect is due to a lack of compatibility between the required output from subjects and the structure of information in the input. The restructuring hypothesis stated that the decision maker uses mental restructuring operations on a representation of decision options to make the options more clearly differentiated. In Experiment 1, a matching procedure was used to provide pairs of equally attractive options (medical treatments) for the following experiments. In Experiments 2, 3, and 4, preferences were elicited with two different response modes, choice and preference rating. Value ranges on the prominent and nonprominent attributes were manipulated to test the structure compatibility hypothesis. Accountability was also subject to manipulation as it was assumed to stimulate restructuring. Since the prominence effect was not restricted to choices, and effects of value ranges were obtained but not of accountability, the results were interpreted in line with the structure compatibility hypothesis.
Purpose – The study aims to clarify whether or not locus of control may act as a bias in organisational decision-making. Design/methodology/approach – Altogether 44 managers working at Skanska (a Swedish multinational construction company) participated in the study. They were asked to complete a booklet including a locus of control test and a couple of decision tasks. The latter were based on case scenarios reflecting strategic issues relevant for consultative/participative decision-making. Findings – The results revealed that managers with low external locus of control used group consultative decision-making more frequently than those with high external locus of control. There was also a tendency showing that high externals more frequently used participative decision-making than low externals. This was in line with the general trend, indicating that managers on the whole predominantly used participative decision-making. Originality/value – The results of the present study are valuable for HRM practice, especially with regard to the selection of individuals for management teams.
A violation of procedure invariance in preference measurement is that the predominant or prominent attribute looms larger in choice than in a matching task. In Experiment 1, this so-called prominence effect was demonstrated for choices between pairs of options, choices to accept single options, and preference ratings of single options. That is, in all these response modes the prominent attribute loomed larger than in matching. The results were replicated in Experiment 2, in which subjects chose between or rated their preference for pairs of options which were matched to be equally attractive either in the same session or 1 week earlier. On the basis of these and previous results, it is argued that the prominence effect is a reliable phenomenon. However, none of several cognitive explanations which have been offered appears to be completely viable.
Within Kantian ethics and Kant scholarship, it is widely assumed that autonomy consists in the self-legislation of the principle of morality. In this paper, we challenge this view on both textual and philosophical grounds. We argue that Kant never unequivocally claims that the Moral Law is self-legislated and that he is not philosophically committed to this claim by his overall conception of morality. Instead, the idea of autonomy concerns only substantive moral laws, such as the law that one ought not to lie. We argue that autonomy, thus understood, does not have the paradoxical features widely associated with it. Rather, our account highlights a theoretical option that has been neglected in the current debate on whether Kant is best interpreted as a realist or a constructivist, namely that the Moral Law is an a priori principle of pure practical reason that neither requires nor admits of being grounded in anything else.
The life-cycle theory of saving behavior (Modigliani, 1988) suggests that humans strive towards an equal intertemporal distribution of wealth. However, behavioral life-cycle theory (Shefrin & Thaler, 1988) proposes that people use self-control heuristics to postpone wealth until later in life. According to this theory, people use a system of cognitive budgeting known as mental accounting. In the present study it was found that mental accounts were used differently depending on whether the income change was positive or negative. This was shown both in a representative nationwide sample of households and in a student sample. Respondents were more willing to cut down on their propensity to consume when faced with an income decrease than to raise it when the income increased. Furthermore, contrary to the predictions of behavioral life-cycle theory, it was found that the respondents adjusted their propensity to consume the most when the income increases or decreases took place immediately. Hence, it is suggested that theories of intertemporal choice (e.g., Loewenstein, 1988; Loewenstein & Prelec, 1992) provide a better account of the data than does the behavioral life-cycle theory.
It is argued that the design of decisions is a process that in many ways is shaped by social factors such as identities, values, and influences. To be able to understand how these factors impact organizational decisions, the focus must be set on the management level. It is the management that shoulders the chief responsibility for designing collective actions, such as decisions. Our propositions indicate that the following measures must be taken in order to improve the quality of organizational decisions: 1. The identity of the people involved in organizational decision making affects the quality of decisions and should be taken into account in the design of decisions. 2. The decision maker or designer of decisions should engage the members of an organization to create a shared vision. 3. Getting the members of an organization to express and share common values should improve the decision making process. 4. Being able to socially influence the members of an organization, or other stakeholders involved, as well as letting them participate in the process, should improve the quality of decisions.
This chapter focuses on the psychological mechanisms behind the construction of preference, especially the actual processes used by humans when they make decisions in their everyday lives or in business situations. The chapter uses cognitive psychological techniques to break down these processes and set them in their social context. When attributes are compatible with the response scale, they are assigned greater weight because they are most easily mapped onto the response. For instance, when subjects are asked to set a price for a gamble, this task is compatible with the information about the gamble payoff, which is also expressed in monetary values (e.g., dollars). Conversely, when the task requires a choice, the payoff information is not easily mapped onto the response anymore and loses some of its salience. In fact, Slovic, Griffin, and Tversky (1990) showed that using non-monetary outcomes attenuates preference reversals when no compatibility between the pricing task and the outcome attribute was possible. An assumption of the compatibility effect is that response modes compatible with specific characteristics of the options (e.g., payoffs) draw attention to them. One of the main themes that has emerged from behavioral decision research during the past decades is the view that people's preferences are often constructed—not merely revealed—in the process of elicitation (see e.g. Slovic). This conception is derived in part from studies demonstrating that normatively equivalent methods of elicitation often give rise to systematically different responses. These "preference reversals" violate the principle of procedure invariance fundamental to theories of rational choice and raise difficult questions about the nature of human values. If different elicitation procedures produce different orderings of options, how can preferences be defined and in what sense do they exist? Describing and explaining such failures of invariance will require choice models of far greater complexity than the traditional models.
In the present study it was shown that decision heuristics and confidence judgements play important roles in the building of preferences. Based on a dual-process account of thinking, the study compared people who did well versus poorly on a series of decision heuristics and overconfidence judgement tasks. The two groups were found to differ with regard to their information search behaviour in the multiattribute choice tasks introduced to them. High performers on the judgemental tasks were less influenced in their decision processes by numerical information format (probabilities vs. frequencies) compared to low performers. They also looked at more attributes and spent more time on the multiattribute choice tasks. The results reveal that performance on decision heuristics and overconfidence tasks has a bearing both on heuristic and analytic processes in multiattribute decision making.