Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014, pp. 31–58). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which Direct Difference Taking avoids.
The halting theorem counter-examples present infinitely nested simulation (non-halting) behavior to every simulating halt decider. The pathological self-reference of the conventional halting problem proof counter-examples is overcome. The halt status of these examples is correctly determined. A simulating halt decider remains in pure simulation mode until after it determines that its input will never reach its final state. This eliminates the conventional feedback loop where the behavior of the halt decider affects the behavior of its input.
Although expected utility theory has proven a fruitful and elegant theory in the finite realm, attempts to generalize it to infinite values have resulted in many paradoxes. In this paper, we argue that the use of John Conway's surreal numbers can provide a firm mathematical foundation for transfinite decision theory. To that end, we prove a surreal representation theorem and show that our surreal decision theory respects dominance reasoning even in the case of infinite values. We then bring our theory to bear on one of the more venerable decision problems in the literature: Pascal's Wager. Analyzing the wager showcases our theory's virtues and advantages. To that end, we analyze two objections against the wager: Mixed Strategies and Many Gods. After formulating the two objections in the framework of surreal utilities and probabilities, our theory correctly predicts that (1) the pure Pascalian strategy beats all mixed strategies, and (2) what one should do in a Pascalian decision problem depends on what one's credence function is like. Our analysis therefore suggests that although Pascal's Wager is mathematically coherent, it does not deliver what it purports to: a rationally compelling argument that people should lead a religious life regardless of how confident they are in theism and its alternatives.
Some environmental ethicists and economists argue that attributing infinite value to the environment is a good way to represent an absolute obligation to protect it. Others argue against modelling the value of the environment in this way: the assignment of infinite value leads to immense technical and philosophical difficulties that undermine the environmentalist project. First, there is a problem of discrimination: saving a large region of habitat is better than saving a small region; yet if both outcomes have infinite value, then decision theory prescribes indifference. Second, there is a problem of swamping probabilities: an act with a small but positive probability of saving an endangered species appears to be on par with an act that has a high probability of achieving this outcome, since both have infinite expected value. Our paper shows that a relative concept of infinite value can be meaningfully defined, and provides a good model for securing the priority of the natural environment while avoiding the failures noted by sceptics about infinite value. Our claim is not that the relative infinity utility model gets every detail correct, but rather that it provides a rigorous philosophical framework for thinking about decisions affecting the environment.
Among recent objections to Pascal's Wager, two are especially compelling. The first is that decision theory, and specifically the requirement of maximizing expected utility, is incompatible with infinite utility values. The second is that even if infinite utility values are admitted, the argument of the Wager is invalid provided that we allow mixed strategies. Furthermore, Hájek has shown that reformulations of Pascal's Wager that address these criticisms inevitably lead to arguments that are philosophically unsatisfying and historically unfaithful. Both the objections and Hájek's philosophical worries disappear, however, if we represent our preferences using relative utilities rather than a one-place utility function. Relative utilities provide a conservative way to make sense of infinite value that preserves the familiar equation of rationality with the maximization of expected utility. They also provide a means of investigating a broader class of problems related to the Wager.
I argue that medieval solutions to the limit decision problem imply four-dimensionalism, i.e. the view according to which substances that persist through time are extended through time as well as through space, and have different temporal parts at different times.
Heinrich Behmann (1891-1970) obtained his Habilitation under David Hilbert in Göttingen in 1921 with a thesis on the decision problem. In his thesis, he solved - independently of Löwenheim and Skolem's earlier work - the decision problem for monadic second-order logic in a framework that combined elements of the algebra of logic and the newer axiomatic approach to logic then being developed in Göttingen. In a talk given in 1921, he outlined this solution, but also presented important programmatic remarks on the significance of the decision problem and of decision procedures more generally. The text of this talk as well as a partial English translation are included.
By “Brentanian inner consciousness” I mean the conception of inner consciousness developed by Franz Brentano. The aim of this paper is threefold: first, to present Brentano’s account of inner consciousness; second, to discuss this account in light of the mereology outlined by Brentano himself; and third, to decide whether this account incurs an infinite regress. In this regard, I distinguish two kinds of infinite regress: external infinite regress and internal infinite regress. I contend that the most plausible reading of Brentano’s account is the so-called fusion thesis, and I argue that internal infinite regress turns out to be inherent to Brentanian inner consciousness.
The menu-dependent nature of regret-minimization creates subtleties when it is applied to dynamic decision problems. It is not clear whether forgone opportunities should be included in the menu. We explain commonly observed behavioral patterns as minimizing regret when forgone opportunities are present. If forgone opportunities are included, we can characterize when a form of dynamic consistency is guaranteed.
Many think that Pascal’s Wager is a hopeless failure. A primary reason for this is because a number of challenging objections have been raised to the wager, including the “many gods” objection and the “mixed strategy” objection. We argue that both objections are formal, but not substantive, problems for the wager, and that they both fail for the same reason. We then respond to additional objections to the wager. We show how a version of Pascalian reasoning succeeds, giving us a reason to pay special attention to the infinite consequences of our actions.
We argue that C. Darwin and more recently W. Hennig worked at times under the simplifying assumption of an eternal biosphere. So motivated, we explicitly consider the consequences which follow mathematically from this assumption, and the infinite graphs it leads to. This assumption admits certain clusters of organisms which have some ideal theoretical properties of species, shining some light onto the species problem. We prove a dualization of a law of T.A. Knight and C. Darwin, and sketch a decomposition result involving the internodons of D. Kornet, J. Metz and H. Schellinx. A further goal of this paper is to respond to B. Sturmfels’ question, “Can biology lead to new theorems?”.
I argue that the Universal Law formulation of the Categorical Imperative is best interpreted as a test or decision procedure of moral rightness and not as a criterion intended to explain the deontic status of actions. Rather, the Humanity formulation is best interpreted as a moral criterion. I also argue that because the role of a moral criterion is to explain, and thus specify what makes an action right or wrong, Kant's Humanity formulation yields a theory of relevant descriptions.
Relative to any reasonable frame, satisfiability of modal quantificational formulae in which “=” is the sole predicate is undecidable; but if we restrict attention to satisfiability in structures with the expanding domain property, satisfiability relative to the familiar frames (K, K4, T, S4, B, S5) is decidable. Furthermore, relative to any reasonable frame, satisfiability for modal quantificational formulae with a single monadic predicate is undecidable; this improves the result of Kripke concerning formulae with two monadic predicates.
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
Our universe is both chaotic and (most likely) infinite in space and time. But it is within this setting that we must make moral decisions. This presents problems. The first: due to our universe's chaotic nature, our actions often have long-lasting, unpredictable effects; and this means we typically cannot say which of two actions will turn out best in the long run. The second problem: due to the universe's infinite dimensions, and infinite population therein, we cannot compare outcomes by simply adding up their total moral values - those totals will typically be infinite or undefined. Each of these problems poses a threat to aggregative moral theories. But, for each, we have solutions: a proposal from Greaves lets us overcome the problem of chaos, and proposals from the infinite aggregation literature let us overcome the problem of infinite value. But a further problem emerges. If our universe is both chaotic and infinite, those solutions no longer work - outcomes that are infinite and differ by chaotic effects are incomparable, even by those proposals. In this paper, I show that we can overcome this further problem. But, to do so, we must accept some peculiar implications about how aggregation works.
For aggregative theories of moral value, it is a challenge to rank worlds that each contain infinitely many valuable events. And, although there are several existing proposals for doing so, few provide a cardinal measure of each world's value. This raises the even greater challenge of ranking lotteries over such worlds—without a cardinal value for each world, we cannot apply expected value theory. How then can we compare such lotteries? To date, we have just one method for doing so (proposed separately by Arntzenius, Bostrom, and Meacham), which is to compare the prospects for value at each individual location, and to then represent and compare lotteries by their expected values at each of those locations. But, as I show here, this approach violates several key principles of decision theory and generates some implausible verdicts. I propose an alternative—one which delivers plausible rankings of lotteries, which is implied by a plausible collection of axioms, and which can be applied alongside almost any ranking of infinite worlds.
By making a slight refinement to the halt status criterion measure that remains consistent with the original, a halt decider may be defined that correctly determines the halt status of the conventional halting problem proof counter-examples. This refinement overcomes the pathological self-reference issue that previously prevented halting decidability.
The halting theorem counter-examples present infinitely nested simulation (non-halting) behavior to every simulating halt decider. Whenever the pure simulation of the input to simulating halt decider H(x,y) would never stop running unless H aborts its simulation, H correctly aborts this simulation and returns 0 for not halting.
This is an explanation of a key new insight into the halting problem provided in the language of software engineering. Technical computer science terms are explained using software engineering terms. To fully understand this paper a software engineer must be an expert in the C programming language, the x86 programming language, exactly how C translates into x86, and what an x86 process emulator is. No knowledge of the halting problem is required.
Because of its non-representational nature, music has long had an affinity with computational and algorithmic methodologies for automatic composition and performance. Today, AI and computer technology are transforming systems of automatic music production from passive means within musical creative processes into ever more autonomous, active collaborators of human musicians. This raises a large number of interrelated questions, both about the theoretical problems of artificial musical creativity and about its ethical consequences. Considering two of the most urgent ethical problems of Musical AI (music job replacement and machine musical authorship), we show in this essay that any acknowledgment of a moral and legal status for systems of automatic music production depends strictly on the theoretical account of musical creativity that is implicitly or explicitly adopted, and we argue, on pragmatic grounds, for the necessity and the desirability of this acknowledgment.
This book examines the philosophy of the nineteenth-century Indian mystic Sri Ramakrishna and brings him into dialogue with Western philosophers of religion, primarily in the recent analytic tradition. Sri Ramakrishna’s expansive conception of God as the impersonal-personal Infinite Reality, Maharaj argues, opens up an entirely new paradigm for addressing central topics in the philosophy of religion, including divine infinitude, religious diversity, the nature and epistemology of mystical experience, and the problem of evil.
A common argument for atheism runs as follows: God would not create a world worse than other worlds he could have created instead. However, if God exists, he could have created a better world than this one. Therefore, God does not exist. In this paper I challenge the second premise of this argument. I argue that if God exists, our world will continue without end, with God continuing to create value-bearers, and sustaining and perfecting the value-bearers he has already created. Given this, if God exists, our world—considered on the whole—is infinitely valuable. I further contend that this theistic picture makes our world's value unsurpassable. In support of this contention, I consider proposals for how infinitely valuable worlds might be improved upon, focusing on two main ways—adding value-bearers and increasing the value in present value-bearers. I argue that neither of these can improve our world. Depending on how each method is understood, either it would not improve our world, or our world is unsurpassable with respect to it. I conclude by considering the implications of my argument for the problem of evil more generally conceived.
This article discusses how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. Techniques and ideas from non-standard analysis are brought to bear on the problem.
A Simulating Halt Decider (SHD) computes the mapping from its input to its own accept or reject state based on whether or not the input simulated by a UTM would reach its final state in a finite number of simulated steps. A halt decider (because it is a decider) must report on the behavior specified by its finite string input. This is its actual behavior when it is simulated by the UTM contained within its simulating halt decider while this SHD remains in UTM mode.
Andy Egan recently drew attention to a class of decision situations that provide a certain kind of informational feedback, which he claims constitute a counterexample to causal decision theory. Arntzenius and Wallace have sought to vindicate a form of CDT by describing a dynamic process of deliberation that culminates in a “mixed” decision. I show that, for many of the cases in question, this proposal depends on an incorrect way of calculating expected utilities, and argue that it is therefore unsuccessful. I then tentatively defend an alternative proposal by Joyce, which produces a similar process of dynamic deliberation but for a different reason.
The problem of the man who met death in Damascus appeared in the infancy of the theory of rational choice known as causal decision theory. A straightforward, unadorned version of causal decision theory is presented here and applied, along with Brian Skyrms’ deliberation dynamics, to Death in Damascus and similar problems. Decision instability is a fascinating topic, but not a source of difficulty for causal decision theory. Andy Egan’s purported counterexample to causal decision theory, Murder Lesion, is considered; a simple response shows how Murder Lesion and similar examples fail to be counterexamples, and clarifies the use of the unadorned theory in problems of decision instability. I compare unadorned causal decision theory to previous treatments by Frank Arntzenius and by Jim Joyce, and recommend a well-founded heuristic that all three accounts can endorse. Whatever course deliberation takes, causal decision theory is consistently a good guide to rational action.
The dispute in philosophical decision theory between causalists and evidentialists remains unsettled. Many are attracted to the causal view’s endorsement of a species of dominance reasoning, and to the intuitive verdicts it gets on a range of cases with the structure of the infamous Newcomb’s Problem. But it also faces a rising wave of purported counterexamples and theoretical challenges. In this paper I will describe a novel decision theory which saves what is appealing about the causal view while avoiding its most worrying objections, and which promises to generalize to solve a set of related problems in other normative domains.
Jackson (1991) proposes an interpretation of consequentialism, namely, Decision Theoretic Consequentialism (DTC), which provides a middle ground between internal and external criteria of rightness inspired by decision theory. According to DTC, a right decision either leads to the best outcomes (external element) or springs from right motivations (internal element). He raises an objection to fully external interpretations, like objective consequentialism (OC), which he claims DTC can resolve. He argues that those interpretations are either too objective, which prevents them from giving guidance for action, or their guidance leads to wrong and blameworthy actions or decisions. I discuss how the emphasis on blameworthiness in DTC constrains its domain to merely the justification of decisions, relying on rationality to provide a justification criterion for moral decisions. I provide examples that support the possibility of rational but immoral decisions that are at odds with DTC’s prescription for right decisions. Moreover, I argue that what I call the desire-luck problem for the external element of the justification criterion leads to the same objection for DTC that Jackson raised for OC. Therefore, DTC, although successful in response to some objections, fails to provide a prescription for the right decision.
This essay makes the case for, in the phrase of Angelika Kratzer, packing the fruits of the study of rational decision-making into our semantics for deontic modals—specifically, for parametrizing the truth-condition of a deontic modal to things like decision problems and decision theories. Then it knocks it down. While the fundamental relation of the semantic theory must relate deontic modals to things like decision problems and theories, this semantic relation cannot be intelligibly understood as representing the conditions under which a deontic modal is true. Rather it represents the conditions under which it is accepted by a semantically competent agent. This in turn motivates a reorientation of the whole of semantic theorizing, away from the truth-conditional paradigm, toward a form of Expressivism.
Aggregative moral theories face a series of devastating problems when we apply them in a physically realistic setting. According to current physics, our universe is likely _infinitely large_, and will contain infinitely many morally valuable events. But standard aggregative theories are ill-equipped to compare outcomes containing infinite total value; so, applied in a realistic setting, they cannot compare any outcomes a real-world agent must ever choose between. This problem has been discussed extensively, and non-standard aggregative theories proposed to overcome it. This paper addresses a further problem of similar severity. Physics tells us that, in our universe, how remotely in time an event occurs is _relative_. But our most promising aggregative theories, designed to compare outcomes containing infinitely many valuable events, are sensitive to how remote in time those events are. As I show, the evaluations of those theories are then relative too. But this is absurd; evaluations of outcomes must be absolute. So we must reject such theories. Is this objection fatal for all aggregative theories, at least in a relativistic universe like ours? I demonstrate here that, by further modifying these theories to fit with the physics, we can overcome it.
This paper uses a schema for infinite regress arguments to provide a solution to the problem of the infinite regress of justification. The solution turns on the falsity of two claims: that a belief is justified only if some belief is a reason for it, and that the reason relation is transitive.
Big decisions in a person’s life often affect the preferences and standards of a good life which that person’s future self will develop after implementing her decision. This paper argues that in such cases the person might lack any reasons to choose one way rather than the other. Neither preference-based views nor happiness-based views of justified choice offer sufficient help here. The available options are not comparable in the relevant sense and there is no rational choice to make. Thus, ironically, in many of a person’s most important decisions the idea of that person’s good seems to have no application.
According to one of Leibniz's theories of contingency a proposition is contingent if and only if it cannot be proved in a finite number of steps. It has been argued that this faces the Problem of Lucky Proof, namely that we could begin by analysing the concept ‘Peter’ by saying that ‘Peter is a denier of Christ and …’, thereby having proved the proposition ‘Peter denies Christ’ in a finite number of steps. It also faces a more general but related problem that we dub the Problem of Guaranteed Proof. We argue that Leibniz has an answer to these problems since for him one has not proved that ‘Peter denies Christ’ unless one has also proved that ‘Peter’ is a consistent concept, an impossible task since it requires the full decomposition of the infinite concept ‘Peter’. We defend this view from objections found in the literature and maintain that for Leibniz all truths about created individual beings are contingent.
My topic is how to make decisions when you possess foreknowledge of the consequences of your choice. Many have thought that these kinds of decisions pose a distinctive and novel problem for causal decision theory (CDT). My thesis is that foreknowledge poses no new problems for CDT. Some of the purported problems are not problems. Others are problems, but they are not problems for CDT. Rather, they are problems for our theories of subjunctive supposition. Others are problems, but they are not new problems. They are old problems transposed into a new key. Nonetheless, decisions made with foreknowledge illustrate important lessons about the instrumental value of our choices. Once we've appreciated these lessons, we are left with a version of CDT which faces no novel threats from foreknowledge.
Context: The infinite has long been an area of philosophical and mathematical investigation. There are many puzzles and paradoxes that involve the infinite. Problem: The goal of this paper is to answer the question: Which objects are the infinite numbers (when order is taken into account)? Though not currently considered a problem, I believe that it is of primary importance to identify properly the infinite numbers. Method: The main method that I employ is conceptual analysis. In particular, I argue that the infinite numbers should be as much like the finite numbers as possible. Results: Using finite numbers as our guide to the infinite numbers, it follows that infinite numbers are of the structure ω + (ω* + ω)α + ω*. This same structure also arises when a large finite number is under investigation. Implications: A first implication of the paper is that infinite numbers may be large finite numbers that have not been investigated fully. A second implication is that there is no number of finite numbers. Third, a number of paradoxes of the infinite are resolved. One change that should occur as a result of these findings is that “infinitely many” should refer to structures of the form ω + (ω* + ω)α + ω*; in contrast, there are “indefinitely many” natural numbers. Constructivist content: The constructivist perspective of the paper is a form of strict finitism.
A comment on Paul Schoemaker's target article in Behavioral and Brain Sciences, 14 (1991), p. 205-215, "The Quest for Optimality: A Positive Heuristic of Science?" (https://doi.org/10.1017/S0140525X00066140). This comment argues that the optimizing model of decision leads to an infinite regress, once internal costs of decision (i.e., information and computation costs) are duly taken into account.
The paper argues that on three out of eight possible hypotheses about the EPR experiment we can construct novel and realistic decision problems on which (a) Causal Decision Theory and Evidential Decision Theory conflict and (b) Causal Decision Theory and the EPR statistics conflict. We infer that anyone who fully accepts any of these three hypotheses has strong reasons to reject Causal Decision Theory. Finally, we extend the original construction to show that anyone who gives any of the three hypotheses any non-zero credence has strong reasons to reject Causal Decision Theory. However, we concede that no version of the Many Worlds Interpretation (Vaidman, in Zalta, E.N. (ed.), Stanford Encyclopaedia of Philosophy 2014) gives rise to the conflicts that we point out.
According to Leibniz’s infinite-analysis account of contingency, any derivative truth is contingent if and only if it does not admit of a finite proof. Following a tradition that goes back at least as far as Bertrand Russell, several interpreters have been tempted to explain this biconditional in terms of two other principles: first, that a derivative truth is contingent if and only if it contains infinitely complex concepts and, second, that a derivative truth contains infinitely complex concepts if and only if it does not admit of a finite proof. A consequence of this interpretation is that Leibniz’s infinite-analysis account of contingency falls prey to Robert Adams’s Problem of Lucky Proof. I will argue that this interpretation is mistaken and that, once it is properly understood how the idea of an infinite proof fits into Leibniz’s circle of modal notions, the problem of lucky proof simply disappears.
This paper develops an argument against causal decision theory. I formulate a principle of preference, which I call the Guaranteed Principle. I argue that the preferences of rational agents satisfy the Guaranteed Principle, that the preferences of agents who embody causal decision theory do not, and hence that causal decision theory is false.
In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with backgrounds in philosophy, maternity care practice and clinical research, draw upon and extend a recent framework for shared decision-making (SDM) that identified a duty of care to the client's knowledge as a necessary condition for SDM. This duty entails the responsibility to acknowledge and overcome epistemic defeaters. This framework is applied to the use of AI in maternity care, in particular, the use of machine learning and deep learning technology to attempt to enhance electronic fetal monitoring (EFM). In doing so, various sub-kinds of epistemic defeater, namely, transparent, opaque, underdetermined, and inherited defeaters are taxonomized and discussed. The authors argue that, although effective current or future AI-enhanced EFM may impose an epistemic obligation on the part of clinicians to rely on such systems' predictions or diagnoses as input to SDM, such obligations may be overridden by inherited defeaters, caused by a form of algorithmic bias. The existence of inherited defeaters implies that the duty of care to the client's knowledge extends to any situation in which a clinician (or anyone else) is involved in producing training data for a system that will be used in SDM. Any future AI must be capable of assessing women individually, taking into account a wide range of factors including women's preferences, to provide a holistic range of evidence for clinical decision-making.
According to orthodox causal decision theory, performing an action can give you information about factors outside of your control, but you should not take this information into account when deciding what to do. Causal decision theorists caution against an irrational policy of 'managing the news'. But, by providing information about factors outside of your control, performing an act can give you two, importantly different, kinds of good news. It can tell you that the world in which you find yourself is good in ways you can't control, and it can also tell you that the act itself is in a position to make the world better. While the first kind of news does not speak in favor of performing an act, I believe that the second kind of news does. I present a revision of causal decision theory which advises you to manage the news about the good you stand to promote, while ignoring news about the good the world has provided for you.
The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued, first, that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are incomplete. Second, it will be argued that when we attempt to incorporate the knowledge of such causal connections into Bayesian decision theory, a substantial technical problem arises for which there is no currently available solution that does not suffer from some damning objection or other. From a broader perspective, this then throws into question the use of decision theory as a model of human or machine planning.
Responsibility is impossible because there is no responsibility-maker and there needs to be one if people are morally responsible. The two most plausible candidates, psychology and decision, fail. A person is not responsible for an unchosen psychology or a psychology that was chosen when the person is not responsible for the choice. This can be seen in intuitions about instantly-created and manipulated people. This result is further supported by the notion that, in general, the right, the good, and virtue rest on the exercise of a capacity rather than the capacity itself. It is also supported by the notion that negligence is not a responsibility-maker. A person is not responsible for a choice that does not reflect his psychology or that does reflect it when he is not responsible for the psychology. This can be seen by considering intuitions regarding acts that are unconnected or arbitrarily connected to a person’s psychology. It can also be seen in intuitions about acts that result from a manipulated psychology. The problem with choice as a foundation can be further seen in that an infinite or self-created person would not be responsible despite these superhuman choice-related features.
The standard formulation of Newcomb's problem compares evidential and causal conceptions of expected utility, with those maximizing evidential expected utility tending to end up far richer. Thus, in a world in which agents face Newcomb problems, the evidential decision theorist might ask the causal decision theorist: "if you're so smart, why ain’cha rich?” Ultimately, however, the expected riches of evidential decision theorists in Newcomb problems do not vindicate their theory, because their success does not generalize. Consider a theory that allows agents who employ it to end up rich in worlds containing Newcomb problems and to continue to outperform in other cases. This type of theory, which I call a “success-first” decision theory, is motivated by the desire to draw a tighter connection between rationality and success, rather than to support any particular account of expected utility. The primary aim of this paper is to provide a comprehensive justification of success-first decision theories as accounts of rational decision. I locate this justification in an experimental approach to decision theory supported by the aims of methodological naturalism.
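The divergence between the two conceptions of expected utility discussed in the abstract above can be made concrete with a toy calculation. This sketch is not from the paper; the payoff amounts and the 0.99 predictor accuracy are illustrative assumptions:

```python
# Toy Newcomb problem (all figures are assumptions for illustration).
MILLION, THOUSAND = 1_000_000, 1_000
ACC = 0.99  # assumed predictor accuracy

# Evidential expected utility: condition the prediction on your own act.
edt_one_box = ACC * MILLION + (1 - ACC) * 0
edt_two_box = ACC * THOUSAND + (1 - ACC) * (MILLION + THOUSAND)

# Causal expected utility: the prediction is already fixed, with prior p,
# and your act cannot change it.
def cdt(p_predicted_one_box, one_box):
    if one_box:
        return p_predicted_one_box * MILLION
    return (p_predicted_one_box * (MILLION + THOUSAND)
            + (1 - p_predicted_one_box) * THOUSAND)

p = 0.5  # any prior over the fixed prediction gives the same ranking
assert edt_one_box > edt_two_box        # EDT recommends one-boxing
assert cdt(p, False) > cdt(p, True)     # CDT recommends two-boxing
```

Two-boxing causally dominates for every fixed prediction (it adds $1,000 either way), yet one-boxers tend to walk away richer, which is exactly the "why ain’cha rich?" tension the abstract describes.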
This paper explores the idea that a semantics for ‘ought’ should be neutral between different ways of deciding what an agent ought to do in a situation. While the idea is, I argue, well-motivated, taking it seriously leads to surprising, even paradoxical, problems for theorizing about the meaning of ‘ought’. This paper describes and defends one strategy—a form of Expressivism for the modal ‘ought’—for navigating these problems.
A proof is pure, roughly, if it draws only on what is “close” or “intrinsic” to the statement being proved. The infinitude of prime numbers, a classical theorem of arithmetic, is a rich case study for philosophical investigation of purity. Two different proofs of this result are considered, namely the classical Euclidean proof and a more recent “topological” proof by Furstenberg. Naively the former would seem to be pure and the latter to be impure. Objections to these naive views are considered and met. In the case of the former the issue rests on logical matters, specifically the arithmetic definability of addition in terms of successor and divisibility shown by Julia Robinson, while in the case of the latter the issue rests on semantic matters, specifically with respect to what is “contained” in the content of particular statements.
Decision theory requires agents to assign probabilities to states of the world and utilities to the possible outcomes of different actions. When agents commit to having the probabilities and/or utilities in a decision problem defined by objective features of the world, they may find themselves unable to decide which actions maximize expected utility. Decision theory has long recognized that work-around strategies are available in special cases; this is where dominance reasoning, minimax, and maximin play a role. Here we describe a different work-around, wherein a rational decision about one decision problem can be reached by “interpolating” information from another problem that the agent believes has already been rationally solved.
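One of the classical work-arounds named in the abstract above, maximin, can be sketched in a few lines. The payoff table is invented for illustration, and the paper's own "interpolation" proposal is not implemented here:

```python
# Maximin: when probabilities over states are unavailable, choose the
# act whose worst-case outcome is best. Payoffs are illustrative only.
payoffs = {                      # act -> {state: utility}
    "umbrella":    {"rain": 5, "sun": 3},
    "no_umbrella": {"rain": 0, "sun": 6},
}

def maximin(payoffs):
    # Rank each act by its minimum utility across states.
    return max(payoffs, key=lambda act: min(payoffs[act].values()))

assert maximin(payoffs) == "umbrella"  # worst case 3 beats worst case 0
```

Note that no probability assignment is needed: only the ordinal comparison of worst cases matters, which is precisely why maximin is available when expected utility is not defined.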
This paper describes a decision procedure for disjunctions of conjunctions of anti-prenex normal forms of pure first-order logic (FOLDNFs) that do not contain ∨ within the scope of quantifiers. The disjuncts of these FOLDNFs are equivalent to prenex normal forms whose quantifier-free parts are conjunctions of atomic and negated atomic formulae (= Herbrand formulae). In contrast to the usual algorithms for Herbrand formulae, neither skolemization nor unification algorithms with function symbols are applied. Instead, a procedure is described that rests on nothing but equivalence transformations within pure first-order logic (FOL). This procedure involves the application of a calculus for negative normal forms (the NNF-calculus) with A ⊣⊢ A & A (= &I) as the sole rule that increases the complexity of given FOLDNFs. The described algorithm illustrates how, in the case of Herbrand formulae, decision problems can be solved through a systematic search for proofs that reduce the number of applications of the rule &I to a minimum in the NNF-calculus. In the case of Herbrand formulae, it is even possible to abstain entirely from applying &I. Finally, it is shown how the described procedure can be used within an optimized general search for proofs of contradiction and what kinds of questions arise for a &I-minimal proof strategy in the case of a general search for proofs of contradiction.
2nd edition. Many-valued logics are those logics that have more than the two classical truth values, to wit, true and false; in fact, they can have from three to infinitely many truth values. This property, together with truth-functionality, provides a powerful formalism to reason in settings where classical logic—as well as other non-classical logics—is of no avail. Indeed, originally motivated by philosophical concerns, these logics soon proved relevant for a plethora of applications ranging from switching theory to cognitive modeling, and they are today in more demand than ever, due to the realization that inconsistency and vagueness in knowledge bases and information processes are not only inevitable and acceptable, but also perhaps welcome. The main modern applications of (any) logic are to be found in the digital computer, and we thus require practical knowledge of how to computerize—which also means automate—decisions (i.e. reasoning) in many-valued logics. This, in turn, necessitates a mathematical foundation for these logics. This book provides both this mathematical foundation and the practical knowledge in a rigorous, yet accessible, text, while at the same time situating these logics in the context of the satisfiability problem (SAT) and automated deduction. The main text is complemented with a large selection of exercises, a plus for the reader wishing to not only learn about, but also do something with, many-valued logics.