The INBIOSA project brings together a group of experts across many disciplines who believe that science requires a revolutionary transformative step in order to address many of the vexing challenges presented by the world. It is INBIOSA’s purpose to enable the focused collaboration of an interdisciplinary community of original thinkers. This paper sets out the case for support for this effort. The focus of the transformative research program proposal is biology-centric. We admit that biology to date has been more fact-oriented and less theoretical than physics. However, the key leverageable idea is that careful extension of the science of living systems can be more effectively applied to some of our most vexing modern problems than the prevailing scheme, derived from abstractions in physics. While these have some universal application and demonstrate computational advantages, they are not theoretically mandated for the living. A new set of mathematical abstractions derived from biology can now be similarly extended. This is made possible by leveraging new formal tools to understand abstraction and enable computability. [The latter has a much expanded meaning in our context from the one known and used in computer science and biology today, that is "by rote algorithmic means", since it is not known if a living system is computable in this sense (Mossio et al., 2009).] Two major challenges constitute the effort. The first challenge is to design an original general system of abstractions within the biological domain. The initial issue is descriptive leading to the explanatory. There has not yet been a serious formal examination of the abstractions of the biological domain. What is used today is an amalgam; much is inherited from physics (via the bridging abstractions of chemistry) and there are many new abstractions from advances in mathematics (incentivized by the need for more capable computational analyses).
Interspersed are abstractions, concepts and underlying assumptions “native” to biology and distinct from the mechanical language of physics and computation as we know them. A pressing agenda should be to single out the most concrete and at the same time the most fundamental process-units in biology and to recruit them into the descriptive domain. Therefore, the first challenge is to build a coherent formal system of abstractions and operations that is truly native to living systems. Nothing will be thrown away, but many common methods will be philosophically recast, just as in physics relativity subsumed and reinterpreted Newtonian mechanics. This step is required because we need a comprehensible, formal system to apply in many domains. Emphasis should be placed on the distinction between multi-perspective analysis and synthesis and on what could be the basic terms or tools needed. The second challenge is relatively simple: the actual application of this set of biology-centric ways and means to cross-disciplinary problems. In its early stages, this will seem to be a “new science”. This White Paper sets out the case for continuing Information and Communication Technology (ICT) support for transformative research in biology and information processing centered on paradigm changes in the epistemological, ontological, mathematical and computational bases of the science of living systems. Today, curiously, living systems cannot be said to be anything more than dissipative structures organized internally by genetic information. There is not anything substantially different from abiotic systems other than the empirical nature of their robustness. We believe that there are other new and unique properties and patterns comprehensible at this bio-logical level. The report lays out a fundamental set of approaches to articulate these properties and patterns, and is composed as follows.
Sections 1 through 4 (preamble, introduction, motivation and major biomathematical problems) are incipient. Section 5 describes the issues affecting Integral Biomathics, and Section 6 the aspects of the Grand Challenge we face with this project. Section 7 contemplates the effort to formalize a General Theory of Living Systems (GTLS) from what we have today. The goal is to have a formal system, equivalent to that which exists in the physics community. Here we define how to perceive the role of time in biology. Section 8 describes the initial efforts to apply this general theory of living systems in many domains, with special emphasis on cross-disciplinary problems and multiple domains spanning both “hard” and “soft” sciences. The expected result is a coherent collection of integrated mathematical techniques. Section 9 discusses the first two test cases, project proposals, of our approach. They are designed to demonstrate the ability of our approach to address “wicked problems” which span physics, chemistry, biology, societies and societal dynamics. The solutions require integrated measurable results at multiple levels, known as “grand challenges” to existing methods. Finally, Section 10 issues an appeal for action, advocating the necessity for further long-term support of the INBIOSA program. The report concludes with a preliminary, non-exclusive list of challenging research themes to address, as well as required administrative actions. The efforts described in the ten sections of this White Paper will proceed concurrently. Collectively, they describe a program that can be managed and measured as it progresses.
Suppositions can be introduced in either the indicative or subjunctive mood. The introduction of either type of supposition initiates judgments that may be either qualitative, binary judgments about whether a given proposition is acceptable, or quantitative, numerical ones about how acceptable it is. As such, accounts of qualitative/quantitative judgment under indicative/subjunctive supposition have been developed in the literature. We explore these four different types of theories by systematically explicating the relationships among canonical representatives of each. Our representative qualitative accounts of indicative and subjunctive supposition are based on the belief change operations provided by AGM revision and KM update respectively; our representative quantitative ones are offered by conditionalization and imaging. This choice is motivated by the familiar approach of understanding supposition as `provisional belief revision' wherein one temporarily treats the supposition as true and forms judgments by making appropriate changes to their other opinions. To compare the numerical judgments recommended by the quantitative theories with the binary ones recommended by the qualitative accounts, we rely on a suitably adapted version of the Lockean thesis. Ultimately, we establish a number of new results that we interpret as vindicating the often-repeated claim that conditionalization is a probabilistic version of revision, while imaging is a probabilistic version of update.
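The contrast between the two quantitative operations can be made concrete in a toy model. The following sketch is not drawn from the paper itself: the four-world space, the prior, and the Hamming-distance similarity ordering used by imaging are all illustrative assumptions. Conditionalization discards the worlds where the supposition fails and renormalizes; imaging instead ships each world's probability mass to its most similar supposition-world.

```python
# Toy contrast between conditionalization and imaging (illustrative only;
# the worlds, prior, and similarity ordering are assumptions, not the paper's).
from itertools import product

worlds = list(product([0, 1], repeat=2))          # pairs (rain, cold)
prior = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}
A = {w for w in worlds if w[0] == 1}              # supposition: "rain"

def conditionalize(p, A):
    """Keep only A-worlds and renormalize their probabilities."""
    total = sum(p[w] for w in A)
    return {w: (p[w] / total if w in A else 0.0) for w in p}

def image(p, A, dist):
    """Move each world's mass to its closest A-world under dist."""
    new = {w: 0.0 for w in p}
    for w, mass in p.items():
        closest = min(A, key=lambda a: dist(w, a))
        new[closest] += mass
    return new

hamming = lambda u, v: sum(x != y for x, y in zip(u, v))

cond = conditionalize(prior, A)   # (1,0): 2/3, (1,1): 1/3
img = image(prior, A, hamming)    # (1,0): 0.6, (1,1): 0.4
```

The two posteriors differ even on this tiny space: conditionalization weights the surviving worlds by their prior ratio, while imaging preserves each non-A world's mass intact, routing it by similarity.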
By eliminating the need for an absolute frame of reference or ether, Einstein resolved the problem of the constancy of light-speed in all inertial frames but created a new problem in our understanding of time. The resolution of this problem requires no experimentation but only a careful analysis of special relativity, in particular the relativity of simultaneity. This concept is insufficiently relativistic insofar as Einstein failed to recognize that any given set of events privileges the frame in which the events occur; relative to those events, only the privileged frame yields the correct measurement. Instead of equally valid frames occupying different times, one frame is correct and all others incorrect within a shared present moment. I conclude that (1) time is a succession of universal moments and (2) in the context of flowing time, time dilation requires absolute simultaneity, whereas relative simultaneity predicts a nonexistent phenomenon here dubbed time regression.
It is a widespread intuition that the coherence of independent reports provides a powerful reason to believe that the reports are true. Formal results by Huemer (1997), Olsson (2002, 2005), and Bovens and Hartmann (2003) prove that, under certain conditions, coherence cannot increase the probability of the target claim. These formal results, known as ‘the impossibility theorems’, have been widely discussed in the literature. They are taken to have significant epistemic upshot. In particular, they are taken to show that reports must first individually confirm the target claim before the coherence of multiple reports offers any positive confirmation. In this paper, I dispute this epistemic interpretation. The impossibility theorems are consistent with the idea that the coherence of independent reports provides a powerful reason to believe that the reports are true even if the reports do not individually confirm prior to coherence. Once we see that the formal discoveries do not have this implication, we can recover a model of coherence justification consistent with Bayesianism and these results. This paper, thus, seeks to turn the tide of the negative findings for coherence reasoning by defending coherence as a unique source of confirmation.
When you and I seriously argue over whether a man of seventy is old enough to count as an "old man", it seems that we are appealing neither to our own separate standards of oldness nor to a common standard that is already fixed in the language. Instead, it seems that both of us implicitly invoke an ideal, shared standard that has yet to be agreed upon: the place where we ought to draw the line. As with other normative standards, it is hard to know whether such borderlines exist prior to our coming to agree on where they are. But epistemicists plausibly argue that they must exist whether we ever agree on them or not, as this provides the only logically acceptable response to the sorites paradox. This paper argues that such boundaries do typically exist as hypothetical ideals, but not as determinate features of the present actual world. There is in fact no general solution to the paradox, but attention to practice in resolving vague disagreements shows that its instances can be dealt with separately, as they arise, in many reasonable ways.
Ted Poston's Reason and Explanation: A Defense of Explanatory Coherentism is a book worthy of careful study. Poston develops and defends an explanationist theory of (epistemic) justification on which justification is a matter of explanatory coherence, which in turn is a matter of conservativeness, explanatory power, and simplicity. He argues that his theory is consistent with Bayesianism. He argues, moreover, that his theory is needed as a supplement to Bayesianism. There are seven chapters. I provide a chapter-by-chapter summary along with some substantive concerns.
The problem of biological memory led Russell to propose the existence of mnemic causation, a mechanism by which past experience influences current thought directly, that is, without the need for a material intermediary in the form of a neural "memory trace." Russell appears to have been inspired by German biologist Richard Semon's concept of mnemic homophony, which conveys memory to consciousness on the basis of similarity between current and past circumstances. Semon, however, in no way denied a role for stable neural structures or "engrams" in the memory process. In contrast to Russell, contemporary biologist Rupert Sheldrake provides a viable theory of memory by remaining true to Semon's original idea and expanding it beyond personal memory to the collective memory of species, encompassing not only mental but developmental memory.
Because an unmeasured quantum system consists of information — neither tangible existence nor its complete absence — no property can be assigned a definite value, only a range of likely values should it be measured. The instantaneous transition from information to matter upon measurement establishes a gradient between being and not-being. A quantum system enters a determinate state in a particular moment until this moment is past, at which point the system resumes its default state as an evolving superposition of potential values of properties, neither strictly being nor not-being. Like a “self-organized” chemical system that derives energy from breaking down environmental gradients, a quantum system derives information from breaking down the ontological gradient. An organism is a body in the context of energy and a mind in the context of information.
Prerequisite to memory is a past distinct from present. Because wave evolution is both continuous and time-reversible, the undisturbed quantum system lacks a distinct past and therefore the possibility of memory. With the quantum transition, a reversibly evolving superposition of values yields to an irreversible emergence of definite values in a distinct and transient moment of time. The succession of such moments generates an irretrievable past and thus the possibility of memory. Bohm’s notion of implicate and explicate order provides a conceptual basis for memory as a general feature of nature akin to gravity and electromagnetism. I propose that natural memory is an outcome of the continuity of implicate time in the context of discontinuous explicate time. Among the ramifications of natural memory are that laws of nature can propagate through time much like habits and that personal memory does not require neural information storage.
Though Einstein explained time dilation without recourse to a universal frame of reference, he erred by abolishing universal present moments. Relative simultaneity is insufficiently relativistic insofar as it depends on the absolute equality of reference frames in the measurement of the timing of events. Yet any given set of events privileges the frame in which the events take place. Relative to those events, the privileged frame yields the correct measurement of their timing while all other frames yield an incorrect measurement. Instead of multiple frames occupying multiple times, one frame is correct and all others incorrect within a shared present moment. With the collapse of relative simultaneity, we may regard time as a succession of universal moments. Absolute simultaneity, in turn, explains why an accelerated inertial frame dilates in time rather than regressing to a prior moment relative to non-accelerated frames. In the context of flowing time, absolute simultaneity predicts time dilation while relative simultaneity predicts time regression. Einstein's explanation of time dilation is therefore incomplete.
The foundation of irreversible, probabilistic time -- the classical time of conscious observation -- is the reversible and deterministic time of the quantum wave function. The tendency in physics is to regard time in the abstract, a mere parameter devoid of inherent direction, implying that a concept of real time begins with irreversibility. In reality time has no need for irreversibility, and every invocation of time implies becoming or flow. Neither symmetry under time reversal, of which Newton was well aware, nor the absence of an absolute parameter, as in relativity, negates temporal passage. Far from encapsulating time, irreversibility is a secondary property dependent on the emergence of distinct moments from the ceaseless presence charted by the wave function.
Why are we conscious? What does consciousness enable us to do that cannot be done by zombies in the dark? This paper argues that introspective consciousness probably co-evolved as a "spandrel" along with our more useful ability to represent the mental states of other people. The first part of the paper defines and motivates a conception of consciousness as a kind of "double vision" – the perception of how things seem to us as well as what they are – along lines recently explored by Peter Carruthers. The second part explains the basic socioepistemic function of consciousness and suggests an evolutionary pathway to this cognitive capacity that begins with predators and prey. The third part discusses the relevance of these considerations to the traditional problem of other minds.
The coherence of independent reports provides a strong reason to believe that the reports are true. This plausible claim has come under attack from recent results in Bayesian epistemology. Huemer (1997), Olsson (2002, 2005), and Bovens and Hartmann (2003) prove that, under certain probabilistic conditions, coherence cannot increase the probability of the target claim. These results are taken to demonstrate that epistemic coherentism is untenable. To date no one has investigated how these Bayesian results bear on different conceptions of coherence. In this paper, I investigate these Bayesian results by using Paul Thagard’s ECHO model of explanatory coherence (Thagard (2000)). Thagard’s ECHO model provides a natural representation of the evidential significance of multiple independent reports. The ECHO model, in contrast to the Bayesian models, captures the power of coherence in a witness scenario. The conditions that Bayesian models found to be impossible, ECHO models naturally accommodate. This demonstrates that there are different formal tools for representing coherence. I close with a discussion of the differences between the Bayesian model and the ECHO model.
This paper articulates a way to ground a relatively high prior probability for grand explanatory theories apart from an appeal to simplicity. I explore the possibility of enumerating the space of plausible grand theories of the universe by using the explanatory properties of possible views to limit the number of plausible theories. I motivate this alternative grounding by showing that Swinburne’s appeal to simplicity is problematic along several dimensions. I then argue that there are three plausible grand views—theism, atheism, and axiarchism—which satisfy explanatory requirements for plausibility. Other possible views lack the explanatory virtue of these three theories. Consequently, this explanatory grounding provides a way of securing a non-trivial prior probability for theism, atheism, and axiarchism. An important upshot of my approach is that a modest amount of empirical evidence can bear significantly on the posterior probability of grand theories of the universe.
Ted Honderich's theory of consciousness as existence, which he here calls Radical Externalism, starts with a good phenomenological observation: that perceptual experience appears to involve external things being immediately present to us. As P.F. Strawson once observed, when asked to describe my current perceptual state, it is normally enough simply to describe the things around me (Strawson, 1979, p. 97). But in my view that does not make the whole theory plausible.
Book review of Earl Conee and Ted Sider's "Riddles of Existence: A Guided Tour of Metaphysics", written in Spanish.
Bradford Hill (1965) highlighted nine aspects of the complex evidential situation a medical researcher faces when determining whether a causal relation exists between a disease and various conditions associated with it. These aspects are widely cited in the literature on epidemiological inference as justifying an inference to a causal claim, but the epistemological basis of the Hill aspects is not understood. We offer an explanatory coherentist interpretation, explicated by Thagard's ECHO model of explanatory coherence. The ECHO model captures the complexity of epidemiological inference and provides a tractable model for inferring disease causation. We apply this model to three cases: the inference of a causal connection between the Zika virus and birth defects, the classic inference that smoking causes cancer, and John Snow’s inference about the cause of cholera.
Perhaps no one has written more extensively, more deeply, and more insightfully about determinism and freedom than Ted Honderich. His influence and legacy with regard to the problem of free will—or the determinism problem, as he prefers to frame it—looms large. In these comments I would like to focus on three main aspects of Honderich’s work: his defense of determinism and its consequences for origination and moral responsibility; his concern that the truth of determinism threatens and restricts, but does not eliminate, our life-hopes; and his attack on the traditional justifications for punishment. In many ways, I see my own defense of free will skepticism as the natural successor to Honderich’s work. There are, however, some small differences between us. My goal in this paper is to clarify our areas of agreement and disagreement and to acknowledge my enormous debt to Ted. If I can also move him toward my own more optimistic brand of free will skepticism that would be great too.
This paper raises questions concerning Ted Morris’ interpretation of Hume’s notion of meaning and investigates the private and public aspects of Hume’s notion of meaning.
ABSTRACT: BACKGROUND: The governments and citizens of the developed nations are increasingly called upon to contribute financially to health initiatives outside their borders. Although international development assistance for health has grown rapidly over the last two decades, austerity measures related to the 2008 and 2011 global financial crises may impact negatively on aid expenditures. The competition between national priorities and foreign aid commitments raises important ethical questions for donor nations. This paper aims to foster individual reflection and public debate on donor responsibilities for global health. METHODS: We undertook a critical review of contemporary accounts of justice. We selected theories that: (i) articulate important and widely held moral intuitions; (ii) have had extensive impact on debates about global justice; (iii) represent diverse approaches to moral reasoning; and (iv) present distinct stances on the normative importance of national borders. Due to space limitations we limit the discussion to four frameworks. RESULTS: Consequentialist, relational, human rights, and social contract approaches were considered. Responsibilities to provide international assistance were seen as significant by all four theories and place limits on the scope of acceptable national autonomy. Among the range of potential aid foci, interventions for health enjoyed consistent prominence. The four theories concur that there are important ethical responsibilities to support initiatives to improve the health of the worst off worldwide, but offer different rationales for intervention and suggest different implicit limits on responsibilities. CONCLUSIONS: Despite significant theoretical disagreements, four influential accounts of justice offer important reasons to support many current initiatives to promote global health.
Ethical argumentation can complement pragmatic reasons to support global health interventions and provide an important foundation to strengthen collective action.
Ted Sider argues that nihilism about objects is incompatible with the metaphysical possibility of gunk and takes this point to show that nihilism is flawed. I shall describe one kind of nihilism able to answer this objection. I believe that most of the things we usually encounter do not exist. That is, I take talk of macroscopic objects and macroscopic properties to refer to sets of fundamental properties, which are invoked as a matter of linguistic convention. This view is a kind of nihilism: it rules out the existence of objects; that is, from an ontological point of view, there are no objects. But unlike the moderate nihilism of Mark Heller, Peter van Inwagen and Trenton Merricks that claims that most objects do not exist, I endorse a radical nihilism according to which there are no objects in the world, but only properties instantiated in spacetime. As I will show, radical nihilism is perfectly compatible with the metaphysical possibility of gunk. It is also compatible with the epistemic possibility that we actually live in a gunk world. The objection raised by Ted Sider only applies to moderate nihilism that admits some objects in its ontology.
In the third and final part of his A Theory of Determinism (TD) Ted Honderich addresses the fundamental question concerning “the consequences of determinism.” The critical question he aims to answer is what follows if determinism is true? This question is, of course, intimately bound up with the problem of free will and, in particular, with the question of whether or not the truth of determinism is compatible or incompatible with the sort of freedom required for moral responsibility. It is Honderich’s aim to provide a solution to “the problem of the consequences of determinism” and a key element of this is his articulation and defence of an alternative response to the implications of determinism that collapses the familiar Compatibilist/Incompatibilist dichotomy. Honderich offers us a third way – the response of “Affirmation” (HFY 125-6). Although his account of Affirmation has application and relevance to issues and features beyond freedom and responsibility, my primary concern in this essay will be to examine Honderich’s theory of “Affirmation” as it concerns the free will problem.
I here argue that Ted Sider's indeterminacy argument against vagueness in quantifiers fails. Sider claims that vagueness entails precisifications, but holds that precisifications of quantifiers cannot be coherently described: they will either deliver the wrong logical form to quantified sentences, or involve a presupposition that contradicts the claim that the quantifier is vague. Assuming (as does Sider) that the “connectedness” of objects can be precisely defined, I present a counter-example to Sider's contention, consisting of a partial, implicit definition of the existential quantifier that in effect sets a given degree of connectedness among the putative parts of an object as a condition upon there being something (in the sense in question) with those parts. I then argue that such an implicit definition, taken together with an “auxiliary logic” (e.g., introduction and elimination rules), proves to function as a precisification in just the same way as paradigmatic precisifications of, e.g., “red”. I also argue that with a quantifier that is stipulated as maximally tolerant as to what mereological sums there are, precisifications can be given in the form of truth-conditions of quantified sentences, rather than by implicit definition.
Alessandro Torza argues that Ted Sider’s Lewisian argument against vague existence is insufficient to rule out the possibility of what he calls ‘super-vague existence’, that is, the idea that existence is higher-order vague, for all orders. In this chapter it is argued that the possibility of super-vague existence is ineffective against the conclusion of Sider’s argument since super-vague existence cannot be consistently claimed to be a kind of linguistic vagueness. Torza’s idea of super-vague existence seems to be better suited to model vague existence under the assumption that vague existence is instead a form of ontic indeterminacy, contra what Ted Sider and David Lewis assume.
This special issue of the Canadian Journal of Philosophy is dedicated to Timothy Williamson's work on modality. It consists of a new paper by Williamson followed by papers on Williamson's work on modality, with each followed by a reply by Williamson. Contributors: Andrew Bacon, Kit Fine, Peter Fritz, Jeremy Goodman, John Hawthorne, Øystein Linnebo, Ted Sider, Robert Stalnaker, Meghan Sullivan, Gabriel Uzquiano, Barbara Vetter, Timothy Williamson, Juhani Yli-Vakkuri.
In Writing the Book of the World, Ted Sider argues that David Lewis’s distinction between those predicates which are ‘perfectly natural’ and those which are not can be extended so that it applies to words of all semantic types. Just as there are perfectly natural predicates, there may be perfectly natural connectives, operators, singular terms and so on. According to Sider, one of our goals as metaphysicians should be to identify the perfectly natural words. Sider claims that there is a perfectly natural first-order quantifier. I argue that this claim is not justified. Quine has shown that we can dispense with first-order quantifiers, by using a family of ‘predicate functors’ instead. I argue that we have no reason to think that it is the first-order quantifiers, rather than Quine’s predicate functors, which are perfectly natural. The discussion of quantification is used to provide some motivation for a general scepticism about Sider’s project. Shamik Dasgupta’s ‘generalism’ and Jason Turner’s critique of ‘ontological nihilism’ are also discussed.
Ted Honderich's edited volume, with introductions to his chosen philosophers, shows his contempt for and ignorance of the non-white world's thinkers. Further, this review points out the iterative nature of Western philosophy today. The book under review is banal and shows the pathetic state of philosophising in the West now in 2020.
In the Phaedrus, Socrates sympathetically describes the ability “to cut up each kind according to its species along its natural joints, and to try not to splinter any part, as a bad butcher might do.” (265e) In contemporary philosophy, Ted Sider (2009, 2011) defends the same idea. As I shall put it, Plato and Sider’s idea is that limning structure is an epistemic goal. My aim in this paper is to articulate and defend this idea. First, I’ll articulate the notion of a structural proposition (§1), and the notion of an epistemic goal (§2), where I’ll assume that epistemic goals are species of accuracy. Then (§3), I’ll argue against some proposals for understanding the idea that limning structure is an epistemic goal: limning structure is neither an aim of belief (§3.1), nor of inquiry (§3.2), nor of concept possession (§3.3). Importantly, non-structural belief is not thereby inaccurate; belief does not “aim” at being structural. Next (§4), I’ll propose a framework for understanding the idea that limning structure is an epistemic goal, and defend that idea. What is required, to defend the view that limning structure is an epistemic goal, is the notion of (what I call) theorizing – a propositional attitude that, unlike belief, does “aim” at being structural (§4.1). I’ll argue that structural truths constitute a species of “important” truths (§4.2), and that apt theorizing is a species of understanding (§4.3). Finally (§5), I’ll discuss the possibility that there is no structure.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents:
Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
The path to more general artificial intelligence, Ted Goertzel, pages 343-354
Limitations and risks of machine ethics, Miles Brundage, pages 355-372
Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
Bounding the impact of AGI, András Kornai, pages 417-438
Ethics of brain emulations, Anders Sandberg, pages 439-457
The mereological predicate ‘is part of’ can be used to define the predicate ‘is identical with’. I argue that this entails that mereological theories can be ideologically simpler than nihilistic theories that do not use the notion of parthood—contrary to what has been argued by Ted Sider. Moreover, if one accepts an extensional mereology, there are good philosophical reasons apart from ideological simplicity to give a mereological definition of identity.
I explore the claim that “fictive imagining” – imagining what it is like to be a character – can be morally dangerous. In particular, I consider the controversy over William Styron’s imagining the revolutionary protagonist in his Confessions of Nat Turner. I employ Ted Cohen’s model of fictive imagining to argue, following a generally Kantian line of thought, that fictive imagining can be dangerous if one has the wrong motives. After considering several possible motives, I argue that only internally directed motives can satisfy the moral concern. Finally, I suggest that when one has the right motives, fictive imagining is morally praiseworthy since it improves one’s ability to imagine the lives of others.
Suppose we get a chance to ask an angel a question of our choice. What should we ask to make the most of our unique opportunity? Ned Markosian has shown that the task is trickier than it might seem. Ted Sider has suggested playing safe and asking: What is the true proposition (or one of the true propositions) that would be most beneficial for us to be told? Let's see whether we can do any better than that.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. Contents: 1. Vincent C. Müller, Editorial: Risks of Artificial Intelligence; 2. Steve Omohundro, Autonomous Technology and the Greater Human Good; 3. Stuart Armstrong, Kaj Sotala and Seán Ó hÉigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions – and What They Mean for the Future; 4. Ted Goertzel, The Path to More General Artificial Intelligence; 5. Miles Brundage, Limitations and Risks of Machine Ethics; 6. Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents; 7. Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement; 8. Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence; 9. András Kornai, Bounding the Impact of AGI; 10. Anders Sandberg, Ethics and Impact of Brain Emulations; 11. Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff; 12. Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI.
I argue for the possibility of a proprioceptive art in addition to, for example, visual or auditory arts, where aspects of some martial arts will serve as examples of that art form. My argument is inspired by a thought of Ted Shawn’s, one of the pioneers of American modern dance: "Dance is the only art wherein we ourselves are the stuff in which it is made.” In a first step, I point out that in some practices of martial arts (in the paper I will introduce “hyongs” & “katas”), we are, too, the stuff these performances are made of. Second, I show that we, as martial arts practitioners, are not in the first place visual or auditory observers (as common in painting or music or ballet) but introspective, proprioceptive perceivers of our bodies and their movements. (As a corollary we get that we are, in such a case, necessarily our exclusive audience.) In a third crucial step, I show that the martial arts practices referred to, hyongs & katas, and especially the proprioceptive aspects thereof, can indeed count as art. Thus, proprioceptive art is possible because some practices of some martial arts are actual existing examples of that art form.
Ask most any cognitive scientist working today if a digital computational system could develop aesthetic sensibility and you will likely receive the optimistic reply that this remains an open empirical question. However, I attempt to show, while drawing upon the later Wittgenstein, that the correct answer is in fact available. And it is a negative a priori. It would seem, for example, that recent computational successes in textual attribution, most notably those of Donald Foster (famed finder of Ted Kaczynski a.k.a. “the Unabomber”) speak favorably of the digital model's capacity to overcome the “aspect blindness” handicap in this domain. I argue however that such results are only achievable when rigid input‐to‐output parameters are given, and that this element is precisely what is absent in standard examples of aesthetic judgment. I thus conclude that while the connectionist model anticipated by Turing may provide the best approach for the AI project, its capacity for meeting its own sufficiency requirements is necessarily crippled by its inability to share in what can be generally referred to as the collective engagements of human solidarity.
A novel argument is offered against the following popular condition on inferential knowledge: a person inferentially knows a conclusion only if they know each of the claims from which they essentially inferred that conclusion. The epistemology of conditional proof reveals that we sometimes come to know conditionals by inferring them from assumptions rather than beliefs. Since knowledge requires belief, cases of knowing via conditional proof refute the popular knowledge-from-knowledge condition. The argument also suggests more radical cases against the condition and brings to light the under-recognized category of inferential basic knowledge.
The paper defends the following thesis: the intentionality passage from Brentano’s Psychology from an Empirical Standpoint (1874) can be interpreted from two perspectives: intentionality as the most salient distinguishing feature separating the mental from the physical, and intentionality as a theory of the way in which mental acts, with their contents, are related to extra-mental objects. Fundamentally, the theory of intentionality from 1874 is an example of the former. Its role is that of allowing the establishment of psychology as a science. However, it can also be understood as a theory of intentionality in the second sense through a clarification of the relations it entails between the content and the object of the act. For this reason, it could be said that the act–content–extra-mental object distinction was already achieved in the 1874 work, at least at the level of sensory acts. The distinction between the psychical act, the content, and the object presented through this content was already made in the EL 80 Logik manuscript from 1869/70 at the level of nominal presentation, which provides a further argument for the above thesis.
This paper discusses the extent to which advances in quantum physics can affect ideas of free will and determinism. It questions whether arguments that infer the existence of free will from quantum physics are as valid as they seem. The paper discusses the validity of Searle’s philosophy of mind, Robert Kane’s parallel processing, and Ted Honderich’s near-determinism, as well as dealing with chaos theory, the relationship between ‘randomness’ and ‘unpredictability,’ and Bell’s theorem, discussing how they can be used to answer the question regarding quantum physics and free will. The paper is tentative towards forming any definitive conclusion due to the ambiguity and confusion surrounding quantum physics, but alludes to the idea that quantum randomness detracts not only from universal determinism but also from human free will, thus theorising a form of universal chaos and unpredictability.
Until recently, perdurantism has been considered to be incompatible with the presentist ontology of time. However, discussions about presentist theories of perdurance are now surfacing, one of the most prominent arguments for such a theory being Berit Brogaard’s essay “Presentist Four-Dimensionalism”. In this paper, I examine Brogaard’s argument in contrast to Ted Sider’s arguments for (an Eternalist theory of) the “Stage View”. I then argue for another (and, I think, novel) view of presentist perdurantism, which avoids the problematic consequences that Brogaard’s view faces, and which also successfully solves philosophical puzzles without the difficulties that Sider’s view faces. This view, which I call “Stage View Presentism,” thus seems to be an appealing alternative for presentists who remain impartial between the endurantist and the worm theories of persistence.
Bob Solomon enjoyed humor, a good laugh. He was not a teller and collector of jokes or of humorous stories, as Ted Cohen and Noël Carroll are. He did not cultivate clever witticisms. Rather, his interest was in viewing life’s contingency and absurdity for the humor that can be found there, and the target of this humor was as likely to be himself or his friends as it was to be strangers. Bob also displayed philosophical courage. He once argued before an incredulous audience of philosophers that the Three Stooges are funny, and admitted unashamedly to being a life-long devotee. In the published version of that talk he observes: “few adults in their chosen professions would dare attempt a Stooges gesture at risk of being terminally dismissed, but most men carry the secret knowledge around with them, and, in a wild fit of catharsis, display a tell-tale Stooges gesture when the door closes and the boss is out of view. I only hesitate to suggest that it is one of the most basic bonds between men, and perhaps the fact that it mystifies and sometimes horrifies women is far more elemental than the mere phrase ‘a sense of humor’ could ever suggest”.