At the phenomenal level, consciousness can be described as a singular, unified field of recursive self-awareness, consistently coherent in a particular way: that of a subject located both spatially and temporally in an egocentrically-extended domain, such that conscious self-awareness is explicitly characterized by I-ness, now-ness and here-ness. The psychological mechanism underwriting this spatiotemporal self-locatedness and its recursive processing style involves an evolutionary elaboration of the basic orientative reference frame which consistently structures ongoing spatiotemporal self-location computations as i-here-now. Cognition computes action-output in the midst of ongoing movement, and consequently requires a constant self-locating spatiotemporal reference frame as the basis for these computations. Over time, constant evolutionary pressures for energy efficiency have encouraged both the proliferation of anticipative feedforward processing mechanisms, and the elaboration, at the apex of the sensorimotor processing hierarchy, of self-activating, highly attenuated recursively-feedforward circuitry processing the basic orientational schema independent of external action output. As the primary reference frame of active waking cognition, this recursive i-here-now processing generates a zone of subjective self-awareness in terms of which it feels like something to be oneself here and now. This is consciousness.
Natural recursion in syntax is recursion by linguistic value, which is not syntactic in nature but semantic. Syntax-specific recursion is not recursion by name as the term is understood in theoretical computer science. Recursion by name is probably not natural because of its infinite typeability. Natural recursion, or recursion by value, is not species-specific. Human recursion is not syntax-specific. The values on which it operates are most likely domain-specific, including those for syntax. Syntax seems to require no more (and no less) than the resource management mechanisms of an embedded push-down automaton (EPDA). We can conceive of the EPDA as a common automata-theoretic substrate for syntax, collaborative planning, i-intentions, and we-intentions. They manifest the same kind of dependencies. Therefore, syntactic uniqueness arguments for human behavior can be better explained if we conceive of automata-constrained recursion as the uniquely human capacity for cognitive processes.
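The distinction the abstract leans on, recursion by name versus recursion obtained at the level of values, can be loosely illustrated in code. This is a sketch under the standard functional-programming reading of those terms, not the author's formalism; the function names are illustrative:

```python
# Recursion "by name": the function refers to itself through its own name.
def fact_by_name(n):
    return 1 if n == 0 else n * fact_by_name(n - 1)

# Recursion "by value": self-reference is achieved through a value
# (a call-by-value fixed-point combinator, the Z combinator) rather
# than through any function name.
def fix(f):
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# The step function never mentions itself; `rec` is supplied as a value.
fact_by_value = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact_by_name(5), fact_by_value(5))  # both compute 5! = 120
```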
Recent advances in neuroscience open a wider realm for philosophy, one that includes the science of the Darwinian-evolved computational brain: our inner-world-producing organ, a non-recursive super-Turing machine combining 100 billion synapsing-neuron DNA-computers based on the genetic code. The whole system is a logos machine offering a world map for global context, essential for our intentional grasp of opportunities. We start from the observable contrast between the chaotic universe and our orderly inner world, the noumenal cosmos. So far, philosophy has been rehearsing our thoughts, our human-internal world, a grand painting of the outer world, how we comprehend our experience subjectively, as worked up by the logos machine. Now we seek a wider horizon: how humans understand the world thanks to Darwinian evolution, adapting in response to the metaphysical gap, the chasm between the human animal and its environment, which shapes the organism so it can deal with its variable world. This new horizon embraces global context coded in neural structures that support the noumenal cosmos, our inner mental world, for us as denizens of the outer environment. Kant’s inner and outer senses are fundamental ingredients of scientific philosophy. Several sections are devoted to Heidegger: his lizard example is debunked, but his version of the metaphysical gap and his doctrine of the logos are praised. Rorty and others of the behaviorist school are also discussed.
The recursive aspect of process reliabilism has rarely been examined. The regress puzzle, which illustrates infinite regress arising from the combination of the recursive structure and the no-defeater condition incorporated into it, is a valuable exception. However, this puzzle can be dealt with in the framework of process reliabilism by reconsidering the relationship between the recursion and the no-defeater condition based on the distinction between prima facie and ultima facie justification. Thus, the regress puzzle is not a basis for abandoning process reliabilism. A genuinely intractable problem for recursive reliabilism lies in the gap between the reliability of the entire path to a belief and that of its parts. Confronted with this puzzle, reliabilists can orient themselves toward ‘reliable-as-a-whole reliabilism’ instead of ‘reliable-in-every-part reliabilism’, including recursive reliabilism, which is found to be not well-motivated.
A considerable literature has emerged around the idea of using ‘personal responsibility’ as an allocation criterion in healthcare distribution, where a person's being suitably responsible for their health needs may justify additional conditions on receiving healthcare, and perhaps even limiting access entirely, sometimes known as ‘responsibilisation’. This discussion focuses most prominently, but not exclusively, on ‘luck egalitarianism’, the view that deviations from equality are justified only by suitably free choices. A superficially separate issue in distributive justice concerns the two-way relationship between health and other social goods: deficits in health typically undermine one's abilities to secure advantage in other areas, which in turn often have further negative effects on health. This paper outlines the degree to which this latter relationship between health and other social goods exacerbates an existing problem for proponents of responsibilisation (the ‘harshness objection’) in ways that standard responses to this objection cannot address. Placing significant conditions on healthcare access because of a person's prior responsibility risks trapping them in, or worsening, negative cycles where poor health and associated lack of opportunity reinforce one another, making further poor yet ultimately responsible choices more likely. The paper ends by considering three possible solutions to this problem.
Recursion or self-reference is a key feature of contemporary research and writing in semiotics. The paper begins by focusing on the role of recursion in poststructuralism. It is suggested that much of what passes for recursion in this field is in fact not recursive all the way down. After the paradoxical meaning of radical recursion is adumbrated, topology is employed to provide some examples. The properties of the Moebius strip prove helpful in bringing out the dialectical nature of radical recursion. The Moebius is employed to explore the recursive interplay of terms that are classically regarded as binary opposites: identity and difference, object and subject, continuity and discontinuity, etc. To realize radical recursion in an even more concrete manner, a higher-dimensional counterpart of the Moebius strip is utilized, namely, the Klein bottle. The presentation concludes by enlisting phenomenological philosopher Maurice Merleau-Ponty’s concept of depth to interpret the Klein bottle’s extra dimension.
Transfinite ordinal numbers enter mathematical practice mainly via the method of definition by transfinite recursion. Outside of axiomatic set theory, there is a significant mathematical tradition of works recasting proofs by transfinite recursion in other terms, mostly with the intention of eliminating the ordinals from the proofs. Leaving aside the different motivations behind each specific case, we investigate the mathematics of this practice of proof transformation, and we address the problem of formalising the philosophical notion of elimination which characterises this move.
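The method of definition by transfinite recursion mentioned above has a standard schematic statement, reproduced here for orientation (this is the textbook form, not notation from the paper under discussion):

```latex
% Transfinite recursion theorem: for any class function $G$ there is a
% unique class function $F$ on the ordinals such that, for every
% ordinal $\alpha$,
\[
  F(\alpha) = G\bigl(F \upharpoonright \alpha\bigr),
\]
% i.e. the value at $\alpha$ is determined by the restriction of $F$
% to the ordinals below $\alpha$.
```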
It is quite well-known from Kurt Gödel’s (1931) ground-breaking Incompleteness Theorem that rudimentary relations (i.e., those definable by bounded formulae) are primitive recursive, and that primitive recursive functions are representable in sufficiently strong arithmetical theories. It is also known, though perhaps not as well-known as the former fact, that some primitive recursive relations are not rudimentary. We present a simple and elementary proof of this fact in the first part of the paper. In the second part, we review some possible notions of representability of functions studied in the literature, and give a new proof of the equivalence of weak representability with (strong) representability of functions in sufficiently strong arithmetical theories.
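For orientation, the primitive recursion scheme the abstract invokes can be sketched in code; this is a standard illustration, not material from the paper, and the function names are ours:

```python
# Primitive recursive definitions of addition and multiplication, built
# from the successor function via the primitive recursion scheme:
#   f(m, 0)     = g(m)
#   f(m, n + 1) = h(m, n, f(m, n))
def succ(n):
    return n + 1

def add(m, n):
    # add(m, 0) = m ; add(m, n+1) = succ(add(m, n))
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # mul(m, 0) = 0 ; mul(m, n+1) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(3, 4), mul(3, 4))  # 7 and 12
```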
This article explores the metaphor of Science as provider of sharp images of our environment, using the epistemological framework of Objective Cognitive Constructivism. These sharp images are conveyed by precise scientific hypotheses that, in turn, are encoded by mathematical equations. Furthermore, this article describes how such knowledge is produced by a cyclic and recursive process of development, perfection and reinforcement, leading to the emergence of eigen-solutions characterized by the four essential properties of precision, stability, separability and composability. Finally, this article discusses the role played by ontology and metaphysics in the scientific production process, and in which sense the resulting knowledge can be considered objective.
The previously introduced algorithm SQEMA computes first-order frame equivalents for modal formulae and also proves their canonicity. Here we extend SQEMA with an additional rule based on a recursive version of Ackermann's lemma, which enables the algorithm to compute local frame equivalents of modal formulae in the extension of first-order logic with monadic least fixed-points (MFFO). This computation operates by transforming input formulae into locally frame-equivalent ones in the pure fragment of the hybrid mu-calculus. In particular, we prove that the recursive extension of SQEMA succeeds on the class of ‘recursive formulae’. We also show that a certain version of this algorithm guarantees the canonicity of the formulae on which it succeeds.
The component structures of two distinct neuropsychological systems are described. "System-Y" depends upon "system-X", which, on the other hand, can operate independently of system-Y. System-X provides a matrix upon which system-Y must operate, and system-Y is transformed by the operations of system-X. In addition, these neuropsychological structures reverberate in political history and in the cosmos. The most fundamental structure in the soul, in society, and in the cosmos has the form of a conical spiral. It can be described mathematically as a harmonic system and mythologically in terms of the birth, marriage and death of the divine-king. System-Y corresponds to the neocortex. System-X corresponds to the region below the cortex, including the limbic brain. Many of the essential structures of system-Y are captured by Plato's image of the helmsman: orientation, locomotion, manual control and dexterity, visual guidance, verbal command, intention, and volition. System-X, on the other hand, has been systematically neglected by Western culture, beginning with Plato. The dissertation builds upon Yakovlev's distinction between "teleokinesis," "ectokinesis" and "endokinesis." "Teleokinesis" refers to goal-directed action in external space, and belongs to system-Y. Ectokinesis and endokinesis are components of system-X. "Endokinesis" refers to movements within the body. "Ectokinesis" refers to the emotions, which are expressions of endokinesis. The mechanics of ectokinesis is that of a harmonic or vibrational system, as is the mechanics of a musical instrument. Within a trance, the structure of system-Y is temporarily altered, such that system-X enters the foreground of awareness. Because ectokinesis is analogous to music, cultures inspired by trance experience understand the universe in terms of music. The structure of ectokinesis in a trance is that of a conical spiral.
Plato inherited the mystical traditions of the ancient Near East but replaced the spiral of emotions with a spiral of ideas, by understanding the axis of the cone as the paradigmatic dimension, and the angular rotation of the spiral as the syntagmatic dimension, of language. Finally, I explain how the mythology and political structure of theocratic society imitated the neuropsychological structures of trance experience.
By formalizing some classical facts about provably total functions of intuitionistic primitive recursive arithmetic (iPRA), we prove that, for iPRA and for iΣ1+ (intuitionistic Σ1-induction in the language of PRA), the set of decidable formulas coincides with the set of provably Δ1-formulas and with the set of provably atomic formulas. By the same methods, we give another proof of a theorem of Marković and De Jongh: the decidable formulas of HA are its provably Δ1-formulas.
Textbook on Gödel’s incompleteness theorems and computability theory, based on the Open Logic Project. Covers recursive function theory, arithmetization of syntax, the first and second incompleteness theorems, models of arithmetic, second-order logic, and the lambda calculus.
There is no uniquely standard concept of an effectively decidable set of real numbers or real n-tuples. Here we consider three notions: decidability up to measure zero [M.W. Parker, Undecidability in Rn: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382], which we abbreviate d.m.z.; recursive approximability [or r.a.; K.-I. Ko, Complexity Theory of Real Functions, Birkhäuser, Boston, 1991]; and decidability ignoring boundaries [d.i.b.; W.C. Myrvold, The decision problem for entanglement, in: R.S. Cohen et al. (Eds.), Potentiality, Entanglement, and Passion-at-a-Distance: Quantum Mechanical Studies for Abner Shimony, Vol. 2, Kluwer Academic Publishers, Great Britain, 1997, pp. 177–190]. Unlike some others in the literature, these notions apply not only to certain nice sets, but to general sets in Rn and other appropriate spaces. We consider some motivations for these concepts and the logical relations between them. It has been argued that d.m.z. is especially appropriate for physical applications, and on Rn with the standard measure, it is strictly stronger than r.a. [M.W. Parker, Undecidability in Rn: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382]. Here we show that this is the only implication that holds among our three decidabilities in that setting. Under arbitrary measures, even this implication fails. Yet for intervals of non-zero length, and more generally, convex sets of non-zero measure, the three concepts are equivalent.
Mental Maps. Ben Blumson - 2012 - Philosophy and Phenomenological Research 85 (2):413-434.
It's often hypothesized that the structure of mental representation is map-like rather than language-like. The possibility arises as a counterexample to the argument from the best explanation of productivity and systematicity to the language of thought hypothesis—the hypothesis that mental structure is compositional and recursive. In this paper, I argue that the analogy with maps does not undermine the argument, because maps and language have the same kind of compositional and recursive structure.
This article addresses three questions about well-being. First, is well-being future-sensitive? I.e., can present well-being depend on future events? Second, is well-being recursively dependent? I.e., can present well-being depend on itself? Third, can present and future well-being be interdependent? The third question combines the first two, in the sense that a yes to it is equivalent to yeses to both the first and second. To do justice to the diverse ways we contemplate well-being, I consider our thought and discourse about well-being in three domains: everyday conversation, social science, and philosophy. This article’s main conclusion is that we must answer the third question with no. Present and future well-being cannot be interdependent. The reason, in short, is that a theory of well-being that countenances both future-sensitivity and recursive dependence would have us understand a person’s well-being at a time as so intricately tied to her well-being at other times that it would not make sense to consider her well-being an aspect of her state at particular times. It follows that we must reject either future-sensitivity or recursive dependence. I ultimately suggest, especially in light of arguments based on assumptions of empirical research on well-being, that the balance of reasons favors rejecting future-sensitivity.
Neuroscience has studied deductive reasoning over the last 20 years under the assumption that deductive inferences are not only de jure but also de facto distinct from other forms of inference. The objective of this research is to verify whether logically valid deductions leave any cerebral electrical trait that is distinct from the trait left by non-valid deductions. 23 subjects with an average age of 20.35 years were registered with MEG and placed into a two-condition paradigm (100 trials for each condition) in which both conditions presented exactly the same relational complexity (same variables and content) but distinct logical complexity. Both conditions show the same electromagnetic components (P3, N4) in the early temporal window (250–525 ms) and P6 in the late temporal window (500–775 ms). The significant activity in both valid and invalid conditions is found in sensors from medial prefrontal regions, probably corresponding to the ACC or to the medial prefrontal cortex. The amplitude and intensity of valid deductions are significantly lower in both temporal windows (p = 0.0003). The reaction time was 54.37% slower in the valid condition. Validity leaves a minimal but measurable hypoactive electrical trait in brain processing. The minor electrical demand is attributable to the recursive and automatable character of valid deductions, suggesting a physical indicator of computational deductive properties. It is hypothesized that all valid deductions are recursive and hypoactive.
Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code, and goal system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze how self-improvement could happen at different stages of the development of AI, including the stages at which AI is boxed or hiding in the internet.
Corcoran’s 27 entries in the 1999 second edition of Robert Audi’s Cambridge Dictionary of Philosophy [Cambridge: Cambridge UP].

ancestral, axiomatic method, borderline case, categoricity, Church (Alonzo), conditional, convention T, converse (outer and inner), corresponding conditional, degenerate case, domain, De Morgan, ellipsis, laws of thought, limiting case, logical form, logical subject, material adequacy, mathematical analysis, omega, proof by recursion, recursive function theory, scheme, scope, Tarski (Alfred), tautology, universe of discourse.

The entire work is available online free at more than one website. Paste the whole URL. http://archive.org/stream/RobertiAudi_The.Cambridge.Dictionary.of.Philosophy/Robert.Audi_The.Cambridge.Dictionary.of.Philosophy

The 2015 third edition will be available soon. Before you think of buying it, read some reviews on Amazon and read reviews of its competition: for example, my review of the 2008 Oxford Companion to Philosophy, History and Philosophy of Logic, 29:3, 291-292. URL: http://dx.doi.org/10.1080/01445340701300429

Some of the entries have already been found to be flawed. For example, Tarski’s expression ‘materially adequate’ was misinterpreted in at least one article, and it was misused in another where ‘materially correct’ should have been used. The discussion provides an opportunity to bring more flaws to light.

Acknowledgements: Each of these entries was presented at meetings of The Buffalo Logic Dictionary Project sponsored by The Buffalo Logic Colloquium. The members of the colloquium read drafts before the meetings and were generous with corrections, objections, and suggestions. Usually one 90-minute meeting was devoted to one entry, although some entries, for example "axiomatic method", took more than one meeting. Moreover, about half of the entries are rewrites of similarly named entries in the 1995 first edition.
Besides the help received from people in Buffalo, help from elsewhere was received by email. We gratefully acknowledge the following: José Miguel Sagüillo, John Zeis, Stewart Shapiro, Davis Plache, Joseph Ernst, Richard Hull, Concha Martinez, Laura Arcila, James Gasser, Barry Smith, Randall Dipert, Stanley Ziewacz, Gerald Rising, Leonard Jacuzzo, George Boger, William Demopolous, David Hitchcock, John Dawson, Daniel Halpern, William Lawvere, John Kearns, Ky Herreid, Nicolas Goodman, William Parry, Charles Lambros, Harvey Friedman, George Weaver, Hughes Leblanc, James Munz, Herbert Bohnert, Robert Tragesser, David Levin, Sriram Nambiar, and others.
We model infinite regress structures (not arguments) by means of ungrounded recursively defined functions, in order to show that no such structure can perform the task of providing determination to the items composing it; that is, that no determination process containing an infinite regress structure is successful.
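The contrast between grounded and ungrounded recursive definitions that drives the model can be sketched as follows (an illustration of the general idea, not the authors' formal apparatus):

```python
# A grounded recursive definition bottoms out in a base case, so every
# value is determined; an ungrounded one defines each value only in
# terms of the next, so no value is ever fixed.
def grounded(n):
    return 0 if n == 0 else grounded(n - 1)   # regress terminates at 0

def ungrounded(n):
    return ungrounded(n + 1)                  # each value defers to the next

print(grounded(5))        # determined: 0
try:
    ungrounded(0)         # the regress never bottoms out
except RecursionError:
    print("no value is ever determined")
```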
We describe a software system for the analysis of defined benefit actuarial plans. The system uses a recursive formulation of the actuarial stochastic processes to implement precise and efficient computations of individual and group cash flows.
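A minimal sketch of what such a recursive formulation might look like, assuming a single-life benefit stream with per-period survival probabilities and a constant discount factor; all names and figures here are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: expected present value of a benefit cash flow,
# formulated recursively over time periods.
def expected_pv(benefit, survival, v, t=0):
    """Value at time t of benefits from t onward.

    benefit[t]  -- benefit paid at time t if the member is then alive
    survival[t] -- probability of surviving from t to t+1
    v           -- one-period discount factor
    """
    if t == len(benefit):
        return 0.0
    # Recursion: value now = current benefit + discounted,
    # survival-weighted value one period later.
    return benefit[t] + v * survival[t] * expected_pv(benefit, survival, v, t + 1)

# Two periods of a 100-unit benefit, 90% survival, 5% interest:
print(expected_pv([100.0, 100.0], [0.9, 0.9], 1 / 1.05))
```

The recursive form mirrors the backward induction used in actuarial valuation: group cash flows are then sums of such individual values.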
In this article, we will present a number of technical results concerning Classical Logic, ST and related systems. Our main contribution consists in offering a novel identity criterion for logics in general and, therefore, for Classical Logic. In particular, we will firstly generalize the ST phenomenon, thereby obtaining a recursively defined hierarchy of strict-tolerant systems. Secondly, we will prove that the logics in this hierarchy are progressively more classical, although not entirely classical. We will claim that a logic is to be identified with an infinite sequence of consequence relations holding between increasingly complex relata: formulae, inferences, metainferences, and so on. As a result, the present proposal makes it possible to differentiate Classical Logic not only from ST, but also from other systems sharing with it their valid metainferences. Finally, we show how these results have interesting consequences for some topics in the philosophical logic literature, among them the debate around Logical Pluralism. The reason is that the discussion concerning this topic is usually carried out employing a rivalry criterion for logics that will need to be modified in light of the present investigation, according to which two logics can be non-identical even if they share the same valid inferences.
The objective of this paper is to analyze the broader significance of Frege’s logicist project against the background of Wittgenstein’s philosophy from both Tractatus and Philosophical Investigations. The article draws on two basic observations, namely that Frege’s project aims at saying something that was only implicit in everyday arithmetical practice, as the so-called recursion theorem demonstrates, and that the explicitness involved in logicism does not concern the arithmetical operations themselves, but rather the way they are defined. It thus represents the attempt to make explicit not the rules alone, but rather the rules governing their following, i.e. rules of second-order type. I elaborate on these remarks with short references to Brandom’s refinement of Frege’s expressivist and Wittgenstein’s pragmatist project.
The paper intends to zoom in on a uniqueness of human language by narrowing down the range of cognitive domains to the human computational mind, which has a property of recursion that is exclusively unique to humans and found in no other species in the animal kingdom. This notion of recursion is the central concern of the paper. There has been opposition to the claim that recursion is unique to humans, and the paper attempts to reply to such arguments using experimental findings from modern neuroscience. The existing controversies over the proposed minimalist language and its future remain open to the future of modern neuroscience and modern physics.
Yuri Matiyasevich's theorem states that the set of all Diophantine equations which have a solution in non-negative integers is not recursive. Craig Smoryński's theorem states that the set of all Diophantine equations which have at most finitely many solutions in non-negative integers is not recursively enumerable. Let R be a subring of Q with or without 1. By H_{10}(R), we denote the problem of whether there exists an algorithm which, for any given Diophantine equation with integer coefficients, can decide whether or not the equation has a solution in R. We prove that a positive solution to H_{10}(R) implies that the set of all Diophantine equations with a finite number of solutions in R is recursively enumerable. We show the converse implication for every infinite set R ⊆ Q such that there exist computable functions τ_1, τ_2: N → Z which satisfy (∀n ∈ N) τ_2(n) ≠ 0 and {τ_1(n)/τ_2(n) : n ∈ N} = R. This implication for R = N guarantees that Smoryński's theorem follows from Matiyasevich's theorem. Harvey Friedman conjectures that the set of all polynomials of several variables with integer coefficients that have a rational solution is not recursive. Harvey Friedman also conjectures that the set of all polynomials of several variables with integer coefficients that have only finitely many rational solutions is not recursively enumerable. These conjectures are equivalent by our results for R = Q.
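Why solvability in non-negative integers is recursively enumerable can be illustrated by the usual enumerate-and-check idea (a sketch of the general technique, not the paper's construction; the example polynomial and the search bound are ours):

```python
# Enumerate candidate tuples of non-negative integers, grouped by
# coordinate sum so every tuple is eventually examined, and report a
# root of the polynomial p if one is found. An unbounded version of
# this search halts iff p has a non-negative integer root, which is
# exactly recursive enumerability of solvability.
from itertools import product

def find_nonneg_solution(p, arity, max_total):
    # max_total caps the search for demonstration purposes only.
    for total in range(max_total + 1):
        for tup in product(range(total + 1), repeat=arity):
            if sum(tup) == total and p(*tup) == 0:
                return tup
    return None

# x^2 + y^2 - 25 = 0 has the non-negative solution (0, 5):
print(find_nonneg_solution(lambda x, y: x * x + y * y - 25, 2, 10))
```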
This article, which can be read by non-psychoanalysts, intends to work through the issue offered to our thinking in four stages: two (odd-numbered) stages analyzing the argument that provides its context, and two (even-numbered) stages of propositions presenting our views on what the content of the analytic discourse could be in the coming years. After this introduction, a first reading will review the argument of J.-P. Journet point by point but informally, showing that each of its clauses may generate a "bifurcated" comment able to serve or go against the analytic discourse. Hence the interest of the "differential diagnosis" mentioned in our title, which gives a glimpse of the traps that homonymy may set for this discourse. Then, to prepare a second scan which sticks neither to the analytic doxa nor to the even authoritative opinions of our tenors and seniors, two attempts at redefinition ("apophatic" and "recursive") of what psychoanalysis is will be proposed, as well as methodological tools operating downstream from these redefinitions to thwart the obstacles of "external" and "internal" homonymy (starting from a syllogism which can command consensus). The third stage will consist precisely of this second scan of the issue, whose elements will be reviewed and analyzed more methodically: an "external differential diagnosis" between the analytic discourse and psychology, philosophy, sociology, and modern science, and an "internal differential diagnosis" of the entanglement between the theoretical advances of psychoanalysts and the repeated survival of fantasmatic elements. Finally, a fourth part will present propositions and perspectives resulting from these analyses (a principle of economy as to the source of psychoanalytic theorizations; dialogue with the other fields, but without compromising; specific relations with the discourse of science), all of this leading to an invitation, beyond disputes, to renew the content of the analytic discourse on some points.
This book argues that moral philosophy should be based on seven scientific principles of theory selection. It then argues that a new moral theory—Rightness as Fairness—satisfies those principles more successfully than existing theories. Chapter 1 explicates the seven principles of theory-selection, arguing that moral philosophy must conform to them to be truth-apt. Chapter 2 argues those principles jointly support founding moral philosophy in known facts of empirical moral psychology: specifically, our capacities for mental time-travel and modal imagination. Chapter 2 then shows that these capacities present human decision-makers with a problem of diachronic rationality that includes, but generalizes beyond, L.A. Paul’s problem of transformative experience: a problem that I call “the problem of possible future selves.” Chapter 3 then argues that a new principle of rationality—the Categorical-Instrumental Imperative—is the only rational solution to this problem, as it requires our present and future selves to forge and uphold a recursive, bi-directional contract with one another given mutual recognition of the problem. Chapter 4 then shows that the Categorical-Instrumental Imperative has three identical formulations analogous but superior to Immanuel Kant’s various formulations of his ‘categorical imperative.’ Chapter 5 shows that these unified formulas jointly entail a particular test of moral principles: a Moral Original Position similar to John Rawls’ famous ‘original position’, but which avoids a variety of problems with Rawls' model. Chapter 6 then shows that the Moral Original Position generates Four Principles of Fairness, which can then be combined into a single principle of moral rightness: Rightness as Fairness.
This new conception of rightness is shown to reconcile four dominant moral frameworks (deontology, consequentialism, virtue ethics, and contractualism), as well as entail a new method of moral decisionmaking for applied ethics: a method of “principled fair negotiation” according to which applied ethical issues cannot be wholly resolved through principled debate, but must instead be resolved by actual negotiation and compromise. This method is then argued to generate novel, nuanced analyses of a variety of applied moral issues, including trolley cases, torture, and the ethical treatment of nonhuman animals. Chapter 7 then shows that Rightness as Fairness reconciles three leading political frameworks—libertarianism, egalitarianism, and communitarianism—showing how all three embody legitimate moral ideals that can, and should, be fairly negotiated against each other to settle the scope, and nature, of domestic, international, and global justice on an ongoing, iterated basis. Finally, Chapter 8 argues that Rightness as Fairness satisfies all seven of the principles of theory selection defended in Chapter 1 more successfully than rival theories. (shrink)
In Recursivity and Contingency, Yuk Hui prompts a rigorous historical and philosophical analysis of today’s algorithmic culture. As evidenced by high-speed AI trading, predictive processing algorithms, elastic graph-bunching biometrics, Hebbian machine learning and thermographic drone warfare, we are privy to an epochal technological transition. As these technologies, stilted on inductive learning, demonstrate, we no longer occupy the moment of the ‘storage-and-retrieval’ static database but are increasingly engaged with technologies that are involved in the ‘manipulable arrangement’ (p. 204) of the indeterminable. It is, in fact, extricating the indeterminable or the Inhuman and its cosmic anti-capitalist imperative that concerns the core of Hui’s project of technodiversity. (shrink)
In his 1993 article ‘The Coming Technological Singularity: How to survive in the posthuman era’ the computer scientist Vernor Vinge speculated that developments in artificial intelligence might reach a point where improvements in machine intelligence result in smart AIs producing ever-smarter AIs. According to Vinge the ‘singularity’, as he called this threshold of recursive self-improvement, would be a ‘transcendental event’ transforming life on Earth in ways that unaugmented humans are not equipped to envisage. In this paper I argue Vinge’s idea of a technologically led intelligence explosion is philosophically important because it requires us to consider the prospect of a posthuman condition succeeding the human one. What is the ‘humanity’ to which the posthuman is ‘post’? Does the possibility of a posthumanity presuppose that there is a ‘human essence’, or is there some other way of conceiving the human-posthuman difference? I argue that the difference should be conceived as a historically emergent disconnection between individuals, not in terms of the presence or lack of essential properties. I also suggest that these individuals should not be conceived in narrow biological terms but in ‘wide’ terms permitting biological, cultural and technological relations of descent between human and posthuman. Finally, I consider the ethical implications of this metaphysics of the posthuman. If, as I claim, the posthuman difference is not one between kinds but between individuals, we cannot specify its nature a priori but only a posteriori. Thus the only way to evaluate the posthuman condition would be to witness the emergence of posthumans. The implications of this are somewhat paradoxical. We are not currently in a position to evaluate the posthuman condition. Since there are no posthumans, the condition of posthumanity is not defined. 
However, posthumans could result from some iteration of our current technical activity, so we have an interest in understanding what they might be like. It follows that we have an interest in making or becoming posthumans. (shrink)
Frames, i.e., recursive attribute-value structures, are a general format for the decomposition of lexical concepts. Attributes assign unique values to objects and thus describe functional relations. Concepts can be classified into four groups: sortal, individual, relational and functional concepts. The classification is reflected by different grammatical roles of the corresponding nouns. The paper aims at a cognitively adequate decomposition, particularly, of sortal concepts by means of frames. Using typed feature structures, an explicit formalism for the characterization of cognitive frames is developed. The frame model can be extended to account for typicality effects. Applying the paradigm of object-related neural synchronization, furthermore, a biologically motivated model for the cortical implementation of frames is developed. Cortically distributed synchronization patterns may be regarded as the fingerprints of concepts. (shrink)
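The recursive attribute-value idea in the abstract above can be made concrete. The following is a minimal sketch, assuming Python dictionaries as stand-ins for typed feature structures; the `car`/`engine` attributes are illustrative and not drawn from the paper:

```python
# A frame as a recursive attribute-value structure. Each attribute is
# functional: it maps an object to exactly one value, and that value
# may itself be a frame (hence the recursion).

car = {
    "type": "car",              # sortal concept being decomposed
    "colour": "red",            # atomic value
    "engine": {                 # value that is itself a frame
        "type": "engine",
        "fuel": "petrol",
        "cylinders": 4,
    },
}

def attr(frame, *path):
    """Follow a chain of attributes; functionality guarantees a unique value."""
    for a in path:
        frame = frame[a]
    return frame

print(attr(car, "engine", "fuel"))  # petrol
```

Because each attribute is a function, a chain of attributes is again a function, which is what lets frames compose recursively.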
In the 1920s, David Hilbert proposed a research program with the aim of providing mathematics with a secure foundation. This was to be accomplished by first formalizing logic and mathematics in their entirety, and then showing---using only so-called finitistic principles---that these formalizations are free of contradictions. In the area of logic, the Hilbert school accomplished major advances both in introducing new systems of logic and in developing central metalogical notions, such as completeness and decidability. The analysis of unpublished material presented in Chapter 2 shows that a completeness proof for propositional logic was found by Hilbert and his assistant Paul Bernays already in 1917--18, and that Bernays's contribution was much greater than is commonly acknowledged. Aside from logic, the main technical contributions of Hilbert's Program are the development of formal mathematical theories and proof-theoretical investigations thereof, in particular, consistency proofs. In this respect Wilhelm Ackermann's 1924 dissertation is a milestone both in the development of the Program and in proof theory in general. Ackermann gives a consistency proof for a second-order version of primitive recursive arithmetic which, surprisingly, explicitly uses a finitistic version of transfinite induction up to ω^ω^ω. He also gave a faulty consistency proof for a system of second-order arithmetic based on Hilbert's ε-substitution method. Detailed analyses of both proofs in Chapter 3 shed light on the development of finitism and proof theory in the 1920s as practiced in Hilbert's school. In a series of papers, Charles Parsons has attempted to map out a notion of mathematical intuition which he also brings to bear on Hilbert's finitism. According to him, mathematical intuition fails to be able to underwrite the kind of intuitive knowledge Hilbert thought was attainable by the finitist. 
It is argued in Chapter 4 that the extent of finitistic knowledge which intuition can provide is broader than Parsons supposes. According to another influential analysis of finitism due to W. W. Tait, finitist reasoning coincides with primitive recursive reasoning. The acceptance of non-primitive recursive methods in Ackermann's dissertation presented in Chapter 3, together with additional textual evidence presented in Chapter 4, shows that this identification is untenable as far as Hilbert's conception of finitism is concerned. Tait's conception, however, differs from Hilbert's in important respects, yet it is also open to criticisms leading to the conclusion that finitism encompasses more than just primitive recursive reasoning. (shrink)
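The function at the heart of the Tait debate can be stated concretely. Below is the Ackermann function in its standard two-argument (Péter) form, the textbook example of a total computable function that is not primitive recursive; the Python rendering is a sketch for illustration, not material from the dissertation under discussion:

```python
# Ackermann–Péter function: total and computable, but it grows faster than
# any primitive recursive function, so it cannot itself be primitive
# recursive. Its nested recursion (the third clause calls ackermann inside
# ackermann) is exactly what the primitive recursion schema cannot express.

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9   (row 2 computes 2n + 3)
print(ackermann(3, 3))  # 61  (row 3 computes 2^(n+3) - 3)
```

If finitist reasoning accepted this function as well-defined, as Ackermann's own dissertation arguably did, then finitism would already outrun primitive recursive reasoning, which is the tension the chapter exploits.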
Until the late 19th century scientists almost always assumed that the world could be described as a rule-based and hence deterministic system or as a set of such systems. The assumption is maintained in many 20th-century theories, although it has also been doubted because of the breakthrough of statistical theories in thermodynamics (Boltzmann and Gibbs) and other fields, unsolved questions in quantum mechanics, as well as several theories advanced within the social sciences. Until recently it has furthermore been assumed that a rule-based and deterministic system was also predictable if only the rules were known, but this assumption has now been undermined by modern chaos theory, which describes rule-based and deterministic but unpredictable systems, while catastrophe theory delivers a set of types describing various kinds of instability and conditions for the stability of a given system. Hence the main trait in the theoretical development of 20th-century science can be described as a basic modification and limitation of some of the fundamental and strong assumptions advanced in the previous epochs of modern science. Ironically, the very same process has been one in which the human capacity to intervene in nature has expanded dramatically, mainly with the help of the very same theories, and not least because they allow nature to be described and made manipulable on a lower level and a more fine-grained scale. While the overall theoretical consistency between the various theories has gone, the reach of human intervention in nature has increased along quite new dimensions, whether in the area of physics (e.g. energy technologies, chemical technologies, nanotechnologies, etc.), biology (genetic manipulation), or in the area of psychology, sociology and culture (artificial simulations of mental processes, new means of communication implying changes in the social infrastructure and cultural behaviour, etc.). 
While some of these changes and new conditions can be reflected from within the conceptual framework of rule-based systems, albeit more complex than formerly recognized, others seem to give rise to the question of whether there are »systems« and relations between different systems in the world which are not rule-based. For instance, it seems obvious that the notion of instability represents a major conceptual break with former theories of rule-based systems, as the stability of the latter is an axiomatically given property implied in the very notion of rule-based systems, while instability can only be the result of external influence, which should itself be explained as the result of another rule-based system. While there are no difficulties implied concerning the stability of rule-based systems, the notion of unstable states of a system raises the question of how there can be a system at all if there are no invariant stabilising principles. This is the first question which I will address, and I shall do so by taking two examples of such systems as my point of departure. The first example will be the computer and the second will be ordinary language. In both cases I will argue that the stability of these systems (which are both defined by the existence/presence of human intentions) is provided with the help of differently organised redundancy functions, which allow the maintenance of systems in unstable macro-states; the suspension of previous rules; underdetermination and overdetermination; and the generation/emergence/creation of new rules more or less independent of previous rules, by means of optional recursions to the permanently accessible underlying levels, as for instance the level of binary representation in computers. Since the notion of redundancy is both controversial as such and often avoided, the concept is discussed (as defined in Claude Shannon's mathematical theory of information and in the semiotic framework of A. J. 
Greimas), leading to a more general definition in which the redundancy functions serve to overcome noisy conditions, but at the cost of rule-based stability, determination, and predictability. A second question will be how the notion of rule-generating systems relates to the notion of anticipatory systems. It will be argued that rule-generating systems share some features with anticipatory systems and that the former, from a certain viewpoint, can be seen as a subclass of the latter, although anticipative features are not necessarily part of the definition of rule-generating systems. On the other hand, it will be discussed whether anticipatory systems which are not rule-generating systems can exist, and it will be argued that the capacity to anticipate is strongly limited if it is not part of a rule-generating system. Therefore, it is concluded that the most powerful anticipatory systems need to be rule-generating systems. (shrink)
Proceedings of the papers presented at the Symposium on "Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World" at the 2012 AISB and IACAP Symposium that was held in the Turing year 2012, 2–6 July at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test --- Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents - Michael Zillich: My Robot is Smarter than Your Robot: On the Need for a Total Turing Test for Robots - Adam Linson, Chris Dobbyn and Robin Laney: Interactive Intelligence: Behaviour-based AI, Musical HCI and the Turing Test - Javier Insa, Jose Hernandez-Orallo, Sergio España, David Dowe and M. Victoria Hernandez-Lloreda: The anYnt Project Intelligence Test (Demo) - Jose Hernandez-Orallo, Javier Insa, David Dowe and Bill Hibbard: Turing Machines and Recursive Turing Tests - Francesco Bianchini and Domenica Bruni: What Language for Turing Test in the Age of Qualia? - Paul Schweizer: Could there be a Turing Test for Qualia? - Antonio Chella and Riccardo Manzotti: Jazz and Machine Consciousness: Towards a New Turing Test - William York and Jerry Swan: Taking Turing Seriously (But Not Literally) - Hajo Greif: Laws of Form and the Force of Function: Variations on the Turing Test. (shrink)
This paper contends that Stoic logic (i.e. Stoic analysis) deserves more attention from contemporary logicians. It sets out how, compared with contemporary propositional calculi, Stoic analysis is closest to methods of backward proof search for Gentzen-inspired substructural sequent logics, as they have been developed in logic programming and structural proof theory, and produces its proof search calculus in tree form. It shows how multiple similarities to Gentzen sequent systems combine with intriguing dissimilarities that may enrich contemporary discussion. Much of Stoic logic appears surprisingly modern: a recursively formulated syntax with some truth-functional propositional operators; analogues to cut rules, axiom schemata and Gentzen’s negation-introduction rules; an implicit variable-sharing principle and deliberate rejection of Thinning and avoidance of paradoxes of implication. These latter features mark the system out as a relevance logic, where the absence of duals for its left and right introduction rules puts it in the vicinity of McCall’s connexive logic. Methodologically, the choice of meticulously formulated meta-logical rules in lieu of axiom and inference schemata absorbs some structural rules and results in an economical, precise and elegant system that values decidability over completeness. (shrink)
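To give a rough sense of what "backward proof search" means here, the following toy sketch (entirely hypothetical, not the paper's reconstruction of Stoic analysis) searches backwards from a goal over the first Stoic indemonstrable, modus ponens:

```python
# Backward proof search over the first indemonstrable: from "if p then q"
# and p, infer q. Conditionals are ("if", p, q) tuples; atoms are strings.
# The search works goal-first, decomposing the goal into subgoals -- the
# direction that aligns Stoic analysis with Gentzen-style proof search.

def derivable(goal, premises, depth=8):
    """Is `goal` derivable from `premises` within `depth` backward steps?"""
    if depth == 0:
        return False
    if goal in premises:
        return True
    for prem in premises:
        # A conditional whose consequent matches the goal reduces the
        # goal to proving its antecedent (modus ponens, read backwards).
        if isinstance(prem, tuple) and prem[0] == "if" and prem[2] == goal:
            if derivable(prem[1], premises, depth - 1):
                return True
    return False

premises = {("if", "p", "q"), ("if", "q", "r"), "p"}
print(derivable("r", premises))  # True: p, then q, then r
print(derivable("s", premises))  # False: nothing yields s
```

A real reconstruction would of course cover all five indemonstrables and the themata; the point of the sketch is only the goal-directed, tree-shaped search order.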
Perceptual and recursion-based faculties have long been recognized to be vital constituents of human (and, in general, animal) cognition. However, certain faculties such as the visual and the linguistic faculty have come to receive far more academic and experimental attention, in recent decades, than other recognized categories of faculties. This paper seeks to highlight the imbalance in these studies and bring into sharper focus the need for further in-depth philosophical treatments of faculties such as hearing, touch, and proprioception, besides the olfactory and gustatory ones. It also brings the debate over the role of qualia in perception, and in cognition generally, to bear on its thesis that these other modular faculties afford genuine insights into cognition as a (now) technologically expanded domain. (shrink)
I show that there are good arguments and evidence to boot that support the language as an instrument of thought hypothesis. The underlying mechanisms of language, comprising expressions structured hierarchically and recursively, provide a perspective (in the form of a conceptual structure) on the world, for it is only via language that certain perspectives are available to us and to our thought processes. These mechanisms provide us with a uniquely human way of thinking and talking about the world that is different to the sort of thinking we share with other animals. If the primary function of language were communication, then one would expect the underlying mechanisms of language to be structured in a way that favours successful communication. I show that not only is this not the case, but that the underlying mechanisms of language are in fact structured in a way that maximises computational efficiency, even if this causes communicative problems. Moreover, I discuss comparative, neuropathological, developmental, and neuroscientific evidence that supports the claim that language is an instrument of thought. (shrink)
I develop a theory of counterfactuals about relative computability, i.e. counterfactuals such as 'If the validity problem were algorithmically decidable, then the halting problem would also be algorithmically decidable,' which is true, and 'If the validity problem were algorithmically decidable, then arithmetical truth would also be algorithmically decidable,' which is false. These counterfactuals are counterpossibles, i.e. they have metaphysically impossible antecedents. They thus pose a challenge to the orthodoxy about counterfactuals, which would treat them as uniformly true. What’s more, I argue that these counterpossibles don’t just appear in the periphery of relative computability theory but instead they play an ineliminable role in the development of the theory. Finally, I present and discuss a model theory for these counterfactuals that is a straightforward extension of the familiar comparative similarity models. (shrink)
In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting the functioning of these algorithms in their context of emergence, I offer three arguments to demonstrate why their abolition is imperative. First, I show that sentencing based on predictive algorithms induces a process of rewriting the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I demonstrate that recursive processes, comprising predictive algorithms and the decisions based on their predictions, systematically suppress outliers and progressively transform reality to match predictions. In my third and final argument, I show that decisions made on the basis of predictive algorithms actively perform a biopolitical understanding of justice as management and modulation of risks. In such a framework, justice becomes a means to maintain a perverse social homeostasis that systematically exposes disenfranchised Black and Brown populations to risk. (shrink)
I here investigate the sense in which diagonalization allows one to construct sentences that are self-referential. Truly self-referential sentences cannot be constructed in the standard language of arithmetic: There is a simple theory of truth that is intuitively inconsistent but is consistent with Peano arithmetic, as standardly formulated. True self-reference is possible only if we expand the language to include function-symbols for all primitive recursive functions. This language is therefore the natural setting for investigations of self-reference.
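The diagonalization construction at issue is standardly captured by the diagonal lemma, stated here in its usual textbook form (not quoted from the paper): for any formula φ(x) with one free variable,

```latex
% Diagonal lemma: there is a sentence \gamma provably *equivalent* to
% \varphi of \gamma's own code --- which is weaker than \gamma literally
% being about itself, the gap the paper interrogates.
\exists \gamma \;:\; \mathsf{PA} \vdash \gamma \leftrightarrow \varphi(\ulcorner \gamma \urcorner)
```

Provable equivalence to φ(⌜γ⌝) falls short of γ genuinely referring to itself, which is why the standard language of arithmetic delivers only this weak surrogate and the expansion by primitive recursive function-symbols matters.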
I spell out and update the individuality thesis, that species are individuals, and not classes, sets, or kinds. I offer three complementary presentations of this thesis. First, as a way of resolving an inconsistent triad about natural kinds; second, as a phylogenetic systematics theoretical perspective; and, finally, as a novel recursive account of an evolved character. These approaches do different sorts of work, serving different interests. Presenting them together produces a taxonomy of the debates over the thesis, and isolates ways it has been productive. This goes to the larger point of this paper: a defense of the individuality thesis in terms of its utility, and an update of it in light of recent theoretical developments and empirical work in biology. (shrink)
Can you find an xy-equation that, when graphed, writes itself on the plane? This idea became internet-famous when a Wikipedia article on Tupper’s self-referential formula went viral in 2012. Under scrutiny, the question has two flaws: it is meaningless (it depends on fonts) and it is trivial. We fix these flaws by formalizing the problem.
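The triviality complaint can be made concrete: for any 17-row bitmap there is a constant k for which Tupper's inequality plots exactly that bitmap. A minimal sketch (the helper names are mine, and the famous 543-digit k is omitted):

```python
# Tupper's inequality: a point (x, y) is plotted iff
#   1/2 < floor( mod( floor(y/17) * 2**(-17*floor(x) - mod(floor(y), 17)), 2 ) )
# For non-negative integers this reduces to testing one bit of floor(y/17).

def tupper(x, y):
    return ((y // 17) >> (17 * x + y % 17)) & 1 == 1

def bitmap_to_k(bitmap):
    """Encode a bitmap (list of rows, '#' = plotted, row 0 at the bottom)
    into a constant k such that the formula plots it for k <= y < k + 17."""
    n = 0
    for x in range(len(bitmap[0])):
        for j in range(len(bitmap)):
            if bitmap[j][x] == "#":
                n |= 1 << (17 * x + j)
    return 17 * n  # k must be divisible by 17

# Any picture whatsoever is plotted by the formula for a suitable k --
# which is exactly why the unformalized question is trivial.
bm = ["#.#",
      ".#.",
      "#.#"]
k = bitmap_to_k(bm)
assert all(tupper(x, k + j) == (bm[j][x] == "#")
           for x in range(3) for j in range(3))
```

Since the "self-portrait" lives entirely in the choice of k rather than in the formula, the question is both font-dependent and trivially satisfiable, hence the need for the formalization the paper provides.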
Husserl’s Logical Grammar is intended to explain how complex expressions can be constructed out of simple ones so that their meaning turns out to be determined by the meanings of their constituent parts and the way they are put together. Meanings are thus understood as structured contents and classified into formal categories to the effect that the logical properties of expressions reflect their grammatical properties. As long as linguistic meaning reduces to the intentional content of pre-linguistic representations, however, it is not trivial to account for how semantics relates to syntax in this context. In this paper, I analyze Husserl’s Logical Grammar as a system of recursive rules operating on representations and suggest that the syntactic form of representations contributes to their semantics because it carries information about semantic role. I further discuss Husserl’s syntactic account of the unity of propositions and argue that, on this account, logical form supervenes on syntactic form. In the last section I draw some implications for the phenomenology of thought and conjecture that the structural features it displays are likely to convey the syntactic structures of an underlying language-like representational system. (shrink)
Causation is a macroscopic phenomenon. The temporal asymmetry displayed by causation must somehow emerge along with other asymmetric macroscopic phenomena like entropy increase and the arrow of radiation. I shall approach this issue by considering ‘causal inference’ techniques that allow causal relations to be inferred from sets of observed correlations. I shall show that these techniques are best explained by a reduction of causation to structures of equations with probabilistically independent exogenous terms. This exogenous probabilistic independence imposes a recursive order on these equations and a consequent distinction between dependent and independent variables that lines up with the temporal asymmetry of causation. (shrink)
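The kind of structure the reduction appeals to, a recursive system of equations whose exogenous terms are probabilistically independent, can be sketched as follows (variable names and coefficients are illustrative assumptions, not the paper's):

```python
# A recursive structural-equation system: each equation has its own
# independent exogenous noise term, and the equations can be ordered so
# that every variable depends only on earlier ones. That ordering is what
# separates independent variables (x) from dependent ones (y, z) and
# lines up with the causal/temporal order x -> y -> z.
import random

def sample():
    # Independent exogenous terms, one per equation.
    u_x, u_y, u_z = (random.gauss(0, 1) for _ in range(3))
    x = u_x              # exogenous: no causes within the system
    y = 2 * x + u_y      # depends on x, so x is a cause of y
    z = y + u_z          # depends on y, so y is a cause of z
    return x, y, z
```

Running many samples and checking the induced correlations (x and z covary even though no equation links them directly) is precisely the pattern that causal-inference algorithms exploit to recover the arrow-ordering from observed data.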
SNePS, the Semantic Network Processing System [45, 54], has been designed to be a system for representing the beliefs of a natural-language-using intelligent system (a "cognitive agent"). It has always been the intention that a SNePS-based "knowledge base" would ultimately be built, not by a programmer or knowledge engineer entering representations of knowledge in some formal language or data entry system, but by a human informing it using a natural language (NL) (generally supposed to be English), or by the system reading books or articles that had been prepared for human readers. Because of this motivation, the criteria for the development of SNePS have included: it should be able to represent anything and everything expressible in NL; it should be able to represent generic, as well as specific, information; it should be able to use the generic and the specific information to reason and infer information implied by what it has been told; it cannot count on any particular order among the pieces of information it is given; it must continue to act reasonably even if the information it is given includes circular definitions, recursive rules, and inconsistent information. (shrink)
The first form of the inside-outside dichotomy appears as a self-encapsulated system with an active border. These systems are based on two complementary but asymmetric processes: constructive and interactive. The former physically constitute the system as a recursive network of component production, defining an inside. The maintenance of the constructive processes implies that the internal organization also constrains certain flows of matter and energy across the border of the system, generating interactive processes. These interactive processes ensure the maintenance of the constructive processes, thus specifying a meaningful outside. Upon this basic form of identity formation, the evolutionary and historical domain is open for the emergence of a whole hierarchy and ecology of insides and outsides, which mutually subsume one another and collaborate in the maintenance of the essential inside-outside dichotomy that defines the conditions of possibility of subjects and the worlds they generate. (shrink)
This paper examines an insoluble Cartesian problem for classical AI, namely, how linguistic understanding involves knowledge and awareness of u’s meaning, a cognitive process that is irreducible to algorithms. As analyzed, Descartes’ view about reason and intelligence has paradoxically encouraged certain classical AI researchers to suppose that linguistic understanding suffices for machine intelligence. Several advocates of the Turing Test, for example, assume that linguistic understanding only comprises computational processes which can be recursively decomposed into algorithmic mechanisms. Against this background, in the first section, I explain Descartes’ view about language and mind. To show that Turing bites the bullet with his imitation game, in the second section I analyze this method of assessing intelligence. Then, in the third section, I elaborate on Schank and Abelson’s Script Applier Mechanism (SAM, hereafter), which supposedly casts doubt on Descartes’ denial that machines can think. Finally, in the fourth section, I explore a challenge that any algorithmic decomposition of linguistic understanding faces. This challenge, I argue, is the core of the Cartesian problem: knowledge and awareness of meaning require a first-person viewpoint which is irreducible to the decomposition of algorithmic mechanisms. (shrink)
Take a strip of paper with ‘once upon a time there’ written on one side and ‘was a story that began’ on the other. Twisting the paper and joining the ends produces John Barth’s story Frame-Tale, which prefixes ‘once upon a time there was a story that began’ to itself. I argue that the ability to understand this sentence cannot be explained by tacit knowledge of a recursive theory of truth in English.
1. What is a homunculus fallacy? 2. Analysis of the mental and the naturalization of intentionality 3. Homuncularism in theories of visual perception 4. Homuncularism and representationalism 5. Homuncular functionalism 6. Philosophical critique of meaning and empirical science. Bibliography.