Citations of:
Theory of Recursive Functions and Effective Computability
Journal of Symbolic Logic 36 (1): 141–146 (1971)

It is proved that if $1 \leq m < n$ then $\langle \mathscr{D}_{2m}, \leq, P\rangle$ and $\langle \mathscr{D}_{2n}, \leq, P\rangle$ are not elementarily equivalent, where P is the predicate P(a) = "a is a $\Pi^0_1$ e-degree".

A set A is m-reducible to B if and only if there is a polynomial-time computable function f such that, for all x, x ∈ A if and only if f(x) ∈ B. Two sets are: 1-equivalent if and only if each is m-reducible to the other by one-one reductions; p-invertible equivalent if and only if each is m-reducible to the other by one-one, polynomial-time invertible reductions; and p-isomorphic if and only if there is an m-reduction from one set to the (...)
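As a toy illustration of the m-reducibility defined above, the following sketch reduces one decision problem to another via a polynomial-time function. The sets A and B and the function f are hypothetical examples chosen for this note, not taken from the cited paper.

```python
# Toy many-one (m-)reduction:
#   A = the even natural numbers
#   B = the binary strings that end in '0'
# f is polynomial-time computable, and x in A iff f(x) in B.

def f(x: int) -> str:
    """Reduction function: write x in binary."""
    return bin(x)[2:]

def in_A(x: int) -> bool:
    return x % 2 == 0

def in_B(s: str) -> bool:
    return s.endswith('0')

# Membership in A transfers through f to membership in B.
for x in range(50):
    assert in_A(x) == in_B(f(x))
```

Since f is also one-one on this domain, the same reduction illustrates the stronger 1-reducibility mentioned in the abstract.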



This paper explores the relationship borne by the traditional paradoxes of set theory and semantics to formal incompleteness phenomena. A central tool is the application of the Arithmetized Completeness Theorem to systems of second-order arithmetic and set theory in which various “paradoxical notions” for first-order languages can be formalized. I will first discuss the setting in which this result was originally presented by Hilbert & Bernays and also how it was later adapted by Kreisel and Wang in order to obtain (...)

Infinite time Turing machines represent a model of computability that extends the operations of Turing machines to transfinite ordinal time by defining the content of each cell at limit steps to be the lim sup of the sequence of previous contents of that cell. In this paper, we study a computational model obtained by replacing the lim sup rule with an ‘eventually constant’ rule: at each limit step, the value of each cell is defined if and only if the content (...)
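The two limit rules contrasted above can be sketched on a single 0/1 cell. This is a toy model of my own: an infinite history is approximated by the finite tail of values that repeats forever past some stage, which is an assumption for illustration only; real infinite time Turing machines take limits over genuinely transfinite stages.

```python
def limsup_rule(tail):
    """lim sup of a 0/1 cell history whose behaviour past some stage
    is `tail` repeated forever: 1 iff the value 1 keeps recurring."""
    return 1 if 1 in tail else 0

def eventually_constant_rule(tail):
    """'Eventually constant' rule: the limit value is defined only
    if the cell settles; None models 'undefined'."""
    return tail[0] if len(set(tail)) == 1 else None

assert limsup_rule([0, 1]) == 1                  # 0,1,0,1,... has lim sup 1
assert eventually_constant_rule([0, 1]) is None  # never settles: undefined
assert eventually_constant_rule([1, 1]) == 1     # settles at 1
```

The difference is visible on the alternating history: the lim sup rule assigns it the value 1, while the eventually-constant rule leaves the cell undefined.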

For a computable structure $\mathcal{A}$, if there is a computable infinitary Scott sentence, then the complexity of this sentence gives an upper bound for the complexity of the index set $I(\mathcal{A})$. If we can also show that $I(\mathcal{A})$ is m-complete at that level, then there is a correspondence between the complexity of the index set and the complexity of a Scott sentence for the structure. There are results that suggest that these complexities will always match. However, it was shown in (...)

Answering a question raised by Shavrukov and Visser (2014: 569–582), we show that the lattice of $\Sigma_1$-sentences over any computably enumerable consistent extension T of arithmetic is uniformly dense. We also show that, for each class in the hierarchies of arithmetical formulas introduced by Burr for intuitionistic arithmetic, the lattices of sentences of that class over any c.e. consistent extension T of the intuitionistic version of Robinson Arithmetic are uniformly dense. As an immediate consequence of the proof, (...)

The sentences employed in semantic paradoxes display a wide range of semantic behaviours. However, the main theories of truth currently available either fail to provide a theory of paradox altogether, or can only account for some paradoxical phenomena by resorting to multiple interpretations of the language. In this paper, I explore the wide range of semantic behaviours displayed by paradoxical sentences, and I develop a unified theory of truth and paradox, that is, a theory of truth that also provides a (...)

We study the structure of $\Sigma^1_1$ equivalence relations on hyperarithmetical subsets of ω under reducibilities given by hyperarithmetical or computable functions, called h-reducibility and FF-reducibility, respectively. We show that the structure is rich even when one fixes the number of properly $\Sigma^1_1$ (i.e., $\Sigma^1_1$ but not $\Delta^1_1$) equivalence classes. We also show the existence of incomparable $\Sigma^1_1$ equivalence relations that are complete as subsets of ω × ω with respect to the corresponding reducibility on sets. We study complete $\Sigma^1_1$ (...)

For the philosopher interested in the structures and foundations of theoretical knowledge, in the constitution of a “metatheoretics”, θεωρíα, which, better than the Fichtean or Husserlian “Wissenschaftslehre” and beyond the debris of metaphysics, seeks with a new intention to achieve the synthesis of the “theoretical”, mathematical logic proves to be a privileged object.

Freivalds defined an acceptable-programming-system-independent criterion for learning programs for functions in which the final programs were required to be both correct and "nearly" minimal size, i.e., within a computable function of being purely minimal size. Kinber showed that this parsimony requirement on final programs limits learning power. However, in scientific inference, parsimony is considered highly desirable. A lim-computable function is (by definition) one calculable by a total procedure allowed to change its mind finitely many times about its output. (...)
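The mind-change protocol behind lim-computability can be sketched as a total stage-by-stage guesser g(x, t) whose guesses converge to f(x). The target function here is a hypothetical example of mine, and it is in fact computable; the point of the sketch is only the finitely-many-mind-changes protocol, which also covers genuinely non-computable limits.

```python
def g(x: int, t: int) -> int:
    """Stage-t guess for f(x) = 1 iff x is a perfect square.
    Starts at 0 and switches to 1 once a witness k with k*k == x
    is found among 0..t, so it changes its mind at most once."""
    return 1 if any(k * k == x for k in range(t + 1)) else 0

def mind_changes(x: int, stages: int = 100) -> int:
    """Count how often the guess for x changes over the given stages."""
    guesses = [g(x, t) for t in range(stages)]
    return sum(1 for a, b in zip(guesses, guesses[1:]) if a != b)

assert g(9, 100) == 1 and g(8, 100) == 0
assert mind_changes(9) <= 1   # finitely many (here: at most one) mind changes
```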

The appropriate methodology for psychological research depends on whether one is studying mental algorithms or their implementation. Mental algorithms are abstract specifications of the steps taken by procedures that run in the mind. Implementational issues concern the speed and reliability of these procedures. The algorithmic level can be explored only by studying across-task variation. This contrasts with psychology's dominant methodology of looking for within-task generalities, which is appropriate only for studying implementational issues. The implementation-algorithm distinction is related to a number of (...)

Higman essentially showed that if A is any language then SUBSEQ(A) is regular, where SUBSEQ(A) is the language of all subsequences of strings in A. Let s1, s2, s3, . . . be the standard lexicographic enumeration of all strings over some finite alphabet. We consider the following inductive inference problem: given A(s1), A(s2), A(s3), . . . , learn, in the limit, a DFA for SUBSEQ(A). We consider this model of learning and the variants of it that are usually (...)
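The operator SUBSEQ can be computed directly for a finite language, which makes the object of the learning problem concrete. This is a brute-force sketch for the finite toy case only; Higman's result is stronger, guaranteeing regularity of SUBSEQ(A) for arbitrary, possibly infinite A.

```python
from itertools import combinations

def subseqs(s: str) -> set:
    """All subsequences of a single string (including the empty string)."""
    return {''.join(c) for r in range(len(s) + 1)
            for c in combinations(s, r)}

def SUBSEQ(A) -> set:
    """SUBSEQ(A) for a finite language A, by brute force."""
    out = set()
    for s in A:
        out |= subseqs(s)
    return out

A = {'ab', 'ba'}
assert SUBSEQ(A) == {'', 'a', 'b', 'ab', 'ba'}
```

Since SUBSEQ(A) is closed under taking subsequences, it is determined by its finitely many minimal forbidden strings, which is one way to see why a DFA for it always exists.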

Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ (...) 

We show how removing faith-based beliefs in current philosophies of classical and constructive mathematics admits formal, evidence-based, definitions of constructive mathematics; of a constructively well-defined logic of a formal mathematical language; and of a constructively well-defined model of such a language. / We argue that, from an evidence-based perspective, classical approaches which follow Hilbert's formal definitions of quantification can be labelled `theistic'; whilst constructive approaches based on Brouwer's philosophy of Intuitionism can be labelled `atheistic'. / We then adopt what may (...)

We consider the argument that Tarski's classic definitions permit an intelligence, whether human or mechanistic, to admit finitary evidence-based definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two, hitherto unsuspected and essentially different, ways: (1) in terms of classical algorithmic verifiability; and (2) in terms of finitary algorithmic computability. We then show that the two definitions correspond to two distinctly different assignments of satisfaction and truth (...)

Computationalism holds that our grasp of notions like ‘computable function’ can be used to account for our putative ability to refer to the standard model of arithmetic. Tennenbaum's Theorem has been repeatedly invoked in service of this claim. I will argue that not only do the relevant class of arguments fail, but that the result itself is most naturally understood as having the opposite of a reference-fixing effect — i.e., rather than securing the determinacy of number-theoretic reference, Tennenbaum's Theorem points (...)

We describe the important role that the conjectures and questions posed at the end of the two editions of Gerald Sacks's Degrees of Unsolvability have had in the development of recursion theory over the past thirty years.

In philosophical logic necessity is usually conceived as a sentential operator rather than as a predicate. An intensional sentential operator does not allow one to express quantified statements such as 'There are necessary a posteriori propositions' or 'All laws of physics are necessary' in first-order logic in a straightforward way, while they are readily formalized if necessity is formalized by a predicate. Replacing the operator conception of necessity by the predicate conception, however, causes various problems and forces one to reject (...)

The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of (...)



Earman (1993) distinguishes three notions of empirical indistinguishability and offers a rigorous framework to investigate how each of these notions relates to the problem of underdetermination of theory choice. He uses some of the results obtained in this framework to argue for a version of scientific anti-realism. In the present paper we first criticize Earman's arguments for that position. Secondly, we propose and motivate a modification of Earman's framework and establish several results concerning some of the notions of indistinguishability (...)

We will study several weak axiom systems that use the Subtraction and Division primitives (rather than Addition and Multiplication) to formally encode the theorems of Arithmetic. Provided such axiom systems do not recognize Multiplication as a total function, we will show that it is feasible for them to verify their Semantic Tableaux, Herbrand, and Cut-Free consistencies. If our axiom systems additionally do not recognize Addition as a total function, they will be capable of recognizing the consistency of their Hilbert-style deductive (...)

Chaitin’s incompleteness result related to random reals and the halting probability has been advertised as the ultimate and the strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations. 

It is shown that Fodor's interpretation of the frame problem is the central indication that his version of the Modularity Thesis is incompatible with computationalism. Since computationalism is far more plausible than this thesis, the latter should be rejected.

Presented here is a new result concerning the computational power of so-called SADn computers, a class of Turing-machine-based computers that can perform some non-Turing-computable feats by utilising the geometry of a particular kind of general relativistic spacetime. It is shown that SADn can decide n-quantifier arithmetic but not (n+1)-quantifier arithmetic, a result that reveals how neatly the SADn family maps into the Kleene arithmetical hierarchy. Introduction. Axiomatising computers. The power of SAD computers. Remarks regarding the concept of computability.

In this paper, the issues of computability and constructivity in the mathematics of physics are discussed. The sorts of questions to be addressed are those which might be expressed, roughly, as: Are the mathematical foundations of our current theories unavoidably non-constructive? Or: Are the laws of physics computable?



It is commonly held that the natural numbers sequence 0, 1, 2,... possesses a unique structure. Yet by a well-known model-theoretic argument, there exist nonstandard models of the formal theory which is generally taken to axiomatize all of our practices and intentions pertaining to use of the term “natural number.” Despite the structural similarity of this argument to the influential set-theoretic indeterminacy argument based on the downward Löwenheim–Skolem theorem, most theorists agree that the number-theoretic version (...)

I develop a theory of counterfactuals about relative computability, i.e. counterfactuals such as 'If the validity problem were algorithmically decidable, then the halting problem would also be algorithmically decidable,' which is true, and 'If the validity problem were algorithmically decidable, then arithmetical truth would also be algorithmically decidable,' which is false. These counterfactuals are counterpossibles, i.e. they have metaphysically impossible antecedents. They thus pose a challenge to the orthodoxy about counterfactuals, which would treat them as uniformly true. What’s more, I (...) 