A new computational methodology for executing calculations with infinite and infinitesimal quantities is described in this paper. It is based on the principle ‘The part is less than the whole’ introduced by Ancient Greeks and applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as particular cases of a unique framework. The new methodology has allowed us to introduce the Infinity Computer working with such numbers (its simulator has already been realized). Examples dealing with divergent series, infinite sets, and limits are given.
This paper considers non-standard analysis and a recently introduced computational methodology based on the notion of ①. The latter approach was developed with the intention to allow one to work with infinities and infinitesimals numerically in a unique computational framework and in all the situations requiring these notions. Non-standard analysis is a classical, purely symbolic technique that works with ultrafilters, external and internal sets, standard and non-standard numbers, etc. In turn, the ①-based methodology does not use any of these notions and proposes a more physical treatment of mathematical objects, separating the objects from the tools used to study them. It both offers a possibility to create new numerical methods using infinities and infinitesimals in floating-point computations and allows one to study certain mathematical objects dealing with infinity more accurately than is done traditionally. In these notes, we explain that even though both methodologies deal with infinities and infinitesimals, they are independent and represent two different philosophies of Mathematics that are not in conflict. It is proved that texts (… :539–555, 2017; Gutman and Kutateladze in Sib Math J 49:835–841, 2008; Kutateladze in J Appl Ind Math 5:73–75, 2011) asserting that the ①-based methodology is a part of non-standard analysis unfortunately contain several logical fallacies. Their attempt to show that the ①-based methodology can be formalized within non-standard analysis is similar to trying to show that constructivism can be reduced to traditional Mathematics.
Several ways used to rank countries with respect to medals won during Olympic Games are discussed. In particular, it is shown that the unofficial rank used by the Olympic Committee is the only rank that does not allow one to use a numerical counter for ranking – this rank uses the lexicographic ordering to rank countries: one gold medal is more precious than any number of silver medals, and one silver medal is more precious than any number of bronze medals. How can we quantify what these words, “more precious”, mean? Can we introduce a counter that, for any possible number of medals, would allow us to compute a numerical rank of a country using the number of gold, silver, and bronze medals in such a way that a higher resulting number would put the country in a higher position in the rank? Here we show that it is impossible to solve this problem using the positional numeral system with any finite base. Then we demonstrate that this problem can be easily solved by applying numerical computations with recently developed actual infinite numbers. These computations can be done on a new kind of computer – the recently patented Infinity Computer. Its working software prototype is described briefly and examples of computations are given. It is shown that the new way of counting can be used in all situations where the lexicographic ordering is required.
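The counter described above can be emulated on an ordinary machine. Below is a minimal Python sketch (our illustration, not the Infinity Computer software prototype): the grossone-based rank g·①² + s·① + b, with ① an infinite base exceeding any finite medal count, compares exactly like a coefficient tuple under lexicographic ordering.

```python
# Sketch of the grossone-style medal counter rank = g*(1)^2 + s*(1) + b,
# where (1) denotes the infinite base. Since (1) exceeds any finite medal
# count, comparing the coefficient tuples lexicographically reproduces the
# counter's ordering; Python tuple comparison is already lexicographic.

def medal_rank(gold, silver, bronze):
    """Counter in the infinite base: one gold outweighs any number of silvers,
    one silver outweighs any number of bronzes."""
    return (gold, silver, bronze)

# With any finite base B, a country holding B silvers would wrongly overtake
# one extra gold medal; no finite positional system ranks all medal tables.
countries = {"A": medal_rank(2, 0, 0), "B": medal_rank(1, 1000, 1000)}
ranking = sorted(countries, key=countries.get, reverse=True)
# country A (2 golds) ranks above B despite B's 2000 other medals
```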
The Koch snowflake is one of the first fractals to be described mathematically. It is interesting because it has an infinite perimeter in the limit but a finite limit area. In this paper, a recently proposed computational methodology allowing one to execute numerical computations with infinities and infinitesimals is applied to study the Koch snowflake at infinity. Numerical computations with actual infinite and infinitesimal numbers can be executed on the Infinity Computer, a new supercomputer patented in the USA and the EU. It is revealed in the paper that at infinity the snowflake is not unique, i.e., different snowflakes can be distinguished for different infinite numbers of steps executed during the process of their generation. It is then shown that for any given infinite number n of steps it becomes possible to calculate the exact infinite number, Nn, of sides of the snowflake, the exact infinitesimal length, Ln, of each side, and the exact infinite perimeter, Pn, of the Koch snowflake as the result of multiplying the infinite Nn by the infinitesimal Ln. It is established that for different infinite n and k the infinite perimeters Pn and Pk are also different, and the difference can be infinite. It is shown that the finite areas An and Ak of the snowflakes can also be calculated exactly (up to infinitesimals) for different infinite n and k, and the difference An − Ak turns out to be infinitesimal. Finally, snowflakes constructed starting from different initial conditions are also studied and their quantitative characteristics at infinity are computed.
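For finite n these quantities have standard closed forms (Nn = 3·4ⁿ, Ln = s/3ⁿ, Pn = Nn·Ln, plus the usual area formula), which the paper evaluates at infinite n on the Infinity Computer. A hedged finite-n sketch in Python follows; taking the initial side length s = 1 is our assumption for illustration.

```python
from fractions import Fraction
from math import sqrt

def koch_quantities(n, s=1):
    """After n generation steps on a triangle of side s: number of sides N_n,
    side length L_n, perimeter P_n = N_n * L_n, and area A_n (standard closed
    forms; the paper evaluates them at infinite n, where L_n becomes
    infinitesimal and P_n infinite)."""
    N = 3 * 4**n
    L = Fraction(s, 3**n)
    P = N * L                      # equals 3*s*(4/3)**n, grows without bound
    A = (sqrt(3) / 4) * s**2 * (1 + Fraction(3, 5) * (1 - Fraction(4, 9)**n))
    return N, L, P, A

N1, L1, P1, A1 = koch_quantities(1)   # 12 sides of length 1/3, perimeter 4
```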
In this survey, a recent computational methodology paying special attention to the separation of mathematical objects from the numeral systems involved in their representation is described. It has been introduced with the intention to allow one to work with infinities and infinitesimals numerically in a unique computational framework in all the situations requiring these notions. The methodology does not contradict Cantor’s and non-standard analysis views and is based on Euclid’s Common Notion no. 5, “The whole is greater than the part”, applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses a computational device called the Infinity Computer (patented in the USA and the EU) working numerically (recall that traditional theories work with infinities and infinitesimals only symbolically) with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix. It is argued that the numeral systems involved in computations limit our capabilities to compute and lead to ambiguities in theoretical assertions as well. The introduced methodology gives the possibility to use the same numeral system for measuring infinite sets, working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. (recall that traditionally different numerals – the lemniscate ∞, Aleph zero, etc. – are used in different situations related to infinity). Numerous numerical examples and theoretical illustrations are given. The accuracy of the achieved results is continuously compared with that obtained by traditional tools used to work with infinities and infinitesimals. In particular, it is shown that the new approach allows one to observe mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with a higher accuracy than is done by traditional tools. It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of the traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems.
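As an illustration of a positional system with an infinite radix, numbers of the form c₁·①^p₁ + … + cₘ·①^pₘ can be modelled on a conventional machine as maps from grosspowers to coefficients. This is only a toy sketch under that representational assumption, not the arithmetic of the patented device:

```python
from collections import defaultdict

def gadd(x, y):
    """Add two grossone-style numbers given as {grosspower: coefficient},
    e.g. {1: 1, 0: -4} stands for 1*(1)^1 - 4*(1)^0."""
    out = defaultdict(float)
    for num in (x, y):
        for p, c in num.items():
            out[p] += c
    return {p: c for p, c in out.items() if c != 0}

def gmul(x, y):
    """Multiply term by term, using (1)^a * (1)^b = (1)^(a+b)."""
    out = defaultdict(float)
    for pa, ca in x.items():
        for pb, cb in y.items():
            out[pa + pb] += ca * cb
    return {p: c for p, c in out.items() if c != 0}

# (1) - 1 (an infinite number minus a finite unit) plus 1 restores (1):
# finite and infinite parts are kept separate instead of being absorbed.
restored = gadd(gadd({1: 1}, {0: -1}), {0: 1})
```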
There exists a huge number of numerical methods that iteratively construct approximations to the solution y(x) of an ordinary differential equation (ODE) y′(x) = f(x,y) starting from an initial value y_0 = y(x_0) and using a finite approximation step h that influences the accuracy of the obtained approximation. In this paper, a new framework for solving ODEs is presented for a new kind of computer – the Infinity Computer (it has been patented and its working prototype exists). The new computer is able to work numerically with finite, infinite, and infinitesimal numbers, thus giving the possibility to use different infinitesimals numerically and, in particular, to take advantage of infinitesimal values of h. To show the potential of the new framework, a number of results are established. It is proved that the Infinity Computer is able to calculate derivatives of the solution y(x) and to reconstruct its Taylor expansion to a desired order numerically, without finding the respective derivatives analytically (or symbolically) by successive differentiation of the ODE, as is usually done when the Taylor method is applied. Methods using approximations of derivatives obtained thanks to infinitesimals are discussed, and a technique for automatic control of rounding errors is introduced. Numerical examples are given.
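The central claim above – recovering the Taylor coefficients of y numerically without symbolically differentiating the ODE – can be mimicked in ordinary floating point with truncated power-series (Picard) iteration. This is a sketch of the idea only: `smul` and `ode_taylor` are illustrative names of ours, and the Infinity Computer's actual mechanism with infinitesimal h is not reproduced here.

```python
def smul(a, b, order):
    """Truncated Cauchy product of two power series given as coefficient lists."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b))
            for k in range(order)]

def ode_taylor(f_series, y0, order):
    """Taylor coefficients of y solving y'(x) = f(x, y), y(0) = y0.
    f_series maps the coefficient lists of x and y to that of f(x, y);
    each pass integrates the series term by term, so no derivative of f
    is ever formed symbolically."""
    y = [y0] + [0.0] * (order - 1)
    x = [0.0, 1.0] + [0.0] * (order - 2)
    for _ in range(order):                      # Picard fixed-point iteration
        rhs = f_series(x, y)
        y = [y0] + [rhs[k] / (k + 1) for k in range(order - 1)]
    return y

# y' = x*y, y(0) = 1 has solution exp(x**2 / 2) = 1 + x**2/2 + x**4/8 + ...
coeffs = ode_taylor(lambda x, y: smul(x, y, len(y)), 1.0, 5)
```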
A computational methodology called Grossone Infinity Computing, introduced with the intention to allow one to work with infinities and infinitesimals numerically, has been applied recently to a number of problems in numerical mathematics (optimization, numerical differentiation, numerical algorithms for solving ODEs, etc.). The possibility to use a specially developed computational device called the Infinity Computer (patented in the USA and the EU) for working with infinite and infinitesimal numbers numerically gives an additional advantage to this approach in comparison with traditional methodologies studying infinities and infinitesimals only symbolically. The grossone methodology uses Euclid’s Common Notion no. 5, ‘The whole is greater than the part’, and applies it to finite, infinite, and infinitesimal quantities and to finite and infinite sets and processes. It does not contradict Cantor’s and non-standard analysis views on infinity and can be considered as an applied development of their ideas. In this paper we consider infinite series, and particular attention is dedicated to divergent series with alternating signs. The Riemann series theorem states that conditionally convergent series can be rearranged in such a way that they either diverge or converge to an arbitrary real number. It is shown here that Riemann’s result is a consequence of the fact that the symbol ∞ used traditionally does not allow us to express quantitatively the number of addends in a series; in other words, it just shows that the number of summands is infinite and does not allow us to count them. The usage of the grossone methodology allows us to see that (as happens in the case where the number of addends is finite) rearrangements do not change the result for any sum with a fixed infinite number of summands.
Some traditional summation techniques, such as Ramanujan summation, are also considered; they produce results in which negative values are assigned to divergent series containing infinitely many positive integers. It is shown that careful counting of the number of addends in infinite series allows us to avoid results of this kind if grossone-based numerals are used.
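The point about fixed numbers of addends can be illustrated finitely (this Python sketch is our illustration, not material from the paper): the value of 1 − 1 + 1 − … depends only on how many addends are taken, and summing a fixed collection of addends in any order yields one value.

```python
import random

def grandi_sum(n_terms):
    """Sum of the first n terms of 1 - 1 + 1 - 1 + ...; the value depends
    only on whether the count of addends is even or odd."""
    return sum((-1)**k for k in range(n_terms))

def rearranged_sum(terms, seed=0):
    """Summing a *fixed* finite collection of addends in any order gives
    the same value; the grossone claim is the analogue for a fixed
    infinite number of summands."""
    shuffled = list(terms)
    random.Random(seed).shuffle(shuffled)
    return sum(shuffled)

terms = [(-1)**k for k in range(10**4)]   # a fixed (here finite) count
# grandi_sum(10**4) == 0 and grandi_sum(10**4 + 1) == 1: without counting
# the addends, "the sum of 1 - 1 + 1 - ..." is ambiguous between the two.
```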
There exist many applications where it is necessary to approximate numerically derivatives of a function which is given by a computer procedure. In particular, all the fields of optimization have a special interest in this kind of information. In this paper, a new way to do this is presented for a new kind of computer – the Infinity Computer – able to work numerically with finite, infinite, and infinitesimal numbers. It is proved that the Infinity Computer is able to calculate values of derivatives of higher order for a wide class of functions represented by computer procedures. It is shown that the ability to compute derivatives of arbitrary order automatically and accurate to working precision is an intrinsic property of the Infinity Computer related to its way of functioning. Numerical examples illustrating the new concepts and numerical tools are given.
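A well-known finite analogue of evaluating a procedure at a point shifted by an infinitesimal is forward-mode automatic differentiation with dual numbers (a nilpotent ε with ε² = 0). The sketch below shows that analogue, not the Infinity Computer's own arithmetic; the class name `Dual` is ours.

```python
class Dual:
    """Number a + b*eps with eps**2 == 0; evaluating a procedure at
    Dual(x, 1) propagates the exact first derivative in the eps part."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.re + other.re, self.eps + other.eps)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)
    __rmul__ = __mul__

def derivative(procedure, x):
    """f'(x), exact to working precision, for procedures built from + and *."""
    return procedure(Dual(x, 1.0)).eps

# f(x) = x**3 + 2x written as a computer procedure using + and *
f = lambda x: x * x * x + 2 * x
# derivative(f, 2.0) -> 3*2**2 + 2 = 14.0
```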
Many biological processes and objects can be described by fractals. The paper uses a new type of object – blinking fractals – that are not covered by traditional theories considering the dynamics of self-similar processes. It is shown that both traditional and blinking fractals can be successfully studied by a recent approach allowing one to work numerically with infinite and infinitesimal numbers. It is shown that blinking fractals can be applied to model complex processes of growth of biological systems, including their seasonal changes. The new approach allows one to give various quantitative characteristics of the obtained blinking-fractal models of biological systems.
In this paper, a number of traditional models related to percolation theory have been considered by means of a new computational methodology that does not use Cantor’s ideas and describes infinite and infinitesimal numbers in accordance with the principle ‘The part is less than the whole’. It gives the possibility to work with finite, infinite, and infinitesimal quantities numerically by using a new kind of computer – the Infinity Computer – introduced recently in [18]. The new approach does not contradict Cantor. In contrast, it can be viewed as an evolution of his deep ideas regarding the existence of different infinite numbers in a more applied way. Site percolation and gradient percolation have been studied by applying the new computational tools. It has been established that in an infinite system the phase transition point is not really a point, as in the traditional approach. In light of the new arithmetic it appears as a critical interval rather than a critical point. Depending on the “microscope” we use, this interval can be regarded as finite, infinite, or infinitesimally short. Using the new approach, we observe that in the vicinity of the percolation threshold there are many different infinite clusters instead of the single infinite cluster that appears in the traditional consideration.
The importance of the prime factorization problem is very well known (e.g., many security protocols are based on the impossibility of fast factorization of integers on traditional computers). Given a number k, it is necessary to find two primes a and b such that k = a · b. Usually, k is written in a positional numeral system. However, there exists a variety of numeral systems that can be used to represent numbers. Is it true that prime factorization is difficult in any numeral system? In this paper, a numeral system with partial carrying is described. It is shown that this system contains numerals allowing one to reduce the problem of prime factorization to solving [K/2] − 1 systems of equations, where K is the number of digits in k (the concept of digit in this system is more complex than the traditional one) and [u] is the integer part of u. Thus, it is shown that the difficulty of prime factorization lies not in the problem itself but in the fact that the positional numeral system is traditionally used to represent the numbers participating in the factorization. Obviously, this does not mean that P = NP, since it is not known whether it is possible to rewrite a number given in the traditional positional numeral system in the new one in polynomial time.
The paper investigates how the mathematical languages used to describe and to observe automatic computations influence the accuracy of the obtained results. In particular, we focus our attention on Single and Multi-tape Turing machines which are described and observed through the lens of a new mathematical language which is strongly based on three methodological ideas borrowed from Physics and applied to Mathematics, namely: the distinction between the object (we speak here about a mathematical object) of an observation and the instrument used for this observation; interrelations holding between the object and the tool used for the observation; the accuracy of the observation determined by the tool. Results of the observation executed by the traditional and new languages are compared and discussed.
A new computational methodology allowing one to work in a new way with infinities and infinitesimals is presented in this paper. The new approach, among other things, gives the possibility to calculate the number of elements of certain infinite sets, avoids indeterminate forms and various kinds of divergences. This methodology has been used by the author as a starting point in developing a new kind of computer – the Infinity Computer – able to execute computations and to store in its memory not only finite numbers but also infinite and infinitesimal ones.
In this paper, I consider a family of three-valued regular logics: the well-known strong and weak S.C. Kleene’s logics and two intermediate logics, where one was discovered by M. Fitting and the other by E. Komendantskaya. All these systems were originally presented semantically and based on the theory of recursion. However, their proof theory is still not fully developed. Thus, natural deduction systems have been built only for strong Kleene’s logic, both with one designated value (A. Urquhart, G. Priest, A. Tamminga) and with two designated values (G. Priest, B. Kooi, A. Tamminga). The purpose of this paper is to provide natural deduction systems for weak and intermediate regular logics, both with one and two designated values.
The article deals with the Aristotelian doctrine of induction and its influence on the theory of induction of Al-Farabi. Inductive syllogisms of antiquity and the Middle Ages are compared with modern inferences by induction.
Historically, laws and policies to criminalize drug use or possession were rooted in explicit racism, and they continue to wreak havoc on certain racialized communities. We are a group of bioethicists, drug experts, legal scholars, criminal justice researchers, sociologists, psychologists, and other allied professionals who have come together in support of a policy proposal that is evidence-based and ethically recommended. We call for the immediate decriminalization of all so-called recreational drugs and, ultimately, for their timely and appropriate legal regulation. We also call for criminal convictions for nonviolent offenses pertaining to the use or possession of small quantities of such drugs to be expunged, and for those currently serving time for these offenses to be released. In effect, we call for an end to the “war on drugs.”
Despite being assailed for decades by disability activists and disability studies scholars spanning the humanities and social sciences, the medical model of disability—which conceptualizes disability as an individual tragedy or misfortune due to genetic or environmental insult—still today structures many cases of patient–practitioner communication. Synthesizing and recasting work done across critical disability studies and philosophy of disability, I argue that the reason the medical model of disability remains so gallingly entrenched is due to what I call the “ableist conflation” of disability with pain and suffering. In an effort to better equip healthcare practitioners and those invested in health communication to challenge disability stigma, discrimination, and oppression, I lay out the logic of the ableist conflation and interrogate examples of its use. I argue that insofar as the semiosis of pain and suffering is structured by the lived experience of unwelcome bodily transition or variation, experiences of pain inform the ableist conflation by preemptively tying such variability and its attendant disequilibrium to disability. I conclude by discussing how philosophy of disability and critical disability studies might better inform health communication concerning disability, offering a number of conceptual distinctions toward that end.
Heidegger, Art, and Postmodernity offers a radical new interpretation of Heidegger's later philosophy, developing his argument that art can help lead humanity beyond the nihilistic ontotheology of the modern age. Providing pathbreaking readings of Heidegger's 'The Origin of the Work of Art' and his notoriously difficult Contributions to Philosophy, this book explains precisely what postmodernity meant for Heidegger, the greatest philosophical critic of modernity, and what it could still mean for us today. Exploring these issues, Iain D. Thomson examines several postmodern works of art, including music, literature, painting and even comic books, from a post-Heideggerian perspective. Clearly written and accessible, this book will help readers gain a deeper understanding of Heidegger and his relation to postmodern theory, popular culture and art.
This is the first chapter to our edited collection of essays on the nature and ethics of blame. In this chapter we introduce the reader to contemporary discussions about blame and its relationship to other issues (e.g. free will and moral responsibility), and we situate the essays in this volume with respect to those discussions.
The agent portrayed in much philosophy of action is, let's face it, a square. He does nothing intentionally unless he regards it or its consequences as desirable. The reason is that he acts intentionally only when he acts out of a desire for some anticipated outcome; and in desiring that outcome, he must regard it as having some value. All of his intentional actions are therefore directed at outcomes regarded sub specie boni: under the guise of the good. This agent is conceived as being capable of intentional action—and hence as being an agent—only by virtue of being a pursuer of value. I want to question whether this conception of agency can be correct. Surely, so general a capacity as agency cannot entail so narrow a cast of mind. Our moral psychology has characterized, not the generic agent, but a particular species of agent, and a particularly bland species of agent, at that. It has characterized the earnest agent while ignoring those agents who are disaffected, refractory, silly, satanic, or punk. I hope for a moral psychology that has room for the whole motley crew. I shall begin by examining why some philosophers have thought that the attitudes motivating intentional actions involve judgments of value. I shall then argue that their conception of these attitudes is incorrect. Finally, I shall argue that practical reason should not be conceived as a faculty for pursuing value.
Clark and Chalmers (1998) defend the hypothesis of an ‘Extended Mind’, maintaining that beliefs and other paradigmatic mental states can be implemented outside the central nervous system or body. Aspects of the problem of ‘language acquisition’ are considered in the light of the extended mind hypothesis. Rather than ‘language’ as typically understood, the object of study is something called ‘utterance-activity’, a term of art intended to refer to the full range of kinetic and prosodic features of the on-line behaviour of interacting humans. It is argued that utterance activity is plausibly regarded as jointly controlled by the embodied activity of interacting people, and that it contributes to the control of their behaviour. By means of specific examples it is suggested that this complex joint control facilitates easier learning of at least some features of language. This in turn suggests a striking form of the extended mind, in which infants’ cognitive powers are augmented by those of the people with whom they interact.
Open peer commentary on the article “Sensorimotor Direct Realism: How We Enact Our World” by Michael Beaton. Upshot: In light of the construal of sensorimotor theory offered by the target article, this commentary examines the role the theory should admit for internal representation.
Despite playing an important role in epistemology, philosophy of science, and more recently in moral philosophy and aesthetics, the nature of understanding is still much contested. One attractive framework attempts to reduce understanding to other familiar epistemic states. This paper explores and develops a methodology for testing such reductionist theories before offering a counterexample to a recently defended variant on which understanding reduces to what an agent knows.
According to a widespread view in metaphysics and philosophy of science, all explanations involve relations of ontic dependence between the items appearing in the explanandum and the items appearing in the explanans. I argue that a family of mathematical cases, which I call “viewing-as explanations”, are incompatible with the Dependence Thesis. These cases, I claim, feature genuine explanations that aren’t supported by ontic dependence relations. Hence the thesis isn’t true in general. The first part of the paper defends this claim and discusses its significance. The second part of the paper considers whether viewing-as explanations occur in the empirical sciences, focusing on the case of so-called fictional models. It’s sometimes suggested that fictional models can be explanatory even though they fail to represent actual worldly dependence relations. Whether or not such models explain, I suggest, depends on whether we think scientific explanations necessarily give information relevant to intervention and control. Finally, I argue that counterfactual approaches to explanation also have trouble accommodating viewing-as cases.
Recent iterations of Alvin Plantinga’s “evolutionary argument against naturalism” bear a surprising resemblance to a famous argument in Descartes’s Third Meditation. Both arguments conclude that theists have an epistemic advantage over atheists/naturalists vis-à-vis the question whether or not our cognitive faculties are reliable. In this paper, I show how these arguments bear an even deeper resemblance to each other. After bringing the problem of evil to bear negatively on Descartes’s argument, I argue that, given these similarities, atheists can wield a recent solution to the problem of evil against theism in much the way Plantinga wields the details of evolutionary theory against naturalism. I conclude that Plantinga and Descartes give us insufficient reason for thinking theists are in a better epistemic position than atheists and naturalists vis-à-vis the question whether or not our cognitive faculties are reliable.
The problem of truth in fiction concerns how to tell whether a given proposition is true in a given fiction. Thus far, the nearly universal consensus has been that some propositions are ‘implicitly true’ in some fictions: such propositions are not expressed by any explicit statements in the relevant work, but are nevertheless held to be true in those works on the basis of some other set of criteria. I call this family of views ‘implicitism’. I argue that implicitism faces serious problems, whereas the opposite view is much more plausible than has previously been thought. After mounting a limited defence of explicitism, I explore a difficult problem for the view and discuss some possible responses.
In this paper, I develop and defend a new adverbial theory of perception. I first present a semantics for direct-object perceptual reports that treats their object positions as supplying adverbial modifiers, and I show how this semantics definitively solves the many-property problem for adverbialism. My solution is distinctive in that it articulates adverbialism from within a well-established formal semantic framework and ties adverbialism to a plausible semantics for perceptual reports in English. I then go on to present adverbialism as a theory of the metaphysics of perception. The metaphysics I develop treats adverbial perception as a directed activity: it is an activity with success conditions. When perception is successful, the agent bears a relation to a concrete particular, but perception need not be successful; this allows perception to be fundamentally non-relational. The result is a novel formulation of adverbialism that eliminates the need for representational contents, but also treats successful and unsuccessful perceptual events as having a fundamental common factor.
Traditionally, Aristotle is held to believe that philosophical contemplation is valuable for its own sake, but ultimately useless. In this volume, Matthew D. Walker offers a fresh, systematic account of Aristotle's views on contemplation's place in the human good. The book situates Aristotle's views against the background of his wider philosophy, and examines the complete range of available textual evidence. On this basis, Walker argues that contemplation also benefits humans as perishable living organisms by actively guiding human life activity, including human self-maintenance. Aristotle's views on contemplation's place in the human good thus cohere with his broader thinking about how living organisms live well. A novel exploration of Aristotle's views on theory and practice, this volume will interest scholars and students of both ancient Greek ethics and natural philosophy. It will also appeal to those working in other disciplines including classics, ethics, and political theory.
This article defends the Doomsday Argument, the Halfer Position in Sleeping Beauty, the Fine-Tuning Argument, and the applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics. It will argue that all four problems have the same structure, and it gives a unified treatment that uses simple models of the cases and no controversial assumptions about confirmation or self-locating evidence. The article will argue that the troublesome feature of all these cases is not self-location but selection effects.
David Stove reviews Selwyn Grave's History of Philosophy in Australia, and praises philosophers for thinking harder about the bases of science, mathematics and medicine than the practitioners in the field. The review is reprinted as an appendix to James Franklin's Corrupting the Youth: A History of Philosophy in Australia.
Philosophers of science since Nagel have been interested in the links between intertheoretic reduction and explanation, understanding and other forms of epistemic progress. Although intertheoretic reduction is widely agreed to occur in pure mathematics as well as empirical science, the relationship between reduction and explanation in the mathematical setting has rarely been investigated in a similarly serious way. This paper examines an important particular case: the reduction of arithmetic to set theory. I claim that the reduction is unexplanatory. In defense of this claim, I offer evidence from mathematical practice, and I respond to contrary suggestions due to Steinhart, Maddy, Kitcher and Quine. I then show how, even if set-theoretic reductions are generally not explanatory, set theory can nevertheless serve as a legitimate foundation for mathematics. Finally, some implications of my thesis for philosophy of mathematics and philosophy of science are discussed. In particular, I suggest that some reductions in mathematics are probably explanatory, and I propose that differing standards of theory acceptance might account for the apparent lack of unexplanatory reductions in the empirical sciences.
Following neo-Aristotelians Alasdair MacIntyre and Martha Nussbaum, we claim that humans are story-telling animals who learn from the stories of diverse others. Moral agents use rational emotions, such as compassion which is our focus here, to imaginatively reconstruct others’ thoughts, feelings and goals. In turn, this imaginative reconstruction plays a crucial role in deliberating and discerning how to act. A body of literature has developed in support of the role narrative artworks (i.e. novels and films) can play in allowing us the opportunity to engage imaginatively and sympathetically with diverse characters and scenarios in a safe protected space that is created by the fictional world. By practising what Nussbaum calls a ‘loving attitude’, her version of ethical attention, we can form virtuous habits that lead to phronesis (practical wisdom). In this paper, and taking compassion as an illustrative focus, we examine the ways that students’ moral education might usefully develop from engaging with narrative artworks through Philosophy for Children (P4C), where philosophy is a praxis, conducted in a classroom setting using a Community of Inquiry (CoI). We argue that narrative artworks provide useful stimulus material to engage students, generate student questions, and motivate philosophical dialogue and the formation of good habits which, in turn, supports the argument for philosophy to be taught in schools.
Gauss’s quadratic reciprocity theorem is among the most important results in the history of number theory. It’s also among the most mysterious: since its discovery in the late 18th century, mathematicians have regarded reciprocity as a deeply surprising fact in need of explanation. Intriguingly, though, there’s little agreement on how the theorem is best explained. Two quite different kinds of proof are most often praised as explanatory: an elementary argument that gives the theorem an intuitive geometric interpretation, due to Gauss and Eisenstein, and a sophisticated proof using algebraic number theory, due to Hilbert. Philosophers have yet to look carefully at such explanatory disagreements in mathematics. I do so here. According to the view I defend, there are two important explanatory virtues—depth and transparency—which different proofs (and other potential explanations) possess to different degrees. Although not mutually exclusive in principle, the packages of features associated with the two stand in some tension with one another, so that very deep explanations are rarely transparent, and vice versa. After developing the theory of depth and transparency and applying it to the case of quadratic reciprocity, I draw some morals about the nature of mathematical explanation.
This article presents modal versions of resource-conscious logics. We concentrate on extensions of variants of linear logic with one minimal non-normal modality. In earlier work, where we investigated agency in multi-agent systems, we have shown that the results scale up to logics with multiple non-normal modalities. Here, we start with the language of propositional intuitionistic linear logic without the additive disjunction, to which we add a modality. We provide an interpretation of this language on a class of Kripke resource models extended with a neighbourhood function: modal Kripke resource models. We propose a Hilbert-style axiomatisation and a Gentzen-style sequent calculus. We show that the proof theories are sound and complete with respect to the class of modal Kripke resource models. We show that the sequent calculus admits cut elimination and that proof-search is in PSPACE. We then show how to extend the results when non-commutative connectives are added to the language. Finally, we put the l..
This paper engages critically with anti-representationalist arguments pressed by prominent enactivists and their allies. The arguments in question are meant to show that the “as-such” and “job-description” problems constitute insurmountable challenges to causal-informational theories of mental content. In response to these challenges, a positive account of what makes a physical or computational structure a mental representation is proposed; the positive account is inspired partly by Dretske’s views about content and partly by the role of mental representations in contemporary cognitive scientific modeling.
We maintain that in many contexts promising to try is expressive of responsibility as a promiser. This morally significant application of promising to try speaks in favor of the view that responsible promisers favor evidentialism about promises. Contra Berislav Marušić, we contend that responsible promisers typically withdraw from promising to act and instead promise to try, in circumstances in which they recognize that there is a significant chance that they will not succeed.
Recent years have seen fresh impetus brought to debates about the proper role of statistical evidence in the law. Recent work largely centres on a set of puzzles known as the ‘proof paradox’. While these puzzles may initially seem academic, they have important ramifications for the law: raising key conceptual questions about legal proof, and practical questions about DNA evidence. This article introduces the proof paradox, why we should care about it, and new work attempting to resolve it.
Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist, and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and space travel. Negative effects were found for military affairs (specifically rogue actor violence) and AI. The net effect for surveillance was ambiguous. The effects for the environment, military affairs, and AI appear to be the largest, with the environment perhaps being the largest of these, suggesting that APM would be net beneficial to society. However, these factors are not well quantified and no definitive conclusion can be made. One conclusion that can be reached is that if APM R&D is pursued, it should go hand-in-hand with effective governance strategies to increase the benefits and reduce the harms.