So-called “traditional epistemology” and “Bayesian epistemology” share a word, but it may often seem that the enterprises hardly share a subject matter. They differ in their central concepts. They differ in their main concerns. They differ in their main theoretical moves. And they often differ in their methodology. However, in the last decade or so, there have been a number of attempts to build bridges between the two epistemologies. Indeed, many would say that there is just one branch of philosophy here—epistemology. There is a common subject matter after all. In this paper, we begin by playing the role of a “bad cop,” emphasizing many apparent points of disconnection, and even conflict, between the approaches to epistemology. We then switch roles, playing a “good cop” who insists that the approaches are engaged in common projects after all. We look at various ways in which the gaps between them have been bridged, and we consider the prospects for bridging them further. We conclude that this is an exciting time for epistemology, as the two traditions can learn, and have started learning, from each other.
In the situation known as the “cable guy paradox” the expected utility principle and the “avoid certain frustration” principle (ACF) seem to give contradictory advice about what one should do. This article tries to resolve the paradox by presenting an example that weakens the grip of ACF: a modified version of the cable guy problem is introduced in which the choice dictated by ACF loses much of its intuitive appeal.
Alan Hájek launches a formidable attack on the idea that deliberation crowds out prediction – that when we are deliberating about what to do, we cannot rationally accommodate evidence about what we are likely to do. Although Hájek rightly diagnoses the problems with some of the arguments for the view, his treatment falls short in crucial ways. In particular, he fails to consider the most plausible version of the view, the best argument for it, and why anyone would ever believe it in the first place. In doing so, he misses a deep puzzle about deliberation and prediction – a puzzle which all of us, as agents, face, and which we may be able to resolve by recognizing the complicated relationship between deliberation and prediction.
Preface. The problems of causal dependencies, their modeling, and their discovery, after a long absence from the philosophy and methodology of science, now attract considerable interest. This is connected above all with the dynamic development of computational techniques, especially since the 1990s. Bayesian networks, developed during this period, are regarded as the mathematical language of causality. They permit far-reaching automation of inference, which also encourages attempts to algorithmize the discovery of causes. For scientific research that allows randomized experiments, standard methods of establishing causal dependencies were developed at the beginning of the twentieth century. The situation is entirely different for non-experimental research, where comparable solutions remain a matter for the future. The task of this book is to state the conditions that such solutions should satisfy, and to formulate a procedural criterion of causal dependence as a specific realization of those conditions. This criterion carries weighty consequences for the philosophy and methodology of science, which are revealed in the outline of procedural methodology given in Part II. The literature lacks a reasonably comprehensive and systematic discussion of the most recent philosophical and methodological debates on causality, which may explain why at some points in this book I report in detail on source texts that are difficult to access. I use the adjective "procedural" in a narrower sense than Huw Price (in whose works the term "criterial" would be more apt) in order to emphasize, in accordance with the Latin root procedo, that establishing a cause requires scientists to undertake specific interactions with the reality under study.
I presented the germ of the idea developed in this book at the "Philosophy and Probability" philosophical workshop in 2002, organized by the Department of Philosophy of the University of Konstanz. I am grateful to the participants of that workshop for their comments, above all to Luc Bovens, Brandon Fitelson, Alan Hájek, Stephan Hartmann, and Jon Williamson. At the international conference "Analytical Pragmatism," organized in Lublin in 2003 by the Faculty of Philosophy of the Catholic University of Lublin, I related my conception to the work of Nancy Cartwright. Huw Price's commentary on my paper, and the discussion I had with him, proved particularly inspiring. I presented the conception of procedural methodology against the broader background of the contemporary empiricist current in the philosophy of science in 2004 at the "5th Quadrennial Fellows Conference," organized by the Institute of Philosophy of the Jagiellonian University and the Center for Philosophy of Science in Pittsburgh. Particularly helpful for my further work were the comments of James Bogen, Janet Kourany, James Lennox, John Norton, Thomas Bonk, Jan Woleński, and John Worrall, for which I express my gratitude. The body of the book was written during my stay at the Center for Philosophy of Science in Pittsburgh, which I held as a fellow of the Foundation for Polish Science in the academic year 2004-2005. During that time I took part in the scholarly life of the Center and in the research of the group at the Department of Philosophy of Carnegie Mellon University in Pittsburgh led by Clark Glymour. To him I address my thanks for many helpful comments on my talks and texts, and for discussions above all with him and his closest collaborators, Peter Spirtes and Richard Scheines, as well as with the other members of that group, its doctoral students, and the participants of the research seminar "Causality in the Social Sciences."
For many years of support, for the multifaceted inspiration that accompanied the writing of this book, and for numerous helpful comments on its earlier versions, I thank above all Rev. Prof. Andrzej Bronk and Rev. Prof. Józef Herbut, co-director of the doctoral seminar in the Chair of Methodology of Science at the John Paul II Catholic University of Lublin, as well as the other participants of that seminar. I thank my wife, Dr. Anna Kawalec, for the great effort she put into improving the book's editing, both linguistic and substantive. This book can be read in several ways. Readers interested primarily in conducting empirical research I would advise to begin with Chapter 2, continue with the remaining chapters of Part I, and then turn to the Appendices. Readers interested in problems of the philosophy and methodology of science I would advise to begin with Part II, supplement it with Chapter 2, and then read the Introduction and the Conclusion. Readers less interested in theoretical issues I would advise to start with the fascinating history of John Snow's discovery of the causes of cholera, which I reconstruct in Chapter 1, and then to proceed to the Introduction and the Conclusion, which present the solutions proposed here in a less specialized way. The text of the book has not been published before, with the exception of certain fragments of Chapters 8 and 9, which appeared in modified form in Roczniki Filozoficzne (Kawalec 2004). Lublin, February 2006.
In recent years, sequencing technologies have enabled the identification of a wide range of non-coding RNAs (ncRNAs). Unfortunately, annotation and integration of ncRNA data have lagged behind their identification. Given the large quantity of information being obtained in this area, there is an urgent need to integrate what is being discovered by a broad range of relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a systematically structured and precisely defined controlled vocabulary for the domain of ncRNAs, thereby facilitating the discovery, curation, analysis, and exchange of, and reasoning about, data on the structures of ncRNAs, their molecular and cellular functions, and their impacts upon phenotypes. The goal of NCRO is to serve as a common resource for annotations of diverse research in a way that will significantly enhance integrative and comparative analysis of the myriad resources currently housed in disparate sources. It is our belief that the NCRO ontology can perform an important role in the comprehensive unification of ncRNA biology and, indeed, fill a critical gap in both the Open Biological and Biomedical Ontologies (OBO) Library and the National Center for Biomedical Ontology (NCBO) BioPortal. Our initial focus is on the ontological representation of small regulatory ncRNAs, which we see as the first step in providing a resource for the annotation of data about all forms of ncRNAs. The NCRO ontology is free and open to all users, accessible at: http://purl.obolibrary.org/obo/ncro.owl.
In this book Alan Haworth tends to sneer at libertarians. However, there are, I believe, a few sound criticisms. I have always held similar opinions of Murray Rothbard's and Friedrich Hayek's definitions of liberty and coercion, Robert Nozick's account of natural rights, and Hayek's spontaneous-order arguments. I urge believers of these positions to read Haworth. But I don't personally know many libertarians who believe them (or who regard Hayek as a libertarian).
The Early Han enjoyed some prosperity while it struggled with centralization and political control of the kingdom. The Later Han was plagued by court intrigue, corrupt eunuchs, and massive flooding of the Yellow River, which eventually culminated in popular uprisings that led to the demise of the dynasty. The period that followed was a renewed warring-states period that likewise stimulated a rebirth of philosophical and religious debate, growth, and innovation. Alan K. L. Chan and Yuet-Keung Lo's Philosophy and Religion in Early Medieval China is a welcome addition to the growing body of literature on medieval China. It is a companion volume to their coauthored work, Interpretation and Literature in Early ..
Drawing from the results of various case studies conducted in India, Japan, China, Korea, and New York, the author focuses on the cultural interplay of Asian and American individualities. This century has also witnessed barbarous acts of terrorism. Taking the partition of India and Pakistan and the 9/11 tragedy as his points of departure, he traces the trauma and dissociation these events entailed.
This is one of the best popular cosmology books ever written, and Guth is now (2016) a top physics professor at MIT. He tells the extremely complex story of inflation and related areas of particle physics in such an absorbing style that it reads like a detective novel; in fact, it is a detective novel about how he and others found out how the universe started. The interweaving of his personal story and that of many colleagues, along with their photos and many wonderfully clear diagrams, allows just the right amount of relaxation from the intensity of the physics. In places the style reminds one of Watson's famous book "The Double Helix". He tells how his work on magnetic monopoles and spontaneous symmetry breaking led to the discovery of the inflationary theory of the very early universe (ca. 10^-35 seconds!). Along the way you will learn many gems that should stay with you a long time, such as: the observed universe (e.g., everything the Hubble telescope etc. can see out to ca.
15 billion light years when the universe began) is likely just a vanishingly tiny part of the entire inhomogeneous universe, which is about 10^23 times larger; the big bang probably took place simultaneously and homogeneously in our observed universe; there probably have been, and will continue to be, an infinite number of big bangs in an infinite number of universes for an infinite time; when a bang happens, everything (space, time, all the elements) from the previous universe is destroyed; the stretching of space can happen at speeds much greater than the speed of light; our entire observed universe lies in a single bubble out of an endless number, so there may be trillions of trillions just in our own entire (pocket) universe (and there may be an endless number of such universes); none of these infinite number of universes interact, i.e., we can never find out anything about the others; each universe started with its own big bang and will eventually collapse to create a new big bang; all this implies that the whole universe is fractal in nature and thus infinitely regresses to ever more universes (which can lead one to think of it as a giant hologram); disagreements between the endless (hundreds at least) variations of inflation are sometimes due to a lack of awareness that different definitions of time are being used; some theories suggest that there was a first big bang but that we can never find out what happened before it; nevertheless it appears increasingly plausible that there was no beginning but rather an eternal cycle of destruction and creation, each being the beginning of spacetime for that universe; to start a universe you need about 25 g of matter in a sphere 10^-26 cm in diameter with a false vacuum and a singularity (white hole).
He deliberately spends little time on the endless variants of inflation, such as chaotic, expanded and supernatural inflation, or on dark matter, supersymmetry and string theory, though they were well known at the time, as you can find by reading other books such as Michio Kaku's 'Hyperspace' (see my review) and countless others. Of course much has happened since this book appeared, but it still serves as an excellent background volume, so cheap now it's free for the cost of mailing. Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 3rd ed (2019), 'The Logical Structure of Human Behavior' (2019), and 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019).
“Could a machine think?” asks John R. Searle in his paper Minds, Brains, and Programs. He answers that “only a machine could think, and only very special kinds of machines, namely brains.” The subject of this paper is the analysis of the aforementioned question through a presentation of the symbol manipulation approach to intelligence and Searle's well-known criticism of this approach, namely the Chinese room argument. The examination of these issues leads to the systems reply to the Chinese room argument and tries to illustrate that Searle's response to the systems reply does not detract from the symbol manipulation approach.
I begin by describing the hideous nature of sexuality, that which makes sexual desire and activity morally suspicious, or at least what we have been told about the moral foulness of sex by, in particular, Immanuel Kant, but also by some of his predecessors and by some contemporary philosophers. A problem arises because acting on sexual desire, given this Kantian account of sex, apparently conflicts with the Categorical Imperative. I then propose a typology of possible solutions to this sex problem and critically discuss recent philosophical ethics of sex that fall into the typology's various categories.
After a sketch of the optimism and high aspirations of History and Philosophy of Science when I first joined the field in the mid 1960s, I go on to describe the disastrous impact of "the strong programme" and social constructivism in history and sociology of science. Despite Alan Sokal's brilliant spoof article, and the "science wars" that flared up partly as a result, the whole field of Science and Technology Studies is still adversely affected by social constructivist ideas. I then go on to spell out how in my view STS ought to develop. It is, to begin with, vitally important to recognize the profoundly problematic character of the aims of science. There are substantial, influential and highly problematic metaphysical, value and political assumptions built into these aims. Once this is appreciated, it becomes clear that we need a new kind of science which subjects problematic aims - problematic assumptions inherent in these aims - to sustained imaginative and critical scrutiny as an integral part of science itself. This needs to be done in an attempt to improve the aims and methods of science as science proceeds. The upshot is that science, STS, and the relationship between the two, are all transformed. STS becomes an integral part of science itself. And it becomes part of an urgently needed campaign to transform universities so that they become devoted to helping humanity create a wiser world.
One type of deflationism about metaphysical modality suggests that it can be analysed strictly in terms of linguistic or conceptual content and that there is nothing particularly metaphysical about modality. Scott Soames is explicitly opposed to this trend. However, a detailed study of Soames's own account of modality reveals that it has striking similarities with the deflationary account. In this paper I will compare Soames's account of a posteriori necessities concerning natural kinds with the deflationary one, specifically Alan Sidelle's account, and suggest that Soames's account is vulnerable to the deflationist's critique. Furthermore, I conjecture that both the deflationary account and Soames's account fail to fully explicate the metaphysical content of a posteriori necessities. Although I will focus on Soames, my argument may have more general implications for the prospects of providing a meaning-based account of metaphysical modality.
Chalmers and Hájek argue that on an epistemic reading of Ramsey's test for the rational acceptability of conditionals, it is faulty. They claim that applying the test to each of a certain pair of conditionals requires one to think that one is omniscient or infallible, unless one forms irrational Moore-paradoxical beliefs. I show that this claim is false. The epistemic Ramsey test is indeed faulty. Applying it requires one to think of anyone as all-believing and, if one is rational, to think of anyone as infallible-if-rational. But this is not because of Moore-paradoxical beliefs. Rather, it is because applying the test requires a certain supposition about conscious belief. It is important to understand the nature of this supposition.
Informatics is generally understood as a "new technology" and is accordingly discussed in terms of technological aspects such as speed, data retrieval, information control and so on. Its widespread use, from home appliances to enterprises and universities, is not the result of a clear-cut analysis of its inner possibilities but rather depends on all sorts of ideological promises of unlimited progress. We will discuss the theoretical definition of informatics proposed in 1936 by Alan Turing in order to show that it should be taken as final and complete. This definition bears no relation to the technology, because Turing defines computers as doing the work of solving problems with numbers. This formal definition nonetheless implies a relation to the non-formalized elements around informatics, which we shall discuss through the Greek notion of téchne.
The editors of the Journal of Applied Philosophy allowed Alan Haworth to reply to my short review of his Anti-Libertarianism. The editors would not allow me to respond to Haworth. Thanks to the openness of internet publication and the Libertarian Alliance website, this can now be rectified and Haworth's reply can no longer escape a public critical response.
This paper argues that there is a close connection between basic human rights and communal bonds. It reviews the views expressed by Alan Gewirth and Alasdair MacIntyre, which in differing ways deny this connection, and concludes that the deficiencies in their accounts reinforce the case for communal bonds.
A comprehensive introduction to the ways in which meaning is conveyed in language. Alan Cruse covers semantic matters, but also deals with topics that are usually considered to fall under pragmatics. A major aim is to highlight the richness and subtlety of meaning phenomena, rather than to expound any particular theory.
Epistemological disjunctivism says that one can know that p on the rational basis of one's seeing that p. The basis problem for disjunctivism says that this can't be, since seeing that p entails knowing that p on account of simply being the way in which one knows that p. In defense of their view, disjunctivists have rejected the idea that seeing that p is just a way of knowing that p (the SwK thesis). That manoeuvre is familiar. In this paper I explore the prospects for rejecting instead the thought that if the SwK thesis is true then seeing that p can't be one's rational basis for perceptual knowledge. I explore two strategies. The first situates disjunctivism within the context of a 'knowledge-first' approach that seeks to reverse the traditional understanding of the relationship between perceptual knowledge and justification (or rational support). But I argue that a more interesting strategy situates disjunctivism within a context that accepts a more nuanced understanding of perceptual beliefs. The proposal that I introduce reimagines disjunctivism in light of a bifurcated conception of perceptual knowledge that would see it cleaved along two dimensions. On the picture that results, perceptual knowledge at the judgemental level is rationally supported by perceptual knowledge at the merely functional or 'animal' level. This supports a form of disjunctivism that I think is currently off the radar: one that's consistent both with the SwK thesis and with a commitment to a traditional reductive account of perceptual knowledge.
Non-cognitivists claim that thick concepts can be disentangled into distinct descriptive and evaluative components and that since thick concepts have descriptive shape they can be mastered independently of evaluation. In Non-Cognitivism and Rule-Following, John McDowell uses Wittgenstein's rule-following considerations to show that such a non-cognitivist view is untenable. In this paper I do several things. I describe the non-cognitivist position in its various forms and explain its driving motivations. I then explain McDowell's argument against non-cognitivism and the Wittgensteinian considerations upon which it relies, because this has been sufficiently misunderstood by critics and rarely articulated by commentators. After clarifying McDowell's argument against non-cognitivism, I extend the analysis to show that commentators of McDowell have failed to appreciate his argument and that critical responses have been weak. I argue against three challenges posed to McDowell, and show that the case of thick concepts should lead us to reject non-cognitivism.
Our digital society increasingly relies on the power of others' aggregated judgments to make decisions. Questions as diverse as which film we will watch, what scientific news we will decide to read, which path we will follow to find a place, or what political candidate we will vote for are usually associated with a rating that influences our final decisions.
Causalists about explanation claim that to explain an event is to provide information about the causal history of that event. Some causalists also endorse a proportionality claim, namely that one explanation is better than another insofar as it provides a greater amount of causal information. In this chapter I consider various challenges to these causalist claims. There is a common and influential formulation of the causalist requirement – the 'Causal Process Requirement' – that does appear vulnerable to these anti-causalist challenges, but I argue that they do not give us reason to reject causalism entirely. Instead, these challenges lead us to articulate the causalist requirement in an alternative way. This alternative articulation incorporates some of the important anti-causalist insights without abandoning the explanatory necessity of causal information. For example, proponents of the 'equilibrium challenge' argue that the best available explanations of the behaviour of certain dynamical systems do not appear to provide any causal information. I respond that, contrary to appearances, these equilibrium explanations are fundamentally causal, and I provide a formulation of the causalist thesis that is immune to the equilibrium challenge. I then show how this formulation is also immune to the 'epistemic challenge' – thus vindicating (a properly formulated version of) the causalist thesis.
John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle's Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather in the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the implementation of them cannot suffice for semantics. Perhaps our world is a world in which any implementation of the right computer program will create a system with intrinsic intentionality, in which case Searle's Chinese Room Scenario is empirically (nomically) impossible. But, indeed, perhaps our world is a world in which Searle's Chinese Room Scenario is empirically (nomically) possible and the silicon basis of modern-day computers is one kind of material unsuited to give you intrinsic intentionality. The metaphysical question turns out to be a question of what kind of world we are in, and I argue that in this respect we do not know our modal address. The Modal Address Argument does not ensure that strong AI will succeed, but it shows that Searle's challenge to the research program of strong AI fails in its objectives.
Theorists of artificial intelligence, and their companions in the philosophy of mind, have responded in various ways to criticism of AI's original theoretical goal. One response is to withdraw from that goal in favor of pursuing smaller-scale projects. Another is the promotion of connectionist systems, whose decentralized mode of operation is supposed to better simulate the neural networks of the human brain. A further response is the so-called robot reply. The robot reply consists of two elements. It contains (a) the concession that the system behavior of a conventional digital computer with von Neumann architecture, however it is programmed, does not by itself exhibit human-like intelligence, and (b) the claim that for certain kinds of machines it nevertheless suffices for intelligence. Machines could ascend into the league of intelligent beings precisely when they are robots; that is, when they possess perceptual components (receptors) and action components (effectors) by means of which they can actively enter into causal interactions with their environment. This paper argues for the thesis that the robot reply rests on a correct intuition, but one from which the friends of robots let themselves be led to a rash conclusion. It is right to tie mental states and the capacity for action closely together. A being to which one grants agency cannot be denied mental states. But being able to act and having a mind are not sufficiently independent of each other for one to serve as justification for ascribing the other. Robots should be denied both.
This paper commences from the critical observation that the Turing Test (TT) might not be best read as providing a definition or a genuine test of intelligence by proxy of a simulation of conversational behaviour. Firstly, the idea of a machine producing likenesses of this kind served a different purpose in Turing, namely providing a demonstrative simulation to elucidate the force and scope of his computational method, whose primary theoretical import lies within the realm of mathematics rather than cognitive modelling. Secondly, it is argued that a certain bias in Turing's computational reasoning towards formalism and methodological individualism contributed to systematically unwarranted interpretations of the role of the TT as a simulation of cognitive processes. On the basis of the conceptual distinction in biology between structural homology vs. functional analogy, a view towards alternate versions of the TT is presented that could function as investigative simulations into the emergence of communicative patterns oriented towards shared goals. Unlike the original TT, the purpose of these alternate versions would be co-ordinative rather than deceptive. On this level, genuine functional analogies between human and machine behaviour could arise in quasi-evolutionary fashion.
Alan C. Love, "The allure of perennial questions in biology: temporary excitement or substantive advance?", Metascience, pp. 1-4, DOI 10.1007/s11016-011-9533-5. Author affiliation: Department of Philosophy, Minnesota Center for Philosophy of Science, University of Minnesota, 831 Heller Hall, 271 19th Ave. S, Minneapolis, MN 55455-0310, USA. ISSN 1467-9981 (online), 0815-0796 (print).
Proceedings of the papers presented at the Symposium on "Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World" at the 2012 AISB and IACAP Symposium, held in the Turing year 2012, 2–6 July at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test --- Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents - Michael Zillich: My Robot is Smarter than Your Robot: On the Need for a Total Turing Test for Robots - Adam Linson, Chris Dobbyn and Robin Laney: Interactive Intelligence: Behaviour-based AI, Musical HCI and the Turing Test - Javier Insa, Jose Hernandez-Orallo, Sergio España, David Dowe and M. Victoria Hernandez-Lloreda: The anYnt Project Intelligence Test (Demo) - Jose Hernandez-Orallo, Javier Insa, David Dowe and Bill Hibbard: Turing Machines and Recursive Turing Tests - Francesco Bianchini and Domenica Bruni: What Language for Turing Test in the Age of Qualia? - Paul Schweizer: Could there be a Turing Test for Qualia? - Antonio Chella and Riccardo Manzotti: Jazz and Machine Consciousness: Towards a New Turing Test - William York and Jerry Swan: Taking Turing Seriously (But Not Literally) - Hajo Greif: Laws of Form and the Force of Function: Variations on the Turing Test.
I give a detailed review of 'The Outer Limits of Reason' by Noson Yanofsky, 403p (2013), from a unified perspective of Wittgenstein and evolutionary psychology. I indicate that the difficulty with such issues as paradox in language and math, incompleteness, undecidability, computability, the brain and the universe as computers, etc., all arise from the failure to look carefully at our use of language in the appropriate context, and hence the failure to separate issues of scientific fact from issues of how language works. I discuss Wittgenstein's views on incompleteness, paraconsistency and undecidability, and the work of Wolpert on the limits to computation. Those wishing a comprehensive up-to-date account of Wittgenstein, Searle and their analysis of behavior from the modern two systems view may consult my article 'The Logical Structure of Philosophy, Psychology, Mind and Language as Revealed in Wittgenstein and Searle' (2016). Those interested in all my writings in their most recent versions may download from this site my e-book 'Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2016' by Michael Starks, First Ed. 662p (2016). All of my papers and books have now been published in revised versions, both as ebooks and printed books. Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2017 (2017) https://www.amazon.com/dp/B071HVC7YP. The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle--Articles and Reviews 2006-2016 (2017) https://www.amazon.com/dp/B071P1RP1B. Suicidal Utopian Delusions in the 21st century: Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2017 (2017) https://www.amazon.com/dp/B0711R5LGX.
The aim of this article is to clarify the conceptual context in which new media emerged. To this end, it reviews the ideas of Alan Kay and Ted Nelson that are key for new media, namely: the transformation of the computer into a personal metamedium by means of the user interface, and the idea of hypertext. It is emphasized that the creation of media on the basis of computer technologies was accompanied by the influence of McLuhan's media theory and by a convergence of technical and humanistic discourses. From the outset, new media were understood as occupying a meta-position with respect to traditional media, owing to their capacity both to simulate existing media forms and to create new ones. The theoretical foundations of new media as a project support the conclusion that they can fit naturally into cultural studies.
The objection from the insolvability of principle-based modal disagreements appears to support the claim that there are no objective modal facts, or at the very least that modal facts cannot be accounted for by modal rationalist theories. An idea that resurfaced fairly recently in the literature is that the use of ordinary empirical statements presupposes some prior grasp of modal notions. If this is correct, then the idea that we may have total agreement concerning empirical facts and disagree on modal facts, which is the starting point of the objection from the insolvability of modal disagreement, is undercut. This paper examines the no-separation thesis and shows that some of the arguments against the classical (empiricist) distinction between empirical and modal statements fail to be conclusive if they are taken to defend a strong notion of metaphysical possibility. The no-separation thesis appears to work only in theoretical frameworks where metaphysical modalities are considered (broadly) conceptual. For these reasons, the no-separation thesis cannot save modal rationalism from the insolvability of modal disagreement.
Allocation of very scarce medical interventions such as organs and vaccines is a persistent ethical challenge. We evaluate eight simple allocation principles that can be classified into four categories: treating people equally, favouring the worst-off, maximising total benefits, and promoting and rewarding social usefulness. No single principle is sufficient to incorporate all morally relevant considerations and therefore individual principles must be combined into multiprinciple allocation systems. We evaluate three systems: the United Network for Organ Sharing points systems, quality-adjusted life-years, and disability-adjusted life-years. We recommend an alternative system—the complete lives system—which prioritises younger people who have not yet lived a complete life, and also incorporates prognosis, save the most lives, lottery, and instrumental value principles.
Despite its short historical moment in the sun, behaviorism has become something akin to a theoria non grata, a position that dare not be explicitly endorsed. The reasons for this are complex, of course, and they include sociological factors which we cannot consider here, but to put it briefly: many have doubted the ambition to establish law-like relationships between mental states and behavior that dispense with any sort of mentalistic or intentional idiom, judging that explanations of intelligent behavior require reference to qualia and/or mental events. Today, when behaviorism is discussed at all, it is usually in a negative manner, either as an attempt to discredit an opponent's view via a reductio, or by enabling a position to distinguish its identity and positive claims by reference to what it is (allegedly) not.

In this paper, however, we argue that the ghost of behaviorism is present in influential, contemporary work in the field of embodied and enactive cognition, and even in aspects of the phenomenological tradition that these theorists draw on. Rather than take this to be a problem for these views, as some have, we argue that once the behaviorist dimensions are clarified and distinguished from the straw-man version of the view, it is in fact an asset, one which will help with the task of setting forth a scientifically reputable version of enactivism and/or philosophical behaviorism that is nonetheless not brain-centric but behavior-centric. While this is a bit like "the enemy of my enemy is my friend" strategy, as Shaun Gallagher notes (2019), with the shared enemy of behaviorism and enactivism being classical Cartesian views and/or orthodox cognitivism in its various guises, the task of this paper is to render this alliance philosophically plausible.

DOI: 10.1007/s11229-019-02432-1
This article discusses the views of Immanuel Kant on sexual perversion (what he calls "carnal crimes against nature"), as found in his Vorlesung (Lectures on Ethics) and the Metaphysics of Morals (both the Rechtslehre and Tugendlehre). Kant criticizes sexual perversion by appealing to Natural Law and to his Formula of Humanity. Neither argument for the immorality of sexual perversion succeeds.
The concept of acting intentionally is an important nexus where ‘theory of mind’ and moral judgment meet. Preschool children’s judgments of intentional action show a valence-driven asymmetry. Children say that a foreseen but disavowed side-effect is brought about ‘on purpose’ when the side-effect itself is morally bad but not when it is morally good. This is the first demonstration in preschoolers that moral judgment influences judgments of ‘on purpose’ (as opposed to purpose influencing moral judgment). Judgments of intentional action are usually assumed to be purely factual. That these judgments are sometimes partly normative — even in preschoolers — challenges current understanding. Young children’s judgments regarding foreseen side-effects depend upon whether the children process the idea that the character does not care about the side-effect. As soon as preschoolers effectively process the ‘theory of mind’ concept, NOT CARE THAT P, children show the side-effect effect.
Feminist science critics, in particular Sandra Harding, Carolyn Merchant, and Evelyn Fox Keller, claim that misogynous sexual metaphors played an important role in the rise of modern science. The writings of Francis Bacon have been singled out as an especially egregious instance of the use of misogynous metaphors in scientific philosophy. This paper offers a defense of Bacon.
When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
To enhance the treatment of relations in biomedical ontologies we advance a methodology for providing consistent and unambiguous formal definitions of the relational expressions used in such ontologies in a way designed to assist developers and users in avoiding errors in coding and annotation. The resulting Relation Ontology can promote interoperability of ontologies and support new types of automated reasoning about the spatial and temporal dimensions of biological and medical phenomena.
When pessimists claim that human life is meaningless, they often also assert that the universe is “blind to good and evil” and “indifferent to us”. How, if at all, is the indifference of the universe relevant to whether life is meaningful? To answer this question, and to know whether we should be concerned that the universe is indifferent, we need a clearer and deeper understanding of the concept of “cosmic indifference”, which I will seek to provide. I will argue that the lives of many individuals are meaningful and that human life, in general, is somewhat meaningful, despite the indifference of the universe. Furthermore, I will seek to demonstrate that even if the universe cared about us, or had preferences for how we live our lives, this likely would not enhance the quality of our lives.
The mantra that "the best way to predict the future is to invent it" (attributed to the computer scientist Alan Kay) exemplifies some of the expectations from the technical and innovative sides of biomedical research at present. However, for technical advancements to make real impacts both on patient health and genuine scientific understanding, quite a number of lingering challenges facing the entire spectrum from protein biology all the way to randomized controlled trials should start to be overcome. The proposal in this chapter is that philosophy is essential in this process. By reviewing select examples from the history of science and philosophy, disciplines which were indistinguishable until the mid-nineteenth century, I argue that progress toward the many impasses in biomedicine can be achieved by emphasizing theoretical work (in the true sense of the word 'theory') as a vital foundation for experimental biology. Furthermore, a philosophical biology program that could provide a framework for theoretical investigations is outlined.