Philosophers have claimed that education aims at fostering disparate epistemic goals. In this paper we focus on an important segment of this debate: a conversation between Alvin Goldman and Harvey Siegel. Goldman claims that education is essentially aimed at producing true beliefs. Siegel contends that education is essentially aimed at fostering both true beliefs and, independently, critical thinking and rational belief. Although we find Siegel’s position intuitively more plausible than Goldman’s, we also find Siegel’s defence of it wanting. We suggest novel argumentative strategies that draw on Siegel’s own arguments but look to us more promising.
I argue that Alvin Goldman has failed to save process reliabilism from my critique in earlier work of consequentialist or teleological epistemic theories. First, Goldman misconstrues the nature of my challenge: two of the cases he discusses I never claimed to be counterexamples to process reliabilism. Second, Goldman’s reply to the type of case I actually claimed to be a counterexample to process reliabilism is unsuccessful. He proposes a variety of responses, but all of them either feature an implausible restriction on process types, or fail to rule out cases with the sort of structure that generates the worry, or both.
In this paper we address the debate between epistemological internalism and externalism through the exchange between Michael Williams and Alvin Goldman collected in the 2016 volume Goldman and His Critics. We frame this discussion by noting that, while a number of authors (in particular Laurence BonJour and, following his influence, Jennifer Lackey and Fernando Broncano) draw from the internalism/externalism debate the “dualist” conclusion that knowledge cannot be analysed along a single axis, but along at least two, that of subjective reasonableness and that of truth-conduciveness, Goldman presents himself as a “hard-line” externalist for whom internalism is simply untenable and epistemic justification must be understood exclusively in reliabilist terms. Against this background, we analyse Williams’s proposal, which on the one hand seeks to show that Goldman’s criticisms of internalism apply only to a “mentalist” or “subjectivist” version of it, and on the other seeks to vindicate a moderate internalist epistemology by pointing out that human knowledge is inseparable from considerations of accountability. We aim to show that Goldman’s reply, which centres on charging Williams with confusing the problem of “being justified” with that of “justifying”, does not do justice to the breadth of the problem of epistemic justification once that problem is approached not through alleged “intuitions” about the meaning of “knowledge” in ordinary language, but through the practices in which such notions can be put to work.
In his 2010 paper “Philosophical Naturalism and Intuitional Methodology”, Alvin I. Goldman invokes the Condorcet Jury Theorem in order to defend the reliability of intuitions. The present note argues that the original conditions of the theorem are all unrealistic when applied to the case of intuitions. Alternative conditions are discussed.
This paper examines the causal theory of knowledge put forth by Alvin Goldman in his 1967 paper “A Causal Theory of Knowing.” Goldman contends that a justified, true belief is knowledge if and only if it is causally connected to the fact that makes it true. This paper provides examples, however, of justified, true beliefs with such causal connections that are clearly not knowledge. The paper further shows that attempts to salvage the causal theory are unsatisfactory.
The paper concentrates on how the acceptance of radical naturalism in Quine’s theory of meaning leads Quine to naturalized epistemology. W.V. Quine was fascinated by the evidential acquisition of scientific knowledge, and language as a vehicle of knowledge plays a significant role in his regimented naturalistic theory anchored in the scientific framework. My point is that there is an interesting shift from epistemology to language (semantic externalism). The rejection of the mentalist approach to meaning vindicates externalism, which in turn paves the way for “semantic holism”, a thesis on which the meaning of a sentence is defined in terms of the totality of nodes and paths of its semantic network, so that the meaning of linguistic units depends upon the meaning of the entire language. I revisit Quine’s striking claim about the co-extensiveness of the sentential relation and the evidential relation, which points towards an affirmation of meaning holism and semantic externalism. Besides, knowledge by acquaintance, which detaches singular thought from psychological considerations and the self-knowledge hypothesis, engages with testimonial and warranted knowledge through the claims of social knowledge anticipated by Alvin Goldman. My conclusion is closest to the stance of semantic externalism informed by social knowledge (in an epistemic sense) and semantic holism.
In a situation of peer disagreement, peers are usually assumed to share the same evidence. However, they might not share the same evidence for the epistemic system used to process the evidence. This synchronic complication of the peer disagreement debate suggested by Goldman (In Feldman R, Warfield T (eds) (2010) Disagreement. Oxford University Press, Oxford, pp 187–215) is elaborated diachronically by use of a simulation. The Hegselmann–Krause model is extended to multiple epistemic systems and used to investigate the role of consensus and difference splitting in peer disagreement. I find that the very possibility of multiple epistemic systems downgrades the epistemic value of consensus and makes difference splitting a suboptimal strategy.
Preservationism states that memory preserves the justification of the beliefs it preserves. More precisely: if S formed a justified belief that p at t1 and retains in memory a belief that p until t2, then S's belief that p is prima facie justified via memory at t2. Preservationism is an unchallenged orthodoxy in the epistemology of memory. Advocates include Sven Bernecker, Tyler Burge, Alvin Goldman, Gilbert Harman, Michael Huemer, Matthew McGrath, and Thomas Senor. I develop three dilemmas for it, in part by drawing on research in cognitive psychology. The dilemmas centre on preservationism's implications for certain cases involving either stored beliefs, forgotten evidence, or recollection failure. Each dilemma shows that preservationism either is false or lacks key support.
This paper tackles the problem of defining what a cognitive expert is. Starting from a shared intuition that the definition of an expert depends upon the conceptual function of expertise, I shed light on two main approaches to the notion of an expert: according to novice-oriented accounts of expertise, experts need to provide laypeople with information they lack in some domain; whereas, according to research-oriented accounts, experts need to contribute to the epistemic progress of their discipline. In this paper, I defend the thesis that cognitive experts should be identified by their ability to perform the latter function rather than the former, as novice-oriented accounts, unlike research-oriented ones, fail to comply with the rules of a functionalist approach to expertise.
According to process reliabilism, a belief produced by a reliable belief-forming process is justified. I introduce problems for this theory on any account of reliability. Does the performance of a process in some domain of worlds settle its reliability? The theories that answer “Yes” typically fail to state the temporal parameters of this performance. I argue that any theory paired with any plausible parameters has implausible implications. The theories that answer “No,” I argue, thereby lack essential support and exacerbate familiar problems. There are new reasons to avoid any reliability conditions on justification.
Stewart Cohen’s New Evil Demon argument raises familiar and widely discussed concerns for reliabilist accounts of epistemic justification. A now standard response to this argument, initiated by Alvin Goldman and Ernest Sosa, involves distinguishing different notions of justification. Juan Comesaña has recently and prominently claimed that his Indexical Reliabilism (IR) offers a novel solution in this tradition. We argue, however, that Comesaña’s proposal suffers serious difficulties from the perspective of the philosophy of language. More specifically, we show that the two readings of sentences involving the word ‘justified’ which are required for Comesaña’s solution to the problem are not recoverable within the two-dimensional framework of Robert Stalnaker to which he appeals. We then consider, and reject, an attempt to overcome this difficulty by appeal to a complication of the theory involving counterfactuals, and conclude the paper by sketching our own preferred solution to Cohen’s New Evil Demon.
This book is a general defence of Donald Davidson's and G.E.M. Anscombe's 'unifying' approach to the individuation of actions and other events against objections raised by Alvin I. Goldman and others. It is argued that, ironically, Goldman's rival 'multiplying' account is itself vulnerable to these objections, whereas Davidson's account survives them. Although claims that the unifier-multiplier dispute is not really substantive are shown to be unfounded, some room for limited agreement over the ontological status of events is indicated. Davidson's causal criterion of event identity is then defended against charges of triviality or inadequacy. It is concluded that Davidson's criterion is not primarily a criterion for arriving at particular judgments of individuation, but a metaphysical standard for the correctness of such judgments, however arrived at. Contents: Unifiers vs. Multipliers - Davidson's individuation of events - Goldman's act generation - causal, 'by'-relational, and temporal problems - ontology and event constituents - Davidson's causal criterion. This book is unique in providing a detailed survey and analysis of the recent unifier-multiplier dispute, and will be of interest to all researchers in action theory, as well as those working more broadly in metaphysics, philosophy of language, and philosophy of mind.
The problem of act individuation is a debate about the identity conditions of human acts. The fundamental question about act individuation is: how do we distinguish between actions? Three views of act individuation have dominated the literature. First, Donald Davidson and G.E.M. Anscombe have argued that a number of different descriptions refer to a single act. Second, Alvin Goldman and Jaegwon Kim have argued that each description designates a distinct act. Finally, Irving Thalberg and Judith Jarvis Thomson have averred that some acts are sequences of causally related events, which include both a primitive bodily action and some of its effects. All of these accounts have assumed that a simple invariantist account of act individuation captures how ordinary people distinguish between acts. For my dissertation, I devised an experiment to test the action theorists' assumptions. My data show that people's intuitions seem to depend on the valence of the consequences of the action under consideration. So, an invariantist account is not possible. In light of the empirical results, I argue that if we seek a folk account of act individuation, then that account should be able to explain the variability that seems to be present in people's intuitions about different cases.
Traditionally pragmatists have been favorably disposed to improving our understanding of agency and ethics through the use of empirical research. In the last two decades simulation theory has been championed in certain cognitive science circles as a way of explaining how we attribute mental states and predict human behavior. Drawing on research in psychology and neuroscience, Alvin I. Goldman and Robert M. Gordon have not only used simulation theory to discuss how we “mindread”, but have suggested that the theory has implications for ethics. The limitations of simulation theory for “mindreading” and ethics are addressed in this article from an interactionist or neo-Meadian pragmatic perspective. To demonstrate the limitations of simulation theory scenes from the television show Mad Men are used as “thought-experiments”.
Internalists have criticised reliabilism for overlooking the importance of the subject's point of view in the generation of knowledge. This paper argues that there is a troubling ambiguity in the intuitive examples that internalists have used to make their case, and on either way of resolving this ambiguity, reliabilism is untouched. However, the argument used to defend reliabilism against the internalist cases could also be used to defend a more radical form of externalism in epistemology.
In “Process Reliabilism and the Value Problem” I argue that Erik Olsson and Alvin Goldman's conditional probability solution to the value problem in epistemology is unsuccessful and that it makes significant internalist concessions. In “Kinds of Learning and the Likelihood of Future True Beliefs” Olsson and Martin Jönsson try to show that my argument does “not in the end reduce the plausibility” of Olsson and Goldman's account. Here I argue that, while Olsson and Jönsson clarify and amend the conditional probability approach in a number of helpful ways, my case against it remains intact. I conclude with a constructive proposal as to how their account may be steered in a more promising direction.
This article intends to show that the defense of “understanding” as one of the major goals of science education can be grounded on an anti-reductionist perspective on testimony as a source of knowledge. To do so, we critically revisit the discussion between Harvey Siegel and Alvin Goldman about the goals of science education, especially where it involves arguments based on the epistemology of testimony. Subsequently, we come back to a discussion between Charbel N. El-Hani and Eduardo Mortimer, on the one hand, and Michael Hoffmann, on the other, striving to strengthen the claim that rather than students’ belief change, understanding should have epistemic priority as a goal of science education. Based on these two lines of discussion, we conclude that the reliance on testimony as a source of knowledge is necessary to the development of a larger and more comprehensive scientific understanding by science students.
This paper attempts to provide a remedy to a surprising lacuna in the current discussion in the epistemology of expertise, namely the lack of a theory accounting for the epistemic authority of collective agents. After introducing a service conception of epistemic authority based on Alvin Goldman’s account of a cognitive expert, I argue that this service conception is well suited to account for the epistemic authority of collective bodies on a non-summativist perspective, and I show in detail how the defining requirements of an expert can apply to epistemic groups.
There are three widely held beliefs among epistemologists: (1) the goal of inquiry is truth or something that entails truth; (2) epistemology aims for a reflectively stable theory via reflective equilibrium; (3) epistemology is a kind of inquiry. I argue that accepting (1) and (2) entails denying (3). This is a problem especially for the philosophers (e.g. Duncan Pritchard and Alvin Goldman) who accept both (1) and (2), for in order to be consistent, they must reject (3). The tension is not restricted to epistemology. A similar tension also exists in the area of moral philosophy. The tension can be generalized. If one believes that the goal of inquiry is truth or something that entails truth and that philosophy aims for a reflectively stable theory via reflective equilibrium, she must deny that philosophy is a kind of inquiry.
Alvin I. Goldman has argued that since one must count epistemic rules among the factors that help to fix the justificational status of agents (generally called J-factors), not all J-factors are internalist, that is, intrinsic to the agent whose justificational status they help to fix. After all, for an epistemic rule to count as a genuine J-factor, it must be objectively correct and, therefore, “independent of any and all minds.” Consequently, it cannot be intrinsic to any particular epistemic agent. In this brief commentary, I will argue that Goldman’s argument misunderstands what it takes for epistemic justification to be internalist and, therefore, fails to guarantee his externalist conclusion. In particular, I want to demonstrate that Goldman’s argument trivializes the difference between intrinsic and extrinsic properties that lies at the basis of the internalist/externalist debate. I will show that, if sound, simple variations on Goldman’s argument could be used to prove the absurd conclusion that all properties are extrinsic. Now, since the intrinsic/extrinsic distinction is fundamental to debates in several areas of philosophy, not only the internalist/externalist debate in epistemology, I conclude that Goldman’s argument cannot be sound.
1. Introduction: a look back at the reasons vs. causes debate. 2. The interventionist account of causation. 3. Four objections to interventionism. 4. The counterfactual analysis of event causation. 5. The role of free agency. 6. Causality in the human sciences. -- The reasons vs. causes debate reached its peak about 40 years ago. Hempel and Dray had debated the nature of historical explanation and the broader issue of whether explanations that cite an agent’s reasons are causal or not. Melden, Peters, Winch, Kenny and Anscombe had contributed their anticausal conceptions. The neo-Wittgensteinians seemed to be winning the day when in 1963 Donald Davidson published his seminal paper “Actions, Reasons, and Causes”. Davidson’s paper devastated the Wittgensteinian camp. It contained, among other things, a powerful attack on the logical connection argument. Davidson argued that the existence of a logical or conceptual connection between descriptions can never eliminate a causal relation, which holds between events simpliciter, not between events under certain descriptions. Davidson maintained that in a way, reasons can be causes. When somebody acts for a certain reason, his intentional attitudes, or rather changes in his attitudes, cause his bodily movements. Davidson also argued that rationalization is a species of causal explanation. For the definition of action, he argued that intentional actions are bodily movements caused in the right way by beliefs and desires that rationalize them. Davidson’s paper paved the way for causal theories of action, which superseded neo-Wittgensteinian analyses in the following decades. The causal theory was rapidly adopted by Alvin Goldman, David Armstrong, Paul Churchland, Myles Brand and many others, entering the mainstream and dominating the philosophy of action to this very day. In 1971 Georg Henrik von Wright published his book "Explanation and Understanding". The second chapter did not deal with agency, but with causation.
It developed a new account of causation, the interventionist or experimentalist account. Focusing on causation, von Wright remedied a major shortcoming of the reasons vs. causes debate. The concept of causality, and the nature of the causal relation, received little attention in this debate, a fact that holds true for both camps. Mostly it was simply taken for granted that, as Hempel had declared, “causal explanation is a special type of deductive-nomological explanation”. One camp then aligned intentional explanations with D-N explanations, while the other camp insisted on their disparity. So strictly speaking, the label “reasons/causes debate” was a misnomer. The controversy dealt primarily with the question as to whether intentional explanations can take the form of D-N explanations, while the notion of causation, and the metaphysics of the causal relation, were left obscured. With von Wright’s new approach, the situation changed. Von Wright was primarily concerned with causation, but his approach contained an implicit attack on the causal theory of action as well. His core idea was that the notion of causality is intimately linked with, or even derived from, the notion of intentionally making something happen. Other philosophers, even Hume, had considered such a connection before, but often just to reject this view, regarding it as a kind of myth belonging to the infancy of the human mind. Von Wright took the idea seriously. He submitted the analysis that p is the cause of q if and only if by doing p we could bring about q. The causal theory of action was also concerned with the relation between causation and agency, to which its name bears witness. The causal theory of action holds that actions are bodily movements with a certain causal history. This is why von Wright’s account constituted a momentous challenge to the causal theory: it reversed the direction of conceptual dependency between both notions. 
Davidson and his followers tried to define what an intentional action is by using the notion of causation. The causal condition which the causal theory sets is part of the definition of “doing something intentionally”. Von Wright claimed that the conceptual dependency is the other way round. He used the notions of doing, and bringing about, to explain what causal relations are. So, instead of a causal theory of action, he advocated an agency theory of causation, as it may be dubbed. It is remarkable how seldom this clash of opinions about conceptual primacy is reflected in the literature. There are few exceptions: Fred Stoutland noticed the conflict, and he published a number of papers in which he compared Davidson’s and von Wright’s views. Von Wright’s book "Explanation and Understanding" was widely read and discussed in the seventies, especially in Europe. But it strikes me that especially in North America, where the causal theory of action became the orthodoxy of the day, von Wright’s challenge went largely unnoticed. Even Davidson did not seem to take it seriously. He nowhere takes notice of the interventionist theory of causation, while he does discuss von Wright’s earlier book "Norm and Action". As is well-known, Davidson favoured an alternative account of causation, based on “the principle of the nomological character of causality”, as he somewhat clumsily called it, or, later and less clumsily, “the cause-law thesis”. Davidson’s firm adherence to a nomological theory of causality may explain why he did not take much interest in alternative accounts. [...]
Although Paul Churchland and Jerry Fodor both subscribe to the so-called theory-theory, the theory that folk psychology (FP) is an empirical theory of behavior, they disagree strongly about FP’s fate. Churchland contends that FP is a fundamentally flawed view analogous to folk biology, and he argues that recent advances in computational neuroscience and connectionist AI point toward development of a scientifically respectable replacement theory that will give rise to a new common-sense psychology. Fodor, however, wagers that FP will be largely preserved and vindicated by scientific investigations of behavior. Recent findings by developmental psychologists, I argue, will push both Churchlandians and Fodorians toward the pessimistic view that FP is a misguided theory that will never be displaced, because it is, so to speak, built into our cognitive systems. I explore the possibility of preserving optimism by rejecting the theory-theory and adopting the simulation theory, a competing view developed by Robert Gordon, Alvin Goldman, and Jane Heal. According to simulationists, common-sense interpretation of behavior is accomplished by means of pretense-like operations that deploy the cognitive system’s own reasoning capabilities in a disengaged manner. Since on this view no theory-like set of principles would be needed, the simulation theory seems to enjoy a simplicity advantage over the theory-theory. Steven Stich and Shawn Nichols, however, contend that as the cognitive system would require special mechanisms for disengaged operation, the simplicity question cannot be resolved until suitable computational models are developed. I describe a set of models I have constructed to meet this need, and I discuss the contribution such models can make to determining FP’s fate.
Process reliabilism is a theory about ex post justification, the justification of a doxastic attitude one has, such as belief. It says roughly that a justified belief is a belief formed by a reliable process. It is not a theory about ex ante justification, one’s justification for having a particular attitude toward a proposition, an attitude one might lack. But many reliabilists supplement their theory such that it explains ex ante justification in terms of reliable processes. In this paper I argue that the main way reliabilists supplement their theory fails. In the absence of an alternative, reliabilism does not account for ex ante justification.
This work has as its main goal to discuss two different epistemic proposals, both under the reliabilist banner. The first, developed by Alvin Goldman, has as its central goal to offer an adequate characterization of the justificational element present in the standard account of knowledge. Goldman's proposal faces the initial challenge of properly answering the demand Gettier had presented some years earlier, but also of correcting some more central problems that affect his own causal theory of knowledge. However, the externalist proposal within Goldman's reliabilism faced some serious attacks directed at its notion of justification. Three of these attacks became well known in the recent literature: the generality problem, the meta-incoherence problem and the new evil genius problem. Each one in its own way has posed challenges to his reliabilist account. The second reliabilist theory we will discuss consists in a reformulation of Goldman's account, defended mainly by Ernest Sosa in a series of very important works in contemporary epistemology. In these works, Sosa was able to insert the notion of intellectual virtues into the epistemological debate, bringing to the center of the externalist debate an idea of responsible belief formation, while at the same time trying to give a proper answer to the more central challenges faced by original reliabilism. In the first part of the paper I present the first of these theories; after that I offer a treatment of Sosa's reformulation of reliabilism and a defense of this proposal as a more adequate theory to deal with some basic demands of a proper theory of justification.
Two main features of Wikipedia explain its success and its specificity. The first is its freedom of access and modification. The second lies in its bottom-up, egalitarian organisational system, in which articles no longer come from selected experts to be distributed to the crowd but are instead produced by a base of unknown amateurs. These two features, combined with the encyclopedia's popular and qualitative success, leave little room for individual expertise (in particular academic and scientific expertise). Moreover, the observation that the encyclopedia dynamically self-corrects has displaced the idea of an a posteriori verification of articles by one or more domain experts. Wikipedia's strength thus seems to lie in a systematic harmony inherent in its form. In this article, I try to show that we must move beyond the antagonism between amateurs and experts (crowds versus experts), and that it is precisely Wikipedia's free and egalitarian character that makes an epistemological reflection on expertise within this encyclopedia necessary. I take as an example the problem of the novice and the two experts theorised by Alvin Goldman (2001). Applied to Wikipedia, this problem raises the question of the epistemic options available to Wikipedians confronted with an edit war. Besides the classic solutions (appeal to meta-experts, majority decision, discussion), Goldman proposes that the novice take into account the past epistemic successes and failures of the disagreeing experts: an expert who has mostly been wrong in the past thereby becomes less credible, and conversely. Even if this solution raises a number of problems, it seems particularly well suited to studying expertise within Wikipedia. It allows us to understand (1) that a Wikipedian expertise governed by certain rules already exists and (2) that it is possible to make room for experts (in particular scientific experts) in this free encyclopedia. Ultimately, to understand and improve the place of expertise in Wikipedia, it seems we must adopt a certain conception of expertise, centred on the agent rather than on epistemic content. This conclusion raises a paradox, since Wikipedia was built on the anonymity of its authors and the primacy of content.
Abstract of a paper to be presented in an APA symposium on Epistemology and Philosophy of Mind, December 28, 1987, commenting on papers by Alvin I. Goldman and Patricia Smith Churchland.
Goldman, though still a reliabilist, has made some recent concessions to evidentialist epistemologies. I agree that reliabilism is most plausible when it incorporates certain evidentialist elements, but I try to minimize the evidentialist component. I argue that fewer beliefs require evidence than Goldman thinks, that Goldman should construe evidential fit in process reliabilist terms, rather than the way he does, and that this process reliabilist understanding of evidence illuminates such important epistemological concepts as propositional justification, ex ante justification, and defeat.
Goldman’s epistemology has been influential in two ways. First, it has influenced some philosophers to think that, contrary to erstwhile orthodoxy, relations of evidential support, or confirmation, are not discoverable a priori. Second, it has offered some philosophers a powerful argument in favor of methodological reliance on intuitions about thought experiments in doing philosophy. This paper argues that these two legacies of Goldman’s epistemology conflict with each other.
The term “technological fix”, coined by technologist/administrator Alvin Weinberg in 1965, vaunted engineering innovation as a generic tool for circumventing problems commonly conceived as social, political or cultural. A longtime Director of Oak Ridge National Laboratory, government consultant and essayist, Weinberg also popularized the term “Big Science” to describe national goals and the competitive funding environment after the Second World War. Big Science reoriented towards Technological Fixes, he argued, could provide a new “Apollo project” to address social problems of the future. His ideas – most recently echoed in “solutionism” – have channeled confidence and controversy ever since. This paper traces the genesis and promotion of the concept by Weinberg and his contemporaries. It argues that, through it, the marginal politics and technological confidences of interwar scientists and technocrats were repositioned as mainstream notions closer to the heart of Big Science policy.
This article aims to examine Alvin Plantinga's speech at Biola University entitled “Science and Religion: Where the Conflict Really Lies.” To this end, after describing the emergence of the conflict between the two disciplines, Plantinga's ideas are examined through his speech and articles. The remainder of the article conducts a critical examination of his thesis in light of the Islamic viewpoint.
The fourteen essays in this volume, by leading scholars in the field, explore the relationship between teleology and politics in Kant’s corpus. Among the topics discussed are Kant’s normative political theory and legal philosophy; his cosmopolitanism and views on international relations; his theory of history; his theory of natural teleology; and the broader relationship between morality, history, nature, and politics. _Politics and Teleology in Kant_ will be of interest to a wide audience, including Kant scholars; scholars and students working in moral and political philosophy, the philosophy of history, and political theory and political science; legal scholars; and international relations theorists.
In May 2010, philosophers, family and friends gathered at the University of Notre Dame to celebrate the career and retirement of Alvin Plantinga, widely recognized as one of the world's leading figures in metaphysics, epistemology, and the philosophy of religion. Plantinga has earned particular respect within the community of Christian philosophers for the pivotal role that he played in the recent renewal and development of philosophy of religion and philosophical theology. Each of the essays in this volume engages with some particular aspect of Plantinga's views on metaphysics, epistemology, or philosophy of religion. Contributors include Michael Bergmann, Ernest Sosa, Trenton Merricks, Richard Otte, Peter van Inwagen, Thomas P. Flint, Eleonore Stump, Dean Zimmerman and Nicholas Wolterstorff. The volume also includes responses to each essay by Bas van Fraassen, Stephen Wykstra, David VanderLaan, Robin Collins, Raymond VanArragon, E. J. Coffman, Thomas Crisp, and Donald Smith.
In this paper we will give a critical account of Plantinga’s well-known argument to the effect that the existence of an omnipotent and morally perfect God is consistent with the actual presence of evil. After presenting Plantinga’s view, we critically discuss both the idea of divine knowledge of conditionals of freedom and the concept of transworld depravity. Then, we will sketch our own version of the Free-Will Defence, which maintains that moral evil depends on the misuse of human freedom. However, our argument does not hinge on problematic metaphysical assumptions, but depends only on a certain definition of a free act and a particular interpretation of divine omniscience.