The INBIOSA project brings together a group of experts across many disciplines who believe that science requires a revolutionary transformative step in order to address many of the vexing challenges presented by the world. It is INBIOSA’s purpose to enable the focused collaboration of an interdisciplinary community of original thinkers. This paper sets out the case for support for this effort. The focus of the transformative research program proposal is biology-centric. We admit that biology to date has been more fact-oriented and less theoretical than physics. However, the key leverageable idea is that careful extension of the science of living systems can be more effectively applied to some of our most vexing modern problems than the prevailing scheme, derived from abstractions in physics. While these have some universal application and demonstrate computational advantages, they are not theoretically mandated for the living. A new set of mathematical abstractions derived from biology can now be similarly extended. This is made possible by leveraging new formal tools to understand abstraction and enable computability. [The latter has a much expanded meaning in our context from the one known and used in computer science and biology today, that is "by rote algorithmic means", since it is not known if a living system is computable in this sense (Mossio et al., 2009).] Two major challenges constitute the effort. The first challenge is to design an original general system of abstractions within the biological domain. The initial issue is descriptive, leading to the explanatory. There has not yet been a serious formal examination of the abstractions of the biological domain. What is used today is an amalgam; much is inherited from physics (via the bridging abstractions of chemistry), and there are many new abstractions from advances in mathematics (incentivized by the need for more capable computational analyses).
Interspersed are abstractions, concepts and underlying assumptions “native” to biology and distinct from the mechanical language of physics and computation as we know them. A pressing agenda should be to single out the most concrete and at the same time the most fundamental process-units in biology and to recruit them into the descriptive domain. Therefore, the first challenge is to build a coherent formal system of abstractions and operations that is truly native to living systems. Nothing will be thrown away, but many common methods will be philosophically recast, just as in physics relativity subsumed and reinterpreted Newtonian mechanics.

This step is required because we need a comprehensible, formal system to apply in many domains. Emphasis should be placed on the distinction between multi-perspective analysis and synthesis and on what could be the basic terms or tools needed. The second challenge is relatively simple: the actual application of this set of biology-centric ways and means to cross-disciplinary problems. In its early stages, this will seem to be a “new science”. This White Paper sets out the case for continuing support of Information and Communication Technology (ICT) for transformative research in biology and information processing centered on paradigm changes in the epistemological, ontological, mathematical and computational bases of the science of living systems. Today, curiously, living systems cannot be said to be anything more than dissipative structures organized internally by genetic information. They are not substantially different from abiotic systems, apart from the empirical nature of their robustness. We believe that there are other new and unique properties and patterns comprehensible at this bio-logical level. The report lays out a fundamental set of approaches to articulate these properties and patterns, and is composed as follows.
Sections 1 through 4 (preamble, introduction, motivation and major biomathematical problems) are introductory. Section 5 describes the issues affecting Integral Biomathics, and Section 6 the aspects of the Grand Challenge we face with this project. Section 7 contemplates the effort to formalize a General Theory of Living Systems (GTLS) from what we have today. The goal is to have a formal system equivalent to that which exists in the physics community. Here we define how to perceive the role of time in biology. Section 8 describes the initial efforts to apply this general theory of living systems in many domains, with special emphasis on cross-disciplinary problems and multiple domains spanning both “hard” and “soft” sciences. The expected result is a coherent collection of integrated mathematical techniques. Section 9 discusses the first two test cases, project proposals, of our approach. They are designed to demonstrate the ability of our approach to address “wicked problems” which span physics, chemistry, biology, societies and societal dynamics. The solutions require integrated, measurable results at multiple levels, known as “grand challenges” to existing methods. Finally, Section 10 issues an appeal for action, advocating the necessity of further long-term support for the INBIOSA program.

The report concludes with a preliminary, non-exclusive list of challenging research themes to address, as well as required administrative actions. The efforts described in the ten sections of this White Paper will proceed concurrently. Collectively, they describe a program that can be managed and measured as it progresses.
Fitting Attitudes accounts of value analogize or equate being good with being desirable, on the premise that ‘desirable’ means not, ‘able to be desired’, as Mill has been accused of mistakenly assuming, but ‘ought to be desired’, or something similar. The appeal of this idea is visible in the critical reaction to Mill, which generally goes along with his equation of ‘good’ with ‘desirable’ and only balks at the second step, and it crosses broad boundaries in terms of philosophers’ other commitments. For example, Fitting Attitudes accounts play a central role both in T.M. Scanlon’s [1998] case against teleology, and in Michael Smith’s [2003], [unpublished] and Doug Portmore’s [2007] cases for it. And of course they have a long and distinguished history.
The basic idea of expressivism is that for some sentences ‘P’, believing that P is not just a matter of having an ordinary descriptive belief. This is a way of capturing the idea that the meaning of some sentences either exceeds their factual/descriptive content or doesn’t consist in any particular factual/descriptive content at all, even in context. The paradigmatic application for expressivism is within metaethics, and holds that believing that stealing is wrong involves having some kind of desire-like attitude, with world-to-mind direction of fit, either in place of, or in addition to, being in a representational state of mind with mind-to-world direction of fit. Because expressivists refer to the state of believing that P as the state of mind ‘expressed’ by ‘P’, this view can also be described as the view that ‘stealing is wrong’ expresses a state of mind that involves a desire-like attitude instead of, or in addition to, a representational state of mind. According to some expressivists - unrestrained expressivists, as I’ll call them - there need be no special relationship among the different kinds of state of mind that can be expressed by sentences. Pick your favorite state of mind, the unrestrained expressivist allows, and there could, at least in principle, be a sentence that expressed it. Expressivists who seem to have been unrestrained plausibly include Ayer in Language, Truth, and Logic, and Simon Blackburn in many of his writings, including his [1984], [1993], and…
Much academic work (in philosophy, economics, law, etc.), as well as common sense, assumes that ill health reduces well-being. It is bad for a person to become sick, injured, disabled, etc. Empirical research, however, shows that people living with health problems report surprisingly high levels of well-being - in some cases as high as the self-reported well-being of healthy people. In this chapter, I explore the relationship between health and well-being. I argue that although we have good reason to believe that health problems causing pain and death typically do reduce well-being, health problems that limit capabilities probably don't reduce well-being nearly as much as most people suppose. I then briefly explore the consequences of this conclusion for political philosophy and ethics. If many health problems don't significantly reduce well-being, why should governments go to great expense to prevent or treat them? Why should parents be obliged to ensure the health of their children?
Mark Schroeder has recently offered a solution to the problem of distinguishing between the so-called "right" and "wrong" kinds of reasons for attitudes like belief and admiration. Schroeder tries out two different strategies for making his solution work: the alethic strategy and the background-facts strategy. In this paper I argue that neither of Schroeder's two strategies will do the trick. We are still left with the problem of distinguishing the right from the wrong kinds of reasons.
According to a naïve view sometimes apparent in the writings of moral philosophers, ‘ought’ often expresses a relation between agents and actions – the relation that obtains between an agent and an action when that action is what that agent ought to do. It is not part of this naïve view that ‘ought’ always expresses this relation – on the contrary, adherents of the naïve view are happy to allow that ‘ought’ also has an epistemic sense, on which it means, roughly, that some proposition is likely to be the case, and adherents of the naïve view are also typically happy to allow that ‘ought’ also has an evaluative sense, on which it means, roughly, that were things ideal, some proposition would be the case.1 What is important to the naïve view is not that these other senses of ‘ought’ do not exist, but rather that they are not exhaustive – for what they leave out, is the important deliberative sense of ‘ought’, which is the central subject of moral inquiry about what we ought to do and why – and it is this deliberative sense of ‘ought’ which the naïve view understands to express a relation between agents and actions.2 In contrast, logically and linguistically sophisticated philosophers – with a few notable exceptions3 – have rejected this naïve view. According to a dominant perspective in the interpretation of deontic logic and in linguistic semantics, for example, articulated by Roderick Chisholm (1964) and Bernard Williams (1981) in philosophy and in the dominant paradigm in linguistic semantics as articulated in particular by…
Philosophers of science now broadly agree that doing good science involves making non-epistemic value judgments. I call attention to two very different normative standards which can be used to evaluate such judgments: standards grounded in ethics and standards grounded in political philosophy. Though this distinction has not previously been highlighted, I show that the values in science literature contain arguments of each type. I conclude by explaining why this distinction is important. Seeking to determine whether some value-laden determination meets substantive ethical standards is a very different endeavor from seeking to determine if it is politically legitimate.
The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in big-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: Blue Brain, used for particular simulations of the cortical column in hybrid models, and Eliasmith’s SPAUN model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon is identified; otherwise, the explanatory value of such explanations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the explanatory model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating dimensions of empirical validation of explanatory models according to new mechanism, which are given in the form of a “checklist” for a modeler.
In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin have double standards for linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which is, as I argue, an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.
Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.
Metaethics is the study of metaphysics, epistemology, the philosophy of mind, and the philosophy of language, insofar as they relate to the subject matter of moral or, more broadly, normative discourse – the subject matter of what is good, bad, right or wrong, just, reasonable, rational, what we must or ought to do, or otherwise. But out of these four ‘core’ areas of philosophy, it is plausibly the philosophy of language that is most central to metaethics – and not simply because ‘metaethics’ was for a long time construed more narrowly as a name for the study of moral language. The philosophy of language is central to metaethics because both the advantages of and the open problems facing different metaethical theories differ sharply over the answers those theories give to central questions in the philosophy of language. In fact, among the open problems over which such theories differ are further problems in the philosophy of language. This article briefly surveys a range of broad categories of views in metaethics and catalogues both some of the principal issues faced by each in the philosophy of language and how those issues arise out of their answers to more basic questions in the philosophy of language. I make no claim to completeness, only to raising a variety of important issues.
Reply to Shafer-Landau, McPherson, and Dancy. Journal article, DOI 10.1007/s11098-010-9659-0. Mark Schroeder, University of Southern California, Los Angeles, CA, USA. Philosophical Studies (Online ISSN 1573-0883, Print ISSN 0031-8116).
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay; it grows stronger every year.
Many economic measures are structured to reflect ethical values. I describe three attitudes towards this: maximalism, according to which we should aim to build all relevant values into measures; minimalism, according to which we should aim to keep values out of measures; and an intermediate view. I argue the intermediate view is likely correct, but existing versions are inadequate. In particular, economists have strong reason to structure measures to reflect fixed, as opposed to user-assessable, values. This implies that, despite disagreement about precisely how to do so, economists should standardly adjust QALYs and DALYs to reflect egalitarian values.
In this paper, we argue that several recent ‘wide’ perspectives on cognition (embodied, embedded, extended, enactive, and distributed) are only partially relevant to the study of cognition. While these wide accounts override traditional methodological individualism, the study of cognition has already progressed beyond these proposed perspectives towards building integrated explanations of the mechanisms involved, including not only internal submechanisms but also interactions with others, groups, cognitive artifacts, and their environment. The claim is substantiated with reference to recent developments in the study of “mindreading” and debates on emotions. We claim that the current practice in cognitive (neuro)science has undergone, in effect, a silent mechanistic revolution, and has turned from initial binary oppositions and abstract proposals towards the integration of wide perspectives with the rest of the cognitive (neuro)sciences.
Theorists of health have, to this point, focused exclusively on trying to define a state—health—that an organism might be in. I argue that they have overlooked the possibility of a comparativist theory of health, which would begin by defining a relation—healthier than—that holds between two organisms or two possible states of the same organism. I show that a comparativist approach to health has a number of attractive features, and has important implications for philosophers of medicine, bioethicists, health economists, and policy makers.
The purpose of this paper is to present a general mechanistic framework for analyzing causal representational claims, and offer a way to distinguish genuinely representational explanations from those that invoke representations for honorific purposes. It is usually agreed that rats are capable of navigation because they maintain a cognitive map of their environment. Exactly how and why their neural states give rise to mental representations is a matter of an ongoing debate. I will show that anticipatory mechanisms involved in rats’ evaluation of possible routes give rise to satisfaction conditions of contents, and this is why they are representationally relevant for explaining and predicting rats’ behavior. I argue that a naturalistic account of satisfaction conditions of contents answers the most important objections of antirepresentationalists.
According to the thesis of the guise of the normative, all desires are associated with normative appearances or judgments. But guise of the normative theories differ sharply over the content of the normative representation, with the two main versions being the guise of reasons and the guise of the good. Chapter 6 defends the comparative thesis that the guise of reasons thesis is more promising than the guise of the good. The central idea is that observations from the theory of content determination can be used in order to constrain possible theories of the representational contents associated with desire. The authors argue that the initially most promising versions of the guise of the good fail to meet these constraints, and then explain the steep challenge confronting anyone who wishes to craft a new guise of the good theory which meets the constraints while also preserving the initial motivations for adopting any guise of the normative theory at all. But a simple version of the guise of reasons not only avoids the troubles besetting the guise of the good but proceeds immediately from a deep diagnosis of the source of its difficulties.
Recently, a number of philosophers have argued that we can and should “consequentialize” non-consequentialist moral theories, putting them into a consequentialist framework. I argue that these philosophers, usually treated as a group, in fact offer three separate arguments, two of which are incompatible. I show that none represent significant threats to a committed non-consequentialist, and that the literature has suffered due to a failure to distinguish these arguments. I conclude by showing that the failure of the consequentializers’ arguments has implications for disciplines, such as economics, logic, decision theory, and linguistics, which sometimes use a consequentialist structure to represent non-consequentialist ethical theories.
Cognitive science is an interdisciplinary conglomerate of various research fields and disciplines, which increases the risk of fragmentation of cognitive theories. However, while most previous work has focused on theoretical integration, some kinds of integration may turn out to be monstrous, or result in superficially lumped and unrelated bodies of knowledge. In this paper, I distinguish theoretical integration from theoretical unification, and propose some analyses of theoretical unification dimensions. Moreover, two research strategies that are supposed to lead to unification are analyzed in terms of the mechanistic account of explanation. Finally, I argue that theoretical unification is not an absolute requirement from the mechanistic perspective, and that strategies aiming at unification may be premature in fields where there are multiple conflicting explanatory models.
When discussing the safety of research subjects, including their exploitation and vulnerability as well as failures in clinical research, recent commentators have focused mostly on countries with low or middle-income economies. High-income countries are seen as relatively safe and well-regulated. This article presents irregularities in clinical trials in an EU member state, Poland, which were revealed by the Supreme Audit Office of Poland (the NIK). These irregularities occurred despite Poland's adoption of many European Union regulations, including European Commission directives concerning Good Clinical Practice. Causes as well as potential solutions to make clinical trials more ethical and safer are discussed.
In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of ‘entertaining’ future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented with semantic considerations, and in many cases, it actually should.
In this article, after presenting the basic idea of causal accounts of implementation and the problems they are supposed to solve, I sketch the model of computation preferred by Chalmers and argue that it is too limited to do full justice to computational theories in cognitive science. I also argue that it does not suffice to replace Chalmers’ favorite model with a better abstract model of computation; it is necessary to acknowledge the causal structure of physical computers that is not accommodated by the models used in computability theory. Additionally, an alternative mechanistic proposal is outlined.
Define teleology as the view that requirements hold in virtue of facts about value or goodness. Teleological views are quite popular, and in fact some philosophers (e.g. Dreier, Smith) argue that all (plausible) moral theories can be understood teleologically. I argue, however, that certain well-known cases show that the teleologist must at minimum assume that there are certain facts that an agent ought to know, and that this means that requirements can't, in general, hold in virtue of facts about value or goodness. I then show that even if we grant those 'ought's teleology still runs into problems. A positive justification of teleology looks like it will require an argument of this form: O(X); if X, then O(Y); therefore O(Y). But this form of argument isn't in general valid. I conclude by offering two positive suggestions for those attracted to a teleological outlook.
In this paper, we defend a novel, multidimensional account of representational unification, which we distinguish from integration. The dimensions of unity are simplicity, generality and scope, non-monstrosity, and systematization. In our account, unification is a graded property. The account is used to investigate the issue of how research traditions contribute to representational unification, focusing on embodied cognition in cognitive science. Embodied cognition contributes to unification even if it fails to offer a grand unification of cognitive science. The study of this failure shows that unification, contrary to what defenders of mechanistic explanation claim, is an important mechanistic virtue of research traditions.
In the Book of Common Prayer’s Rite II version of the Eucharist, the congregation confesses, “we have sinned against you in thought, word, and deed”. According to this confession we wrong God not just by what we do and what we say, but also by what we think. The idea that we can wrong someone not just by what we do, but by what we think or what we believe, is a natural one. It is the kind of wrong we feel when those we love believe the worst about us. And it is one of the salient wrongs of racism and sexism. Yet it is puzzling to many philosophers how we could wrong one another by virtue of what we believe about them. This paper defends the idea that we can morally wrong one another by what we believe about them from two such puzzles. The first puzzle concerns whether we have the right sort of control over our beliefs for them to be subject to moral evaluation. And the second concerns whether moral wrongs would come into conflict with the distinctively epistemic standards that govern belief. Our answer to both puzzles is that the distinctively epistemic standards governing belief are not independent of moral considerations. This account of moral encroachment explains how epistemic norms governing belief are sensitive to the moral requirements governing belief.
In this paper, the author reviews the typical objections against the claim that brains are computers or, to be more precise, information-processing mechanisms. By showing that practically all the popular objections are based on uncharitable interpretations of the claim, he argues that the claim is likely to be true, relevant to contemporary cognitive science, and non-trivial.
This paper compares two alternative explanations of pragmatic encroachment on knowledge (i.e., the claim that whether an agent knows that p can depend on pragmatic factors). After reviewing the evidence for such pragmatic encroachment, we ask how it is best explained, assuming it obtains. Several authors have recently argued that the best explanation is provided by a particular account of belief, which we call pragmatic credal reductivism. On this view, what it is for an agent to believe a proposition is for her credence in this proposition to be above a certain threshold, a threshold that varies depending on pragmatic factors. We show that while this account of belief can provide an elegant explanation of pragmatic encroachment on knowledge, it is not alone in doing so, for an alternative account of belief, which we call the reasoning disposition account, can do so as well. And the latter account, we argue, is far more plausible than pragmatic credal reductivism, since it accords far better with a number of claims about belief that are very hard to deny.
In this paper, an account of theoretical integration in cognitive (neuro)science from the mechanistic perspective is defended. It is argued that mechanistic patterns of integration can be better understood in terms of constraints on representations of mechanisms, not just on the space of possible mechanisms, as previous accounts of integration had it. This way, integration can be analyzed in more detail with the help of the constraint-satisfaction account of coherence between scientific representations. In particular, the account has resources to talk of idealizations and research heuristics employed by researchers to combine separate results and theoretical frameworks. The account is subsequently applied to an example of successful integration in the research on hippocampus and memory, and to a failure of integration in the research on mirror neurons as purportedly explanatory of sexual orientation.
In this paper I look at the much-discussed case of disabled parents seeking to conceive disabled children. I argue that the permissibility of selecting for disability does not depend on the precise impact the disability will have on the child’s wellbeing. I then turn to an alternative analysis, which argues that the permissibility of selecting for disability depends on the impact that disability will have on the child’s future opportunities. Nearly all bioethicists who have approached the issue in this way have argued that disabilities like deafness unacceptably constrain a child’s opportunities. I argue, however, that this conclusion is premature for several reasons. Most importantly, we don’t have a good way of comparing opportunity sets. Thus, we can’t conclude that deaf children will grow up to have a constrained set of opportunities relative to hearing children. I conclude by suggesting that bioethicists and philosophers of disability need to spend more time thinking carefully about the relationship between disability and opportunity.
Daniel Whiting has argued, in this journal, that Mark Schroeder’s analysis of knowledge in terms of subjectively and objectively sufficient reasons for belief makes wrong predictions in fake barn cases. Schroeder has replied that this problem may be avoided if one adopts a suitable account of perceptual reasons. I argue that Schroeder’s reply fails to deal with the general worry underlying Whiting’s purported counterexample, because one can construct analogous potential counterexamples that do not involve perceptual reasons at all. Nevertheless, I claim that it is possible to overcome Whiting’s objection, by showing that it rests on an inadequate characterization of how defeat works in the examples in question.
Common-sense allows that talk about moral truths makes perfect sense. If you object to the United States’ Declaration of Independence’s assertion that it is a truth that ‘all men’ are ‘endowed by their Creator with certain unalienable Rights’, you are more likely to object that these rights are not unalienable or that they are not endowed by the Creator, or even that its wording ignores the fact that women have rights too, than that this is not the sort of thing which could be a truth. Whether it is a truth or not seems beside the point, anyway; the writers of the Declaration could just as well have written, ‘We hold it to be self-evident that all men are created equal, and also that it is self-evident that all men are endowed by their Creator with certain unalienable Rights,’ save only that its cadence would lack some of the poetic resonance of the version which garnered Hancock’s signature. Yet famously, ethical noncognitivists have proclaimed that moral sentences can’t be true or false – that, like ‘Hooray!’ or ‘dammit!’, they are not even the kinds of things to be true or false. Noncognitivism is sometimes even defined as the view that this is so, but even philosophers who define ‘noncognitivism’ more broadly, as consistent with the idea that moral sentences may be true or false, have believed that they needed to do important philosophical spadework in order to make sense of how moral sentences could be true or false. In this article we’ll look at the puzzle about moral truth as it is faced by early noncognitivists and by metaethical expressivists, the early noncognitivists’ contemporary cousins.
We’ll look at what it would take for expressivists to ‘earn the right’ to talk about moral truths at all, and in particular at what it would take for them to earn the right to claim that moral truths behave in the ways that we should expect – including that meaningful moral sentences which lack presuppositions are true or false, and that classically valid arguments are truth-preserving.
In this paper, I review the objections against the claim that brains are computers, or, to be precise, information-processing mechanisms. By showing that practically all the popular objections are either based on uncharitable interpretation of the claim, or simply wrong, I argue that the claim is likely to be true, relevant to contemporary cognitive (neuro)science, and non-trivial.
There is virtually no philosophical consensus on what, exactly, imperfect duties are. In this paper, I lay out three criteria which I argue any adequate account of imperfect duties should satisfy. Using beneficence as a leading example, I suggest that existing accounts of imperfect duties will have trouble meeting those criteria. I then propose a new approach: thinking of imperfect duties as duties held by groups, rather than individuals. I show, again using the example of beneficence, that this proposal can satisfy the criteria, explaining how something can have both the necessity characteristic of duty and the latitude which seems to attach to imperfect duties.
There is a growing consensus among philosophers of science that core parts of the scientific process involve non-epistemic values. This undermines the traditional foundation for public trust in science. In this article I consider two proposals for justifying public trust in value-laden science. According to the first, scientists can promote trust by being transparent about their value choices. On the second, trust requires that the values of a scientist align with the values of an individual member of the public. I argue that neither of these proposals works, and suggest an alternative that does better. When scientists must appeal to values in the course of their research, they should appeal to democratic values: the values of the public or its representatives.
The nature of legislative intent remains a subject of vigorous debate. Its many participants perceive the intent in different ways. In this paper, I identify the reason for such diverse perceptions: three intentions are involved in lawmaking, not one. The three intentions correspond to the three aspects of a speech act: locutionary, illocutionary and perlocutionary. The dominant approach in legal theory holds that legislative intent is a semantic one. A closer examination shows that it is, in fact, an illocutionary one. In the paper, I draw the consequences for legal interpretation of this more theorized model of legislative intent.
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural, but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that it is notoriously over-appreciated in the current debate. The new mechanistic view on explanation suggests that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, it is also stressed that not all detail counts, and that some bodily features of cognitive systems should be left out from explanations.
The last few decades have given rise to the study of practical reason as a legitimate subfield of philosophy in its own right, concerned with the nature of practical rationality, its relationship to theoretical rationality, and the explanatory relationship between reasons, rationality, and agency in general. Among the most central of the topics whose blossoming study has shaped this field is the nature and structure of instrumental rationality, the topic to which Kant has to date made perhaps the largest contribution, under the heading of his treatment of hypothetical imperatives.
Replicability and reproducibility of computational models have been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors' failure to provide crucial information in scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only relevant bits of code.
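The replication/reproduction distinction drawn in this abstract can be illustrated with a toy sketch. The model below is a hypothetical logistic response curve invented for the illustration, not any model from the paper: replication reruns the original code and demands identical output, while reproduction rebuilds the model from its description and only demands agreement within tolerance.

```python
import math

def original_model(stimulus_strengths, gain=2.0):
    """Stand-in for a published model: a logistic response curve.
    (Hypothetical model, used only to illustrate the distinction.)"""
    return [1.0 / (1.0 + math.exp(-gain * s)) for s in stimulus_strengths]

def reimplemented_model(stimulus_strengths, gain=2.0):
    """Independent rebuild from the paper's verbal description, written
    without access to the original code; algebraically equivalent, but
    computed differently."""
    return [math.exp(gain * s) / (1.0 + math.exp(gain * s))
            for s in stimulus_strengths]

def replicates(original_output, rerun_output):
    # Replication: rerunning the *same* code and data must match exactly.
    return original_output == rerun_output

def reproduces(original_output, rebuilt_output, tol=1e-9):
    # Reproduction: an independent rebuild need only agree within tolerance.
    return all(abs(a - b) <= tol
               for a, b in zip(original_output, rebuilt_output))
```

The point of the sketch is that the two checks can come apart: a rerun that fails `replicates` signals missing code or data, while a rebuild that fails `reproduces` signals missing information in the paper itself.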
In most accounts of realization of computational processes by physical mechanisms, it is presupposed that there is one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
Précis of Slaves of the Passions. Mark Schroeder, University of Southern California, Los Angeles, CA, USA. Philosophical Studies, DOI 10.1007/s11098-010-9658-1.
In this paper I defend the view that knowledge is belief for reasons that are both objectively and subjectively sufficient from an important objection due to Daniel Whiting, in this journal. Whiting argues that this view fails to deal adequately with a familiar sort of counterexample to analyses of knowledge, fake barn cases. I accept Whiting’s conclusion that my earlier paper offered an inadequate treatment of fake barn cases, but defend a new account of basic perceptual reasons that is consistent with the account of knowledge and successfully deals with fake barns.
The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of “offloading” computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.
This paper explores the usefulness of the 'ethical matrix', proposed by Ben Mepham, as a tool in technology assessment, specifically in food ethics. We consider what the matrix is, how it might be useful as a tool in ethical decision-making, and what drawbacks might be associated with it. We suggest that it is helpful for fact-finding in ethical debates relating to food ethics; but that it is much less helpful in terms of weighing the different ethical problems that it uncovers. Despite this drawback, we maintain that, with some modifications, the ethical matrix can be a useful tool in debates in food ethics. We argue that useful modifications might be to include future generations amongst the stakeholders in the matrix, and to substitute the principle of solidarity for the principle of justice.
Predictive processing (PP) has been repeatedly presented as a unificatory account of perception, action, and cognition. In this paper, we argue that this is premature: As a unifying theory, PP fails to deliver general, simple, homogeneous, and systematic explanations. By examining its current trajectory of development, we conclude that PP remains only loosely connected both to its computational framework and to its hypothetical biological underpinnings, which makes its fundamentals unclear. Instead of offering explanations that refer to the same set of principles, we observe systematic equivocations in PP‐based models, or outright contradictions with its avowed principles. To make matters worse, PP‐based models are seldom empirically validated, and they are frequently offered as mere just‐so stories. The large number of PP‐based models is thus not evidence of theoretical progress in unifying perception, action, and cognition. On the contrary, we maintain that the gap between theory and its biological and computational bases contributes to the arrested development of PP as a unificatory theory. Thus, we urge the defenders of PP to focus on its critical problems instead of offering mere re‐descriptions of known phenomena, and to validate their models against possible alternative explanations that stem from different theoretical assumptions. Otherwise, PP will ultimately fail as a unified theory of cognition.
Jason Stanley's Know How aims to offer an attractive intellectualist analysis of knowledge how that is compositionally predicted by the best available treatments of sentences like 'Emile knows how to make his dad smile.' This paper explores one significant way in which Stanley's compositional treatment fails to generate his preferred account, and advocates a minimal solution.
The paper defends the claim that the mechanistic explanation of information processing is the fundamental kind of explanation in cognitive science. These mechanisms are complex organized systems whose functioning depends on the orchestrated interaction of their component parts and processes. A constitutive explanation of every mechanism must include both an appeal to its environment and to the role the mechanism plays in it. This role has traditionally been dubbed competence. To fully explain how this role is played, it is necessary to explain the information processing inside the mechanism embedded in the environment. The most usual explanation at this level takes the form of a computational model, for example a software program or a trained artificial neural network. However, this is not the end of the explanatory chain. What is left to be explained is how the program is realized (or what processes are responsible for information processing in the artificial neural network). By using two dramatically different examples from the history of cognitive science, I show the multi-level structure of explanations in cognitive science. These examples are (1) the explanation of human problem solving as proposed by A. Newell & H. Simon; (2) the explanation of cricket phonotaxis via robotic models by B. Webb.
I discuss whether there are some lessons for philosophical inquiry into the nature of simulation to be learnt from the practical methodology of reengineering. I argue that reengineering serves a purpose similar to that of simulation in theoretical sciences such as computational neuroscience or neurorobotics, and that the procedures and heuristics of reengineering help to develop solutions to outstanding problems of simulation.
Moral error theory is the doctrine that our first-order moral commitments are pervaded by systematic error. It has been objected that this makes the error theory itself a position in first-order moral theory that should be judged by the standards of competing first-order moral theories (1996: 87–139) and by Kramer: “the objectivity of ethics is itself an ethical matter that rests primarily on ethical considerations. It is not something that can adequately be contested or confirmed through non-ethical reasoning” (2009: 1). This paper shows that error theorists can resist this charge if they adopt a particular understanding of the presuppositions of moral discourse.