This article gives two arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. First is an inductive argument: most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing, so ours probably is too. Second is a disjunctive argument: there are a large number of distinct ways in which our practices could turn out to be horribly wrong, so even if no particular hypothesized moral mistake strikes us as very likely, the disjunction of all such mistakes should receive significant credence. The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly.
In debates over the regulation of communication related to dual-use research, the risks that such communication creates must be weighed against the value of scientific autonomy. The censorship of such communication seems justifiable in certain cases, given the potentially catastrophic applications of some dual-use research. This conclusion, however, gives rise to another kind of danger: that regulators will use overly simplistic cost-benefit analysis to rationalize excessive regulation of scientific research. In response to this, we show how institutional design principles and normative frameworks from free speech theory can be used to help extend the argument for regulating dangerous dual-use research beyond overly simplistic cost-benefit reasoning, but without reverting to an implausibly absolutist view of scientific autonomy.
IPCC Special Report on Climate Change and Land (SRCCL), Chapter 3. Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems.
A recent controversy over the US National Science Advisory Board for Biosecurity's recommendation to censor two publications on genetically modified H5N1 avian influenza has generated concern over the threat to scientific freedom that such censorship presents. In this paper, I argue that in the case of these studies, appeals to scientific freedom are not sufficient to motivate a rejection of censorship. I then use this conclusion to raise broader concerns about the ethics of dual-use research.
In recent years, there has been a heated debate about how to interpret findings that seem to show that humans rapidly and automatically calculate the visual perspectives of others. In the current study, we investigated the question of whether automatic interference effects found in the dot-perspective task (Samson, Apperly, Braithwaite, Andrews, & Bodley Scott, 2010) are the product of domain-specific perspective-taking processes or of domain-general “submentalizing” processes (Heyes, 2014). Previous attempts to address this question have done so by implementing inanimate controls, such as arrows, as stimuli. The rationale for this is that submentalizing processes that respond to directionality should be engaged by such stimuli, whereas domain-specific perspective-taking mechanisms, if they exist, should not. These previous attempts have been limited, however, by the implied intentionality of the stimuli they have used (e.g. arrows), which may have invited participants to imbue them with perspectival agency. Drawing inspiration from “novel entity” paradigms from infant gaze-following research, we designed a version of the dot-perspective task that allowed us to precisely control whether a central stimulus was viewed as animate or inanimate. Across four experiments, we found no evidence that automatic “perspective-taking” effects in the dot-perspective task are modulated by beliefs about the animacy of the central stimulus. Our results also suggest that these effects may be due to the task-switching elements of the dot-perspective paradigm, rather than automatic directional orienting. Together, these results indicate that neither the perspective-taking nor the standard submentalizing interpretations of the dot-perspective task are fully correct.
We formulate a sort of "generic" cosmological argument, i.e., a cosmological argument that shares premises (e.g., "contingent, concretely existing entities have a cause") with numerous versions of the argument. We then defend each of the premises by offering pragmatic arguments for them. We show that an endorsement of each premise will lead to an increase in expected utility; so in the absence of strong evidence that the premises are false, it is rational to endorse them. Therefore, it is rational to endorse the cosmological argument, and so rational to endorse theism. We then consider possible objections.
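As a rough illustration of the expected-utility reasoning this abstract appeals to (a minimal sketch of my own, with purely illustrative symbols, not the authors' formalism), let p be one's credence that a given premise is true, and let u_{ET}, u_{EF}, u_{RT}, u_{RF} be the utilities of endorsing (E) or rejecting (R) it when it is true (T) or false (F). Then

\[ \mathrm{EU}(\text{endorse}) = p\,u_{ET} + (1-p)\,u_{EF}, \qquad \mathrm{EU}(\text{reject}) = p\,u_{RT} + (1-p)\,u_{RF}, \]

and endorsement maximizes expected utility exactly when \( p\,(u_{ET}-u_{RT}) > (1-p)\,(u_{RF}-u_{EF}) \). That inequality can hold even for a modest p when the gain from endorsing a true premise is large and the cost of endorsing a false one is small, which is the general shape of the pragmatic case the abstract describes.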
We discuss arguments against the thesis that the world itself can be vague. The first section of the paper distinguishes dialectically effective from ineffective arguments against metaphysical vagueness. The second section constructs an argument against metaphysical vagueness that promises to be of the dialectically effective sort: an argument against objects with vague parts. First, cases of vague parthood commit one to cases of vague identity. But we argue that Evans' famous argument against vague identity will not on its own enable one to complete the reductio in the present context. We provide a metaphysical premise that would complete the reductio, but note that it seems deniable. We conclude by drawing general morals from our case study.
Ascriptions of content are sensitive not only to our physical and social environment, but also to unforeseeable developments in the subsequent usage of our terms. This paper argues that the problems that may seem to come from endorsing such 'temporally sensitive' ascriptions either already follow from accepting the socially and historically sensitive ascriptions Burge and Kripke appeal to, or disappear when the view is developed in detail. If one accepts that one's society's past and current usage contributes to what one's terms mean, there is little reason not to let its future usage do so as well.
Some, if not all, statements containing the word 'I' seem to be 'immune to error through misidentification relative to the first-person pronoun' (Shoemaker). This immunity, however, is due to the fact that the pronoun 'I' plays no identifying role in the first place. Since no identification takes place here, the alleged immunity to misidentification should come as no surprise. But there is a second immunity thesis, which captures the peculiarity of 'I' better: the first-person pronoun is immune to reference-failure. Some philosophers claim that this kind of immunity applies to the indexicals 'here' and 'now' as well. What epistemic significance does such guaranteed reference have? Does it constitute infallible knowledge? No, the alleged immunity calls for a deflationary interpretation. It has grammatical reasons, in the Wittgensteinian sense. Therefore, the epistemic notions 'infallible knowledge' and 'immunity to error' are misleading here.
The paper argues that Gareth Evans’ argument for transparent self-knowledge is based on a conflation of doxastic transparency with ascriptive transparency. Doxastic transparency means that belief about one’s own doxastic state, e.g., the belief that one thinks that it will rain, can be warranted by ordinary empirical observation, e.g., of the weather. In contrast, ascriptive transparency says that self-ascriptions of belief, e.g., “I believe it will rain”, can be warranted by such observation. We first show that the thesis of doxastic transparency is ill-motivated and then offer a non-epistemic interpretation of ascriptive transparency by reference to the theory of explicit expressive acts: “I think it will rain” requires attendance to the weather because the utterance expresses a belief about the weather, not about ourselves. This will allow us to avoid what is often called “the puzzle of transparent self-knowledge” while remaining faithful to Evans’ linguistic observations.
This is a transcript of a conversation between P F Strawson and Gareth Evans in 1973, filmed for The Open University. Under the title 'Truth', Strawson and Evans discuss the question as to whether the distinction between genuinely fact-stating uses of language and other uses can be grounded on a theory of truth, especially a 'thin' notion of truth in the tradition of F P Ramsey.
Quine (1960, "Word and object". Cambridge, Mass.: MIT Press, ch. 2) claims that there are a variety of equally good schemes for translating or interpreting ordinary talk. 'Rabbit' might be taken to divide its reference over rabbits, over temporal slices of rabbits, or over undetached parts of rabbits, without significantly affecting which sentences get classified as true and which as false. This is the basis of his famous 'argument from below' to the conclusion that there can be no fact of the matter as to how reference is to be divided. Putative counterexamples to Quine's claim have been put forward in the past (see especially Evans (1975, "Journal of Philosophy", LXXII(13), 343-362. Reprinted in McDowell (Ed.), "Gareth Evans: Collected papers." Oxford: Clarendon Press.), Fodor (1993, "The elm and the expert: Mentalese and its semantics." Cambridge, MA: Bradford)), and various patches have been suggested (e.g. Wright (1997, The indeterminacy of translation. In C. Wright & B. Hale (Eds.), "A companion to the philosophy of language" (pp. 397-426). Oxford: Blackwell)). One lacuna in this literature is that one does not find any detailed presentation of what exactly these interpretations are supposed to be. Drawing on the contemporary literature on persistence, the present paper sets out detailed semantic treatments for fragments of English, whereby predicates such as 'rabbit' divide their reference over four-dimensional continuants (Quine's rabbits), instantaneous temporal slices of those continuants (Quine's rabbit-slices), and the simple elements which compose those slices (undetached rabbit parts), respectively. Once we have the systematic interpretations on the table, we can get to work evaluating them.
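To make the kind of proxy reinterpretation at issue concrete, here is a small, self-contained toy model (my own illustrative sketch, not the paper's actual semantic fragments; the mini-world, predicates, and compensating clauses are all assumptions chosen for clarity) showing how 'rabbit' can be taken to divide its reference over continuants, over slices, or over undetached parts while a simple sentence keeps the same truth value under every scheme.

# Toy illustration (not the paper's own fragments) of Quine-style proxy
# interpretations: 'rabbit' divides its reference over whole rabbits, over
# rabbit-slices, or over undetached rabbit parts, yet a simple sentence keeps
# the same truth value under every scheme.

# A miniature world: each rabbit has a location at each time and some parts.
# The toy domain contains only rabbit-derived entities, so 'rabbit' is trivially
# true of everything in it; the interest lies in how its reference is divided.
WORLD = {
    "r1": {"locations": {0: "garden", 1: "field"}, "parts": ["ear", "foot"]},
    "r2": {"locations": {0: "field", 1: "field"}, "parts": ["ear", "foot"]},
}
TIMES = [0, 1]

def scheme_continuants():
    """'rabbit' refers to whole, temporally extended rabbits."""
    domain = list(WORLD)
    rabbit = lambda x: True
    in_garden = lambda x, t: WORLD[x]["locations"][t] == "garden"
    return domain, rabbit, in_garden

def scheme_slices():
    """'rabbit' refers to instantaneous rabbit-slices, modeled as (rabbit, time) pairs."""
    domain = [(r, t) for r in WORLD for t in TIMES]
    rabbit = lambda x: True
    # Compensating adjustment: a slice is 'in the garden at t' iff it is a
    # t-slice of a rabbit located in the garden at t.
    in_garden = lambda x, t: x[1] == t and WORLD[x[0]]["locations"][t] == "garden"
    return domain, rabbit, in_garden

def scheme_parts():
    """'rabbit' refers to undetached rabbit parts, modeled as (rabbit, part) pairs."""
    domain = [(r, p) for r in WORLD for p in WORLD[r]["parts"]]
    rabbit = lambda x: True
    # Compensating adjustment: a part is 'in the garden at t' iff the rabbit it
    # is an undetached part of is in the garden at t.
    in_garden = lambda x, t: WORLD[x[0]]["locations"][t] == "garden"
    return domain, rabbit, in_garden

def some_rabbit_in_garden(scheme, t):
    """Truth value of 'There is a rabbit in the garden at t' under a scheme."""
    domain, rabbit, in_garden = scheme()
    return any(rabbit(x) and in_garden(x, t) for x in domain)

if __name__ == "__main__":
    for t in TIMES:
        verdicts = [some_rabbit_in_garden(s, t)
                    for s in (scheme_continuants, scheme_slices, scheme_parts)]
        print(f"t={t}:", verdicts)  # the three schemes agree at every time

The point of the compensating clauses in each scheme is the one the paper exploits in far greater detail: differences in how reference is divided are absorbed by matching adjustments elsewhere in the interpretation, so the assignment of truth values to whole sentences is left undisturbed.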
This is a forthcoming section for the book "Theism and Atheism: Opposing Arguments in Philosophy", edited by Graham Oppy, Gregory Dawes, Evan Fales, Joseph Koterski, Mashhad Al-Allaf, Robert Fastiggi, and David Shatz. I was asked to write a brief essay on whether naturalism or theism can successfully explain the distribution of suffering in our world. Whereas another section covers the possibility that suffering is evidence against theism, my essay is concerned only with the ability of either naturalism or theism to explain suffering. I argue that, for naturalists, suffering is not to be explained by either philosophers of religion or theologians. Instead, naturalists believe that suffering should be explained by the cognitive and social sciences, perhaps in conjunction with political philosophy and philosophy of mind. Moreover, naturalists may take suffering to be an important reason for action. In a world without a transcendent, supernatural being to watch over us, we can only depend upon each other to ameliorate existing conditions, to the extent that they can be ameliorated. On the other hand, theistic explanations of the distribution of suffering slip very easily into problematic theologies when they try to offer explicit explanations for suffering. For example, the world's most vulnerable people are often those who suffer the most, whereas oppressors are often able to prosper. That is, theistic explanations of our world's suffering easily slip into, e.g., the just world fallacy, racist ideology (i.e., that God favors some race(s) of people over others), or patriarchal ideology. Instead of offering an explicit explanation, theists should be skeptical theists -- i.e., they should argue that an explanation for our world's suffering is beyond our ken. While skeptical theism avoids the aforementioned problematic implications, if skeptical theism is true, then our world's suffering cannot be fully explained.
Bishop sentences such as “If a bishop meets a bishop, he blesses him” have long been considered problematic for the descriptivist (or E-type) approach to donkey anaphora (e.g. Evans, 1977; Heim, 1990; and Neale, 1990). Elbourne (2005) offers a situational descriptivist analysis that allegedly solves the problem, and furthermore extends its explanatory coverage to bishop sentences with coordinate subjects. However, I cast serious doubt on Elbourne’s analysis. Specifically, I argue that the purported solution is committed to the use of unbound anaphora, and that it cannot sustain the claimed empirical adequacy.
The aim of this study is to justify the belief that there are biological normative mechanisms that fulfill non-trivial causal roles in the explanations (as formulated by researchers) of actions and behaviors present in specific systems. One example of such mechanisms is the predictive mechanisms described and explained by predictive processing (hereinafter PP), which (1) guide actions and (2) shape causal transitions between states that have specific content and fulfillment conditions (e.g. mental states). Therefore, I am guided by a specific theoretical goal associated with the need to indicate those conditions that should be met by a non-trivial theory of normative mechanisms and the specific models proposed by PP supporters. In this work, I use classical philosophical methods, such as conceptual analysis and critical reflection. I also analyze selected studies in the fields of cognitive science, cognitive psychology, neurology, information theory and biology in terms of the methodology, argumentation and language used, in accordance with their theoretical importance for the issues discussed in this study. In this sense, the research presented here is interdisciplinary. My research framework is informed by the mechanistic model of explanation, which defines the necessary and sufficient conditions for explaining a given phenomenon. The research methods I chose are therefore related to the problems that I intend to solve. In the introductory chapter, “The concept of predictive processing”, I discuss the nature of PP as well as its main assumptions and theses. I also highlight the key concepts and distinctions for this research framework. Many authors argue that PP is a contemporary version of Kantianism and is exposed to objections similar to those made against the approach of Immanuel Kant. I discuss this thesis and show that it is only in a very general sense that the PP framework is neo-Kantian. Here we are not dealing with transcendental deduction nor with the application of transcendental arguments. I argue that PP is based on reverse engineering and abductive inferences. In the second part of this chapter, I respond to the objection formulated by Dan Zahavi, who directly accuses this research framework of anti-realistic consequences. I demonstrate that the position of internalism, present in so-called conservative PP, does not imply anti-realism, and that, due to the explanatory role played in it by structural representations directed at real patterns, it is justified to claim that PP is realistic. In this way, I show that PP is a non-trivial research framework, having its own subject, specific methods and its own epistemic status. Finally, I discuss positions classified as so-called radical PP. In the chapter “Predictive processing as a Bayesian explanatory model” I justify the thesis according to which PP offers Bayesian modeling. Many researchers claim that the brain is an implemented statistical probabilistic network that is an approximation of Bayes' rule. In practice, this means that all cognitive processes are supposed to apply Bayes' rule and can be described in terms of probability distributions. Such a solution arouses objections among many researchers and is the subject of wide criticism. The purpose of this chapter is to justify the thesis that Bayesian PP is a non-trivial research framework.
For this purpose, I argue that it explains certain phenomena not only at the computational level described by David Marr, but also at the level of algorithms and implementation. Later in this chapter I demonstrate that PP is normative modeling. Proponents of the use of Bayesian models in psychology or decision theory argue that they are normative because they allow the formulation of formal rules of action that show what needs to be done to make a given action optimal. Critics of this approach emphasize that such thinking about the normativity of Bayesian modeling is unjustified and that science should shift from prescriptive to descriptive positions. In a polemic with Shira Elqayam and Jonathan Evans (2011), I show that the division they propose into prescriptivism and Bayesian descriptivism is only apparent because, as I argue, there are two forms of prescriptivism, i.e. the weak and the strong. I argue that the weak version is epistemic and can lead to anti-realism, while the strong version is ontic and allows one to justify realism in relation to Bayesian models. I argue that a weak version of prescriptivism is valid for PP. It allows us to adopt anti-realism in relation to PP. In practice, this means that one can explain phenomena using Bayes' rule. This does not, however, imply that they are Bayesian in nature. However, the full justification of realism in relation to Bayesian PP presupposes the adoption of strong prescriptivism. This position assumes that phenomena are explained by Bayes' rule because they are Bayesian as such. If they are Bayesian in nature, then they should be explained using Bayesian modeling. This thesis will be substantiated in the chapters “Normative functions and mechanisms in the context of predictive processing” and “Normative mechanisms and actions in predictive processing”. In the chapter “The Free Energy Principle in predictive processing”, I discuss the Free Energy Principle (hereinafter FEP) formulated by Karl Friston and some of its implications. According to this principle, all biological systems (defined in terms of Markov blankets) minimize the free energy of their internal states in order to maintain homeostasis. Some researchers believe that PP is a special case of applying this principle to cognition, and that predictive mechanisms are homeostatic mechanisms that minimize free energy. The discussion of FEP is important because some authors consider it to be explanatorily important and normative. If this is the case, then FEP turns out to be crucial in explaining normative predictive mechanisms and, in general, any normative biological mechanisms. To define the explanatory possibilities of this principle, I refer to the discussion among its supporters of the issue they define as the problem of continuity and discontinuity between life and mind. A critical analysis of this discussion and the additional arguments I have formulated have allowed me to revise the explanatory ambitions of FEP. I also reject the belief that this principle is necessary to explain the nature of predictive mechanisms. I argue that the principle formulated and defended by Friston is an important research heuristic for PP analysis. In the chapter “Normative functions and mechanisms in predictive processing”, I begin my analyses by formulating an answer to the question about the normative nature of homeostatic mechanisms. I demonstrate that predictive mechanisms are not homeostatic.
I defend the view that a full explanation of normative mechanisms presupposes an explanation of normative functions. I discuss the most important proposals for understanding the normativity of a function, both from a systemic and a teleosemantic perspective. I conclude that a non-trivial concept of a function must meet two requirements, which I define as explanatory and normative. I show that none of the theories I have invoked satisfactorily meets both of these requirements. Instead, I propose a model of normativity based on Bickhard's account, but supplemented by a mechanistic perspective. I argue that a function is normative when: (1) it allows one to explain the dysfunction of a given mechanism; (2) it contributes to the maintenance of the organism's stability by shaping and limiting the possible relations, processes and behaviors of a given system; and (3) (in the case of representational and predictive functions) it makes it possible to explain the attribution of truth values to certain representations/predictions. On such an approach, a mechanism is normative when it performs certain normative functions and when it is constitutive for a specific action or behavior, even though for some reason it cannot realize it, either currently or in the long term. Such an understanding of the normativity of mechanisms presupposes the acceptance of the epistemic hypothesis. I argue that this hypothesis is not cognitively satisfactory, and that the ontic hypothesis should therefore be justified, which is directly related to adopting the position of ontic prescriptivism. For this reason, referring to the mechanistic theory of scientific explanation, I formulate an ontic interpretation of the concept of a normative mechanism. According to this approach, a mechanism or a function is normative when it performs such and such causal roles in explaining certain actions and behaviors. With regard to the normative properties of predictive mechanisms and functions, this means that they are the causes of specific actions an organism carries out in the environment. In this way, I justify the necessity of accepting the ontic hypothesis and rejecting the epistemic hypothesis. The fifth chapter, “Normative mechanisms and actions in predictive processing”, is devoted to the dark room problem and the related exploration-exploitation trade-off. A dark room is the state that an agent could be in if it minimized the sum of all potential prediction errors. I demonstrate that, in accordance with the basic assumption of PP about the need for continuous and long-term minimization of prediction errors, such a state should be desirable for the agent. Is it really so? Many authors believe it is not. I argue that the test of the value of PP is the possibility of a non-trivial solution to this problem, which can be reduced to the choice between active, uncertainty-increasing exploration and safe, easily predictable exploitation. I show that the solutions proposed by PP supporters in the literature do not enable a fully satisfactory explanation of this dilemma. I then defend the position according to which the full explanation of the normative mechanisms, and, subsequently, the solution to the dilemma of exploration and exploitation, involves reference to the existence of constraints present in the environment. The constraints include elements of the environment that make a given mechanism not only causal but also normative. They are therefore key to explaining the predictive mechanisms.
They not only play the role of the context in which the mechanism is implemented but, above all, are its constitutive component. I argue that the full explanation of the role of constraints in normative predictive mechanisms presupposes the integration of individual models of specific cognitive phenomena, because only the mechanistic integration of PP with other models allows for a non-trivial explanation of the nature of normative predictive mechanisms that would have strong explanatory value. The explanatory monism present in many approaches to PP makes it impossible to solve the problem of the dark room. Later in this chapter, I argue that Bayesian PP is normative not because it enables the formulation of such and such rules of action, but because the predictive mechanisms themselves are normative. They are normative because they condition the choice of such and such actions by agents. In this way, I justify the hypothesis that normative mechanisms make it possible to explain the phenomenon of agent motivation, which is crucial for solving the dark room problem. In the last part of the chapter, I formulate the hypothesis of distributed normativity, which assumes that the normative nature of certain mechanisms, functions or objects is determined by the relations into which these mechanisms, functions or objects enter. This means that what is normative (in the primary sense) is the relational structure that constitutes the normativity of the specific items included in it. I suggest that this hypothesis opens up many areas of research and makes it possible to rethink many problems. In the “Conclusion”, I summarize the results of my research and indicate further research perspectives.
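Since the abstract leans heavily on Bayes' rule and prediction-error minimization, the following minimal sketch (my own illustration, not a model from the dissertation; the hypotheses, probabilities, and error measure are assumptions chosen for clarity) shows a single discrete Bayesian update of the kind that PP models generalize.

# Minimal illustration (not from the dissertation) of the kind of Bayesian
# updating that predictive-processing models appeal to: a prior over hypotheses
# is combined with the likelihood of an observed outcome, and the 'prediction
# error' is one simple way to quantify surprise about that outcome.

def bayes_update(prior, likelihood, observation):
    """Return the posterior P(h | observation) for each hypothesis h."""
    unnormalized = {h: prior[h] * likelihood[h][observation] for h in prior}
    evidence = sum(unnormalized.values())          # P(observation)
    return {h: v / evidence for h, v in unnormalized.items()}

# Two hypotheses about a hidden cause of sensory input (illustrative numbers).
prior = {"rain": 0.3, "sprinkler": 0.7}
likelihood = {                                      # P(observation | hypothesis)
    "rain":      {"wet_grass": 0.9, "dry_grass": 0.1},
    "sprinkler": {"wet_grass": 0.6, "dry_grass": 0.4},
}

observation = "wet_grass"
predicted = sum(prior[h] * likelihood[h][observation] for h in prior)
posterior = bayes_update(prior, likelihood, observation)
prediction_error = 1.0 - predicted                  # surprise about the observation

print("predicted P(wet_grass):", round(predicted, 3))
print("posterior:", {h: round(p, 3) for h, p in posterior.items()})
print("prediction error:", round(prediction_error, 3))

In hierarchical PP models this step is iterated across levels, with posteriors at one level supplying priors for the next; nothing in the sketch depends on that elaboration.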
The philosophy of language concerns the study of how our language engages and interacts with our thinking. Studying logic, and the relation between logic and ordinary speech, can help a person structure their own arguments better and criticize the arguments of others. Meaning is the way in which words, symbols, ideas, and beliefs can properly be taken, its definition depending on the theory adopted, such as the correspondence theory, the coherence theory, the constructivist theory, the consensus theory, or the pragmatic theory. There are several distinct accounts of what a linguistic "meaning" is, depending on various theories (ideational, truth-conditional, use theories of language, constructivist, reference theories, verificationist, pragmatic, etc.). Investigations into how language interacts with the world are called theories of reference. The sense of a sentence is the thought it expresses. Such a thought is abstract, universal, and objective. Senses determine reference and are also the modes of presentation of the objects to which expressions refer. Referents are the objects in the world that words are about. The philosophy of language explores the relation between language and reality, in particular those problems in the study of language that cannot be addressed by other fields. Philosophical logic deals with formal descriptions of ordinary, non-specialized ("natural") language.
CONTENTS: 1. Philosophy of language - 1.1 History - - Ancient philosophy - - Medieval philosophy - - Modern philosophy - - Contemporary philosophy - 1.2 Major topics and subfields - - Composition and parts - - The nature of meaning - - Reference - - Mind and language - - - Innate and learned - - - Language and thought - - Social interaction and language - 1.3 Language and continental philosophy - 1.4 Problems in the philosophy of language - - Vagueness - - The problem of universals and composition - - The nature of language - - Formal versus informal approaches - - Translation and interpretation - 1.5 Nominalism - - History - - - Ancient Greek philosophy - - - Medieval philosophy - - - Modern and contemporary philosophy - - The problem of universals - - Types - - - Analytic philosophy and mathematics - - Criticism of the historical origins of the term 2. Meaning - 2.1 Truth and meaning - - Major theories of meaning - - - Correspondence theory - - - Coherence theory - - - Constructivist theory - - - Consensus theory - - - Pragmatic theory - - Associated theories and commentary - - - Logic and language - - - Gottlob Frege - - - Bertrand Russell - - - Other theories of truth - - - Saul Kripke - - - Criticisms of truth theories of meaning - 2.2 Bertrand Russell, On Denoting - - "The denoting phrase" - - - Russell's conception of a denoting phrase - - - Reference to something that does not exist - - - Epistemology - - The theory of descriptions - - - Mathematical description - - - Illustration - - - Meinong - - Solving the problem of negative existentials - - - Statements about concepts whose object does not exist - - - Ambiguity - - - Fictional names - - Criticisms - 2.3 Sense and reference - - Precursors - - - Antisthenes - - - John Stuart Mill - 2.3.1 Sense - - Sense and description - - Translating Bedeutung - - The nature of sense - 2.3.2 Reference - 2.4 Proper names - - The problem - 2.4.1 Theories - - Mill's theory - - The theory based on the sense of names - - The descriptive theory - - The causal theory of names - - Direct reference theories - - Continental philosophy - 2.5 Gottlob Frege, On Sense and Reference - 2.6 Causal theories of reference - - Motivation - - Variations - 2.6.1 Saul Kripke's causal theory of reference - 2.6.2 Gareth Evans's causal theory of reference - 2.6.3 Michael Devitt's causal theory of reference - 2.6.4 Blockchain and the causal tree of reference - 2.6.5 Perspectives - 2.7 Saul Kripke, Naming and Necessity - 2.7.1 Lecture I - 2.7.2 Lecture II - 2.7.3 Lecture III - 2.7.4 Conclusions - 2.8 Kit Fine, Semantic Relationism - - A. The antinomy of the variable - - B. The Tarskian approach - - C. The rejection of semantic role - - D. The instantial approach - - E. The algebraic approach - - F. The relational approach - - G. Relational semantics for first-order logic 3. Logic - Concepts - - Logical form - - Semantics - - Inference - - Logical systems - - Logic and rationality - - Rival concepts - Types - - Syllogistic logic - - Propositional logic - - Predicate logic - - Modal logic - - Informal reasoning and dialectic - - Mathematical logic - - Philosophical logic - - Computational logic - - Non-classical logic - Controversies - - "Is logic empirical?"
- - Implication: strict or material - - Tolerating the impossible - - Rejecting logical truth - 3.1 Philosophy of logic - - Truth - - - Truth-bearers - - - Analytic truths, logical truth, validity, logical consequence, and entailment - 3.2 Propositional logic - - Explanation - - History - - Terminology - - Basic notions - - - Closure under operations - - - Argument - 3.3 Predicate logic - - Introduction - - Syntax - - - The alphabet - - - - Logical symbols - - - - Non-logical symbols - - - Formation rules - - - - Terms - - - - Formulas - - - - Notational conventions - - - Free and bound variables - - Semantics - - - First-order structures - - - Evaluation of truth values - - - Validity, satisfiability, and logical consequence - - - Algebraization - - - First-order theories, models, and elementary classes - - - Empty domains - 3.4 Modal logic - - The development of modal logic - - Semantics - - - Model theory - - - Axiomatic systems - - - Structural proof theory - - - Decision methods - - Types of modal logic - - - Alethic logic - - - Epistemic logic - - - Temporal logic - - - Deontic logic - - - Doxastic logic - - - Other modal logics - - The ontology of possibility - - Controversies - 3.5 Statements - - The statement as an abstract entity - 3.6 Dilemmas - - The use of the dilemma in logic - 3.7 Arguments - - Formal and informal - - Standard types - - - Deductive arguments - - - Inductive arguments - 3.8 Actualism - - Example - - Philosophical viewpoints - - The indexical analysis of actuality - 3.9 Possible worlds - - Possibility, necessity, and contingency - - The formal semantics of modal logic - - From modal logic to philosophical tool - - Possible-worlds theory in literary studies - 3.9.1 Actual, non-actual but possible, and impossible worlds - - Logically possible worlds - - The constituents of possible worlds - 3.10 Saul Kripke - - Wittgenstein - - Truth - 3.10.1 Modal logic - - Canonical models - - Carlson models - 3.10.2 Intuitionistic logic - - First-order intuitionistic logic - 3.10.3 Naming and Necessity - - "A puzzle about belief" References - About the author - Nicolae Sfetcu - - By the same author - - Contact - Publisher: MultiMedia Publishing.
In his well-known Mind and World and in line with Wilfrid Sellars (1991) or “that great foe of ‘immediacy’” (ibid., 127), Hegel, McDowell claims that “when Evans argues that judgments of experience are based on non-conceptual content, he is falling into a version of the Myth of the Given” (1996, 114). In this talk and on the basis of a Berkeleyio-Kantian ‘realist idealist’ world view (sect. 1) and an explication of Kant’s concept of the “given manifold” (CPR, e.g. B138; sect. 2), I will argue that Kant and Evans (1982, chs. 5.1–5.2) were indeed mistaken in their versions of the given (sect. 3), but that Sellars and his student McDowell were even more mistaken (sects. 4–5) and that, in the end, there would appear to be a non-conceptual and (thus) non-propositional and essentially (Kantio-)Schopenhauerian given (1816, ch. 1) in perceptual experience from which we unconsciously (Helmholtz 1867, ch. 26) and automatically infer to our first perceptual beliefs.
In chapter 7 of The Varieties of Reference, Gareth Evans claimed to have an argument that would present "an antidote" to the Cartesian conception of the self as a purely mental entity. On the basis of considerations drawn from philosophy of language and thought, Evans claimed to be able to show that bodily awareness is a form of self-awareness. The apparent basis for this claim is the datum that sometimes judgements about one’s position based on body sense are immune to errors of misidentification relative to the first-person pronoun 'I'. However, Evans’s argument suffers from a crucial ambiguity. 'I' sometimes refers to the subject's mind, sometimes to the person, and sometimes to the subject's body. Once disambiguated, it turns out that Evans’s argument either begs the question against the Cartesian or fails to be plausible at all. Nonetheless, the argument is important for drawing our attention to the idea that bodily modes of awareness should be taken seriously as possible forms of self-awareness.
In The Varieties of Reference, Gareth Evans describes the acquisition of beliefs about one’s beliefs in the following way: ‘I get myself in a position to answer the question whether I believe that p by putting into operation whatever procedure I have for answering the question whether p.’ In this paper I argue that Evans’s remark can be used to explain first person authority if it is supplemented with the following consideration: Holding on to the content of a belief and ‘prefixing’ it with ‘I believe that’ is as easy as it is to hold on to the contents of one’s thoughts when making an inference. We do not, usually, have the problem, in going, for example, from ‘p’ and ‘q’ to ‘p and q’, that one of our thought contents gets corrupted. Self-ascription of belief by way of Evans’s procedure is based on the same capacity to retain and re-deploy thought contents and therefore should enjoy a similar degree of authority. However, is Evans’s description exhaustive of all authoritative self-ascription of belief? Christopher Peacocke has suggested that in addition to Evans’s procedure there are two more relevant ways of self-ascribing belief. I argue that both methods can be subsumed under Evans’s procedure.
This paper is largely exegetical/interpretive. My goal is to demonstrate that some criticisms that have been leveled against the program Gareth Evans constructs in The Varieties of Reference (Evans 1980, henceforth VR) misfire because they are based on misunderstandings of Evans’ position. First I will be discussing three criticisms raised by Tyler Burge (Burge, 2010). The first has to do with Evans’ arguments to the effect that a causal connection between a belief and an object is insufficient for that belief to be about that object. A key part of Evans’ argument is to carefully distinguish considerations relevant to the semantics of language from considerations relevant to the semantics (so to speak) of thought or belief (to make the subsequent discussion easier, I will henceforth use ‘thought’ as a blanket term for the relevant mental states, including belief). I will argue that Burge’s criticisms depend on largely not taking account of Evans’ distinctions. Second, Burge criticizes Evans’ account of ‘informational content’, taking it to be inconsistent. I will show that the inconsistency Burge finds depends entirely on a misreading of the doctrine. Finally, Burge takes Evans to task for a perceived over-intellectualization in a key aspect of his doctrine. Burge incorrectly reads Evans as requiring that the subject holding a belief be engaged in certain overly intellectual endeavors, when in fact Evans is only attributing these endeavors to theorists of such a subject. Next, I turn to two criticisms leveled by John Campbell (Campbell, 1999). I will argue that Campbell’s criticisms are based on misunderstandings – though they do hit at deeper elements of Evans’ doctrine. First, Campbell reads Evans’ account of demonstrative thought as requiring that the subject’s information link to an object allows her to directly locate that object in space. Campbell constructs a case in which one tomato (a) is, because of an angled mirror, incorrectly seen as being at a location that happens to be occupied by an identical tomato (b). Campbell claims that Evans’ doctrines require us to conclude that the subject cannot have a demonstrative thought about the seen tomato (a), though it seems intuitively that such a subject would be able to have a demonstrative thought about that tomato, despite its location being inaccurately seen. I show that Evans’ position in fact allows that the subject can have a demonstrative thought about the causal-source tomato in this case because his account does not require that the location of demonstratively identified objects be immediately accurately assessed. What is crucial is that the subject have the ability to accurately discover the location. Second, Campbell criticizes Evans’ notion of a fundamental level of thought. I show that this criticism hinges on a view of the nature and role of the fundamental level of thought that mischaracterizes Evans’ treatment of the notion.
Both mindreading and stereotyping are forms of social cognition that play a pervasive role in our everyday lives, yet too little attention has been paid to the question of how these two processes are related. This paper offers a theory of the influence of stereotyping on mental-state attribution that draws on hierarchical predictive coding accounts of action prediction. It is argued that the key to understanding the relation between stereotyping and mindreading lies in the fact that stereotypes centrally involve character-trait attributions, which play a systematic role in the action–prediction hierarchy. On this view, when we apply a stereotype to an individual, we rapidly attribute to her a cluster of generic character traits on the basis of her perceived social group membership. These traits are then used to make inferences about that individual’s likely beliefs and desires, which in turn inform inferences about her behavior.
How is human social intelligence engaged in the course of ordinary conversation? Standard models of conversation hold that language production and comprehension are guided by constant, rapid inferences about what other agents have in mind. However, the idea that mindreading is a pervasive feature of conversation is challenged by a large body of evidence suggesting that mental state attribution is slow and taxing, at least when it deals with propositional attitudes such as beliefs. Belief attributions involve contents that are decoupled from our own primary representation of reality; handling these contents has come to be seen as the signature of full-blown human mindreading. However, mindreading in cooperative communication does not necessarily demand decoupling. We argue here for a theoretical and empirical turn towards “factive” forms of mentalizing. In factive mentalizing, we monitor what others do or do not know, without generating decoupled representations. We propose a model of the representational, cognitive, and interactive components of factive mentalizing, a model that aims to explain efficient real-time monitoring of epistemic states in conversation. After laying out this account, we articulate a more limited set of conversational functions for nonfactive forms of mentalizing, including contexts of meta-linguistic repair, deception, and argumentation. We conclude with suggestions for further research into the roles played by factive versus nonfactive forms of mentalizing in conversation.
Character judgments play an important role in our everyday lives. However, decades of empirical research on trait attribution suggest that the cognitive processes that generate these judgments are prone to a number of biases and cognitive distortions. This gives rise to a skeptical worry about the epistemic foundations of everyday characterological beliefs that has deeply disturbing and alienating consequences. In this paper, I argue that this skeptical worry is misplaced: under the appropriate informational conditions, our everyday character-trait judgments are in fact quite trustworthy. I then propose a mindreading-based model of the socio-cognitive processes underlying trait attribution that explains both why these judgments are initially unreliable, and how they eventually become more accurate.
According to the two-systems account of mindreading, our mature perspective-taking abilities are subserved by two distinct mindreading systems: a fast but inflexible, “implicit” system, and a flexible but slow “explicit” one. However, the currently available evidence on adult perspective-taking does not support this account. Specifically, both Level-1 and Level-2 perspective-taking show a combination of efficiency and flexibility that is deeply inconsistent with the two-systems architecture. This inconsistency also turns out to have serious consequences for the two-systems framework as a whole, both as an account of our mature mindreading abilities and of the development of those abilities. What emerges from this critique is a conception of context-sensitive, spontaneous mindreading that may provide insight into how mindreading functions in complex social environments. This in turn offers a bulwark against skepticism about the role of mindreading in everyday social cognition.
Nativists about theory of mind have typically explained why children below the age of four fail the false belief task by appealing to the demands that these tasks place on children’s developing executive abilities. However, this appeal to executive functioning cannot explain a wide range of evidence showing that social and linguistic factors also affect when children pass this task. In this paper, I present a revised nativist proposal about theory of mind development that is able to accommodate these findings, which I call the pragmatic development account. According to this proposal, we can gain a better understanding of the shift in children’s performance on standard false-belief tasks around four years of age by considering how children’s experiences with the pragmatics of belief discourse affect the way they interpret the task.
In the small but growing literature on the philosophy of country music, the question of how we ought to understand the genre’s notion of authenticity has emerged as one of the central questions. Many country music scholars argue that authenticity claims track attributions of cultural standing or artistic self-expression. However, careful attention to the history of the genre reveals that these claims are simply factually wrong. On the basis of this, we have grounds for dismissing these attributions. Here, I argue for an alternative model of authenticity on which we take claims about the relative authenticity of country music to be evidence of ‘country’ being a dual character concept, in the same way as has been suggested of punk rock and hip-hop. Authentic country music is country music that embodies the core value commitments of the genre. These values form the basis of country artists’ and audiences’ practical identities. Part of country music’s aesthetic practice is that audiences reconnect with, reify, and revise this common practical identity through identification with artists and works that manifest these values. We should then think of authenticity discourse within country music as a kind of game within the genre’s practice of shaping and maintaining this practical identity.
The problem of the many threatens to show that, in general, there are far more ordinary objects than you might have thought. I present and motivate a solution to this problem using many-one identity. According to this solution, the many things that seem to have what it takes to be, say, a cat, are collectively identical to that single cat.
‘Virtue signaling’ is the practice of using moral talk in order to enhance one’s moral reputation. Many find this kind of behavior irritating. However, some philosophers have gone further, arguing that virtue signaling actively undermines the proper functioning of public moral discourse and impedes moral progress. Against this view, I argue that widespread virtue signaling is not a social ill, and that it can actually serve as an invaluable instrument for moral change, especially in cases where moral argument alone does not suffice. Specifically, virtue signaling can change the broader public’s social expectations, which can in turn motivate the adoption of new, positive social norms. I also argue that the reputation-seeking motives underlying virtue signaling impose important constraints on virtue signalers’ behavior, which serve to keep the worst excesses of virtue signaling in check.
In How We Understand Others: Philosophy and Social Cognition, Shannon Spaulding develops a novel account of social cognition with pessimistic implications for mindreading accuracy: according to Spaulding, mistakes in mentalizing are much more common than traditional theories of mindreading commonly assume. In this commentary, I push against Spaulding’s pessimism from two directions. First, I argue that a number of the heuristic mindreading strategies that Spaulding views as especially error prone might be quite reliable in practice. Second, I argue that current methods for measuring mindreading performance are not well-suited for the task of determining whether our mental-state attributions are generally accurate. I conclude that any claims about the accuracy or inaccuracy of mindreading are currently unjustified.
Genre discourse is widespread in appreciative practice, whether that is about hip-hop music, romance novels, or film noir. It should be no surprise, then, that philosophers of art have also been interested in genres. Whether they are giving accounts of genres as such or of particular genres, genre talk abounds in philosophy as much as it does in popular discourse. As a result, theories of genre proliferate as well. However, in their accounts, philosophers have so far focused on capturing all of the categories of art that we think of as genres and have focused less on ensuring that only the categories we think are genres are captured by those theories. Each of these theories populates the world with far too many genres because they count a wide class of mere categories of art as genres. I call this the problem of genre explosion. In this paper, I survey the existing accounts of genre and describe the kinds of considerations they employ in determining whether a work is a work of a given genre. After this, I demonstrate the ways in which the problem of genre explosion arises for all of these theories and discuss some solutions those theories could adopt that will ultimately not work. Finally, I argue that the problem of genre explosion is best solved by adopting a social view of genres, which can capture the difference between genres and mere categories of art.
The current industrial revolution is said to be driven by digitization that exploits connected information across all aspects of manufacturing. Standards have been recognized as an important enabler. Ontology-based information standards may provide benefits not offered by current information standards. Although ontologies have been developed in the industrial manufacturing domain, they have been fragmented and inconsistent, and few have achieved the status of a standard. With successes in developing coherent ontologies in the biological, biomedical, and financial domains, an effort called the Industrial Ontologies Foundry (IOF) has been formed to pursue the same goal for the industrial manufacturing domain. However, developing a coherent ontology covering the entire industrial manufacturing domain is known to be a mountainous challenge because of the multidisciplinary nature of manufacturing. To manage the scope and expectations, the IOF community kicked off its effort with a proof-of-concept (POC) project. This paper describes the developments within the project. It also provides a brief update on the IOF's organizational setup.
How does one inquire into the truth of first principles? Where does one begin when deciding where to begin? Aristotle recognizes a series of difficulties when it comes to understanding the starting points of a scientific or philosophical system, and contemporary scholars have encountered their own difficulties in understanding his response. I will argue that Aristotle was aware of a Platonic solution that can help us uncover his own attitude toward the problem. Aristotle's central problem with first principles arises from the fact that they cannot be demonstrated in the same way as other propositions. Since demonstrations proceed from prior and better-known principles, if the principles themselves were in need of...
Social norms are commonly understood as rules that dictate which behaviors are appropriate, permissible, or obligatory in different situations for members of a given community. Many researchers have sought to explain the ubiquity of social norms in human life in terms of the psychological mechanisms underlying their acquisition, conformity, and enforcement. Existing theories of the psychology of social norms appeal to a variety of constructs, from prediction-error minimization, to reinforcement learning, to shared intentionality, to domain-specific adaptations for norm acquisition. In this paper, we propose a novel methodological and conceptual framework for the cognitive science of social norms that we call normative pluralism. We begin with an analysis of the explanatory aims of the cognitive science of social norms. From this analysis, we derive a recommendation for a reformed conception of its explanandum: a minimally psychological construct that we call normative regularities. Our central empirical proposal is that the psychological underpinnings of social norms are most likely realized by a heterogeneous set of cognitive, motivational, and ecological mechanisms that vary between norms and between individuals, rather than by a single type of process or distinctive norm system. This pluralistic approach, we suggest, offers a methodologically sound point of departure for a fruitful and rigorous science of social norms.
Moral character judgments pervade our everyday social interactions. But are these judgments epistemically reliable? In this paper, I discuss a challenge to the reliability of ordinary virtue and vice attribution that emerges from Christian Miller’s Mixed Traits theory of moral character, which entails that the majority of our ordinary moral character judgments are false. In response to this challenge, I argue that a key prediction of this theory is not borne out by the available evidence; this evidence further suggests that our moral character judgments do converge upon real psychological properties of individuals. I go on to argue that this is because the evidence for the Mixed Traits Theory does not capture the kind of compassionate behaviors that ordinary folk really care about. Ultimately, I suggest that our ordinary standards for virtue and vice have a restricted social scope, which reflects the parochial nature of our characterological moral psychology.
Plato's Theaetetus discusses and ultimately rejects Protagoras's famous claim that "man is the measure of all things." The most famous of Plato's arguments is the Self-Refutation Argument. But he offers a number of other arguments as well, including one that I call the 'Future Argument.' This argument, which appears at Theaetetus 178a−179b, is quite different from the earlier Self-Refutation Argument. I argue that it is directed mainly at a part of the Protagorean view not addressed before, namely, that all beliefs concerning one's own future sensible qualities are true. This part of the view is found to be inconsistent with Protagoras's own conception of wisdom as expertise and with his own pretenses at expertise in teaching.
This paper is a test case for the claim, made famous by Myles Burnyeat, that the ancient Greeks did not recognize subjective truth or knowledge. After a brief discussion of the issue in Sextus Empiricus, I then turn to Plato's discussion of Protagorean views in the Theaetetus. In at least two passages, it seems that Plato attributes to Protagoras the view that our subjective experiences constitute truth and knowledge, without reference to any outside world of objects. I argue that these passages have been misunderstood and that on the correct reading, they do not say anything about subjective knowledge. I then try out what I take to be the correct reading of the passages. The paper concludes with a brief discussion of the importance of causes in Greek epistemology.
Groove, as a musical quality, is an important part of jazz and pop music appreciative practices. Groove talk is widespread among musicians and audiences, and considerable importance is placed on generating and appreciating grooves in music. However, musicians, musicologists, and audiences use groove attributions in a variety of ways that do not track one consistent underlying concept. I argue that there are at least two distinct concepts of groove. On one account, groove is ‘the feel of the music’ and, on the other, groove is the psychological feeling (induced by music) of wanting to move one’s body. Further, I argue that recent work in music psychology shows that these two concepts do not converge on a unified set of musical features. Finally, I also argue that these two concepts play different functional roles in the appreciative practices of jazz and popular music. This should cause us to further consider the mediating role genre plays for aesthetic concepts and provides us with reason for adopting a more communitarian approach to aesthetics which is attentive to the ways in which aesthetic discourse serves the practices of different audiences.
At a crucial juncture in Plato’s Sophist, when the interlocutors have reached their deepest confusion about being and not-being, the Eleatic Visitor proclaims that there is yet hope. Insofar as they clarify one, he maintains, they will equally clarify the other. But what justifies the Visitor’s seemingly oracular prediction? A new interpretation explains how the Visitor’s hope is in fact warranted by the peculiar aporia they find themselves in. The passage describes a broader pattern of ‘exploring both sides’ that lends insight into Plato’s aporetic method.
In both Metaphysics Γ 4 and 5 Aristotle argues that Protagoras is committed to the view that all contradictions are true. Yet Aristotle’s arguments are not transparent, and later, in Γ 6, he provides Protagoras with a way to escape contradictions. In this paper I try to understand Aristotle’s arguments. After examining a number of possible solutions, I conclude that the best way of explaining them is to (a) recognize that Aristotle is discussing a number of Protagorean opponents, and (b) import another of Protagoras’ views, namely the claim that there are always two logoi opposed to one another.
Plato’s Parmenides and Lysis have a surprising amount in common from a methodological standpoint. Both systematically employ a method that I call ‘exploring both sides’, a philosophical method for encouraging further inquiry and comprehensively understanding the truth. Both have also been held in suspicion by interpreters for containing what looks uncomfortably similar to sophistic methodology. I argue that the methodological connections across these and other dialogues relieve those suspicions and push back against a standard developmentalist story about Plato’s method. This allows for a better understanding of why exploring both sides is explicitly recommended in the Parmenides and its role within Plato’s broader methodological repertoire.
The purpose of this research is to articulate how a theory of causation might be serviceable to a theory of sport. This article makes conceptual links between Bernard Suits’ theory of game-playing, causation, and theories of causation. It justifies theories of causation while drawing on connections between sport and counterfactuals. It articulates the value of theories of causation while emphasizing possible limitations. A singularist theory of causation is found to be more broadly serviceable with particular regard to its analysis of sports.
The Parmenides has been unduly overlooked in discussions of hypothesis in Plato. It contains a unique method for testing first principles, a method I call ‘exploring both sides’. The dialogue recommends exploring the consequences of both a hypothesis and its contradictory and thematizes this structure throughout. I challenge the view of Plato’s so-called ‘method of hypothesis’ as an isolated stage in Plato’s development; instead, the evidence of the Parmenides suggests a family of distinct hypothetical methods, each with its own peculiar aim. Exploring both sides is unique both in its structure and in its aim of testing candidate principles.
Intellectual attention, like perceptual attention, is a special mode of mental engagement with the world. When we attend intellectually, rather than making use of sensory information we make use of the kind of information that shows up in occurrent thought, memory, and the imagination (Chun, Golomb, & Turk-Browne, 2011). In this paper, I argue that reflecting on what it is like to comprehend memory demonstratives speaks in favour of the view that intellectual attention is required to understand memory demonstratives. Moreover, I argue that this is a line of thought endorsed by Gareth Evans in his Varieties of Reference (1982). In so doing, I improve on interpretations of Evans that have been offered by Christopher Peacocke (1984) and by Christoph Hoerl & Theresa McCormack (a coauthored piece, 2005), as well as on McDowell’s (1990) criticism of Peacocke’s interpretation of Evans. Like McDowell, I believe that Peacocke might overemphasize the role that “memory-images” play in Evans’ account of comprehending memory demonstratives. But unlike McDowell, I provide a positive characterization of how Evans described the phenomenology of comprehending memory demonstratives.
Amartya Sen has recently leveled a series of what he alleges to be quite serious, very general objections against Rawls, Rawlsian fellow travelers, and other social contract accounts of justice. In The Idea of Justice, published in 2009, Sen specifically charges his target philosophical views with what he calls transcendentalism and procedural parochialism, and with being mistakenly narrowly focused on institutions. He also thinks there is a basic incoherence—arising from a version of Derek Parfit’s Identity Problem—internal to the Rawlsian theoretical apparatus. Sen would have political philosophy pursue intersocietal comparisons of relative justice more directly and in the manner of social choice theory. Yet the positive argument he develops in support of this method is quite thin. That aside, Sen’s polemical strategy of inflicting death by a thousand cuts is ineffective against the Rawlsian paradigm. For, as I show herein, none of these criticisms have the force we might be led to expect.
In this essay I argue that the central problem of Aristotle’s Metaphysics H (VIII) 6 is the unity of forms and that he solves this problem in just the way he solves the problem of the unity of composites – by hylomorphism. I also discuss the matter–form relationship in H 6, arguing that they have a correlative nature as the matter of the form and the form of the matter.