The implicit dimension of enthymemes is investigated from a pragmatic perspective to show why a premise can be left unexpressed, and how it can be used strategically. The relationship between the implicit act of taking for granted and the pattern of presumptive reasoning is shown to be the cornerstone of kairos and the fallacy of straw man. By taking a proposition for granted, the speaker shifts the burden of proving its unacceptability onto the hearer. The resemblance of the tacit premise to what is commonly acceptable or has been actually stated can be used as a rhetorical strategy.
According to one picture of the mind, decisions and actions are largely the result of automatic cognitive processing beyond our ability to control. This picture is in tension with a foundational principle in ethics that moral responsibility for behavior requires the ability to control it. The discovery of implicit attitudes contributes to this tension. According to the ability argument against moral responsibility, if we cannot control implicit attitudes, and implicit attitudes cause behavior, then we cannot be morally responsible for that behavior. The purpose of this paper is to refute the ability argument. Drawing on both scientific evidence in cognitive science and philosophical arguments in ethics and action theory, I argue that it is invalid and unsound because current evidence is insufficient to establish the premises that implicit attitudes are uncontrollable, that they significantly cause behavior, that responsibility always requires ability, and that, even if uncontrollable attitudes did fully cause behavior, the behavior they cause would itself be uncontrollable. The rejection of the ability argument questions the priority of the unconscious over the conscious mind in cognitive science, deprioritizes ability in theories of moral responsibility in ethics, and provides a strong reason to uphold moral responsibility for implicitly biased behavior.
A person with one dollar is poor. If a person with n dollars is poor, then so is a person with n + 1 dollars. Therefore, a person with a billion dollars is poor. True premises, valid reasoning, a false conclusion. This is an instance of the Sorites-paradox. (There are infinitely many such paradoxes. A man with an IQ of 1 is unintelligent. If a man with an IQ of n is unintelligent, so is a man with an IQ of n+1. Therefore, a man with an IQ of 200 is unintelligent.) Most attempts to solve this paradox reject some law of classical logic, usually the law of bivalence. I show that this paradox can be solved while holding on to all the laws of classical logic. Given any predicate that generates a Sorites-paradox, significant use of that predicate is actually elliptical for a relational statement: a significant token of "Bob is poor" means that Bob is poor compared to x, for some value of x. Once a value of x is supplied, a definite cutoff line between having and not having the paradox-generating predicate is supplied. This neutralizes the inductive step in the associated Sorites argument, and the would-be paradox is avoided.
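A minimal formalization of the argument pattern this abstract describes, and of how the proposed relational reading blocks it, may be helpful (the cutoff function c is my notation, not the author's):
\[
P(1), \qquad \forall n\,\bigl(P(n) \rightarrow P(n+1)\bigr), \qquad \therefore\ P(10^{9}).
\]
If "poor" is elliptical for "poor compared to x", the predicate becomes relational, \(P_x(n) \iff n < c(x)\) for some cutoff \(c(x)\) fixed once \(x\) is supplied. The inductive step then fails at \(n = c(x)-1\), since \(P_x(c(x)-1)\) holds while \(P_x(c(x))\) does not, so the conclusion no longer follows while bivalence and the other classical laws are retained.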
Expressing a widely-held view, David Hitchcock claims that "an enthymematic argument ... assumes at least the truth of the argument's associated conditional ... whose antecedent is the conjunction of the argument's explicit premises and whose consequent is the argument's conclusion." But even definitionally, this view is problematic, since an argument's being "enthymematic" or incomplete with respect to its explicit premises means that the conclusion is not implied by these premises alone. The paper attempts to specify the ways in which the view is incorrect, as well as seemingly correct (e.g., the case of a Modus Ponens wherein the major premise is implicit).
Although in some contexts the notions of an ordinary argument’s presumption, assumption, and presupposition appear to merge into the one concept of an implicit premise, there are important differences between these three notions. It is argued that assumption and presupposition, but not presumption, are basic logical notions. A presupposition of an argument is best understood as pertaining to a propositional element (a premise or the conclusion) e of the argument, such that the presupposition is a necessary condition for the truth of e or for a term in e to have a referent. In contrast, an assumption of an argument pertains to the argument as a whole in that it is integral to the reasoning or inferential structure of the argument. A logical assumption of an argument is essentially a proposition that must be true in order for the argument aside from that proposition to be fully cogent. Nothing that is both comparable and distinguishing can be said about presumptions of arguments. Rather, presumptions of arguments are distinctively conventional; they are introduced through conventional rules (e.g., those that concern how to treat promises). So not all assumptions and not all presuppositions of arguments are presumptions of those arguments, although all presumptions of arguments are either assumptions or presuppositions of those arguments. This account avoids making the (monological) notion of presumption vacuous and dissolving the distinction between assumption and presumption, which is a vulnerability of alternative views such as Hansen’s and Bermejo-Luque’s, as is shown.
This paper advances an approach to relevance grounded on patterns of material inference called argumentation schemes, which can account for the reconstruction and the evaluation of relevance relations. In order to account for relevance in different types of dialogical contexts, including those pursuing non-cognitive goals, and to measure the scalar strength of relevance, communicative acts are conceived as dialogue moves, whose coherence with the previous ones or the context is represented as the conclusion of steps of material inferences. Such inferences are described using argumentation schemes and are evaluated by considering 1) their defeasibility, and 2) the acceptability of the implicit premises on which they are based. The assessment of both the relevance of an utterance and the strength thereof depends on the evaluation of three interrelated factors: 1) the number of inferential steps required; 2) the types of argumentation schemes involved; and 3) the implicit premises required.
Mayr’s proximate–ultimate distinction has received renewed interest in recent years. Here we discuss its role in arguments about the relevance of developmental to evolutionary biology. We show that two recent critiques of the proximate–ultimate distinction fail to explain why developmental processes in particular should be of interest to evolutionary biologists. We trace these failures to a common problem: both critiques take the proximate–ultimate distinction to neglect specific causal interactions in nature. We argue that this is implausible, and that the distinction should instead be understood in the context of explanatory abstractions in complete causal models of evolutionary change. Once the debate is reframed in this way, the proximate–ultimate distinction’s role in arguments against the theoretical significance of evo-devo is seen to rely on a generally implicit premise: that the variation produced by development is abundant, small and undirected. We show that a “lean version” of the proximate–ultimate distinction can be maintained even when this isotropy assumption does not hold. Finally, we connect these considerations to biological practice. We show that the investigation of developmental constraints in evolutionary transitions has long relied on a methodology which foregrounds the explanatory role of developmental processes. It is, however, entirely compatible with the lean version of the proximate–ultimate distinction.
A probability distribution is regular if no possible event is assigned probability zero. While some hold that probabilities should always be regular, three counter-arguments have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but we see here that their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
A probability distribution is regular if it does not assign probability zero to any possible event. While some hold that probabilities should always be regular, three counter-arguments have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson and Benci et al. have raised technical objections to these symmetry arguments, but we see here that their objections fail. Howson says that Williamson’s “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
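For reference, the notion of regularity at issue in the two abstracts above admits a compact statement (standard formulation; notation mine): a probability function \(P\) is regular just in case
\[
P(E) > 0 \quad \text{for every possible (non-empty) event } E.
\]
The symmetry arguments then exhibit pairs of perfectly similar possible events \(E_1, E_2\) (for instance, outcomes related by a rotation under a putatively rotation-invariant distribution) such that regularity, if it held, would force \(P(E_1) \neq P(E_2)\).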
In this paper, I inspect the grounds for the mature Spinozist argument for substance monism. The argument is succinctly stated at Ethics Part 1, Proposition 14. The argument appeals to two explicit premises: (1) that there must be a substance with all attributes; (2) that substances cannot share their attributes. In conjunction with a third implicit premise, that a substance cannot lack attributes altogether, Spinoza infers that there can be no more than one substance. I begin the inspection with the analysis of the first premise, which is provided in the form of the four proofs of God’s existence in Ethics Part 1, Proposition 11. While demonstrating how Spinoza adopts a progressive approach, where the fourth proof of God’s existence is more successful and persuasive than the third, which is more successful than the second, etc., I also unpack concepts central to Spinoza’s thinking here, including the concepts of reason (ratio) and power (potestas or potentia). I then analyze the second premise of the Spinozist argument for substance monism, as established by Ethics Part 1, Proposition 4 in conjunction with Ethics Part 1, Proposition 5. I take up and respond to the objection attributed to Leibniz that a substance p can have the attributes x and y and a substance q can have the attributes y and z, and thus that substances can share some attributes while remaining distinct. Throughout the study, my attention is focused on the argumentative procedures Spinoza adopts. This yields a close, internalist reading of the text where Spinoza effectively embraces substance monism. In conclusion to this study, I underscore the originality of Spinoza’s argument relative to seventeenth-century theories of substance.
I discuss the general form of arguments that profess to prove that the view that things endure in tensed time through causally produced change (the dynamic view) must be false because it involves contradictions. I argue that these arguments implicitly presuppose what has been called the temporal parity thesis, i.e. that all moments of time are equally existent and real, and that this thesis must be understood as the denial of the dynamic view. When this implicit premise is made explicit, the arguments turn out to be either circular (presupposing what they profess to prove) or mere demonstrations of the fact that the dynamic view is incompatible with its own negation. Furthermore, I discuss the metaphysical consequences of accepting the temporal parity thesis, arguing that it deprives us of the means to provide natural explanations of empirical phenomena.
An argumentation profile is defined as a methodological instrument for analyzing argumentative discourse along distinct and interrelated dimensions: the types of argument used, their quality, and the emotions triggered. Walton’s theoretical contributions are developed as a coherent analytical and multifaceted toolbox for capturing these aspects. Argumentation schemes are used to detect and quantify the types of argument. Fallacy analysis and the assessment of the implicit premises retrieved through the schemes allow the arguments to be evaluated. Finally, the frequency of emotive words signals the most common emotions aroused. This method is illustrated through a corpus of argumentative tweets of three politicians.
In the tradition stemming from Aristotle through Aquinas, rational decision making is seen as a complex structure of distinct phases in which reasoning and will are interconnected. Intention, deliberation, and decision are regarded as the fundamental steps of the decision-making process, in which an end is chosen, the means are specified, and a decision to act is made. Based on this Aristotelian theoretical background, we show how the decision-making process can be modeled as a net of several patterns of reasoning, involving the classification of an action or state of affairs, its evaluation, the deliberation about the means to carry it out, and the decision. It is shown how argumentation theory can contribute to our understanding of the mechanisms involved by formalizing the steps of reasoning using argumentation schemes, and setting out the value-based criteria underlying the evaluation of an action. Representing each phase of the decision-making process as a separate scheme allows one to identify implicit premises and bring the roots of ethical dilemmas to light along with the means to resolve them. In particular, we will show the role of framing and classification in triggering value-based reasoning, and how argumentation theory can be used to represent and uproot the grounds of possible manipulations.
Thomas Reid is often misread as defending common sense, if at all, only by relying on illicit premises about God or our natural faculties. On these theological or reliabilist misreadings, Reid makes common sense assertions where he cannot give arguments. This paper attempts to untangle Reid's defense of common sense by distinguishing four arguments: (a) the argument from madness, (b) the argument from natural faculties, (c) the argument from impotence, and (d) the argument from practical commitment. Of these, (a) and (c) do rely on problematic premises that are no more secure than claims of common sense itself. Yet (b) and (d) do not. This conclusion can be established directly by considering the arguments informally, but one might still worry that there is an implicit premise in them. In order to address this concern, I reconstruct the arguments in the framework of subjective Bayesianism. The worry becomes this: Do the arguments rely on specific values for the prior probability of some premises? Reid's appeals to our prior cognitive and practical commitments do not. Rather than relying on specific probability assignments, they draw on things that are part of the Bayesian framework itself, such as the nature of observation and the connection between belief and action. Contra the theological or reliabilist readings, the defense of common sense does not require indefensible premises.
Joanna Mary Firth and Jonathan Quong argue that both an instrumental account of liability to defensive harm, according to which an aggressor can only be liable to defensive harms that are necessary to avert the threat he poses, and a purely noninstrumental account which completely jettisons the necessity condition, lead to very counterintuitive implications. To remedy this situation, they offer a “pluralist” account and base it on a distinction between “agency rights” and a “humanitarian right.” I argue, first, that this distinction is spurious; second, that the conclusions they draw from this distinction do not cohere with its premises; third, that even if one granted the distinction, Firth’s and Quong’s implicit premise that you can forfeit your agency rights but not your “humanitarian right” is unwarranted; fourth, that their attempt to mitigate the counterintuitive implications of their own account in the Rape case relies on mistaken ad-hoc assumptions; fifth, that even if they were successful in somewhat mitigating said counterintuitive implications, they would still not be able to entirely avoid them; and sixth, that even in the unlikely case that none of these previous five critical points are correct, Firth and Quong still fail to establish that aggressors can be liable to unnecessary defensive harm since they fail to establish that unnecessary harm can ever be defensive in the first place.
Agent relativists about vagueness (henceforth ‘agent relativists’) hold that whether or not an object x falls in the extension of a vague predicate ‘P’ at a time t depends on the judgemental dispositions of a particular competent agent at t. My aim in this paper is to critically examine arguments that purport to support agent relativism by appealing to data from forced-march Sorites experiments. The simplest and most direct versions of such forced-march Sorites arguments rest on the following (implicit) premise: If competent speakers' judgements vary in a certain way, then the extensions of ‘P’ as used by these speakers must vary in the same way. This premise is in need of independent support, since otherwise opponents of agent relativism can simply reject it. In this paper, I focus on the idea that one cannot plausibly reject this premise, as that would commit one to implausible claims about linguistic competence. Against this, I argue that one can accommodate the data from forced-march Sorites experiments in a way that is compatible with a plausible picture of linguistic competence, without going agent relativist. Thus, there is reason to be sceptical of the idea that such data paired with considerations about linguistic competence can be invoked in order to lend any solid support to agent relativism. Forced-march Sorites arguments of this kind can, and should be, resisted.
We argue that common knowledge, of the kind used in reasoning in law and computing, is best analyzed using a dialogue model of argumentation (Walton & Krabbe 1995). In this model, implicit premises resting on common knowledge are analyzed as endoxa or widely accepted opinions and generalizations (Tardini 2005). We argue that, in this sense, common knowledge is not really knowledge of the kind represented by belief and/or knowledge of the epistemic kind studied in current epistemology. This paper takes a different approach, defining it in relation to a common commitment store of two participants in a rule-governed dialogue in which two parties engage in rational argumentation (Jackson & Jacobs 1980; van Eemeren & Grootendorst 2004). A theme of the paper is how arguments containing common knowledge premises can be studied with the help of argumentation schemes for arguments from generally accepted opinion and expert opinion. It is argued that common knowledge is a species of provisional acceptance of a premise that is not in dispute at a given point in a dialogue, but may later be defeated as the discussion proceeds.
Conventional wisdom dictates that proofs of mathematical propositions should be treated as necessary, and sufficient, for entailing `significant' mathematical truths only if the proofs are expressed in a---minimally, deemed consistent---formal mathematical theory in terms of axioms/axiom schemas, rules of deduction, definitions, lemmas, theorems, and corollaries. Whilst Andrew Wiles' proof of Fermat's Last Theorem (FLT), which appeals essentially to geometrical properties of real and complex numbers, can be treated as meeting these criteria, it nevertheless leaves two questions unanswered: (i) Why is x^n + y^n = z^n solvable only for n < 3 if x, y, z, n are natural numbers? (ii) What technique might Fermat have used that led him to, even if only briefly, believe he had `a truly marvellous demonstration' of FLT? Prevailing post-Wiles wisdom---leaving (i) essentially unaddressed---dismisses Fermat's claim as a conjecture without a plausible proof of FLT. However, we posit that providing evidence-based answers to both queries is necessary not only for treating FLT as significant, but also for understanding why FLT can be treated as a true arithmetical proposition. We thus argue that proving a theorem formally from explicit, and implicit, premises/axioms using rules of deduction---as currently accepted---is a meaningless game, of little scientific value, in the absence of evidence that has already established---unambiguously---why the premises/axioms and rules of deduction can be treated, and categorically communicated, as pre-formal truths in Marcus Pantsar's sense. Consequently, only evidence-based, pre-formal, truth can entail formal provability; and the formal proof of any significant mathematical theorem cannot entail its pre-formal truth as evidence-based. It can only identify the explicit/implicit premises that have been used to evidence the, already established, pre-formal truth of a mathematical proposition. Hence visualising and understanding the evidence-based, pre-formal, truth of a mathematical proposition is the only raison d'être for subsequently seeking a formal proof of the proposition within a formal mathematical language (whether first-order or second-order set theory, arithmetic, geometry, etc.). By this yardstick Andrew Wiles' proof of FLT fails to meet the required, evidence-based, criteria for entailing a true arithmetical proposition. Moreover, we offer two scenarios as to why/how Fermat could have laconically concluded in his recorded marginal noting that FLT is a true arithmetical proposition---even though he either did not (or could not to his own satisfaction) succeed in cogently evidencing, and recording, why FLT can be treated as an evidence-based, pre-formal, arithmetical truth (presumably without appeal to properties of real and complex numbers). It is primarily such a putative, unrecorded, evidence-based reasoning underlying Fermat's laconic assertion which this investigation seeks to reconstruct; and to justify by appeal to a plausible resolution of some philosophical ambiguities concerning the relation between evidence-based, pre-formal, truth and formal provability.
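For orientation, the proposition at stake in question (i) above is the standard statement of Fermat's Last Theorem (the formalization below is mine, added for reference): there are no positive integers \(x, y, z\) satisfying
\[
x^{n} + y^{n} = z^{n}
\]
for any integer \(n > 2\); equivalently, the equation is solvable in the natural numbers only for \(n < 3\), which is the formulation the abstract uses.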
Since the time of Aristotle's students, interpreters have considered Prior Analytics to be a treatise about deductive reasoning, more generally, about methods of determining the validity and invalidity of premise-conclusion arguments. People studied Prior Analytics in order to learn more about deductive reasoning and to improve their own reasoning skills. These interpreters understood Aristotle to be focusing on two epistemic processes: first, the process of establishing knowledge that a conclusion follows necessarily from a set of premises (that is, on the epistemic process of extracting information implicit in explicitly given information) and, second, the process of establishing knowledge that a conclusion does not follow. Despite the overwhelming tendency to interpret the syllogistic as formal epistemology, it was not until the early 1970s that it occurred to anyone to think that Aristotle may have developed a theory of deductive reasoning with a well worked-out system of deductions comparable in rigor and precision with systems such as propositional logic or equational logic familiar from mathematical logic. When modern logicians in the 1920s and 1930s first turned their attention to the problem of understanding Aristotle's contribution to logic in modern terms, they were guided both by the Frege-Russell conception of logic as formal ontology and at the same time by a desire to protect Aristotle from possible charges of psychologism. They thought they saw Aristotle applying the informal axiomatic method to formal ontology, not as making the first steps into formal epistemology. They did not notice Aristotle's description of deductive reasoning. Ironically, the formal axiomatic method (in which one explicitly presents not merely the substantive axioms but also the deductive processes used to derive theorems from the axioms) is incipient in Aristotle's presentation. Partly in opposition to the axiomatic, ontically-oriented approach to Aristotle's logic and partly as a result of attempting to increase the degree of fit between interpretation and text, logicians in the 1970s working independently came to remarkably similar conclusions to the effect that Aristotle indeed had produced the first system of formal deductions. They concluded that Aristotle had analyzed the process of deduction and that his achievement included a semantically complete system of natural deductions including both direct and indirect deductions. Where the interpretations of the 1920s and 1930s attribute to Aristotle a system of propositions organized deductively, the interpretations of the 1970s attribute to Aristotle a system of deductions, or extended deductive discourses, organized epistemically. The logicians of the 1920s and 1930s take Aristotle to be deducing laws of logic from axiomatic origins; the logicians of the 1970s take Aristotle to be describing the process of deduction and in particular to be describing deductions themselves, both those deductions that are proofs based on axiomatic premises and those deductions that, though deductively cogent, do not establish the truth of the conclusion but only that the conclusion is implied by the premise-set. Thus, two very different and opposed interpretations had emerged, interestingly both products of modern logicians equipped with the theoretical apparatus of mathematical logic. The issue at stake between these two interpretations is the historical question of Aristotle's place in the history of logic and of his orientation in philosophy of logic.
This paper affirms Aristotle's place as the founder of logic taken as formal epistemology, including the study of deductive reasoning. A by-product of this study of Aristotle's accomplishments in logic is a clarification of a distinction implicit in discourses among logicians--that between logic as formal ontology and logic as formal epistemology.
Constructivism is frequently met with objections and criticism, and is often equated with nihilism or relativism. It is sometimes even blamed for what some would randomly picture as unwanted side effects of radicalism or of a progressivist era: such misconceptions are not only due to an imprecise grasp of the premises shared by the constructivist family of systems. The structure of media, political systems, and economic models still today impels societal understandings of knowledge on neo-positivistic grounds. The first part of this essay outlines such pressures while sketching how language and worldviews play critical roles in our knowledge construction. Focusing on recent mediatic events, this work advances by displaying some essential limits regarding the construction of human knowledge. Though unavoidable, some of the distinguishing aspects regarding the nature of our narratives are then critically reviewed. Later, it is shown how a special kind of self-denial that certain sub-stories implicitly hold about their own narrative nature leaves us with clashing worldviews that eventually collide into crisis. Finally, it’ll be argued that it’s precisely in this scenario where a constructivist depiction of social discourses may move us away from any adolescent intent of elucidating absolutes within mere heuristics, to the pragmatic need of arriving at satisfactory agreements between parties (DOI: 10.4236/ojpp.2022).
In their book EVALUATING CRITICAL THINKING, Stephen Norris and Robert Ennis say: “Although it is tempting to think that certain [unstated] assumptions are logically necessary for an argument or position, they are not. So do not ask for them.” Numerous writers of introductory logic texts as well as various highly visible standardized tests (e.g., the LSAT and GRE) presume that the Norris/Ennis view is wrong; the presumption is that many arguments have (unstated) necessary assumptions and that readers and test takers can reasonably be expected to identify such assumptions. This paper proposes and defends criteria for determining necessary assumptions of arguments. Both theoretical and empirical considerations are brought to bear.
As rational agents, we are governed by reasons. The fact that there’s beer at the pub might be a reason to go there and a reason to believe you’ll enjoy it. As this example illustrates, there are reasons both for action and for belief. There are also many other responses for which there seem to be reasons – for example, desire, regret, admiration, and blame. This diversity raises questions about how reasons for different responses relate to each other. Might certain such reasons be more fundamental than others? Should certain reasons and not others be treated as paradigmatic? At least implicitly, many philosophers treat reasons for action as the fundamental or paradigmatic case. In contrast, this paper articulates and defends an alternative approach, on which reasons for attitudes are fundamental, and reasons for action are both derivative and, in certain ways, idiosyncratic. After outlining this approach, we focus on defending its most contentious thesis, that reasons for action are fundamentally reasons for intention. We offer two arguments for this thesis, which turn on central roles of reasons: that reasons can be responded to, and that reasons can feature as premises of good reasoning. We then examine objections to the thesis and argue that none succeed. We conclude by sketching some ways in which our approach is significant for theorising about reasons.
Proponents of vaccine mandates typically claim that everyone who can be vaccinated has a moral or ethical obligation to do so for the sake of those who cannot be vaccinated, or in the interest of public health. I evaluate several previously undertheorised premises implicit in the ‘obligation to vaccinate’ type of arguments and show that the general conclusion is false: there is neither a moral obligation to vaccinate nor a sound ethical basis to mandate vaccination under any circumstances, even for hypothetical vaccines that are medically risk-free. Agent autonomy with respect to self-constitution has absolute normative priority over reduction or elimination of the associated risks to life. In practical terms, mandatory vaccination amounts to discrimination against healthy, innate biological characteristics, which goes against the established ethical norms and is also defeasible a priori.
This paper defends a distinctly liberal approach to public health ethics and replies to possible objections. In particular, I look at a set of recent proposals aiming to revise and expand liberalism in light of public health's rationale and epidemiological findings. I argue that they fail to provide a sociologically informed version of liberalism. Instead, they rest on an implicit normative premise about the value of health, which I show to be invalid. I then make explicit the unobvious, republican background of these proposals. Finally, I expand on the liberal understanding of freedom as non-interference and show its advantages over the republican alternative of freedom as non-domination within the context of public health. The views of freedom I discuss in the paper do not overlap with the classical distinction between negative and positive freedom. In addition, my account differentiates the concepts of freedom and autonomy and does not rule out substantive accounts of the latter. Nor does it confine political liberalism to an essentially procedural form.
Some have suggested that images can be arguments. Images can certainly bolster the acceptability of individual premises. We worry, though, that the static nature of images prevents them from ever playing a genuinely argumentative role. To show this, we call attention to a dilemma. The conclusion of a visual argument will either be explicit or implicit. If a visual argument includes its conclusion, then that conclusion must be demarcated from the premises, or otherwise the argument will beg the question. If a visual argument does not include its conclusion, then the premises on display must license that specific conclusion and not its opposite, in accordance with some demonstrable rationale. We show how major examples from the literature fail to escape this dilemma. Drawing inspiration from the graphical logic of C. S. Peirce, we suggest instead that images can be manipulated in a way that overcomes the dilemma. Diagrammatic reasoning can take one stepwise from an initial visual layout to a conclusion—thereby providing a principled rationale that bars opposite conclusions—and the visual inscription of this correct conclusion can come afterward in time—thereby distinguishing the conclusion from the premises. Even though this practical application of Peirce’s logical ideas to informal contexts requires that one make adjustments, we believe it points to a dynamic conception of visual argumentation that will prove more fertile in the long run.
"Procedural Justice" offers a theory of procedural fairness for civil dispute resolution. The core idea behind the theory is the procedural legitimacy thesis: participation rights are essential for the legitimacy of adjudicatory procedures. The theory yields two principles of procedural justice: the accuracy principle and the participation principle. The two principles require a system of procedure to aim at accuracy and to afford reasonable rights of participation qualified by a practicability constraint. The Article begins in Part I, Introduction, with two (...) observations. First, the function of procedure is to particularize general substantive norms so that they can guide action. Second, the hard problem of procedural justice corresponds to the following question: How can we regard ourselves as obligated by legitimate authority to comply with a judgment that we believe (or even know) to be in error with respect to the substantive merits? The theory of procedural justice is developed in several stages, beginning with some preliminary questions and problems. The first question - what is procedure? - is the most difficult and requires an extensive answer: Part II, Substance and Procedure, defines the subject of the inquiry by offering a new theory of the distinction between substance and procedure that acknowledges the entanglement of the action-guiding roles of substantive and procedural rules while preserving the distinction between two ideal types of rules. The key to the development of this account of the nature of procedure is a thought experiment, in which we imagine a world with the maximum possible acoustic separation between substance and procedure. Part III, The Foundations of Procedural Justice, lays out the premises of general jurisprudence that ground the theory and answers a series of objections to the notion that the search for a theory of procedural justice is a worthwhile enterprise. Sections II and III set the stage for the more difficult work of constructing a theory of procedural legitimacy. Part IV, Views of Procedural Justice, investigates the theories of procedural fairness found explicitly or implicitly in case law and commentary. After a preliminary inquiry that distinguishes procedural justice from other forms of justice, Part IV focuses on three models or theories. The first, the accuracy model, assumes that the aim of civil dispute resolution is correct application of the law to the facts. The second, the balancing model, assumes that the aim of civil procedure is to strike a fair balance between the costs and benefits of adjudication. The third, the participation model, assumes that the very idea of a correct outcome must be understood as a function of process that guarantees fair and equal participation. Part IV demonstrates that none of these models provides the basis for a fully adequate theory of procedural justice. In Part V, The Value of Participation, the lessons learned from analysis and critique of the three models are then applied to the question whether a right of participation can be justified for reasons that are not reducible to either its effect on the accuracy or its effect on the cost of adjudication. The most important result of Part V is the Participatory Legitimacy Thesis: it is (usually) a condition for the fairness of a procedure that those who are to be finally bound shall have a reasonable opportunity to participate in the proceedings. 
The central normative thrust of Procedural Justice is developed in Part VI, Principles of Procedural Justice. The first principle, the Participation Principle, stipulates a minimum (and minimal) right of participation, in the form of notice and an opportunity to be heard, that must be satisfied (if feasible) in order for a procedure to be considered fair. The second principle, the Accuracy Principle, specifies the achievement of legally correct outcomes as the criterion for measuring procedural fairness, subject to four provisos, each of which sets out circumstances under which a departure from the goal of accuracy is justified by procedural fairness itself. In Part VII, The Problem of Aggregation, the Participation Principle and the Accuracy Principle are applied to the central problem of contemporary civil procedure - the aggregation of claims in mass litigation. Part VIII offers some concluding observations about the point and significance of Procedural Justice.
Does the moral badness of pain depend on who feels it? A common, though generally only implicitly stated, view is that it does not. This view, ‘unitarianism’, maintains that the same interests of different beings should count equally in our moral calculus. Shelly Kagan’s project in How to Count Animals, more or less is to reject this common view, and develop an alternative to it: a hierarchical view of moral status, on which the badness of pain does depend on who feels it. In this review essay, we critically examine Kagan’s argument for status hierarchy. In particular, we reject two of the central premises in his argument: that moral standing is ultimately grounded in agency and that unitarianism is overdemanding. We conclude that moral status may, despite Kagan’s compelling argument to the contrary, not be hierarchical.
Many philosophers and non-philosophers who reflect on the causal antecedents of human action get the impression that no agent can have morally relevant freedom. Call this the ‘non-existence impression.’ The paper aims to understand the (often implicit) reasoning underlying this impression. On the most popular reconstructions, the reasoning relies on the assumption that either an action is the outcome of a chance process, or it is determined by factors that are beyond the agent’s control or which she did not bring about. I argue that arguments based on this premise fail to apply to some possible agents for whom the non-existence impression arises. On the alternative reconstruction I offer, the impression rests on the assumption that free will requires being involved in the ultimate explanation of one’s actions in a novel sense in which nothing can be involved in the ultimate explanation of anything.
Russell’s theory of acquaintance construes perceptual awareness as at once constitutively independent of conceptual thought and yet a source of propositional knowledge. Wilfrid Sellars, John McDowell, and other conceptualists object that this is a ‘myth’: perception can be a source of knowledge only if conceptual capacities are already in play therein. Proponents of a relational view of experience, including John Campbell, meanwhile voice sympathy for Russell’s position on this point. This paper seeks to spell out, and defend, a claim that offers the prospect of an attractive, unacknowledged element of common ground in this debate. The claim is that conceptual capacities, at least in a certain minimal sense implicit in McDowell’s recent work, must be operative in perceptual experience, if it is to rationalize judgement. The claim will be supported on the basis of two premises, each of which can be defended drawing, inter alia, on considerations stressed by Campbell. First, that experience rationalizes judgement only if it is attentive. Second, that attention qualifies as a conceptual capacity, in the noted, minimal sense. The conjunction of the two premises might be dubbed ‘attentional conceptualism’.
Interpreters of Kant’s Refutation of Idealism face a dilemma: it seems to either beg the question against the Cartesian sceptic or else offer a disappointingly Berkeleyan conclusion. In this article I offer an interpretation of the Refutation on which it does not beg the question against the Cartesian sceptic. After defending a principle about question-begging, I identify four premises concerning our representations that there are textual reasons to think Kant might be implicitly assuming. Using those assumptions, I offer a reconstruction of Kant’s Refutation that avoids the interpretative dilemma, though difficult questions about the argument remain.
In a recent paper, Jonathan Quong tries to offer further support for “the proposition that there are sometimes agent-relative prerogatives to harm nonliable persons.” In this brief paper, I will demonstrate that Quong’s argument implicitly relies on the premise that the violinist in Thomson’s famous example has a right not to be unplugged. Yet, first, Quong provides no argument in support of this premise; and second, the premise is clearly wrong. Moreover, throughout his paper Quong just question-beggingly and without argument assumes that one cannot lose rights in other ways than by one’s own responsible action. I conclude that Quong has failed to provide further support for his thesis.
This paper aims to exemplify the language acquisition model by tracing it back to the Socratic model of language learning, which posits inborn knowledge, a kind of implicit knowledge that becomes explicit in our language. Jotting down the claims in Meno, Plato triggers a representationalist outline based on deductive reasoning, where the conclusion follows from the premises (inborn knowledge) rather than from experience. This revolution comes from the pen of Noam Chomsky, who amends the empiricist position on the creativity of language by pinning it down with the innateness hypothesis. However, Chomsky never rejects the external world or the linguistic stipulation that relies on the objective reality. Wittgenstein’s model of language acquisition upholds a liaison-centric appeal that stands between experience (the use theory of meaning) and mentalism (mind-based inner experiences). Wittgenstein’s Tractatus never demarcates the definite mental processes that entangle with the method of understanding and meaning. Wittgenstein’s ‘language game’ takes care of the model of language acquisition in a paradigmatic way: it portrays language as a form of life, and the process of language acquisition is nothing but a language game that relies on the activity of men.
Hume and Quine argue that human beings do not have access to general knowledge, that is, to general truths. The arguments of these two philosophers are premised on what Jaakko Hintikka has called the atomistic postulate. In the present work, it is shown that Hume and Quine in fact sanction an extreme version of this postulate, according to which even items of particular knowledge are not directly accessible in so far as they are relational. For according to their fully realized systems, human beings do not initially perceive any relations, or similar epistemological elements that can associate or combine terms on which a relational or general knowledge claim may be based. Nor, likewise, do human beings perceive the relations or the associations themselves as separate entities. In Chapters 1 and 2, respectively, it is shown precisely why Hume and Quine deny that human beings initially perceive either such associative elements or associations in general. Concomitantly, it is made clear why Hume's and Quine's respective epistemologies preclude human beings from initially apprehending not only general knowledge, but particular relational knowledge as well. But this is not to say that Hume and Quine do not think we can eventually acquire such associative elements and, correspondingly, knowledge. Rather, Hume and Quine do provide an account of knowledge, but one that holds all relational and connective elements to be constructed by the human mind. In Hume's case, they are constructed by the imagination. In Quine's case, we are never told quite how this construction occurs, although the evidence suggests that Quine implicitly relies on a faculty similar to Hume's imagination. In the final chapter of this thesis, it is argued that both Hume and Quine must be read as philosophers who justify knowledge by reducing its possibility to a psychological faculty of construction, as well as to a few concepts of intuitively grasped relations. By way of conclusion, it is shown that this makes Quine's naturalism the psychological heir to Carnap's Aufbau.
I argue that no successful version of Williamson's anti-luminosity argument has yet been presented, even if Srinivasan's further elaboration and defence is taken into account. There is a version invoking a coarse-grained safety condition and one invoking a fine-grained safety condition. A crucial step in the former version implicitly relies on the false premise that sufficient similarity is transitive. I show that some natural attempts to resolve this issue fail. Similar problems arise for the fine-grained version. Moreover, I argue that Srinivasan's defence of the more contentious fine-grained safety condition is also unsuccessful, again for similar reasons.
In his Quadratura, Paul of Venice considers a sophism involving time and tense which appears to show that there is a valid inference which is also invalid. His argument runs as follows: consider this inference concerning some proposition A: A will signify only that everything true will be false, so A will be false. Call this inference B. Then B is valid because the opposite of its conclusion is incompatible with its premise. In accordance with the standard doctrine of ampliation, Paul takes A to be equivalent to 'Everything that is or will be true will be false'. But he proceeds to argue that it is possible that B's premise ('A will signify only that everything true will be false') could be true and its conclusion false, so B is not only valid but also invalid. Thus A and B are the basis of an insoluble. In his Logica Parva, a self-confessedly elementary text aimed at students and not necessarily representing his own view, and in the Quadratura, Paul follows the solution found in the Logica Oxoniensis, which posits an implicit assertion of its own truth in insolubles like B. However, in the treatise on insolubles in his Logica Magna, Paul develops and endorses Swyneshed's solution, which stood out against this 'multiple-meanings' approach in offering a solution that took insolubles at face value, meaning no more than is explicit in what they say. On this account, insolubles imply their own falsity, and that is why, in so falsifying themselves, they are false. We consider how both types of solution apply to B and how they complement each other. On both, B is valid. But on one (following Swyneshed), B has true premises and false conclusion, and contradictories can be false together; on the other (following the Logica Oxoniensis), the counterexample is rejected.
This article’s main focus (Part 1 of a series) is on a logical analysis of Appiah’s main argument that Alexander Crummell was a racist. However, to show that the fraudulent scholarship implicit in the nature of the argument is specifically racist, we need to put his claimed logical argument in the context of other claims that Appiah makes, in order to show that there is a pattern of fraudulent scholarship with a specifically anti-African, anti-Afro-American animus. A ‘prelude’ will cover these points, and then ‘Appiah’s logical analysis of racism’ will look closely at the logic of his claims. Here the issue is that a logical review simply asks whether the premises can justify the conclusion and whether each step of an argument follows from the earlier ones in a formally sound manner. Such an analysis abstracts from the issue of whether the steps are reasonable or factually correct.
Within his overarching program aiming to defend an epistemic conception of analyticity, Boghossian (1996, 1997) has offered a clear-cut explanation of how we can acquire a priori knowledge of logical truths and logical rules through implicit definition. The explanation is based on a special template or general form of argument. Ebert (2005) has argued that an enhanced version of this template is flawed because a segment of it is unable to transmit warrant from its premises to the conclusion. This article aims to defend the template from this objection. We provide an accurate description of the type of non-transmissivity that Ebert attributes to the template and clarify why this is a novel type of non-transmissivity. Then, we argue that Jenkins's (2008) response to Ebert fails because it focuses on doxastic rather than propositional warrant. Finally, we rebut Ebert’s objection on Boghossian’s behalf by showing that it rests on an unwarranted assumption and is internally incoherent.
George Pattison’s Heidegger on Death aims at critically assessing Heidegger’s analysis of death included in his magnum opus Being and Time. Given the peculiar status of Heidegger’s analysis, tightly interwoven into a complex argumentative narrative touching on an array of foundational issues in philosophy, Pattison must first of all spell out for his reader Heidegger’s overall project in BT and show how Heidegger’s analysis of death fits in it. As the author makes clear, HD isn't meant to be a piece of Heidegger scholarship but rather ‘… an essay about death that uses Heidegger … as a way of thinking about the question of death in a Christian and theological perspective’. This self-imposed task places a second burden on Pattison, i.e., to draw on theological premises to examine Heidegger’s analysis of death and find it ultimately wanting. An implicit third burden, which the author only occasionally seems to intend to meet, is to state in exactly what sense …
Some believe that all arguments make an implicit “inference claim” that the conclusion is inferable from the premises (e.g., Bermejo-Luque, Grennan, the Groarkes, Hitchcock, Scriven). I try to show that this is confused. An act of arguing arises because an inference can be attributed to us, not a meta-level “inference claim” that would make the argument self-referential and regressive. I develop six (other) possible explanations of the popularity of the doctrine that similarly identify confusions.
A Mathematical Review by John Corcoran, SUNY/Buffalo. Macbeth, Danielle. Diagrammatic reasoning in Frege's Begriffsschrift. Synthese 186 (2012), no. 1, 289–314. ABSTRACT: This review begins with two quotations from the paper: its abstract and the first paragraph of the conclusion. The point of the quotations is to make clear by the “give-them-enough-rope” strategy how murky, incompetent, and badly written the paper is. I know I am asking a lot, but I have to ask you to read the quoted passages—aloud if possible. Don’t miss the silly attempt to recycle Kant’s quip “Concepts without intuitions are empty; intuitions without concepts are blind”. What the paper was aiming at includes the absurdity: “Proofs without definitions are empty; definitions without proofs are, if not blind, then dumb.” But the author even bollixed this. The editor didn’t even notice. The copy-editor missed it. And the author’s proof-reading did not catch it. In order not to torment you I will quote the sentence as it appears: “In a slogan: proofs without definitions are empty, merely the aimless manipulation of signs according to rules; and definitions without proofs are, if no blind, then dumb.”[sic] The rest of my review discusses the paper’s astounding misattribution to contemporary logicians of the information-theoretic approach. This approach was cruelly trashed by Quine in his 1970 Philosophy of Logic, and thereafter ignored by every text I know of. The paper under review attributes generally to modern philosophers and logicians views that were never espoused by any of the prominent logicians—such as Hilbert, Gödel, Tarski, Church, and Quine—apparently in an attempt to distance them from Frege: the focus of the article. On page 310 we find the following paragraph. “In our logics it is assumed that inference potential is given by truth-conditions. Hence, we think, deduction can be nothing more than a matter of making explicit information that is already contained in one’s premises. If the deduction is valid then the information contained in the conclusion must be contained already in the premises; if that information is not contained already in the premises […], then the argument cannot be valid.” Although the paper is meticulous in citing supporting literature for less questionable points, no references are given for this. In fact, the view that deduction is the making explicit of information that is only implicit in premises has not been espoused by any standard symbolic logic books. It has only recently been articulated by a small number of philosophical logicians from a younger generation, for example, in the prize-winning essay by J. Sagüillo, Methodological practice and complementary concepts of logical consequence: Tarski’s model-theoretic consequence and Corcoran’s information-theoretic consequence, History and Philosophy of Logic, 30 (2009), pp. 21–48. The paper omits definitions of key terms including ‘ampliative’, ‘explicatory’, ‘inference potential’, ‘truth-condition’, and ‘information’. The definition of prime number on page 292 is as follows: “To say that a number is prime is to say that it is not divisible without remainder by another number”. This would make one the only prime number. The paper being reviewed had the benefit of two anonymous referees who contributed “very helpful comments on an earlier draft”. Could these anonymous referees have read the paper? J. Corcoran, U of Buffalo, SUNY. PS: By the way, if anyone has a paper that has been turned down by other journals, any journal that would publish something like this might be worth trying.
In Part I of this essay I take a canonical case of political theology, Schmitt’s theory of sovereignty (1985; 1922), and show how Agamben derives his account of sovereignty from an interpretation of Schmitt that relies on the interesting theological premise of an atemporal act or decision, one that is traditionally attributed to god’s act of creation, and that is only ambiguously secularized in the transcendental moment of German Idealism. In Part II I show how this reading of Schmitt can be used to avoid a certain kind of negative political theology associated with deconstruction, because Agamben’s reading of Schmitt explains the emergence of certain specific temporal structures associated with the sovereign political decision: the sovereign political decision cannot be represented as having a beginning, and hence recedes phenomenologically into a kind of a priori past; and the sovereign decision cannot be represented as completed, and hence it is experienced as a ‘perpetual expenditure of energy’ that lacks comprehensible relation to a goal. In Part III I defend Agamben’s interpretation of sovereignty as a transcendental act from Negri’s objection that Agamben simply equates without argument Negri’s radically democratic conception of revolutionary constituent power with Schmitt’s conception of sovereignty (1999, p. 13). My defense relies on identifying Agamben’s ‘paradox of sovereignty’ (Agamben 1998, pp. 15ff.) with a ‘paradox of democracy’ (Mouffe 2000; Whelan 1983). In Part IV I realize a corollary of the identification of the two paradoxes, of sovereignty and democracy: that political borders are the spatial site of the application of the act of political sovereignty, and possess a kind of transcendental spatiality akin to the special temporality associated with sovereignty. I apply this understanding to the privileged special case of the US-Mexico border: the structures implicit in Agamben’s analysis explain some crucial features of this case of walling: its manifest failure to achieve, even in principle, the purpose for which it is allegedly intended; the failure of democratic polity to address those affected by the wall; the appeal to sovereign powers in the legal legitimation of border policy. I defend Agamben’s analysis against other apparently competing views, especially those of Wendy Brown (2010), and argue that the transcendental act of sovereignty comprises a kind of primary political repression that opens up the space for ideological understandings of the wall, but does not itself comprise one. In Part V I address the question whether Agamben’s derived category of ‘bare life’ can also be used in the context of the border, arguing that it can. I conclude with some critical remarks about the limits of Agamben’s view.
Crispin Wright tried to refute classical 'Cartesian' skepticism, contending that its core argument is extendible to a reductio ad absurdum (Mind, 100, 87–116, 1991). We show both that Wright is mistaken and that his mistakes are philosophically illuminating. Wright's 'best version' of skepticism turns on a concept of warranted belief. By his definition, many of our well-founded beliefs about the external world and mathematics would not be warranted. Wright's position worsens if we take 'warranted belief' to be implicitly defined by (...) the general principles governing it. Those principles are inconsistent, as shown by a variant of Gödel's argument. Thus the inconsistency Wright found has nothing to do with the special premises of Cartesian skepticism, but is embedded in his own conceptual apparatus. Lastly, we show how a Cartesian skeptic could avoid Wright's critique by reconstructing a skeptical argument that does not use the claims Wright ultimately finds objectionable. (shrink)
Reply to Child. Tim Crane - 1997 - Proceedings of the Aristotelian Society 97 (1): 103-108.
In ‘The Mental Causation Debate’ (1995), I pointed out the parallel between the premises in some traditional arguments for physicalism and the assumptions which give rise to the problem of mental causation. I argued that the dominant contemporary version of physicalism finds mental causation problematic because it accepts the main premises of the traditional arguments, but rejects their conclusion: the identification of mental with physical causes. Moreover, the orthodox way of responding to this problem (which I call the (...) ‘constitution view’) implicitly rejects an assumption hidden in the original argument for physicalism: the assumption that mental and physical causation are the same kind of relation (‘homogeneity’). The conclusion of my paper was that if you reject homogeneity, then there is no obvious need for an account of the relation between mental and physical properties. (shrink)
Introductory and advanced textbooks in bioethics focus almost entirely on issues that disproportionately affect disabled people and that centrally deal with becoming or being disabled. However, such textbooks typically omit critical philosophical reflection on disability, lack engagement with decades of empirical and theoretical scholarship spanning the social sciences and humanities in the multidisciplinary field of disability studies, and avoid serious consideration of the history of disability activism in shaping social, legal, political, and medical understandings of disability over the last fifty (...) years. For example, longstanding discussions on topics such as euthanasia, physician aid-in-dying, pre-implantation genetic diagnosis, prenatal testing, selective abortion, enhancement, patient autonomy, beneficence, non-maleficence, and health care rationing all tend to be premised on shared and implicit assumptions regarding disability, especially in relation to quality of life, yet with too little recognition of the way that “disability” is itself a topic of substantial research and scholarly disagreement across multiple fields. This is not merely a concern for academic and medical education; as an applied field tied to one of the largest economic sectors of most industrialized nations, bioethics has a direct impact on healthcare education, practice, policy, and, thereby, the health outcomes of existing and future populations. It is in light of these pressing issues that the Disability Bioethics Reader is the first reader to introduce students to core bioethical issues and concepts through the lens of critical disability studies and philosophy of disability. The Disability Bioethics Reader will include over thirty-five chapters covering key areas such as: critical histories and state-of-the-field analyses of modern medicine, bioethics, disability studies, and philosophy of medicine; methods in bioethics; concerns at the edge- and end-of-life; enhancement; disability, quality of life, and well-being; prenatal testing and abortion; invisible disabilities; chronic illness; healthcare justice; genetics and genomics; intellectual disability and neurodiversity; ethics and diagnosis; and epistemic injustice in healthcare. (shrink)
The paper describes the solution to semantic paradoxes pioneered by Pavel Tichý and further developed by the present author. Its main feature is an examination (and then refutation) of the hidden premise of paradoxes that the paradox-producing expression really means what it seems to mean. Semantic concepts are explicated as relative to a language, and thus language itself is explicated as well. The so-called ‘explicit approach’ easily treats paradoxes in which language is explicitly referred to. The residual paradoxes are solved by the ‘implicit (...) approach’, which employs ideas made explicit by the former one. (shrink)
The problem of evil is set out as a dialectic between theist and critic, the aim being to reveal the place of ethical judgments in the theist's apology. I discover what ethical judgments, both normative and descriptive, are implicit in the theist's use of his premises as good reasons, and where his reasoning goes astray. I suggest what ethical judgments, in contrast to the theist's, are supported by good reasons.
This is a thesis in support of the conceptual yoking of analytic truth to a priori knowledge. My approach is a semantic one; the primary subject matter throughout the thesis is linguistic objects, such as propositions or sentences. I evaluate arguments, and also forward my own, about how such linguistic objects’ truth is determined, how their meaning is fixed and how we, respectively, know the conditions under which their truth and meaning obtain. The strategy is to make explicit what (...) is distinctive about analytic truths. The objective is to show that truths known a priori are trivial in a highly circumscribed way. My arguments are premised on a language-relative account of analytic truth. The language-relative account which underwrites much of what I do has two central tenets: 1. Conventionalism about truth and 2. Non-factualism about meaning. I argue that one decisive way of establishing conventionalism and non-factualism is to prioritise epistemological questions. Once it is established that some truths are not known empirically, an account of truth must follow which precludes factual truths being known non-empirically. The function of Part 1 is, chiefly, to render Carnap’s language-relative account of analytic truth. I do not offer arguments in support of Carnap at this stage, but throughout Parts 2 and 3, by looking at more current literature on a priori knowledge and analytic truth, it becomes quickly evident that I take Carnap to be correct, and why. In order to illustrate the extent to which Carnap’s account is conventionalist and non-factualist, I pose his arguments against those of his predecessors, Kant and Frege. Part 1 is a lightly retrospective background to the concepts of ‘analytic’ and ‘a priori’. The strategy therein is more mercenary than exegetical: I select the parts from Kant and Frege most relevant to Carnap’s eventual reaction to them. Hereby I give the reasons why Carnap foregoes a factual and objective basis for logical truth. The upshot of this is an account of analytic truth (i.e. logical truth, to him) which ensures its trivial nature. In opposition to accounts of a priori knowledge, which describe it as knowledge gained from rational apprehension, I argue that it is either knowledge from logical deduction or knowledge of stipulations. I therefore reject, in Part 2, three epistemologies for knowing linguistic conventions (e.g. implicit definitions): 1. intuition, 2. inferential a priori knowledge, and 3. a posteriori knowledge. At base, all three epistemologies are rejected because they are incompatible with conventionalism and non-factualism. I argue this point by signalling that such accounts of knowledge yield unsubstantiated second-order claims and/or they render the relevant linguistic conventions epistemically arrogant. For a convention to be arrogant it must be stipulated to be true. The stipulation is then considered arrogant when its meaning cannot be fixed, and its truth cannot be determined without empirical ‘work’. Once a working explication of ‘a priori’ has been given, partially in Part 1 (as inferential) and then in Part 2 (as non-inferential), I look, in Part 3, at an apriorist account of analytic truth, which, I argue, renders analytic truth non-trivial. The particular subject matter here is the implicit definitions of logical terms. The opposition’s argument holds that logical truths are known a priori (this is part of their identification criteria) and that their meaning is factually based.
From here it follows that analytic truth, being determined by factually based meaning, is also factual. I oppose these arguments by exposing an internal inconsistency: implicit definition is premised on the arbitrary stipulation of truth, which is inconsistent with saying that there are facts which determine the same truth. In doing so, I endorse the standard irrealist position about implicit definition and analytic truth (along with the “early friends of implicit definition” such as Wittgenstein and Carnap). What is it that I am trying to get at by doing all of the abovementioned? Here is a very abstracted explanation. The unmitigated realism of the rationalists of old, e.g. Plato, Descartes, Kant, has stoically borne the brunt of the allegation of yielding ‘synthetic a priori’ claims. The anti-rationalist phase of this accusation I am most interested in is that forwarded by the semantically driven empiricism of the early 20th century. It is here that the charge of the ‘synthetic a priori’ really takes hold. Since then, new methods and accusatory terms are employed by, chiefly, non-realist positions. I plan to give these proper attention in due course. However, it seems to me that the reframing of the debate in these new terms has also created the illusion that current philosophical realism, whether naturalistic realism, realism in science, realism in logic and mathematics, is somehow not guilty of the same epistemological and semantic charges levelled against Plato, Descartes and Kant. It is of interest to me that, particularly in current analytic philosophy (given its rationale), realism in many areas seems to escape the accusation of yielding synthetic a priori claims. Yet yielding synthetic a priori claims is something which realism so easily falls prey to. Perhaps this is a function of the fact that the phrase, ‘synthetic a priori’, used as an allegation, is now outmoded. This thesis is nothing other than an indictment of metaphysics, or speculative philosophy (this being the crime), brought against a specific selection of realist arguments. I therefore ask my reader to see my explicit, and perhaps outmoded, charge of the ‘synthetic a priori’ levelled against respective theorists as an attempt to draw a direct comparison with the speculative metaphysics so many analytic philosophers now love to hate. I think the phrase ‘synthetic a priori’ still does a lot of work in this regard, precisely because so many current theorists wrongly think they are immune to this charge. Consequently, I shall say much about what is not permitted. Such is, I suppose, the nature of arguing ‘against’ something. I’ll argue that it is not permitted to be a factualist about logical principles and say that they are known a priori. I’ll argue that it is not permitted to say linguistic conventions are a posteriori, when there is a complete failure in locating such a posteriori conventions. Both such philosophical claims are candidates for the synthetic a priori, for unmitigated rationalism. But on the positive side, we now have these two assets: Firstly, I do not ask us to abandon any of the linguistic practices discussed; merely to adopt the correct attitude towards them. For instance, where we use the laws of logic, let us remember that there are no known/knowable facts about logic. These laws are therefore, to the best of our knowledge, conventions not dissimilar to the rules of a game.
And, secondly, once we pass sentence on knowing, a priori, anything but trivial truths, we shall have at our disposal the sharpest of philosophical tools: a tool which can only proffer a better brand of empiricism. (shrink)
What is the status of research on implicit bias? In light of meta-analyses revealing ostensibly low average correlations between implicit measures and behavior, as well as various other psychometric concerns, criticism has become ubiquitous. We argue that while there are significant challenges and ample room for improvement, research on the causes, psychological properties, and behavioral effects of implicit bias continues to deserve a role in the sciences of the mind as well as in efforts to understand, and (...) ultimately combat, discrimination and inequality. (shrink)
Should we understand implicit attitudes on the model of belief? I argue that implicit attitudes are (probably) members of a different psychological kind altogether, because they seem to be insensitive to the logical form of an agent’s thoughts and perceptions. A state is sensitive to logical form only if it is sensitive to the logical constituents of the content of other states (e.g., operators like negation and conditional). I explain sensitivity to logical form and argue that it is (...) a necessary condition for belief. I appeal to two areas of research that seem to show that implicit attitudes fail spectacularly to satisfy this condition—although persistent gaps in the empirical literature leave matters inconclusive. I sketch an alternative account, according to which implicit attitudes are sensitive merely to spatiotemporal relations in thought and perception, i.e., the spatial and temporal orders in which people think, see, or hear things. (shrink)
What is the mental representation that is responsible for implicit bias? What is this representation that mediates between the trigger and the biased behavior? My claim is that this representation is neither a propositional attitude nor a mere association. Rather, it is mental imagery: perceptual processing that is not directly triggered by sensory input. I argue that this view captures the advantages of the two standard accounts without inheriting their disadvantages. Further, this view also explains why manipulating mental imagery (...) is among the most efficient ways of counteracting implicit bias. (shrink)