The analysis of multimodal argumentation in advertising is a crucial and problematic area of research. While its importance is growing in a time characterized by images and pictorial messages, the methods used for interpreting and reconstructing the structure of arguments expressed through verbal and visual means capture only isolated dimensions of this complex phenomenon. This paper proposes and illustrates a methodology for the reconstruction and analysis of “double-mode” arguments in advertisements, combining the instruments developed in social semiotics, pragmatics, and argumentation theory. An advertisement is processed through a five-step path. The analysis of its context, text genre, and images leads to a first representation of the messages that it encodes both pictorially and verbally. These first semantic representations are further enriched by including their polyphonic articulations and presuppositions, their explicatures, and their dialogical functions and illocutionary forces. These pragmatic steps retrieve the commitment structure of the ad, which allows a further argument analysis conducted through argumentation schemes.
The innovative Moral Impact Theory (“MIT”) of law claims that the moral impacts of legal institutional actions, rather than the linguistic content of “rules” or judicial or legislative pronouncements, determine law’s content. MIT’s corollary is that legal interpretation consists in the inquiry into what is morally required as a consequence of the lawmaking actions. This paper challenges MIT by critiquing its attendant view of the nature of legal interpretation and argument. Points include the following: (1) it is not practicable to predicate law’s content on the ability of legal officials to resolve moral controversies; (2) it would be impermissibly uncharitable to claim that participants in the legal system commit widespread error in failing to regard moral argument as the focus of legal interpretation; (3) whereas the legal official may initially respond to a conflict at the intuitive moral level, she must resolve the controversy at the deliberative, critical level, at which moral and legal thinking diverge; (4) because no two cases are precisely alike, and owing to the open texture of natural language, reference to extra-jurisdictional “persuasive” and “secondary” authority permeates legal argument, yet, nearly by definition, such linguistic sources cannot have engendered significant moral impacts in the home jurisdiction; and (5) one way or another, we ultimately arrive at linguistic contents. The paper concludes by accepting, as undeniable, that legal institutional actions have moral impacts and generate moral obligations. Officials are obligated to adhere to certain constraints in their treatment of one another, cases, litigants, and citizens. Less explored, however, have been the ways in which legal pronouncements likely morally impact the community, beyond the issue of a duty to obey the law.
In this paper, we present a survey of the development of the technique of argument diagramming, covering not only the fields in which it originated – informal logic, argumentation theory, evidence law and legal reasoning – but also more recent work in applying and developing it in computer science and artificial intelligence. Beginning with a simple example of an everyday argument, we present an analysis of it visualised as an argument diagram constructed using a software tool. In the context of a brief history of the development of diagramming, it is then shown how argument diagrams have been used to analyze and work with argumentation in law, philosophy and artificial intelligence.
This book shows how research in linguistic pragmatics, philosophy of language, and rhetoric can be connected through argumentation to analyze a recognizably common strategy used in political and everyday conversation, namely the distortion of another’s words in an argumentative exchange. Straw man argumentation refers to the modification of a position by misquoting, misreporting or wrenching the original speaker’s statements from their context in order to attack them more easily or more effectively. Through 63 examples taken from different contexts (including political and forensic discourses and dialogs) and 20 legal cases, the book analyzes the explicit and implicit types of straw man, shows how to assess the correctness of a quote or a report, and illustrates the arguments that can be used for supporting an interpretation and defending against a distortion. The tools of argumentation theory, a discipline aimed at investigating the uses of arguments by combining insights from pragmatics, logic, and communication, are applied to provide an original account of interpretation and reporting, and to describe and illustrate tactics and procedures that can be used and implemented for practical purposes. This book will appeal to scholars in the fields of political communication, communication in general, argumentation theory, rhetoric and pragmatics, as well as to people working in public speech, speech writing, and discourse analysis.
Social scientists have paid insufficient attention to the role of law in constituting the economic institutions of capitalism. Part of this neglect emanates from inadequate conceptions of the nature of law itself. Spontaneous conceptions of law and property rights that downplay the role of the state are criticized here, because they typically assume relatively small numbers of agents and underplay the complexity and uncertainty in developed capitalist systems. In developed capitalist economies, law is sustained through interaction between private agents, courts and the legislative apparatus. Law is also a key institution for overcoming contracting uncertainties. It is furthermore a part of the power structure of society, and a major means by which power is exercised. This argument is illustrated by considering institutions such as property and the firm. Complex systems of law have played a crucial role in capitalist development and are also vital for developing economies.
Anglo-American general jurisprudence remains preoccupied with the relationship of legality to morality. This has especially been so in the re-reading of Lon Fuller’s theory of an implied morality in any law. More often than not, Fuller has been said to distinguish between the identity of a discrete rule and something called ‘morality’. In this reading of Fuller, however, insufficient attention has been paid to what is signified by ‘morality’. Such an implied morality has been understood in terms of deontological duties, the Good life, naturalism, and subjectively posited values. Each of these interpretations has a shared common denominator: namely, the distinction between ‘is’ and ‘ought’, ‘facts’ and ‘values’. Legality is said to be nested in an ‘is’ world. An ‘ought’ cannot be derived from an ‘is’. This essay aims to press this distinction further. Fuller, I intend to argue, does indeed accept the is/ought distinction, as have his commentators. The associations of the ‘oughts’ with deontological duties, the Good life, naturalism and subjective values, however, have been misdirected. This has been so because Fuller presupposed that legality was a matter of a spatial structure. Non-law was situated outside the structure. If a legislator or judge considered matters outside the structure as if they were binding upon jurists and, for that matter, upon members of the legal structure, the law was not binding. The crucial incident of the structure was the boundary of the structure. Fuller’s structuralist theory of law offers the opportunity to better understand what he signified by ‘the internal morality of law’.
I shall privilege several elements of his theory: the relation of legal units to a structure, the nature of a structure, the constituents of a structure (territorial space, its pillars and its matter), the forms of the legal structure, the centrifugal and centripetal structures, the structure and traditional theories of morality, the role of the legal official in a structure, and why the internal knowledge in the structure is binding. Fuller especially privileged two features of a legal structure. The first was the boundary of the structure. The second concerned the exteriority of the boundary. Both features presupposed a territorial view of legal knowledge. The legal mind analysed any social problem through the map of such a sense of legal space. By concentrating upon the discrete rule in isolation from the implied structural boundary to which the rule referred, commentators have been misdirected in their analyses of Fuller’s theory of law and morality. My argument in this respect will proceed as follows. In order to clarify Fuller’s senses of the morality of law, I shall first outline what he means by a ‘structure’. Second, how is the structure related to legal knowledge? Third, what are the various forms of the structure? Fourth, is the structure centrifugal or centripetal? And finally, why is the structure binding?
Take “legal reality” to be the part of reality that actual legal thought and talk is distinctively about, such as legal institutions, legal obligations, and legal norms. Our goal is to explore whether legal reality is disunified. To illustrate the issue, consider the possibility that an important metaphysical thesis such as positivism is true of one part of legal reality (legal institutions), but not another (legal norms). We offer two arguments that suggest that legal reality is disunified: one concerns the heterogeneity of different entities that are part of legal reality; the other concerns variation within legal thought and talk. We then show that taking the possibility of the disunity of legal reality seriously has important upshots for how we think about the positivist and antipositivist traditions, the debate between them, and their relation to other parts of legal theory, such as critical legal theory and legal realism.
Aristotle divided arguments that persuade into the rhetorical (which happen to persuade), the dialectical (which are strong and so ought to persuade to some degree) and the demonstrative (which must persuade if rightly understood). Dialectical arguments were long neglected, partly because Aristotle did not write a book about them. But in the sixteenth and seventeenth centuries late scholastic authors such as Medina, Cano and Soto developed a sound theory of probable arguments, those that have logical and not merely psychological force but fall short of demonstration. Informed by late medieval treatments of the law of evidence and problems in moral theology and aleatory contracts, they considered the reasons that could render legal, moral, theological, commercial and historical arguments strong though not demonstrative. At the same time, demonstrative arguments became better understood as Galileo and other figures of the Scientific Revolution used mathematical proof in arguments in physics. Galileo moved both dialectical and demonstrative arguments into mathematical territory.
Kelsen, Hart, and Legal Normativity. Brian Bix - 2018 - Revus. Journal for Constitutional Theory and Philosophy of Law / Revija Za Ustavno Teorijo in Filozofijo Prava 34:25-42.
This article focuses on issues relating to legal normativity, emphasizing the way these matters have been elaborated in the works of Kelsen and Hart and later commentators on their theories. First, in Section 2, the author offers a view regarding the nature of law and legal normativity focusing on Kelsen's work (at least one reasonable reading of it). The argument is that the Basic Norm is presupposed when a citizen chooses to read the actions of legal officials in a normative way. In this Kelsenian approach, all normative systems are structurally and logically similar, but each normative system is independent of every other system – thus, law is, in this sense, conceptually separate from morality. Second, in Section 3, the author turns to Hart's theory, analyzing the extent to which his approach views legal normativity as sui generis. This approach raises questions regarding what has become a consensus view in contemporary jurisprudence: that law makes moral claims. The author shows how a more deflationary (and less morally-flavored) understanding of the nature of law is tenable, and may, in fact, work better than current conventional (morality-focused) understandings.
The problem of establishing the best interpretation of a speech act is of fundamental importance in argumentation and communication in general. A party in a dialogue can interpret another’s or his own speech acts in the most convenient ways to achieve his dialogical goals. In defamation law this phenomenon becomes particularly important, as the dialogical effects of a communicative move may result in legal consequences. The purpose of this paper is to combine the instruments provided by argumentation theory with the advances in pragmatics in order to propose an argumentative approach to meaning reconstruction. This theoretical proposal will be applied to and tested against defamation cases at common law. Interpretation is represented as based on a hierarchy of interpretative presumptions. On this view, the development of the logical form of an utterance is regarded as the result of an abductive pattern of reasoning in which various types of presumptions are confronted and the weakest ones are excluded. Conflicts of interpretation and equivocation become essentially interwoven with the dialectical problem of fulfilling the burden of defeating a presumption. The interpreter has a burden of explaining why a given presumption is subject to default, assuming that the speaker is reasonable and acting based on a set of shared expectations.
Presumption is a complex concept in law, affecting the dialogue setting. However, it is not clear how presumptions work in everyday argumentation, in which the concept of “plausible argumentation” seems to encompass all kinds of inferences. By analyzing the legal notion of presumption, it appears that this type of reasoning combines argument schemes with reasoning from ignorance. Presumptive reasoning can be considered a particular form of reasoning, which needs positive or negative evidence to carry a probative weight on the conclusion. For this reason, presumptions shift the burden of providing evidence or explanations onto the interlocutor. The latter can provide new information or fail to do so: whereas in the first case the new information rebuts the presumption, in the second case the absence of information that the interlocutor could reasonably provide strengthens the conclusion of the presumptive reasoning. In both cases the result of the presumption is to strengthen the conclusion of the reasoning from lack of evidence. As shown in the legal cases, the effect of presumption is to shift the burden of proof to the interlocutor; however, the shift a presumption effects is only the shift of the evidential burden, or the burden of completing the incomplete knowledge from which the conclusion was drawn. The burden of persuasion remains on the proponent of the presumption. By contrast, reasoning from definition in law is a conclusive proof, and shifts to the other party the burden to prove the contrary. This crucial difference can be applied to everyday argumentation: natural arguments can be divided into dialectical and presumptive arguments, leading to conclusions materially different in strength.
Statutory interpretation involves the reconstruction of the meaning of a legal statement when it cannot be considered as accepted or granted. This phenomenon needs to be considered not only from the legal and linguistic perspective, but also from the argumentative one, which focuses on the strategies for defending a controversial or doubtful viewpoint. This book draws upon linguistics, legal theory, computing, and dialectics to present an argumentation-based approach to statutory interpretation. By translating and summarizing the existing legal interpretative canons into eleven patterns of natural arguments – called argumentation schemes – the authors offer a system of argumentation strategies for developing, defending, assessing, and attacking an interpretation. Illustrated through major cases from both common and civil law, this methodology is summarized in diagrams and maps for application to computer sciences. These visuals help make the structures, strategies, and vulnerabilities of legal reasoning accessible to both legal professionals and laypeople.
The fields of linguistic pragmatics and legal interpretation are deeply interrelated. The purpose of this paper is to show how pragmatics and the developments in argumentation theory can contribute to the debate on legal interpretation. The relation between the pragmatic maxims and the presumptions underlying the legal canons is brought to light, unveiling the principles that underlie the types of argument usually used to justify a construction. The Gricean maxims and the arguments of legal interpretation are regarded as presumptions subject to default used to justify an interpretation. This approach allows one to trace the different legal interpretive arguments back to their basic underlying presumptions, so that they can be compared, ordered, and assessed according to their defeasibility conditions. It also allows one to understand the difference between various types of interpretive canons, and their strength in justifying an interpretation.
The debate on the argumentative turn in Public Policy and Administration (PPA), as reflective of the influence of politico-legal theory on the discipline, is reviewed through a thorough and in-depth engagement with the Argumentation Theory (AT) literature. The focus of this article is in fact methodological, since we argue that critical scholars – who have contributed to the general and specialized (i.e. political discourse analysis and critical contextualism) literature of AT as well as politico-legal theory – pave the way to a novel methodology, which will be exemplified through the analysis of the transparency concept.
This paper challenges the Critical Legal Studies (CLS) claims of legal indeterminacy. It uses legal formalist accounts of logic and language as its main line of argument, maintaining that the CLS claims are grounded only in ambiguity and confusion. CLS is a legal theory that challenges and overturns accepted norms and standards in legal theory and practice. Its proponents maintain that law in historical and contemporary society has only an alleged impartiality, and that it is used as a tool of privilege and power – law is politics. Consequently, CLS maintains that this results in the indeterminacy of law. Legal indeterminacy can be summed up as the claim that, contrary to the common understanding, legal materials – statutes and case law – do not really answer legal disputes. Legal principles and doctrines, as CLS scholars claim, are indeterminate, for they are riddled with gaps, conflicts, and anomalies that are widely present even in simple cases. Legal indeterminacy also arises because of the underlying political power – law is politics – that implicates law as merely a tool for oppression. This thesis shows that the CLS assertion of legal indeterminacy is grounded only in ambiguity. On one hand, using the main concepts of legal formalist logic and language, supported by three sub-arguments – the inherent generality of legal language, reasoned elaboration, and neutral principles – it refutes the CLS claims of legal indeterminacy. On the other, the paper maintains through counterexamples that the main CLS ground for legal indeterminacy, ‘law is politics’, is merely a statement of what currently happens in society, and as an argument it is sentimental and weak.
Despite the effort educators put into developing in students the critical writing and thinking skills needed to compose effective arguments, undergraduate college students are often accused of churning out essays lacking in creative and critical thought, arguments too obviously formulated and with sides too sharply drawn. Theories abound as to why these deficiencies are rampant. Some blame students’ immature cognitive and emotional development. Others blame lackadaisical output on the assignment of shopworn writing subjects, such as American laws and attitudes about capital punishment and abortion. Although these factors might contribute to faulty written output in some cases, the prevailing hindrance is our very pedagogy, a system in which students are rewarded for composing the very type of argument we wish to avoid — the eristic, in which the goal is not truth seeking, but successfully disputing another’s argument. Certainly the eristic argument is the intended solution in cases when a clear‑cut outcome is needed, such as in legal battles and political campaigns when there can only be one winner. However, teaching mainly or exclusively the eristic, as is done in most composition classrooms today, halts the advancement of the higher‑order inquiry skills we try to develop in our students.
When Hegel first addresses moral responsibility in the Philosophy of Right, he presupposes that agents are only responsible for what they intended to do, but appears to offer little, if any, justification for this assumption. In this essay, I claim that the first part of the Philosophy of Right, “Abstract Right”, contains an implicit argument that legal or external responsibility (blame for what we have done) is conceptually dependent on moral responsibility proper (blame for what we have intended). This overlooked argument satisfies the first half of a thesis Hegel applies to action in the Encyclopaedia Logic, namely, that the outer must be inner, and thus provides a necessary complement for his more explicit treatment of the second half of that thesis, that the inner must be outer. The claim that agents are only responsible for what they intended to do might appear, at first, to risk conflating legal and moral responsibility and to lack the necessary means to deal with the phenomenon of moral luck, but I argue that if it is properly situated within the whole of Hegel's philosophy of action it can be saved from both of these consequences and so take its place as an essential component of Hegel's full theory of moral responsibility.
Legal philosophy dates back to the ancient Greek philosophers, and it continues to be a vigorously debated subject because no existing legal philosophy that encapsulates law’s origins or purpose is beyond reproach. This paper will introduce a new legal philosophy, which I have termed instinctualism. Instinctualism is the idea that law originates from human instinct. Human beings are born with certain natural capacities that they learn to utilize as they mature. Examples include speaking, walking, associating and interacting with others, and practicing faith in a divine being, the state, or some other source of inspiration and hope. Human beings don’t think about the potential illegalities of speaking their minds, moving from one place to another, or engaging in conversation with their friends or associates unless they are indoctrinated to do so. Rather, human beings do these things because they have instinctual desires and the knowledge to do so. Other rights and laws, such as freedom of the press, are the product of peoples’ instinctual rights. For example, as people learn to speak, they instinctually share information and news about their inner circles or communities. If taken a step further, one begins to discuss an organized press. Knowledge and understanding of laws, such as those limiting certain types of speech, i.e., hate speech, must be taught and learned; it is not instinctual. This paper will introduce a subset of the most influential legal philosophies of different eras in human intellectual development, beginning with those of Ancient Greece. It will proceed to describe the shortcomings of those philosophies before introducing instinctualism as an alternative. After defining instinctualism, I will proceed to discuss how it addresses the shortcomings of other legal philosophies.
Next, I will introduce rights guaranteed to the citizens of four prominent countries via relevant sources of primary law for each of those countries. Finally, I will close by reviewing the main arguments in the paper and discussing future research that I will undertake to buttress this paper’s arguments.
This Essay analyzes an essay by H. L. A. Hart about discretion that has never before been published and has often been considered lost. Hart, one of the most significant legal philosophers of the twentieth century, wrote the essay at Harvard Law School in November 1956, shortly after he arrived as a visiting professor. In the essay, Hart argued that discretion is a special mode of reasoned, constrained decision-making that occupies a middle ground between arbitrary choice and determinate rule application. Hart believed that discretion, soundly exercised, provides a principled way of coping with legal indeterminacy that is fully consistent with the rule of law. This Essay situates Hart’s paper – Discretion – in historical and intellectual context, interprets its main arguments, and assesses its significance in jurisprudential history. In the context of Hart’s work, Discretion is notable because it sketches a theory of legal reasoning in depth, with vivid examples. In the context of jurisprudential history, Discretion is significant because it sheds new light on long-overlooked historical and theoretical connections between Hart’s work and the Legal Process School, the American jurisprudential movement dominant at Harvard during Hart’s year as a visiting professor. Hart’s Discretion is part of our jurisprudential heritage, advancing our understanding of legal philosophy and its history.
The phrase _secundum quid et simpliciter_ is the Latin expression translating and labelling the sophism described by Aristotle as connected with the use of some particular expression “absolutely or in a certain respect and not in its proper sense.” This paper presents an overview of the analysis of this fallacy in the history of dialectics, reconstructing the different explanations provided in the Aristotelian texts, the Latin and medieval dialectical tradition, and the modern logical approaches. The _secundum quid_ emerges as a strategy that is based on the pragmatic dimension of arguments, and in particular the complex passage from an utterance (what is said) to its logical form (a proposition in an argument). The medieval and modern logical theories attempted to explain from different philosophical perspectives how the pragmatically enriched semantic representation can be achieved, justified, and most importantly manipulated. The different analyses of this fallacy bring to light various dimensions of the pragmatics of arguments, and the complex interdependence between context, meaning, and inferences.
Defenders of intellectual property rights argue that these rights are justified because creators and inventors deserve compensation for their labour, because their ideas and expressions are their personal property, and because the total amount of creative work and innovation increases when inventors and creators have a prospect of generating high income through the exploitation of their monopoly rights. This view is not only widely accepted by the general public, but also enforced through a very effective international legal framework. And it is endorsed by most academic researchers and commentators in this field. In this essay, I will show that the classical arguments for the justification of private intellectual property rights can be contested, and that there are many good reasons to abolish intellectual property rights completely in favour of an intellectual commons where every person is allowed to use every cultural expression and invention in whatever way he wishes. I will first give a short overview of the classical arguments for the justification of intellectual property as they are usually stated. I will then discuss the question of whether the creator or inventor deserves his de jure monopoly, by using John Christman’s categories of income and control rights to analyse property rights. The aim here is to show that it does not make sense to create control rights for abstract objects, as they are not scarce, and that there is no logical connection between the surplus which may be generated through income rights and the labour which has been put into a cultural artefact or an invention, and therefore it is not justified to grant monopoly rights on the basis of Lockean natural rights arguments for self-ownership and the just appropriation of worldly resources.
As it is possible to reject Christman’s property rights categories, I will then go on to show, on the basis of Richard Dawkins’ postulation of the ‘meme’ and Ludwik Fleck’s theory of the ‘thought collective’, that creative processes should be interpreted as interpersonal or collective processes, and therefore it is not justified to grant intellectual property rights to individuals on the basis of the idea that the individual who has put labour into the creative work or the invention should be the one to whom the contents of the work belong exclusively. As it is still possible to postulate the utilitarian argument that intellectual property rights are just because they increase the amount of creative works and inventions, I will argue in the last chapter that, from a libertarian as well as from an egalitarian point of view, the justification of intellectual monopoly rights on utilitarian grounds cannot be maintained. Therefore it is time to abolish the current global intellectual property law regime in favour of an intellectual commons for the good of all human beings and societies.
Most readers believe that it is difficult, verging on the impossible, to extract concrete prescriptions from the ethics of Emmanuel Levinas. Although this view is largely correct, Levinas’ philosophy can, with some assistance, generate specific duties on the part of legal actors. In this paper, I argue that the fundamental premises of Levinas’ theory of justice can be used to construct a prohibition against capital punishment. After analyzing Levinas’ concepts of justice, responsibility, and interruption, I turn toward his scattered remarks on legal institutions, arguing that they enable a sense of interruption specific to the legal domain. It is here that we find the conceptual resources most important to my Levinasian abolition. I argue that the interruption of legal justice by responsibility implies what I call the principle of revisability. The principle of revisability states a necessary condition of just legal institutions: To be just, legal institutions must ensure the possibility of revising any and all of their rules, principles, and judgments. From this, the argument against capital punishment easily follows. Execution is a legal act, perhaps the only legal act, that cannot be undone. An application of the principle of revisability to this fact leads to the conclusion that legal institutions cannot justly impose capital punishment. After defending these points at length, I conclude with some observations on the consequences of the principle of revisability for law more generally.
This paper is aimed at combining the advances in argumentation theory with the models used in the field of education to address the issue of improving students’ argumentative behavior by interacting with an expert. The concept of deeper or more sophisticated argumentative strategy is theoretically defined and used to advance two new coding schemes, based on the advances in the argumentation studies and aimed at capturing the dialectical, or structural, behavior, and the argumentative content of each dialogue unit. These coding schemes are then applied for a qualitative analysis of a study designed to investigate how students’ argumentative behavior can be influenced by the interaction with an expert, who used specific types of attacks on the interlocutors’ positions. The twofold coding shows at which dialogical level expert–peer interactions can directly and more stably affect students’ argumentative behavior, and what effects such more sophisticated strategies can have on the discussion and the analysis of disagreements. In particular, this paper shows how a specific type of deep-level attack, the underminer, can open dialogues of a different level, focused on unveiling and debating background beliefs underlying a specific position.
Bayesian models of legal arguments generally aim to produce a single integrated model, combining each of the legal arguments under consideration. This combined approach implicitly assumes that variables and their relationships can be represented without any contradiction or misalignment, and in a way that makes sense with respect to the competing argument narratives. This paper describes a novel approach to compare and ‘average’ Bayesian models of legal arguments that have been built independently and with no attempt to make them consistent in terms of variables, causal assumptions or parameterization. The approach involves assessing whether competing models of legal arguments explain or predict the facts uncovered before or during the trial process. Those models that are more heavily disconfirmed by the facts are given lower weight, as model plausibility measures, in the Bayesian model comparison and averaging framework adopted. In this way a plurality of arguments is allowed yet a single judgement based on all arguments is possible and rational.
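The weighting-and-averaging step described in this abstract can be sketched in a few lines; the following is a hypothetical illustration of Bayesian model averaging, not the authors' implementation, and all model likelihoods and probabilities are invented numbers.

```python
# Minimal sketch: each independently built Bayesian model of a legal
# argument assigns a likelihood to the facts uncovered at trial; models
# more heavily disconfirmed by those facts receive lower posterior
# weight, and a single judgement is the weighted average.

def average_models(models, priors=None):
    """models: list of (likelihood_of_facts, prob_of_claim) pairs."""
    n = len(models)
    priors = priors or [1.0 / n] * n
    # Posterior model weights are proportional to prior * likelihood.
    raw = [p * lik for p, (lik, _) in zip(priors, models)]
    total = sum(raw)
    weights = [w / total for w in raw]
    # One judgement based on all arguments: the mixture probability.
    return sum(w * prob for w, (_, prob) in zip(weights, models))

# Prosecution model explains the facts well; defence model is disconfirmed.
verdict = average_models([(0.8, 0.9), (0.1, 0.2)])
print(round(verdict, 3))  # → 0.822: the well-confirmed model dominates
```

The disconfirmed model is not discarded; it simply contributes less to the final judgement, which preserves the plurality of arguments while still yielding a single rational verdict probability.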
In this paper we use a series of examples to show how oppositions and dichotomies are fundamental in legal argumentation, and vitally important to be aware of, because of their twofold nature. On the one hand, they are argument structures underlying various kinds of rational argumentation commonly used in law as a means of getting to the truth in a conflict of opinion under critical discussion by two opposing sides before a trier of fact. On the other hand, they are argument structures underlying moves made in strategic advocacy by both sides that function as platforms for different kinds of questionable argumentation tactics and moves that are in some instances tricky and deceptive.
In this paper a theoretical definition is presented that helps to explain how the logical structure of legal presumptions is constructed, by applying the Carneades model of argumentation developed in artificial intelligence. Using this model, it is shown how presumptions work as devices used in evidentiary reasoning in law in the event of a lack of evidence to assist a chain of reasoning to move forward to prove or disprove a claim. It is shown how presumptions work as practical devices that may be useful in cases in which there is insufficient evidence to prove the claim to an appropriate standard of proof.
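The role of a presumption as a device that lets reasoning move forward despite missing evidence can be illustrated with a toy sketch. This is an invented simplification, not the Carneades formalism itself, and the predicate names are hypothetical.

```python
# Toy illustration of a presumption in evidentiary reasoning: a claim is
# taken as proved if supported by evidence, defeated if rebutted by
# counter-evidence, and otherwise decided by the presumption, so the
# chain of reasoning can proceed even when evidence is lacking.

def holds(claim, evidence, rebuttals, presumptions):
    if claim in evidence:          # proved directly by evidence
        return True
    if claim in rebuttals:         # counter-evidence defeats the presumption
        return False
    return claim in presumptions   # no evidence either way: presumption decides

# A classic legal example: death is presumed after a long unexplained absence.
presumptions = {"person_deceased"}
print(holds("person_deceased", set(), set(), presumptions))                 # True
print(holds("person_deceased", set(), {"person_deceased"}, presumptions))   # False
```

The key structural point the sketch captures is defeasibility: the presumption carries the conclusion only in the absence of rebutting evidence, mirroring how a presumption shifts rather than settles the burden of proof.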
Please contact me at [email protected] if you are interested in reading a particular chapter or being sent the entire manuscript for private use. -/- The thesis offers a comprehensive argument in favor of a regulationist approach to autonomous weapon systems (AWS). AWS, defined as all military robots capable of selecting or engaging targets without direct human involvement, are an emerging and potentially deeply transformative military technology subject to very substantial ethical controversy. AWS have both their enthusiasts and their detractors, with the latter prominently advocating for a global preemptive ban on AWS development and use. Rejecting both positions, the author outlines a middle-of-the-road regulationist approach that is neither overly restrictive nor overly permissive. The disqualifying flaws of the rival prohibitionist approach are demonstrated in the process. After defining the core term of autonomy in weapon systems, the practical difficulties involved in applying an arms control regime to AWS are analyzed. The analysis shows that AWS are an extremely regulation-resistant technology. This feature, when combined with their assumed high military utility, makes a ban framework extremely costly to impose and enforce. As such it is ultimately very likely to fail to the benefit of the most unscrupulous international actors and at a very substantial risk to those abiding by international law. Consequently, to be ethically viable, a prohibitionist framework would need to offer substantial moral benefits impossible to attain through the rival regulationist approach. The remainder of the thesis undertakes to demonstrate that this is not the case. Comparing the considerations of military and strategic necessity to humanitarian concerns most commonly voiced by prohibitionists requires finding a common denominator for all values being referred to.
Consequently, the thesis proceeds to show that both kinds of concerns are ultimately reducible to respect for basic human rights of all stakeholders, and so that the prohibitionist and regulationist approaches may ultimately be compared in terms of the consequences their adoption would generate for basic human rights realization. The author then evaluates both the potential humanitarian benefits, and the potential humanitarian hazards of AWS introduction. The benefits of leaving frontline combat to machines are outlined, with the unique kinds of suffering that would be abolished by such a development being described in detail. The arguments against AWS adoption are then divided into three classes: arguments related to the alleged impossibility of compliance with the Laws of Armed Conflict, non-consequentialist and broad consequentialist arguments. This analysis, which comprises the greater part of the entire thesis, shows that the concerns behind compliance arguments are indeed substantial and have to be accommodated via a complex framework of best practices, regulations and localized restrictions on some kinds of AWS or AWS use in particular environments. They do not, however, justify a universal ban on using all the diverse forms of AWS in all environments. Non-consequentialist objections are found either reducible to other classes of arguments or thoroughly unconvincing, sometimes to the point of being actually vacuous. Broad consequentialist concerns are likewise found to be accommodable by regulation, empirically unfounded or causally disconnected from the actions of legitimate actors acquiring AWS, and therefore irrelevant to the moral permissibility of such actions. The author concludes that the proponents of prohibitionism are unable to point to moral benefits substantial enough to justify the costs and risks inherent in the approach.
A global ban is, in fact, likely to have a worse humanitarian impact than well-regulated AWS adoption even if the strategic risks are disregarded. On the other hand, the analysis shows that there indeed exists an urgent need to regulate AWS through a variety of technological, procedural and legal solutions. These include, but are not limited to, a temporary moratorium on anti-personnel AWS use, development of internationally verified compliance software and eventual legal requirement of its employment, a temporary moratorium on AWS proliferation to state actors and a ban on their proliferation to non-state agents.
One main goal of argumentation theory is to evaluate arguments and to determine whether they should be accepted or rejected. When there is no clear answer, a third option, being undecided, has to be taken into account. Indecision is often not considered explicitly, but rather taken to be a collection of all unclear or troubling cases. However, current philosophy makes a strong point for taking indecision itself to be a proper object of consideration. This paper aims at revealing parallels between the findings concerning indecision in philosophy and the treatment of indecision in argumentation theory. By investigating what philosophical forms and norms of indecision are involved in argumentation theory, we can improve our understanding of the different uncertain evidential situations in argumentation theory.
Analytic philosophers have, since the pioneering work of B.K. Matilal, emphasized the contributions of Nyāya philosophers to what contemporary philosophy considers epistemology. More recently, scholarly work demonstrates the relevance of their ideas to argumentation theory, an interdisciplinary area of study drawing on epistemology as well as logic, rhetoric, and linguistics. This paper shows how early Nyāya theorizing about argumentation, from Vātsyāyana to Jayanta Bhaṭṭa, can fruitfully be juxtaposed with the pragma-dialectic approach to argumentation pioneered by Frans van Eemeren. I illustrate the implications of this analysis with a case study from Jayanta Bhaṭṭa’s satirical play, Much Ado about Religion (Āgamaḍambara).
This dissertation is an analysis of the development of dialectic and argumentation theory in post-classical Islamic intellectual history. The central concerns of the thesis are: treatises on the theoretical understanding of the concept of dialectic and argumentation theory, and how, in practice, the concept of dialectic, as expressed in the Greek classical tradition, was received and used by five communities in the Islamic intellectual camp. It shows how dialectic as an argumentative discourse diffused into five communities (theologians, poets, grammarians, philosophers and jurists) and how these local dialectics that the individual communities developed fused into a single system to form a general argumentation theory (adab al-bahth) applicable to all fields. I evaluate a treatise by Shams al-Din Samarqandi (d.702/1302), the founder of this general theory, and the treatises that were written after him as a result of his work. I concentrate specifically on work by Adud al-Din al-Iji (d.756/1355), Sayyid Sharif al-Jurjani (d.816/1413), Taşköprüzâde (d.968/1561), Saçaklızâde (d.1150/1737) and Gelenbevî (d.1205/1791) and analyze how each writer (from Samarqandi to Gelenbevî) altered the shape of argumentative discourse and how later intellectuals in the post-classical Islamic world responded to that discourse bequeathed by their predecessors. What is striking about the period that this dissertation investigates (from 1300-1800) is the persistence of what could be called the linguistic turn in argumentation theory. After a centuries-long run, the jadal-based dialectic of the classical period was displaced by a new argumentation theory, which was dominantly linguistic in character. This linguistic turn in argumentation dates from the final quarter of the fourteenth century in Iji's impressively prescient work on 'ilm al-wad'.
This idea, which finally surfaced in the post-classical period, that argumentation is about definition and that, therefore, defining is the business of language—even perhaps, that language is the only available medium for understanding and being understood—affected the way that argumentation theory was processed throughout most of the period in question. The argumentative discourse that started with Ibn al-Rawandi in the third/ninth century left a permanent imprint on Islamic intellectual history, which was then full of concepts, terminology and objectives from this discourse up until the late nineteenth century. From this perspective, Islamic intellectual history can be read as the tension between two languages: the "language of dialectic" (jadal) and the "language of demonstration" (burhan), each of which refers not only to a significant feature of that history, but also to a feature that could dramatically alter the interpretation of that history.
We argue that legal argumentation, as the subject matter as well as a special subfield of Argumentation Studies (AS), has to be examined by making skilled use of the full panoply of tools such as argumentation and story schemes which are at the forefront of current work in AS. In reviewing the literature, we make explicit our own methodological choices (particularly regarding the place of normative deliberation in practical reasoning) and then illustrate the implications of such an approach through the analysis of a case study in the English law of evidence. We argue that a clear distinction must be drawn between practical argumentation and stories. Because of the institutional separation between legal judgment and fact-finding in common-law jury trials, we argue for the combination of argument and story-based analysis.
The problem addressed in this article is the relationship between law and morality. It is asked (1) to what extent law and morality are connected and separated and (2) since when has it been so. To the extent that law and morality are distinct normative orders, it is asked (3) whether they rule exactly the same behaviors or whether each order rules different kinds of behaviors. If they rule at least some of the same behaviors, it is asked (4) whether there can be antinomies (contradictions) between them. If there are antinomies, it is asked (5) whether the antinomies are only apparent (prima facie) and are therefore mistakes of human reason, or are definite and real. If the antinomies are apparent or real, it is asked (6) whether law or morality prevails (or should prevail) in the case of an antinomy. If one of these prevails, it is asked (7) whether this is always so, or whether law sometimes prevails (and should prevail) over morality and vice versa. In the case of existing coherence or at least solvable antinomies between law and morality, it is asked (8) whether the consequent achieved unity of practical reason is a specifically moral unity and whether it is a matter of cognition, of institutionalization, of individual or collective construction, or of consensus.
Moorean arguments are a popular and powerful way to engage highly revisionary philosophical views, such as nihilism about motion, time, truth, consciousness, causation, and various kinds of skepticism (e.g., external world, other minds, inductive, global). They take, as a premise, a highly plausible first-order claim (e.g., cars move, I ate breakfast before lunch, it’s true that some fish have gills) and conclude from it the falsity of the highly revisionary philosophical thesis. Moorean arguments can be used against nihilists in ethics (error theorists), too. Recently, error theorists have recognized Moorean arguments as a powerful challenge and have tried to meet it. They’ve argued that moral Moorean premises seem highly credible to us, but aren’t, by offering various debunking explanations. These explanations all appeal to higher-order evidence—evidence of error in our reasoning. I argue that drawing attention to higher-order evidence is a welcome contribution from error theorists, but that the higher-order evidence actually counts further against error theoretic arguments—including their debunking explanations—and further in favor of Moorean arguments and the commonsense views they support. Along the way I answer a few prominent objections to Moorean arguments: that they are objectionably question-begging, rely on categorizing some facts as “Moorean Facts”, and that reports of one’s credence in a proposition bears no interesting relation to that proposition’s credibility.
Legal philosophers distinguish between a static and a dynamic interpretation of law. The former assumes that the meaning of the words used in a legal text is set at the moment of its enactment and does not change with time. The latter allows the interpreters to update the meaning and apply a contemporary understanding to the text. The dispute between these competing theories has significant ramifications for social and political life. To take an example, depending on the approach, the term “cruel punishment” used in the US Constitution will be given an 18th century meaning or a contemporary one. -/- The philosophy of language seems to provide greater support to the static approach to legal interpretation. Within this approach the lawmaker is perceived as a speaker and legal texts are interpreted as utterances. As a consequence, interpretation is a quest for the speaker/lawmaker’s intention or the public meaning that prevailed at the time of enactment. Neither the intention nor the public meaning are considered to have changed in time. -/- In this paper I argue that the philosophy of language provides the dynamic approach with support as robust as the static one’s. This support comes from an externalist perspective in semantics, rooted in philosophical pragmatism and supported by Ruth Millikan’s concept of meaning as proper function. Grounding the dynamic approach in a well-founded linguistic philosophy rises to the challenge presented by the originalists’ declaration that “it takes a theory to beat a theory”.
In this article, a new, idealizing-hermeneutic methodological approach to developing a theory of philosophical arguments is presented and carried out. The basis for this is a theory of ideal philosophical theory types developed from the analysis of historical examples. According to this theory, the following ideal types of theory exist in philosophy: 1. descriptive-nomological, 2. idealizing-hermeneutic, 3. technical-constructive, 4. ontic-practical. These types of theories are characterized in particular by what their basic types of theses are. The main task of this article is then to determine the types of arguments that are suitable for justifying these types of theses. Surprisingly, practical arguments play a key role here.
The interpretation and the indirect reporting of a speaker’s communicative intentions lie at the crossroad between pragmatics, argumentation theory, and forensic linguistics. Since the leading case Masson v. New Yorker Magazine, Inc., in the United States the legal problem of determining the truth of a quotation is essentially equated with the correctness of its indirect reporting, i.e. the representation of the speaker’s intentions. For this reason, indirect reports are treated as interpretations of what the speaker intends to communicate. Theoretical considerations, aimed at establishing the pragmatic meaning of an utterance and differentiating between presumptive and non-presumptive interpretation, are thus interwoven with the practical legal need of distinguishing a correct indirect report from an incorrect one or a misquotation. An incorrect report or a misquotation has the dialectical effect of attributing to the misquoted party commitments that he never held, which the latter needs to rebut. This shifting of the burden of persuasion can be increased by using strategically the conflict between the presumptive interpretation of an utterance and the non-presumptive one, i.e. the different types of pragmatic ambiguity. When an interpreter is confronted with an utterance taken out of its dialogical context, his interpretative process is not guided by the actual context or intention, but rather the most frequent or prototypical dialogical setting or the most typical individual purpose that it could have served to achieve. This presumptive reconstruction can be used to provide a prima facie case that the other party needs to reject. The stronger the interpretative presumptions a speaker needs to rebut, the more effective the misquotation strategy.
The conflict between the systematic and the presumptive process of interpretation can be represented as an argumentative mechanism of reconstruction of the individual intention, which allows one to assess the reasonableness of the interpretative reasoning.
When laws or legal principles mention mental states such as intentions to form a contract, knowledge of risk, or purposely causing a death, what parts of the brain are they speaking about? We argue here that these principles are tacitly directed at our prefrontal executive processes. Our current best theories of consciousness portray it as a workspace in which executive processes operate, but what is important to the law is what is done with the workspace content rather than the content itself. This makes executive processes more important to the law than consciousness, since they are responsible for channeling conscious decision-making into intentions and actions, or inhibiting action. We provide a summary of the current state of our knowledge about executive processes, which consists primarily of information about which portions of the prefrontal lobes perform which executive processes. Then we describe several examples in which legal principles can be understood as tacitly singling out executive processes, including principles regarding defendants’ intentions or plans to commit crimes and their awareness that certain facts are the case, as well as excusatory principles which result in lesser responsibility for those who are juveniles, mentally ill, sleepwalking, hypnotized, or who suffer from psychopathy.
The theory-ladenness of perception argument is not an argument at all. It is two clusters of arguments. The first cluster is empirical. These arguments typically begin with a discussion of one or more of the following psychological phenomena: (a) the conceptual penetrability of the visual system, (b) voluntary perceptual reversal of ambiguous figures, (c) adaptation to distorting lenses, or (d) expectation effects. From this evidence, proponents of theory-ladenness typically conclude that perception is in some sense "laden" with theory. The second cluster attempts to extract deep epistemological lessons from this putative fact. Some philosophers conclude that science is not (in any traditional sense) a rational activity, while others conclude that we must radically reconceptualize what scientific rationality involves. Once we understand the structure of these arguments, much conventional wisdom about the significance of the psychological data turns out to be false.
This paper develops an argument against causal decision theory. I formulate a principle of preference, which I call the Guaranteed Principle. I argue that the preferences of rational agents satisfy the Guaranteed Principle, that the preferences of agents who embody causal decision theory do not, and hence that causal decision theory is false.
The paper considers contemporary models of presumption in terms of their ability to contribute to a working theory of presumption for argumentation. Beginning with the Whatelian model, we consider its contemporary developments and alternatives, as proposed by Sidgwick, Kauffeld, Cronkhite, Rescher, Walton, Freeman, Ullmann-Margalit, and Hansen. Based on these accounts, we present a picture of presumptions characterized by their nature, function, foundation and force. On our account, presumption is a modal status that is attached to a claim and has the effect of shifting, in a dialogue, a burden of proof set at a local level. Presumptions can be analysed and evaluated inferentially as components of rule-based structures. Presumptions are defeasible, and the force of a presumption is a function of its normative foundation. This picture seeks to provide a framework to guide the development of specific theories of presumption.
Argument from analogy is a common and formidable form of reasoning in law and in everyday conversation. Although there is substantial literature on the subject, according to a recent survey (Juthe 2005) there is little fundamental agreement on what form the argument should take, or on how it should be evaluated. The lack of conformity, no doubt, stems from the complexity and multiplicity of forms taken by arguments that fall under the umbrella of analogical reasoning in argumentation, dialectical studies, and law. Modeling arguments with argumentation schemes has proven useful in attempts to refine the analyst’s understanding of not only the logical structures that shape the backbone of the argument itself, but also the logical underpinning of strategies for evaluating it, strategies based on the semantic categories of genus and relevance. By clarifying the distinction between argument from example and argument from analogy, it is possible to advance a useful proposal for the treatment of argument from analogy in law.
This study presents and develops in detail (a new version of) the argumental conception of meaning. The two basic principles of the argumental conception of meaning are: i) To know (implicitly) the sense of a word is to know (implicitly) all the argumentation rules concerning that word; ii) To know the sense of a sentence is to know the syntactic structure of that sentence and to know the senses of the words occurring in it. The sense of a sentence is called the immediate argumental role of that sentence. According to the argumental conception of meaning a theory of meaning for a particular language yields a systematic specification of the understanding of every sentence of the language which consists in a specification of the immediate argumental role of the sentence. The immediate argumental role is a particular aspect of the use of a sentence in arguments. But it is not the whole use in arguments, nor is the whole use in arguments reducible to the immediate argumental role. That is why, by accepting the argumental conception of meaning, we can have epistemological holism without linguistic holism. The argumental conception distinguishes between the understanding and the correctness of a language. Such a distinction makes it possible to account for our understanding of paradoxical languages. Redundancy theory of truth, realistic conceptions of truth or epistemic conceptions of truth are all compatible with an argumental conception of sense. But here it is argued that an epistemic conception of truth is preferable. Acceptance of the argumental conception of meaning and of an epistemic conception of truth leads to a rejection of the idea of analytic truth. The argumental conception is pluralistic with respect to the understandability of different logics, and neutral with respect to their correctness.
One of Jerry Fodor’s many seminal contributions to philosophy of mind was his inner sentence theory of belief and desire. To believe that p is to have a subpersonal inner sentence in one’s “belief-box” that means that p, and to desire that q is to have a subpersonal inner sentence in one’s “desire-box” that means that q. I will distinguish between two accounts of box-inclusion that exhaust the options: liberal and restrictive. I will show that both accounts have the mistaken implication that in certain cases there can be radical but “secret” changes in a subject’s beliefs and desires. I will suggest that the correct moral to draw is that we should instead accept what Eric Schwitzgebel has called a “surface-level” theory of belief and desire.
The concept of law is not a theorist's invention but one that people use every day. Thus one measure of the adequacy of a theory of law is its degree of fidelity, as far as possible, to the concept as it is understood by those who use it. There are important truisms about the law that have an evaluative cast. The theorist has either to say what would make those evaluative truisms true or to defend her choice to dismiss them as false of law or not of the essence of law. Thus the legal theorist must give an account of the truth grounds of the more central evaluative truisms about law. This account is a theory of legitimacy. It will contain framing judgments that state logical relations between descriptive judgments and directly evaluative judgments. Framing judgments are not directly evaluative, nor do they entail directly evaluative judgments, but they are nonetheless moral judgments. Therefore, an adequate theory of law must make (some) moral judgments. This means that an adequate theory of law has to take a stand on certain (but not all) contested issues in political philosophy. Legal theory is thus a branch of political philosophy. Moreover, one cannot be a moral-aim functionalist about legal institutions without compromising one's positivism about legal norms.
In this paper, I ask whether mishpat ivri (Jewish Law) is appropriately conceived as a “legal system”. I review Menachem Elon’s use of a “Sources” Theory of Law (based on Salmond) in his account of Mishpat Ivri; the status of religious law from the viewpoint of jurisprudence itself (Bentham, Austin and Kelsen); then the use of sources (and the approach to “dogmatic error”) by halakhic authorities in discussing the problems of the agunah (“chained wife”), which I suggest points to a theory more radical than the “sources” theory of law, one more akin to the ultimate phase of the thought of Kelsen (the “non-logical” Kelsen) or indeed to some form of Legal Realism (with which that phase of Kelsen’s thought has indeed been compared). I finally juxtapose an account based on internal theological resources (a “Jurisprudence of Revelation”). Downloadable at http://www.biu.ac.il/JS/JSIJ/jsij1.html.
The rhetorical theory of argument, if held as a conclusion of an argument, is self-defeating. The rhetorical theory can be refined, but these refinements either make the theory subject to a second self-defeat problem or tacitly turn it into an epistemic theory of argument.
In this paper I defend what I call the argument from epistemic reasons against the moral error theory. I argue that the moral error theory entails that there are no epistemic reasons for belief and that this is bad news for the moral error theory since, if there are no epistemic reasons for belief, no one knows anything. If no one knows anything, then no one knows that there is thought when they are thinking, and no one knows that they do not know everything. And it could not be the case that we do not know that there is thought when we believe that there is thought and that we do not know that we do not know everything. I address several objections to the claim that the moral error theory entails that there are no epistemic reasons for belief. It might seem that arguing against the error theory on the grounds that it entails that no one knows anything is just providing a Moorean argument against the moral error theory. I show that even if my argument against the error theory is indeed a Moorean one, it avoids Streumer's, McPherson's and Olson's objections to previous Moorean arguments against the error theory and is a more powerful argument against the error theory than Moore's argument against external world skepticism is against external world skepticism.