In this paper we present a framework for the dynamic and automatic generation of novel knowledge obtained through a process of commonsense reasoning based on typicality-based concept combination. We exploit a recently introduced extension of a Description Logic of typicality able to combine prototypical descriptions of concepts in order to generate new prototypical concepts and deal with problems like the PET FISH (Osherson and Smith, 1981; Lieto & Pozzato, 2019). Intuitively, in the context of our application of this logic, the overall pipeline of our system works as follows: given a goal expressed as a set of properties, if the knowledge base does not contain a concept able to fulfil all these properties, then our system looks for two concepts to recombine in order to extend the original knowledge base and satisfy the goal.
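The pipeline described in this abstract can be sketched in a few lines. This is a hypothetical illustration only: concepts are reduced to plain sets of properties, and the TCL machinery for typicality-based combination is abstracted into a naive union (the function and concept names are my own, not the authors').

```python
# Minimal sketch of the goal-driven pipeline: find a concept fulfilling
# the goal, or recombine two concepts to extend the knowledge base.
from itertools import combinations

def find_concept(kb, goal):
    """Return a concept whose properties fulfil every goal property, if any."""
    for name, props in kb.items():
        if goal <= props:
            return name
    return None

def combine_for_goal(kb, goal):
    """If no single concept meets the goal, look for two concepts whose
    recombined properties satisfy it, extending the knowledge base."""
    single = find_concept(kb, goal)
    if single is not None:
        return single
    for (n1, p1), (n2, p2) in combinations(kb.items(), 2):
        if goal <= (p1 | p2):
            new_name = f"{n1}-{n2}"
            kb[new_name] = p1 | p2  # naive union; TCL uses a typicality-based heuristic
            return new_name
    return None

kb = {
    "fish": {"swims", "has_scales"},
    "pet":  {"domestic", "beloved"},
}
goal = {"swims", "domestic"}
print(combine_for_goal(kb, goal))  # a new combined concept, "fish-pet"
```

The PET FISH case is exactly of this shape: neither prototype alone covers the goal, so a combined concept is invented and added to the knowledge base.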
In this contribution we describe a computational creativity system able to automatically generate new concepts using a non-monotonic Description Logic that integrates three main ingredients: a Description Logic of typicality, a probabilistic extension based on the distributed semantics known as DISPONTE, and a cognitively inspired heuristic for combining multiple concepts. One of the system's main applications concerns the field of computational creativity and, more specifically, its use as a creativity-support system in the media industry. In particular, the system is able to generate new stories (starting from a narrative representation of pre-existing stories), generate new characters (e.g., the new "villain" of a TV series or cartoon) and, more generally, can be used to propose new narrative solutions and formats to be explored within the creative industry.
It is usually accepted that one of the properties of classical logic is monotonicity, the property that ensures that the validity of an implication is not affected by the addition of new premises. In this piece, I will argue that this is a category mistake, since the notion of monotonicity is primarily epistemic in character and cannot be meaningfully attributed to a logical system. The other point I intend to defend is that classical logic is compatible with non-monotonicity understood as reasoners' ability to abandon a previous inference in light of new information.
A non-embracing consequence relation is one such that no set of wffs closed under it is equal to the set of all wffs. I prove that these relations have no deductive power if they are also extensive and monotonic.
Formal ontologies are nowadays widely considered a standard tool for knowledge representation and reasoning in the Semantic Web. In this context, they are expected to play an important role in helping automated processes to access information. Namely, they are expected to provide a formal structure able to explicate the relationships between different concepts/terms, thus allowing intelligent agents to interpret the semantics of web resources correctly, thereby improving the performance of search technologies. Here we take into account a problem regarding Knowledge Representation in general, and ontology-based representations in particular; namely, the fact that knowledge modeling seems to be constrained between conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In particular, most common-sense concepts seem not to be captured by the stringent semantics expressed by such formalisms as, for example, Description Logics (the formalisms on which the ontology languages have been built). The aim of this work is to analyse this problem, suggesting a possible solution suitable for formal ontologies and semantic web representations. The questions guiding this research, in fact, have been: is it possible to provide a formal representational framework which, for the same concept, combines both the classical modelling view (accounting for compositional information) and defeasible, prototypical knowledge? Is it possible to propose a modelling architecture able to provide different types of reasoning (e.g. classical deductive reasoning for the compositional component and non-monotonic reasoning for the prototypical one)?
We suggest a possible answer to these questions by proposing a modelling framework able to represent, within the semantic web languages, a multilevel representation of conceptual information, integrating both classical and non-classical (typicality-based) information. Within this framework we hypothesise, at least in principle, the coexistence of multiple reasoning processes involving the different levels of representation.
We present two defeasible logics of norm-propositions (statements about norms) that (i) consistently allow for the possibility of normative gaps and normative conflicts, and (ii) map each premise set to a sufficiently rich consequence set. In order to meet (i), we define the logic LNP, a conflict- and gap-tolerant logic of norm-propositions capable of formalizing both normative conflicts and normative gaps within the object language. Next, we strengthen LNP within the adaptive logic framework for non-monotonic reasoning in order to meet (ii). This results in the adaptive logics LNPr and LNPm, which interpret a given set of premises in such a way that normative conflicts and normative gaps are avoided 'whenever possible'. LNPr and LNPm are equipped with a preferential semantics and a dynamic proof theory.
We generalize, by a progressive procedure, the notions of conjunction and disjunction of two conditional events to the case of n conditional events. In our coherence-based approach, conjunctions and disjunctions are suitable conditional random quantities. We define the notion of negation, by verifying De Morgan's Laws. We also show that conjunction and disjunction satisfy the associative and commutative properties, and a monotonicity property. Then, we give some results on coherence of prevision assessments for some families of compounded conditionals; in particular we examine the Fréchet-Hoeffding bounds. Moreover, we study the reverse probabilistic inference from the conjunction C_{n+1} of n+1 conditional events to the family {C_n, E_{n+1}|H_{n+1}}. We consider the relation with the notion of quasi-conjunction and we examine in detail the coherence of the prevision assessments related to the conjunction of three conditional events. Based on conjunction, we also give a characterization of p-consistency and of p-entailment, with applications to several inference rules in probabilistic non-monotonic reasoning. Finally, we examine some non-p-valid inference rules; then, we illustrate by an example two methods which allow one to suitably modify non-p-valid inference rules in order to obtain inferences that are p-valid.
Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and non-statisticians (confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy logical requirements. However, Izbicki and Esteves show that the only tests that are in accordance with three logical requirements (monotonicity, invertibility and consonance) are trivial tests based on point estimation, which generally lack statistical optimality. As a possible solution to this dilemma, this paper adapts the above logical requirements to agnostic tests, in which one can accept, reject or remain agnostic with respect to a given hypothesis. Each of the logical requirements is characterized in terms of a Bayesian decision-theoretic perspective. Contrary to the results obtained for regular hypothesis tests, there exist agnostic tests that satisfy all logical requirements and also perform well statistically. In particular, agnostic tests that fulfill all logical requirements are characterized as region-estimator-based tests. Examples of such tests are provided.
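The region-estimator idea behind such agnostic tests can be illustrated with a toy sketch. This is not the paper's construction in detail: hypotheses and the region estimate are modelled here as simple real intervals, and the function names are assumptions of mine.

```python
# Region-estimator-based agnostic test: accept when the region estimate
# lies inside the hypothesis set, reject when it is disjoint from it,
# and remain agnostic otherwise.
ACCEPT, REJECT, AGNOSTIC = "accept", "reject", "agnostic"

def intersects(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def contains(outer, inner):
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def agnostic_test(region, hypothesis):
    if contains(hypothesis, region):
        return ACCEPT
    if not intersects(hypothesis, region):
        return REJECT
    return AGNOSTIC

region = (0.4, 0.6)   # e.g. a credible interval for a parameter
A = (0.45, 0.5)       # hypothesis A
B = (0.3, 0.7)        # hypothesis B, implied by A (A is a subset of B)

print(agnostic_test(region, A))  # agnostic
print(agnostic_test(region, B))  # accept
```

Note how monotonicity holds by construction: since every region containing B's complement also meets A's, the weaker hypothesis B can never fare worse than the stronger A, which is the logical coherence the abstract asks for.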
Logic has been a—disputed—ingredient in the emergence and development of the now very large field known as knowledge representation and reasoning. In this book (in progress), I select some central topics in this highly fruitful, albeit controversial, association (e.g., non-monotonic reasoning, implicit belief, logical omniscience, closed world assumption), identifying their sources and analyzing/explaining their elaboration in highly influential published work.
Logical pluralism is the view that there is more than one correct logic. Most logical pluralists think that logic is normative in the sense that you make a mistake if you accept the premisses of a valid argument but reject its conclusion. Some authors have argued that this combination is self-undermining: Suppose that L1 and L2 are correct logics that coincide except for the argument from Γ to φ, which is valid in L1 but invalid in L2. If you accept all sentences in Γ, then, by normativity, you make a mistake if you reject φ. In order to avoid mistakes, you should accept φ or suspend judgment about φ. Both options are problematic for pluralism. Can pluralists avoid this worry by rejecting the normativity of logic? I argue that they cannot. All else being equal, the argument goes through even if logic is not normative.
In the late summer of 1998, the authors, a cognitive scientist and a logician, started talking about the relevance of modern mathematical logic to the study of human reasoning, and we have been talking ever since. This book is an interim report of that conversation. It argues that results such as those on the Wason selection task, purportedly showing the irrelevance of formal logic to actual human reasoning, have been widely misinterpreted, mainly because the picture of logic current in psychology and cognitive science is completely mistaken. We aim to give the reader a more accurate picture of mathematical logic and, in doing so, hope to show that logic, properly conceived, is still a very helpful tool in cognitive science. The main thrust of the book is therefore constructive. We give a number of examples in which logical theorizing helps in understanding and modeling observed behavior in reasoning tasks, deviations of that behavior in a psychiatric disorder (autism), and even the roots of that behavior in the evolution of the brain.
I present a possible worlds semantics for a hyperintensional belief revision operator, which reduces the logical idealization of cognitive agents affecting similar operators in doxastic and epistemic logics, as well as in standard AGM belief revision theory. Belief states are not closed under classical logical consequence; revising by inconsistent information does not perforce lead to trivialization; and revision can be subject to 'framing effects': logically or necessarily equivalent contents can lead to different revisions. Such results are obtained without resorting to non-classical logics, or to non-normal or impossible worlds semantics. The framework combines, instead, a standard semantics for propositional S5 with a simple mereology of contents.
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We further argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater's (2007, 2008) model of conditional inference—contrary to the authors' theoretical position—has to refer also to a frequency-based probability function. (shrink)
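The original Tweety case mentioned above can be captured in a minimal default-reasoning sketch (illustrative only; it does not touch the probabilistic machinery the paper is actually about):

```python
# Default reasoning on the Tweety case: "birds fly" is a default that
# the more specific fact "Tweety is a penguin" overrides.
def can_fly(facts):
    """Apply the default 'birds fly' unless an exception is known."""
    if "penguin" in facts:
        return False   # the exception blocks the default
    if "bird" in facts:
        return True    # default conclusion
    return None        # no opinion either way

print(can_fly({"bird"}))             # True: by default, birds fly
print(can_fly({"bird", "penguin"}))  # False: adding information retracts the conclusion
```

The second call is the non-monotonicity at issue: enlarging the premise set withdraws a previously licensed inference, which is exactly what monotonic logics cannot do.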
A number of authors have objected to the application of non-classical logic to problems in philosophy on the basis that these non-classical logics are usually characterised by a classical metatheory. In many cases the problem amounts to more than just a discrepancy; the very phenomena responsible for non-classicality occur in the field of semantics as much as they do elsewhere. The phenomena of higher order vagueness and the revenge liar are just two such examples. The aim of this paper is to show that a large class of non-classical logics are strong enough to formulate their own model theory in a corresponding non-classical set theory. Specifically I show that adequate definitions of validity can be given for the propositional calculus in such a way that the metatheory proves, in the specified logic, that every theorem of the propositional fragment of that logic is validated. It is shown that in some cases it may fail to be a classical matter whether a given sentence is valid or not. One surprising conclusion for non-classical accounts of vagueness is drawn: there can be no axiomatic, and therefore precise, system which is determinately sound and complete.
In a previous work we introduced the algorithm SQEMA for computing first-order equivalents and proving canonicity of modal formulae, and thus established a very general correspondence and canonical completeness result. SQEMA is based on transformation rules, the most important of which employs a modal version of a result by Ackermann that enables elimination of an existentially quantified predicate variable in a formula, provided a certain negative polarity condition on that variable is satisfied. In this paper we develop several extensions of SQEMA where that syntactic condition is replaced by a semantic one, viz. downward monotonicity. For the first, and most general, extension SSQEMA we prove correctness for a large class of modal formulae containing an extension of the Sahlqvist formulae, defined by replacing polarity with monotonicity. By employing a special modal version of Lyndon's monotonicity theorem and imposing additional requirements on the Ackermann rule we obtain restricted versions of SSQEMA which guarantee canonicity, too.
This is a first tentative examination of the possibility of reinstating reduction as a valid candidate for presenting relations between mental and physical properties. Classical Nagelian reduction is undoubtedly contaminated in many ways, but here I investigate the possibility of adapting to problems concerning mental properties an alternative definition for theory reduction in philosophy of science. The definition I offer is formulated with the aid of non-monotonic logic, which I suspect might be a very interesting realm for testing notions concerning localized mental-physical reduction. The reason for this is that non-monotonic reasoning by definition is about appeals made not only to explicit observations, but also to an implicit selection of background knowledge containing heuristic information. The flexibility of this definition and the fact that it is not absolute, i.e. that the relation of reduction may be retracted or allowed to shift without fuss, add at least an interesting alternative factor to current materialist debates. South African Journal of Philosophy Vol. 25(2) 2006: 102-112.
We study imagination as reality-oriented mental simulation (ROMS): the activity of simulating nonactual scenarios in one's mind, to investigate what would happen if they were realized. Three connected questions concerning ROMS are: What is the logic, if there is one, of such an activity? How can we gain new knowledge via it? What is voluntary in it and what is not? We address them by building a list of core features of imagination as ROMS, drawing on research in cognitive psychology and the philosophy of mind. We then provide a logic of imagination as ROMS which models such features, combining techniques from epistemic logic, action logic, and subject matter semantics. Our logic comprises a modal propositional language with non-monotonic imagination operators, a formal semantics, and an axiomatization.
A model-theoretic realist account of science places linguistic systems and their corresponding non-linguistic structures at different stages or different levels of abstraction of the scientific process. Apart from the obvious problem of underdetermination of theories by data, philosophers of science are also faced with the inverse (and very real) problem of overdetermination of theories by their empirical models, which is what this article will focus on. I acknowledge the contingency of the factors determining the nature – and choice – of a certain model at a certain time, but in my terms, this is a matter about which we can talk and whose structure we can formalise. In this article a mechanism for tracing "empirical choices" and their particularized observational-theoretical entanglements will be offered in the form of Yoav Shoham's version of non-monotonic logic. Such an analysis of the structure of scientific theories may clarify the motivations underlying choices in favor of certain empirical models (and not others) in a way that shows that "disentangling" theoretical and observation terms is more deeply model-specific than theory-specific. This kind of analysis offers a method for getting an articulable grip on the overdetermination of theories by their models – implied by empirical equivalence – which Kuipers' structuralist analysis of the structure of theories does not offer.
We present a formal semantics for epistemic logic, capturing the notion of knowability relative to information (KRI). Like Dretske, we move from the platitude that what an agent can know depends on her (empirical) information. We treat operators of the form K_AB ('B is knowable on the basis of information A') as variably strict quantifiers over worlds with a topic- or aboutness-preservation constraint. Variable strictness models the non-monotonicity of knowledge acquisition while allowing knowledge to be intrinsically stable. Aboutness-preservation models the topic-sensitivity of information, allowing us to invalidate controversial forms of epistemic closure while validating less controversial ones. Thus, unlike the standard modal framework for epistemic logic, KRI accommodates plausible approaches to the Kripke-Harman dogmatism paradox, which bear on non-monotonicity, or on topic-sensitivity. KRI also strikes a better balance between agent idealization and a non-trivial logic of knowledge ascriptions.
Invited lecture at the SRM ACM Student Chapter, India, on Cognitive Heuristics for Commonsense Thinking and Reasoning in next-generation Artificial Intelligence. The lecture offers a historical and technical overview of strategies for commonsense reasoning in AI.
Mathematicians often speak of conjectures, yet unproved, as probable or well-confirmed by evidence. The Riemann Hypothesis, for example, is widely believed to be almost certainly true. There seems no initial reason to distinguish such probability from the same notion in empirical science. Yet it is hard to see how there could be probabilistic relations between the necessary truths of pure mathematics. The existence of such logical relations, short of certainty, is defended using the theory of logical probability (or objective Bayesianism or non-deductive logic), and some detailed examples of its use in mathematics are surveyed. Examples of inductive reasoning in experimental mathematics are given and it is argued that the problem of induction is best appreciated in the mathematical case.
Multialgebras have been much studied in mathematics and in computer science. In 2016 Carnielli and Coniglio introduced a class of multialgebras called swap structures, as a semantic framework for dealing with several Logics of Formal Inconsistency that cannot be semantically characterized by a single finite matrix. In particular, these LFIs are not algebraizable by the standard tools of abstract algebraic logic. In this paper, the first steps towards a theory of non-deterministic algebraization of logics by swap structures are given. Specifically, a formal study of swap structures for LFIs is developed, by adapting concepts of universal algebra to multialgebras in a suitable way. A decomposition theorem similar to Birkhoff's representation theorem is obtained for each class of swap structures. Moreover, when applied to the 3-valued algebraizable logics J3 and Ciore, their classes of algebraic models are retrieved, and the swap structures semantics become twist structures semantics. This fact, together with the existence of a functor from the category of Boolean algebras to the category of swap structures for each LFI, suggests that swap structures can be seen as non-deterministic twist structures. This opens new avenues for dealing with non-algebraizable logics by the more general methodology of multialgebraic semantics.
How is it possible that, beginning from the negation of rational thought, one comes to produce knowledge? This problem, besides its intrinsic interest, acquires great relevance when knowledge representation rests, for example, on databases and automatic reasoning. Many treatments have been attempted, such as the non-monotonic logics and logics that intend to formalize an idea of reasoning by default; these attempts are incomplete and prone to failure. A possible solution would be to formulate a logic of the irrational, which offers a model for reasoning that allows one both to support contradictions and to produce knowledge from such situations. An intuition underlying the foundation of such a logic comes from da Costa's paraconsistent logics, though with a different deduction theory and an entirely distinct semantics, called here "the semantics of possible translations". The present proposal, as I will argue, addresses this whole question from a logical point of view that is fully satisfactory, practically applicable and philosophically acceptable.
Studies of several languages, including Swahili [swa], suggest that realis (actual, realizable) and irrealis (unlikely, counterfactual) meanings vary along a scale (e.g., 0.0–1.0). T-values (True, False) and P-values (probability) account for this pattern. However, logic cannot describe or explain (a) epistemic stances toward beliefs, (b) deontic and dynamic stances toward states-of-being and actions, and (c) context-sensitivity in conditional interpretations. (a)–(b) are deictic properties (positions, distance) of 'embodied' Frames of Reference (FoRs)—space-time loci in which agents perceive and from which they contextually act (Rohrer 2007a, b). I argue that the embodied FoR describes and explains (a)–(c) better than T-values and P-values alone. In this cognitive-functional-descriptive study, I represent these embodied FoRs using Unified Modeling Language (UML) mental spaces in analyzing Swahili conditional constructions to show how necessary, sufficient, and contributing conditions obtain on the embodied FoR networks level.
This work contributes to the theory of judgement aggregation by discussing a number of significant non-classical logics. After adapting the standard framework of judgement aggregation to cope with non-classical logics, we discuss in particular results for the case of Intuitionistic Logic, the Lambek calculus, Linear Logic and Relevant Logics. The motivation for studying judgement aggregation in non-classical logics is that they offer a number of modelling choices to represent agents' reasoning in aggregation problems. By studying judgement aggregation in logics that are weaker than classical logic, we investigate whether some well-known impossibility results, that were tailored for classical logic, still apply to those weak systems.
In this paper, I discuss the analysis of logic in the pragmatic approach recently proposed by Brandom. I consider different consequence relations, formalized by classical, intuitionistic and linear logic, and I will argue that the formal theory developed by Brandom, even if it provides powerful foundational insights on the relationship between logic and discursive practices, cannot account for important reasoning patterns represented by non-monotonic or resource-sensitive inferences. Then, I will present an incompatibility semantics in the framework of linear logic which allows us to refine Brandom's concept of defeasible inference and to account for those non-monotonic and relevant inferences that are expressible in linear logic. Moreover, I will suggest an interpretation of discursive practices based on an abstract notion of agreement on what counts as a reason, which is deeply connected with linear logic semantics.
It is quite plausible to say that 'you may read or write' implies that 'you may read' and 'you may write' (though possibly not both at once). This so-called free choice principle is well known in deontic logic. Sadly, despite being so intuitive and seemingly innocent, this principle causes a lot of worries. The paper briefly but critically examines leading accounts of free choice permission present in the literature. Subsequently, the paper suggests accepting the free choice principle, but only as a default (or defeasible) rule, issuing it a ticket-of-leave, granting it some freedom until it commits an undesired inference.
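The "ticket-of-leave" treatment of free choice can be sketched as a defeasible closure operation. This is a hypothetical toy encoding of my own (the paper works in a deontic logic, not in code): permitted formulas are plain strings, a disjunctive permission is a tuple, and the "undesired inference" is modelled as an explicit block set.

```python
# Free choice permission as a defeasible rule: from Perm(A or B) infer
# Perm(A) and Perm(B) by default, retracting any conclusion that is blocked.
def free_choice(permissions, blocked=frozenset()):
    """Close a set of permitted formulas under the defeasible
    free choice rule, skipping any blocked conclusion."""
    derived = set(permissions)
    for p in permissions:
        if isinstance(p, tuple) and p[0] == "or":
            for disjunct in p[1:]:
                if disjunct not in blocked:
                    derived.add(disjunct)
    return derived

# "You may read or write" licenses both permissions by default...
print(free_choice({("or", "read", "write")}))
# ...but the defeasible step to "you may write" can be withdrawn:
print(free_choice({("or", "read", "write")}, blocked={"write"}))
```

The point of the default reading is visible in the second call: the rule keeps its intuitive force in the unproblematic case while its ticket-of-leave is revoked exactly where it would commit an undesired inference.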
[Abstract] Suppose that the Big Bang was the first singularity in the history of the cosmos. Then it would be plausible to presume that the availability of strong general intelligence should mark the second singularity for the natural human race. The human race needs to be prepared to make sure that, if a singularity robot becomes a person, the robotic person is a blessing for humankind rather than a curse. Toward this end I scrutinize the implications of the hypothesis that the singularity robot is a member of human society. I ask how the robot is equipped to satisfy ontological criteria such as accountability, consciousness and identity, by demonstrating the possibility that it has epistemological capacities like conceptual-role-semantic understanding and non-monotonic inference, and by probing whether it can behave in the way human moral visions expect it to. [Table of contents] 1. Opening: Singularity robots are coming 1) Singularity robots of strong general intelligence 2) A singularity robot is a member of the human community 2. Ontological interpretation of the singularity robot 1) Responsibility: thinking, understanding, belief 2) Consciousness: three characteristics – zombie, enjoyment, sympathy 3) Identity: I, body, autonomy, unity 3. Epistemological prospects of the singularity robot 1) Semantics of general intelligence: conceptual role semantics 2) Logic for general intelligence: non-monotonic logic 4. Moral horizon of the singularity robot 1) When a singularity robot becomes a robot person 2) Humanity independence: the singularity robot is a user of human languages 5. Concluding: preemptive humanities.
In this paper, by suggesting a formal representation of science based on recent advances in logic-based Artificial Intelligence (AI), we show how three serious concerns around the realisation of traditional scientific realism (the theory/observation distinction, over-determination of theories by data, and theory revision) can be overcome such that traditional realism is given a new guise as 'naturalised'. We contend that such issues can be dealt with (in the context of scientific realism) by developing a formal representation of science based on the application of the following tools from Knowledge Representation: the family of Description Logics, an enrichment of classical logics via defeasible statements, and an application of the preferential interpretation of the approach to Belief Revision.
In previous work, we studied four well-known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
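The experimental setup described here, reasoning environments characterized by the entropy of a randomly drawn distribution, can be sketched as follows. This is an illustrative reconstruction, not the authors' actual sampling procedure.

```python
# Draw a random probability distribution and measure its Shannon entropy,
# so that reasoning performance can be charted against entropy levels.
import math
import random

def random_distribution(n, rng):
    """Sample n non-negative weights and normalize them to sum to 1."""
    weights = [rng.random() for _ in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

def entropy(dist):
    """Shannon entropy in bits; maximal (log2 n) for the uniform distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

rng = random.Random(0)  # seeded for reproducibility
dist = random_distribution(8, rng)
print(round(entropy(dist), 3))  # at most log2(8) = 3 bits
```

Binning many such draws by entropy yields the "environments of various entropy levels" against which the four inference systems' accuracy and informativeness can be compared.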
This article presents modal versions of resource-conscious logics. We concentrate on extensions of variants of linear logic with one minimal non-normal modality. In earlier work, where we investigated agency in multi-agent systems, we showed that the results scale up to logics with multiple non-minimal modalities. Here, we start with the language of propositional intuitionistic linear logic without the additive disjunction, to which we add a modality. We provide an interpretation of this language on a class of Kripke resource models extended with a neighbourhood function: modal Kripke resource models. We propose a Hilbert-style axiomatisation and a Gentzen-style sequent calculus. We show that the proof theories are sound and complete with respect to the class of modal Kripke resource models. We show that the sequent calculus admits cut elimination and that proof-search is in PSPACE. We then show how to extend the results when non-commutative connectives are added to the language. Finally, we put the l…
Priest has provided a simple tableau calculus for Chellas's conditional logic Ck. We provide rules which, when added to Priest's system, result in tableau calculi for Chellas's CK and Lewis's VC. Completeness of these tableaux, however, relies on the cut rule.
A recent trend in Husserl scholarship takes the Logische Untersuchungen (LU) as advancing an inconsistent and confused view of the non-conceptual content of perceptual experience. Against this, I argue that there is no inconsistency about non-conceptualism in LU. Rather, LU presents a hybrid view of the conceptual nature of perceptual experience, which can easily be misread as inconsistent, since it combines a conceptualist view of perceptual content (or matter) with a non-conceptualist view of perceptual acts. I show how this hybrid view is operative in Husserl's analyses of essentially occasional expressions and categorial intuition. And I argue that it can also be deployed in relation to Husserl's analysis of the constitution of perceptual fullness, which allows it to avoid an objection raised by Walter Hopp—that the combination of Husserl's analysis of perceptual fullness with conceptualism about perceptual content generates a vicious regress.
Dynamic conceptual reframing represents a crucial mechanism employed by humans, and partially by other animal species, to generate novel knowledge used to solve complex goals. In this talk, I will present a reasoning framework for knowledge invention and creative problem solving exploiting TCL: a non-monotonic extension of a Description Logic (DL) of typicality able to combine prototypical (commonsense) descriptions of concepts in a human-like fashion [1]. The proposed approach has been tested in the task of goal-driven concept invention [2,3] and has additionally been applied within the context of serendipity-based recommendation systems [4]. I will present the obtained results, the lessons learned, and the road ahead of this research path.
This paper outlines an account of conditionals, the evidential account, which rests on the idea that a conditional is true just in case its antecedent supports its consequent. As we will show, the evidential account exhibits some distinctive logical features that deserve careful consideration. On the one hand, it departs from the material reading of 'if then' exactly in the way we would like it to depart from that reading. On the other, it significantly differs from the non-material accounts which hinge on the Ramsey Test, advocated by Adams, Stalnaker, Lewis, and others.
Inventing novel knowledge to solve problems is a crucial creative mechanism employed by humans to extend their range of action. In this talk, I will show how commonsense reasoning plays a crucial role in this respect. In particular, I will present a cognitively inspired reasoning framework for knowledge invention and creative problem solving exploiting TCL: a non-monotonic extension of a Description Logic (DL) of typicality able to combine prototypical (commonsense) descriptions of concepts in a human-like fashion. The proposed approach has been tested in the task of goal-driven concept invention and has additionally been applied within the context of serendipity-based recommendation systems. I will present the obtained results, the lessons learned, and the road ahead of this research path.
Does it make sense to employ modern logical tools for ancient philosophy? This well-known debate has been relaunched by the indologist Piotr Balcerowicz, who questions those who want to look at the Eastern school of Jainism through Western glasses. While plainly acknowledging the legitimacy of Balcerowicz's mistrust, the present paper proposes a formal reconstruction of one of the well-known parts of Jaina philosophy, namely the saptabhangi, i.e. the theory of sevenfold predication. Before arguing for this formalist approach to philosophy, let us return to the reasons to be reluctant about it.
As a general theory of reasoning—and as a general theory of what holds true under every possible circumstance—logic is supposed to be ontologically neutral. It ought to have nothing to do with questions concerning what there is, or whether there is anything at all. It is for this reason that traditional Aristotelian logic, with its tacit existential presuppositions, was eventually deemed inadequate as a canon of pure logic. And it is for this reason that modern quantification theory, too, with its residue of existentially loaded theorems and patterns of inference, has been claimed to suffer from a defect of logical purity. The law of non-contradiction rules out certain circumstances as impossible—circumstances in which a statement is both true and false, or perhaps circumstances where something both is and is not the case. Is this to be regarded as a further ontological bias?
A picture of the world as chiefly one of discrete objects, distributed in space and time, has sometimes seemed compelling. It is, however, one of the main targets of Henry Laycock's book, for it is seriously incomplete. The picture, he argues, leaves no space for "stuff" like air and water. With discrete objects, we may always ask "how many?", but with stuff the question has to be "how much?" Laycock's fascinating exploration also addresses key logical and linguistic questions about the way we categorize the many and the much.
This book has three main parts. The first, longer part is a reprint of the author's Deviant Logic, which initially appeared as a book by itself in 1974. The second and third parts include reprints of five papers originally published between 1973 and 1980. Three of them focus on the nature and justification of deductive reasoning, which are also a major concern of Deviant Logic. The other two are on fuzzy logic, and make up for a major omission of Deviant Logic.
In this paper I am concerned with an analysis, using negative or neutral free logic, of negative existential sentences that contain only proper names. I will compare different versions of neutral free logic with the standard system of negative free logic (Burge, Sainsbury) and aim to defend my own version of neutral free logic, which I have labeled non-standard neutral free logic.
In this paper I will develop a view about the semantics of imperatives, which I term Modal Noncognitivism, on which imperatives might be said to have truth conditions (dispositionally, anyway), but on which it does not make sense to see them as expressing propositions (hence does not make sense to ascribe to them truth or falsity). This view stands against "Cognitivist" accounts of the semantics of imperatives, on which imperatives are claimed to express propositions, which are then enlisted in explanations of the relevant logico-semantic phenomena. It also stands against the major competitors to Cognitivist accounts—all of which are non-truth-conditional and, as a result, fail to provide satisfying explanations of the fundamental semantic characteristics of imperatives (or so I argue). The view of imperatives I defend here improves on various treatments of imperatives on the market in giving an empirically and theoretically adequate account of their semantics and logic. It yields explanations of a wide range of semantic and logical phenomena about imperatives—explanations that are, I argue, at least as satisfying as the sorts of explanations of semantic and logical phenomena familiar from truth-conditional semantics. But it accomplishes this while defending the notion—which is, I argue, substantially correct—that imperatives could not have propositions, or truth conditions, as their meanings.
Unconscious logical inference seems to rely on the syntactic structures of mental representations (Quilty-Dunn & Mandelbaum 2018). Other transitions, such as transitions using iconic representations and associative transitions, are harder to assimilate to syntax-based theories. Here we tackle these difficulties head on in the interest of a fuller taxonomy of mental transitions. Along the way we discuss how icons can be compositional without having constituent structure, and expand and defend the "symmetry condition" on Associationism (the idea that associative links and transitions are perfectly symmetric). In the end, we show how a BIT ("bare inferential transition") theory can cohabitate with these other non-inferential mental transitions.
Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth-conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the 'logicality of language', accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counter-examples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is 'blind' to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non-classical—indeed quite exotic—kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis-à-vis its open class terms and employs a deductive system that is basically classical.
The Knower paradox purports to place surprising a priori limitations on what we can know. According to orthodoxy, it shows that we need to abandon one of three plausible and widely-held ideas: that knowledge is factive, that we can know that knowledge is factive, and that we can use logical/mathematical reasoning to extend our knowledge via very weak single-premise closure principles. I argue that classical logic, not any of these epistemic principles, is the culprit. I develop a consistent theory validating all these principles by combining Hartry Field's theory of truth with a modal enrichment developed for a different purpose by Michael Caie. The only casualty is classical logic: the theory avoids paradox by using a weaker-than-classical K3 logic. I then assess the philosophical merits of this approach. I argue that, unlike the traditional semantic paradoxes involving extensional notions like truth, its plausibility depends on the way in which sentences are referred to--whether in natural languages via direct sentential reference, or in mathematical theories via indirect sentential reference by Gödel coding. In particular, I argue that from the perspective of natural language, my non-classical treatment of knowledge as a predicate is plausible, while from the perspective of mathematical theories, its plausibility depends on unresolved questions about the limits of our idealized deductive capacities.
A logic is called 'paraconsistent' if it rejects the rule called 'ex contradictione quodlibet', according to which any conclusion follows from inconsistent premises. While logicians have proposed many technically developed paraconsistent logical systems, and contemporary philosophers like Graham Priest have advanced the view that some contradictions can be true and advocated a paraconsistent logic to deal with them, until recent times these systems have been little understood by philosophers. This book presents a comprehensive overview of paraconsistent logical systems to change this situation. The book includes almost every major author currently working in the field. The papers are at the cutting edge of the literature: some discuss current debates and others present important new ideas. The editors have avoided papers about the technical details of paraconsistent logic, concentrating instead on works that discuss more 'big picture' ideas. Different treatments of paradoxes take centre stage in many of the papers, but there are also several papers on how to interpret paraconsistent logic and some on how it can be applied to the philosophy of mathematics, the philosophy of language, and metaphysics.
This article offers a novel solution to the problem of material constitution: by including non-concrete objects among the parts of material objects, we can avoid having a statue and its constituent piece of clay composed of all the same proper parts. Non-concrete objects—objects that aren't concrete, but possibly are—have been used in defense of the claim that everything necessarily exists. But the account offered shows that non-concreta are independently useful in other domains as well. The resulting view falls under a 'nonmaterial partist' class of views that includes, in particular, Laurie Paul's and Kathrin Koslicki's constitution views, on which material objects have properties or structures as parts, respectively. The article gives reasons for preferring the non-concretist solution over these other non-material partist views and defends it against objections.