The Protein Ontology (PRO) provides a formal, logically based classification of specific protein classes, including structured representations of protein isoforms, variants and modified forms. Initially focused on proteins found in human, mouse and Escherichia coli, PRO now includes representations of protein complexes. The PRO Consortium works in concert with the developers of other biomedical ontologies and protein knowledge bases to provide the ability to formally organize and integrate representations of precise protein forms so as to enhance accessibility to results of protein research. PRO (http://pir.georgetown.edu/pro) is part of the Open Biomedical Ontologies (OBO) Foundry.
Biomedical ontologies are emerging as critical tools in genomic and proteomic research, where complex data in disparate resources need to be integrated. A number of ontologies exist that describe the properties that can be attributed to proteins; for example, protein functions are described by the Gene Ontology, while human diseases are described by the Disease Ontology. There is, however, a gap in the current set of ontologies—one that describes the protein entities themselves and their relationships. We have designed a PRotein Ontology (PRO) to facilitate protein annotation and to guide new experiments. The components of PRO extend from the classification of proteins on the basis of evolutionary relationships to the representation of the multiple protein forms of a gene (products generated by genetic variation, alternative splicing, proteolytic cleavage, and other post-translational modifications). PRO will allow the specification of relationships between PRO, the GO and other OBO Foundry ontologies. Here we describe the initial development of PRO, illustrated using human proteins from the TGF-beta signaling pathway.
The Protein Ontology (PRO; http://purl.obolibrary.org/obo/pr) formally defines and describes taxon-specific and taxon-neutral protein-related entities in three major areas: proteins related by evolution; proteins produced from a given gene; and protein-containing complexes. PRO thus serves as a tool for referencing protein entities at any level of specificity. To enhance this ability, and to facilitate the comparison of such entities described in different resources, we developed a standardized representation of proteoforms using UniProtKB as a sequence reference and PSI-MOD as a post-translational modification reference. We illustrate its use in facilitating an alignment between PRO and Reactome protein entities. We also address issues of scalability, describing our first steps into the use of text mining to identify protein-related entities, the large-scale import of proteoform information from expert-curated resources, and our ability to dynamically generate PRO terms. Web views for individual terms are now more informative about closely related terms, including, for example, an interactive multiple sequence alignment. Finally, we describe recent improvements in semantic utility, with PRO now represented in OWL and as a SPARQL endpoint. These developments will further support the anticipated growth of PRO and facilitate the discoverability and aggregation of data relating to protein entities.
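To illustrate what the SPARQL endpoint mentioned above makes possible, here is a minimal sketch of a query issued from Python. The endpoint URL is a placeholder and the triple patterns are generic OBO-style assumptions (rdfs:label, rdfs:subClassOf), not documented details of the PRO service; PR_000000001 is assumed here to be PRO's root 'protein' term.

    # Minimal sketch: querying a PRO SPARQL endpoint from Python.
    # ENDPOINT is hypothetical; substitute the address published by PRO.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "https://example.org/pro/sparql"  # placeholder endpoint URL

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX obo:  <http://purl.obolibrary.org/obo/>
        SELECT ?term ?label WHERE {
            ?term rdfs:subClassOf obo:PR_000000001 ;  # direct subclasses of 'protein' (assumed root id)
                  rdfs:label ?label .
        }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["term"]["value"], row["label"]["value"])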
The Protein Ontology (PRO; http://proconsortium.org) formally defines protein entities and explicitly represents their major forms and interrelations. Protein entities represented in PRO corresponding to single amino acid chains are categorized by level of specificity into family, gene, sequence and modification metaclasses, and there is a separate metaclass for protein complexes. All metaclasses also have organism-specific derivatives. PRO complements established sequence databases such as UniProtKB, and interoperates with other biomedical and biological ontologies such as the Gene Ontology (GO). PRO relates to UniProtKB in that PRO’s organism-specific classes of proteins encoded by a specific gene correspond to entities documented in UniProtKB entries. PRO relates to the GO in that PRO’s representations of organism-specific protein complexes are subclasses of the organism-agnostic protein complex terms in the GO Cellular Component Ontology. The past few years have seen growth of and changes to PRO, as well as new points of access to the data and new applications of PRO in immunology and proteomics. Here we describe some of these developments.
The Protein Ontology (PRO) provides terms for and supports annotation of species-specific protein complexes in an ontology framework that relates them both to their components and to species-independent families of complexes. Comprehensive curation of experimentally known forms and annotations thereof is expected to expose discrepancies, differences, and gaps in our knowledge. We have annotated the early events of innate immune signaling mediated by Toll-Like Receptor 3 and 4 complexes in human, mouse, and chicken. The resulting ontology and annotation data set has allowed us to identify species-specific gaps in experimental data and possible functional differences between species, and to employ inferred structural and functional relationships to suggest plausible resolutions of these discrepancies and gaps.
Representing species-specific proteins and protein complexes in ontologies that are both human- and machine-readable facilitates the retrieval, analysis, and interpretation of genome-scale data sets. Although existing protein-centric informatics resources provide the biomedical research community with well-curated compendia of protein sequence and structure, these resources lack formal ontological representations of the relationships among the proteins themselves. The Protein Ontology (PRO) Consortium is filling this informatics resource gap by developing ontological representations of, and relationships among, proteins and their variants and modified forms. Because proteins are often functional only as members of stable protein complexes, the PRO Consortium, in collaboration with existing protein and pathway databases, has launched a new initiative to implement logical and consistent representation of protein complexes. We describe here how the PRO Consortium is meeting the challenge of representing species-specific protein complexes, how protein complex representation in PRO supports annotation of protein complexes and comparative biology, and how PRO is being integrated into existing community bioinformatics resources. The PRO resource is accessible at http://pir.georgetown.edu/pro/.
Research has indicated that microRNAs (miRNAs), a special class of non-coding RNAs (ncRNAs), can perform important roles in different biological and pathological processes. miRNAs’ functions are realized by regulating their respective target genes (targets). It is thus critical to identify and analyze miRNA-target interactions for a better understanding and delineation of miRNAs’ functions. However, conventional knowledge discovery and acquisition methods have many limitations. Fortunately, semantic technologies based on domain ontologies can render great assistance in this regard. In our previous investigations, we developed a miRNA domain-specific application ontology, the Ontology for MIcroRNA Target (OMIT), to provide the community with common data elements and data exchange standards in miRNA research. This paper describes (1) our continuing efforts in the OMIT ontology development and (2) the application of OMIT to enable a semantic approach to knowledge capture of miRNA-target interactions.
The Protein Ontology (PRO) web resource provides an integrative framework for protein-centric exploration and enables specific and precise annotation of proteins and protein complexes based on PRO. Functionalities include: browsing, searching and retrieving terms; displaying selected terms in OBO or OWL format; and supporting URIs. In addition, the PRO website offers multiple ways for the user to request, submit, or modify terms and/or annotation. We will demonstrate the use of these tools for protein research and annotation.
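For readers who retrieve terms in the OBO format mentioned above, here is a small sketch of consuming such an export programmatically. The download URL follows the standard OBO Library PURL pattern and is an assumption, as is the use of PR:000000001 as the root 'protein' term; the third-party obonet package does the parsing.

    # Minimal sketch: loading the PRO OBO export and inspecting one term.
    # The URL is the conventional OBO PURL pattern (assumed, not quoted from PRO docs).
    import obonet

    url = "http://purl.obolibrary.org/obo/pr.obo"  # assumed standard PURL for the OBO export
    graph = obonet.read_obo(url)  # parses the ontology into a networkx MultiDiGraph

    term_id = "PR:000000001"  # assumed PRO root term ('protein')
    if term_id in graph.nodes:
        data = graph.nodes[term_id]
        print(data.get("name"), "-", data.get("def", "no definition recorded"))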
Identification of non-coding RNAs (ncRNAs) has been significantly enhanced by rapid advances in sequencing technologies. On the other hand, semantic annotation of ncRNA data lags behind their identification, and there is a great need to effectively integrate discoveries from relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a precisely defined ncRNA controlled vocabulary, which can fill a specific and highly needed niche in the unification of ncRNA biology.
Identification of non-coding RNAs (ncRNAs) has significantly improved over the past decade. On the other hand, semantic annotation of ncRNA data faces critical challenges due to the lack of a comprehensive ontology to serve as common data elements and data exchange standards in the field. We developed the Non-Coding RNA Ontology (NCRO) to address this situation. By providing a formally defined ncRNA controlled vocabulary, the NCRO aims to fill a specific and highly needed niche in the semantic annotation of large amounts of ncRNA biological and clinical data.
In recent years, sequencing technologies have enabled the identification of a wide range of non-coding RNAs (ncRNAs). Unfortunately, annotation and integration of ncRNA data have lagged behind their identification. Given the large quantity of information being obtained in this area, there is an urgent need to integrate what is being discovered by a broad range of relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a systematically structured and precisely defined controlled vocabulary for the domain of ncRNAs, thereby facilitating the discovery, curation, analysis, exchange, and reasoning of data about the structures of ncRNAs, their molecular and cellular functions, and their impacts upon phenotypes. The goal of NCRO is to serve as a common resource for annotations of diverse research in a way that will significantly enhance integrative and comparative analysis of the myriad resources currently housed in disparate sources. It is our belief that the NCRO ontology can perform an important role in the comprehensive unification of ncRNA biology and, indeed, fill a critical gap in both the Open Biological and Biomedical Ontologies (OBO) Library and the National Center for Biomedical Ontology (NCBO) BioPortal. Our initial focus is on the ontological representation of small regulatory ncRNAs, which we see as the first step in providing a resource for the annotation of data about all forms of ncRNAs. The NCRO ontology is free and open to all users.
The Protein Ontology (PRO) is designed as a formal and principled Open Biomedical Ontologies (OBO) Foundry ontology for proteins. The components of PRO extend from a classification of proteins on the basis of evolutionary relationships at the homeomorphic level to the representation of the multiple protein forms of a gene, including those resulting from alternative splicing, cleavage and/or posttranslational modifications. Focusing specifically on the TGF-beta signaling proteins, we describe the building, curation, usage and dissemination of PRO. PRO provides a framework for the formal representation of protein classes and protein forms in the OBO Foundry. It is designed to enable data retrieval and integration and machine reasoning at the molecular level of proteins, thereby facilitating cross-species comparisons, pathway analysis, disease modeling and the generation of new hypotheses.
Many epistemological problems can be solved by the objective Bayesian view that there are rationality constraints on priors, that is, inductive probabilities. But attempts to work out these constraints have run into such serious problems that many have rejected objective Bayesianism altogether. I argue that the epistemologist should borrow the metaphysician’s concept of naturalness and assign higher priors to more natural hypotheses.
How do temporal and eternal beliefs interact? I argue that acquiring a temporal belief should have no effect on eternal beliefs for an important range of cases. Thus, I oppose the popular view that new norms of belief change must be introduced for cases where the only change is the passing of time. I defend this position from the purported counter-examples of the Prisoner and Sleeping Beauty. I distinguish two importantly different ways in which temporal beliefs can be acquired and draw some general conclusions about their impact on eternal beliefs.
What if your peers tell you that you should disregard your perceptions? Worse, what if your peers tell you to disregard the testimony of your peers? How should we respond if we get evidence that seems to undermine our epistemic rules? Several philosophers have argued that some epistemic rules are indefeasible. I will argue that all epistemic rules are defeasible. The result is a kind of epistemic particularism, according to which there are no simple rules connecting descriptive and normative facts. I will argue that this type of particularism is more plausible in epistemology than in ethics. The result is an unwieldy and possibly infinitely long epistemic rule — an Uber-rule. I will argue that the Uber-rule applies to all agents, but is still defeasible — one may get misleading evidence against it and rationally lower one’s credence in it.
Should philosophers prefer simpler theories? Huemer (Philos Q 59:216–236, 2009) argues that the reasons to prefer simpler theories in science do not apply in philosophy. I will argue that Huemer is mistaken—the arguments he marshals for preferring simpler theories in science can also be applied in philosophy. Like Huemer, I will focus on the philosophy of mind and the nominalism/Platonism debate. But I want to engage with the broader issue of whether simplicity is relevant to philosophy.
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, plus the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds due to a peculiar feature of her epistemic situation.
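For orientation, the credence-to-betting-odds link that the paper scrutinizes has a standard textbook form (this identity is background, not a claim of the paper): if an agent with credence $p$ in $E$ stakes $S$ to win $W$, the bet's expected value is $\mathbb{E} = p\,W - (1-p)\,S$, so the bet is fair ($\mathbb{E} = 0$) exactly when $S/W = p/(1-p)$. With $p = 1/2$ this yields even odds ($S = W$), and any better payout makes the expected value positive. The paper's claim is that this equivalence between credences and fair betting odds breaks down in epistemic situations like Sleeping Beauty's.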
The fine-tuning argument can be used to support the Many Universe hypothesis. The Inverse Gambler’s Fallacy objection seeks to undercut the support for the Many Universe hypothesis. The objection is that although the evidence that there is life somewhere confirms Many Universes, the specific evidence that there is life in this universe does not. I will argue that the Inverse Gambler’s Fallacy is not committed by the fine-tuning argument. The key issue is the procedure by which the universe with life is selected for observation. Once we take account of the procedure, we find that the support for the Many Universe hypothesis remains.
Sometimes we learn what the world is like, and sometimes we learn where in the world we are. Are there any interesting differences between the two kinds of cases? The main aim of this article is to argue that learning where we are in the world brings into view the same kind of observation selection effects that operate when sampling from a population. I will first explain what observation selection effects are (Section 1) and how they are relevant to learning where we are in the world (Section 2). I will show how measurements in the Many Worlds Interpretation of quantum mechanics can be understood as learning where you are in the world via some observation selection effect (Section 3). I will apply a similar argument to the Sleeping Beauty Problem (Section 4) and explain what I take the significance of the analogy to be (Section 5). Finally, I will defend the Restricted Principle of Indifference on which some of my arguments depend (Section 6).
Cian Dorr (2002) gives an argument for the 1/3 position in Sleeping Beauty. I argue this is based on a mistake about Sleeping Beauty's epistemic position.
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
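To see the shape of a prior-free, confirmation-theoretic analysis (an illustrative reconstruction, not the paper's own derivation): suppose you pick door 1 and the host, who knows where the car is and never opens your door or the car's door, opens door 3. Then $P(\text{opens 3} \mid \text{car at 1}) = 1/2$, $P(\text{opens 3} \mid \text{car at 2}) = 1$, and $P(\text{opens 3} \mid \text{car at 3}) = 0$. Whatever the priors, the evidence confirms 'car at 2' over 'car at 1' by a likelihood ratio of 2, so no lottery assumption about equal priors is needed to show that switching is supported.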
There is a widely shared belief that the higher-level sciences can provide better explanations than lower-level sciences. But there is little agreement about exactly why this is so. It is often suggested that higher-level explanations are better because they omit details. I will argue instead that the preference for higher-level explanations is just a special case of our general preference for informative, logically strong, beliefs. I argue that our preference for informative beliefs entirely accounts for why higher-level explanations are sometimes better—and sometimes worse—than lower-level explanations. The result is a step in the direction of the unity of science hypothesis. 1 Introduction 2 Background: Is Omitting Details an Explanatory Virtue? 2.1 Anti-reductionist arguments 2.2 Reductionist argument 2.3 Logical strength 3 Bases, Links and Logical Strength 4 Functionalism and Fodor’s Argument 5 Two Generalizations 6 Should the Base Really Be Maximally Strong? 7 Anti-reductionist Arguments Regarding the Base 8 Should the Antecedent of the Link Really Be Maximally Weak?
Many who take a dismissive attitude towards metaphysics trace their view back to Carnap’s ‘Empiricism, Semantics and Ontology’. But the reason Carnap takes a dismissive attitude to metaphysics is a matter of controversy. I will argue that no reason is given in ‘Empiricism, Semantics and Ontology’, and this is because his reason for rejecting metaphysical debates was given in ‘Pseudo-Problems in Philosophy’. The argument there assumes verificationism, but I will argue that his argument survives the rejection of verificationism. The root of his argument is the claim that metaphysical statements cannot be justified; the point is epistemic, not semantic. I will argue that this remains a powerful challenge to metaphysics that has yet to be adequately answered.
The independence problems for functionalism stem from the worry that if functional properties are defined in terms of their causes and effects, then such functional properties seem to be too intimately connected to these purported causes and effects. I distinguish three different ways the independence problems can be filled out – in terms of necessary connections, analytic connections and vacuous explanations. I argue that none of these presents serious problems. Instead, they bring out some important and overlooked features of functionalism.
In this paper I argue that whether or not a computer can be built that passes the Turing test is a central question in the philosophy of mind. Then I show that the possibility of building such a computer depends on open questions in the philosophy of computer science: the physical Church-Turing thesis and the extended Church-Turing thesis. I use the link between the issues identified in philosophy of mind and philosophy of computer science to respond to a prominent argument against the possibility of building a machine that passes the Turing test. Finally, I respond to objections against the proposed link between questions in the philosophy of mind and philosophy of computer science.
Colin Howson (1995) offers a counter-example to the rule of conditionalization. I will argue that the counter-example doesn't hit its target. The problem is that Howson mis-describes the total evidence the agent has. In particular, Howson overlooks how the restriction that the agent learn 'E and nothing else' interacts with the de se evidence 'I have learnt E'.
In Bradley, I offered an analysis of Sleeping Beauty and the Everettian interpretation of quantum mechanics (EQM). I argued that one can avoid a kind of easy confirmation of EQM by paying attention to observation selection effects, that halfers are right about Sleeping Beauty, and that thirders cannot avoid easy confirmation for the truth of EQM. Wilson agrees with my analysis of observation selection effects in EQM, but goes on to, first, defend Elga’s thirder argument on Sleeping Beauty and, second, argue that the analogy I draw between Sleeping Beauty and EQM fails. I will argue that neither point succeeds. 1 Introduction 2 Background 3 Wilson’s Argument for ⅓ in Sleeping Beauty 4 Reply: Explaining Away the Crazy 5 Wilson's Argument for the Breakdown of the Analogy 6 Reply: The Irrelevance of Chance 7 Conclusion.
Carrie Jenkins (2005, 2008) has developed a theory of the a priori that she claims solves the problem of how justification regarding our concepts can give us justification regarding the world. She claims that concepts themselves can be justified, and that beliefs formed by examining such concepts can be justified a priori. I object that we can have a priori justified beliefs with unjustified concepts if those beliefs have no existential import. I then argue that only beliefs without existential import can be justified a priori on the widely held conceptual approach. This limits the scope of the a priori and undermines arguments for essentialism.
Jonathan Weisberg (2010) argues that, given that life exists, the fact that the universe is fine-tuned for life does not confirm the design hypothesis. And if the fact that life exists confirms the design hypothesis, fine-tuning is irrelevant. So either way, fine-tuning has nothing to do with it. I will defend a design argument that survives Weisberg’s critique — the fact that life exists supports the design hypothesis, but it only does so given fine-tuning.
How should our beliefs change over time? Much has been written about how our beliefs should change in the light of new evidence. But that is not the question I’m asking. Sometimes our beliefs change without new evidence. I previously believed it was Sunday. I now believe it’s Monday. In this paper I discuss the implications of such beliefs for philosophy of language. I will argue that we need to allow for ‘dynamic’ beliefs, that we need new norms of belief change to model how they function, and that this gives Perry’s (1977) two-tier account the advantage over Lewis’s (1979) theory.
Since the publication of Kenneth Howard’s 2017 article, “The Religion Singularity: A Demographic Crisis Destabilizing and Transforming Institutional Christianity,” there has been an increasing demand to understand the root causes and historical foundations for why institutional Christianity is in a state of de-institutionalization. In response to Howard’s research, a number of authors have sought to provide a contextual explanation for why the religion singularity is currently happening, including studies in epistemology, church history, psychology, anthropology, and church ministry. The purpose of this article is to offer a brief survey and response to these interactions with Howard’s research, identifying the overall implications of each researcher’s perspective for understanding the religion singularity phenomenon. It explores factors relating to denominational switching in Jeshua Branch’s research, social memory in John Lingelbach’s essay, religious politics in Kevin Seybold’s survey, scientific reductionism in Jack David Eller’s position paper, and institutional moral failure in Brian McLaren’s article.
Sober and Elgin defend the claim that there are a priori causal laws in biology. Lange and Rosenberg take issue with this on Humean grounds, among others. I will argue that Sober and Elgin don’t go far enough – there are a priori causal laws in many sciences. Furthermore, I will argue that this thesis is compatible with a Humean metaphysics and an empiricist epistemology.
The main argument given for relevant alternatives theories of knowledge has been that they answer scepticism about the external world. I will argue that relevant alternatives theories also solve two other problems that have been much discussed in recent years: (a) the bootstrapping problem and (b) the apparent conflict between semantic externalism and armchair self-knowledge. Furthermore, I will argue that scepticism and Mooreanism can be embedded within the relevant alternatives framework.
How should we update our beliefs when we learn new evidence? Bayesian confirmation theory provides a widely accepted and well understood answer – we should conditionalize. But this theory has a problem with self-locating beliefs, beliefs that tell you where you are in the world, as opposed to what the world is like. To see the problem, consider your current belief that it is January. You might be absolutely, 100%, sure that it is January. But you will soon believe it is February. This type of belief change cannot be modelled by conditionalization. We need some new principles of belief change for this kind of case, which I call belief mutation. In part 1, I defend the Relevance-Limiting Thesis, which says that a change in a purely self-locating belief of the kind that results in belief mutation should not shift your degree of belief in a non-self-locating belief, which can only change by conditionalization. My method is to give detailed analyses of the puzzles which threaten this thesis: Duplication, Sleeping Beauty, and The Prisoner. This also requires giving my own theory of observation selection effects. In part 2, I argue that when self-locating evidence is learnt from a position of uncertainty, it should be conditionalized on in the normal way. I defend this position by applying it to various cases where such evidence is found. I defend the Halfer position in Sleeping Beauty, and I defend the Doomsday Argument and the Fine-Tuning Argument.
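For reference, the update rule at issue is standard conditionalization (textbook statement, not specific to this work): on learning evidence $E$, the new credence in a hypothesis $H$ should be $P_{\text{new}}(H) = P_{\text{old}}(H \mid E) = P_{\text{old}}(H \wedge E)/P_{\text{old}}(E)$. This makes the January/February example vivid: if you were certain it was January, your prior in 'it is now February' was 0, and conditionalization can never raise a credence of 0, so the shift to believing it is February must be modelled some other way.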
Decision theory is concerned with how agents should act when the consequences of their actions are uncertain. The central principle of contemporary decision theory is that the rational choice is the choice that maximizes subjective expected utility. This entry explains what this means, and discusses the philosophical motivations and consequences of the theory. The entry will consider some of the main problems and paradoxes that decision theory faces, and some of the responses that can be given. Finally, the entry will briefly consider how decision theory applies to choices involving more than one agent.
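The maximization principle mentioned here has a familiar formal statement (standard textbook form, not a quotation from the entry): for an act $A$, mutually exclusive states $S_1, \dots, S_n$ with subjective probabilities $P(S_i)$, and outcomes with utilities $U(O_{A,i})$, the expected utility is $EU(A) = \sum_i P(S_i)\,U(O_{A,i})$, and the rational act is one with maximal $EU$. As a toy illustration with made-up numbers: if $P(\text{rain}) = 0.3$, carrying an umbrella with a guaranteed utility of 5 beats leaving it behind when that yields 10 if dry but $-20$ if rained on, since $0.3(-20) + 0.7(10) = 1 < 5$.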
Self-locating beliefs cause a problem for conditionalization. Miriam Schoenfield offers a solution: that on learning E, agents should update on the fact that they learned E. However, Schoenfield is not explicit about whether the fact that they learned E is self-locating. I will argue that if the fact that they learned E is self-locating then the original problem has not been addressed, and if the fact that they learned E is not self-locating then the theory generates implausible verdicts which Schoenfield explicitly rejects.
There is a divide in epistemology between those who think that, for any hypothesis and set of total evidence, there is a unique rational credence in that hypothesis, and those who think that there can be many rational credences. Schultheis offers a novel and potentially devastating objection to Permissivism, on the grounds that Permissivism permits dominated credences. I will argue that Permissivists can plausibly block Schultheis' argument. The issue turns on getting clear about whether we should be certain whether our credences are rational.
What does logic tell us about how we ought to reason? If P entails Q, and you believe P, should you believe Q? There seem to be cases where you should not, for example, if you have evidence against Q, or the inference is not worth making. So we need a theory telling us when an inference ought to be made, and when not. I will argue that we should embed the issue in an independently motivated contextualist semantics for ‘ought’. With the contextualist machinery in hand, we can give a theory of when inferences should be made and when not.
The notion that there existed a distinction between so-called “Alexandrian” and “Antiochene” exegesis in the ancient church has become a common assumption among theologians. The typical belief is that Alexandria promoted an allegorical reading of Scripture, whereas Antioch endorsed a literal approach. However, church historians have long since recognized that this distinction is neither wholly accurate nor helpful to understanding ancient Christian hermeneutics. Indeed, neither school of interpretation sanctioned the practice of just one exegetical method. Rather, both Alexandrian and Antiochene theologians were expedient hermeneuts, meaning they utilized whichever exegetical practice (allegory, typology, literal, historical) would supply them with their desired theology or interpretive conclusion. The difference between Alexandria and Antioch was not exegetical; it was theological. In other words, it was their respective theological paradigms that dictated their exegetical practices, allowing them to utilize whichever hermeneutical method was most expedient for their theological purposes. Ultimately, neither Alexandrian nor Antiochene exegetes possessed a greater respect for the biblical text over the other, nor did they adhere to modern-day historical-grammatical hermeneutics as theologians would like to believe.
Published in Darren Tofts, Annemarie Jonson, and Alessio Cavallaro (eds), _Prefiguring Cyberculture: an intellectual history_ (MIT Press and Power Publications, December 2002).
Paolo Barbò da Soncino, known as “Soncinas” (the Soncinate), was an Italian Dominican, Thomist philosopher and theologian. His life and work fall within the ambit of Italian Renaissance Thomism of the fifteenth century, between Bologna and Milan; he died in Cremona in 1495. His principal work, the exposition of Aristotle’s Metaphysics (Acutissimae quaestiones metaphysicales, first edition Venice 1498), proceeds from a particular synthesis of the Arabic commentator Averroes, Thomas Aquinas, Hervaeus Natalis (d. 1323), and John Capreolus (d. 1444). Soncinas’ work and positions were frequently discussed from the sixteenth to the seventeenth century. This study offers the first scientific biography of the author, a description and analysis of the method, sources and doctrine of Soncinas, and a critical Latin edition of book IV of his Acutissimae Quaestiones Metaphysicales (the exposition of Aristotle’s Metaphysics).