Results for 'algorithm SQEMA'

84 found
  1. Algorithmic Correspondence and Completeness in Modal Logic. V. Recursive Extensions of SQEMA. Willem Conradie, Valentin Goranko & Dimitar Vakarelov - 2010 - Journal of Applied Logic 8 (4):319-333.
    The previously introduced algorithm SQEMA computes first-order frame equivalents for modal formulae and also proves their canonicity. Here we extend SQEMA with an additional rule based on a recursive version of Ackermann's lemma, which enables the algorithm to compute local frame equivalents of modal formulae in the extension of first-order logic with monadic least fixed points. This computation operates by transforming input formulae into locally frame-equivalent ones in the pure fragment of the hybrid mu-calculus. In (...)
    2 citations
  2. Algorithmic Correspondence and Completeness in Modal Logic. IV. Semantic Extensions of SQEMA. Willem Conradie & Valentin Goranko - 2008 - Journal of Applied Non-Classical Logics 18 (2-3):175-211.
    In a previous work we introduced the algorithm SQEMA for computing first-order equivalents and proving canonicity of modal formulae, and thus established a very general correspondence and canonical completeness result. SQEMA is based on transformation rules, the most important of which employs a modal version of a result by Ackermann that enables elimination of an existentially quantified predicate variable in a formula, provided a certain negative polarity condition on that variable is satisfied. In this paper we develop (...)
    2 citations
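    For orientation, the elimination rule discussed in entries 1 and 2 rests on Ackermann's lemma; the classical (non-modal) statement is paraphrased below in LaTeX, not quoted from either paper.
      % Ackermann's lemma (classical form), paraphrased. Assume A contains no
      % occurrence of the predicate variable P, and P occurs only negatively
      % in B(P). Then the second-order quantifier can be eliminated by
      % substituting A for P:
      \exists P \,\bigl[\, \forall \bar{x}\, \bigl( A(\bar{x}) \rightarrow P(\bar{x}) \bigr) \wedge B(P) \,\bigr]
        \;\equiv\; B\bigl[\, P(\bar{x}) := A(\bar{x}) \,\bigr]
      % SQEMA applies a modal analogue of this equivalence; the recursive
      % version in entry 1 (roughly) lets P recur in A, so the substituted
      % formula becomes a least fixed point, which is why the output lives in
      % first-order logic extended with monadic least fixed points.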
  3. Probable General Intelligence Algorithm. Anton Venglovskiy - manuscript
    Contains a description of a generalized and constructive formal model for the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
  4. ID + MD = OD Towards a Fundamental Algorithm for Consciousness. Thomas McGrath - manuscript
    The algorithm described in this short paper is a simplified formal representation of consciousness that may be applied in the fields of psychology and artificial intelligence.
  5. Algorithm and Parameters: Solving the Generality Problem for Reliabilism. Jack C. Lyons - 2019 - Philosophical Review 128 (4):463-509.
    The paper offers a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is given by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings of (...)
    3 citations
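    A minimal sketch (added here for illustration, not taken from Lyons' paper) of the type-individuation scheme described in entry 5's abstract: a process token's type is fixed jointly by its algorithm and the values of its causally relevant parameters. All names and example parameters below are hypothetical.
      # Illustrative sketch only: two process tokens share a type exactly when
      # they run the same algorithm with the same causally relevant parameters.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class ProcessType:
          algorithm: str     # complete algorithmic characterization of the token
          parameters: tuple  # sorted (name, value) pairs of causally relevant variables

      def type_of(algorithm: str, parameters: dict) -> ProcessType:
          return ProcessType(algorithm, tuple(sorted(parameters.items())))

      # The same recognition routine run under different attention levels
      # comes out as two distinct process types under this scheme.
      t1 = type_of("face-recognition", {"attention": "high", "lighting": "dim"})
      t2 = type_of("face-recognition", {"attention": "low", "lighting": "dim"})
      assert t1 != t2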
  6. Taste and the Algorithm. Emanuele Arielli - 2018 - Studi di Estetica 12 (3):77-97.
    Today, a substantial part of our everyday interaction with art and aesthetic artefacts occurs through digital media, and our preferences and choices are systematically tracked and analyzed by algorithms in ways that are far from transparent. Our consumption is constantly documented, and we are then fed tailored information in return. We are therefore witnessing the emergence of a complex interrelation between our aesthetic choices, their digital elaboration, the production of content, and the dynamics of creative processes. All are (...)
  7. Living by Algorithm: Smart Surveillance and the Society of Control. Sean Erwin - 2015 - Humanities and Technology Review 34:28-69.
    Foucault's disciplinary society and his notion of panopticism are often invoked in discussions regarding electronic surveillance. Against this use of Foucault, I argue that contemporary trends in surveillance technology abstract human bodies from their territorial settings, separating them into a series of discrete flows through what Deleuze terms the surveillant assemblage. The surveillant assemblage and its product, the socially sorted body, aim less at molding, punishing and controlling the body and more at triggering events of in- and exclusion from (...)
    2 citations
  8. Research on Context-Awareness Mobile SNS Recommendation Algorithm. Zhijun Zhang & Hong Liu - 2015 - Pattern Recognition and Artificial Intelligence 28.
    Although patterns of human activity show a large degree of freedom, they exhibit structural regularities shaped by geographic and social constraints. Addressing various problems of personalized recommendation in mobile networks, a social network recommendation algorithm is proposed that draws on a variety of context-aware information and combines it with a series of social network analysis methods. Based on geographical location and temporal information, potential social relations among users are mined to find the set of users most similar to the target user, (...)
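    Entry 8's abstract describes the approach only at a high level; the sketch below is a hypothetical illustration of one ingredient it mentions, mining candidate social ties from shared geographic and temporal context, here via a plain Jaccard similarity over (place, hour) visits. It is not the paper's algorithm, and all names and data are invented.
      # Hypothetical illustration: rank users by the overlap of their check-ins.
      def jaccard(a: set, b: set) -> float:
          return len(a & b) / len(a | b) if (a | b) else 0.0

      def most_similar_users(checkins: dict, target: str, k: int = 3) -> list:
          # checkins maps user -> set of (place, hour) visits
          scores = {u: jaccard(checkins[target], visits)
                    for u, visits in checkins.items() if u != target}
          return sorted(scores, key=scores.get, reverse=True)[:k]

      checkins = {
          "alice": {("cafe", 9), ("gym", 18), ("library", 14)},
          "bob":   {("cafe", 9), ("library", 14)},
          "carol": {("stadium", 20)},
      }
      print(most_similar_users(checkins, "alice"))  # ['bob', 'carol']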
  9. Contextual Vocabulary Acquisition: From Algorithm to Curriculum. Michael W. Kibby & William J. Rapaport - 2014 - In Adriano Palma (ed.), Castañeda and His Guises: Essays on the Work of Hector-Neri Castañeda. De Gruyter. pp. 107-150.
    Deliberate contextual vocabulary acquisition (CVA) is a reader’s ability to figure out a (not the) meaning for an unknown word from its “context”, without external sources of help such as dictionaries or people. The appropriate context for such CVA is the “belief-revised integration” of the reader’s prior knowledge with the reader’s “internalization” of the text. We discuss unwarranted assumptions behind some classic objections to CVA, and present and defend a computational theory of CVA that we have adapted to a new (...)
    3 citations
  10. DELTA: A Unifying Categorization Algorithm Integrating Prototypes, Exemplars and Theory-Theory Representations and Mechanisms. Antonio Lieto - 2018 - In Proceedings of AISC 2018, Extended Abstracts. Pavia, Italy: pp. 5-7.
  11. ITS for Teaching DES Information Security Algorithm. Abed Elhaleem A. Alnajar & Monnes Hanjory - 2017 - International Journal of Advanced Research and Development 2 (1):69-73.
    Lately, more attention has been paid to technological development in intelligent tutoring systems, and the field has become an interesting topic for many researchers. In this paper, we present an intelligent tutoring system for teaching the DES information security algorithm, called DES-Tutor. DES-Tutor targets students enrolled in the cryptography course in the Information Technology department at Al-Azhar University in Gaza. Through DES-Tutor the student is able to study the course material and try the exercises of each lesson. An evaluation (...)
  12. Does the Solar System Compute the Laws of Motion? Douglas Ian Campbell & Yi Yang - forthcoming - Synthese:1-18.
    The counterfactual account of physical computation is simple and, for the most part, very attractive. However, it is usually thought to trivialize the notion of physical computation insofar as it implies ‘limited pancomputationalism’, this being the doctrine that every deterministic physical system computes some function. Should we bite the bullet and accept limited pancomputationalism, or reject the counterfactual account as untenable? Jack Copeland would have us do neither of the above. He attempts to thread a path between the two horns (...)
  13. Algorithms and Arguments: The Foundational Role of the ATAI-Question. Paola Cantu' & Italo Testa - 2011 - In Frans H. van Eemeren, Bart Garssen, David Godden & Gordon Mitchell (eds.), Proceedings of the Seventh International Conference of the International Society for the Study of Argumentation (pp. 192-203). Rozenberg / Sic Sat.
    Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough it was during this period that Artificial Intelligence was developed, which defended the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an (...)
  14. Logically Possible Machines. Eric Steinhart - 2002 - Minds and Machines 12 (2):259-280.
    I use modal logic and transfinite set-theory to define metaphysical foundations for a general theory of computation. A possible universe is a certain kind of situation; a situation is a set of facts. An algorithm is a certain kind of inductively defined property. A machine is a series of situations that instantiates an algorithm in a certain way. There are finite as well as transfinite algorithms and machines of any degree of complexity (e.g., Turing and super-Turing machines and (...)
    5 citations
  15. Global Optimization Studies on the 1-D Phase Problem. Jim Marsh, Martin Zwick & Byrne Lovell - 1996 - International Journal of General Systems 25 (1):47-59.
    The Genetic Algorithm (GA) and Simulated Annealing (SA), two techniques for global optimization, were applied to a reduced (simplified) form of the phase problem (RPP) in computational crystallography. Results were compared with those of "enhanced pair flipping" (EPF), a more elaborate problem-specific algorithm incorporating local and global searches. Not surprisingly, EPF did better than the GA or SA approaches, but the existence of GA and SA techniques more advanced than those used in this study suggests that these techniques (...)
    1 citation
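    For readers unfamiliar with the techniques compared in entry 15, here is a minimal, generic simulated-annealing loop. It is not the authors' crystallographic setup; the toy one-dimensional objective merely stands in for the reduced phase problem's cost function.
      # Generic simulated annealing: accept worse moves with a probability that
      # shrinks as the "temperature" cools, so the search can escape local minima.
      import math, random

      def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
          x, best, t = x0, x0, t0
          for _ in range(steps):
              y = neighbor(x)
              delta = cost(y) - cost(x)
              if delta <= 0 or random.random() < math.exp(-delta / t):
                  x = y
                  if cost(x) < cost(best):
                      best = x
              t *= cooling
          return best

      # Toy usage: minimize a bumpy one-dimensional function.
      cost = lambda x: (x - 3) ** 2 + math.sin(5 * x)
      neighbor = lambda x: x + random.uniform(-0.5, 0.5)
      print(round(simulated_annealing(cost, neighbor, x0=0.0), 3))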
  16. The Physical Impossibility of Machine Computations on Sufficiently Large Integers Inspires an Open Problem That Concerns Abstract Computable Sets X⊆N and Cannot Be Formalized in the Set Theory ZFC as It Refers to Our Current Knowledge on X. Apoloniusz Tyszka & Sławomir Kurpaska - manuscript
    Edmund Landau's conjecture states that the set P(n^2+1) of primes of the form n^2+1 is infinite. Let β=(((24!)!)!)!, and let Φ denote the implication: card(P(n^2+1))<ω ⇒ P(n^2+1)⊆(-∞,β]. We heuristically justify the statement Φ without invoking Landau's conjecture. The set X = {k∈N: (β<k) ⇒ (β,k)∩P(n^2+1) ≠ ∅} satisfies conditions (1)--(4). (1) There are a large number of elements of X and it is conjectured that X is infinite. (2) No known algorithm decides the finiteness/infiniteness of X . (3) There (...)
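    A small, self-contained illustration of the set P(n^2+1) discussed in entry 16: primes of the form n^2+1. Landau's conjecture that this set is infinite remains open; the snippet below merely enumerates small members by trial division.
      # Enumerate small primes of the form n^2 + 1.
      def is_prime(m: int) -> bool:
          if m < 2:
              return False
          d = 2
          while d * d <= m:
              if m % d == 0:
                  return False
              d += 1
          return True

      print([n * n + 1 for n in range(1, 30) if is_prime(n * n + 1)])
      # -> [2, 5, 17, 37, 101, 197, 257, 401, 577, 677]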
  17. Semantic Search in Scholarly Discourse. Martin Schulz - manuscript
    The basics of the Dictionary of Arguments: how a semantic search algorithm for unedited sources might look.
  18. Proof-of-Loss. Mirelo Deugh Ausgam Valis - unknown
    An alternative consensus algorithm to both proof-of-work and proof-of-stake, proof-of-loss addresses all their deficiencies, including the lack of an organic block size limit, the risks of mining centralization, and the "nothing at stake" problem.
  19. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies From the French Administration. Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with by the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  20. Agency Laundering and Information Technologies. Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number (...)
  21. Intractability and the Use of Heuristics in Psychological Explanations. Iris van Rooij, Cory Wright & Todd Wareham - 2012 - Synthese 187 (2):471-487.
    Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
    10 citations
  22. The Incoherence of Heuristically Explaining Coherence. Iris van Rooij & Cory Wright - 2006 - In Ron Sun (ed.), Proceedings of the 28th Annual Conference of the Cognitive Science Society. Mahwah, NJ: pp. 2622.
    Advancement in cognitive science depends, in part, on doing some occasional ‘theoretical housekeeping’. We highlight some conceptual confusions lurking in an important attempt at explaining the human capacity for rational or coherent thought: Thagard & Verbeurgt’s computational-level model of humans’ capacity for making reasonable and truth-conducive abductive inferences (1998; Thagard, 2000). Thagard & Verbeurgt’s model assumes that humans make such inferences by computing a coherence function (f_coh), which takes as input representation networks and their pair-wise constraints and gives as output (...)
    2 citations
  23. Consciousness for the Ouroboros Model. Knud Thomsen - 2011 - International Journal of Machine Consciousness 3 (01):163-175.
    The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at a time of part of a schema biases the whole structure and, in particular, missing features, thus triggering expectations. An iterative recursive monitor process termed "consumption analysis" then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual (...)
    1 citation
  24. Do Goedel's Incompleteness Theorems Set Absolute Limits on the Ability of the Brain to Express and Communicate Mental Concepts Verifiably? Bhupinder Singh Anand - 2004 - Neuroquantology 2:60-100.
    Classical interpretations of Goedel's formal reasoning, and of his conclusions, implicitly imply that mathematical languages are essentially incomplete, in the sense that the truth of some arithmetical propositions of any formal mathematical language, under any interpretation, is both non-algorithmic and essentially unverifiable. However, a language of general scientific discourse, which intends to mathematically express, and unambiguously communicate, intuitive concepts that correspond to scientific investigations, cannot allow its mathematical propositions to be interpreted ambiguously. Such a language must, therefore, define mathematical truth (...)
    1 citation
  25. Toward an Ethics of AI Assistants: An Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and (...)
    2 citations
  26. Transparency in Complex Computational Systems. Kathleen A. Creel - forthcoming - Philosophy of Science.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have suggested treating opaque systems instrumentally, but computer scientists developing strategies for increasing transparency are correct in finding this unsatisfying. Instead, I propose an analysis of transparency as having three forms: transparency of the algorithm, the realization of the algorithm in code, and the way that code is run on particular hardware and (...)
    1 citation
  27. Aesthetic Dissonance. On Behavior, Values, and Experience Through New Media. Adrian Mróz - 2019 - Hybris 47:1-21.
    Aesthetics is thought of not only as a theory of art or beauty, but as also encompassing sensibility, experience, judgment, and relationships. This paper is a study of Bernard Stiegler's notion of Aesthetic War (stasis) and symbolic misery. Symbolic violence ensues from a loss of individuation and participation in the creation of symbols. In the struggle of market values against spirit values, human life and consciousness within neoliberal hyperindustrial society have become calculable, which prevents people from creating affective and meaningful (...)
  28. Upright Posture and the Meaning of Meronymy: A Synthesis of Metaphoric and Analytic Accounts. Jamin Pelkey - 2018 - Cognitive Semiotics 11 (1):1-18.
    Cross-linguistic strategies for mapping lexical and spatial relations from body partonym systems to external object meronymies (as in English ‘table leg’, ‘mountain face’) have attracted substantial research and debate over the past three decades. Due to the systematic mappings, lexical productivity and geometric complexities of body-based meronymies found in many Mesoamerican languages, the region has become focal for these discussions, prominently including contrastive accounts of the phenomenon in Zapotec and Tzeltal, leading researchers to question whether such systems should be explained (...)
    1 citation
  29. Введение в структурно-онтологическую методологию: анализ предметной области социализации личности (Introduction to Structural-Ontological Methodology: Analysis of the Subject Matter Field of Personality Socialization). Vitalii Shymko - 2020 - SSRN Electronic Journal.
    This document is a collection of "marginal notes" revealing the composition and content of the method of structural-ontological analysis. The method is developed for a systemic description of the subject matter field of the phenomena under study. It includes a special procedure for constructing structural-ontological matrices and an algorithm for describing them. The interdisciplinary orientation of the method is demonstrated through an analysis of the process of personality socialization.
  30. Cognitive Computation Sans Representation. Paul Schweizer - 2017 - In Thomas Powers (ed.), Philosophy and Computing: Essays in epistemology, philosophy of mind, logic, and ethics. Cham, Switzerland: Springer. pp. 65-84.
    The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is (...)
    3 citations
  31. Algorithmic Paranoia: The Temporal Governmentality of Predictive Policing. Bonnie Sheehey - 2019 - Ethics and Information Technology 21 (1):49-58.
    In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After first (...)
    1 citation
  32. Parsing and Presupposition in the Calculation of Local Contexts. Matthew Mandelkern & Jacopo Romoli - forthcoming - Semantics and Pragmatics.
    In this paper, we use antecedent-final conditionals to formulate two problems for parsing-based theories of presupposition projection and triviality of the kind given in Schlenker 2009. We show that, when it comes to antecedent-final conditionals, parsing-based theories predict filtering of presuppositions where there is in fact projection, and triviality judgments for sentences which are in fact felicitous. More concretely, these theories predict that presuppositions triggered in the antecedent of antecedent-final conditionals will be filtered (i.e. will not project) if the negation (...)
    7 citations
  33. Heterogeneous Proxytypes Extended: Integrating Theory-Like Representations and Mechanisms with Prototypes and Exemplars. Antonio Lieto - 2018 - In Advances in Intelligent Systems and Computing: Proceedings of BICA. Springer.
    The paper introduces an extension of the proposal according to which conceptual representations in cognitive agents should be intended as heterogeneous proxytypes. The main contribution of this paper is that it details how to reconcile, under a heterogeneous representational perspective, different theories of typicality about conceptual representation and reasoning. In particular, it provides a novel theoretical hypothesis - as well as a novel categorization algorithm called DELTA - showing how to integrate the representational and reasoning assumptions of the (...)
    2 citations
  34. Mining Arguments From 19th Century Philosophical Texts Using Topic Based Modelling. John Lawrence, Chris Reed, Simon McAlister, Andrew Ravenscroft, Colin Allen & David Bourget - 2014 - In Proceedings of the First Workshop on Argumentation Mining. Baltimore, USA: pp. 79-87.
    In this paper we look at the manual analysis of arguments and how this compares to the current state of automatic argument analysis. These considerations are used to develop a new approach combining a machine learning algorithm to extract propositions from text, with a topic model to determine argument structure. The results of this method are compared to a manual analysis.
    1 citation
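    A rough sketch of the second ingredient entry 34's abstract mentions, a topic model run over already-extracted propositions; it is not the authors' pipeline. It assumes scikit-learn is installed, and the example propositions are invented.
      # Group extracted propositions by dominant LDA topic as a crude proxy
      # for shared argument structure.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      propositions = [
          "pleasure is the only intrinsic good",
          "moral rules derive their authority from utility",
          "liberty of the individual limits the power of the state",
          "state interference is justified only to prevent harm to others",
      ]

      counts = CountVectorizer(stop_words="english").fit_transform(propositions)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
      # Propositions sharing a dominant topic are candidate members of the
      # same argument in this crude approximation.
      print(lda.transform(counts).argmax(axis=1))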
  35. Incommensurability and Theory Change. Howard Sankey - 2011 - In Steven Hales (ed.), A Companion to Relativism. Oxford: Wiley-Blackwell. pp. 456-474.
    The paper explores the relativistic implications of the thesis of incommensurability. A semantic form of incommensurability due to semantic variation between theories is distinguished from a methodological form due to variation in methodological standards between theories. Two responses to the thesis of semantic incommensurability are dealt with: the first challenges the idea of untranslatability to which semantic incommensurability gives rise; the second holds that relations of referential continuity eliminate semantic incommensurability. It is then argued that methodological incommensurability poses little risk (...)
    2 citations
  36. Judgement and Rational Theory-Choice. Howard Sankey - 1994 - Methodology and Science 27 (3):167-182.
    It is argued that in the absence of an algorithm of theory-choice, a role must be played by deliberative judgement in the process of choosing rationally between theories.
    1 citation
  37. Sense and the Computation of Reference. Reinhard Muskens - 2004 - Linguistics and Philosophy 28 (4):473-504.
    The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, foreshadowed in Frege's dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation and the meaning postulates governing the behaviour (...)
    13 citations
  38. HCI Model with Learning Mechanism for Cooperative Design in Pervasive Computing Environment. Hong Liu, Bin Hu & Philip Moore - 2015 - Journal of Internet Technology 16.
    This paper presents a human-computer interaction model with a three-layer learning mechanism in a pervasive environment. We begin with a discussion of a number of important issues related to human-computer interaction, followed by a description of the architecture of a multi-agent cooperative design system for a pervasive computing environment. We present our proposed three-layer HCI model and introduce the group formation algorithm, which is predicated on a dynamic sharing niche technology. Finally, we explore the cooperative reinforcement learning and (...)
  39. Diagnostic Criteria for Temporomandibular Disorders (DC/TMD) for Clinical and Research Applications. Eric Schiffman, Richard Ohrbach, Edmond Truelove, John Look, Gary Anderson, Werner Ceusters, Barry Smith & Others - 2014 - Journal of Oral and Facial Pain and Headache 28 (1):6-27.
    Aims: The Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms were demonstrated to be reliable but below target sensitivity and specificity. Empirical data supported Axis I algorithm revisions that were valid. Axis II instruments were shown to be both reliable and valid. An international consensus workshop was convened to obtain recommendations on and finalization of new Axis I diagnostic algorithms and new Axis II instruments. Methods: A comprehensive search of published TMD diagnostic literature was followed by review (...)
  40. The Problem of Evaluating Automated Large-Scale Evidence Aggregators. Nicolas Wüthrich & Katie Steele - 2019 - Synthese (8):3083-3102.
    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the (...)
  41. Russell and the Newman Problem Revisited. Marc Champagne - 2012 - Analysis and Metaphysics 11:65-74.
    In his 1927 Analysis of Matter and elsewhere, Russell argued that we can successfully infer the structure of the external world from that of our explanatory schemes. While nothing guarantees that the intrinsic qualities of experiences are shared by their objects, he held that the relations tying together those relata perforce mirror relations that actually obtain (these being expressible in the formal idiom of the Principia Mathematica). This claim was subsequently criticized by the Cambridge mathematician Max Newman as true but (...)
    1 citation
  42. Compositionality and Complexity in Multiple Negation. Francis Corblin - 1995 - Logic Journal of the IGPL 3 (2-3):449-471.
    This paper considers negative triggers and the interpretation of simple sentences containing more than one occurrence of those items. In the most typical interpretations those sentences have more negative expressions than negations in their semantic representation. It is first shown that this compositionality problem remains in current approaches. A principled algorithm for deriving the representation of sentences with multiple negative quantifiers in a DRT framework is then introduced. The algorithm is under the control of an on-line check-in, (...)
    1 citation
  43. Minimum Intelligent Signal Test as an Alternative to the Turing Test. Paweł Łupkowski & Patrycja Jurowska - 2019 - Diametros 59:35-47.
    The aim of this paper is to present and discuss the issue of the adequacy of the Minimum Intelligent Signal Test (MIST) as an alternative to the Turing Test. MIST has been proposed by Chris McKinstry as a better alternative to Turing’s original idea. Two of the main claims about MIST are that (1) MIST questions exploit commonsense knowledge and as a result are expected to be easy to answer for human beings and difficult for computer programs; and that (2) (...)
  44. Bioethics: Reincarnation of Natural Philosophy in Modern Science. Valentin Teodorovich Cheshko, Valery I. Glazko & Yulia V. Kosova - 2017 - Biogeosystem Technique 4 (2):111-121.
    The theory of the evolution of complex, human-comprising systems, and the algorithm for constructing it, are the synthesis of evolutionary epistemology, philosophical anthropology and a concrete scientific empirical basis in modern (transdisciplinary) science. «Trans-disciplinary» in this context is interpreted as a completely new epistemological situation, which is fraught with the initiation of a civilizational crisis. The philosophy and ideology of technogenic civilization is based on the possibility of an unambiguous demarcation of public value and descriptive scientific discourses (1), and the object (...)
  45. Metanormative Principles and Norm Governed Social Interaction. Berislav Žarnić & Gabriela Bašić - 2014 - Revus 22:105-120.
    Critical examination of Alchourrón and Bulygin’s set-theoretic definition of normative system shows that deductive closure is not an inevitable property. Following von Wright’s conjecture that axioms of standard deontic logic describe perfection-properties of a norm-set, a translation algorithm from the modal to the set-theoretic language is introduced. The translations reveal that the plausibility of metanormative principles rests on different grounds. Using a methodological approach that distinguishes the actor roles in a norm governed interaction, it has been shown that metanormative (...)
    3 citations
  46. On the Axiomatic Systems of Syntactically-Categorial Languages. Urszula Wybraniec-Skardowska - 1984 - Bulletin of the Section of Logic 13 (4):241-249.
    The paper contains an overview of the most important results presented in the author's monograph "Teorie Językow Syntaktycznie-Kategorialnych" ("Theories of Syntactically-Categorial Languages", in Polish), PWN, Warszawa-Wrocław 1985. In the monograph four axiomatic systems of syntactically-categorial languages are presented. The first two refer to languages of expression-tokens; the others also take into consideration languages of expression-types. Generally, syntactically-categorial languages are languages built in accordance with the principles of the theory of syntactic categories introduced by S. Leśniewski [1929, 1930]; they are connected (...)
    1 citation
  47. Bridging Conceptual Gaps: The Kolmogorov-Sinai Entropy. Massimiliano Badino - forthcoming - Isonomía. Revista de Teoría y Filosofía Del Derecho.
    The Kolmogorov-Sinai entropy is a fairly exotic mathematical concept which has recently aroused some interest on the philosophers' part. The most salient trait of this concept is its working as a junction between such diverse fields as statistical mechanics, information theory and algorithm theory. In this paper I argue that, in order to understand this very special feature of the Kolmogorov-Sinai entropy, it is essential to reconstruct its genealogy. Somewhat surprisingly, this story takes us as far back as the beginning (...)
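    As background for entry 47, the textbook definition of the Kolmogorov-Sinai entropy of a measure-preserving transformation T on a probability space (X, μ) is sketched below in LaTeX; this is a standard formulation, not a quotation from the paper.
      % Entropy of a finite measurable partition \xi = \{A_1, \dots, A_k\}:
      H_\mu(\xi) = -\sum_{A \in \xi} \mu(A) \log \mu(A)
      % Entropy of T relative to \xi (the limit exists by subadditivity):
      h_\mu(T, \xi) = \lim_{n \to \infty} \frac{1}{n} \, H_\mu\Bigl( \bigvee_{i=0}^{n-1} T^{-i} \xi \Bigr)
      % The Kolmogorov-Sinai entropy is the supremum over all finite partitions:
      h_\mu(T) = \sup_{\xi} \, h_\mu(T, \xi)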
  48. The Prepared Mind: The Role of Representational Change in Chance Discovery. Eric Dietrich, Arthur B. Markman & Michael Winkley - 2003 - In Yukio Ohsawa & Peter McBurney (eds.), Chance Discovery by Machines. Springer-Verlag, pp. 208-230.
    Analogical reminding in humans and machines is a great source for chance discoveries because analogical reminding can produce representational change and thereby produce insights. Here, we present a new kind of representational change associated with analogical reminding called packing. We derived the algorithm in part from human data we have on packing. Here, we explain packing and its role in analogy making, and then present a computer model of packing in a micro-domain. We conclude that packing is likely used (...)
    5 citations
  49. Two Concepts of "Form" and the so-Called Computational Theory of Mind. John-Michael Kuczynski - 2006 - Philosophical Psychology 19 (6):795-821.
    According to the computational theory of mind, to think is to compute. But what is meant by the word 'compute'? The generally given answer is this: every case of computing is a case of manipulating symbols, but not vice versa - a manipulation of symbols must be driven exclusively by the formal properties of those symbols if it is to qualify as a computation. In this paper, I will present the following argument. Words like 'form' and 'formal' are ambiguous, as (...)
    3 citations
  50. Local Complexity Adaptable Trajectory Partitioning Via Minimum Message Length. Charles R. Twardy - 2011 - In 18th IEEE International Conference on Image Processing. IEEE.
    We present a minimum message length (MML) framework for trajectory partitioning by point selection, and use it to automatically select the tolerance parameter ε for Douglas-Peucker partitioning, adapting to local trajectory complexity. By examining a range of ε for synthetic and real trajectories, it is easy to see that the best ε does vary by trajectory, and that the MML encoding makes sensible choices and is robust against Gaussian noise. We use it to explore the identification of micro-activities within a (...)
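    For orientation, a standard Douglas-Peucker simplification with a fixed tolerance eps is sketched below; entry 50's contribution, choosing eps automatically by minimum message length, is not reproduced here, and the sample track is invented.
      # Classic Douglas-Peucker: keep the endpoints, recurse on the point farthest
      # from the chord whenever its distance exceeds the tolerance eps.
      import math

      def _point_line_dist(p, a, b):
          (px, py), (ax, ay), (bx, by) = p, a, b
          if (ax, ay) == (bx, by):
              return math.hypot(px - ax, py - ay)
          num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
          return num / math.hypot(bx - ax, by - ay)

      def douglas_peucker(points, eps):
          if len(points) < 3:
              return list(points)
          dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
          i = max(range(len(dists)), key=dists.__getitem__) + 1
          if dists[i - 1] <= eps:
              return [points[0], points[-1]]   # every interior point is within tolerance
          left = douglas_peucker(points[:i + 1], eps)
          right = douglas_peucker(points[i:], eps)
          return left[:-1] + right             # drop the duplicated split point

      track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
      print(douglas_peucker(track, eps=0.5))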
Showing results 1-50 of 84