Results for 'crash algorithms'

69 found
  1. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation.Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
    1 citation
  2. The Ethics of Algorithms: Mapping the Debate.Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can (...)
    29 citations
  3. An Intelligent Tutoring System for Learning Classical Cryptography Algorithms (CCAITS).Jihan Y. AbuEl-Reesh & Samy S. Abu-Naser - 2018 - International Journal of Academic and Applied Research (IJAAR) 2 (2):1-11.
    With the expansion of computer and information technologies, intelligent tutoring systems are becoming more prominent throughout the world, making it increasingly realistic that anyone can learn anywhere, at any time. Without the help of an intelligent tutoring system, however, students' learning questions cannot be answered in time. It is therefore important to create intelligent tutoring systems (ITS) in order to provide learning support services for students. (...)
    2 citations
  4. The Role of Imagination in Social Scientific Discovery: Why Machine Discoverers Will Need Imagination Algorithms.Michael Stuart - 2019 - In Mark Addis, Fernand Gobet & Peter Sozou (eds.), Scientific Discovery in the Social Sciences. Springer Verlag.
    When philosophers discuss the possibility of machines making scientific discoveries, they typically focus on discoveries in physics, biology, chemistry and mathematics. Observing the rapid increase of computer-use in science, however, it becomes natural to ask whether there are any scientific domains out of reach for machine discovery. For example, could machines also make discoveries in qualitative social science? Is there something about humans that makes us uniquely suited to studying humans? Is there something about machines that would bar them from (...)
    1 citation
  5. Semantical Mutation, Algorithms and Programs.André Porto - 2015 - Dissertatio (S1):44-76.
    This article offers an explanation of perhaps Wittgenstein’s strangest and least intuitive thesis – the semantical mutation thesis – according to which one can never answer a mathematical conjecture because the new proof alters the very meanings of the terms involved in the original question. Instead of basing our justification on the distinction between mere calculation and proofs of isolated propositions, characteristic of Wittgenstein’s intermediary period, we generalize it to include conjectures involving effective procedures as well.
  6. The Construction of Transfinite Equivalence Algorithms.Han Geurdes - manuscript
    Context: Consistency of mathematical constructions in numerical analysis and the application of computerized proofs in the light of the occurrence of numerical chaos in simple systems. Purpose: To show that a computer in general and a numerical analysis in particular can add its own peculiarities to the subject under study. Hence the need for thorough theoretical studies on chaos in numerical simulation. Hence, a questioning of what e.g. a numerical disproof of a theorem in physics or a prediction in numerical (...)
  7. Algorithms and Arguments: The Foundational Role of the ATAI-Question.Paola Cantu' & Italo Testa - 2011 - In Frans H. van Eemeren, Bart Garssen, David Godden & Gordon Mitchell (eds.), Proceedings of the Seventh International Conference of the International Society for the Study of Argumentation (pp. 192-203). Rozenberg / Sic Sat.
    Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough it was during this period that Artificial Intelligence was developed, which defended the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an (...)
  8. Evolutionary Theory and Computerised Genetic Algorithms.Derek Philip Hough - manuscript
    Neo-Darwinism can be usefully studied with the help of a Computerised Genetic Algorithm. Only a mathematical approach can reveal the shortcomings of the current dogma and point the way to a revised definition of the theory of evolution.
  9. Email Classification Using Artificial Neural Network.Ahmed Alghoul, Sara Al Ajrami, Ghada Al Jarousha, Ghayda Harb & Samy S. Abu-Naser - 2018 - International Journal of Academic Engineering Research (IJAER) 2 (11):8-14.
    In recent years email has become one of the fastest and most economical means of communication. However, the growth in the number of email users has resulted in a dramatic increase in spam emails over the past few years. Data-mining classification algorithms are used to categorize email as spam or non-spam. Numerous spam messages are commercial in nature but may also contain camouflaged links that appear to point to familiar websites but actually lead to phishing web sites or sites (...)
    1 citation
  10. On the Wisdom of Algorithmic Markets: Governance by Algorithmic Price.Pip Thornton & John Danaher - manuscript
    Leading digital platform providers such as Google and Uber construct marketplaces in which algorithms set prices. The efficiency-maximising free market credentials of this approach are touted by the companies involved and by legislators, policy makers and marketers. They have also taken root in the public imagination. In this article we challenge this understanding of algorithmically constructed marketplaces. We do so by returning to Hayek’s (1945) classic defence of the price mechanism, and by arguing that algorithmically-mediated price mechanisms do not, (...)
  11. Profiling Vandalism in Wikipedia: A Schauerian Approach to Justification.Paul B. de Laat - 2016 - Ethics and Information Technology 18 (2):131-148.
    In order to fight massive vandalism the English-language Wikipedia has developed a system of surveillance which is carried out by humans and bots, supported by various tools. Central to the selection of edits for inspection is the process of using filters or profiles. Can this profiling be justified? On the basis of a careful reading of Frederick Schauer’s books about rules in general (1991) and profiling in particular (2003) I arrive at several conclusions. The effectiveness, efficiency, and risk-aversion of (...)
  12. From the Closed Classical Algorithmic Universe to an Open World of Algorithmic Constellations.Mark Burgin & Gordana Dodig-Crnkovic - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. pp. 241-253.
    In this paper we analyze methodological and philosophical implications of algorithmic aspects of unconventional computation. At first, we describe how the classical algorithmic universe developed and analyze why it became closed in the conventional approach to computation. Then we explain how new models of algorithms turned the classical closed algorithmic universe into the open world of algorithmic constellations, allowing higher flexibility and expressive power, supporting constructivism and creativity in mathematical modeling. As Gödel's undecidability theorems demonstrate, the closed algorithmic universe (...)
  13. Recommender Systems and their Ethical Challenges.Silvia Milano, Mariarosaria Taddeo & Luciano Floridi - manuscript
    This article presents the first, systematic analysis of the ethical challenges posed by recommender systems. Through a literature review, the article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
  14. Why Are Software Patents so Elusive? A Platonic Approach.Odin Kroeger - 2011 - Masaryk University Journal of Law and Technology 5 (1):57-70.
    Software patents are commonly criticised for being fuzzy, context-sensitive, and often granted for trivial inventions. More often than not, these shortcomings are said to be caused by the abstract nature of software - with little further analysis offered. Drawing on Plato’s Parmenides, this paper will argue (1) that the reason why software patents seem to be elusive is that patent law suggests to think about algorithms as paradigmatic examples and (2) that Plato’s distinction between two modes of predication and (...)
  15. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
  16. Two Challenges for CI Trustworthiness and How to Address Them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017 - Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017).
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  17. Introduction to Data Ethics.James Brusseau - 2018 - In The Business Ethics Workshop, 3rd Edition. Boston, USA: Boston Academic Publishing / Flatworld Knowledge. pp. 349-376.
    An introduction to data ethics, focusing on questions of privacy and personal identity in the economic world as it is defined by big data technologies, artificial intelligence, and algorithmic capitalism. Originally published in The Business Ethics Workshop, 3rd Edition, by Boston Academic Publishing / FlatWorld Knowledge.
  18. Suggestions to Enhance the Scholarly Search Engine: Google Scholar.Ibrahim M. Nasser, Mohammed M. Elsobeihi & Samy S. Abu Naser - 2019 - International Journal of Engineering and Information Systems (IJEAIS) 3 (3):11-16.
    The scholarly search engine Google Scholar (G.S.) has problems that keep it from being a fully trusted search engine. In this research we discuss a few drawbacks that we noticed in Google Scholar; one of them concerns how its (add articles) option performs when adding new articles related to registered researchers. Our suggestion is an attempt to make G.S. more efficient by improving the search method it uses, so that it finally yields trusted statistical results.
  19. Affiliative Subgroups in Preschool Classrooms: Integrating Constructs and Methods From Social Ethology and Sociometric Traditions.António J. Santos, João R. Daniel, Carla Fernandes & Brian E. Vaughn - 2015 - PLoS ONE 7 (10):1-17.
    Recent studies of school-age children and adolescents have used social network analyses to characterize selection and socialization aspects of peer groups. Fewer network studies have been reported for preschool classrooms and many of those have focused on structural descriptions of peer networks, and/or, on selection processes rather than on social functions of subgroup membership. In this study we started by identifying and describing different types of affiliative subgroups (HMP- high mutual proximity, LMP- low mutual proximity, and ungrouped children) in a (...)
    1 citation
  20. Sistema Experto para Resolver Problemas Lógicos de Deducción.Gabriel Garduño-Soto, David René Thierry García, Rafael Vidal Uribe & Hugo Padilla Chacón - 1989 - In Va. Conferencia Internacional: Las Computadoras en Instituciones de Educación y de Investigación. Cómputo Académico, UNAM, UNISYS, México, noviembre 14–16, 1989. Mexico City, México: National Autonomous University of Mexico.
    Proceedings of the first public presentation of the work of the Mexican logic group "Deduktor" under the direction of the Mexican professor Hugo Padilla Chacón. This work was in fact the first complete and stand-alone arithmetization of logic.
  21. The Threat of Algocracy: Reality, Resistance and Accommodation.John Danaher - 2016 - Philosophy and Technology 29 (3):245-268.
    One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant (...)
    8 citations
  22. Democratizing Algorithmic Fairness.Pak-Hang Wong - forthcoming - Philosophy and Technology:1-20.
    With the use of machine learning techniques and big data, algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations; decisions can then be made by the algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
  23. Algorithm and Parameters: Solving the Generality Problem for Reliabilism.Jack Lyons - forthcoming - Philosophical Review.
    I offer a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is given by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings of folk (...)
    1 citation
  24. Theory Choice and Social Choice: Okasha Versus Sen.Jacob Stegenga - 2015 - Mind 124 (493):263-277.
    A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy (...)
    14 citations
  25. The Mind, the Lab, and the Field: Three Kinds of Populations in Scientific Practice.Rasmus Grønfeldt Winther, Ryan Giordano, Michael D. Edge & Rasmus Nielsen - 2015 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 52:12-21.
    Scientists use models to understand the natural world, and it is important not to conflate model and nature. As an illustration, we distinguish three different kinds of populations in studies of ecology and evolution: theoretical, laboratory, and natural populations, exemplified by the work of R.A. Fisher, Thomas Park, and David Lack, respectively. Biologists are rightly concerned with all three types of populations. We examine the interplay between these different kinds of populations, and their pertinent models, in three examples: the notion (...)
    10 citations
  26. Plant Seedlings Classification Using Deep Learning.Belal A. M. Ashqar, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2019 - International Journal of Academic Information Systems Research (IJAISR) 3 (1):7-14.
    Agriculture is very important to continued human existence and remains a key driver of many economies worldwide, especially in underdeveloped and developing economies. With the increasing demand for food and cash crops, driven by the growth in world population and the challenges imposed by climate change, there is an urgent need to increase plant production while reducing costs. Previous instrument-vision methods established for selective weeding have been confronted with major challenges in trustworthy and precise weed recognition. In this paper, (...)
  27. Problem Solving and Situated Cognition.David Kirsh - 2009 - The Cambridge Handbook of Situated Cognition:264-306.
    In the course of daily life we solve problems often enough that there is a special term to characterize the activity and the right to expect a scientific theory to explain its dynamics. The classical view in psychology is that to solve a problem a subject must frame it by creating an internal representation of the problem’s structure, usually called a problem space. This space is an internally generable representation that is mathematically identical to a graph structure with nodes and (...)
    26 citations
  28. Bioeconomics, Biopolitics and Bioethics: Evolutionary Semantics of Evolutionary Risk (Anthropological Essay).V. T. Cheshko - 2016 - Bioeconomics and Ecobiopolitic (1 (2)).
    Attempt of trans-disciplinary analysis of the evolutionary value of bioethics is realized. Currently, there are High Tech schemes for management and control of genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). The biological, socio-cultural and technological factors are included in the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization formed mainly in the 17–18 centuries and are experiencing ever-increasing and destabilizing risk-taking (...)
  29. Advantages of Artificial Intelligences, Uploads, and Digital Minds.Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (01):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of (...)
    1 citation
  30. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - AAAI/ACM Conference on AI, Ethics, and Society (AIES '19) 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets (...)
  31. Geachianism.Patrick Todd - 2011 - Oxford Studies in Philosophy of Religion 3:222-251.
    The plane was going to crash, but it didn't. Johnny was going to bleed to death, but he didn't. Geach sees here a changing future. In this paper, I develop Geach's primary argument for the (almost universally rejected) thesis that the future is mutable (an argument from the nature of prevention), respond to the most serious objections such a view faces, and consider how Geach's view bears on traditional debates concerning divine foreknowledge and human freedom. As I hope to (...)
    7 citations
  32. An Improbable God Between Simplicity and Complexity: Thinking About Dawkins's Challenge.Philippe Gagnon - 2013 - International Philosophical Quarterly 53 (4):409-433.
    Richard Dawkins has popularized an argument that he thinks sound for showing that there is almost certainly no God. It rests on the assumptions (1) that complex and statistically improbable things are more difficult to explain than those that are not and (2) that an explanatory mechanism must show how this complexity can be built up from simpler means. But what justifies claims about the designer’s own complexity? One comes to a different understanding of order and of simplicity when one (...)
  33. Tractability and the Computational Mind.Rineke Verbrugge & Jakub Szymanik - 2018 - In Mark Sprevak & Matteo Colombo (eds.), The Routledge Handbook of the Computational Mind. Oxford, UK: pp. 339-353.
    We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems finding effective algorithms is hardly possible. Some problems need too much computational resource, e.g., time or (...)
  34. Ontology-Based Error Detection in SNOMED-CT.Werner Ceusters, Barry Smith, Anand Kumar & Christoffel Dhaen - 2004 - Proceedings of Medinfo 2004:482-6.
    Quality assurance in large terminologies is a difficult issue. We present two algorithms that can help terminology developers and users to identify potential mistakes. We demonstrate the methodology by outlining the different types of mistakes that are found when the algorithms are applied to SNOMED-CT. On the basis of the results, we argue that both formal logical and linguistic tools should be used in the development and quality-assurance process of large terminologies.
    11 citations
  35. Humeanism and Exceptions in the Fundamental Laws of Physics.Billy Wheeler - 2017 - Principia: An International Journal of Epistemology 21 (3):317-337.
    It has been argued that the fundamental laws of physics do not face a ‘problem of provisos’ equivalent to that found in other scientific disciplines (Earman, Roberts and Smith 2002) and there is only the appearance of exceptions to physical laws if they are confused with differential equations of evolution type (Smith 2002). In this paper I argue that even if this is true, fundamental laws in physics still pose a major challenge to standard Humean approaches to lawhood, as they (...)
    1 citation
  36. Contextual Vocabulary Acquisition: A Computational Theory and Educational Curriculum.William J. Rapaport & Michael W. Kibby - 2002 - In Nagib Callaos, Ana Breda & Ma Yolanda Fernandez J. (eds.), Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics. International Institute of Informatics and Systemics.
    We discuss a research project that develops and applies algorithms for computational contextual vocabulary acquisition (CVA): learning the meaning of unknown words from context. We try to unify a disparate literature on the topic of CVA from psychology, first- and second-language acquisition, and reading science, in order to help develop these algorithms: We use the knowledge gained from the computational CVA system to build an educational curriculum for enhancing students’ abilities to use CVA strategies in their reading of (...)
    5 citations
  37. Psychological and Computational Models of Language Comprehension.David Pereplyotchik - 2011 - Croatian Journal of Philosophy 11 (1):31-72.
    In this paper, I argue for a modified version of what Devitt calls the Representational Thesis. According to RT, syntactic rules or principles are psychologically real, in the sense that they are represented in the mind/brain of every linguistically competent speaker/hearer. I present a range of behavioral and neurophysiological evidence for the claim that the human sentence processing mechanism constructs mental representations of the syntactic properties of linguistic stimuli. I then survey a range of psychologically plausible computational models of comprehension (...)
    4 citations
  38. Defining Textual Entailment.Daniel Z. Korman, Eric Mack, Jacob Jett & Allen H. Renear - forthcoming - Journal of the Association for Information Science and Technology.
    Textual entailment is a relationship that obtains between fragments of text when one fragment in some sense implies the other fragment. The automation of textual entailment recognition supports a wide variety of text-based tasks, including information retrieval, information extraction, question answering, text summarization, and machine translation. Much ingenuity has been devoted to developing algorithms for identifying textual entailments, but relatively little to saying what textual entailment actually is. This article is a review of the logical and philosophical issues involved (...)
  39. Sense and the Computation of Reference.Reinhard Muskens - 2004 - Linguistics and Philosophy 28 (4):473 - 504.
    The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, preshadowed in Frege’s dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation and the meaning postulates governing the behaviour of (...)
    13 citations
  40. The Use of Software Tools and Autonomous Bots Against Vandalism: Eroding Wikipedia’s Moral Order?Paul B. de Laat - 2015 - Ethics and Information Technology 17 (3):175-188.
    English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and the 'coactivity' in use between humans and bots, this research 'discloses' the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical (...)
    1 citation
  41. Bibliometric Mapping of Computer and Information Ethics.Richard Heersmink, Jeroen van den Hoven, Nees Jan van Eck & Jan van den Berg - 2011 - Ethics and Information Technology 13 (3):241-249.
    This paper presents the first bibliometric mapping analysis of the field of computer and information ethics (C&IE). It provides a map of the relations between 400 key terms in the field. This term map can be used to get an overview of concepts and topics in the field and to identify relations between information and communication technology concepts on the one hand and ethical concepts on the other hand. To produce the term map, a data set of over a thousand articles (...)
    4 citations
  42. Diagnostic Criteria for Temporomandibular Disorders (DC/TMD) for Clinical and Research Applications.Eric Schiffman, Richard Ohrbach, E. Truelove, Edmond Truelove, John Look, Gary Anderson, Werner Ceusters, Barry Smith & Others - 2014 - Journal of Oral and Facial Pain and Headache 28 (1):6-27.
    Aims: The Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms were demonstrated to be reliable but below target sensitivity and specificity. Empirical data supported Axis I algorithm revisions that were valid. Axis II instruments were shown to be both reliable and valid. An international consensus workshop was convened to obtain recommendations and finalization of new Axis I diagnostic algorithms and new Axis II instruments. Methods: A comprehensive search of published TMD diagnostic literature was followed by (...)
  43. Kuznetsov V. From studying theoretical physics to philosophical modeling scientific theories: Under influence of Pavel Kopnin and his school.Volodymyr Kuznetsov - 2017 - ФІЛОСОФСЬКІ ДІАЛОГИ’2016 ІСТОРІЯ ТА СУЧАСНІСТЬ У НАУКОВИХ РОЗМИСЛАХ ІНСТИТУТУ ФІЛОСОФІЇ 11:62-92.
    The paper explicates the stages of the author’s philosophical evolution in the light of Kopnin’s ideas and heritage. Starting from Kopnin’s understanding of dialectical materialism, the author states that the category transformations of physics have unfolded from the conceptualization of immutability to mutability, and then to interaction, evolvement and emergence. He connects the problem of the universals of physical cognition with the elaboration of a specific system of tools and methods for identifying, individuating and distinguishing objects from a scientific theory’s domain. The (...)
  44. HCI Model with Learning Mechanism for Cooperative Design in Pervasive Computing Environment.Hong Liu, Bin Hu & Philip Moore - 2015 - Journal of Internet Technology 16.
    This paper presents a human-computer interaction model with a three layers learning mechanism in a pervasive environment. We begin with a discussion around a number of important issues related to human-computer interaction followed by a description of the architecture for a multi-agent cooperative design system for pervasive computing environment. We present our proposed three- layer HCI model and introduce the group formation algorithm, which is predicated on a dynamic sharing niche technology. Finally, we explore the cooperative reinforcement learning and fusion (...)
  45. The Problem of Evaluating Automated Large-Scale Evidence Aggregators.Nicolas Wüthrich & Katie Steele - 2019 - Synthese (8):3083-3102.
    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the (...)
  46. In Defense of Contextual Vocabulary Acquisition: How to Do Things with Words in Context.William J. Rapaport - 2005 - In Anind Dey, Boicho Kokinov, David Leake & Roy Turner (eds.), Proceedings of the 5th International and Interdisciplinary Conference on Modeling and Using Context. Springer-Verlag Lecture Notes in Artificial Intelligence 3554. pp. 396--409.
    Contextual vocabulary acquisition (CVA) is the deliberate acquisition of a meaning for a word in a text by reasoning from context, where “context” includes: (1) the reader’s “internalization” of the surrounding text, i.e., the reader’s “mental model” of the word’s “textual context” (hereafter, “co-text” [3]) integrated with (2) the reader’s prior knowledge (PK), but it excludes (3) external sources such as dictionaries or people. CVA is what you do when you come across an unfamiliar word in your reading, realize that (...)
    4 citations
  47. Mistakes in Medical Ontologies: Where Do They Come From and How Can They Be Detected?Werner Ceusters, Barry Smith, Anand Kumar & Christoffel Dhaen - 2004 - Studies in Health Technology and Informatics 102:145-164.
    We present the details of a methodology for quality assurance in large medical terminologies and describe three algorithms that can help terminology developers and users to identify potential mistakes. The methodology is based in part on linguistic criteria and in part on logical and ontological principles governing sound classifications. We conclude by outlining the results of applying the methodology in the form of a taxonomy of the different types of errors and potential errors detected in SNOMED-CT.
    7 citations
  48. Semantic Arithmetic: A Preface.John Corcoran - 1995 - Agora 14 (1):149-156.
    Number theory, or pure arithmetic, concerns the natural numbers themselves, not the notation used, and in particular not the numerals. String theory, or pure syntax, concerns the numerals as strings of «uninterpreted» characters without regard to the numbers they may be used to denote. Number theory is purely arithmetic; string theory is purely syntactical... in so far as the universe of discourse alone is considered. Semantic arithmetic is a broad subject which begins when (...)
    1 citation
  49. Cognitivism About Epistemic Modality.Hasen Khudairi - manuscript
    This paper aims to vindicate the thesis that cognitive computational properties are abstract objects implemented in physical systems. I avail of the equivalence relations countenanced in Homotopy Type Theory, in order to specify an abstraction principle for intensional, computational properties. The homotopic abstraction principle for intensional mental functions provides an epistemic conduit into our knowledge of cognitive algorithms as abstract objects. I examine, then, how intensional functions in Epistemic Modal Algebra are deployed as core models in the philosophy of (...)
  50. Repairing Ontologies Via Axiom Weakening.Nicolas Troquard, Roberto Confalonieri, Pietro Galliani, Rafael Peñaloza, Daniele Porello & Oliver Kutz - 2018 - In Proceedings of the Thirty-Second {AAAI} Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th {AAAI} Symposium on Educational Advances in Artificial Intelligence (EAAI-18). pp. 1981-1988.
    Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need for automated methods for repairing inconsistencies while preserving as much of the original knowledge as possible increases. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing (...)
Results 1–50 of 69