  • How experimental algorithmics can benefit from Mayo’s extensions to Neyman–Pearson theory of testing. Thomas Bartz-Beielstein - 2008 - Synthese 163 (3):385-396.
    Although theoretical results for several algorithms in many application domains were presented during the last decades, not all algorithms can be analyzed fully theoretically. Experimentation is necessary. The analysis of algorithms should follow the same principles and standards of other empirical sciences. This article focuses on stochastic search algorithms, such as evolutionary algorithms or particle swarm optimization. Stochastic search algorithms tackle hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bioinformatics, where classical methods from mathematical optimization fail. (...)
  • Tuning Your Priors to the World. Jacob Feldman - 2013 - Topics in Cognitive Science 5 (1):13-34.
    The idea that perceptual and cognitive systems must incorporate knowledge about the structure of the environment has become a central dogma of cognitive theory. In a Bayesian context, this idea is often realized in terms of “tuning the prior”—widely assumed to mean adjusting prior probabilities so that they match the frequencies of events in the world. This kind of “ecological” tuning has often been held up as an ideal of inference, in fact defining an “ideal observer.” But widespread as this (...)
  • The problem of induction. John Vickers - 2008 - Stanford Encyclopedia of Philosophy.
  • Review of Gerhard Schurz's Optimality Justifications (2024, OUP). Richard Pettigrew - manuscript
  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend an empiricist account of linguistic competence embedded in connectionist cognitive science. This strand of cognitive science has been opposed to traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including the language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  • The Implications of the No-Free-Lunch Theorems for Meta-induction. David H. Wolpert - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (3):421-432.
    The important recent book by Schurz (2019) appreciates that the no-free-lunch theorems (NFL) have major implications for the problem of (meta) induction. Here I review the NFL theorems, emphasizing that they do not only concern the case where there is a uniform prior—they prove that there are “as many priors” (loosely speaking) for which any induction algorithm A out-generalizes some induction algorithm B as vice-versa. Importantly though, in addition to the NFL theorems, there are many free lunch theorems. (...)
  • How Is Perception Tractable? Tyler Brooke-Wilson - forthcoming - The Philosophical Review.
    Perception solves computationally demanding problems at lightning fast speed. It recovers sophisticated representations of the world from degraded inputs, often in a matter of milliseconds. Any theory of perception must be able to explain how this is possible; in other words, it must be able to explain perception's computational tractability. One of the few attempts to move toward such an explanation has been the information encapsulation hypothesis, which posits that perception can be fast because it keeps computational costs low by (...)
  • Adaptive Algorithms for Meta-Induction. Ronald Ortner - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (3):433-450.
    Work in online learning traditionally considered induction-friendly (e.g. stochastic with a fixed distribution) and induction-hostile (adversarial) settings separately. While algorithms like Exp3 that have been developed for the adversarial setting are applicable to the stochastic setting as well, the guarantees that can be obtained are usually worse than those that are available for algorithms that are specifically designed for stochastic settings. Only recently has there been increasing interest in algorithms that give (near-)optimal guarantees with respect to the underlying setting, even (...)
  • No free theory choice from machine learning. Bruce Rushing - 2022 - Synthese 200 (5):1-21.
    Ravit Dotan argues that a No Free Lunch theorem from machine learning shows epistemic values are insufficient for deciding the truth of scientific hypotheses. She argues that NFL shows that the best case accuracy of scientific hypotheses is no more than chance. Since accuracy underpins every epistemic value, non-epistemic values are needed to assess the truth of scientific hypotheses. However, NFL cannot be coherently applied to the problem of theory choice. The NFL theorem Dotan’s argument relies upon is a member (...)
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
  • When the (Bayesian) ideal is not ideal. Danilo Fraga Dantas - 2023 - Logos and Episteme 15 (3):271-298.
    Bayesian epistemologists support the norms of probabilism and conditionalization using Dutch book and accuracy arguments. These arguments assume that rationality requires agents to maximize practical or epistemic value in every doxastic state, which is evaluated from a subjective point of view (e.g., the agent’s expectancy of value). The accuracy arguments also presuppose that agents are opinionated. The goal of this paper is to discuss the assumptions of these arguments, including the measure of epistemic value. I have designed AI agents based (...)
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • The no-free-lunch theorems of supervised learning. Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
  • Ethical Issues in Democratizing Digital Phenotypes and Machine Learning in the Next Generation of Digital Health Technologies. Maurice D. Mulvenna, Raymond Bond, Jack Delaney, Fatema Mustansir Dawoodbhoy, Jennifer Boger, Courtney Potts & Robin Turkington - 2021 - Philosophy and Technology 34 (4):1945-1960.
    Digital phenotyping is the term given to the capturing and use of user log data from health and wellbeing technologies used in apps and cloud-based services. This paper explores ethical issues in making use of digital phenotype data in the arena of digital health interventions. Products and services based on digital wellbeing technologies typically include mobile device apps as well as browser-based apps to a lesser extent, and can include telephony-based services, text-based chatbots, and voice-activated chatbots. Many of these digital (...)
  • Undecidability of the Spectral Gap: An Epistemological Look. Emiliano Ippoliti & Sergio Caprara - 2021 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 52 (1):157-170.
    The results of Cubitt et al. on the spectral gap problem add a new chapter to the issue of undecidability in physics, as they show that it is impossible to decide whether the Hamiltonian of a quantum many-body system is gapped or gapless. This implies, amongst other things, that a reductionist viewpoint would be untenable. In this paper, we examine their proof and a few philosophical implications, in particular ones regarding models and limitative results. In more detail, we examine the (...)
  • Practical performance models of algorithms in evolutionary program induction and other domains. Mario Graff & Riccardo Poli - 2010 - Artificial Intelligence 174 (15):1254-1276.
  • A new approach to estimating the expected first hitting time of evolutionary algorithms. Yang Yu & Zhi-Hua Zhou - 2008 - Artificial Intelligence 172 (15):1809-1832.
  • Book review. Tim Oates & Waiyian Chong - 2006 - Artificial Intelligence 170 (18):1222-1226.
  • On the application of compression-based metrics to identifying anomalous behaviour in web traffic. Gonzalo de la Torre-Abaitua, Luis F. Lago-Fernández & David Arroyo - 2020 - Logic Journal of the IGPL 28 (4):546-557.
    In cybersecurity, there is a call for adaptive, accurate and efficient procedures for identifying performance shortcomings and security breaches. The increasing complexity of both Internet services and traffic creates a scenario that in many cases impedes the proper deployment of intrusion detection and prevention systems. Although it is common practice to monitor network and application activity, there is no general methodology for codifying and interpreting the recorded events. Moreover, this lack of methodology somehow erodes the possibility of diagnosing (...)
  • What Simulations Teach Us About Ordinary Objects. Arthur C. Schwaninger - 2019 - Open Philosophy 2 (1):614-628.
    Under the label of scientific metaphysics, many naturalist metaphysicians are moving away from a priori conceptual analysis and instead seek scientific explanations that will help bring forward a unified understanding of the world. This paper first reviews how our classical assumptions about ordinary objects fail to be true in light of quantum mechanics. The paper then explores how our experiences of ordinary objects arise by reflecting on how our neural system operates algorithmically. Contemporary models and simulations in computational neuroscience are (...)
  • Analysing knowledge transfer in SHADE via complex network. Adam Viktorin, Roman Senkerik, Michal Pluhacek & Tomas Kadavy - forthcoming - Logic Journal of the IGPL.
  • On Mathematical Anti-Evolutionism. Jason Rosenhouse - 2016 - Science & Education 25 (1-2):95-114.
    The teaching of evolution in American high schools has long been a source of controversy. The past decade has seen an important shift in the rhetoric of anti-evolutionists, toward arguments of a strongly mathematical character. These mathematical arguments, while different in their specifics, follow the same general program and rely on the same underlying model of evolution. We shall discuss the nature and history of this program and model and describe general reasons for skepticism with regard to any anti-evolutionary arguments (...)
  • Coherence in the Visual Imagination. Michael O. Vertolli, Matthew A. Kelly & Jim Davies - 2018 - Cognitive Science 42 (3):885-917.
    An incoherent visualization is one in which aspects of different senses of a word are present in the same visualization. We describe and implement a new model of creating contextual coherence in the visual imagination called Coherencer, based on the SOILIE model of imagination. We show that Coherencer is able to generate scene descriptions that are more coherent than SOILIE's original approach as well as a parallel connectionist algorithm that is considered competitive in the literature on general coherence. We also show that (...)
  • A survey of parallel distributed genetic algorithms. Enrique Alba & José M. Troya - 1999 - Complexity 4 (4):31-52.