Related

Contents: 70 found (showing 1 — 50 of 70)
  1. On Philomatics and Psychomatics for Combining Philosophy and Psychology with Mathematics. Benyamin Ghojogh & Morteza Babaie - manuscript
    We propose the concepts of philomatics and psychomatics as hybrid combinations of philosophy and psychology with mathematics. We explain four motivations for this combination: fulfilling the desire of analytical philosophy, proposing a science of philosophy, justifying mathematical algorithms by philosophy, and abstraction in both philosophy and mathematics. We enumerate various examples of philomatics and psychomatics, some of which are explained in more depth. The first example is the analysis of the relation between the context principle, semantic holism, and the usage (...)
  2. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - manuscript
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
  3. Can AI Abstract the Architecture of Mathematics? Posina Rayudu - manuscript
    The irrational exuberance associated with contemporary artificial intelligence (AI) reminds me of Charles Dickens: "it was the age of foolishness, it was the epoch of belief" (cf. Editorial, 2016; to get a feel for the vanity fair that is AI, see Mitchell and Krakauer, 2023; Stilgoe, 2023). It is particularly distressing: it feels like yet another rerun of Seinfeld, which is all about nothing (pun intended); we have seen it in the 60s and again in the 90s. AI might have had an (...)
  4. A statistical learning approach to a problem of induction. Kino Zhao - manuscript
    At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At the weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in (...)
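    To fix ideas, the VC-theoretic answer Zhao examines rests on a uniform convergence guarantee; in its textbook form (my gloss in standard notation, not Zhao's own formulation), for a hypothesis class $\mathcal{H}$ of VC dimension $d$, sample size $n$, true risk $R$, and empirical risk $\hat{R}_n$, with probability at least $1-\delta$:

    \[
    \sup_{h \in \mathcal{H}} \bigl| R(h) - \hat{R}_n(h) \bigr| \;\le\; \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}.
    \]

    The abstract's complaint is that whether the conditions licensing such a bound obtain is not, in general, computable.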
  5. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
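    As a gloss on the architecture described above, here is a minimal, hypothetical sketch of a language-agent loop; call_llm is a stand-in for any LLM completion API, and none of the names come from the paper:

    from dataclasses import dataclass, field

    def call_llm(prompt: str) -> str:
        # Stand-in for an LLM completion call; replace with a real API.
        return "noop"

    @dataclass
    class LanguageAgent:
        goal: str                                         # goal kept in natural language
        beliefs: list[str] = field(default_factory=list)  # human-readable memory

        def step(self, observation: str) -> str:
            self.beliefs.append(observation)              # update the belief store
            prompt = (f"Goal: {self.goal}\n"
                      f"Beliefs: {'; '.join(self.beliefs)}\n"
                      "Next action:")
            return call_llm(prompt)                       # LLM chooses the next action

    agent = LanguageAgent(goal="answer the user's question")
    print(agent.step("user asked about the weather"))     # -> "noop" from the stub

    Because the goal and belief store are plain text, the agent's folk-psychological state is inspectable, which is what the authors' safety argument turns on.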
  6. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability. Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
  7. Operationalising Representation in Natural Language Processing. Jacqueline Harding - forthcoming - British Journal for the Philosophy of Science.
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis (...)
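    To make the probing methodology concrete, here is a minimal sketch (synthetic stand-in data, not Harding's setup): a simple supervised probe is trained on frozen model activations to test whether a target property is decodable from them.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(1000, 64))     # frozen activations of an NLP model (placeholder)
    labels = (embeddings[:, 0] > 0).astype(int)  # the probed linguistic property (placeholder)

    X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
    # High probe accuracy shows the property is decodable from the component;
    # whether decodability suffices for representation is the paper's question.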
  8. Predicting and Preferring. Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  9. Encoder-Decoder Based Long Short-Term Memory (LSTM) Model for Video Captioning. Adewale Sikiru, Tosin Ige & Bolanle Matti Hafiz - forthcoming - Proceedings of the IEEE:1-6.
    This work demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over (...)
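    As a gloss on the architecture, a minimal encoder-decoder sketch in PyTorch follows; all dimensions, the vocabulary size, and the class name are placeholders rather than the authors' implementation.

    import torch
    import torch.nn as nn

    class VideoCaptioner(nn.Module):
        def __init__(self, feat_dim=512, hidden=256, vocab=5000, emb=128):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)  # frame features -> state
            self.embed = nn.Embedding(vocab, emb)
            self.decoder = nn.LSTM(emb, hidden, batch_first=True)       # words -> next words
            self.out = nn.Linear(hidden, vocab)

        def forward(self, frames, captions):
            # frames: (B, T_frames, feat_dim); captions: (B, T_words) token ids
            _, state = self.encoder(frames)           # summarize the frame sequence
            dec_out, _ = self.decoder(self.embed(captions), state)  # condition on video state
            return self.out(dec_out)                  # (B, T_words, vocab) logits

    logits = VideoCaptioner()(torch.randn(2, 40, 512), torch.zeros(2, 12, dtype=torch.long))
    print(logits.shape)  # torch.Size([2, 12, 5000])

    The many-to-many mapping is visible in the shapes: a sequence of frames in, a sequence of word logits out.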
  10. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
  11. Universal Agent Mixtures and the Geometry of Intelligence. Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
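    In symbols, the defining property of the mixture (my paraphrase of the abstract, not the authors' notation): given agents $\pi_1, \dots, \pi_n$ and weights $w_i \ge 0$ with $\sum_i w_i = 1$, the mixture agent $\pi_w$ satisfies

    \[
    V_{\mu}(\pi_{w}) \;=\; \sum_{i=1}^{n} w_i \, V_{\mu}(\pi_i) \qquad \text{for every environment } \mu,
    \]

    where $V_{\mu}$ denotes expected total reward in $\mu$; any intelligence measure defined as a weighted average of performance across environments is therefore linear over such mixtures.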
  12. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains. Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP score can indicate that a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to balance the credibility and trust of the patient with the PDMP score. Pozzi argues that a prescriber who (...)
  13. Performance Comparison and Implementation of Bayesian Variants for Network Intrusion Detection. Tosin Ige & Christopher Kiekintveld - 2023 - Proceedings of the IEEE 1:5.
    Bayesian classifiers perform well when each of the features is completely independent of the others, which is not always the case in real-world applications. The aim of this study is to implement and compare the performance of each variant of the Bayesian classifier (Multinomial, Bernoulli, and Gaussian) on anomaly detection in network intrusion, and to investigate whether there is any association between each variant’s assumption and its performance. Our investigation showed that each variant of the Bayesian algorithm blindly follows its (...)
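    A minimal sketch of the comparison described above, using the three naive Bayes variants from scikit-learn on the same data; the synthetic counts stand in for a network-intrusion dataset and are not the authors' benchmark.

    import numpy as np
    from sklearn.naive_bayes import MultinomialNB, BernoulliNB, GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.poisson(3.0, size=(500, 20))         # nonnegative count features (placeholder)
    y = (X[:, :5].sum(axis=1) > 15).astype(int)  # placeholder "anomaly" label

    for clf in (MultinomialNB(), BernoulliNB(), GaussianNB()):
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{clf.__class__.__name__:14s} mean accuracy: {scores.mean():.3f}")
    # Each variant encodes a different likelihood assumption (counts, binary
    # occurrences, Gaussian features); the paper asks how those assumptions
    # track performance on intrusion data.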
  14. Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Swansea: Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  15. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
    1 citation
  16. Machine Learning, Misinformation, and Citizen Science. Adrian K. Yee - 2023 - European Journal for Philosophy of Science 13 (56):1-24.
    Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models need to be assessed via more democratic criteria than has previously been recognized.
  17. Extending Environments To Measure Self-Reflection In Reinforcement Learning. Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - 2022 - Journal of Artificial General Intelligence 13 (1).
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a (...)
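    A toy illustration of the "extended environment" idea (interfaces invented for illustration; not the authors' formalism): the environment simulates the agent on a hypothetical history and keys its reward to that hypothetical behavior.

    def extended_env(agent, history):
        # What WOULD the agent do if the history had one extra observation?
        hypothetical_action = agent(history + [0])
        actual_action = agent(history)
        # This toy environment rewards agents whose actual behavior differs
        # from their hypothetical behavior, i.e. agents sensitive to what
        # the environment is simulating about them.
        return 1.0 if actual_action != hypothetical_action else 0.0

    reflective = lambda h: len(h) % 2        # behavior depends on its own history
    oblivious = lambda h: 1                  # ignores history entirely
    print(extended_env(reflective, [1, 2]))  # 1.0
    print(extended_env(oblivious, [1, 2]))   # 0.0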
    2 citations
  18. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance. Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
    1 citation
  19. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  20. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine. Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The explainability deficit of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, considered as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation established for a mathematical and causal model of a phenomenon (...)
  21. THE ROBOTS ARE COMING: What’s Happening in Philosophy (WHiP)-The Philosophers, August 2022. Jeff Hawley - 2022 - Philosophynews.Com.
    Should we fear a future in which the already tricky world of academic publishing is increasingly crowded out by super-intelligent artificial general intelligence (AGI) systems writing papers on phenomenology and ethics? What are the chances that AGI advances to a stage where a human philosophy instructor is similarly removed from the equation? If Jobst Landgrebe and Barry Smith are correct, we have nothing to fear.
  22. AI Powered Anti-Cyber bullying system using Machine Learning Algorithm of Multinomial Naïve Bayes and Optimized Linear Support Vector Machine. Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1-5.
    "Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue." ~ Anna Maria Chavez. A series of studies on cyber bullying have been unable to provide a reliable solution to it. In this research work, we were able to provide a permanent solution to this by developing a model capable of detecting and intercepting bullying in incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation (...)
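    A minimal sketch of the two model families named in the title, trained on a toy corpus; the data, labels, and any resulting accuracy are illustrative only, and the paper's 92% figure comes from its own dataset.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    texts = ["you are awesome", "nobody likes you loser",
             "see you at practice", "shut up idiot"]
    labels = [0, 1, 0, 1]  # 1 = bullying (toy labels)

    for model in (MultinomialNB(), LinearSVC()):
        clf = make_pipeline(TfidfVectorizer(), model)  # bag-of-words features + classifier
        clf.fit(texts, labels)
        print(model.__class__.__name__, clf.predict(["you are a loser"]))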
  23. Implementation of Data Mining on a Secure Cloud Computing over a Web API using Supervised Machine Learning Algorithm. Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1-4.
    Ever since the era of the internet ushered in cloud computing, there has been an increase in demand for the unlimited data available through cloud computing for data analysis, pattern recognition, and technological advancement. With this also come problems of scalability, efficiency, and security threats. This research paper focuses on how data can be dynamically mined in real time for pattern detection in a secure cloud computing environment using a combination of the decision tree algorithm and Random Forest over a restful (...)
    1 citation
  24. Philosophical foundations of intelligence collection and analysis: a defense of ontological realism. William Mandrick & Barry Smith - 2022 - Intelligence and National Security 38.
    There is a common misconception across the Intelligence Community (IC) to the effect that information trapped within multiple heterogeneous data silos can be semantically integrated by the sorts of meaning-blind statistical methods employed in much of artificial intelligence (AI) and natural language processing (NLP). This leads to the misconception that incoming data can be analysed coherently by relying exclusively on the use of statistical algorithms and thus without any shared framework for classifying what the data are about. Unfortunately, such approaches (...)
  25. Algorithmic Microaggressions. Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and (...)
  26. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher. Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
    1 citation
  27. Model-induced escape. Barry Smith - 2022 - Facing the Future, Facing the Screen: 10th Budapest Visual Learning Conference.
    We can illustrate the phenomenon of model-induced escape by examining spam filters. Spam filter A is, we can assume, very effective at blocking spam. Indeed, it is so effective that it motivates the authors of spam to invent new types of spam that will beat the filters of spam filter A. An example of this phenomenon in the realm of philosophy is illustrated in the work of Nyíri on Wittgenstein's political beliefs. Nyíri writes a paper demonstrating (...)
  28. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
    41 citations
  29. Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
    4 citations
  30. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    4 citations
  31. Can reinforcement learning learn itself? A reply to 'Reward is enough'. Samuel Allen Alexander - 2021 - CIFMA.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al. did not consider in their paper, namely, the aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  32. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters. Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with backgrounds in (...)
  33. Correlation Isn’t Good Enough: Causal Explanation and Big Data. [REVIEW] Frank Cabrera - 2021 - Metascience 30 (2):335-338.
    A review of Gary Smith and Jay Cordes: The Phantom Pattern Problem: The Mirage of Big Data. New York: Oxford University Press, 2020.
  34. Towards Knowledge-driven Distillation and Explanation of Black-box Models. Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18-19, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)
  35. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the allocation outcomes? (...)
    1 citation
  36. Microethics for healthcare data science: attention to capabilities in sociotechnical systems. Mark Graves & Emanuele Ratti - 2021 - The Future of Science and Ethics 6:64-73.
    It has been argued that ethical frameworks for data science often fail to foster ethical behavior, and they can be difficult to implement due to their vague and ambiguous nature. In order to overcome these limitations of current ethical frameworks, we propose to integrate the analysis of the connections between technical choices and sociocultural factors into the data science process, and show how these connections have consequences for what data subjects can do, accomplish, and be. Using healthcare as an example, (...)
    1 citation
  37. Exploring Machine Learning Techniques for Coronary Heart Disease Prediction. Hisham Khdair - 2021 - International Journal of Advanced Computer Science and Applications 12 (5):28-36.
    Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the occurrence of CHD events (...)
  38. Tecno-especies: la humanidad que se hace a sí misma y los desechables. Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generate images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are re-defining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  39. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    12 citations
  40. Ethical Implications of Alzheimer’s Disease Prediction in Asymptomatic Individuals Through Artificial Intelligence. Frank Ursin, Cristian Timmermann & Florian Steger - 2021 - Diagnostics 11 (3):440.
    Biomarker-based predictive tests for subjectively asymptomatic Alzheimer’s disease (AD) are utilized in research today. Novel applications of artificial intelligence (AI) promise to predict the onset of AD several years in advance without determining biomarker thresholds. Until now, little attention has been paid to the new ethical challenges that AI brings to the early diagnosis in asymptomatic individuals, beyond contributing to research purposes, when we still lack adequate treatment. The aim of this paper is to explore the ethical arguments put forward (...)
    1 citation
  41. Discounting Desirable Gambles. Gregory Wheeler - 2021 - Proceedings of Machine Learning Research 147:331-341.
    The desirable gambles framework offers the most comprehensive foundations for the theory of lower previsions, which in turn affords the most general account of imprecise probabilities. Nevertheless, for all its generality, the theory of lower previsions rests on the notion of linear utility. This commitment to linearity is clearest in the coherence axioms for sets of desirable gambles. This paper considers two routes to relaxing this commitment. The first preserves the additive structure of the desirable gambles framework and (...)
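    For orientation, here is one standard presentation of the coherence axioms for a set $\mathcal{D}$ of desirable gambles (my summary of the textbook formulation, not necessarily Wheeler's notation), which makes the linearity commitment explicit:

    \[
    \begin{aligned}
    &\text{(D1)}\;\; 0 \notin \mathcal{D}; \qquad
    \text{(D2)}\;\; f \geq 0,\ f \neq 0 \;\Rightarrow\; f \in \mathcal{D};\\
    &\text{(D3)}\;\; f, g \in \mathcal{D} \;\Rightarrow\; f + g \in \mathcal{D}; \qquad
    \text{(D4)}\;\; f \in \mathcal{D},\ \lambda > 0 \;\Rightarrow\; \lambda f \in \mathcal{D}.
    \end{aligned}
    \]

    Axioms (D3) and (D4) make $\mathcal{D}$ a convex cone; it is exactly this additive, positively scalable structure that encodes linear utility and that the paper considers relaxing.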
    1 citation
  42. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
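    The Archimedean property being generalized is, in its classical form for the reals:

    \[
    \forall x, y \in \mathbb{R}_{>0}\;\; \exists n \in \mathbb{N}:\; nx > y.
    \]

    A reward structure is non-Archimedean when some outcome outweighs every finite multiple of another (lexicographic preferences are the stock example); per the abstract, no real-valued reward signal can accurately measure such a structure, which is the gap the paper exploits.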
    1 citation
  43. Genealogy of Algorithms: Datafication as Transvaluation. Virgil W. Brower - 2020 - le Foucaldien 6 (1):1-43.
    This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace names the primal probability algorithm. It reconsiders their mathematical innovations with Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond enumeration in (...)
    2 citations
  44. Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
    45 citations
  45. Perceptron Connectives in Knowledge Representation. Pietro Galliani, Guendalina Righetti, Daniele Porello, Oliver Kutz & Nicolas Toquard - 2020 - In Knowledge Engineering and Knowledge Management - 22nd International Conference, EKAW 2020, Bolzano, Italy, September 16-20, 2020, Proceedings. Lecture Notes in Computer Science 12387. pp. 183-193.
    We discuss the role of perceptron (or threshold) connectives in the context of Description Logic, and in particular their possible use as a bridge between statistical learning of models from data and logical reasoning over knowledge bases. We prove that such connectives can be added to the language of most forms of Description Logic without increasing the complexity of the corresponding inference problem. We show, with a practical example over the Gene Ontology, how even simple instances of perceptron connectives are (...)
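    A toy model of a perceptron/threshold connective (names and weights invented for illustration; the paper works with Description Logic concepts, e.g. over the Gene Ontology): an individual falls under the composite concept when the weighted sum of the subconcepts it satisfies meets the threshold.

    def threshold_concept(weights: dict[str, float], t: float):
        # Composite concept: holds of an individual (a set of atomic
        # concepts it instantiates) iff the weighted evidence reaches t.
        def holds(individual: set[str]) -> bool:
            return sum(w for c, w in weights.items() if c in individual) >= t
        return holds

    # "RiskyProfile" = at least 2.0 worth of weighted evidence (hypothetical).
    risky = threshold_concept({"Smoker": 1.0, "Hypertensive": 1.0, "Sedentary": 0.5}, t=2.0)
    print(risky({"Smoker", "Hypertensive"}))  # True  (1.0 + 1.0 >= 2.0)
    print(risky({"Smoker", "Sedentary"}))     # False (1.5 < 2.0)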
    2 citations
  46. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
  47. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs. Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al. (2019) in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive inferences. The (...)
  48. What Can Artificial Intelligence Do for Scientific Realism? Petr Spelda & Vit Stritecky - 2020 - Axiomathes 31 (1):85-104.
    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather a reinforcement, as it rejects the retrospective interpretations of scientific progress, which brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for (...)
    1 citation
  49. Will Hominoids or Androids Destroy the Earth? A Review of How to Create a Mind by Ray Kurzweil (2012) (2019 revised edition). Michael Richard Starks - 2020 - In Welcome to Hell on Earth: Babies, Climate Change, Bitcoin, Cartels, China, Democracy, Diversity, Dysgenics, Equality, Hackers, Human Rights, Islam, Liberalism, Prosperity, the Web, Chaos, Starvation, Disease, Violence, Artificial Intelligence, War. Las Vegas, NV USA: Reality Press. pp. 146-158.
    A few years ago I could usually tell from a book's title, or at least from its chapter titles, what kinds of philosophical mistakes would be made and how frequently. In nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Ordinarily, however, the scientific matters of fact are generously interlarded with philosophical gibberish as to what the facts mean. The clear distinctions Wittgenstein described some 80 years ago between scientific matters and the various language games that describe them are rarely considered, so one is alternately wowed by the science and dismayed by the incoherence of its analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs a rational logical structure and an understanding of the two systems of thought (dual process theory). If one is to philosophize about this, one needs to understand the distinction between scientific questions of fact and philosophical questions about how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism; but Kurzweil, like most students of behavior, is largely ignorant of these matters. He is enchanted by models, theories, and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are merely ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle, AI's most famous critic, likes to say, clear Conditions of Satisfaction (COS)). I have attempted a start on this in my recent writings. Those who want a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle, 2nd ed. (2019). Those interested in more of my writings may see Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet: Articles and Reviews 2006-2019, 3rd ed. (2019), and Suicidal Utopian Delusions in the 21st Century, 4th ed. (2019).
  50. Psychopower and Ordinary Madness: Reticulated Dividuals in Cognitive Capitalism. Ekin Erkan - 2019 - Cosmos and History 15 (1):214-241.
    Despite the seemingly neutral vantage of using nature for widely-distributed computational purposes, neither post-biological nor post-humanist teleology simply concludes with the real “end of nature” as entailed in the loss of the specific ontological status embedded in the identifier “natural.” As evinced by the ecological crises of the Anthropocene—of which the 2019 Brazil Amazon rainforest fires are only the most recent—our epoch has transfixed the “natural order” and imposed entropic artificial integration, producing living species that become “anoetic,” made to serve (...)
    2 citations