Results for 'Algorithmic explainability, Explanation game, Interpretable machine learning, Pareto frontier, Relevance'

1000+ found
  1. The Explanation Game: A Formal Framework for Interpretable Machine Learning.David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, (...)
    11 citations
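The three-way trade-off in this abstract is concrete enough to sketch. Below is a minimal, hedged illustration (not the authors' formal game) of extracting a Pareto frontier from candidate explanations scored on accuracy, simplicity, and relevance; the candidates and scores are invented for the example.

```python
# Minimal sketch (not the paper's algorithm): keep the explanations that are
# not dominated on (accuracy, simplicity, relevance), all higher-is-better.

def dominates(a, b):
    """a dominates b if it is >= on every dimension and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(candidates):
    """Return the non-dominated candidates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical (accuracy, simplicity, relevance) scores for four explanations.
scores = [(0.9, 0.2, 0.5), (0.7, 0.8, 0.6), (0.6, 0.7, 0.5), (0.8, 0.8, 0.7)]
print(pareto_frontier(scores))  # -> [(0.9, 0.2, 0.5), (0.8, 0.8, 0.7)]
```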
  2. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies From the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has treated explainability as a scientific problem addressed with the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based (...)
  3. Clinical Applications of Machine Learning Algorithms: Beyond the Black Box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    13 citations
  4. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus (...)
    31 citations
  5. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and (...)
    1 citation
  6. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher.Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does (...)
    1 citation
  7. The Pragmatic Turn in Explainable Artificial Intelligence (XAI).Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory (...)
    18 citations
  8. Local Explanations Via Necessity and Sufficiency: Unifying Theory and Practice.David Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32:185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors (...)
    1 citation
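The abstract's central pair of notions can be illustrated with a toy interventional computation. The sketch below is an assumption-laden simplification, not the paper's sound-and-complete algorithm: a deterministic toy model over binary features, with the sufficiency and necessity of one feature estimated by forcing it on or off.

```python
import numpy as np

# Toy sketch of feature-level sufficiency and necessity for a deterministic
# classifier, in the spirit of the abstract above. The model, data, and the
# simple "force the feature on/off" interventions are assumptions for
# illustration; the paper's formal definitions are more general.

def model(x):
    # hypothetical classifier: predicts 1 iff features 0 AND 1 are both on
    return int(x[0] == 1 and x[1] == 1)

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 3))          # binary feature vectors
y = np.array([model(x) for x in X])

j = 0  # feature under study

# Sufficiency: among inputs predicted 0, how often does forcing feature j on
# flip the prediction to 1?
neg = X[y == 0].copy()
neg[:, j] = 1
sufficiency = np.mean([model(x) for x in neg])

# Necessity: among inputs predicted 1, how often does forcing feature j off
# flip the prediction to 0?
pos = X[y == 1].copy()
pos[:, j] = 0
necessity = 1.0 - np.mean([model(x) for x in pos])

print(f"sufficiency ~ {sufficiency:.2f}, necessity ~ {necessity:.2f}")
# Here feature 0 is fully necessary (1.00) but only partly sufficient (~0.33).
```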
  9. Consequences of Unexplainable Machine Learning for the Notions of a Trusted Doctor and Patient Autonomy.Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to (...)
  10. Interprétabilité et explicabilité pour l’apprentissage machine : entre modèles descriptifs, modèles prédictifs et modèles causaux. Une nécessaire clarification épistémologique [Interpretability and Explainability for Machine Learning: Between Descriptive, Predictive, and Causal Models. A Necessary Epistemological Clarification].Christophe Denis & Franck Varenne - 2019 - Actes de la Conférence Nationale En Intelligence Artificielle - CNIA 2019.
    The explainability deficit of machine learning (ML) techniques raises operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, treated as a black box. The first step of this project, presented in this article, is to show that the validation of these black boxes differs epistemologically from the validation used in the mathematical and causal modelling of a phenomenon (...)
    1 citation
  11. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report (...)
    14 citations
  12. On Algorithmic Fairness in Medical Practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about (...)
  13. The Ethics of Algorithms: Mapping the Debate.Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
    132 citations
  14. Machine Learning in Scientific Grant Review: Algorithmically Predicting Project Efficiency in High Energy Physics.Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments (...)
  15. The Role of Imagination in Social Scientific Discovery: Why Machine Discoverers Will Need Imagination Algorithms.Michael Stuart - 2019 - In Mark Addis, Fernand Gobet & Peter Sozou (eds.), Scientific Discovery in the Social Sciences. Springer Verlag.
    When philosophers discuss the possibility of machines making scientific discoveries, they typically focus on discoveries in physics, biology, chemistry and mathematics. Observing the rapid increase of computer-use in science, however, it becomes natural to ask whether there are any scientific domains out of reach for machine discovery. For example, could machines also make discoveries in qualitative social science? Is there something about humans that makes us uniquely suited to studying humans? Is there something about machines that would bar them (...)
    2 citations
  16. Formalising Trade-Offs Beyond Algorithmic Fairness: Lessons From Ethical Philosophy and Welfare Economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted toolkits (...)
    3 citations
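As a concrete anchor for the abstract's talk of conflicting fairness metrics, here is a minimal sketch of one common metric, demographic (statistical) parity. The decisions and group labels are invented, and the paper's welfare-economic formalism is not reproduced.

```python
import numpy as np

# One of the many fairness metrics the abstract alludes to: demographic
# (statistical) parity. Decisions and group labels are invented for
# illustration only.

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = approved
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # binary protected attribute

rate_a = decisions[group == 0].mean()           # approval rate, group 0
rate_b = decisions[group == 1].mean()           # approval rate, group 1
print(f"approval rates {rate_a:.2f} vs {rate_b:.2f}, "
      f"parity gap {abs(rate_a - rate_b):.2f}")  # 0.75 vs 0.25, gap 0.50
```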
  17. The Algorithm Audit: Scoring the Algorithms That Score Us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
    4 citations
  18. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to (...)
    1 citation
  19. Interpreting the Rules of the Game.C. Mantzavinos - 2007 - In Christoph Engel & Fritz Strack (eds.), The Impact of Court Procedure on the Psychology of Judicial Decision-Making. Baden-Baden: Nomos. pp. 16-30.
    After providing a brief overview of the economic theory of judicial decisions this paper presents an argument for why not only the economic theory of judicial decisions, but also the rational approach in general, most often fails in explaining decision-making. Work done within the research program of New Institutionalism is presented as a possible alternative. Within this research program judicial activity is conceptualized as the activity of "interpreting the rules of the game", i.e. the institutions that frame the economic and (...)
  20. Are Algorithms Value-Free? Feminist Theoretical Virtues in Machine Learning.Gabbrielle Johnson - forthcoming - Journal of Moral Philosophy.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist (...)
    1 citation
  21. Understanding From Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding (...)
    26 citations
  22. Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning About Neural Models.Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we (...)
    1 citation
  23. Fair Machine Learning Under Partial Compliance.Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the allocation (...)
    1 citation
  24. Machine Intelligence: A Chimera.Mihai Nadin - 2019 - AI and Society 34 (2):215-242.
    The notion of computation has changed the world more than any previous expressions of knowledge. However, as know-how in its particular algorithmic embodiment, computation is closed to meaning. Therefore, computer-based data processing can only mimic life’s creative aspects, without being creative itself. AI’s current record of accomplishments shows that it automates tasks associated with intelligence, without being intelligent itself. Mistaking the abstract for the concrete has led to the religion of “everything is an output of computation”—even the humankind that (...)
    2 citations
  25. Saving the Mutual Manipulability Account of Constitutive Relevance.Beate Krickel - 2018 - Studies in History and Philosophy of Science Part A 68:58-67.
    Constitutive mechanistic explanations are said to refer to mechanisms that constitute the phenomenon-to-be-explained. The most prominent approach of how to understand this constitution relation is Carl Craver’s mutual manipulability approach to constitutive relevance. Recently, the mutual manipulability approach has come under attack (Leuridan 2012; Baumgartner and Gebharter 2015; Romero 2015; Harinen 2014; Casini and Baumgartner 2016). Roughly, it is argued that this approach is inconsistent because it is spelled out in terms of interventionism (which is an approach to causation), (...)
    16 citations
  26. Cognitive Ontologies, Task Ontologies, and Explanation in Cognitive Neuroscience.Daniel Burnston - forthcoming - In John Bickle, Carl F. Craver & Ann-Sophie Barwich (eds.), Neuroscience Experiment: Philosophical and Scientific Perspectives.
    The traditional approach to explanation in cognitive neuroscience is realist about psychological constructs, and treats them as explanatory. On the “standard framework,” cognitive neuroscientists explain behavior as the result of the instantiation of psychological functions in brain activity. This strategy is questioned by results suggesting the distribution of function in the brain, the multifunctionality of individual parts of the brain, and the overlap in neural realization of purportedly distinct psychological constructs. One response to this in the field has been (...)
  27. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine [Interpretability and Explainability of Phenomena Predicted by Machine Learning].Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The explainability deficit of machine learning (ML) techniques raises operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, treated as a black box. The first step of this project, presented in this article, is to show that the validation of these black boxes differs epistemologically from the validation used in the mathematical and causal modelling of a (...)
  28. AI Powered Anti-Cyber Bullying System Using Machine Learning Algorithm of Multinomial Naïve Bayes and Optimized Linear Support Vector Machine.Tosin Ige & Sikiru Adewale - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 5.
    “Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue.” ~ Anna Maria Chavez. There has been a series of research efforts on cyber bullying, none of which provides a reliable solution. In this research work, we were able to provide a permanent solution by developing a model capable of detecting and intercepting bullying in incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation (...)
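The two classifiers named in the title are standard scikit-learn components. A minimal sketch of that pairing over TF-IDF text features follows; the four-message dataset, labels, and lack of tuning are illustrative assumptions, not the authors' trained 92%-accuracy system.

```python
# The title's two classifiers -- Multinomial Naive Bayes and a linear SVM --
# as plain scikit-learn pipelines over TF-IDF features. Dataset and labels
# are toy assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["you are worthless", "nice game yesterday",
         "nobody likes you", "see you at practice"]
labels = [1, 0, 1, 0]  # 1 = bullying, 0 = benign (toy labels)

for clf in (MultinomialNB(), LinearSVC()):
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(texts, labels)
    print(type(clf).__name__, pipe.predict(["nobody likes a nice game"]))
```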
  29. AI Powered Anti-Cyber Bullying System Using Machine Learning Algorithm of Multinomial Naïve Bayes and Optimized Linear Support Vector Machine.Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 5.
    “Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue.” ~ Anna Maria Chavez. There has been a series of research efforts on cyber bullying, none of which provides a reliable solution. In this research work, we were able to provide a permanent solution by developing a model capable of detecting and intercepting bullying in incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation (...)
  30. Implementation of Data Mining on a Secure Cloud Computing Over a Web API Using Supervised Machine Learning Algorithm.Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 4.
    Ever since the era of the internet ushered in cloud computing, there has been an increase in the demand for the unlimited data available through cloud computing for data analysis, pattern recognition, and technology advancement. With this also come problems of scalability, efficiency, and security threats. This research paper focuses on how data can be dynamically mined in real time for pattern detection in a secure cloud computing environment using a combination of the decision tree algorithm and Random Forest over a restful (...)
  31. Exploring Machine Learning Techniques for Coronary Heart Disease Prediction.Hisham Khdair - 2021 - International Journal of Advanced Computer Science and Applications 12 (5):28-36.
    Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the occurrence of (...)
  32. Fraudulent Financial Transactions Detection Using Machine Learning.Mosa M. M. Megdad, Samy S. Abu-Naser & Bassem S. Abu-Nasser - 2022 - International Journal of Academic Information Systems Research (IJAISR) 6 (3):30-39.
    It is crucial to actively detect the risks of transactions in a financial company to improve customer experience and minimize financial loss. In this study, we compare different machine learning algorithms to effectively and efficiently predict the legitimacy of financial transactions. The algorithms used in this study were: MLP Regressor, Random Forest Classifier, Complement NB, MLP Classifier, Gaussian NB, Bernoulli NB, LGBM Classifier, Ada Boost Classifier, K Neighbors Classifier, Logistic Regression, Bagging Classifier, Decision Tree Classifier and Deep Learning. The (...)
    8 citations
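The study's design, many standard classifiers benchmarked on one transaction dataset, is easy to sketch. The snippet below uses a synthetic imbalanced dataset as a stand-in for the real transactions (an assumption) and compares four of the listed scikit-learn classifiers by cross-validated F1.

```python
# Several of the listed scikit-learn classifiers benchmarked on one dataset,
# mirroring the study's comparative set-up. Synthetic imbalanced data stands
# in for the real transactions (assumption).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95],
                           random_state=0)  # ~5% positives, fraud-like

for clf in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(),
            RandomForestClassifier(), AdaBoostClassifier()):
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{type(clf).__name__:24s} mean F1 = {f1:.3f}")
```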
  33. MACHINE LEARNING IMPROVED ADVANCED DIAGNOSIS OF SOFT TISSUES TUMORS.M. Bavadharani - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):112-123.
    Soft Tissue Tumors (STT) are a type of sarcoma found in tissues that connect, support, and surround body structures. Because of their low frequency in the body and their great variety, they appear heterogeneous when viewed through Magnetic Resonance Imaging (MRI). They are easily mistaken for other diseases, such as fibroadenoma mammae, lymphadenopathy, and struma nodosa, and these diagnostic errors have a considerably unfavorable impact on the clinical treatment of patients. Researchers have proposed (...)
  34. Two Challenges for CI Trustworthiness and How to Address Them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  35. A Bayesian Explanation of the Irrationality of Sexist and Racist Beliefs Involving Generic Content.Paul Silva - 2020 - Synthese 197 (6):2465-2487.
    Various sexist and racist beliefs ascribe certain negative qualities to people of a given sex or race. Epistemic allies are people who think that in normal circumstances rationality requires the rejection of such sexist and racist beliefs upon learning of many counter-instances, i.e. members of these groups who lack the target negative quality. Accordingly, epistemic allies think that those who give up their sexist or racist beliefs in such circumstances are rationally responding to their evidence, while those who do not (...)
    4 citations
  36. Inductive Risk, Understanding, and Opaque Machine Learning Models.Emily Sullivan - forthcoming - Philosophy of Science:1-13.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this paper, I argue that non-epistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal (...)
  37. A Revolutionary New Metaphysics, Based on Consciousness, and a Call to All Philosophers.Lorna Green
    June 2022. We are in a unique moment of our history, unlike any previous moment ever. Virtually all human economies are based on the destruction of the Earth, and we are now at a place in our history where we can foresee, if we continue on as we are, our own extinction. As I write, the planet is in deep trouble: heat, fires, great storms, (...)
  38. Prediction of Heart Disease Using a Collection of Machine and Deep Learning Algorithms.Ali M. A. Barhoom, Abdelbaset Almasri, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2022 - International Journal of Engineering and Information Systems (IJEAIS) 6 (4):1-13.
    Heart diseases are increasing daily at a rapid rate, and it is alarming and vital to predict heart diseases early. The diagnosis of heart disease is a challenging task, i.e., it must be done accurately and proficiently. The aim of this study is to determine which patient is more likely to have heart disease based on a number of medical features. We organized a heart disease prediction model to identify whether the person is likely to be diagnosed with a (...)
    2 citations
  39. Max Weber on Explanation of Human Actions: Towards a Reconstruction.Koshy Tharakan - 1995 - Journal of Indian Council of Philosophical Research 12 (3):21-30.
    Recent discussions on the explanation of action are permeated with two divergent models of explanation, namely the causal model and the non-causal model. For causalists, the notion of explanation is intimately related to that of causation. As Davidson contends, any rudimentary explanation of an event gives its cause. More sophisticated explanations may cite a relevant law in support of a singular causal claim. The non-causalists, on the other hand, hold that when we explain an action we do (...)
  40. Sarcasm Detection in Headline News Using Machine and Deep Learning Algorithms.Alaa Barhoom, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2022 - International Journal of Engineering and Information Systems (IJEAIS) 6 (4):66-73.
    Sarcasm is commonly used in news, and detecting sarcasm in headline news is challenging for humans and thus for computers. The media regularly seem to engage in sarcasm in their news headlines to get the attention of people. However, people find it tough to detect the sarcasm in headline news, hence receiving a mistaken idea about that specific news and additionally spreading it to their friends, colleagues, etc. Consequently, an intelligent system that is able to distinguish between sarcasm (...)
    4 citations
  41. Information, Learning and Falsification.David Balduzzi - 2011
    There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out [2]. The third, (...)
    1 citation
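The second of the three notions in this abstract, Shannon information, admits a short worked example: the information of observing an event is -log2 of its probability, and the expected information over an ensemble is its entropy. The ensemble below is invented for illustration.

```python
import math

# Worked toy example of Shannon information, the second notion above. The
# ensemble is an invented illustration.

ensemble = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

for event, p in ensemble.items():
    print(f"observing {event!r}: {-math.log2(p):.0f} bits")

# Expected information per observation = entropy of the ensemble.
entropy = -sum(p * math.log2(p) for p in ensemble.values())
print(f"entropy = {entropy:.2f} bits")  # 0.5*1 + 0.25*2 + 2*0.125*3 = 1.75
```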
  42. Overhead Cross Section Sampling Machine Learning Based Cervical Cancer Risk Factors Prediction.A. Peter Soosai Anandaraj - 2021 - Turkish Online Journal of Qualitative Inquiry (TOJQI) 12 (6):7697-7715.
    Most forms of human papillomavirus can create alterations on a woman's cervix that can lead to cervical cancer in the long run, while others can produce genital or epidermal tumors. Cervical cancer is a leading cause of morbidity and mortality among women in low- and middle-income countries. The prediction of cervical cancer still remains an open challenge, as there are several risk factors affecting the cervix of women. By considering the above, the cervical cancer risk factor dataset from KAGGLE (...)
  43. Confirmation in a Branching World: The Everett Interpretation and Sleeping Beauty.Darren Bradley - 2011 - British Journal for the Philosophy of Science 62 (2):323-342.
    Sometimes we learn what the world is like, and sometimes we learn where in the world we are. Are there any interesting differences between the two kinds of cases? The main aim of this article is to argue that learning where we are in the world brings into view the same kind of observation selection effects that operate when sampling from a population. I will first explain what observation selection effects are ( Section 1 ) and how they are relevant (...)
    18 citations
  44. Towards Knowledge-Driven Distillation and Explanation of Black-Box Models.Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Troquard, Oliver Kutz & Daniele Porello - 2021 - In Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to (...)
  45. Prediction of Heart Disease Using a Collection of Machine and Deep Learning Algorithms.Ali M. A. Barhoom, Abdelbaset Almasri, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2022 - International Journal of Engineering and Information Systems (IJEAIS) 6 (4):1-13.
    Heart diseases are increasing daily at a rapid rate, and it is alarming and vital to predict heart diseases early. The diagnosis of heart disease is a challenging task, i.e., it must be done accurately and proficiently. The aim of this study is to determine which patient is more likely to have heart disease based on a number of medical features. We organized a heart disease prediction model to identify whether the person is likely to be diagnosed with a (...)
    2 citations
  46. Semantic Information G Theory and Logical Bayesian Inference for Machine Learning.Chenguang Lu - 2019 - Information 10 (8):261.
    An important problem with machine learning is that when label number n>2, it is very difficult to construct and optimize a group of learning functions, and we wish that optimized learning functions are still useful when the prior distribution P(x) (where x is an instance) is changed. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory (...)
    1 citation
  47. Realism, Reliability, and Epistemic Possibility: On Modally Interpreting the Benacerraf–Field Challenge.Brett Topey - 2021 - Synthese 199 (1-2):4415-4436.
    A Benacerraf–Field challenge is an argument intended to show that common realist theories of a given domain are untenable: such theories make it impossible to explain how we’ve arrived at the truth in that domain, and insofar as a theory makes our reliability in a domain inexplicable, we must either reject that theory or give up the relevant beliefs. But there’s no consensus about what would count here as a satisfactory explanation of our reliability. It’s sometimes suggested that giving (...)
    2 citations
  48. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas.Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (both in cases of a successful or unsuccessful output). This is due to the fact that the classical problem (...)
  49. Building Machines That Learn and Think About Morality.Christopher Burr & Geoff Keeling - 2018 - In Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also (...)
    2 citations
  50. Prognostic System for Heart Disease Using Machine Learning: A Review.R. Senthilkumar - 2021 - Journal of Science Technology and Research (JSTAR) 2 (1):33-38.
    In today's world, it has become difficult to keep up with daily routine check-ups. The heart disease system is an end-user support and online consultation project. The motive behind it is to let a person know about their heart-related problems and how serious the disease is, making it easy for them to access and keep track of their health. Thus, it is important to predict the disease as early as possible. Attributes such as BP, cholesterol, and diabetes are (...)
1 — 50 / 1000