Results for 'Fair Machine Learning'

999 results found
  1. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect (...)
    (1 citation)
  2. Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a (...)
    (2 citations)
  3. Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  4. Big Data Analytics in Healthcare: Exploring the Role of Machine Learning in Predicting Patient Outcomes and Improving Healthcare Delivery. Federico Del Giorgio Solfa & Fernando Rogelio Simonato - 2023 - International Journal of Computations Information and Manufacturing (IJCIM) 3 (1):1-9.
    Healthcare professionals decide wisely about personalized medicine, treatment plans, and resource allocation by utilizing big data analytics and machine learning. To guarantee that algorithmic recommendations are impartial and fair, however, ethical issues relating to prejudice and data privacy must be taken into account. Big data analytics and machine learning have a great potential to disrupt healthcare, and as these technologies continue to evolve, new opportunities to reform healthcare and enhance patient outcomes may arise. In order (...)
  5. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness (...)
    (3 citations)
  6. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world (...)
    (10 citations)
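The statistical-parity metrics this abstract refers to can be made concrete with a toy sketch (hypothetical data and helper names, not code from the paper): demographic parity compares positive-decision rates across groups, and the gap between rates measures the deviation from parity.

```python
# Toy illustration of (demographic) statistical parity: compare the rate of
# positive decisions across groups. Data and function names are hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, decision) pairs, decision in {0, 1}."""
    totals, selected = {}, {}
    for group, d in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + d
    return {g: selected[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
gap = abs(rates["A"] - rates["B"])  # deviation from statistical parity
print(rates, gap)
```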
  7. Just Machines. Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring- or decisionmaking systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, (...)
    (5 citations)
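The two criteria at issue in this discussion can be illustrated with a small, hypothetical sketch (toy data; the helper names are ours, not the paper's): classification parity compares accuracy across groups, while calibration compares observed outcome rates among equally scored individuals.

```python
# Toy contrast between classification parity (equal accuracy across groups)
# and calibration (equal outcome rates among equally scored individuals).
# Data and helper names are hypothetical, not from the paper.

def accuracy_by_group(records):
    """records: list of (group, predicted_label, true_label)."""
    totals, correct = {}, {}
    for group, pred, true in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == true)
    return {g: correct[g] / totals[g] for g in totals}

def calibration_in_bin(records, score_bin):
    """records: list of (group, score, true_label); share of positives
    among records whose (rounded) score falls in score_bin, per group."""
    totals, positives = {}, {}
    for group, score, true in records:
        if round(score, 1) != score_bin:
            continue
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + true
    return {g: positives[g] / totals[g] for g in totals}

preds = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0)]
acc = accuracy_by_group(preds)         # unequal accuracy: classification parity fails
scores = [("A", 0.8, 1), ("A", 0.8, 1), ("B", 0.8, 1), ("B", 0.8, 0)]
cal = calibration_in_bin(scores, 0.8)  # same score, different meaning per group
```

The impossibility results the abstract mentions concern exactly this tension: outside degenerate cases, a scoring system cannot satisfy both kinds of parity at once.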
  8. The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
    (1 citation)
  9. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and, with the use of machine learning techniques and big data, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
    (26 citations)
  10. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness (...)
    (2 citations)
  11. Adversarial Sampling for Fairness Testing in Deep Neural Network. Tosin Ige, William Marfo, Justin Tonkinson, Sikiru Adewale & Bolanle Hafiz Matti - 2023 - International Journal of Advanced Computer Science and Applications 14 (2).
    In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. While several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attack, some of which include adversarial training algorithms, there is still the pitfall that adversarial training algorithms tend to cause disparities in accuracy and robustness among different groups. Our research is (...)
    (3 citations)
  12. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted (...)
    (10 citations)
  13. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
    (6 citations)
  14. Can machines think? The controversy that led to the Turing test. Bernardo Gonçalves - 2023 - AI and Society 38 (6):2499-2509.
    Turing’s much debated test has turned 70 and is still fairly controversial. His 1950 paper is seen as a complex and multilayered text, and key questions about it remain largely unanswered. Why did Turing select learning from experience as the best approach to achieve machine intelligence? Why did he spend several years working with chess playing as a task to illustrate and test for machine intelligence only to trade it out for conversational question-answering in 1950? Why did (...)
    (3 citations)
  15. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because (...)
    (1 citation)
  16. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection. Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed with analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles (...)
  17. Performance vs. competence in human–machine comparisons. Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences 41.
    Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. (...)
    (6 citations)
  18. A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform. Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine (...)
  19. Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments. Ahmed Izzidien - 2021 - AI and Society (March 2021):1-20.
    Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches, has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover if a verb is unfair or fair. It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises. That individuals consider fair acts those that they would (...)
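The paper's core move, reading fairness relations off the geometry of word vectors, reduces to similarity in vector space. A toy sketch with made-up 3-dimensional vectors (real embeddings such as word2vec or GloVe have hundreds of dimensions learned from corpora, so nothing here is the paper's actual method or data):

```python
import math

# Toy version of a vector-based fairness reading: score a verb by its cosine
# similarity to a "fair" direction. The 3-d vectors below are invented purely
# for illustration.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vectors = {
    "fair":  [0.9, 0.1, 0.3],
    "help":  [0.8, 0.2, 0.4],
    "steal": [-0.7, 0.9, 0.1],
}
for verb in ("help", "steal"):
    print(verb, round(cosine(vectors[verb], vectors["fair"]), 3))
```

Because the relational structure is already present in pretrained embeddings, an approach like this needs no task-specific training, which is the point the abstract emphasises.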
  20. Machine Learning-Based Diabetes Prediction: Feature Analysis and Model Assessment. Fares Wael Al-Gharabawi & Samy S. Abu-Naser - 2023 - International Journal of Academic Engineering Research (IJAER) 7 (9):10-17.
    This study employs machine learning to predict diabetes using a Kaggle dataset with 13 features. Our three-layer model achieves an accuracy of 98.73% and an average error of 0.01%. Feature analysis identifies Age, Gender, Polyuria, Polydipsia, Visual blurring, sudden weight loss, partial paresis, delayed healing, irritability, Muscle stiffness, Alopecia, Genital thrush, Weakness, and Obesity as influential predictors. These findings have clinical significance for early diabetes risk assessment. While our research addresses gaps in the field, further work is needed (...)
    (1 citation)
  21. Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in (...)
    (2 citations)
  22. Machine Learning, Misinformation, and Citizen Science. Adrian K. Yee - 2023 - European Journal for Philosophy of Science 13 (56):1-24.
    Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models need to be assessed via more democratic criteria than has previously been recognized.
  23. Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - forthcoming - Philosophy Compass.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning, as far as they are concerned with reliability.
  24. Medical Image Classification with Machine Learning Classifier. Destiny Agboro - forthcoming - Journal of Computer Science.
    In contemporary healthcare, medical image categorization is essential for illness prediction, diagnosis, and therapy planning. The emergence of digital imaging technology has led to a significant increase in research into the use of machine learning (ML) techniques for the categorization of images in medical data. We provide a thorough summary of recent developments in this area in this review, using knowledge from the most recent research and cutting-edge methods. We begin by discussing the unique challenges and opportunities associated with (...)
    (1 citation)
  25. Machine learning, justification, and computational reliabilism. Juan Manuel Duran - 2023
    This article asks the question, “what is reliable machine learning?” As I intend to answer it, this is a question about epistemic justification. Reliable machine learning gives justification for believing its output. Current approaches to reliability (e.g., transparency) involve showing the inner workings of an algorithm (functions, variables, etc.) and how they render outputs. We then have justification for believing the output because we know how it was computed. Thus, justification is contingent on what can be (...)
  26. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models (...)
    (48 citations)
  27. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    (16 citations)
  28. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments (...)
    (1 citation)
  29. Machines learning values. Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. New York, USA: Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: (...)
    (2 citations)
  30. Machine Learning and Job Posting Classification: A Comparative Study. Ibrahim M. Nasser & Amjad H. Alzaanin - 2020 - International Journal of Engineering and Information Systems (IJEAIS) 4 (9):06-14.
    In this paper, we investigated multiple machine learning classifiers which are, Multinomial Naive Bayes, Support Vector Machine, Decision Tree, K Nearest Neighbors, and Random Forest in a text classification problem. The data we used contains real and fake job posts. We cleaned and pre-processed our data, then we applied TF-IDF for feature extraction. After we implemented the classifiers, we trained and evaluated them. Evaluation metrics used are precision, recall, f-measure, and accuracy. For each classifier, results were summarized (...)
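The TF-IDF feature-extraction step this abstract describes can be sketched in a few lines (a minimal illustration assuming whitespace tokenisation and invented example documents; a production pipeline would typically use a library such as scikit-learn's TfidfVectorizer, which adds smoothing, normalisation and n-grams):

```python
import math
from collections import Counter

# Minimal TF-IDF sketch: term frequency within a document, weighted down by
# how many documents contain the term. Example job-post snippets are invented.

def tf_idf(docs):
    n = len(docs)
    tokenised = [doc.lower().split() for doc in docs]
    # document frequency: in how many documents does each term appear?
    df = Counter(term for tokens in tokenised for term in set(tokens))
    weights = []
    for tokens in tokenised:
        tf = Counter(tokens)
        weights.append({t: (tf[t] / len(tokens)) * math.log(n / df[t]) for t in tf})
    return weights

docs = ["remote data engineer", "data engineer onsite", "win cash now"]
vecs = tf_idf(docs)  # terms unique to one post receive the highest weights
```

The resulting per-document weight dictionaries are what a classifier such as Multinomial Naive Bayes or an SVM would then be trained on.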
  31. Fraudulent Financial Transactions Detection Using Machine Learning. Mosa M. M. Megdad, Samy S. Abu-Naser & Bassem S. Abu-Nasser - 2022 - International Journal of Academic Information Systems Research (IJAISR) 6 (3):30-39.
    It is crucial to actively detect the risks of transactions in a financial company to improve customer experience and minimize financial loss. In this study, we compare different machine learning algorithms to effectively and efficiently predict the legitimacy of financial transactions. The algorithms used in this study were: MLP Regressor, Random Forest Classifier, Complement NB, MLP Classifier, Gaussian NB, Bernoulli NB, LGBM Classifier, Ada Boost Classifier, K Neighbors Classifier, Logistic Regression, Bagging Classifier, Decision Tree Classifier and Deep (...). The dataset was collected from the Kaggle repository. It consists of 6362620 rows and 10 columns. The best classifier with the unbalanced dataset was the Random Forest Classifier: accuracy 99.97%, precision 99.96%, recall 99.97%, and F1-score 99.96%. However, the best classifier with the balanced dataset was the Bagging Classifier: accuracy 99.96%, precision 99.95%, recall 99.98%, and F1-score 99.96%.
    (26 citations)
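The evaluation metrics the study reports (precision, recall, F1) can be computed directly from binary predictions. A toy example (hypothetical data, not the Kaggle set) also shows why plain accuracy is misleading on fraud data where legitimate transactions dominate:

```python
# Precision, recall and F1 from binary predictions, with the standard
# zero-division convention of reporting 0.0 when a denominator is empty.

def prf(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 1 fraudulent transaction in 10: predicting "all legitimate" scores 90%
# accuracy yet catches no fraud at all.
y_true = [0] * 9 + [1]
print(prf(y_true, [0] * 10))  # (0.0, 0.0, 0.0)
```

This is why the study reports precision, recall and F1 alongside accuracy, and why it compares balanced against unbalanced versions of the dataset.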
  32. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm, Sweden: pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and (...)
    (1 citation)
  33. The Use of Machine Learning Methods for Image Classification in Medical Data. Destiny Agboro - forthcoming - International Journal of Ethics.
    Integrating medical imaging with computing technologies, such as Artificial Intelligence (AI) and its subsets: Machine learning (ML) and Deep Learning (DL) has advanced into an essential facet of present-day medicine, signaling a pivotal role in diagnostic decision-making and treatment plans (Huang et al., 2023). The significance of medical imaging is escalated by its sustained growth within the realm of modern healthcare (Varoquaux and Cheplygina, 2022). Nevertheless, the ever-increasing volume of medical images compared to the availability of imaging (...)
  34. MACHINE LEARNING IMPROVED ADVANCED DIAGNOSIS OF SOFT TISSUES TUMORS. M. Bavadharani - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):112-123.
    Soft Tissue Tumors (STT) are a type of sarcoma found in tissues that connect, support, and surround body structures. Due to their low frequency in the body and their great variety, they appear heterogeneous when seen through Magnetic Resonance Imaging (MRI). They are easily mistaken for other conditions, such as fibroadenoma mammae, lymphadenopathy, and struma nodosa, and these diagnostic errors have a considerably unfavorable impact on the clinical treatment of patients. Analysts have proposed (...)
  35. Exploring Machine Learning Techniques for Coronary Heart Disease Prediction. Hisham Khdair - 2021 - International Journal of Advanced Computer Science and Applications 12 (5):28-36.
    Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the (...)
  36. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as (...)
  37. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target (...)
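The single experimental paradigm the survey describes, splitting the available data into training and test sets and reading generalisation off the held-out set, can be sketched as follows (a generic illustration; the names and split ratio are ours, not the paper's):

```python
import random

# Hold out part of the data, train on the rest, and estimate generalisation
# on the unseen held-out portion.

def train_test_split(data, test_fraction=0.25, seed=0):
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, test = train_test_split(data)
# A model would now be fit on `train`; its accuracy on `test` is the
# a-posteriori evidence of generalisation that the survey discusses.
```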
  38. The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used (...)
  39. Widening Access to Applied Machine Learning With TinyML. Vijay Reddi, Brian Plancher, Susan Kennedy, Laurence Moroney, Pete Warden, Lara Suzuki, Anant Agarwal, Colby Banbury, Massimo Banzi, Matthew Bennett, Benjamin Brown, Sharad Chitlangia, Radhika Ghosal, Sarah Grafman, Rupert Jaeger, Srivatsan Krishnan, Maximilian Lam, Daniel Leiker, Cara Mann, Mark Mazumder, Dominic Pajak, Dhilan Ramaprasad, J. Evan Smith, Matthew Stewart & Dustin Tingley - 2022 - Harvard Data Science Review 4 (1).
    Broadening access to both computational and educational resources is critical to diffusing machine learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this article, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, applied ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML (...)
  40. The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to (...)
    16 citations
  41. Machine Learning Application to Predict The Quality of Watermelon Using JustNN.Ibrahim M. Nasser - 2019 - International Journal of Engineering and Information Systems (IJEAIS) 3 (10):1-8.
    In this paper, a predictive artificial neural network (ANN) model was developed and validated for predicting whether a watermelon is good or bad; the model was built in the JustNN software environment. Prediction is based on selected watermelon attributes used as inputs to the ANN, such as color, density, and sugar rate. The model went through multiple learning-validation cycles until the error reached zero, so the model is 100% accurate (...)
  42. How Values Shape the Machine Learning Opacity Problem.Emily Sullivan - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation. Routledge. pp. 306-322.
    One of the main worries about machine learning model opacity is that we cannot know enough about how a model works to fully understand the decisions it makes. But how much of a problem is model opacity really? This chapter argues that the problem of machine learning model opacity is entangled with non-epistemic values. The chapter considers three different stages of the machine learning modeling process that correspond to understanding phenomena: (i) model acceptance and linking (...)
  43. Disease Identification using Machine Learning and NLP.S. Akila - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):78-92.
    Artificial Intelligence (AI) technologies are now widely used in a variety of fields to aid with knowledge acquisition and decision-making. Health information systems, in particular, can gain the most from AI advantages. Recently, symptoms-based illness prediction research and manufacturing have grown in popularity in the healthcare business. Several scholars and organisations have expressed an interest in applying contemporary computational tools to analyse and create novel approaches for rapidly and accurately predicting illnesses. In this study, we present a paradigm for assessing (...)
  44. An Unconventional Look at AI: Why Today’s Machine Learning Systems are not Intelligent.Nancy Salay - 2020 - In LINKs: The Art of Linking, an Annual Transdisciplinary Review, Special Edition 1, Unconventional Computing. pp. 62-67.
    Machine learning systems (MLS) that model low-level processes are the cornerstones of current AI systems. These ‘indirect’ learners are good at classifying kinds that are distinguished solely by their manifest physical properties. But the more a kind is a function of spatio-temporally extended properties — words, situation-types, social norms — the less likely an MLS will be able to track it. Systems that can interact with objects at the individual level, on the other hand, and that can sustain (...)
  45. Inductive Risk, Understanding, and Opaque Machine Learning Models.Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an (...)
    6 citations
  46. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges.Joshua Hatherley & Robert Sparrow - 2023 - Journal of the American Medical Informatics Association 30 (2):361-366.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. -/- Target audience: The target audiences for (...)
  47. Overhead Cross Section Sampling Machine Learning based Cervical Cancer Risk Factors Prediction.A. Peter Soosai Anandaraj - 2021 - Turkish Online Journal of Qualitative Inquiry (TOJQI) 12 (6):7697-7715.
    Most forms of human papillomavirus can create alterations on a woman's cervix that can lead to cervical cancer in the long run, while others can produce genital or epidermal tumors. Cervical cancer is a leading cause of morbidity and mortality among women in low- and middle-income countries. The prediction of cervical cancer remains an open challenge, as several risk factors affect the cervix. By considering the above, the cervical cancer risk factor dataset from KAGGLE (...)
  48. AI Powered Anti-Cyber bullying system using Machine Learning Algorithm of Multinomial Naïve Bayes and Optimized Linear Support Vector Machine.Tosin Ige & Sikiru Adewale - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 5.
    “Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue.” ~ Anna Maria Chavez. There has been a series of research efforts on cyber bullying that have failed to provide a reliable solution. In this research work, we were able to provide a permanent solution to this problem by developing a model capable of detecting and intercepting bullying in incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation (...)
    4 citations
  49. An Introduction to Artificial Psychology: Application of Fuzzy Set Theory and Deep Machine Learning in Psychological Research Using R.Hojjatollah Farahani - 2023 - Springer Cham. Edited by Hojjatollah Farahani, Marija Blagojević, Parviz Azadfallah, Peter Watson, Forough Esrafilian & Sara Saljoughi.
    Artificial Psychology (AP) is a highly multidisciplinary field of study in psychology. AP tries to solve problems which occur when psychologists do research and need a robust analysis method. Conventional statistical approaches have deep rooted limitations. These approaches are excellent on paper but often fail to model the real world. Mind researchers have been trying to overcome this by simplifying the models being studied. This stance has not received much practical attention recently. Promoting and improving artificial intelligence helps mind researchers (...)
  50. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains.Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prescription drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate that a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi argues that a (...)