Results for 'Machine Learning (ML)'

50 found
  1. Machine Learning-Based Cyberbullying Detection System with Enhanced Accuracy and Speed. M. Arulselvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):421-429.
    The rise of social media has created a new platform for communication and interaction, but it has also facilitated the spread of harmful behaviors such as cyberbullying. Detecting and mitigating cyberbullying on social media platforms is a critical challenge that requires advanced technological solutions. This paper presents a novel approach to cyberbullying detection using a combination of supervised machine learning (ML) and natural language processing (NLP) techniques, enhanced by optimization algorithms. The proposed system is designed to identify and (...)
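    A minimal sketch of the kind of pipeline the abstract describes (NLP features feeding a supervised classifier); the in-line example texts, labels, and parameter choices are illustrative assumptions, not the paper's system or data.

```python
# Toy supervised text-classification pipeline: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "you are so stupid nobody likes you",   # invented examples, not real data
    "great game last night, congrats!",
    "shut up loser",
    "thanks for sharing, really helpful",
]
labels = [1, 0, 1, 0]  # 1 = bullying, 0 = benign (hypothetical labels)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

# Classify a new message (output is not meaningful at this toy data scale).
print(clf.predict(["nobody likes you, loser"]))
```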
  2. The Use of Machine Learning Methods for Image Classification in Medical Data. Destiny Agboro - forthcoming - International Journal of Ethics.
    Integrating medical imaging with computing technologies, such as Artificial Intelligence (AI) and its subsets: Machine learning (ML) and Deep Learning (DL) has advanced into an essential facet of present-day medicine, signaling a pivotal role in diagnostic decision-making and treatment plans (Huang et al., 2023). The significance of medical imaging is escalated by its sustained growth within the realm of modern healthcare (Varoquaux and Cheplygina, 2022). Nevertheless, the ever-increasing volume of medical images compared to the availability of imaging (...)
  3. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges. Joshua Hatherley & Robert Sparrow - 2023 - Journal of the American Medical Informatics Association 30 (2):361-366.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. Target audience: The target audiences for (...)
    1 citation
  4. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to (...)
    1 citation
  5. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - 2021 - ACM Computing Surveys 54 (3):1-18.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target (...)
    1 citation
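    A small sketch of the experimental paradigm the abstract refers to: split the data, train on one part, and use the held-out part to estimate generalisation; the dataset and model here are arbitrary stand-ins.

```python
# Hold-out evaluation: the held-out test set estimates generalisation to unseen samples.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0  # the split is the core of the paradigm
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {test_acc:.3f}")  # a posteriori check of generalisation
```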
  6. MACHINE LEARNING IMPROVED ADVANCED DIAGNOSIS OF SOFT TISSUES TUMORS. M. Bavadharani - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):112-123.
    Soft Tissue Tumors (STT) are a form of sarcoma found in the tissues that connect, support, and surround body structures. Because of their low frequency in the body and their great diversity, they appear heterogeneous when viewed through Magnetic Resonance Imaging (MRI). They are easily confused with other conditions, for example fibroadenoma mammae, lymphadenopathy, and struma nodosa, and these diagnostic errors have a considerable adverse effect on the clinical treatment of patients. Researchers have proposed (...)
  7. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as (...)
  8. Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an (...)
    6 citations
  9. Should the use of adaptive machine learning systems in medicine be classified as research? Robert Sparrow, Joshua Hatherley, Justin Oakley & Chris Bain - 2024 - American Journal of Bioethics 24 (10):58-69.
    A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even (...)
    16 citations
  10. OPTIMIZING CONSUMER BEHAVIOUR ANALYTICS THROUGH ADVANCED MACHINE LEARNING ALGORITHMS. S. Yoheswari - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):360-368.
    Consumer behavior analytics has become a pivotal aspect for businesses to understand and predict customer preferences and actions. The advent of machine learning (ML) algorithms has revolutionized this field by providing sophisticated tools for data analysis, enabling businesses to make data-driven decisions. However, the effectiveness of these ML algorithms significantly hinges on the optimization techniques employed, which can enhance model accuracy and efficiency. This paper explores the application of various optimization techniques in consumer behaviour analytics using machine (...)
  11. OPTIMIZING CONSUMER BEHAVIOUR ANALYTICS THROUGH ADVANCED MACHINE LEARNING ALGORITHMS. Yoheswari S. - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):362-370.
    Consumer behavior analytics has become a pivotal aspect for businesses to understand and predict customer preferences and actions. The advent of machine learning (ML) algorithms has revolutionized this field by providing sophisticated tools for data analysis, enabling businesses to make data-driven decisions. However, the effectiveness of these ML algorithms significantly hinges on the optimization techniques employed, which can enhance model accuracy and efficiency. This paper explores the application of various optimization techniques in consumer behaviour analytics using machine (...)
  12. Automated Cyberbullying Detection Framework Using NLP and Supervised Machine Learning Models. M. Arul Selvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):421-432.
    The rise of social media has created a new platform for communication and interaction, but it has also facilitated the spread of harmful behaviors such as cyberbullying. Detecting and mitigating cyberbullying on social media platforms is a critical challenge that requires advanced technological solutions. This paper presents a novel approach to cyberbullying detection using a combination of supervised machine learning (ML) and natural language processing (NLP) techniques, enhanced by optimization algorithms. The proposed system is designed to identify and (...)
  13. Understanding with Toy Surrogate Models in Machine Learning. Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their (...)
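    A rough sketch of a global surrogate of the sort mentioned above: a shallow decision tree fitted to an opaque model's predictions, with its fidelity to the black box reported; the dataset, the choice of black box, and the depth limit are assumptions.

```python
# Fit a sparse decision tree to the predictions of an opaque model and check fidelity.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X, y = data.data, data.target

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_preds = black_box.predict(X)  # labels produced by the opaque model

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)
fidelity = accuracy_score(bb_preds, surrogate.predict(X))  # agreement with the black box

print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```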
  14. OPTIMIZED CYBERBULLYING DETECTION IN SOCIAL MEDIA USING SUPERVISED MACHINE LEARNING AND NLP TECHNIQUES. S. Yoheswari - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):421-435.
    The rise of social media has created a new platform for communication and interaction, but it has also facilitated the spread of harmful behaviors such as cyberbullying. Detecting and mitigating cyberbullying on social media platforms is a critical challenge that requires advanced technological solutions. This paper presents a novel approach to cyberbullying detection using a combination of supervised machine learning (ML) and natural language processing (NLP) techniques, enhanced by optimization algorithms. The proposed system is designed to identify and (...)
  15. Securing the Internet of Things: A Study on Machine Learning-Based Solutions for IoT Security and Privacy Challenges. Aziz Ullah Karimy & P. Chandrasekhar Reddy - 2023 - Zkg International 8 (2):30-65.
    The Internet of Things (IoT) is a rapidly growing technology that connects and integrates billions of smart devices, generating vast volumes of data and impacting various aspects of daily life and industrial systems. However, the inherent characteristics of IoT devices, including limited battery life, universal connectivity, resource-constrained design, and mobility, make them highly vulnerable to cybersecurity attacks, which are increasing at an alarming rate. As a result, IoT security and privacy have gained significant research attention, with a particular focus on (...)
  16. Medical Image Classification with Machine Learning Classifier. Destiny Agboro - forthcoming - Journal of Computer Science.
    In contemporary healthcare, medical image categorization is essential for illness prediction, diagnosis, and therapy planning. The emergence of digital imaging technology has led to a significant increase in research into the use of machine learning (ML) techniques for the categorization of images in medical data. We provide a thorough summary of recent developments in this area in this review, using knowledge from the most recent research and cutting-edge methods. We begin by discussing the unique challenges and opportunities associated with (...)
    1 citation
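    Purely illustrative sketch of a compact convolutional classifier of the general kind such reviews cover for medical image categorization; the image size, number of classes, architecture, and the random placeholder arrays are all assumptions.

```python
# Small CNN trained on random placeholder "scans" to stand in for real medical images.
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 64, 64, 1).astype("float32")   # 200 fake 64x64 grayscale images
y = np.random.randint(0, 3, size=(200,))                # 3 hypothetical diagnostic classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)     # placeholder training run
print(model.predict(X[:1]).round(3))                    # class probabilities for one image
```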
  17. Machine Learning-Driven Optimization for Accurate Cardiovascular Disease Prediction. Yoheswari S. - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):350-359.
    The research methodology involves data preprocessing, feature engineering, model training, and performance evaluation. We employ optimization methods such as Genetic Algorithms and Grid Search to fine-tune model parameters, ensuring robust and generalizable models. The dataset used includes patient medical records, with features like age, blood pressure, cholesterol levels, and lifestyle habits serving as inputs for the ML models. Evaluation metrics, including accuracy, precision, recall, F1-score, and the area under the ROC curve (AUC-ROC), assess the model's predictive power.
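    A minimal sketch of the tuning-and-evaluation loop outlined above: Grid Search over model parameters followed by the listed metrics on a held-out split; a synthetic table stands in for the patient records, and the genetic-algorithm variant is omitted since it would require an additional library.

```python
# Grid Search hyperparameter tuning plus accuracy/precision/recall/F1/AUC-ROC reporting.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [3, 5, None]},
    scoring="roc_auc",
    cv=5,
)
grid.fit(X_tr, y_tr)

pred = grid.predict(X_te)
proba = grid.predict_proba(X_te)[:, 1]
print("best params:", grid.best_params_)
print(f"accuracy  {accuracy_score(y_te, pred):.3f}")
print(f"precision {precision_score(y_te, pred):.3f}")
print(f"recall    {recall_score(y_te, pred):.3f}")
print(f"F1        {f1_score(y_te, pred):.3f}")
print(f"AUC-ROC   {roc_auc_score(y_te, proba):.3f}")
```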
  18. An Unconventional Look at AI: Why Today’s Machine Learning Systems are not Intelligent. Nancy Salay - 2020 - In LINKs: The Art of Linking, an Annual Transdisciplinary Review, Special Edition 1, Unconventional Computing. pp. 62-67.
    Machine learning systems (MLS) that model low-level processes are the cornerstones of current AI systems. These ‘indirect’ learners are good at classifying kinds that are distinguished solely by their manifest physical properties. But the more a kind is a function of spatio-temporally extended properties — words, situation-types, social norms — the less likely an MLS will be able to track it. Systems that can interact with objects at the individual level, on the other hand, and that can sustain (...)
  19. Widening Access to Applied Machine Learning With TinyML. Vijay Reddi, Brian Plancher, Susan Kennedy, Laurence Moroney, Pete Warden, Lara Suzuki, Anant Agarwal, Colby Banbury, Massimo Banzi, Matthew Bennett, Benjamin Brown, Sharad Chitlangia, Radhika Ghosal, Sarah Grafman, Rupert Jaeger, Srivatsan Krishnan, Maximilian Lam, Daniel Leiker, Cara Mann, Mark Mazumder, Dominic Pajak, Dhilan Ramaprasad, J. Evan Smith, Matthew Stewart & Dustin Tingley - 2022 - Harvard Data Science Review 4 (1).
    Broadening access to both computational and educational resources is critical to diffusing machine learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this article, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, applied ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML (...)
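    A small sketch of the workflow TinyML implies, shrinking a deliberately tiny Keras model into a TensorFlow Lite flatbuffer for a microcontroller-class device; the model shape and task are assumptions, not material from the course or article.

```python
# Convert a very small Keras model to TensorFlow Lite with default post-training optimization.
import tensorflow as tf

# A deliberately tiny model (e.g., a 3-class sensor task; shapes are assumptions).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default quantization
tflite_bytes = converter.convert()

with open("tiny_model.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"TFLite model size: {len(tflite_bytes)} bytes")
```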
  20. Optimized Cloud Computing Solutions for Cardiovascular Disease Prediction Using Advanced Machine Learning. Kannan K. S. - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):465-480.
    The world's leading cause of morbidity and death is cardiovascular diseases (CVD), which makes early detection essential for successful treatments. This study investigates how optimization techniques can be used with machine learning (ML) algorithms to forecast cardiovascular illnesses more accurately. ML models can evaluate enormous datasets by utilizing data-driven techniques, finding trends and risk factors that conventional methods can miss. In order to increase prediction accuracy, this study focuses on adopting different machine learning algorithms, including Decision (...)
  21. Innovative Approaches in Cardiovascular Disease Prediction Through Machine Learning Optimization. M. Arul Selvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):350-359.
    Cardiovascular diseases (CVD) represent a significant cause of morbidity and mortality worldwide, necessitating early detection for effective intervention. This research explores the application of machine learning (ML) algorithms in predicting cardiovascular diseases with enhanced accuracy by integrating optimization techniques. By leveraging data-driven approaches, ML models can analyze vast datasets, identifying patterns and risk factors that traditional methods might overlook. This study focuses on implementing various ML algorithms, such as Decision Trees, Random Forest, Support Vector Machines, and Neural Networks, (...)
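    A brief sketch of the model comparison the abstract names (Decision Trees, Random Forest, SVM, neural networks), scored by cross-validation on synthetic data; nothing here reproduces the paper's dataset, preprocessing, or tuning.

```python
# Cross-validated comparison of the four classifier families named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, n_features=10, random_state=0)

models = {
    "Decision Tree":  DecisionTreeClassifier(random_state=0),
    "Random Forest":  RandomForestClassifier(random_state=0),
    "SVM":            make_pipeline(StandardScaler(), SVC()),
    "Neural Network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:>14}: {scores.mean():.3f} +/- {scores.std():.3f}")
```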
  22. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains. Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate a patient is at a high risk of opioid abuse while a patient expressly reports oppositely. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi argues that a (...)
  23. Disease Identification using Machine Learning and NLP. S. Akila - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):78-92.
    Artificial Intelligence (AI) technologies are now widely used in a variety of fields to aid with knowledge acquisition and decision-making. Health information systems, in particular, can gain the most from AI advantages. Recently, symptoms-based illness prediction research and manufacturing have grown in popularity in the healthcare business. Several scholars and organisations have expressed an interest in applying contemporary computational tools to analyse and create novel approaches for rapidly and accurately predicting illnesses. In this study, we present a paradigm for assessing (...)
  24. Efficient Cloud-Enabled Cardiovascular Disease Risk Prediction and Management through Optimized Machine Learning. P. Selvaprasanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):454-475.
    The world's leading cause of morbidity and death is cardiovascular diseases (CVD), which makes early detection essential for successful treatments. This study investigates how optimization techniques can be used with machine learning (ML) algorithms to forecast cardiovascular illnesses more accurately. ML models can evaluate enormous datasets by utilizing data-driven techniques, finding trends and risk factors that conventional methods can miss. In order to increase prediction accuracy, this study focuses on adopting different machine learning algorithms, including Decision (...)
  25. OPTIMIZED CARDIOVASCULAR DISEASE PREDICTION USING MACHINE LEARNING ALGORITHMS. S. Yoheswari - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):350-359.
    Cardiovascular diseases (CVD) represent a significant cause of morbidity and mortality worldwide, necessitating early detection for effective intervention. This research explores the application of machine learning (ML) algorithms in predicting cardiovascular diseases with enhanced accuracy by integrating optimization techniques. By leveraging data-driven approaches, ML models can analyze vast datasets, identifying patterns and risk factors that traditional methods might overlook. This study focuses on implementing various ML algorithms, such as Decision Trees, Random Forest, Support Vector Machines, and Neural Networks, (...)
  26. From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. Tianqi Kou - manuscript
    Two goals - improving replicability and accountability of Machine Learning research, respectively - have accrued much attention from the AI ethics and the Machine Learning community. Despite sharing the measures of improving transparency, the two goals are discussed in different registers - replicability registers with scientific reasoning whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap - holding Machine Learning scientists accountable for Machine Learning harms due to them (...)
  27. What Counts as “Clinical Data” in Machine Learning Healthcare Applications? Joshua August Skorburg - 2020 - American Journal of Bioethics 20 (11):27-30.
    Peer commentary on Char, Abràmoff & Feudtner (2020) target article: "Identifying Ethical Considerations for Machine Learning Healthcare Applications".
    1 citation
  28. Accelerating Artificial Intelligence: Exploring the Implications of Xenoaccelerationism and Accelerationism for AI and Machine Learning. Kaiola liu - 2023 - Dissertation, University of Hawaii
    This article analyzes the potential impacts of Xenoaccelerationism and Accelerationism on the development of artificial intelligence (AI) and machine learning (ML). It examines how these speculative philosophies, which advocate technological acceleration and integration of diverse knowledge, may shape priorities and approaches in AI research and development. The risks and benefits of aligning AI progress with accelerationist values are discussed.
  29. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs. Sina Fazelpour - 2021
    This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this prevalent interpretation by (...)
  30. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing. Benjamin Davies & Thomas Douglas - 2022 - In Jesper Ryberg & Julian V. Roberts (eds.), Sentencing and Artificial Intelligence. Oxford: OUP.
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring (...)
    3 citations
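    A toy illustration of the proxy problem discussed above: even when the protected attribute is withheld from training, a correlated proxy feature lets the model's predictions track group membership; all variables and coefficients are synthetic assumptions.

```python
# Synthetic demonstration: training without the protected attribute does not remove
# group-dependent predictions when a correlated proxy feature is present.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)              # protected attribute (never given to the model)
proxy = group + rng.normal(0, 0.3, n)      # e.g., a neighbourhood score correlated with group
other = rng.normal(0, 1, n)                # an unrelated legitimate feature
# Outcome depends on group, so the proxy becomes predictive of it.
y = (0.8 * group + 0.5 * other + rng.normal(0, 1, n) > 0.8).astype(int)

X = np.column_stack([proxy, other])        # training data excludes `group`
preds = LogisticRegression().fit(X, y).predict(X)

for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"group {g}: predicted-positive rate = {rate:.2f}")
```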
  31. Do ML models represent their targets? Emily Sullivan - forthcoming - Philosophy of Science.
    I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
    1 citation
  32. Imagine This: Opaque DLMs are Reliable in the Context of Justification. Logan Carter - manuscript
    Artificial intelligence (AI) and machine learning (ML) models have undoubtedly become useful tools in science. In general, scientists and ML developers are optimistic – perhaps rightfully so – about the potential that these models have in facilitating scientific progress. The philosophy of AI literature carries a different mood. The attention of philosophers remains on potential epistemological issues that stem from the so-called “black box” features of ML models. For instance, Eamon Duede (2023) argues that opacity in deep learning models (DLMs) is epistemically problematic in the context of justification, though not in the context of discovery. In this paper, I aim to show that a similar epistemological concern is echoed in the epistemology of imagination literature. It is traditionally held that, given its black box features, reliance on the imagination is epistemically problematic in the context of justification, though not in the context of discovery. The constraints-based approach to the imagination answers the epistemological concern by providing an account of how we can rely on the imagination in the context of justification by way of constraints. I argue by analogy that a similar approach can be applied to the opaque DLM case. Ultimately, my goal is to explore just how far this analogy extends, and whether a constraints-based approach to opaque DLMs can answer the epistemological concern surrounding their black box features in the context of justification. (Note that this paper is IN PROGRESS and UNPUBLISHED.)
  33. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted (...)
    14 citations
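    Hand-constructed toy numbers illustrating the kind of conflict the abstract points to: on the same predictions, demographic parity holds exactly while true-positive rates differ across groups; the arrays below are invented for illustration.

```python
# Two common fairness criteria evaluated on the same toy predictions can disagree.
import numpy as np

# Group A: 5 of 10 truly positive; 4 predicted positive, all of them correct.
y_true_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_pred_a = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
# Group B: 2 of 10 truly positive; 4 predicted positive, only 1 of them correct.
y_true_b = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_pred_b = np.array([1, 0, 1, 1, 1, 0, 0, 0, 0, 0])

def positive_rate(pred):
    # Fraction predicted positive (demographic parity compares this across groups).
    return pred.mean()

def true_positive_rate(true, pred):
    # Fraction of actual positives that are predicted positive (equal-opportunity criterion).
    return pred[true == 1].mean()

dp_gap = abs(positive_rate(y_pred_a) - positive_rate(y_pred_b))
tpr_gap = abs(true_positive_rate(y_true_a, y_pred_a) - true_positive_rate(y_true_b, y_pred_b))
print(f"demographic parity gap: {dp_gap:.2f}")   # 0.00
print(f"true-positive-rate gap: {tpr_gap:.2f}")  # 0.30
```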
  34. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs. Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al., 2019 in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive (...)
  35. Responding to the Watson-Sterkenburg debate on clustering algorithms and natural kinds. Warmhold Jan Thomas Mollema - manuscript
    In Philosophy and Technology 36, David Watson discusses the epistemological and metaphysical implications of unsupervised machine learning (ML) algorithms. Watson is sympathetic to the epistemological comparison of unsupervised clustering, abstraction and generative algorithms to human cognition and sceptical about ML’s mechanisms having ontological implications. His epistemological commitments are that we learn to identify “natural kinds through clustering algorithms”, “essential properties via abstraction algorithms”, and “unrealized possibilities via generative models” “or something very much like them.” The same issue contains (...)
  36. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration. Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions on their explainability, and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology (...)
  37. Understanding Biology in the Age of Artificial Intelligence. Adham El Shazly, Elsa Lawerence, Srijit Seal, Chaitanya Joshi, Matthew Greening, Pietro Lio, Shantung Singh, Andreas Bender & Pietro Sormanni - manuscript
    Modern life sciences research is increasingly relying on artificial intelligence (AI) approaches to model biological systems, primarily centered around the use of machine learning (ML) models. Although ML is undeniably useful for identifying patterns in large, complex data sets, its widespread application in biological sciences represents a significant deviation from traditional methods of scientific inquiry. As such, the interplay between these models and scientific understanding in biology is a topic with important implications for the future of scientific research, (...)
  38. Mapping Value Sensitive Design onto AI for Social Good Principles. Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1 (3):283–296.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. (...)
    34 citations
  39. Vertrouwen in de geneeskunde en kunstmatige intelligentie [Trust in Medicine and Artificial Intelligence]. Lily Frank & Michal Klincewicz - 2021 - Podium Voor Bioethiek 3 (28):37-42.
    Artificial intelligence (AI) and systems that work with machine learning (ML) can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions may thus take the place of professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
  40. The debate on the ethics of AI in health care: a reconstruction and critical review. Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on (...)
    2 citations
  41. Ethical Issues in Text Mining for Mental Health. Joshua Skorburg & Phoebe Friesen - forthcoming - In Morteza Dehghani & Ryan Boyd (eds.), The Atlas of Language Analysis in Psychology. Guilford Press.
    A recent systematic review of Machine Learning (ML) approaches to health data, containing over 100 studies, found that the most investigated problem was mental health (Yin et al., 2019). Relatedly, recent estimates suggest that between 165,000 and 325,000 health and wellness apps are now commercially available, with over 10,000 of those designed specifically for mental health (Carlo et al., 2019). In light of these trends, the present chapter has three aims: (1) provide an informative overview of some of (...)
    2 citations
  42. Predicting Students' end-of-term Performances using ML Techniques and Environmental Data. Ahmed Mohammed Husien, Osama Hussam Eljamala, Waleed Bahgat Alwadia & Samy S. Abu-Naser - 2023 - International Journal of Academic Information Systems Research (IJAISR) 7 (10):19-25.
    This study introduces a machine learning-based model for predicting student performance using a comprehensive dataset derived from educational sources, encompassing 15 key features and comprising 62,631 student samples. Our five-layer neural network demonstrated remarkable performance, achieving an accuracy of 89.14% and an average error of 0.000715, underscoring its effectiveness in predicting student outcomes. Crucially, this research identifies pivotal determinants of student success, including factors such as socio-economic background, prior academic history, study habits, and attendance patterns, shedding light (...)
    1 citation
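    The abstract reports a five-layer neural network over 15 features; the sketch below assumes plausible layer widths, activations, and training settings, and uses random placeholder data in place of the (unavailable) student dataset, so its numbers are not comparable to the reported 89.14%.

```python
# A five-layer feed-forward network over 15 tabular features, trained on placeholder data.
import numpy as np
import tensorflow as tf

num_features = 15
X = np.random.rand(1000, num_features).astype("float32")   # placeholder feature table
y = np.random.randint(0, 2, size=(1000,))                   # placeholder binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_features,)),
    tf.keras.layers.Dense(64, activation="relu"),   # layer 1
    tf.keras.layers.Dense(32, activation="relu"),   # layer 2
    tf.keras.layers.Dense(16, activation="relu"),   # layer 3
    tf.keras.layers.Dense(8, activation="relu"),    # layer 4
    tf.keras.layers.Dense(1, activation="sigmoid"), # layer 5 (output)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

loss, acc = model.evaluate(X, y, verbose=0)
print(f"accuracy on placeholder data: {acc:.3f}")
```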
  43. Persons or datapoints?: Ethics, artificial intelligence, and the participatory turn in mental health research. Joshua August Skorburg, Kieran O'Doherty & Phoebe Friesen - 2024 - American Psychologist 79 (1):137-149.
    This article identifies and examines a tension in mental health researchers’ growing enthusiasm for the use of computational tools powered by advances in artificial intelligence and machine learning (AI/ML). Although there is increasing recognition of the value of participatory methods in science generally and in mental health research specifically, many AI/ML approaches, fueled by an ever-growing number of sensors collecting multimodal data, risk further distancing participants from research processes and rendering them as mere vectors or collections of data (...)
  44. Are generics and negativity about social groups common on social media? A comparative analysis of Twitter (X) data. Uwe Peters & Ignacio Ojea Quintana - 2024 - Synthese 203 (6):1-22.
    Many philosophers hold that generics (i.e., unquantified generalizations) are pervasive in communication and that when they are about social groups, this may offend and polarize people because generics gloss over variations between individuals. Generics about social groups might be particularly common on Twitter (X). This remains unexplored, however. Using machine learning (ML) techniques, we therefore developed an automatic classifier for social generics, applied it to 1.1 million tweets about people, and analyzed the tweets. While it is often suggested (...)
  45. Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable. Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) (...)
  46. (1 other version) Institutional Trust in Medicine in the Age of Artificial Intelligence. Michał Klincewicz - 2023 - In David Collins, Iris Vidmar Jovanović & Mark Alfano (eds.), The Moral Psychology of Trust. Lexington Books.
    It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and comes to interact with other psychological states in a variety of ways. The way that the functional role (...)
  47. The purpose of qualia: What if human thinking is not (only) information processing? Martin Korth - manuscript
    Despite recent breakthroughs in the field of artificial intelligence (AI) – or more specifically machine learning (ML) algorithms for object recognition and natural language processing – it seems to be the majority view that current AI approaches are still no real match for natural intelligence (NI). More importantly, philosophers have collected a long catalogue of features which imply that NI works differently from current AI not only in a gradual sense, but in a more substantial way: NI is (...)
  48. Excavating “Excavating AI”: The Elephant in the Gallery. Michael J. Lyons - 2020 - arXiv 2009:1-15.
    Two art exhibitions, “Training Humans” and “Making Faces,” and the accompanying essay “Excavating AI: The politics of images in machine learning training sets” by Kate Crawford and Trevor Paglen, are making substantial impact on discourse taking place in the social and mass media networks, and some scholarly circles. Critical scrutiny reveals, however, a self-contradictory stance regarding informed consent for the use of facial images, as well as serious flaws in their critique of ML training sets. Our analysis underlines (...)
    2 citations
  49. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the (...)
    7 citations
  50. Axe the X in XAI: A Plea for Understandable AI. Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)