Results for 'Fair machine learning'

989 found
  1. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect (...)
    1 citation
  2. The Representative Individuals Approach to Fair Machine Learning. Clinton Castro & Michele Loi - forthcoming - AI and Ethics.
    The demands of fair machine learning are often expressed in probabilistic terms. Yet, most of the systems of concern are deterministic in the sense that whether a given subject will receive a given score on the basis of their traits is, for all intents and purposes, either zero or one. What, then, can justify this probabilistic talk? We argue that the statistical reference classes used in fairness measures can be understood as defining the probability that hypothetical persons, (...)
    1 citation
  3. Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a (...)
    5 citations
  4. Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    3 citations
  5. Machine Learning for Autonomous Systems: Navigating Safety, Ethics, and Regulation In. Madhu Aswathy - 2025 - International Journal of Advanced Research in Education and Technology 12 (2):458-463.
    Autonomous systems, powered by machine learning (ML), have the potential to revolutionize various industries, including transportation, healthcare, and robotics. However, the integration of machine learning in autonomous systems raises significant challenges related to safety, ethics, and regulatory compliance. Ensuring the reliability and trustworthiness of these systems is crucial, especially when they operate in environments with high risks, such as self-driving cars or medical robots. This paper explores the intersection of machine learning and autonomous systems, (...)
  6. Machine Learning For Autonomous Systems: Navigating Safety, Ethics, and Regulation In. Saurav Choure Aswathy Madhu, Ankita Shinde - 2025 - International Journal of Innovative Research in Computer and Communication Engineering 13 (2):1680-1685.
    Autonomous systems, powered by machine learning (ML), have the potential to revolutionize various industries, including transportation, healthcare, and robotics. However, the integration of machine learning in autonomous systems raises significant challenges related to safety, ethics, and regulatory compliance. Ensuring the reliability and trustworthiness of these systems is crucial, especially when they operate in environments with high risks, such as self-driving cars or medical robots. This paper explores the intersection of machine learning and autonomous systems, (...)
  7. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to (...)
    2 citations
  8. Explainable Artificial Intelligence (XAI): Enhancing Transparency and Trust in Machine Learning Models. Prasad Pasam Thulasiram - 2025 - International Journal for Innovative Engineering and Management Research 14 (1):204-213.
    This research reviews explanation and interpretation for Explainable Artificial Intelligence (XAI) methods in order to boost complex machine learning model interpretability. The study shows the influence and belief of XAI in users that trust an Artificial Intelligence system and investigates ethical concerns, particularly fairness and biasedness of all the nontransparent models. It discusses the shortfalls related to XAI techniques, putting crucial emphasis on extended scope, enhancement and scalability potential. A number of outstanding issues especially in need of further work (...)
    4 citations
  9. Big Data Analytics in Healthcare: Exploring the Role of Machine Learning in Predicting Patient Outcomes and Improving Healthcare Delivery. Federico Del Giorgio Solfa & Fernando Rogelio Simonato - 2023 - International Journal of Computations Information and Manufacturing (IJCIM) 3 (1):1-9.
    Healthcare professionals decide wisely about personalized medicine, treatment plans, and resource allocation by utilizing big data analytics and machine learning. To guarantee that algorithmic recommendations are impartial and fair, however, ethical issues relating to prejudice and data privacy must be taken into account. Big data analytics and machine learning have a great potential to disrupt healthcare, and as these technologies continue to evolve, new opportunities to reform healthcare and enhance patient outcomes may arise. In order (...)
  10. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness (...)
    3 citations
  11. Just Machines. Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring- or decisionmaking systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, (...)
    6 citations
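The entry above turns on two formal fairness criteria: classification parity (equal accuracy or error rates across groups defined by protected attributes) and calibration (similar scores should carry similar outcome rates across those groups). As a rough illustration of how such criteria are checked in practice, here is a minimal sketch on synthetic data; the group sizes, base rates, 0.5 threshold, and score bins are assumptions made for this example only and are not taken from Castro's paper.

```python
# Illustrative sketch (not from the paper): checking two statistical fairness
# criteria on synthetic risk scores. Group sizes, base rates, the 0.5 threshold,
# and the score bins are hypothetical choices made only for this example.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # protected attribute (0 or 1)
base_rate = np.where(group == 0, 0.3, 0.5)       # groups differ in prevalence
label = rng.random(n) < base_rate                # true outcomes
score = np.clip(base_rate + rng.normal(0, 0.2, n), 0, 1)  # model's risk scores
pred = score >= 0.5                              # thresholded decisions

# Classification parity (here read as parity in accuracy across groups).
for g in (0, 1):
    mask = group == g
    print(f"group {g}: accuracy = {np.mean(pred[mask] == label[mask]):.3f}")

# Calibration within groups: people with similar scores should have similar
# observed outcome rates, whichever group they belong to.
bins = np.linspace(0, 1, 6)
for g in (0, 1):
    mask = group == g
    bin_idx = np.digitize(score[mask], bins)
    for b in range(1, len(bins)):
        in_bin = bin_idx == b
        if in_bin.sum() > 50:
            rate = label[mask][in_bin].mean()
            print(f"group {g}, score bin {b}: outcome rate = {rate:.3f}")
```

Because the two groups differ in base rate in this toy data, the impossibility results mentioned in the abstract imply that enforcing one criterion will generally come at the expense of the other.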
  12. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world (...)
    13 citations
  13. Broomean(ish) Algorithmic Fairness? Clinton Castro - forthcoming - Journal of Applied Philosophy.
    Recently, there has been much discussion of ‘fair machine learning’: fairness in data-driven decision-making systems (which are often, though not always, made with assistance from machine learning systems). Notorious impossibility results show that we cannot have everything we want here. Such problems call for careful thinking about the foundations of fair machine learning. Sune Holm has identified one promising way forward, which involves applying John Broome's theory of fairness to the puzzles of (...)
    1 citation
  14. Disciplining Deliberation: A Socio-technical Perspective on Machine Learning Trade-Offs. Sina Fazelpour - forthcoming - British Journal for the Philosophy of Science.
    This paper examines two prominent formal trade-offs in artificial intelligence (AI): between predictive accuracy and fairness, and between predictive accuracy and interpretability. These trade-offs have become a central focus in normative and regulatory discussions as policymakers seek to understand the value tensions that can arise in the social adoption of AI tools. The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values, implying unavoidable conflicts between those social objectives. In this paper, I challenge that prevalent (...)
  15. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs. Sina Fazelpour - forthcoming - British Journal for the Philosophy of Science.
    This paper examines two prominent formal trade-offs in artificial intelligence (AI): between predictive accuracy and fairness, and between predictive accuracy and interpretability. These trade-offs have become a central focus in normative and regulatory discussions as policymakers seek to understand the value tensions that can arise in the social adoption of AI tools. The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values, implying unavoidable conflicts between those social objectives. In this paper, I challenge that prevalent (...)
  16. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in the (big) datasets, and predict outcomes based on those identified patterns and correlations. With the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
    35 citations
  17. The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
    3 citations
  18. Adversarial Sampling for Fairness Testing in Deep Neural Network. Tosin Ige, William Marfo, Justin Tonkinson, Sikiru Adewale & Bolanle Hafiz Matti - 2023 - International Journal of Advanced Computer Science and Applications 14 (2).
    In this research, we focus on the usage of adversarial sampling to test for the fairness in the prediction of deep neural network model across different classes of image in a given dataset. While several framework had been proposed to ensure robustness of machine learning model against adversarial attack, some of which includes adversarial training algorithm. There is still the pitfall that adversarial training algorithm tends to cause disparity in accuracy and robustness among different group. Our research is (...)
    14 citations
  19. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment, of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness (...)
    2 citations
  20. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted (...)
    14 citations
  21. Ethical Considerations of AI and ML in Insurance Risk Management: Addressing Bias and Ensuring Fairness (8th edition). Palakurti Naga Ramesh - 2025 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 8 (1):202-210.
    Artificial Intelligence (AI) and Machine Learning (ML) are transforming the insurance industry by optimizing risk assessment, fraud detection, and customer service. However, the rapid adoption of these technologies raises significant ethical concerns, particularly regarding bias and fairness. This chapter explores the ethical challenges of using AI and ML in insurance risk management, focusing on bias mitigation and fairness enhancement strategies. By analyzing real-world case studies, regulatory frameworks, and technical methodologies, this chapter aims to provide a roadmap for developing (...)
  22. Algorithmic Fairness Criteria as Evidence. Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion (...)
    2 citations
  23. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
    16 citations
  24. Performance vs. competence in human–machine comparisons. Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences 41.
    Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. (...)
    12 citations
  25. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because (...)
    3 citations
  26. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection. Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed with analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles (...)
    1 citation
  27. Can machines think? The controversy that led to the Turing test. Bernardo Gonçalves - 2023 - AI and Society 38 (6):2499-2509.
    Turing’s much debated test has turned 70 and is still fairly controversial. His 1950 paper is seen as a complex and multilayered text, and key questions about it remain largely unanswered. Why did Turing select learning from experience as the best approach to achieve machine intelligence? Why did he spend several years working with chess playing as a task to illustrate and test for machine intelligence only to trade it out for conversational question-answering in 1950? Why did (...)
    5 citations
  28. Decisional Value Scores. Gabriella Waters, William Mapp & Phillip Honenberger - 2024 - AI and Ethics 2024.
    Research in ethical AI has made strides in quantitative expression of ethical values such as fairness, transparency, and privacy. Here we contribute to this effort by proposing a new family of metrics called “decisional value scores” (DVS). DVSs are scores assigned to a system based on whether the decisions it makes meet or fail to meet a particular standard (either individually, in total, or as a ratio or average over decisions made). Advantages of DVS include greater discrimination capacity between types (...)
  29. A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform. Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine (...)
  30. (1 other version) Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments. Ahmed Izzidien - 2021 - AI and Society (March 2021):1-20.
    Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches, has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover if a verb is unfair or fair. It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises. That individuals consider fair acts those that they would (...)
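The entry above (Izzidien) describes reading a verb's fairness off relations already present in pretrained word embeddings, with no task-specific training. Below is a minimal, hypothetical sketch of one way such a vector-based score can look: the tiny hand-written vectors stand in for real pretrained embeddings (e.g. word2vec or GloVe), and the seed words and projection rule are illustrative assumptions, not the paper's exact method.

```python
# Illustrative sketch only: scoring verbs along a "fair vs. unfair" direction
# in an embedding space. The hand-written vectors below are stand-ins for real
# pretrained word embeddings; seed words and the projection are assumptions.
import numpy as np

toy_vectors = {              # stand-ins for pretrained embedding vectors
    "help":    np.array([0.9, 0.1, 0.3]),
    "share":   np.array([0.8, 0.2, 0.4]),
    "steal":   np.array([-0.7, 0.9, 0.1]),
    "cheat":   np.array([-0.8, 0.8, 0.2]),
    "donate":  np.array([0.7, 0.0, 0.5]),
    "deceive": np.array([-0.6, 0.7, 0.3]),
}

def unit(v):
    return v / np.linalg.norm(v)

# Build a fairness axis from seed verbs judged fair vs. unfair.
fair_seeds = ["help", "share"]
unfair_seeds = ["steal", "cheat"]
axis = unit(np.mean([toy_vectors[w] for w in fair_seeds], axis=0)
            - np.mean([toy_vectors[w] for w in unfair_seeds], axis=0))

for verb in ["donate", "deceive"]:
    score = float(np.dot(unit(toy_vectors[verb]), axis))
    print(f"{verb}: fairness score = {score:+.2f}")  # >0 leans fair, <0 leans unfair
```

With real embeddings the same projection runs over high-dimensional vectors, and the choice of seed words becomes the main modelling decision.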
  31. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences. Lode Lauwaert - 2023 - Artificial Intelligence Review 56:3473–3504.
    Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with (...)
  32. Machine Learning for Characterization and Analysis of Microstructure and Spectral Data of Materials. Venkataramaiah Gude - 2023 - International Journal of Intelligent Systems and Applications in Engineering 12 (21):820-826.
    In the contemporary world, there is lot of research going on in creating novel nano materials that are essential for many industries including electronic chips and storage devices in cloud to mention few. At the same time, there is emergence of usage of machine learning (ML) for solving problems in different industries such as manufacturing, physics and chemical engineering. ML has potential to solve many real world problems with its ability to learn in either supervised or unsupervised means. (...)
    14 citations
  33. Are Algorithms Value-Free? Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments (...)
    7 citations
  34. Machine Learning-Based Diabetes Prediction: Feature Analysis and Model Assessment. Fares Wael Al-Gharabawi & Samy S. Abu-Naser - 2023 - International Journal of Academic Engineering Research (IJAER) 7 (9):10-17.
    This study employs machine learning to predict diabetes using a Kaggle dataset with 13 features. Our three-layer model achieves an accuracy of 98.73% and an average error of 0.01%. Feature analysis identifies Age, Gender, Polyuria, Polydipsia, Visual blurring, sudden weight loss, partial paresis, delayed healing, irritability, Muscle stiffness, Alopecia, Genital thrush, Weakness, and Obesity as influential predictors. These findings have clinical significance for early diabetes risk assessment. While our research addresses gaps in the field, further work is needed (...)
    5 citations
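For readers who want a concrete picture of the kind of pipeline the entry above describes (a small multi-layer classifier over 13 symptom and demographic features), here is a minimal sketch on synthetic data. The layer sizes, split, and generated labels are assumptions made for illustration; the 98.73% figure refers to the authors' Kaggle dataset, not to this toy example.

```python
# Minimal sketch of a small three-layer classifier on tabular symptom features,
# in the spirit of the entry above. Synthetic data and hyperparameters are
# illustrative assumptions, not the authors' actual setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, n_features = 1000, 13                    # e.g. age, gender, polyuria, ...
X = rng.integers(0, 2, size=(n, n_features)).astype(float)
# Synthetic label loosely driven by a few "symptom" columns plus noise.
logits = 2.0 * X[:, 2] + 1.5 * X[:, 3] - 1.0 + rng.normal(0, 0.5, n)
y = (logits > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16, 8), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
```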
  35. Leveraging Machine Learning for Real-Time Short-Term Snowfall Forecasting Using MultiSource Atmospheric and Terrain Data Integration. Gopinathan Vimal Raja - 2022 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 5 (8):1336-1339.
    This paper presents a machine learning-based framework for real-time short-term snowfall forecasting by integrating atmospheric and topographic data. The model uses real-time meteorological data such as temperature, humidity, and pressure, along with terrain data like elevation and land cover, to predict snowfall occurrence within a 12-hour forecast window. Random Forest (RF) and Support Vector Machine (SVM) models are employed to process these multi-source inputs, demonstrating a significant improvement in prediction accuracy over traditional methods. Experimental results show that (...)
    10 citations
  36. Machine Learning, Misinformation, and Citizen Science. Adrian K. Yee - 2023 - European Journal for Philosophy of Science 13 (56):1-24.
    Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models needs to be assessed via more democratic criteria than has previously been recognized.
    2 citations
  37. "Machine Learning Meets Network Management and Orchestration in Edge-Based Networking Paradigms": The Integration of Machine Learning for Managing and Orchestrating Networks at the Edge, where Real-Time Decision-Making is Critical. Odubade Kehinde Santhosh Katragadda - 2022 - International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 11 (4):1635-1645.
    Integrating machine learning (ML) into network management and orchestration has revolutionized edge-based networking paradigms, where real-time decision-making is critical. Traditional network management approaches often struggle with edge environments’ dynamic and resource-constrained nature. By leveraging ML algorithms, networks at the edge can achieve enhanced efficiency, automation, and adaptability in areas such as traffic prediction, resource allocation, and anomaly detection (Wang et al., 2021). Supervised and unsupervised learning techniques facilitate proactive network optimization, reducing latency and improving quality of service (...)
  38. (1 other version) Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich, On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in (...)
    2 citations
  39. Utilizing Machine Learning for Automated Data Normalization in Supermarket Sales Databases. Gopinathan Vimal Raja - 2025 - International Journal of Advanced Research in Education and Technology (IJARETY) 10 (1):9-12.
    Data normalization is a crucial step in database management systems (DBMS), ensuring consistency, minimizing redundancy, and enhancing query performance. Traditional methods of normalization in supermarket sales databases often demand significant manual effort and domain expertise, making the process time-consuming and prone to errors. This paper introduces an innovative machine learning (ML)-based framework to automate data normalization in supermarket sales databases. The proposed approach utilizes both supervised and unsupervised ML techniques to identify functional dependencies, detect anomalies, and suggest optimal (...)
    4 citations
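The entry above mentions identifying functional dependencies as part of automated normalization. As a rough illustration of what that step involves, here is a hypothetical brute-force check for single-column functional dependencies on a toy sales table; the column names and data are made up, and the paper's actual ML-based framework is not reproduced here.

```python
# Illustrative sketch (not the paper's framework): exhaustively checking exact
# functional dependencies A -> B in a toy sales table. Columns and rows are
# hypothetical examples only.
import pandas as pd
from itertools import permutations

df = pd.DataFrame({
    "product_id":   [1, 2, 3, 1, 2],
    "product_name": ["milk", "bread", "eggs", "milk", "bread"],
    "store":        ["A", "A", "B", "B", "A"],
    "price":        [1.2, 0.9, 2.5, 1.2, 0.9],
})

def holds(df, lhs, rhs):
    """lhs -> rhs holds when every lhs value maps to exactly one rhs value."""
    return (df.groupby(lhs)[rhs].nunique() <= 1).all()

for lhs, rhs in permutations(df.columns, 2):
    if holds(df, lhs, rhs):
        print(f"{lhs} -> {rhs}")
```

Dependencies found this way (for instance product_id -> product_name) are the kind of redundancy that decomposition into higher normal forms is meant to remove.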
  40. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    19 citations
  41. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models (...)
    67 citations
  42. Quantum Machine Learning: Harnessing Quantum Algorithms for Supervised and Unsupervised Learning. Mittal Mohit - 2022 - International Journal of Innovative Research in Science, Engineering and Technology 11 (9):11631-11637.
    Quantum machine learning (QML) provides a transformative approach to data analysis by integrating the principles of quantum computing with classical machine learning methods. With the exponential growth of data and the increasing complexity of computational tasks, quantum algorithms offer tremendous advantages in terms of processing speed, memory efficiency, and the ability to resolve issues intractable for classical systems. In this work, the use of QML techniques for both supervised and unsupervised learning problems is explored. Quantum-enhanced (...)
  43. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias. P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit (...)
  44. Preparing undergraduates for visual analytics. Ronald A. Rensink - 2015 - IEEE Computer Graphics and Applications 35 (2):16-20.
    Visual analytics (VA) combines the strengths of human and machine intelligence to enable the discovery of interesting patterns in challenging datasets. Historically, most attention has been given to developing the machine component—for example, machine learning or the human-computer interface. However, it is also essential to develop the abilities of the analysts themselves, especially at the beginning of their careers. For the past several years, we at the University of British Columbia (UBC)—with the support of The (...)
  45. Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
    3 citations
  46. Machine Learning Algorithms for Real-Time Malware Detection. Sharma Sidharth - 2017 - Journal of Artificial Intelligence and Cyber Security (JAICS) 1 (1):12-16.
    With the rapid evolution of information technology, malware has become an advanced cybersecurity threat, targeting computer systems, smart devices, and large-scale networks in real time. Traditional detection methods often fail to recognize emerging malware variants due to limitations in accuracy, adaptability, and response time. This paper presents a comprehensive review of machine learning algorithms for real-time malware detection, categorizing existing approaches based on their methodologies and effectiveness. The study examines recent advancements and evaluates the performance of various machine learning techniques in detecting malware with minimal false positives and improved scalability. Additionally, key challenges, such as adversarial attacks, computational overhead, and real-time processing constraints, are discussed, along with potential solutions to enhance detection capabilities. An empirical evaluation is conducted to assess the effectiveness of different machine learning models, providing insights for future research in real-time malware detection.
    13 citations
  47. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as (...)
  48. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments (...)
    3 citations
  49. The Use of Machine Learning Methods for Image Classification in Medical Data. Destiny Agboro - forthcoming - International Journal of Ethics.
    Integrating medical imaging with computing technologies, such as Artificial Intelligence (AI) and its subsets: Machine learning (ML) and Deep Learning (DL) has advanced into an essential facet of present-day medicine, signaling a pivotal role in diagnostic decision-making and treatment plans (Huang et al., 2023). The significance of medical imaging is escalated by its sustained growth within the realm of modern healthcare (Varoquaux and Cheplygina, 2022). Nevertheless, the ever-increasing volume of medical images compared to the availability of imaging (...)
    6 citations
  50. Algorithms and Autonomy: The Ethics of Automated Decision Systems. Alan Rubel, Clinton Castro & Adam Pham - 2021 - Cambridge University Press.
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these (...)
    11 citations
Showing 1–50 of 989 results