Results for 'machine learning fairness'

965 results found
  1. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how do partial compliance and the consequent strategic behavior of decision subjects affect (...)
    1 citation
  2. Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for (...)
    2 citations
  3. Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    2 citations
  4. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to (...)
    1 citation
  5. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning (...)
    3 citations
  6. Big Data Analytics in Healthcare: Exploring the Role of Machine Learning in Predicting Patient Outcomes and Improving Healthcare Delivery. Federico Del Giorgio Solfa & Fernando Rogelio Simonato - 2023 - International Journal of Computations Information and Manufacturing (IJCIM) 3 (1):1-9.
    By utilizing big data analytics and machine learning, healthcare professionals can make well-informed decisions about personalized medicine, treatment plans, and resource allocation. To guarantee that algorithmic recommendations are impartial and fair, however, ethical issues relating to bias and data privacy must be taken into account. Big data analytics and machine learning have great potential to disrupt healthcare, and as these technologies continue to evolve, new opportunities to reform healthcare and enhance patient outcomes may arise. In order to (...)
  7. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs. Sina Fazelpour - manuscript
    This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this prevalent interpretation (...)
  8. Just Machines. Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed (...)
    5 citations
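    The two criteria this entry contrasts can be made concrete in a few lines. The sketch below is purely illustrative (hypothetical labels, predictions, and scores, not data from the paper): classification parity compares accuracy across groups, while calibration compares what a given score means across groups.

```python
# Illustrative sketch of the two fairness criteria (hypothetical data).

def classification_parity(y_true, y_pred, groups):
    """Accuracy per group; parity holds when these are (near-)equal."""
    accs = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        accs[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return accs

def calibration(scores, y_true, groups):
    """Mean observed outcome per predicted score, per group; calibration
    holds when equal scores carry the same meaning across groups."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        by_score = {}
        for i in idx:
            by_score.setdefault(scores[i], []).append(y_true[i])
        out[g] = {s: sum(v) / len(v) for s, v in by_score.items()}
    return out

# Hypothetical example: parity holds, yet calibration differs by group.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
scores = [0.8, 0.2, 0.8, 0.2, 0.8, 0.8, 0.2, 0.2]

print(classification_parity(y_true, y_pred, groups))  # 0.75 for both groups
print(calibration(scores, y_true, groups))
```

    In this toy data both groups have 0.75 accuracy (parity), but a score of 0.8 corresponds to an outcome rate of 1.0 in group "a" and 0.5 in group "b" (no calibration), which is the kind of tension the impossibility results formalize.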
  9. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those patterns and correlations. With the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
    27 citations
  10. Adversarial Sampling for Fairness Testing in Deep Neural Network. Tosin Ige, William Marfo, Justin Tonkinson, Sikiru Adewale & Bolanle Hafiz Matti - 2023 - International Journal of Advanced Computer Science and Applications 14 (2).
    In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. While several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attacks, some of which include adversarial training algorithms, there is still the pitfall that adversarial training algorithms tend to cause disparity in accuracy and robustness among different groups. Our research (...)
    4 citations
  11. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world (...)
    10 citations
  12. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...) in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.
    2 citations
  13. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented (...)
    13 citations
  14. The Fair Chances in Algorithmic Fairness: A Response to Holm. Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do (...)
    1 citation
  15. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
    6 citations
  16. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because (...)
    3 citations
  17. Performance vs. competence in human–machine comparisons. Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences 117 (43).
    Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. (...)
    9 citations
  18. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection. Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed with analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the (...)
  19. Can machines think? The controversy that led to the Turing test. Bernardo Gonçalves - 2023 - AI and Society 38 (6):2499-2509.
    Turing’s much debated test has turned 70 and is still fairly controversial. His 1950 paper is seen as a complex and multilayered text, and key questions about it remain largely unanswered. Why did Turing select learning from experience as the best approach to achieve machine intelligence? Why did he spend several years working with chess playing as a task to illustrate and test for machine intelligence only to trade it out for conversational question-answering in 1950? Why did (...)
    3 citations
  20. A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform. Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine (...)
  21. Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments. Ahmed Izzidien - 2021 - AI and Society (March 2021):1-20.
    Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches, has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover if a verb is unfair or fair. It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises. That individuals consider fair acts those that they would (...)
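    The vector-based, training-free idea this entry describes can be illustrated in miniature: compare a verb's embedding to "fair" and "unfair" anchor vectors via cosine similarity. The vectors below are made up for illustration; the paper itself uses pretrained word embeddings.

```python
# Toy illustration of anchor-based fairness scoring with cosine similarity.
# All vectors are hypothetical 3-dimensional stand-ins for real embeddings.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

fair_anchor = [0.9, 0.1, 0.3]     # hypothetical embedding for "fair"
unfair_anchor = [-0.8, 0.2, 0.1]  # hypothetical embedding for "unfair"
verb_help = [0.8, 0.0, 0.4]       # hypothetical embedding for "help"

# The verb is judged fair when it lies closer to the "fair" anchor.
is_fair = cosine(verb_help, fair_anchor) > cosine(verb_help, unfair_anchor)
print(is_fair)  # True for these made-up vectors
```

    The appeal of the approach, as the abstract notes, is that the relational structure is already present in the embedding space, so no task-specific training is required.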
  22. Are Algorithms Value-Free? Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments (...)
    2 citations
  23. Machine Learning-Based Diabetes Prediction: Feature Analysis and Model Assessment. Fares Wael Al-Gharabawi & Samy S. Abu-Naser - 2023 - International Journal of Academic Engineering Research (IJAER) 7 (9):10-17.
    This study employs machine learning to predict diabetes using a Kaggle dataset with 13 features. Our three-layer model achieves an accuracy of 98.73% and an average error of 0.01%. Feature analysis identifies Age, Gender, Polyuria, Polydipsia, Visual blurring, sudden weight loss, partial paresis, delayed healing, irritability, Muscle stiffness, Alopecia, Genital thrush, Weakness, and Obesity as influential predictors. These findings have clinical significance for early diabetes risk assessment. While our research addresses gaps in the field, further work is needed (...)
    1 citation
  24. Machine Learning, Misinformation, and Citizen Science. Adrian K. Yee - 2023 - European Journal for Philosophy of Science 13 (56):1-24.
    Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models need to be assessed via more democratic criteria than has previously been recognized.
  25. (1 other version) Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in (...)
    2 citations
  26. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  27. Machine learning, justification, and computational reliabilism. Juan Manuel Duran - 2023
    This article asks the question, "what is reliable machine learning?" As I intend to answer it, this is a question about epistemic justification. Reliable machine learning gives justification for believing its output. Current approaches to reliability (e.g., transparency) involve showing the inner workings of an algorithm (functions, variables, etc.) and how they render outputs. We then have justification for believing the output because we know how it was computed. Thus, justification is contingent on what can be (...)
  28. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models (...)
    53 citations
  29. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias. P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral (...)
  30. Preparing undergraduates for visual analytics. Ronald A. Rensink - 2015 - IEEE Computer Graphics and Applications 35 (2):16-20.
    Visual analytics (VA) combines the strengths of human and machine intelligence to enable the discovery of interesting patterns in challenging datasets. Historically, most attention has been given to developing the machine component—for example, machine learning or the human-computer interface. However, it is also essential to develop the abilities of the analysts themselves, especially at the beginning of their careers. For the past several years, we at the University of British Columbia (UBC)—with the support of The (...)
  31. Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
  32. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments (...)
    1 citation
  33. Machine Learning and Job Posting Classification: A Comparative Study. Ibrahim M. Nasser & Amjad H. Alzaanin - 2020 - International Journal of Engineering and Information Systems (IJEAIS) 4 (9):06-14.
    In this paper, we investigated multiple machine learning classifiers, namely Multinomial Naive Bayes, Support Vector Machine, Decision Tree, K Nearest Neighbors, and Random Forest, in a text classification problem. The data we used contains real and fake job posts. We cleaned and pre-processed our data, then applied TF-IDF for feature extraction. After implementing the classifiers, we trained and evaluated them. The evaluation metrics used are precision, recall, f-measure, and accuracy. For each classifier, results were summarized (...)
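    The TF-IDF feature-extraction step this entry relies on can be sketched compactly in pure Python. The documents below are made up; this is an illustrative implementation of the standard scheme, not code from the paper.

```python
# Compact pure-Python TF-IDF sketch (toy documents, illustrative only).
# tf = term count / document length; idf = log(N / documents containing term).
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} map per document."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append({
            t: (c / len(doc)) * math.log(n / df[t])
            for t, c in counts.items()
        })
    return vectors

docs = [
    "earn money fast pay upfront fee",
    "software engineer position full benefits",
    "nurse position city hospital",
]
vecs = tfidf(docs)

# Terms unique to one document get the highest idf; "position", which
# appears in two of the three documents, is down-weighted relative to
# "software" within the same document.
print(vecs[1]["position"] < vecs[1]["software"])  # True
```

    The resulting weight maps are what a downstream classifier (Naive Bayes, SVM, etc.) would consume as feature vectors; production code would typically use a library implementation with smoothing and normalization.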
  34. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - 2021 - ACM Computing Surveys 54 (3):1-18.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target (...)
    1 citation
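    The single experimental paradigm this entry describes (split the data, train on one part, measure generalisation on held-out samples) can be shown in miniature. Everything below is synthetic and illustrative: a made-up linearly separable task and a simple nearest-centroid rule standing in for the trained model.

```python
# The train/test-split paradigm in miniature (synthetic data, illustrative).
import random

random.seed(0)

# Synthetic binary task: label is 1 iff the sum of the 3 features is positive.
points = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
data = [(x, 1 if sum(x) > 0 else 0) for x in points]

# Split the available data into a training set and a held-out test set.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# Fit a nearest-centroid classifier using the training set only.
def centroid(xs):
    return [sum(col) / len(xs) for col in zip(*xs)]

c1 = centroid([x for x, y in train if y == 1])
c0 = centroid([x for x, y in train if y == 0])

def dist_sq(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def predict(x):
    return 1 if dist_sq(x, c1) < dist_sq(x, c0) else 0

# Generalisation is estimated on the unseen test samples.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(round(accuracy, 2))
```

    The test-set accuracy is the "a posteriori contract" the survey discusses: deployment is licensed only by performance on samples the model never saw during training.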
  35. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm: SIPRI. pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and (...)
    1 citation
  36. Credit Score Classification Using Machine Learning. Mosa M. M. Megdad & Samy S. Abu-Naser - 2024 - International Journal of Academic Information Systems Research (IJAISR) 8 (5):1-10.
    Ensuring the proactive detection of transaction risks is paramount for financial institutions, particularly in the context of managing credit scores. In this study, we compare different machine learning algorithms for classifying credit scores effectively and efficiently. The algorithms used in this study were: LogisticRegressionCV, ExtraTreeClassifier, LGBMClassifier, AdaBoostClassifier, GradientBoostingClassifier, Perceptron, RandomForestClassifier, KNeighborsClassifier, BaggingClassifier, DecisionTreeClassifier, CalibratedClassifierCV, LabelPropagation, and Deep Learning. The dataset was collected from the Kaggle depository. It consists of 164 rows and 8 columns. The best classifier with the unbalanced dataset was LogisticRegressionCV, with an accuracy of 100.0%, precision of 100.0%, and recall of 100.0% (...)
  37. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics – the trusted doctor and patient autonomy – can be undermined by the use of machine learning (ML) algorithms, and it addresses their legal significance. This paper can be a guide for both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as (...)
  38. Machines learning values. Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: (...)
    2 citations
  39. The Use of Machine Learning Methods for Image Classification in Medical Data. Destiny Agboro - forthcoming - International Journal of Ethics.
    Integrating medical imaging with computing technologies, such as Artificial Intelligence (AI) and its subsets: Machine learning (ML) and Deep Learning (DL) has advanced into an essential facet of present-day medicine, signaling a pivotal role in diagnostic decision-making and treatment plans (Huang et al., 2023). The significance of medical imaging is escalated by its sustained growth within the realm of modern healthcare (Varoquaux and Cheplygina, 2022). Nevertheless, the ever-increasing volume of medical images compared to the availability of imaging (...)
  40. Transparent, explainable, and accountable AI for robotics. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science Robotics 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    24 citations
  41. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges. Joshua Hatherley & Robert Sparrow - 2023 - Journal of the American Medical Informatics Association 30 (2):361-366.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. Target audience: The target audiences for (...)
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  42. The Explanatory Role of Machine Learning in Molecular Biology.Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used (...)
    Download  
     
    Export citation  
     
    Bookmark  
  43. Fraudulent Financial Transactions Detection Using Machine Learning.Mosa M. M. Megdad, Samy S. Abu-Naser & Bassem S. Abu-Nasser - 2022 - International Journal of Academic Information Systems Research (IJAISR) 6 (3):30-39.
    It is crucial to actively detect the risks of transactions in a financial company to improve customer experience and minimize financial loss. In this study, we compare different machine learning algorithms to effectively and efficiently predict the legitimacy of financial transactions. The algorithms used in this study were: MLP Regressor, Random Forest Classifier, Complement NB, MLP Classifier, Gaussian NB, Bernoulli NB, LGBM Classifier, Ada Boost Classifier, K Neighbors Classifier, Logistic Regression, Bagging Classifier, Decision Tree Classifier and Deep (...). The dataset was collected from the Kaggle repository. It consists of 6,362,620 rows and 10 columns. The best classifier on the unbalanced dataset was the Random Forest Classifier, with accuracy 99.97%, precision 99.96%, recall 99.97%, and F1-score 99.96%. However, the best classifier on the balanced dataset was the Bagging Classifier, with accuracy 99.96%, precision 99.95%, recall 99.98%, and F1-score 99.96%.
    Download  
     
    Export citation  
     
    Bookmark   26 citations  
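    The accuracy, precision, recall, and F1 figures reported in the abstract above are standard functions of a binary confusion matrix. A minimal sketch of how they are computed (the counts below are hypothetical, not drawn from the paper's dataset):

    ```python
    # Compute accuracy, precision, recall, and F1 from a binary confusion
    # matrix. The tp/fp/fn/tn counts are hypothetical, chosen only to
    # illustrate the metrics reported in the abstract above.
    def classification_metrics(tp, fp, fn, tn):
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        precision = tp / (tp + fp)          # of flagged transactions, how many were fraud
        recall = tp / (tp + fn)             # of actual fraud, how much was caught
        f1 = 2 * precision * recall / (precision + recall)
        return accuracy, precision, recall, f1

    acc, prec, rec, f1 = classification_metrics(tp=95, fp=5, fn=5, tn=895)
    print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
    ```

    On a heavily unbalanced dataset like the one described, accuracy alone can be misleading (a classifier that flags nothing still scores high), which is why the paper reports precision, recall, and F1 alongside it.
    
    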
  44. An Investigation into the Performances of the State-of-the-art Machine Learning Approaches for Various Cyber-attack Detection: A Survey. [REVIEW]Tosin Ige, Christopher Kiekintveld & Aritran Piplai - forthcoming - Proceedings of the IEEE:11.
    To secure computers and information systems against attackers who exploit vulnerabilities to commit cybercrime, several methods have been proposed for real-time detection of vulnerabilities. Of all the proposed methods, machine learning has been the most effective at securing a system, with capabilities ranging from early detection of software vulnerabilities to real-time detection of an ongoing compromise. As there are different types of cyberattacks, each of the (...)
    Download  
     
    Export citation  
     
    Bookmark  
  45. MACHINE LEARNING IMPROVED ADVANCED DIAGNOSIS OF SOFT TISSUES TUMORS.M. Bavadharani - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):112-123.
    Soft Tissue Tumors (STT) are a type of sarcoma found in the tissues that connect, support, and surround body structures. Because of their low frequency in the body and their great diversity, they appear heterogeneous on Magnetic Resonance Imaging (MRI). They are easily confused with other conditions, such as fibroadenoma mammae, lymphadenopathy, and struma nodosa, and these diagnostic errors have a considerable adverse impact on the clinical treatment of patients. Researchers have proposed (...)
    Download  
     
    Export citation  
     
    Bookmark  
  46. Medical Image Classification with Machine Learning Classifier.Destiny Agboro - forthcoming - Journal of Computer Science.
    In contemporary healthcare, medical image categorization is essential for illness prediction, diagnosis, and therapy planning. The emergence of digital imaging technology has led to a significant increase in research into the use of machine learning (ML) techniques for the categorization of images in medical data. In this review, we provide a thorough summary of recent developments in this area, drawing on the most recent research and cutting-edge methods. We begin by discussing the unique challenges and opportunities associated with (...)
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  47. (1 other version)The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to (...)
    Download  
     
    Export citation  
     
    Bookmark   16 citations  
  48. Exploring Machine Learning Techniques for Coronary Heart Disease Prediction.Hisham Khdair - 2021 - International Journal of Advanced Computer Science and Applications 12 (5):28-36.
    Coronary Heart Disease (CHD) is one of the leading causes of death today. Predicting the disease at an early stage is crucial for health care providers seeking to protect their patients, save lives, and conserve costly hospitalization resources. The use of machine learning to predict serious disease events from routine medical records has seen considerable success in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the (...)
    Download  
     
    Export citation  
     
    Bookmark  
  49. An Unconventional Look at AI: Why Today’s Machine Learning Systems are not Intelligent.Nancy Salay - 2020 - In LINKs: The Art of Linking, an Annual Transdisciplinary Review, Special Edition 1, Unconventional Computing. pp. 62-67.
    Machine learning systems (MLS) that model low-level processes are the cornerstones of current AI systems. These ‘indirect’ learners are good at classifying kinds that are distinguished solely by their manifest physical properties. But the more a kind is a function of spatio-temporally extended properties — words, situation-types, social norms — the less likely an MLS will be able to track it. Systems that can interact with objects at the individual level, on the other hand, and that can sustain (...)
    Download  
     
    Export citation  
     
    Bookmark  
  50. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do (...)
    Download  
     
    Export citation  
     
    Bookmark   12 citations  
1 — 50 / 965