Results for 'Fair Machine Learning'

971 found
  1. Fair machine learning under partial compliance.Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Jessica Dai, Sina Fazelpour & Zachary Lipton (eds.), Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect (...)
    1 citation
  2. Egalitarian Machine Learning.Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a (...)
    4 citations
  3. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    3 citations
  4. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to (...)
    1 citation
  5. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - In Atoosa Kasirzadeh & Andrew Smart (eds.), ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness (...)
    3 citations
  6. Big Data Analytics in Healthcare: Exploring the Role of Machine Learning in Predicting Patient Outcomes and Improving Healthcare Delivery.Federico Del Giorgio Solfa & Fernando Rogelio Simonato - 2023 - International Journal of Computations Information and Manufacturing (Ijcim) 3 (1):1-9.
    Healthcare professionals decide wisely about personalized medicine, treatment plans, and resource allocation by utilizing big data analytics and machine learning. To guarantee that algorithmic recommendations are impartial and fair, however, ethical issues relating to prejudice and data privacy must be taken into account. Big data analytics and machine learning have a great potential to disrupt healthcare, and as these technologies continue to evolve, new opportunities to reform healthcare and enhance patient outcomes may arise. In order (...)
  7. Just Machines.Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring- or decisionmaking systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, (...)
    5 citations
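As a toy illustration of the two criteria this entry contrasts, the sketch below computes group-wise accuracy (one variant of classification parity) and score-conditional positive rates (calibration) on a small hypothetical dataset. The data, group labels, and function names are all invented for illustration; they are not drawn from the paper.

```python
# Hedged sketch: two group-fairness criteria from the fair-ML literature,
# computed on made-up scored decisions for two groups A and B.

def accuracy_by_group(records):
    """Classification parity (one variant): accuracy should match across groups."""
    acc = {}
    for g in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == g]
        acc[g] = sum(r["pred"] == r["label"] for r in rows) / len(rows)
    return acc

def positive_rate_given_score(records, score):
    """Calibration: among individuals given the same score, the fraction with
    a positive label should be similar across groups."""
    rates = {}
    for g in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == g and r["score"] == score]
        if rows:
            rates[g] = sum(r["label"] for r in rows) / len(rows)
    return rates

# Hypothetical scored decisions.
data = [
    {"group": "A", "score": 0.7, "pred": 1, "label": 1},
    {"group": "A", "score": 0.7, "pred": 1, "label": 0},
    {"group": "A", "score": 0.2, "pred": 0, "label": 0},
    {"group": "B", "score": 0.7, "pred": 1, "label": 1},
    {"group": "B", "score": 0.2, "pred": 0, "label": 0},
    {"group": "B", "score": 0.2, "pred": 0, "label": 1},
]

print(accuracy_by_group(data))
print(positive_rate_given_score(data, 0.7))
```

The impossibility results the abstract mentions show that, outside degenerate cases, no system can satisfy both criteria at once when base rates differ across groups.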
  8. Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world (...)
    10 citations
  9. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
    3 citations
  10. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations. With the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as (...)
    32 citations
  11. Broomean(ish) Algorithmic Fairness?Clinton Castro - forthcoming - Journal of Applied Philosophy.
    Recently, there has been much discussion of ‘fair machine learning’: fairness in data-driven decision-making systems (which are often, though not always, made with assistance from machine learning systems). Notorious impossibility results show that we cannot have everything we want here. Such problems call for careful thinking about the foundations of fair machine learning. Sune Holm has identified one promising way forward, which involves applying John Broome's theory of fairness to the puzzles of (...)
  12. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment, of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness (...)
    2 citations
  13. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs.Sina Fazelpour - 2021
    This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this prevalent interpretation by (...)
  14. Adversarial Sampling for Fairness Testing in Deep Neural Network.Tosin Ige, William Marfo, Justin Tonkinson, Sikiru Adewale & Bolanle Hafiz Matti - 2023 - International Journal of Advanced Computer Science and Applications 14 (2).
    In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. While several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attacks, some of which include adversarial training algorithms, there is still the pitfall that adversarial training algorithms tend to cause disparities in accuracy and robustness among different groups. Our research is (...)
    4 citations
  15. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted (...)
    14 citations
  16. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
    8 citations
  17. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because (...)
    3 citations
  18. Decisional Value Scores.Gabriella Waters, William Mapp & Phillip Honenberger - 2024 - AI and Ethics 2024.
    Research in ethical AI has made strides in quantitative expression of ethical values such as fairness, transparency, and privacy. Here we contribute to this effort by proposing a new family of metrics called “decisional value scores” (DVS). DVSs are scores assigned to a system based on whether the decisions it makes meet or fail to meet a particular standard (either individually, in total, or as a ratio or average over decisions made). Advantages of DVS include greater discrimination capacity between types (...)
  19. Performance vs. competence in human–machine comparisons.Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences 41.
    Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. (...)
    9 citations
  20. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection.Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed with analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles (...)
  21. Can machines think? The controversy that led to the Turing test.Bernardo Gonçalves - 2023 - AI and Society 38 (6):2499-2509.
    Turing’s much debated test has turned 70 and is still fairly controversial. His 1950 paper is seen as a complex and multilayered text, and key questions about it remain largely unanswered. Why did Turing select learning from experience as the best approach to achieve machine intelligence? Why did he spend several years working with chess playing as a task to illustrate and test for machine intelligence only to trade it out for conversational question-answering in 1950? Why did (...)
    3 citations
  22. A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform.Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine (...)
  23. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences.Lode Lauwaert - 2023 - Artificial Intelligence Review 56:3473–3504.
    Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with (...)
  24. (1 other version)Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments.Ahmed Izzidien - 2021 - AI and Society (March 2021):1-20.
    Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches, has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover if a verb is unfair or fair. It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises. That individuals consider fair acts those that they would (...)
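Approaches like the one this entry describes compare words by the geometry of their embedding vectors, typically via cosine similarity. The sketch below is purely illustrative: the three-dimensional vectors are made up (real embeddings such as word2vec or GloVe have hundreds of dimensions learned from text), and the function names are ours, not the paper's.

```python
import math

# Hedged sketch: comparing words by cosine similarity of (hypothetical)
# embedding vectors, the basic operation behind vector-based fairness tests.

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1 = same direction, -1 = opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented 3-d vectors for illustration only.
embeddings = {
    "help":   [0.9, 0.1, 0.3],
    "assist": [0.8, 0.2, 0.3],
    "steal":  [-0.7, 0.9, 0.1],
}

# Verbs with similar social valence should sit closer in embedding space.
print(cosine_similarity(embeddings["help"], embeddings["assist"]))
print(cosine_similarity(embeddings["help"], embeddings["steal"]))
```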
  25. Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments (...)
    3 citations
  26. Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models (...)
    56 citations
  27. (1 other version)Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems.Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in (...)
    2 citations
  28. Clinical applications of machine learning algorithms: beyond the black box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:I886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  29. Machine Learning-Based Diabetes Prediction: Feature Analysis and Model Assessment.Fares Wael Al-Gharabawi & Samy S. Abu-Naser - 2023 - International Journal of Academic Engineering Research (IJAER) 7 (9):10-17.
    This study employs machine learning to predict diabetes using a Kaggle dataset with 13 features. Our three-layer model achieves an accuracy of 98.73% and an average error of 0.01%. Feature analysis identifies Age, Gender, Polyuria, Polydipsia, Visual blurring, sudden weight loss, partial paresis, delayed healing, irritability, Muscle stiffness, Alopecia, Genital thrush, Weakness, and Obesity as influential predictors. These findings have clinical significance for early diabetes risk assessment. While our research addresses gaps in the field, further work is needed (...)
    1 citation
  30. Machine Learning-Based Cyberbullying Detection System with Enhanced Accuracy and Speed.M. Arulselvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):421-429.
    The rise of social media has created a new platform for communication and interaction, but it has also facilitated the spread of harmful behaviors such as cyberbullying. Detecting and mitigating cyberbullying on social media platforms is a critical challenge that requires advanced technological solutions. This paper presents a novel approach to cyberbullying detection using a combination of supervised machine learning (ML) and natural language processing (NLP) techniques, enhanced by optimization algorithms. The proposed system is designed to identify and (...)
  31. Preparing undergraduates for visual analytics.Ronald A. Rensink - 2015 - IEEE Computer Graphics and Applications 35 (2):16-20.
    Visual analytics (VA) combines the strengths of human and machine intelligence to enable the discovery of interesting patterns in challenging datasets. Historically, most attention has been given to developing the machine component—for example, machine learning or the human-computer interface. However, it is also essential to develop the abilities of the analysts themselves, especially at the beginning of their careers. -/- For the past several years, we at the University of British Columbia (UBC)—with the support of The (...)
  32. Machine Learning, Misinformation, and Citizen Science.Adrian K. Yee - 2023 - European Journal for Philosophy of Science 13 (56):1-24.
    Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models needs to be assessed via more democratic criteria than has previously been recognized.
  33. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics.Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments (...)
    2 citations
  34. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit (...)
  35. Machine Learning and Job Posting Classification: A Comparative Study.Ibrahim M. Nasser & Amjad H. Alzaanin - 2020 - International Journal of Engineering and Information Systems (IJEAIS) 4 (9):06-14.
    In this paper, we investigated multiple machine learning classifiers which are, Multinomial Naive Bayes, Support Vector Machine, Decision Tree, K Nearest Neighbors, and Random Forest in a text classification problem. The data we used contains real and fake job posts. We cleaned and pre-processed our data, then we applied TF-IDF for feature extraction. After we implemented the classifiers, we trained and evaluated them. Evaluation metrics used are precision, recall, f-measure, and accuracy. For each classifier, results were summarized (...)
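The feature-extraction step this abstract names, TF-IDF, weights each term by its frequency in a document scaled by how rare it is across the corpus. The minimal sketch below is an illustrative from-scratch version, not the paper's implementation (which would more likely use a library such as scikit-learn); the tiny corpus is invented.

```python
import math

# Hedged sketch of TF-IDF: term frequency times inverse document frequency,
# commonly computed before fitting classifiers such as Naive Bayes or SVMs.

def tfidf(docs):
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = {}
    for doc in docs:
        for term in set(doc.split()):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        words = doc.split()
        vec = {}
        for term in set(words):
            tf = words.count(term) / len(words)
            idf = math.log(n / df[term])
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

docs = ["remote developer job", "fake job offer", "developer job offer"]
vecs = tfidf(docs)
# A term appearing in every document gets idf = log(3/3) = 0,
# so it carries no discriminative weight.
print(vecs[0]["job"])   # 0.0
```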
  36. Reliability in Machine Learning.Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
    1 citation
  37. Human Induction in Machine Learning: A Survey of the Nexus.Petr Spelda & Vit Stritecky - 2021 - ACM Computing Surveys 54 (3):1-18.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target (...)
    1 citation
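The experimental paradigm this survey describes can be sketched in a few lines: split the labelled data, fit on the training portion, and measure generalisation on the held-out test set. Everything below is an illustrative toy (the "model" is a trivial majority-class predictor, and all names are ours), not code from the survey.

```python
import random

# Hedged sketch of the standard ML evaluation paradigm:
# hold out a test set and measure performance on unseen samples.

def train_test_split(data, test_fraction=0.25, seed=0):
    """Shuffle deterministically, then split into train and test portions."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def fit_majority(train):
    """Trivial 'model': always predict the most common training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

data = [(x, x % 2) for x in range(100)]  # toy labelled samples
train, test = train_test_split(data)
majority = fit_majority(train)
accuracy = sum(majority == y for _, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The survey's point is about what licenses trust in this measured accuracy once the model meets "unseen samples" beyond the test set.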
  38.  49
    Machine Learning for Optimized Attribute-Based Data Management in Secure Cloud Storage.P. Selvaprasanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):434-450.
    Cloud storage's scalability, accessibility, and affordability have made it essential in the digital age. Data security and privacy remain a major issue due to the large volume of sensitive data kept on cloud services. Traditional encryption is safe but slows data recovery, especially for keyword searches. Secure, fine-grained access control and quick keyword searches over encrypted data are possible using attribute-based keyword search (ABKS). This study examines how ABKS might optimize search efficiency and data security in cloud storage systems. We (...)
  39. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy.Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as (...)
  40. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges.Joshua Hatherley & Robert Sparrow - 2023 - Journal of the American Medical Informatics Association 30 (2):361-366.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. -/- Target audience: The target audiences for (...)
    1 citation
  41. Fraudulent Financial Transactions Detection Using Machine Learning.Mosa M. M. Megdad, Samy S. Abu-Naser & Bassem S. Abu-Nasser - 2022 - International Journal of Academic Information Systems Research (IJAISR) 6 (3):30-39.
    It is crucial to actively detect the risks of transactions in a financial company to improve customer experience and minimize financial loss. In this study, we compare different machine learning algorithms to effectively and efficiently predict the legitimacy of financial transactions. The algorithms used in this study were: MLP Regressor, Random Forest Classifier, Complement NB, MLP Classifier, Gaussian NB, Bernoulli NB, LGBM Classifier, Ada Boost Classifier, K Neighbors Classifier, Logistic Regression, Bagging Classifier, Decision Tree Classifier and Deep (...). The dataset was collected from the Kaggle repository. It consists of 6362620 rows and 10 columns. The best classifier with the unbalanced dataset was the Random Forest Classifier: accuracy 99.97%, precision 99.96%, recall 99.97%, and F1-score 99.96%. However, the best classifier with the balanced dataset was the Bagging Classifier: accuracy 99.96%, precision 99.95%, recall 99.98%, and F1-score 99.96%.
    Bookmark   26 citations
  42. Machines learning values.Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: (...)
    Bookmark   2 citations
  43. Synthetic Health Data: Real Ethical Promise and Peril.Daniel Susser, Daniel S. Schiff, Sara Gerke, Laura Y. Cabrera, I. Glenn Cohen, Megan Doerr, Jordan Harrod, Kristin Kostick-Quenet, Jasmine McNealy, Michelle N. Meyer, W. Nicholson Price & Jennifer K. Wagner - 2024 - Hastings Center Report 54 (5):8-13.
    Researchers and practitioners are increasingly using machine‐generated synthetic data as a tool for advancing health science and practice, by expanding access to health data while—potentially—mitigating privacy and related ethical concerns around data sharing. While using synthetic data in this way holds promise, we argue that it also raises significant ethical, legal, and policy concerns, including persistent privacy and security problems, accuracy and reliability issues, worries about fairness and bias, and new regulatory challenges. The virtue of synthetic data is often (...)
  44. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People.S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm: SIPRI. pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new threats to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and (...)
    Bookmark   1 citation
  45. Privacy and Machine Learning-Based Artificial Intelligence: Philosophical, Legal, and Technical Investigations.Haleh Asgarinia - 2024 - Dissertation, Department of Philosophy, University of Twente
    This dissertation consists of five chapters, each written as independent research papers that are unified by an overarching concern regarding information privacy and machine learning-based artificial intelligence (AI). This dissertation addresses the issues concerning privacy and AI by responding to the following three main research questions (RQs): RQ1. ‘How does an AI system affect privacy?’; RQ2. ‘How effectively does the General Data Protection Regulation (GDPR) assess and address privacy issues concerning both individuals and groups?’; and RQ3. ‘How can (...)
  46. Machine Learning-Driven Optimization for Accurate Cardiovascular Disease Prediction.Yoheswari S. - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):350-359.
    The research methodology involves data preprocessing, feature engineering, model training, and performance evaluation. We employ optimization methods such as Genetic Algorithms and Grid Search to fine-tune model parameters, ensuring robust and generalizable models. The dataset used includes patient medical records, with features like age, blood pressure, cholesterol levels, and lifestyle habits serving as inputs for the ML models. Evaluation metrics, including accuracy, precision, recall, F1-score, and the area under the ROC curve (AUC-ROC), assess the models' predictive power.
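    The Grid Search tuning step the abstract describes can be sketched as follows. The estimator, parameter grid, and synthetic data here are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of Grid Search hyperparameter tuning scored by AUC-ROC.
# The estimator and grid values are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for patient records (age, blood pressure, cholesterol, ...).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

grid = GridSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    scoring="roc_auc",   # area under the ROC curve, as in the evaluation above
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("cross-validated AUC-ROC:", round(grid.best_score_, 3))
```

    Genetic Algorithm search, the other method named above, would replace the exhaustive grid with an evolutionary search over the same parameter space.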
  47. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    Bookmark   25 citations
  48. Credit Score Classification Using Machine Learning.Mosa M. M. Megdad & Samy S. Abu-Naser - 2024 - International Journal of Academic Information Systems Research (IJAISR) 8 (5):1-10.
    Abstract: Ensuring the proactive detection of transaction risks is paramount for financial institutions, particularly in the context of managing credit scores. In this study, we compare different machine learning algorithms to classify credit scores effectively and efficiently. The algorithms used in this study were: LogisticRegressionCV, ExtraTreeClassifier, LGBMClassifier, AdaBoostClassifier, GradientBoostingClassifier, Perceptron, RandomForestClassifier, KNeighborsClassifier, BaggingClassifier, DecisionTreeClassifier, CalibratedClassifierCV, LabelPropagation and Deep Learning. The dataset was collected from the Kaggle repository. It consists of 164 rows and 8 columns. The best classifier with the unbalanced dataset was the LogisticRegressionCV, with 100.0% accuracy, 100.0% precision and 100.0% recall (...)
  49. Machine Learning-Enhanced Secure Cloud Storage with Attribute-Based Data Access.A. Manoj Prabaharan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):418-429.
    Cloud computing has transformed data management and storage by providing unmatched scalability, flexibility, and cost-effectiveness. However, rising cloud storage use has raised data security and privacy issues. As sensitive data is outsourced to third-party cloud providers, security is crucial. Traditional encryption methods secure data but make data retrieval difficult; in particular, efficiently searching encrypted data without compromising security remains a challenge.
  50. Intelligent Driver Drowsiness Detection System Using Optimized Machine Learning Models.M. Arulselvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):397-405.
    Driver drowsiness is a significant factor contributing to road accidents, resulting in severe injuries and fatalities. This study presents an optimized approach for detecting driver drowsiness using machine learning techniques. The proposed system utilizes real-time data to analyze driver behavior and physiological signals to identify signs of fatigue. Various machine learning algorithms, including Support Vector Machines (SVM), Convolutional Neural Networks (CNN), and Random Forest, are explored for their efficacy in detecting drowsiness. The system incorporates an (...)
1 — 50 / 971