Results for 'AI training'

970 found
  1. AI training data, model success likelihood, and informational entropy-based value.Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping (...)
  2. How AI Trained on the Confucian Analects Can Solve Ethical Dilemmas.Emma So - 2024 - Curieux Academic Journal 1 (Issue 42):56-67.
    The influence of AI has spread globally, intriguing both the East and the West. As a result, some Chinese scholars have explored how AI and Chinese philosophy can be examined together, and have offered some unique insights into AI from a Chinese philosophical perspective. Similarly, we investigate how the two fields can be developed in conjunction, focusing on the popular Confucian philosophy. In this work, we use Confucianism as a philosophical foundation to investigate human-technology relations closely, proposing that a Confucian-imbued (...)
  3. Collective ownership of AI.Markus Furendal - 2025 - In Martin Hähnel & Regina Müller (eds.), A Companion to Applied Philosophy of AI. Wiley-Blackwell.
    AI technology promises to be both the most socially important and the most profitable technology of a generation. At the same time, control over – and profits from – the technology are highly concentrated in the hands of a few large tech companies. This chapter discusses whether bringing AI technology under collective ownership and control is an attractive way of counteracting this development. It discusses justice-based rationales for collective ownership, such as the claim that, since the training of AI systems (...)
  4. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    3 citations
  5. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council (NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common (...)
  6. Excavating “Excavating AI”: The Elephant in the Gallery.Michael J. Lyons - 2020 - arXiv 2009:1-15.
    Two art exhibitions, “Training Humans” and “Making Faces,” and the accompanying essay “Excavating AI: The politics of images in machine learning training sets” by Kate Crawford and Trevor Paglen, are having a substantial impact on discourse in social and mass media networks and in some scholarly circles. Critical scrutiny reveals, however, a self-contradictory stance regarding informed consent for the use of facial images, as well as serious flaws in their critique of ML training sets. Our analysis (...)
    3 citations
  7. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    48 citations
  8. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to facilitate (...)
  9. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term that seeks inspiration from human psychology, with the great difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than to perceptual failures. -/- AI hallucinations may have their source in the data itself, that is, the (...)
  10. AI-Driven Air Quality Forecasting Using Multi-Scale Feature Extraction and Recurrent Neural Networks.P. Selvaprasanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):575-590.
    We investigate the application of Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, and a hybrid CNN-LSTM model for forecasting air pollution levels based on historical data. Our experimental setup uses real-world air quality datasets from multiple regions, containing measurements of pollutants like PM2.5, PM10, CO, NO2, and SO2, alongside meteorological data such as temperature, humidity, and wind speed. The models are trained, validated, and tested using a split dataset, and their accuracy is evaluated using performance metrics like Mean (...)
  11. ITS for Enhancing Training Methodology for Students Majoring in Electricity.Mohammed S. Nassr & Samy S. Abu-Naser - 2019 - International Journal of Academic Pedagogical Research (IJAPR) 3 (3):16-30.
    This thesis focuses on the use of an intelligent tutoring system for the education and training of students specializing in electricity in the field of technical and vocational education. The use of modern systems in training and education will have a great positive impact on the level of students receiving that training and education; this will improve the local economy by producing professional graduates who are able to engage in society efficiently, especially for those who (...)
    2 citations
  12. “Excavating AI” Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset.Michael J. Lyons - 2021 - arXiv 2107:1-20.
    Twenty-five years ago, my colleagues Miyuki Kamachi and Jiro Gyoba and I designed and photographed JAFFE, a set of facial expression images intended for use in a study of face perception. In 2019, without seeking permission or informing us, Kate Crawford and Trevor Paglen exhibited JAFFE in two widely publicized art shows. In addition, they published a nonfactual account of the images in the essay “Excavating AI: The Politics of Images in Machine Learning Training Sets.” The present article recounts (...)
  13. From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic.Katia Schwerzmann - forthcoming - AI and Society.
    This paper reframes the issue of appropriation, extraction, and dispossession through AI—an assemblage of machine learning models trained on big data—in terms of enclosure and foreclosure. While enclosures are the product of a well-studied set of operations pertaining to both the constitution of the sovereign State and the primitive accumulation of capital, here, I want to recover an older form of the enclosure operation to then contrast it with foreclosure to better understand the effects of current algorithmic rationality. I argue (...)
  14. What does AI believe in?Evgeny Smirnov - manuscript
    I conducted an experiment using four different artificial intelligence models developed by OpenAI to estimate the persuasiveness and rational justification of various philosophical stances. The AI models used were text-davinci-003, text-ada-001, text-curie-001, and text-babbage-001, which differed in complexity and the size of their training data sets. For the philosophical stances, the list of 30 questions created by Bourget & Chalmers (2014) was used. The results indicate that each model seems to have its own distinctive ‘cognitive’ style. The (...)
  15. Innovating Financial and Medical Services: Generative AI’s Impact on Banking and Healthcare.M. Sheik Dawood - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):610-618.
    Results indicate substantial improvements in efficiency, accuracy, and personalized care, but also highlight the challenges of data privacy, ethical considerations, and system scalability. By providing a structured analysis, this research contributes insights into optimizing generative AI deployments for both banking and healthcare, ensuring a balance between innovation and risk management. The study concludes with recommendations for future research directions, including advanced model training, ethical guidelines, and enhanced privacy measures. These insights aim to inform practitioners on the benefits of generative (...)
  16. (1 other version)Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems.Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
    2 citations
  17. Turning queries into questions: For a plurality of perspectives in the age of AI and other frameworks with limited (mind)sets.Claudia Westermann & Tanu Gupta - 2023 - Technoetic Arts 21 (1):3-13.
    The editorial introduces issue 21.1 of Technoetic Arts via a critical reflection on the artificial intelligence hype (AI hype) that emerged in 2022. Tracing the history of the critique of Large Language Models, the editorial underscores that there are substantial ethical challenges related to bias in the training data, copyright issues, as well as ecological challenges which the technology industry has consistently downplayed over the years. -/- The editorial highlights the distinction between the current AI technology’s reliance on extensive (...)
    1 citation
  19. Diffusing the Creator: Attributing Credit for Generative AI Outputs.Donal Khosrowi, Finola Finn & Elinor Clark - 2023 - AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society.
    The recent wave of generative AI (GAI) systems like Stable Diffusion that can produce images from human prompts raises controversial issues about creatorship, originality, creativity and copyright. This paper focuses on creatorship: who creates and should be credited with the outputs made with the help of GAI? Existing views on creatorship are mixed: some insist that GAI systems are mere tools, and human prompters are creators proper; others are more open to acknowledging more significant roles for GAI, but most conceive (...)
  20. What Are Lacking in Sora and V-JEPA’s World Models? -A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination.Jianqiu Zhang - unknown
    Sora from OpenAI has shown exceptional performance, yet it faces scrutiny over whether its technological prowess equates to an authentic comprehension of reality. Critics contend that it lacks a foundational grasp of the world, a deficiency V-JEPA from Meta aims to amend with its joint embedding approach. This debate is vital for steering the future direction of Artificial General Intelligence (AGI). We enrich this debate by developing a theory of productive imagination that generates a coherent world model based on Kantian (...)
  21. AI-Driven Emotion Recognition and Regulation Using Advanced Deep Learning Models.S. Arul Selvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):383-389.
    Emotion detection and management have emerged as pivotal areas in human-computer interaction, offering potential applications in healthcare, entertainment, and customer service. This study explores the use of deep learning (DL) models to enhance emotion recognition accuracy and enable effective emotion regulation mechanisms. By leveraging large datasets of facial expressions, voice tones, and physiological signals, we train deep neural networks to recognize a wide array of emotions with high precision. The proposed system integrates emotion recognition with adaptive management strategies that provide (...)
  22. A framework of AI-Powered Engineering Technology to aid Altair Data Intelligence Start-up Benefits; speeding up Data-Driven Solution.Md Majidul Haque Bhuiyan - manuscript
    Today, software tools support all parts of engineering work, from design to production. Many engineering processes involve tedious routine tasks and suffer from manual handoffs and data silos. AI designers train deep neural networks and integrate them into software frameworks.
  23. Health Care Using AI.T. Poongodi - 2019 - International Journal of Research and Analytical Reviews 6 (2):141-145.
    Breast cancer treatment is being transformed by artificial intelligence (AI). Nevertheless, most scientists, engineers, and physicians aren't ready to contribute to the healthcare AI revolution. In this paper, we discuss our experiences teaching a new undergraduate course for American students that seeks to train the next generation in cross-cultural design thinking, which we believe is critical for AI to realize its full potential in breast cancer treatment. The main tasks of this course are preparing, performing and translating interviews with healthcare professionals (...)
  24. Revolutionizing Education with ChatGPT: Enhancing Learning Through Conversational AI.Prapasiri Klayklung, Piyawatjana Chocksathaporn, Pongsakorn Limna, Tanpat Kraiwanit & Kris Jangjarat - 2023 - Universal Journal of Educational Research 2 (3):217-225.
    The development of conversational artificial intelligence (AI) has brought about new opportunities for improving the learning experience in education. ChatGPT, a large language model trained on a vast corpus of text, has the potential to revolutionize education by enhancing learning through personalized and interactive conversations. This paper explores the benefits of integrating ChatGPT in education in Thailand. The research strategy employed in this study was qualitative, utilizing in-depth interviews with eight key informants who were selected using purposive sampling. The collected (...)
  25. Netizens, Academicians, and Information Professionals' Opinions About AI With Special Reference To ChatGPT.Subaveerapandiyan A., A. Vinoth & Neelam Tiwary - 2023 - Library Philosophy and Practice (E-Journal):1-16.
    This study aims to understand the perceptions and opinions of academicians towards ChatGPT-3 by collecting and analyzing social media comments, and a survey was conducted with library and information science professionals. The research uses a content analysis method and finds that while ChatGPT-3 can be a valuable tool for research and writing, it is not 100% accurate and should be cross-checked. The study also finds that while some academicians may not accept ChatGPT-3, most are starting to accept it. The study (...)
  26. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and (...)
    4 citations
  27. Book review: Coeckelbergh, Mark (2022): The political philosophy of AI. [REVIEW]Michael W. Schmidt - 2024 - TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 33 (1):68–69.
    Mark Coeckelbergh starts his book with a very powerful picture based on a real incident: On the 9th of January 2020, Robert Williams was wrongfully arrested by Detroit police officers in front of his two young daughters, wife and neighbors. For 18 hours the police would not disclose the grounds for his arrest (American Civil Liberties Union 2020; Hill 2020). The decision to arrest him was primarily based on a facial detection algorithm which matched Mr. Williams’ driving license photo with (...)
  28. Augmenting Morality through Ethics Education: the ACTWith model.Jeffrey White - 2024 - AI and Society:1-20.
    Recently in this journal, Jessica Morley and colleagues (AI & SOC 2023 38:411–423) review AI ethics and education, suggesting that a cultural shift is necessary in order to prepare students for their responsibilities in developing technology infrastructure that should shape ways of life for many generations. Current AI ethics guidelines are abstract and difficult to implement as practical moral concerns proliferate. They call for improvements in ethics course design, focusing on real-world cases and perspective-taking tools to immerse students in challenging (...)
    1 citation
  29. Critical Provocations for Synthetic Data.Daniel Susser & Jeremy Seeman - 2024 - Surveillance and Society 22 (4):453-459.
    Training artificial intelligence (AI) systems requires vast quantities of data, and AI developers face a variety of barriers to accessing the information they need. Synthetic data has captured researchers’ and industry’s imagination as a potential solution to this problem. While some of the enthusiasm for synthetic data may be warranted, in this short paper we offer a critical counterweight to simplistic narratives that position synthetic data as a cost-free solution to every data-access challenge—provocations highlighting ethical, political, and governance issues the (...)
  30. Classification of Real and Fake Human Faces Using Deep Learning.Fatima Maher Salman & Samy S. Abu-Naser - 2022 - International Journal of Academic Engineering Research (IJAER) 6 (3):1-14.
    Artificial intelligence (AI), deep learning, machine learning and neural networks represent extremely exciting and powerful machine learning-based techniques used to solve many real-world problems. Artificial intelligence is the branch of computer science that emphasizes the development of intelligent machines that think and work like humans, performing tasks such as recognition, problem-solving, learning, visual perception, decision-making and planning. Deep learning is a subset of machine learning in artificial intelligence that has networks capable of learning unsupervised from data that is unstructured or unlabeled. Deep learning (...)
    26 citations
  31. Does ChatGPT have semantic understanding?Lisa Miracchi Titus - 2024 - Cognitive Systems Research 83 (101174):1-13.
    Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to (...)
    3 citations
  32. Human-Aided Artificial Intelligence: Or, How to Run Large Computations in Human Brains? Towards a Media Sociology of Machine Learning.Rainer Mühlhoff - 2019 - New Media and Society 1.
    Today, artificial intelligence, especially machine learning, is structurally dependent on human participation. Technologies such as Deep Learning (DL) leverage networked media infrastructures and human-machine interaction designs to harness users to provide training and verification data. The emergence of DL is therefore based on a fundamental socio-technological transformation of the relationship between humans and machines. Rather than simulating human intelligence, DL-based AIs capture human cognitive abilities, so they are hybrid human-machine apparatuses. From a perspective of media philosophy and social-theoretical critique, (...)
    2 citations
  33. Artificial Intelligence and Punjabi Culture.D. P. Singh - 2023 - International Culture and Art (Ica) 5 (4):11-14.
    Artificial Intelligence (AI) is a technology that makes machines smart and capable of doing things that usually require human intelligence. AI works by training machines to learn from data and experiences. Such devices can recognize patterns, understand spoken language, see and understand images, and even make predictions based on their learning. Voice assistants like Siri or Alexa can understand our voice commands, answer questions, and perform tasks for us. AI-based self-driving cars can sense their surroundings, make decisions, and drive (...)
  34. A Novel Deep Learning-Based Framework for Intelligent Malware Detection in Cybersecurity.P. Selvaprasanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):666-669.
    With the proliferation of sophisticated cyber threats, traditional malware detection techniques are becoming inadequate to ensure robust cybersecurity. This study explores the integration of deep learning (DL) techniques into malware detection systems to enhance their accuracy, scalability, and adaptability. By leveraging convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, this research presents an intelligent malware detection framework capable of identifying both known and zero-day threats. The methodology involves feature extraction from static, dynamic, and hybrid malware datasets, followed by (...)
  35. Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
    3 citations
  36. Imagination, Creativity, and Artificial Intelligence.Peter Langland-Hassan - 2024 - In Amy Kind & Julia Langkau (eds.), Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press.
    This chapter considers the potential of artificial intelligence (AI) to exhibit creativity and imagination, in light of recent advances in generative AI and the use of deep neural networks (DNNs). Reasons for doubting that AI exhibits genuine creativity or imagination are considered, including the claim that the creativity of an algorithm lies in its developer, that generative AI merely reproduces patterns in its training data, and that AI is lacking in a necessary feature for creativity or imagination, such as (...)
    1 citation
  37. Diagnosis of Blood Cells Using Deep Learning.Ahmed J. Khalil & Samy S. Abu-Naser - 2022 - Dissertation, University of Tehran
    In computer science, Artificial Intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Deep Learning is a new field of research. One of the branches of Artificial Intelligence Science deals with the creation of theories and algorithms that (...)
    6 citations
  38. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.Benjamin Davies & Thomas Douglas - 2022 - In Jesper Ryberg & Julian V. Roberts (eds.), Sentencing and Artificial Intelligence. Oxford: OUP.
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that (...)
    3 citations
  39. Towards Pedagogy supporting Ethics in Analysis.Marie Oldfield - 2022 - Journal of Humanistic Mathematics 12 (2).
    Over the past few years we have seen an increasing number of legal proceedings related to inappropriately implemented technology. At the same time, career paths have diverged from the foundation of statistics out to Data Scientist, Machine Learning and AI. All of these new branches are fundamentally branches of statistics and mathematics. This has meant that formal training has struggled to keep up with what is required in the plethora of new roles. Mathematics as a taught subject is still (...)
    1 citation
  40. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini (ed.), Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable ``black boxes,'' making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with (...)
  41. Ethical Implications of Alzheimer’s Disease Prediction in Asymptomatic Individuals Through Artificial Intelligence.Frank Ursin, Cristian Timmermann & Florian Steger - 2021 - Diagnostics 11 (3):440.
    Biomarker-based predictive tests for subjectively asymptomatic Alzheimer’s disease (AD) are utilized in research today. Novel applications of artificial intelligence (AI) promise to predict the onset of AD several years in advance without determining biomarker thresholds. Until now, little attention has been paid to the new ethical challenges that AI brings to the early diagnosis in asymptomatic individuals, beyond contributing to research purposes, when we still lack adequate treatment. The aim of this paper is to explore the ethical arguments put forward (...)
    1 citation
  42. Universal Science of Mind: Can Complexity-Based Artificial Intelligence Save the World in Crisis?Andrei P. Kirilyuk - manuscript
    While practical efforts in the field of artificial intelligence grow exponentially, a truly scientific and mathematically exact understanding of the underlying phenomena of intelligence and consciousness is still missing from the conventional science framework. The inevitably dominating empirical, trial-and-error approach has vanishing efficiency for these extremely complicated phenomena, ending up in fundamentally limited imitations of intelligent behaviour. We provide a first-principles analysis of the unreduced many-body interaction process in the brain, revealing its qualitatively new features, which give rise to rigorously defined (...)
    1 citation
  43. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  44. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence.Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance when training neural networks such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based (...)
  45. The Ethics of Automating Therapy.Jake Burley, James J. Hughes, Alec Stubbs & Nir Eisikovits - 2024 - Ieet White Papers.
    The mental health crisis and loneliness epidemic have sparked a growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. (...)
  46. Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability.Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the article (...)
    1 citation
  47. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in (...)
  48. The Ghost in the Machine has an American accent: value conflict in GPT-3.Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). (...)
  49. Babbling stochastic parrots? A Kripkean argument for reference in large language models.Steffen Koch - forthcoming - Philosophy of AI.
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a more optimistic view. (...)
  50. Revolutionizing Cybersecurity: Intelligent Malware Detection Through Deep Neural Networks.M. Sheik Dawood - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):655-666.
    With the proliferation of sophisticated cyber threats, traditional malware detection techniques are becoming inadequate to ensure robust cybersecurity. This study explores the integration of deep learning (DL) techniques into malware detection systems to enhance their accuracy, scalability, and adaptability. By leveraging convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, this research presents an intelligent malware detection framework capable of identifying both known and zero-day threats. The methodology involves feature extraction from static, dynamic, and hybrid malware datasets, followed by (...)
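The framework this last entry describes begins with feature extraction from static, dynamic, and hybrid malware datasets before any neural network is involved. A minimal, hypothetical sketch of one classic static feature (not the paper's actual pipeline): a byte-bigram histogram over a file's raw bytes, which could feed a CNN or other classifier.

```python
# Illustrative static feature extraction for ML-based malware detection:
# a byte-bigram histogram. The sample bytes below are hypothetical.

from collections import Counter

def bigram_histogram(data: bytes, top_k: int = 4):
    """Count adjacent byte pairs; frequent bigrams are a classic static feature."""
    pairs = Counter(zip(data, data[1:]))
    return pairs.most_common(top_k)

sample = b"\x4d\x5a\x90\x00\x4d\x5a\x90\x00"  # hypothetical file contents
features = bigram_histogram(sample)
print(features)
```

In a full system, histograms like this (over whole binaries) would be normalized and stacked into fixed-size tensors for the CNN/RNN/transformer models the abstract mentions; dynamic features (e.g. API-call traces from sandboxed execution) would be extracted separately.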
1 — 50 / 970