Results for 'Evaluating AI'

973 found
  1. The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
    163 citations
  2. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
    1 citation
  3. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about (...)
    4 citations
  4. Evaluation and Design of Generalist Systems (EDGeS). John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  5. AI-Related Misdirection Awareness in AIVR. Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  6. Toward an Ethics of AI Assistants: an Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    31 citations
  7. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
    2 citations
  8. AI-aesthetics and the Anthropocentric Myth of Creativity. Emanuele Arielli & Lev Manovich - 2022 - NODES 1 (19-20).
    Since the beginning of the 21st century, technologies like neural networks, deep learning and “artificial intelligence” (AI) have gradually entered the artistic realm. We witness the development of systems that aim to assess, evaluate and appreciate artifacts according to artistic and aesthetic criteria or by observing people’s preferences. In addition to that, AI is now used to generate new synthetic artifacts. When a machine paints a Rembrandt, composes a Bach sonata, or completes a Beethoven symphony, we say that this is (...)
    2 citations
  9. Extinction Risks from AI: Invisible to Science? Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)
  10. Algorithm Evaluation Without Autonomy. Scott Hill - forthcoming - AI and Ethics.
    In Algorithms & Autonomy, Rubel, Castro, and Pham (hereafter RCP) argue that the concept of autonomy is especially central to understanding important moral problems about algorithms. In particular, autonomy plays a role in analyzing the version of social contract theory that they endorse. I argue that although RCP are largely correct in their diagnosis of what is wrong with the algorithms they consider, those diagnoses can be appropriated by moral theories RCP see as in competition with their autonomy-based theory. (...)
  11. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility. Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural injustice. (...)
  12. Big Tech corporations and AI: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age. Marianna Capasso & Steven Umbrello - 2023 - In Francesca Mazzi & Luciano Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer Verlag. pp. 231–249.
    The pervasiveness of AI-empowered technologies across multiple sectors has led to drastic changes concerning traditional social practices and how we relate to one another. Moreover, market-driven Big Tech corporations are now entering public domains, and concerns have been raised that they may even influence public agenda and research. Therefore, this chapter focuses on assessing and evaluating what kind of business model is desirable to incentivise the AI for Social Good (AI4SG) factors. In particular, the chapter explores the implications of (...)
  13. (1 other version) AI, Biometric Analysis, and Emerging Cheating Detection Systems: The Engineering of Academic Integrity? Jo Ann Oravec - 2022 - Education Policy Analysis Archives 175 (30):1-18.
    Abstract: Cheating behaviors have been construed as a continuing and somewhat vexing issue for academic institutions as they increasingly conduct educational processes online and impose metrics on instructional evaluation. Research, development, and implementation initiatives on cheating detection have gained new dimensions with the advent of artificial intelligence (AI) applications; they have also engendered special challenges in terms of their social, ethical, and cultural implications. An assortment of commercial cheating-detection systems has been injected into educational contexts with little input on the (...)
    1 citation
  14. Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing. Steven Umbrello & Seth D. Baum - 2018 - Futures 100:63-73.
    Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist, and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and space travel. (...)
    5 citations
  15. AI-Assisted Formal Buyer-Seller Marketing Theory. Angelina Inesia-Forde - 2024 - Asian Journal of Basic Science and Research 6 (2):01-40.
    Customer behavior, market dynamics, and technological advances have made it challenging for marketing theorists to provide comprehensive explanations and actionable insights. Although there are numerous substantive marketing frameworks, no formal marketing theory exists. This study aims to develop the first formal grounded theory in marketing by incorporating artificial intelligence and Forde's conceptual framework as a guiding lens. Charmaz's constructivist grounded theory tradition and Forde's conceptual framework and data analysis strategy were employed for this purpose. The data analysis strategy used with (...)
  16. The Concept of Accountability in AI Ethics and Governance. Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea (...)
    2 citations
  17. AI-Driven Deduplication for Scalable Data Management in Hybrid Cloud Infrastructure. S. Yoheswari - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):587-597.
    The exponential growth of data storage requirements has become a pressing challenge in hybrid cloud environments, necessitating efficient data deduplication methods. This research proposes a novel Smart Deduplication Framework (SDF) designed to identify and eliminate redundant data, thus optimizing storage usage and improving data retrieval speeds. The framework leverages a hybrid cloud architecture, combining the scalability of public clouds with the security of private clouds. By employing a combination of client-side hashing, metadata indexing, and machine learning-based duplicate detection, the framework (...)
  18. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
    1 citation
  19. Of Opaque Oracles: Epistemic Dependence on AI in Science Poses No Novel Problems for Social Epistemology. Jakob Ortmann - forthcoming - Synthese.
    Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample (...)
  20. Ethics in AI: Balancing Innovation and Responsibility. Mosa M. M. Megdad, Mohammed H. S. Abueleiwa, Mohammed Al Qatrawi, Jehad El-Tantaw, Fadi E. S. Harara, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Pedagogical Research (IJAPR) 8 (9):20-25.
    Abstract: As artificial intelligence (AI) technologies become more integrated across various sectors, ethical considerations in their development and application have gained critical importance. This paper delves into the complex ethical landscape of AI, addressing significant challenges such as bias, transparency, privacy, and accountability. It explores how these issues manifest in AI systems and their societal impact, while also evaluating current strategies aimed at mitigating these ethical concerns, including regulatory frameworks, ethical guidelines, and best practices in AI design. Through a (...)
  21. Powerful Qualities, Phenomenal Properties and AI. Ashley Coates - 2023 - In William A. Bauer & Anna Marmodoro (eds.), Artificial Dispositions: Investigating Ethical and Metaphysical Issues. New York: Bloomsbury. pp. 169-192.
    “Strong AI” is the view that it is possible for an artificial agent to be mentally indistinguishable from human agents. Because the behavioral dispositions of artificial agents are determined by underlying dispositional systems, Strong AI seems to entail that human behavioral dispositions are also determined by dispositional systems. It is, however, highly intuitive that non-dispositional, phenomenal properties, such as being in pain, at least partially determine certain human behavioral dispositions, like the disposition to take a painkiller. Consequently, Strong AI seems (...)
  22. Virtues for AI. Jakob Ohlhorst - manuscript
    Virtue theory is a natural approach towards the design of artificially intelligent systems, given that the design of artificial intelligence essentially aims at designing agents with excellent dispositions. This has led to a lively research programme to develop artificial virtues. However, this research programme has until now had a narrow focus on moral virtues in an Aristotelian mould. While Aristotelian moral virtue has played a foundational role for the field, it unduly constrains the possibilities of virtue theory for artificial intelligence. (...)
  23. The promise and perils of AI in medicine. Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
    6 citations
  24. Transforming Data Analysis through AI-Powered Data Science. Mathan Kumar - 2023 - Proceedings of IEEE 2 (2):1-5.
    AI-powered data science is revolutionizing the way data are analyzed and understood. It can significantly improve the quality of data analysis and boost its speed. AI-powered data science enables access to more extensive, more complex data sets, faster insights, faster problem solving, and better decision making. By using AI-powered data science techniques and tools, organizations can deliver more accurate results with shorter times to decisions. AI-powered data science also offers more accurate predictions of events and trends (...)
  25. Epistemic considerations when AI answers questions for us. Johan F. Hoorn & Juliet J.-Y. Chen - manuscript
    In this position paper, we argue that careless reliance on AI to answer our questions and to judge our output is a violation of Grice’s Maxim of Quality as well as a violation of Lemoine’s legal Maxim of Innocence, performing an (unwarranted) authority fallacy, and while lacking assessment signals, committing Type II errors that result from fallacies of the inverse. What is missing in the focus on output and results of AI-generated and AI-evaluated content is, apart from paying proper tribute, (...)
    1 citation
  26. Transforming Industries: The Role of Generative AI in Revolutionizing Banking and Healthcare. M. Selvaprasanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):580-600.
    The research evaluates generative AI’s capabilities through a multi-phase framework, addressing how data synthesis, language models, and predictive algorithms contribute to sector-specific applications. In banking, the model assesses the impact of AI-driven chatbot interactions, credit risk assessment, and personalized financial services on customer experience and bank performance. Healthcare applications are explored through image synthesis for diagnostics, predictive modeling in patient care, and drug discovery simulations. The experimental setup is rigorously tested across metrics such as response accuracy, cost-effectiveness, and data privacy (...)
  27. AI-Driven Air Quality Forecasting Using Multi-Scale Feature Extraction and Recurrent Neural Networks. P. Selvaprasanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):575-590.
    We investigate the application of Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, and a hybrid CNN-LSTM model for forecasting air pollution levels based on historical data. Our experimental setup uses real-world air quality datasets from multiple regions, containing measurements of pollutants like PM2.5, PM10, CO, NO2, and SO2, alongside meteorological data such as temperature, humidity, and wind speed. The models are trained, validated, and tested using a split dataset, and their accuracy is evaluated using performance metrics like Mean (...)
  28. Content Reliability in the Age of AI: A Comparative Study of Human vs. GPT-Generated Scholarly Articles. Rajesh Kumar Maurya & Swati R. Maurya - 2024 - Library Progress International 44 (3):1932-1943.
    The rapid advancement of Artificial Intelligence (AI) and the development of Large Language Models (LLMs) like Generative Pretrained Transformers (GPTs) have significantly influenced content creation in scholarly communication and across various fields. This paper presents a comparative analysis of the content reliability between human-generated and GPT-generated scholarly articles. Recent developments in AI suggest that GPTs have become capable of generating content that can mimic human language to a great extent. This raises questions about the quality, accuracy, and reliability (...)
  29. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
    1 citation
  30. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests. Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential for (...)
  31. Evaluating the Impact of Telemedicine on Doctors' Work-Life Harmony in Diverse Healthcare Settings. Prabaharan Manoj - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):520-530.
    Furthermore, the paper delves into the role of hospital management and policy in easing the digital transition and fostering a more harmonious work-life balance. By analyzing technological tools and frameworks in telemedicine, the research identifies areas where improvements can be made, offering recommendations for enhancing doctors' digital efficiency while promoting better work-life harmony. This study contributes to understanding how technology can be harnessed to benefit healthcare professionals, particularly in managing the dual demands of professional duties and personal well-being.
  32. The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development. Prokopis A. Christou - 2023 - The Qualitative Report 28 (9):2739-2755.
    Theory development is an important component of academic research since it can lead to the acquisition of new knowledge, the development of a field of study, and the formation of theoretical foundations to explain various phenomena. The contribution of qualitative researchers to theory development and advancement remains significant and highly valued, especially in an era of various epochal shifts and technological innovation in the form of Artificial Intelligence (AI). Even so, the academic community has not yet fully explored the dynamics (...)
  33. Turing’s imitation game: still an impossible challenge for all machines and some judges––an evaluation of the 2008 Loebner contest. [REVIEW] Luciano Floridi & Mariarosaria Taddeo - 2009 - Minds and Machines 19 (1):145-150.
    An evaluation of the 2008 Loebner contest.
    14 citations
  34. Leveraging Explainable AI and Multimodal Data for Stress Level Prediction in Mental Health Diagnostics. Destiny Agboro - 2025 - International Journal of Research and Scientific Innovation.
    The increasing prevalence of mental health issues, particularly stress, has necessitated the development of data-driven, interpretable machine learning models for early detection and intervention. This study leverages multimodal data, including activity levels, perceived stress scores (PSS), and event counts, to predict stress levels among individuals. A series of models, including Logistic Regression, Random Forest, Gradient Boosting, and Neural Networks, were evaluated for their predictive performance. Results demonstrated that ensemble models, particularly Random Forest and Gradient Boosting, performed significantly better compared to (...)
  35. Biases, Evidence and Inferences in the story of Ai. Efraim Wallach - manuscript
    This treatise covers the history, now more than 170 years long, of research and debates concerning the biblical city of Ai. This archetypical chapter in the evolution of biblical archaeology and historiography has never been presented in full. I use the historical data as a case study to explore a number of epistemological issues, such as the creation and revision of scientific knowledge, the formation and change of consensus, the Kuhnian model of paradigm shift, and several models of discrimination between hypotheses about (...)
  36. Every dog has its day: An in-depth analysis of the creative ability of visual generative AI. Maria Hedblom - 2024 - Cosmos+Taxis 12 (5-6):88-103.
    The recent remarkable success of generative AI models in creating text and images has already started altering our perspective of intelligence and the “uniqueness” of humanity in this world. Simultaneously, arguments on why AI will never exceed human intelligence are ever-present, as seen in Landgrebe and Smith (2022). To address whether machines may rule the world after all, this paper zooms in on one of the aspects of intelligence Landgrebe and Smith (2022) neglected to consider: creativity. Using Rhodes' four Ps (...)
  37. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Simone Grassini - 2023 - Frontiers in Psychology 14:1191628.
    The rapid advancement of artificial intelligence (AI) has generated an increasing demand for tools that can assess public attitudes toward AI. This study proposes the development and the validation of the AI Attitude Scale (AIAS), a concise self-report instrument designed to evaluate public perceptions of AI technology. The first version of the AIAS that the present manuscript proposes comprises five items, including one reverse-scored item, which aims to gauge individuals’ beliefs about AI’s influence on their lives, careers, and humanity overall. (...)
  38. Exploration of the creative processes in animals, robots, and AI: who holds the authorship? Jessica Lombard, Cédric Sueur, Marie Pelé, Olivier Capra & Benjamin Beltzung - 2024 - Humanities and Social Sciences Communications 11 (1).
    Picture a simple scenario: a worm, in its modest way, traces a trail of paint as it moves across a sheet of paper. Now shift your imagination to a more complex scene, where a chimpanzee paints on another sheet of paper. A simple question arises: Do you perceive an identical creative process in these two animals? Can both of these animals be designated as authors of their creation? If only one, which one? This paper delves into the complexities of authorship, (...)
  39. Is a subpersonal epistemology possible? Re-evaluating cognitive integration for extended cognition. Hadeel Naeem - 2021 - Dissertation, University of Edinburgh
    Virtue reliabilism provides an account of epistemic integration that explains how a reliable-belief forming process can become a knowledge-conducive ability of one’s cognitive character. The univocal view suggests that this epistemic integration can also explain how an external process can extend one’s cognition into the environment. Andy Clark finds a problem with the univocal view. He claims that cognitive extension is a wholly subpersonal affair, whereas the epistemic integration that virtue reliabilism puts forward requires personal-level agential involvement. To adjust the (...)
  40. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. Idealizations (...)
  41. Algorithmic Fairness Criteria as Evidence.Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial, as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second, constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion as (...)
    Bookmark   1 citation  
  42. Editorial: Risks of artificial intelligence.Vincent C. Müller - 2015 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a serious risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
    Bookmark   1 citation  
  43. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    Bookmark   1 citation  
  44. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
    Bookmark   4 citations  
  45. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
    Bookmark   14 citations  
  46. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s responses to a series of questions about the concept of ‘understanding’. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses that response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  47. Publishing Robots.Nick Hadsell, Rich Eva & Kyle Huitt - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    If AI can write an excellent philosophy paper, we argue that philosophy journals should strongly consider publishing that paper. After all, AI stands to make significant contributions to ongoing projects in some subfields, and it benefits the world of philosophy for those contributions to be published in journals, the primary purpose of which is to disseminate significant contributions to philosophy. We also propose the Sponsorship Model of AI journal refereeing to mitigate any costs associated with our view. This model requires (...)
  48. Algorithms Advise, Humans Decide: the Evidential Role of the Patient Preference Predictor.Nicholas Makins - forthcoming - Journal of Medical Ethics.
    An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP, namely that between algorithmic prediction and decision-making, and argue that much of the recent philosophical disagreement stems from this (...)
  49. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
    Bookmark   1 citation  
  50. An Essay on Artificial Dispositions and Dispositional Compatibilism.Atilla Akalın - 2024 - Felsefe Dünyasi 79:165-187.
    The rapid pace of technological advancement offers an essential field of research for a deeper understanding of humanity’s relationship with the artifacts of its own design. These human-designed artifacts can have various mental and physical effects on their users. Human interaction with the artifact is not passive; on the contrary, it exhibits a potential that reveals the inner dispositions of human beings and makes them open to new creations. In this article, we will examine the impact of technology on (...)
1 — 50 / 973