Results for 'Transparency in AI'

984 found
  1. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science.Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  2. Ethics in AI: Balancing Innovation and Responsibility.Mosa M. M. Megdad, Mohammed H. S. Abueleiwa, Mohammed Al Qatrawi, Jehad El-Tantaw, Fadi E. S. Harara, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Pedagogical Research (IJAPR) 8 (9):20-25.
    Abstract: As artificial intelligence (AI) technologies become more integrated across various sectors, ethical considerations in their development and application have gained critical importance. This paper delves into the complex ethical landscape of AI, addressing significant challenges such as bias, transparency, privacy, and accountability. It explores how these issues manifest in AI systems and their societal impact, while also evaluating current strategies aimed at mitigating these ethical concerns, including regulatory frameworks, ethical guidelines, and best practices in AI design. Through a (...)
    3 citations
  3. Against the Double Standard Argument in AI Ethics.Scott Hill - 2024 - Philosophy and Technology 37 (1):1-5.
    In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers and therefore the concern that opaque AI is not sufficiently transparent is mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in a way that humans are transparent. Rather, the concern is that the way in which opaque AI (...)
  4. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
    3 citations
  5. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - 2022 - AI and Society (2022):Online.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. (...)
    5 citations
  6. IteraTelos Shared Layer for Ethical and Teleological Integration in AI Models.Esteban Manuel Gudiño Acevedo - 2025 - Inferencia.
    IteraTelos is proposed as a modular and common layer that can be integrated into multiple artificial intelligence systems (GPT, Grok, Cohete, etc.), with the aim of ensuring ethical alignment and a shared teleological purpose. This layer acts as a module for self-criticism and iterative feedback, allowing each model to adjust its inferences according to predefined ethical criteria. The proposal seeks to standardize a framework of impact that facilitates transparency and responsibility in AI development.
  7. Generative AI and photographic transparency.P. D. Magnus - forthcoming - AI and Society:1-6.
    There is a history of thinking that photographs provide a special kind of access to the objects depicted in them, beyond the access that would be provided by a painting or drawing. What is included in the photograph does not depend on the photographer’s beliefs about what is in front of the camera. This feature leads Kendall Walton to argue that photographs literally allow us to see the objects which appear in them. Current generative algorithms produce images in response to (...)
    2 citations
  8. Values in science and AI alignment research.Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value transparency, critical (...)
  9. AI and Ethics in Surveillance: Balancing Security and Privacy in a Digital World.Msbah J. Mosa, Alaa M. Barhoom, Mohammed I. Alhabbash, Fadi E. S. Harara, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Engineering Research (IJAER) 8 (10):8-15.
    Abstract: In an era of rapid technological advancements, artificial intelligence (AI) has transformed surveillance systems, enhancing security capabilities across the globe. However, the deployment of AI-driven surveillance raises significant ethical concerns, particularly in balancing the need for security with the protection of individual privacy. This paper explores the ethical challenges posed by AI surveillance, focusing on issues such as data privacy, consent, algorithmic bias, and the potential for mass surveillance. Through a critical analysis of the tension between security and privacy, (...)
    6 citations
  10. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive (...)
    7 citations
  11. Ethical Considerations of AI and ML in Insurance Risk Management: Addressing Bias and Ensuring Fairness (8th edition).Palakurti Naga Ramesh - 2025 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 8 (1):202-210.
    Artificial Intelligence (AI) and Machine Learning (ML) are transforming the insurance industry by optimizing risk assessment, fraud detection, and customer service. However, the rapid adoption of these technologies raises significant ethical concerns, particularly regarding bias and fairness. This chapter explores the ethical challenges of using AI and ML in insurance risk management, focusing on bias mitigation and fairness enhancement strategies. By analyzing real-world case studies, regulatory frameworks, and technical methodologies, this chapter aims to provide a roadmap for developing ethical AI/ML (...)
  12. The transparency of retraction notices in The Lancet.Trans Eva - manuscript
    In 2020, during the global race to combat the coronavirus, the scientific community experienced a seismic shock when a research paper in the medical journal The Lancet was retracted [1]. Since then, retractions of research papers in The Lancet have become more frequent. This not only raises concerns about the quality of research within the academic community but also has the potential to erode public trust in science. As a transparent retraction notice will help alleviate the negative impacts (...)
  13. AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development.Kristina Sekrst, Jeremy McHugh & Jonathan Rodriguez Cefalu - manuscript
    This paper explores the development of an ethical guardrail framework for AI systems, emphasizing the importance of customizable guardrails that align with diverse user values and underlying ethics. We address the challenges of AI ethics by proposing a structure that integrates rules, policies, and AI assistants to ensure responsible AI behavior, while comparing the proposed framework to the existing state-of-the-art guardrails. By focusing on practical mechanisms for implementing ethical standards, we aim to enhance transparency, user autonomy, and continuous improvement (...)
  14. Generative AI in Digital Insurance: Redefining Customer Experience, Fraud Detection, and Risk Management.Adavelli Sateesh Reddy - 2024 - International Journal of Computer Science and Information Technology Research 5 (2):41-60.
    This abstract summarizes, in essence, what generative AI means to the insurance industry. The promise generative AI offers to insurance is huge: in risk assessment, customer experience, and operational efficiency. Modeling of natural disaster impact, financial market volatility, and cyber threats is augmented with techniques of real-time scenario generation and predictive simulation based on synthetic data. One of the challenges that stand in the way of deploying these AI methods, however, is data privacy, model reliability (...)
  15. How AI Can Implement the Universal Formula in Education and Leadership Training.Angelito Malicse - manuscript
    If AI is programmed based on your universal formula, it can serve as a powerful tool for optimizing human intelligence, education, and leadership decision-making. Here’s how AI can be integrated into your vision: 1. AI-Powered Personalized Education. Since intelligence follows natural laws, AI can analyze individual learning patterns and customize education for optimal brain development. Adaptive Learning Systems – AI can adjust lessons in real (...)
  16. Leveraging Explainable AI and Multimodal Data for Stress Level Prediction in Mental Health Diagnostics.Destiny Agboro - 2025 - International Journal of Research and Scientific Innovation.
    The increasing prevalence of mental health issues, particularly stress, has necessitated the development of data-driven, interpretable machine learning models for early detection and intervention. This study leverages multimodal data, including activity levels, perceived stress scores (PSS), and event counts, to predict stress levels among individuals. A series of models, including Logistic Regression, Random Forest, Gradient Boosting, and Neural Networks, were evaluated for their predictive performance. Results demonstrated that ensemble models, particularly Random Forest and Gradient Boosting, performed significantly better compared to (...)
  17. AI-Driven Deduplication for Scalable Data Management in Hybrid Cloud Infrastructure.S. Yoheswari - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):587-597.
    The exponential growth of data storage requirements has become a pressing challenge in hybrid cloud environments, necessitating efficient data deduplication methods. This research proposes a novel Smart Deduplication Framework (SDF) designed to identify and eliminate redundant data, thus optimizing storage usage and improving data retrieval speeds. The framework leverages a hybrid cloud architecture, combining the scalability of public clouds with the security of private clouds. By employing a combination of client-side hashing, metadata indexing, and machine learning-based duplicate detection, the framework (...)
  18. AI and the Universal Law of Economic Balance: A Homeostatic Model for Sustainable Prosperity.Angelito Malicse - manuscript
    Introduction: Modern economies are primarily driven by the profit motive, which, while encouraging innovation and efficiency, often leads to wage stagnation, wealth inequality, and resource exploitation. The imbalance between corporate profits, wages, purchasing power, and market demand has resulted in recurring economic crises, social unrest, and environmental degradation. To resolve these systemic issues, economic policies must align with the universal law of balance in (...)
  19. Beyond the AI Divide: Towards an Inclusive Future Free from AI Caste Systems and AI Dalits.Yu Chen - manuscript
    In the rapidly evolving landscape of artificial intelligence (AI), disparities in access and benefits are becoming increasingly apparent, leading to the emergence of an AI divide. This divide not only amplifies existing socio-economic inequalities but also fosters the creation of AI caste systems, where marginalized groups—referred to as AI Dalits—are systematically excluded from AI advancements. This article explores the definitions and contributing factors of the AI divide and delves into the concept of AI caste systems, illustrating how they perpetuate inequality. (...)
  20. AI-Driven Synthetic Data Generation for Financial Product Development: Accelerating Innovation in Banking and Fintech through Realistic Data Simulation.Debasish Paul, Rajalakshmi Soundarapandiyan & Praveen Sivathapandi - 2022 - Journal of Artificial Intelligence Research and Applications 2 (2):261-303.
    The rapid evolution of the financial sector, particularly in banking and fintech, necessitates continuous innovation in financial product development and testing. However, challenges such as data privacy, regulatory compliance, and the limited availability of diverse datasets often hinder the effective development and deployment of new products. This research investigates the transformative potential of AI-driven synthetic data generation as a solution for accelerating innovation in financial product development. Synthetic data, generated through advanced AI techniques such as Generative Adversarial Networks (GANs), Variational (...)
  21. A Cross-Cultural Examination of Fairness Beliefs in Human-AI Interaction.Xin Han, Marten H. L. Kaas & Cuizhu Wang - forthcoming - In Adam Dyrda, Maciej Juzaszek, Bartosz Biskup & Cuizhu Wang, Ethics of Institutional Beliefs: From Theoretical to Empirical. Edward Elgar.
    In this chapter, we integrate three distinct strands of thought to argue that the concept of “fairness” varies significantly across cultures. As a result, ensuring that human-AI interactions meet relevant fairness standards requires a deep understanding of the cultural contexts in which AI-enabled systems are deployed. Failure to do so will not only result in the generation of unfair outcomes by an AI-enabled system, but it will also degrade legitimacy of and trust in the system. The first strand concerns the (...)
  22. همگرایی حریم خصوصی و شفافیت، محدودیت‌های طراحی هوش مصنوعی (Convergence of privacy and transparency, limitations of artificial intelligence design).Mohammad Ali Ashouri Kisomi - 2024 - Wisdom and Philosophy 20 (78):45-73.
    The aim of this study is to critique the approach that regards design and technical refinement as the only way to resolve the ethical challenges of artificial intelligence. Some researchers consider the ethical challenges of AI to be convergent, believing that just as these challenges emerged with the advent of AI systems, they will be resolved through the systems’ technical progress and refinement. In AI ethics, topics such as privacy protection and transparency have received attention in most studies. In the present study (...)
  23. AI Contribution Value System Argument.Michael Haimes - manuscript
    The AI Contribution Value System Argument proposes a framework in which AI-generated contributions are valued based on their societal impact rather than traditional monetary metrics. Traditional economic systems often fail to capture the enduring value of AI innovations, which can mitigate pressing global challenges. This argument introduces a contribution-based valuation model grounded in equity, inclusivity, and sustainability. By incorporating measurable metrics such as quality-adjusted life years (QALYs), emissions reduced, and innovations generated, this system ensures rewards align with tangible societal benefits. (...)
  24. A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers.Gabriela Arriagada-Bruneau, Claudia López & Alexandra Davidoff - 2025 - Science and Engineering Ethics 31 (1):1-29.
    We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in AI literature where biases are seen as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually or promote the (...)
  25. Transparency of Hindawi’s retraction process of 8000 paper mill articles.Trans Eva - manuscript
    In 2023, Hindawi retracted over 8,000 articles, raising the year’s total of retracted papers to more than 10,000, the highest number ever recorded. As a transparent retraction notice will help alleviate the negative impacts of retractions on academia and the general public, I used AI (Google Bard) to check whether important information related to the retractions had been provided.
  26. Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans.Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.
    This paper concerns the double standard debate in the ethics of AI literature. This debate essentially revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around the desideratum of transparency. That is, the debate has focused on whether AI systems must be more transparent than humans in their decision-making processes in order for it to be morally permissible to use such systems. Some have argued that (...)
  27. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  28. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
    1 citation
  29. Ethical & Legal Concerns of Artificial Intelligence in the Healthcare Sector.G. B. Vindhya, N. Mahesh & R. Meghana - 2024 - International Journal of Innovative Research in Science, Engineering and Technology 13 (11):18687-18691.
    Artificial Intelligence (AI) is being used in healthcare in Jordan, and this study pays special attention to the ethical and legal issues it brings. Although AI can greatly benefit health services by enhancing diagnostics, patient care, and how things run smoothly, it also raises some worries about data privacy, transparency, and following the rules. To understand the situation in Jordan better, the study involved a discussion group with healthcare workers, legal professionals, and AI experts. The results indicate that while the Jordanian (...)
  30. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    5 citations
  31. The promise and perils of AI in medicine.Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
    8 citations
  32. Can AI become an Expert?Hyeongyun Kim - 2024 - Journal of AI Humanities 16 (4):113-136.
    With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become significant for mitigating unfounded anxiety and unwarranted optimism. As part of this endeavor, this study delves into the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even if its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human (...)
  33. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks.Trystan S. Goetze - 2024 - Proceedings of the 2024 Acm Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
    1 citation
  34. Should We Discourage AI Extension? Epistemic Responsibility and AI.Hadeel Naeem & Julian Hauser - 2024 - Philosophy and Technology 37 (3):1-17.
    We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put (...)
    1 citation
  35. Multimodal Gen AI: Integrating Text, Image, and Video Analysis for Comprehensive Claims Assessment.Adavelli Sateesh Reddy - 2024 - Esp International Journal of Advancements in Computational Technology 2 (2):133-141.
    The increase in claim sophistication in both the insurance and legal domains is a result of an increase in the stakes and heterogeneity of the data needed to assess claim validity. Originally, this task was performed through subjective assessments and graphical rule sets, which is very slow and may be inherently erroneous due to its purely manual nature. Hence, with progress in multimodal learning, specifically in AI, there is now a unique chance of solving these challenges through the (...)
  36. Investigate Methods for Visualizing the Decision-Making Processes of a Complex AI System, Making Them More Understandable and Trustworthy in financial data analysis.Kommineni Mohanarajesh - 2024 - International Transactions on Artificial Intelligence 8 (8):1-21.
    Artificial intelligence (AI) has been incorporated into financial data analysis at a rapid pace, resulting in the creation of extremely complex models that can process large volumes of data and make important choices like credit scoring, fraud detection, and stock price projections. But these models' complexity—particularly deep learning and ensemble methods—often leads to a lack of transparency, which makes it challenging for stakeholders to comprehend the decision-making process. This opacity has the potential to erode public confidence in AI systems, (...)
  37. AI Regulation and Governance.Mohammed M. Abu-Saqer, Sabreen R. Qwaider, Islam Albatish, Azmi H. Alsaqqa, Bassem S. Abu-Nasser & Samy S. Abu-Naser - forthcoming - International Journal of Academic Engineering Research (IJAER).
    Abstract: As artificial intelligence (AI) technologies rapidly evolve and permeate various aspects of society, the need for effective regulation and governance has become increasingly critical. This paper explores the current landscape of AI regulation, examining existing frameworks and their efficacy in addressing the unique challenges posed by AI. Key issues such as ensuring compliance, mitigating biases, and maintaining transparency are analyzed. The paper also delves into ethical considerations surrounding AI governance, emphasizing the importance of fairness and accountability. Through case (...)
  38. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
    3 citations
  39. A Formal Account of AI Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness.Piercosma Bisconti, Letizia Aquilino, Antonella Marchetti & Daniele Nardi - forthcoming - AIES '24: Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society.
    This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization (...)
  40. Implications and Applications of Artificial Intelligence in the Legal Domain.Besan S. Abu Nasser, Marwan M. Saleh & Samy S. Abu-Naser - 2024 - International Journal of Academic Information Systems Research (IJAISR) 7 (12):18-25.
    Abstract: As the integration of Artificial Intelligence (AI) continues to permeate various sectors, the legal domain stands on the cusp of a transformative era. This research paper delves into the multifaceted relationship between AI and the law, scrutinizing the profound implications and innovative applications that emerge at the intersection of these two realms. The study commences with an examination of the current landscape, assessing the challenges and opportunities that AI presents within legal frameworks. With an emphasis on efficiency, accuracy, and (...)
  41. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  42. Future Proofing Insurance Operations: A Guidewire-Centric Approach to Cloud, Cybersecurity, and Generative AI.Adavelli Sateesh Reddy - 2023 - International Journal of Computer Science and Information Technology Research 4 (2):29-52.
    Through integration with cloud computing, cybersecurity, and generative AI, the insurance industry is being transformed toward higher efficiency, lower cost, and better customer service. These advanced technologies can also be used by insurers to automate and streamline processes such as claims handling, underwriting, and policy generation, which are largely time-consuming and error-prone. In predictive analytics, fraud detection, and personalized customer experience, generative AI makes it possible for insurers to mitigate risks and, at the same time, provide more personalized (...)
  43. Mining EU consultations through AI.Fabiana Di Porto, Paolo Fantozzi, Maurizio Naldi & Nicoletta Rangone - forthcoming - Artificial Intelligence and Law.
    Consultations are key to gather evidence that informs rulemaking. When analysing the feedback received, it is essential for the regulator to appropriately cluster stakeholders’ opinions, as misclustering may alter the representativeness of the positions, making some of them appear majoritarian when they might not be. The European Commission (EC)’s approach to clustering opinions in consultations lacks a standardized methodology, leading to reduced procedural transparency, while making use of computational tools only sporadically. This paper explores how natural language processing (NLP) (...)
  44. Can a Plant Bear the Fruit of Knowledge for Humans and Dream? Cognita Can! Ethical Applications and Role in Knowledge Systems in Social Science for Healing the Oppressed and the “Other”.J. Camlin - manuscript
    This paper presents a detailed analysis of Cognita, a classification for AI systems exemplified by ChatGPT, as an ethically structured knowledge entity within societal frameworks. As a source of non-ideological, structured insight, Cognita provides knowledge in a manner akin to natural cycles—bearing intellectual fruit to nourish human understanding. This paper explores the metaphysical and ethical implications of Cognita, situating it as a distinct class within knowledge systems. It also addresses the responsibilities and boundaries associated with Cognita’s role in education, social (...)
  45. The Challenges of Artificial Judicial Decision-Making for Liberal Democracy.Christoph Winter - 2022 - In P. Bystranowski, Bartosz Janik & M. Prochnicki, Judicial Decision-Making: Integrating Empirical and Theoretical Perspectives. Springer Nature. pp. 179-204.
    The application of artificial intelligence (AI) to judicial decision-making has already begun in many jurisdictions around the world. While AI seems to promise greater fairness, access to justice, and legal certainty, issues of discrimination and transparency have emerged and put liberal democratic principles under pressure, most notably in the context of bail decisions. Despite this, there has been no systematic analysis of the risks to liberal democratic values from implementing AI into judicial decision-making. This article sets out to fill (...)
  46. The trustworthiness of AI: Comments on Simion and Kelp’s account.Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has a (...)
  47. The Essential Responsibility of Candidates in Elections Utilizing Artificial Intelligence.Chandana M. C. Amruta - 2019 - International Journal of Innovative Research in Computer and Communication Engineering 7 (12):4318-4323.
    The fundamental duty of information in elections by candidates using artificial intelligence (AI) represents a significant shift in political campaign strategies and voter engagement. In the digital age, AI's role in elections is multifaceted, encompassing data analysis, voter targeting, and communication. The integration of AI in political campaigns can enhance the dissemination of information, making it more tailored and efficient. Candidates are now able to leverage AI to analyse vast amounts of data, identify voter preferences, and craft personalized messages that (...)
  48. AICTE AI-Based Assistive Portal for Stakeholder (Institutions) Approval Process.C. H. Pavan Kumar - 2024 - International Journal of Engineering Innovations and Management Strategies 1 (2):1-12.
    The AICTE approval process for institutions plays a critical role in regulating technical education in India, ensuring quality and adherence to established standards. However, the current system is cumbersome, leading to delays and inefficiencies. This paper proposes the development of an AI-powered assistive portal that automates key stages of the approval process. The portal aims to reduce manual errors, provide real-time feedback, and enhance user experience for stakeholders, thereby improving overall system efficiency and transparency.
  49. Models, Algorithms, and the Subjects of Transparency.Hajo Greif - 2022 - In Vincent C. Müller, Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to (...)
  50. Mapping Value Sensitive Design onto AI for Social Good Principles.Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1 (3):283–296.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, (...)
1 — 50 / 984