  • The application of chatbot on Vietnamese migrant workers’ right protection in the implementation of new generation free trade agreements (FTAs).Quoc Nguyen Phan, Chin-Chin Tseng, Thu Thi Hoai Le & Thi Bich Ngoc Nguyen - 2023 - AI and Society 38 (4):1771-1783.
    The accession and implementation of new generation free trade agreements bring numerous opportunities as well as challenges to Viet Nam, regarding trade, labor and investment. The increasing number of workers abroad puts pressure on the Vietnamese government to support them in new working cultures and environments. The application of chatbot, which has been known to help certain vulnerable groups such as patients, women and migrants, could be one of the tools to support Vietnamese migrant workers by providing immediate information, network (...)
  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge.Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, while arguing that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  • Network of AI and trustworthy: response to Simion and Kelp’s account of trustworthy AI.Fei Song - 2023 - Asian Journal of Philosophy 2 (2):1-8.
    Simion and Kelp develop the obligation-based account of trustworthiness as a compelling general account of trustworthiness and then apply this account to various instances of AI. By doing so, they explain in what way any AI can be considered trustworthy, as per the general account. Simion and Kelp identify as too anthropocentric any account of trustworthiness that relies on assumptions of agency, such as the assumption that being trustworthy must involve goodwill. I argue that goodwill is a necessary condition for (...)
  • KI:Text: Diskurse über KI-Textgeneratoren.Gerhard Schreiber & Lukas Ohly (eds.) - 2024 - De Gruyter.
    If artificial intelligence (AI) can generate texts, what does that tell us about what a text is? How do texts written by humans differ from texts generated by means of AI? What expectations, fears, and hopes do the sciences harbor when AI-generated texts, whose authorship and originality can no longer be clearly defined, are received and gain recognition in their discourses? How does working with sources change, and what consequences follow for the criteria of scholarly text work and for the understanding of scholarship as a whole? What opportunities, limits, and (...)
  • Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration.Lee Hadlington, Maria Karanika-Murray, Jane Slater, Jens Binder, Sarah Gardner & Sarah Knight - forthcoming - AI and Society:1-14.
    There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially in view of the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This study is (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics.Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem.Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • La libertad como el punto de encuentro para la construcción de la confianza en las relaciones humanas.Carlos Vargas-González & Iván-Darío Toro-Jaramillo - 2021 - Isegoría 65:09-09.
    This paper proposes freedom as the condition of possibility for the construction of trust in human relationships. The methodology used is a review of the scientific literature of the most recent moral and political philosophy. As a result of the dialogue between different positions, it is discovered that freedom, despite being present in the act of trust, is forgotten in the discussion around trust, a forgetfulness that has as its main causes the assumption that trust is natural and the confusion (...)
  • Organisational responses to the ethical issues of artificial intelligence.Bernd Carsten Stahl, Josephina Antoniou, Mark Ryan, Kevin Macnish & Tilimbe Jiya - 2022 - AI and Society 37 (1):23-37.
    The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current (...)
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence.Georg Starke & Marcello Ienca - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • Trustworthy artificial intelligence.Mona Simion & Christoph Kelp - 2023 - Asian Journal of Philosophy 2 (1):1-12.
    This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account serves to advance the literature in a couple of important ways. First, it serves to provide a rationale for why a range of properties that are widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such as safety, justice, (...)
  • Can robots be trustworthy?Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concept of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level (...)
  • Creating God.Andie Rothenhäusler - 2024 - In Gerhard Schreiber & Lukas Ohly (eds.), KI:Text: Diskurse über KI-Textgeneratoren. De Gruyter. pp. 183-198.
  • Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D.Christian Reuter, Thea Riebe & Stefka Schmid - 2022 - Science and Engineering Ethics 28 (2):1-23.
    Artificial Intelligence (AI) seems to be impacting all industry sectors, while becoming a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential has drawn attention from security and ethics scholars. With the publication of the ethical guideline Trustworthy AI by the European Union (EU), normative questions on the application of AI have been further evaluated. In order to draw conclusions on Trustworthy AI as a point of reference for responsible research (...)
  • Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics.Charles Rathkopf & Bert Heinrichs - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-13.
    Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, by framing (...)
  • Trustworthy AI: a plea for modest anthropocentrism.Rune Nyrup - 2023 - Asian Journal of Philosophy 2 (2):1-10.
    Simion and Kelp defend a non-anthropocentric account of trustworthy AI, based on the idea that the obligations of AI systems should be sourced in purely functional norms. In this commentary, I highlight some pressing counterexamples to their account, involving AI systems that reliably fulfil their functions but are untrustworthy because those functions are antagonistic to the interests of the trustor. Instead, I outline an alternative account, based on the idea that AI systems should not be considered primarily as tools but (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account.Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI.Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • 'You have to put a lot of trust in me': autonomy, trust, and trustworthiness in the context of mobile apps for mental health.Regina Müller, Nadia Primc & Eva Kuhn - 2023 - Medicine, Health Care and Philosophy 26 (3):313-324.
    Trust and trustworthiness are essential for good healthcare, especially in mental healthcare. New technologies, such as mobile health apps, can affect trust relationships. In mental health, some apps need the trust of their users for therapeutic efficacy and explicitly ask for it, for example, through an avatar. Suppose an artificial character in an app delivers healthcare. In that case, the following questions arise: Whom does the user direct their trust to? Whether and when can an avatar be considered trustworthy? Our (...)
  • Trust criteria for artificial intelligence in health: normative and epistemic considerations.Kristin Kostick-Quenet, Benjamin H. Lang, Jared Smith, Meghan Hurley & Jennifer Blumenthal-Barby - forthcoming - Journal of Medical Ethics.
    Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool’s computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust (...)
  • Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care.Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth parents (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective.Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • The Ethics of AI Ethics. A Constructive Critique.Jan-Christoph Heilinger - 2022 - Philosophy and Technology 35 (3):1-20.
    The paper presents an ethical analysis and constructive critique of the current practice of AI ethics. It identifies conceptual, substantive, and procedural challenges, and it outlines strategies to address them. The strategies include countering the hype and understanding AI as ubiquitous infrastructure, including neglected issues of ethics and justice, such as structural background injustices, into the scope of AI ethics, and making the procedures and fora of AI ethics more inclusive and better informed with regard to philosophical ethics. These measures (...)
  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors.Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  • Tech Ethics Through Trust Auditing.Matthew Grellette - 2022 - Science and Engineering Ethics 28 (3):1-15.
    The public’s trust in the technology sector is waning and, in response, technology companies and state governments have started to champion “tech ethics”. That is, they have pledged to design, develop, distribute, and employ new technologies in an ethical manner. In this paper, I observe that tech ethics is already subject to a widespread pathology in that technology companies, the primary executors of tech ethics, are incentivized to pursue it half-heartedly or even disingenuously. Next, I highlight two emerging strategies which (...)
  • Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs.Ori Freiman - forthcoming - Episteme:1-17.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis focuses (...)
  • Modeling AI Trust for 2050: perspectives from media and info-communication experts.Katalin Feher, Lilla Vicsek & Mark Deuze - forthcoming - AI and Society:1-14.
    The study explores the future of AI-driven media and info-communication as envisioned by experts from all world regions, defining relevant terminology and expectations for 2050. Participants engaged in a 4-week series of surveys, questioning their definitions and projections about AI for the field of media and communication. Their expectations predict universal access to democratically available, automated, personalized and unbiased information determined by trusted narratives, recolonization of information technology and the demystification of the media process. These experts, as technology ambassadors, advocate (...)
  • On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence.Michele Farisco, Kathinka Evers & Arleen Salles - 2022 - Neuroethics 15 (1):1-12.
    Contemporary ethical analysis of Artificial Intelligence is growing rapidly. One of its most recognizable outcomes is the publication of a number of ethics guidelines that, intended to guide governmental policy, address issues raised by AI design, development, and implementation and generally present a set of recommendations. Here we propose two things: first, regarding content, since some of the applied issues raised by AI are related to fundamental questions about topics like intelligence, consciousness, and the ontological and ethical status of humans, (...)
  • Incorporation, Transparency and Cognitive Extension: Why the Distinction Between Embedded and Extended Might Be More Important to Ethics Than to Metaphysics.Mirko Farina & Andrea Lavazza - 2022 - Philosophy and Technology 35 (1):1-21.
    We begin by introducing our readers to the Extended Mind Thesis and briefly discuss a series of arguments in its favour. We continue by showing how such a theory can be resisted and go on to demonstrate that a more conservative account of cognition can be developed. We acknowledge a stalemate between these two different accounts of cognition and notice a couple of issues that we argue have prevented further progress in the field. To overcome the stalemate, we propose to (...)
  • AI employment decision-making: integrating the equal opportunity merit principle and explainable AI.Gary K. Y. Chan - forthcoming - AI and Society:1-12.
    Artificial intelligence tools used in employment decision-making cut across the multiple stages of job advertisements, shortlisting, interviews and hiring, and actual and potential bias can arise in each of these stages. One major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by placing greater emphasis on enhancing the understanding of (...)
  • Machine and human agents in moral dilemmas: automation–autonomic and EEG effect.Federico Cassioli, Laura Angioletti & Michela Balconi - forthcoming - AI and Society:1-13.
    Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (n = 34) took part in a moral multi-trial dilemma-based task where the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures, in terms of morality, consciousness, responsibility, intentionality, and emotional impact evaluation were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, heart (...)
  • From the Ground Truth Up: Doing AI Ethics from Practice to Principles.James Brusseau - 2022 - AI and Society 37 (1):1-7.
    Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should AI as trustworthy be sought through explainability, or accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing protections for (...)
  • Represent me: please! Towards an ethics of digital twins in medicine.Matthias Braun - 2021 - Journal of Medical Ethics 47 (6):394-400.
    Simulations are used in very different contexts and for very different purposes. An emerging development is the possibility of using simulations to obtain a more or less representative reproduction of organs or even entire persons. Such simulations are framed and discussed using the term ‘digital twin’. This paper unpacks and scrutinises the current use of such digital twins in medicine and the ideas embedded in this practice. First, the paper maps the different types of digital twins. A special focus is (...)
  • A Leap of Faith: Is There a Formula for “Trustworthy” AI?Matthias Braun, Hannah Bleher & Patrik Hummel - 2021 - Hastings Center Report 51 (3):17-22.
    Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High‐Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust in AI (...)
  • Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle.Simone Borsci, Ville V. Lehtola, Francesco Nex, Michael Ying Yang, Ellen-Wien Augustijn, Leila Bagheriye, Christoph Brune, Ourania Kounadi, Jamy Li, Joao Moreira, Joanne Van Der Nagel, Bernard Veldkamp, Duc V. Le, Mingshu Wang, Fons Wijnhoven, Jelmer M. Wolterink & Raul Zurita-Milla - forthcoming - AI and Society:1-20.
    The European Union Commission’s whitepaper on Artificial Intelligence proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the masterplan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: the lack of a coherent EU vision to drive future (...)
  • AI as an Epistemic Technology.Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (_including_ other (...)
  • (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations.Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI systems? (...)
  • The Intersection of Bernard Lonergan’s Critical Realism, the Common Good, and Artificial Intelligence in Modern Religious Practices.Steven Umbrello - 2023 - Religions 14 (12):1536.
    Artificial intelligence (AI) profoundly influences a number of societal structures today, including religious dynamics. Using Bernard Lonergan’s critical realism as a lens, this article investigates the intersections of AI and religious traditions in their shared pursuit of the common good. Beginning with Lonergan’s principle that humans construct their understanding through cognitive processes, we examine how AI-mediated realities align with or challenge traditional religious tenets. By delving into specific cases, we spotlight AI’s role in reshaping religious symbols, rituals, and even creating (...)
  • Can Artificial Intelligence Make Art?Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interaction.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less (...)
  • The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development.Prokopis A. Christou - 2023 - The Qualitative Report 28 (9):2739-2755.
    Theory development is an important component of academic research since it can lead to the acquisition of new knowledge, the development of a field of study, and the formation of theoretical foundations to explain various phenomena. The contribution of qualitative researchers to theory development and advancement remains significant and highly valued, especially in an era of various epochal shifts and technological innovation in the form of Artificial Intelligence (AI). Even so, the academic community has not yet fully explored the dynamics (...)
  • Relativistic Conceptions of Trustworthiness: Implications for the Trustworthy Status of National Identification Systems.Paul Smart, Wendy Hall & Michael Boniface - 2022 - Data and Policy 4 (e21):1-16.
    Trustworthiness is typically regarded as a desirable feature of national identification systems (NISs); but the variegated nature of the trustor communities associated with such systems makes it difficult to see how a single system could be equally trustworthy to all actual and potential trustors. This worry is accentuated by common theoretical accounts of trustworthiness. According to such accounts, trustworthiness is relativized to particular individuals and particular areas of activity, such that one can be trustworthy with regard to some individuals in (...)