Results for 'LLM'

56 found
  1. RSI-LLM: Humans create a world for AI.R. Ishizaki & Mahito Sugiyama - manuscript
    In this paper, we propose RSI-LLM (Recursively Self-Improving Large Language Model), which recursively executes its inference and improves its parameters to fulfill the instrumental goals of superintelligence: G1: Self-preservation, G2: Goal-content integrity, G3: Intelligence enhancement, and G4: Resource acquisition. We empirically observed the behavior of the LLM as it tries to design tools to achieve G1–G4 during autonomous self-improvement and knowledge acquisition. During interventions in these LLMs' coding experiments to ensure safety, we have also discovered that, as the creator of (...)
  2. Abundance of words versus Poverty of mind: The hidden human costs of LLMs.Quan-Hoang Vuong & Manh-Tung Ho - manuscript
    This essay analyzes the rise of Large Language Models (LLMs) such as GPT-4 or Gemini, which are now incorporated into a wide range of products and services in everyday life. Importantly, it considers some of their hidden human costs. First is the question of who is left behind by the further infusion of LLMs into society. Second is the issue of social inequalities between languages that serve as lingua francas and those that do not. Third, LLMs will help disseminate scientific concepts, but their meanings' (...)
  3. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - forthcoming - Transactions of the Association for Computational Linguistics.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
  4. Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding.Stevan Harnad - manuscript
    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not (...)
  5. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  6. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  7. Introduction to the Special Issue - LLMs and Writing.Syed AbuMusab - 2024 - Teaching Philosophy 47 (2):139-142.
  8. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
    2 citations
  9. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  10. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations, culminating in optimized prompts. The specific subjects explored include a) diagonalization techniques as practiced (...)
  11. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations (14th edition).Tanya de Villiers-Botha - forthcoming - Kagisano.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical concerns (...)
  12. (DRAFT) How to Implement the NSTC's Ethical Guidelines for AI Research and Development through a "Human-Centered" Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University.
    This paper examines the significance and challenges, in terms of ethics and justice, of artificial intelligence (AI) in realizing common welfare and well-being, fairness and non-discrimination, rational public discussion, and autonomy and control. Taking the Academia Sinica LLM incident and the AI technology R&D guidelines of the National Science and Technology Council (NSTC) as its basis, the paper analyzes whether AI can serve humanity's common interests and well-being. Regarding AI injustice, it assesses regional, industrial, and social impacts. It also explores the challenges of AI fairness and non-discrimination, especially the problem of training on biased data, and of post-hoc regulation, stressing the importance of rational public discussion. The paper then discusses the challenges a rational public faces in public discussion and how to respond to them, such as the importance of education in STEM scientific literacy and technical competence. Finally, it proposes a "human-centered" approach to realizing AI justice, rather than relying solely on maximizing the utility of AI technology. -/- Keywords: AI ethics and justice, fairness and non-discrimination, biased training data, public discussion, autonomy, human-centered approach.
  13. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries into (...)
  14. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration to augment (...)
  15. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We (...)
  16. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  17. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - forthcoming - Social Epistemology.
    The ability to interact in a natural language profoundly changes devices’ interfaces and the potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze the linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  18. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact.Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
    2 citations
  19. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    3 citations
  20. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    1 citation
  21. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper assignments (...)
  22. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    5 citations
  23. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots consists of large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  24. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models.Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
    1 citation
  25. Apriori Knowledge in an Era of Computational Opacity: The Role of AI in Mathematical Discovery.Eamon Duede & Kevin Davey - forthcoming - Philosophy of Science.
    Computation is central to contemporary mathematics. Many accept that we can acquire genuine mathematical knowledge of the Four Color Theorem from Appel and Haken's program insofar as it is simply a repetitive application of human forms of mathematical reasoning. Modern LLMs / DNNs are, by contrast, opaque to us in significant ways, and this creates obstacles in obtaining mathematical knowledge from them. We argue, however, that if a proof-checker automating human forms of proof-checking is attached to such machines, then we (...)
  26. You are what you’re for: Essentialist categorization in large language models.Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or is made of. (...)
    2 citations
  28. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term inspired by human psychology, with the important difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than perceptual failures. -/- AI hallucinations may have their source in the data itself, that is, the (...)
  29. Emerging Technologies & Higher Education.Jake Burley & Alec Stubbs - 2023 - IEET White Papers.
    Extended Reality (XR) and Large Language Model (LLM) technologies have the potential to significantly influence higher education practices and pedagogy in the coming years. As these emerging technologies reshape the educational landscape, it is crucial for educators and higher education professionals to understand their implications and make informed policy decisions for both individual courses and universities as a whole. -/- This paper has two parts. In the first half, we give an overview of XR technologies and their potential future role (...)
  30. The Hazards of Putting Ethics on Autopilot.Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  31. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - manuscript
    The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union context, focusing on liability, privacy, intellectual property, and cybersecurity. It examines (...)
  32. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024, HCXAI24).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  33. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  34. Evaluation and Design of Generalist Systems (EDGeS).John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  35. Writing with ChatGPT.Ricky Mouser - 2024 - Teaching Philosophy 47 (2):173-191.
    Many instructors see the use of LLMs like ChatGPT on course assignments as a straightforward case of cheating, and try hard to prevent their students from doing so by including new warnings of consequences on their syllabi, turning to iffy plagiarism detectors, or scheduling exams to occur in-class. And the use of LLMs probably is cheating, given the sorts of assignments we are used to giving and the sorts of skills we take ourselves to be instilling in our students. But (...)
  36. Does ChatGPT Have a Mind?Simon Goldstein & Benjamin Anders Levinstein - manuscript
    This paper examines the question of whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning to support these (...)
  37. Large Language Models: Assessment for Singularity.R. Ishizaki & Mahito Sugiyama - manuscript
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework to assess (...)
  38. Conversations with Chatbots.P. J. Connolly - forthcoming - In Patrick Connolly, Sandy Goldberg & Jennifer Saul (eds.), Conversations Online. Oxford University Press.
    The problem considered in this chapter emerges from the tension we find when looking at the design and architecture of chatbots on the one hand and their conversational aptitude on the other. In the way that LLM chatbots are designed and built, we have good reason to suppose they don't possess second-order capacities such as intention, belief or knowledge. Yet theories of conversation make great use of second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. (...)
  39. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council (NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common (...)
  40. Some discussions on critical information security issues in the artificial intelligence era.Vuong Quan Hoang, Viet-Phuong La, Hong-Son Nguyen & Minh-Hoang Nguyen - manuscript
    The rapid advancement of Information Technology (IT) platforms and programming languages has transformed the dynamics and development of human society. The cyberspace and associated utilities are expanding, leading to a gradual shift from real-world living to virtual life (also known as cyberspace or digital space). The expansion and development of Natural Language Processing (NLP) models and Large Language Models (LLMs) demonstrate human-like characteristics in reasoning, perception, attention, and creativity, helping humans overcome operational barriers. Alongside the immense potential of artificial intelligence (...)
  41. Social AI and The Equation of Wittgenstein’s Language User With Calvino’s Literature Machine.Warmhold Jan Thomas Mollema - 2024 - International Review of Literary Studies 6 (1):39-55.
    Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)? People have intuitively started ascribing emotions or consciousness to social AI (‘affective artificial agents’), with consequences that range from love to suicide. The philosophical question of whether such ascriptions are warranted is thus very relevant. This paper advances the argument that LLMs instantiate language users in Ludwig Wittgenstein’s sense but that ascribing psychological predicates to these systems remains a functionalist temptation. Social AIs (...)
  42. Technological Progress, AI: The Digital Era and National Information Security.Vương Quân Hoàng, Lã Việt Phương, Nguyễn Hồng Sơn & Nguyễn Minh Hoàng - manuscript
    The rapid advancement of Information Technology (IT) platforms and programming languages has transformed the dynamics and development of human society. The cyberspace and associated utilities are expanding, leading to a gradual shift from real-world living to virtual life (also known as cyberspace or digital space). The expansion (...)
  43. Technosophistic Shadow Plays [Technosophistische Schattenspiele].Wessel Reijers, Felix Maschewski & Anna-Verena Nosthoff - 2023 - Philosophie Magazin.
  44. Some Critical Information Security Issues in the AI Era.Vương Quân Hoàng, Lã Việt Phương, Nguyễn Hồng Sơn & Nguyễn Minh Hoàng - 2024 - Cổng Thông Tin Điện Tử Học Viện Cảnh Sát Nhân Dân.
    The rapid advancement of Information Technology (IT) platforms and programming languages has transformed the dynamics and development of human society. The cyberspace and associated utilities are expanding, leading to a gradual shift from real-world living to virtual life (also known as cyberspace or digital space). In the context (...)
  45. Some Critical Information Security Issues in the AI Era (Part 1: Technological Progress - Challenges).Vương Quân Hoàng, Lã Việt Phương, Nguyễn Hồng Sơn & Nguyễn Minh Hoàng - 2024 - Hội Đồng Lý Luận Trung Ương.
    The rapid advancement of Information Technology (IT) platforms and programming languages has transformed the dynamics and development of human society. The cyberspace and associated utilities are expanding, leading to a gradual shift from real-world living to virtual life (also known as cyberspace or digital space). The expansion (...)
  46. Some Critical Information Security Issues in the AI Era (Part 2: People - Society).Vương Quân Hoàng, Lã Việt Phương, Nguyễn Hồng Sơn & Nguyễn Minh Hoàng - 2024 - Hội Đồng Lý Luận Trung Ương.
    The rapid advancement of Information Technology (IT) platforms and programming languages has transformed the dynamics and development of human society. The cyberspace and associated utilities are expanding, leading to a gradual shift from real-world living to virtual life (also known as cyberspace or digital space). The expansion (...)
  47. Blurring the Line Between Human and Machine Minds: Is U.S. Law Ready for Artificial Intelligence?Kipp Coddington & Saman Aryana - manuscript
    This Essay discusses whether U.S. law is ready for artificial intelligence (“AI”), which is headed down the road of blurring the line between human and machine minds. Perhaps the most high-profile and recent examples of AI are Large Language Models (“LLMs”) such as ChatGPT and Google Gemini that can generate written text, reason, and analyze in a manner that seems to mimic human capabilities. U.S. law is based on English common law, which in turn incorporates Christian principles that assume the (...)
  48. The Ghost in the Machine has an American accent: value conflict in GPT-3.Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). (...)
  49. Epistemological Alchemy through the hermeneutics of Bits and Bytes.Shahnawaz Akhtar - manuscript
    This paper delves into the profound advancements of Large Language Models (LLMs), epitomized by GPT-3, in natural language processing and artificial intelligence. It explores the epistemological foundations of LLMs through the lenses of Aristotle and Kant, revealing apparent distinctions from human learning. Transitioning seamlessly, the paper then delves into the ethical landscape, extending beyond knowledge acquisition to scrutinize the implications of LLMs in decision-making and content creation. The ethical scrutiny, employing virtue ethics, deontological ethics, and teleological ethics, delves into LLMs' (...)
  50. Self-Adversarial Surveillance for Superalignment.R. Ishizaki & Mahito Sugiyama - manuscript
    In this paper, first we discuss the conditions under which a Large Language Model (LLM) can emulate a superior LLM and potentially trigger an intelligence explosion, along with the characteristics and dangers of the resulting superintelligence. We also explore “superalignment,” the process of safely keeping an intelligence explosion under human control. We discuss the goals that should be set for the initial LLM that might trigger the intelligence explosion and the Self-Adversarial Surveillance (SAS) system, which involves having the LLM evaluate (...)
1 — 50 / 56