Results for 'Generative AI'

946 found
  1. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - manuscript
    The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union context, focusing on liability, privacy, intellectual property, and cybersecurity. (...)
  2. Embracing ChatGPT and other generative AI tools in higher education: The importance of fostering trust and responsible use in teaching and learning.Jonathan Y. H. Sim - 2023 - Higher Education in Southeast Asia and Beyond.
    Trust is the foundation for learning, and we must not allow ignorance of these new technologies, like Generative AI, to disrupt the relationship between students and educators. As a first step, we need to actively engage with AI tools to better understand how they can help us in our work.
  3. Generative AI and photographic transparency.P. D. Magnus - forthcoming - AI and Society:1-6.
    There is a history of thinking that photographs provide a special kind of access to the objects depicted in them, beyond the access that would be provided by a painting or drawing. What is included in the photograph does not depend on the photographer’s beliefs about what is in front of the camera. This feature leads Kendall Walton to argue that photographs literally allow us to see the objects which appear in them. Current generative algorithms produce images in response (...)
  4. Generative AI and the value changes and conflicts in its integration in Japanese educational system.Ngoc-Thang B. Le, Phuong-Thao Luu & Manh-Tung Ho - manuscript
    This paper critically examines Japan's approach toward the adoption of Generative AI such as ChatGPT in education by studying media discourse and guidelines at both the national and local levels. It highlights the lack of consideration for socio-cultural characteristics inherent in the Japanese educational system, such as the notion of self, teachers’ work ethics, and community-centric activities, in the successful adoption of the technology. We reveal that ChatGPT’s infusion is likely to further accelerate the shift away from the traditional notion (...)
  5. Diffusing the Creator: Attributing Credit for Generative AI Outputs.Donal Khosrowi, Finola Finn & Elinor Clark - 2023 - AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society.
    The recent wave of generative AI (GAI) systems like Stable Diffusion that can produce images from human prompts raises controversial issues about creatorship, originality, creativity and copyright. This paper focuses on creatorship: who creates and should be credited with the outputs made with the help of GAI? Existing views on creatorship are mixed: some insist that GAI systems are mere tools, and human prompters are creators proper; others are more open to acknowledging more significant roles for GAI, but most (...)
  6. Generative AI and human labor: who is replaceable?AbuMusab Syed - 2023 - AI and Society:1-3.
  7. AGGA: A Dataset of Academic Guidelines for Generative AIs.Junfeng Jiao, Saleh Afroogh, Kevin Chen, David Atkinson & Amit Dhurandhar - 2024 - Harvard Dataverse 4.
    AGGA (Academic Guidelines for Generative AIs) is a dataset of 80 academic guidelines for the usage of generative AIs and large language models in academia, selected systematically and collected from official university websites across six continents. Comprising 181,225 words, the dataset supports natural language processing tasks such as language modeling, sentiment and semantic analysis, model synthesis, classification, and topic labeling. It can also serve as a benchmark for ambiguity detection and requirements categorization. This resource aims to facilitate research (...)
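    As a rough illustration of the NLP uses the abstract above lists, the following minimal Python sketch assumes (hypothetically) that the AGGA guidelines have been downloaded as one plain-text file per guideline into a local agga/ directory; the directory name, file layout, and keyword lists are illustrative assumptions, not part of the dataset's documented interface. It counts words across the corpus and applies a simple keyword-based topic labeling pass.

        from pathlib import Path
        from collections import Counter
        import re

        # Hypothetical layout: one plain-text file per academic guideline in ./agga/
        guideline_files = sorted(Path("agga").glob("*.txt"))

        # Illustrative topic keywords; not taken from the AGGA documentation.
        topics = {
            "integrity": ["plagiarism", "cheating", "misconduct"],
            "privacy": ["privacy", "data protection", "confidential"],
            "attribution": ["cite", "citation", "acknowledge", "disclosure"],
        }

        total_words = 0
        topic_counts = Counter()

        for path in guideline_files:
            text = path.read_text(encoding="utf-8").lower()
            total_words += len(re.findall(r"[a-z']+", text))
            # Label a guideline with every topic whose keywords it mentions.
            for topic, keywords in topics.items():
                if any(k in text for k in keywords):
                    topic_counts[topic] += 1

        print(f"{len(guideline_files)} guidelines, {total_words} words in total")
        for topic, n in topic_counts.most_common():
            print(f"{topic}: mentioned in {n} guidelines")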
  8. Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure.Paul Formosa, Sarah Bankins, Rita Matulionyte & Omid Ghasemi - forthcoming - AI and Society.
    The increasing use of Generative AI raises many ethical, philosophical, and legal issues. A key issue here is uncertainties about how different degrees of Generative AI assistance in the production of text impacts assessments of the human authorship of that text. To explore this issue, we developed an experimental mixed methods survey study (N = 602) asking participants to reflect on a scenario of a human author receiving assistance to write a short novel as part of a 3 (...)
  9. AGGA: A Dataset of Academic Guidelines for Generative AIs.Saleh Afroogh, Junfeng Jiao, Kevin Chen, David Atkinson & Amit Dhurandhar - 2024 - Harvard Dataverse 4.
    AGGA (Academic Guidelines for Generative AIs) is a dataset of 80 academic guidelines for the usage of generative AIs and large language models in academia, selected systematically and collected from official university websites across six continents. Comprising 181,225 words, the dataset supports natural language processing tasks such as language modeling, sentiment and semantic analysis, model synthesis, classification, and topic labeling. It can also serve as a benchmark for ambiguity detection and requirements categorization. This resource aims to facilitate research (...)
  10. Escape climate apathy by harnessing the power of generative AI.Quan-Hoang Vuong & Manh-Tung Ho - 2024 - AI and Society 39:1-2.
    “Throw away anything that sounds too complicated. Only keep what is simple to grasp...If the information appears fuzzy and causes the brain to implode after two sentences, toss it away and stop listening. Doing so will make the news as orderly and simple to understand as the truth.” - In “GHG emissions,” The Kingfisher Story Collection, (Vuong 2022a).
    4 citations
  11. Every dog has its day: An in-depth analysis of the creative ability of visual generative AI.Maria Hedblom - 2024 - Cosmos+Taxis 12 (5-6):88-103.
    The recent remarkable success of generative AI models to create text and images has already started altering our perspective of intelligence and the “uniqueness” of humanity in this world. Simultaneously, arguments on why AI will never exceed human intelligence are ever-present as seen in Landgrebe and Smith (2022). To address whether machines may rule the world after all, this paper zooms in on one of the aspects of intelligence Landgrebe and Smith (2022) neglected to consider: creativity. Using Rhodes four (...)
  12. Legal Definitions of Intimate Images in the Age of Sexual Deepfakes and Generative AI.Suzie Dunn - 2024 - McGill Law Journal 69:1-15.
    In January 2024, non-consensual deepfakes came to public attention with the spread of AI-generated sexually abusive images of Taylor Swift. Although this brought newfound energy to the debate on what some call non-consensual synthetic intimate images (i.e. images that use technology such as AI or Photoshop to make sexual images of a person without their consent), female celebrities like Swift have had deepfakes like these made of them for years. In 2017, a Reddit user named “deepfakes” posted several (...)
  13. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks.Trystan S. Goetze - 2024 - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues (...)
    1 citation
  14. AI training data, model success likelihood, and informational entropy-based value.Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping points and planetary (...)
  15. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries (...)
  16. Mapping the potential AI-driven virtual hyper-personalised ikigai universe.Soenke Ziesche & Roman Yampolskiy - manuscript
    Ikigai is a Japanese concept, which, in brief, refers to the “reason or purpose to live”. I-risks have been identified as a category of risks complementing x-risks, i.e., existential risks, and s-risks, i.e., suffering risks, which describes undesirable future scenarios in which humans are deprived of the pursuit of their individual ikigai. While some developments in AI increase i-risks, there are also AI-driven virtual opportunities, which reduce i-risks by increasing the space of potential ikigais, largely due to developments in (...)
  17. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact (...)
  18. Defining Generative Artificial Intelligence: An Attempt to Resolve the Confusion about Diffusion.Raphael Ronge, Markus Maier & Benjamin Rathgeber - manuscript
    The concept of Generative Artificial Intelligence (GenAI) is ubiquitous in the public and semi-technical domain, yet rarely defined precisely. We clarify main concepts that are usually discussed in connection to GenAI and argue that one ought to distinguish between the technical and the public discourse. In order to show its complex development and associated conceptual ambiguities, we offer a historical-systematic reconstruction of GenAI and explicitly discuss two exemplary cases: the generative status of the Large Language Model BERT and (...)
  19. What Are Lacking in Sora and V-JEPA’s World Models? -A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination.Jianqiu Zhang - unknown
    Sora from OpenAI has shown exceptional performance, yet it faces scrutiny over whether its technological prowess equates to an authentic comprehension of reality. Critics contend that it lacks a foundational grasp of the world, a deficiency V-JEPA from Meta aims to amend with its joint embedding approach. This debate is vital for steering the future direction of Artificial General Intelligence (AGI). We enrich this debate by developing a theory of productive imagination that generates a coherent world model based on Kantian (...)
  20. AI-generated art and fiction: signifying everything, meaning nothing?Steven R. Kraaijeveld - forthcoming - AI and Society:1-3.
    2 citations
  21. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  22. Sự gia tăng của AI tạo sinh và những rủi ro tiềm ẩn cho con người.Hoang Tung-Duong, Dang Tuan-Dung & Manh-Tung Ho - 2024 - Tạp Chí Thông Tin Và Truyền Thông 9 (9/2024):66-73.
    The emergence of generative AI tools built on large language models (LLMs) has given people a new kind of instrument, especially in fields such as education and journalism, but these tools also bring many problems. In this article, the authors point out newly emerging shortcomings, as well as existing issues that risk being pushed even further (...)
  23. The Artificial Sublime.Regina Rini - manuscript
    Generative AI systems like ChatGPT and Midjourney can produce prose or images. But can they produce art? I argue that this question, though natural and intriguing, is the wrong one to ask. A better question is this: can generative AI yield distinct or novel forms of aesthetic value? And I argue that the answer is yes. Generative AI can be used to put us in contact with the artificial sublime – a type of aesthetic value that Kant (...)
  24. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact.Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
    4 citations
  25. The Hazards of Putting Ethics on Autopilot.Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  26. Can AI Mind Be Extended?Alice C. Helliwell - 2019 - Evental Aesthetics 8 (1):93-120.
    Andy Clark and David Chalmers’s theory of extended mind can be reevaluated in today’s world to include computational and Artificial Intelligence (AI) technology. This paper argues that AI can be an extension of human mind, and that if we agree that AI can have mind, it too can be extended. It goes on to explore the example of Ganbreeder, an image-making AI which utilizes human input to direct behavior. Ganbreeder represents one way in which AI extended mind could be achieved. (...)
  27. Digital Homunculi: Reimagining Democracy Research with Generative Agents.Petr Špecián - manuscript
    The pace of technological change continues to outstrip the evolution of democratic institutions, creating an urgent need for innovative approaches to democratic reform. However, the experimentation bottleneck - characterized by slow speed, high costs, limited scalability, and ethical risks - has long hindered progress in democracy research. This paper proposes a novel solution: employing generative artificial intelligence (GenAI) to create synthetic data through the simulation of digital homunculi, GenAI-powered entities designed to mimic human behavior in social contexts. By enabling (...)
  28. Content Reliability in the Age of AI: A Comparative Study of Human vs. GPT-Generated Scholarly Articles.Rajesh Kumar Maurya & Swati R. Maurya - 2024 - Library Progress International 44 (3):1932-1943.
    The rapid advancement of Artificial Intelligence (AI) and the development of Large Language Models (LLMs) like Generative Pretrained Transformers (GPTs) have significantly influenced content creation in scholarly communication and across various fields. This paper presents a comparative analysis of the content reliability between human-generated and GPT-generated scholarly articles. Recent developments in AI suggest that GPTs have become capable of generating content that can mimic human language to a greater extent. This raises questions about the quality, accuracy, and (...)
  29. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
  30. Prosthetic Godhood and Lacan’s Alethosphere: The Psychoanalytic Significance of the Interplay of Randomness and Structure in Generative Art.Rayan Magon - 2023 - 26th Generative Art Conference.
    Psychoanalysis, particularly as articulated by figures like Freud and Lacan, highlights the inherent division within the human subject—a schism between the conscious and unconscious mind. It could be said that this suggests that such an internal division becomes amplified in the context of generative art, where technology and algorithms are used to generate artistic expressions that are meant to emerge from the depths of the unconscious. Here, we encounter the tension between the conscious artist and the generative process (...)
  31. Wittgenstein and the Aesthetic Robot's Handicap.Julian Friedland - 2005 - Philosophical Investigations 28 (2):177-192.
    Ask most any cognitive scientist working today if a digital computational system could develop aesthetic sensibility and you will likely receive the optimistic reply that this remains an open empirical question. However, I attempt to show, while drawing upon the later Wittgenstein, that the correct answer is in fact available. And it is a negative a priori. It would seem, for example, that recent computational successes in generative AI and textual attribution, most notably those of Donald Foster (famed finder (...)
    2 citations
  32. Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT.Jo Ann Oravec - 2023 - Journal of Interactive Learning Research 34 (2).
    Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race” that involves cheating-detection system developers versus technology savvy students is attracting increased attention to cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty (...)
  33. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  34. Netizens, Academicians, and Information Professionals' Opinions About AI With Special Reference To ChatGPT.Subaveerapandiyan A., A. Vinoth & Neelam Tiwary - 2023 - Library Philosophy and Practice (E-Journal):1-16.
    This study aims to understand the perceptions and opinions of academicians towards ChatGPT-3 by collecting and analyzing social media comments, and a survey was conducted with library and information science professionals. The research uses a content analysis method and finds that while ChatGPT-3 can be a valuable tool for research and writing, it is not 100% accurate and should be cross-checked. The study also finds that while some academicians may not accept ChatGPT-3, most are starting to accept it. The study (...)
  35. Chatbot Epistemology.Susan Schneider - manuscript
    AI chatbots are disseminating more and more of the Internet’s search engine activity, transforming the face of education, serving as personalized AIs in intellectual and emotional relationships with humans, becoming “digital workers” that may outmode us at work, and more. Indeed, the larger category of generative AI may be one of the most transformative technologies of this decade, or even this century. Given this, it is imperative that we understand the epistemological challenges that arise with the everyday use of (...)
  36. Imagination, Creativity, and Artificial Intelligence.Peter Langland-Hassan - 2024 - In Amy Kind & Julia Langkau (eds.), Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press.
    This chapter considers the potential of artificial intelligence (AI) to exhibit creativity and imagination, in light of recent advances in generative AI and the use of deep neural networks (DNNs). Reasons for doubting that AI exhibits genuine creativity or imagination are considered, including the claim that the creativity of an algorithm lies in its developer, that generative AI merely reproduces patterns in its training data, and that AI is lacking in a necessary feature for creativity or imagination, such (...)
  37. Synthetic Socio-Technical Systems: Poiêsis as Meaning Making.Piercosma Bisconti, Andrew McIntyre & Federica Russo - 2024 - Philosophy and Technology 37 (3):1-19.
    With the recent renewed interest in AI, the field has made substantial advancements, particularly in generative systems. Increased computational power and the availability of very large datasets have enabled systems such as ChatGPT to effectively replicate aspects of human social interactions, such as verbal communication, thus bringing about profound changes in society. In this paper, we explain that the arrival of generative AI systems marks a shift from ‘interacting through’ to ‘interacting with’ technologies and calls for a reconceptualization (...)
  38. Emergent Universal Economic Models: The Future of Human Dynamics.James Sirois - 2023 - philosopherstudio.wordpress.com.
    Human civilization is very clearly reaching a point of critical mass when it comes to technology and how it transforms culture and the economics that is therefore driven forward. The conversation around the practical aspects of generative artificial intelligence (Chat GPT, Q Star, Bard, Claude, Genesis, Firefly, and others) and their ethical implications is massively trending. The political conversations around it are slow to catch up but will soon take over once the general public feels their impact, which is (...)
  39. Emerging Technologies & Higher Education.Jake Burley & Alec Stubbs - 2023 - IEET White Papers.
    Extended Reality (XR) and Large Language Model (LLM) technologies have the potential to significantly influence higher education practices and pedagogy in the coming years. As these emerging technologies reshape the educational landscape, it is crucial for educators and higher education professionals to understand their implications and make informed policy decisions for both individual courses and universities as a whole. This paper has two parts. In the first half, we give an overview of XR technologies and their potential future role (...)
  40. L’intelligenza artificiale non dominerà il mondo (interview, with English translation).Pierangelo Soldavini & Barry Smith - 2024 - Il Sole 24 Ore 2024.
    Artificial intelligence is man's attempt to use software to emulate the intelligence of human beings. But the complexity of the human neurological system formed in the course of evolution is impossible to replicate: "Human languages and societies are complex systems, indeed complex systems of many complex systems," so much so that their mathematical modeling is impossible. Barry Smith, philosopher and professor at the University at Buffalo, shows no uncertainty about this. His latest book, written with Jobst Landgrebe, a mathematician and (...)
  41. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models.Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
    1 citation
  42. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
  43. The Age of Superintelligence: ~Capitalism to Broken Communism~.R. Ishizaki & Mahito Sugiyama - manuscript
    In this study, we metaphysically discuss how societal values will change and what will happen to the world when superintelligence is safely realized. By providing a mathematical definition of superintelligence, we examine the phenomena derived from this thesis. If an intelligence explosion is triggered under safe management through advanced AI technologies such as large language models (LLMs), it is thought that a modern form of broken communism—where rights are bifurcated from the capitalist system—will first emerge. In that era, the value (...)
  44. The University Teaching Opportunities Programme (UTOP): An Opportunity for Educators and Students to Learn from One Another.Jonathan Y. H. Sim - 2024 - Teaching Connections.
    Jonathan takes us through his experiences of being a mentor for UTOP (University Teaching Opportunities Programme), particularly how it enabled him to collaborate with his UTOP student mentees to design a learning activity in which students could think critically about AI-generated output.
  45. Tiny creation, but not a small feat.A. I. S. D. L. Team - 2024 - Sm3D Portal.
    About six weeks ago, our post referred to the long-winding path to a new theoretical innovation as the pursuit of “useless knowledge” in Flexner’s terms. That little creation is the freshly minted informational entropy-based definition of value, presented in a very short paper, initially regarded by its authors as a research note. (And it still is.) Well, only two and a half months since its birth, this new concept of value has had enough time to power up several of our (...)
  46. Ảnh hưởng của các sản phẩm trí tuệ nhân tạo tạo sinh lên ngành báo chí và truyền thông: Hành vi và kinh tế báo chí.Manh-Tung Ho, T. Hong-Kong Nguyen & Tung-Duong Hoang - 2024 - Tạp Chí Thông Tin Và Truyền Thông 8 (8/2024):80-89.
    The penetration of generative artificial intelligence (AI) into journalism has brought about a shift in trends of information consumption and production. Specifically, there are five changes in information consumption and production behavior: creating content has become easier and more diverse; generative AI can become a new point of contact between people and the flow of (...)
  47. On Fostering Responsible and Rigorous Learning with ChatGPT.Jonathan Y. H. Sim - 2023 - Teaching Connections.
    We are pleased to feature a video interview with Jonathan Sim, where he shares his ongoing journey of integrating artificial intelligence (AI) in his teaching, the challenges encountered along the way, and what educators can do to get their students to meaningfully engage with AI tools like ChatGPT to enhance their learning.
  48. Reading Law with ChatGPT (With Special Emphasis on Contextual Canons).Varol Akman - 2024 - Law, Ethics, and Technology 2024 (3):06.
    We study the performance of ChatGPT interpreting prompts that require legal expertise to answer. Our inputs are very close adaptations from the "Contextual Canons" section of Antonin Scalia and Bryan Garner's Reading Law: The Interpretation of Legal Texts (Thomson West: 2012). We report our experiments and findings for the entire section (comprising 14 canons) of the book. We conclude that as a legal reasoner ChatGPT is exceptionally successful in taking the contextual canons into account.
  49. Deepfakes, Simone Weil, and the concept of reading.Steven R. Kraaijeveld - forthcoming - AI and Society:1-3.
  50. The semiotic functioning of synthetic media.Auli Viidalepp - 2022 - Információs Társadalom 4:109-118.
    The interpretation of many texts in the everyday world is concerned with their truth value in relation to the reality around us. The recent publication experiments with computer-generated texts have shown that the distinction between true and false, or reality and fiction, is not always clear from the text itself. Essentially, in today’s media space, one may encounter texts, videos or images that deceive the reader by displaying nonsensical content or nonexistent events, while nevertheless appearing as genuine human-produced messages. This (...)
    2 citations
Showing 1–50 of 946