Results for 'AI'

214 found
  1. Saliva Ontology: An Ontology-Based Framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. (...)
  2. Bioinformatics Advances in Saliva Diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva diagnostics made possible by the Saliva Ontology and SDxMart.
  3. Towards a Body Fluids Ontology: A Unified Application Ontology for Basic and Translational Science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, (...)
  4. Transparent, Explainable, and Accountable AI for Robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
  5. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - forthcoming - Synthese:arXiv:1901.02918v1.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when (...)
  6. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers' position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
  7. New Developments in the Philosophy of AI.Vincent Müller - 2016 - In Fundamental Issues of Artificial Intelligence. Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long-term risks of AI now appear (...)
  8. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2017 - In Vincent Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in part because most of the final goals we could give an AI admit of so-called "perverse instantiations". I propose a novel solution to this puzzle: instruct the AI to love humanity. The proposal is compared with Yudkowsky's Coherent Extrapolated Volition and Bostrom's Moral Modeling proposals.
  9. Philosophy and Theory of Artificial Intelligence, 3-4 October (Report on PT-AI 2011).Vincent C. Müller - 2011 - The Reasoner 5 (11):192-193.
    Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
  10. Theory and Philosophy of AI (Minds and Machines, 22/2 - Special Volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
  11. Toward an Ethics of AI Assistants: an Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: (...)
  12. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These (...)
  13. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to (...)
  14. AI, Concepts, and the Paradox of Mental Representation, with a Brief Discussion of Psychological Essentialism.Eric Dietrich - 2001 - J. Of Exper. And Theor. AI 13 (1):1-7.
    Mostly philosophers cause trouble. I know because on alternate Thursdays I am one -- and I live in a philosophy department where I watch all of them cause trouble. Everyone in artificial intelligence knows how much trouble philosophers can cause (and in particular, we know how much trouble one philosopher -- John Searle -- has caused). And, we know where they tend to cause it: in knowledge representation and the semantics of data structures. This essay is about a recent case of this sort of thing. One of the take-home messages will be that AI ought to redouble its efforts to understand concepts.
  15. AI, Situatedness, Creativity, and Intelligence; or the Evolution of the Little Hearing Bones.Eric Dietrich - 1996 - J. Of Experimental and Theoretical AI 8 (1):1-6.
    Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like (...)
  16. AI and the Mechanistic Forces of Darkness.Eric Dietrich - 1995 - J. Of Experimental and Theoretical AI 7 (2):155-161.
    Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, (...)
  17. First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here the ways to create the safest and simplest form of AI which may work as an AI Nanny. Such an AI system will be enough to solve most problems which we expect AI to solve, including control of robotics and acceleration of medical research, but it will present less risk, as it will be less different from humans. As AI police, it will work as an operating system for most computers, producing a world surveillance system which will be able to anticipate and stop any potential terrorists and bad actors in advance. As uploading technology is lagging, and neuromorphic AI is intrinsically dangerous, the most plausible route to a human-based AI Nanny is either a functional model of the human mind or a Narrow-AI-empowered group of people.
  18. Global Solutions Vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or "box" (...)
  19. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising way to AI safety: to send a message now (by openly publishing it on the Internet) that may be read (...)
  20. AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI.Jose Hernandez-Orallo & Karina Vold - forthcoming - In Proceedings of the AAAI/ACM.
    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of (...)
  21. Is There a Future for AI Without Representation?Vincent C. Müller - 2007 - Minds and Machines 17 (1):101-115.
    This paper investigates the prospects of Rodney Brooks' proposal for AI without representation. It turns out that the supposedly characteristic features of "new AI" (embodiment, situatedness, absence (...)
  22. A Modal Defence of Strong AI.Steffen Borge - 2007 - In Dermot Moran Stephen Voss (ed.), The Proceedings of the Twenty-First World Congress of Philosophy. The Philosophical Society of Turkey. pp. 127-131.
    John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle's Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the implementation of them cannot suffice for semantics. Perhaps our world is a world in which any implementation of the right computer program will create a system with intrinsic intentionality, in which case Searle's Chinese Room Scenario is empirically (nomically) impossible. But, indeed, perhaps our world is a world in which Searle's Chinese Room Scenario is empirically (nomically) possible and the silicon basis of modern-day computers is one kind of material unsuited to give you intrinsic intentionality. The metaphysical question turns out to be a question of what kind of world we are in, and I argue that in this respect we do not know our modal address. The Modal Address Argument does not ensure that strong AI will succeed, but it shows that Searle's challenge to the research program of strong AI fails in its objectives.
  23. Representation, Analytic Pragmatism and AI.Raffaela Giovagnoli - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. pp. 161-169.
    Our contribution aims at individuating a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the (...)
  24. Consciousness as Computation: A Defense of Strong AI Based on Quantum-State Functionalism.R. Michael Perry - 2006 - In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
    The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a (...)
  25. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses.Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir fiction, science fiction and Dada are deployed to illustrate the view.
  26. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding (...)
  27. Making AI Philosophical Again: On Philip E. Agre's Legacy.Jethro Masís - 2014 - Continent 4 (1):58-70.
  28. Assessing the Future Plausibility of Catastrophically Dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of (...)
  29. Making Metaethics Work for AI: Realism and Anti-Realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  30. Levels of Self-Improvement in AI and Their Implications for AI Safety.Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement (like the intelligence-measuring problem, testing problem, parent-child problem and halting risks), even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze how self-improvement could happen at different stages of the development of AI, including the stages at which AI is boxed or hiding in the internet.
  31. La riscoperta dell'umiltà come virtù relazionale: la risposta della tradizione ai problemi contemporanei.Michel Croce - 2014 - In Simona Langella & Maria Silvia Vaccarezza (eds.), Emozioni e virtù. Percorsi e prospettive di un tema classico. Napoli-Salerno: Orthotes. pp. 159-170.
    This contribution concerns the specific theme of humility as an ethical virtue and arises within a broader study of the relation between humility in the moral domain and intellectual humility, a recurring theme among proponents of Virtue Epistemology. The aim of this essay is to deepen the recent debate on the nature of humility as a virtue and on its definition, and my goal is to show how the Aristotelian-Thomistic tradition, generally undervalued by those who deal with humility in contemporary analytic philosophy, can provide a satisfactory answer to the most recent problems related to this theme. The structure of this short contribution comprises a first section in which I will offer a concise overview of the conception of humility in Aristotle and Thomas Aquinas. Then, in the second section, I will analyse the reception of this virtue within contemporary analytic moral philosophy, addressing two problems that any definition of humility must account for: the compatibility of humility with self-knowledge, and the possibility, for those who excel in a given field, of being humble. I will show how the Aristotelian-Thomistic tradition can provide an effective answer to these recent questions, and I will present the general traits of the conception of humility as a relational virtue. Finally, I will conclude by indicating some points worthy of future development.
  32. Book Review of: R. Turner, Logics for AI. [REVIEW] Gary James Jason - 1989 - Philosophia 19 (1):73-83.
  33. Narrow AI Nanny: Reaching Strategic Advantage Via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways to create the safest and simplest form of AI which may work as an AI Nanny, that is, a global surveillance state powered by a Narrow AI, or AI Police. A similar but more limited system has already been implemented in China for the prevention of ordinary crime. AI police will be able to predict the actions of and stop potential terrorists and bad actors in advance. Implementation of such AI police will probably consist of two steps: first, a strategic decisive advantage via Narrow AI created by the intelligence services of a nuclear superpower, and then ubiquitous control over potentially dangerous agents which could create an unauthorized artificial general intelligence that could evolve into superintelligence.
  34. Revised: From Color, to Consciousness, Toward Strong AI.Xinyuan Gu - manuscript
    This article cohesively discusses three topics, namely color and its perception, the yet-to-be-solved hard problem of consciousness, and the theoretical possibility of strong AI. First, (...)
  35. Dai modelli fisici ai sistemi complessi.Giorgio Turchetti - 2012 - In Vincenzo Fano, Enrico Giannetto, Giulia Giannini & Pierluigi Graziani (eds.), Complessità e Riduzionismo. © ISONOMIA – Epistemologica, University of Urbino. pp. 108-125.
    The observation of nature with the aim of understanding the origin of the variety of forms and phenomena in which it manifests itself has remote origins. At first, the relationship with natural phenomena was dominated by feelings such as fear and wonder, which led people to suppose the existence of entities, eluding direct perception, that permeated the elements and animated them. Magic thus represents the dominant element of primitive natural philosophy, characterized by the uniqueness of events and by the impossibility of understanding and mastering them, since they were the fruit of the will of essences foreign to us and beyond our control. With the birth of civilization and its progress, the time devoted to the work necessary for sustenance and survival decreased, and in the division of tasks some individuals could devote part of their time to the observation of nature and to its interpretation in non-transcendent terms. In nature, understood as everything that surrounds us, composed of living beings and of inorganic matter in its various aggregations on earth and in the cosmos, what attracted attention from the beginning were the regular and periodic phenomena such as the motions of the moon, the planets and the stars. Meanwhile, after an initial push dictated by practical needs such as counting objects or measuring fields, mathematics had developed autonomously and proved suitable for describing the motions of celestial bodies in quantitative terms. The earth was at the center of the universe, while the motion of the other celestial bodies resulted from a composition of uniform circular motions. This geocentric and Pythagorean (harmony of the spheres) vision of the universe prevailed until the dawn of modern science, even though a heliocentric description, based on sound arguments, had been proposed. As for the structure of matter, the Presocratics had already proposed the four elements, while the atomists had traced everything back to primordial elementary entities, whose aggregation and disaggregation gives rise to all the states and manifold forms of matter. These intuitions reappear in modern physics, which contemplates four states of aggregation that have atoms as their single common substrate. Modern physics is born with Galileo and Newton, whose dynamics develops from Kepler's laws describing the motion of the planets in the heliocentric system, and can then be applied to any material system. In the following two centuries it was therefore believed that a mechanical model could be developed for any physical system, and hence for the entire universe, whose evolution was to be mathematically predictable. For thermal phenomena, however, ad hoc laws were formulated, such as those of thermodynamics, which show that macroscopic processes are irreversible, in contrast with the laws of mechanics. We owe to Boltzmann the attempt to reduce thermodynamics to mechanics for a large number of particles, whose disordered motions are given a statistical reading. The increase of entropy and irreversibility follow from the hypothesis of molecular chaos, namely that the motions are so disordered that memory of the initial state is rapidly lost. The idea of introducing a probability measure into the context of mechanics seems antithetical to the very nature of a theory which until then had been directed at the study of systems with regular, reversible motions, individually predictable over long times.
However, probabilistic analysis becomes essential for the study of systems characterized by strong instabilities and by irregular orbits, for which prediction requires knowledge of the initial conditions with precisions that are physically unattainable. By combining the deterministic evolution of Newtonian or Hamiltonian mechanics with a statistical description through a suitable invariant probability measure in phase space, the theory of dynamical systems is born, which makes it possible to describe not only ordered systems or chaotic systems but also all those in which order and chaos coexist in different proportions and which display an extraordinary variety of geometric structures and statistical properties, so as to provide, if not quite a theoretical framework, at least useful metaphors for the description of complex systems. Even though there is no unanimous consensus, it seems appropriate to us to define as complex not so much systems characterized by nonlinear interactions among their components and by emergent properties, which fall fully within the framework of dynamical systems, but rather living systems, or artificial-life systems that share their essential properties. Among these we can certainly count the capacity to manage information and to replicate, making it possible, through a mechanism of mutation and selection, to give rise to structures of growing structural richness endowed with ever more elaborate cognitive capacities. A theory of complex systems does not yet exist, even though the theory of automata developed by Von Neumann and Darwin's theory of evolution can provide some of its important pillars. Recently, network theory has been used successfully to describe the statistical properties of the connections among the constituent elements (nodes) of a complex system. The connections, which are neither completely random nor completely hierarchical, allow sufficient robustness with respect to malfunctions or damage to the nodes, combined with an adequate level of organization to allow efficient functioning. In physical systems the basic model is a set of interacting atoms or molecules, which give rise to different structures such as a gas, a liquid or a crystal, as the result of emergent properties. In the same way, for complex systems we can propose a system of interacting automata as the basic model. The manifold forms that the system assumes, in this case too, are to be considered emergent properties of the same substrate as external conditions change, and the fruit of replications, each of which introduces small but significant variants. This is the great difference between a physical system and a complex system: the former, once the external conditions are fixed, always has the same properties; the latter instead changes with the flow of time, because its internal organization changes not only as environmental factors change but also with the succession of generations. There is therefore a flow of information that grows with time and allows the constituent automata and the whole structure to acquire new capacities. This increase in order and structural richness naturally occurs at the expense of the surrounding environment, so that globally its entropy grows in accordance with the second law of thermodynamics.
In the absence of a formalized theory comparable to that of dynamical systems, for complex systems one can make observations and measurements, both local ones on the elementary constituents and their connections and global ones on the whole system, or else build models that can be validated through simulation. If one manages to provide a sufficiently detailed description of a system, it is then possible to observe how it behaves by translating the rules into algorithms and thus building a virtual, even if simplified, version of the system itself. The most difficult step is the comparison between the simulated system and the real one, which necessarily passes through the evaluation of a limited number of parameters that characterize its properties. The encoding of the design is a crucial property of complex systems, because it is realized with an expenditure of mass and energy incomparably smaller than that needed to realize the whole structure; at the same time, making small modifications to a design is quick and cheap. In this process, which involves the continual introduction of variants, multiple paths open up, and with the passing of time a history is realized in a unique and unrepeatable way. The succession of physical events characterized by irreversible processes and by the presence of multiple bifurcations also gives rise to a history that cannot be traversed backwards, nor reproduced even were we able to restart from the same initial conditions. However, there is a deep difference between the history of a physical system such as the terrestrial globe and the history of life. The former records the manifold changes undergone by the surface of our planet, where mountains and seas are born and disappear without a clear underlying design. The history of life is characterized by a progressive growth of structural and functional richness, accompanied by a growth of design complexity. The representation of this history takes the form of a tree with its ramifications, showing the continual diversification of structures and their evolution towards ever more advanced forms. The direction in which time flows is well defined: structures refine their sensory capacities while the power of the organs that process information grows. A complex system is also characterized by a multiplicity of scales, which is all the higher the further one climbs the evolutionary ladder. The reason is that the progression towards ever more elaborate structures takes place by using other structures as building blocks, so the image one can offer is that of a multi-layered network of automata: starting from the bottom, a network with its emergent properties becomes the node of a second-level network, that is, a second-level automaton that interacts with other automata of the same type, and so on. In inorganic systems, where there is no design, only two levels are normally distinguished: that of the elementary constituents and that of the macroscopic scale. Physical systems can be traced back to a few universal laws that govern the elementary constituents of matter, but the passage from the small-scale to the large-scale description is arduous and is made possible only by numerical simulation once we move away from the simplest situations characterized by statistical equilibrium. The limits that the reductionist program encounters already for physical systems become decidedly stronger in the case of complex systems.
  36. Intentionality and Background: Searle and Dreyfus Against Classical AI Theory.Teodor Negru - 2013 - Filosofia Unisinos 14.
  37. From Human to Artificial Cognition and Back: New Perspectives on Cognitively Inspired AI Systems.Antonio Lieto & Daniele Radicioni - 2016 - Cognitive Systems Research 39 (c):1-3.
    We overview the main historical and technological elements characterising the rise, the fall and the recent renaissance of the cognitive approaches to Artificial Intelligence and provide some (...)
  38. Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these (...)
  39. Future Progress in Artificial Intelligence: A Survey of Expert Opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer. pp. 553-571.
    There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular timeframe, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be "bad" or "extremely bad" for humanity.
  40. AAAI: an Argument Against Artificial Intelligence.Sander Beckers - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good (...)
  41. Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume (...)
  42. Risks of Artificial General Intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue "Risks of artificial general intelligence", Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good - Steve Omohundro - pages 303-315 - - - The errors, insights and lessons of famous AI predictions - and what they mean for the future - Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh - pages 317-342 - - - The path to more general artificial intelligence - Ted Goertzel - pages 343-354 - - - Limitations and risks of machine ethics - Miles Brundage - pages 355-372 - - - Utility function security in artificially intelligent agents - Roman V. Yampolskiy - pages 373-389 - - - GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement - Ben Goertzel - pages 391-403 - - - Universal empathy and ethical bias for artificial general intelligence - Alexey Potapov & Sergey Rodionov - pages 405-416 - - - Bounding the impact of AGI - András Kornai - pages 417-438 - - - Ethics of brain emulations - Anders Sandberg - pages 439-457.
  43. Philosophy and Theory of Artificial Intelligence.Vincent C. Müller (ed.) - 2013 - Springer.
    [Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
  44. Philosophy and Theory of Artificial Intelligence 2017.Vincent Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
  45. Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. 1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence - 2 Steve Omohundro, Autonomous Technology and the Greater Human Good - 3 Stuart Armstrong, Kaj Sotala and Seán Ó hÉigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future - 4 Ted Goertzel, The Path to More General Artificial Intelligence - 5 Miles Brundage, Limitations and Risks of Machine Ethics - 6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents - 7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement - 8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence - 9 András Kornai, Bounding the Impact of AGI - 10 Anders Sandberg, Ethics and Impact of Brain Emulations - 11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff - 12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI.
  46. Fundamental Issues of Artificial Intelligence.Vincent Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on "Theory and Philosophy of Artificial Intelligence" held in Oxford, the volume includes prominent researchers within the field from around the world.
  47. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that (...)
  48. Tacit Representations and Artificial Intelligence: Hidden Lessons From an Embodied Perspective on Cognition.Elena Spitzer - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer. pp. 425-441.
    In this paper, I explore how an embodied perspective on cognition might inform research on artificial intelligence. Many embodied cognition theorists object to the central role that (...)
  49. Should Machines Be Tools or Tool-Users? Clarifying Motivations and Assumptions in the Quest for Superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of "mind-in-general" based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting human conscious-level notions into the operations of computers creates confusion and makes it harder to identify the nature and location of that threshold. There is confusion, in particular, about how, and even whether, various capabilities deemed intelligent relate to human consciousness. This suggests that insufficient thought has been given to very fundamental concepts, a dangerous state of affairs, given the intrinsic power of the technology. It also suggests that research in the area of artificial general intelligence may unwittingly be (mis)guided by unconscious motivations and assumptions. While it might be inconsequential if philosophers get it wrong (or fail to agree on what is right), it could be devastating if AI developers, corporations, and governments follow suit. It therefore seems worthwhile to try to clarify some fundamental notions.
  50. Criatividade, Transhumanismo e a metáfora Co-criador Criado.Eduardo R. Cruz - 2017 - Quaerentibus 5 (9):42-64.
    The goal of Transhumanism is to change the human condition through radical enhancement of its positive traits and through AI (Artificial Intelligence). Among these traits the transhumanists (...)
Results 1-50 of 214