Some maintain that designed productions constitute a discursive genre. But is this really so? Do all designed productions really constitute a discursive genre, or do they rather adapt to a variety of pre-existing genres that are in constant modification?
In Kant’s logical texts the reference of the form S is P to an “unknown = x” is well known, but its understanding still remains controversial. Due to the universality of all concepts, the subject as much as the predicate is regarded as predicate of the x, which, in turn, is regarded as the subject of the judgment. In the CPR, this Kantian interpretation of the S-P relationship leads to the question about the relations between intuition and concept in judgment. In contrast to intuition, if no concept, due to its universality, refers immediately to an object, how should one understand the relations of S and P to one another, as well as their relations to intuition, which corresponds to the possible individuality of the object in general = x? To answer this question, it is necessary to understand Kant’s notion of extension, and to prove its irreducibility to the Port-Royal notion of extension as well as to the Fregean one.
The answers to some longstanding issues in 20th-century theoretical physics, such as the incompatibility between general relativity and quantum mechanics, the broken symmetries of the electroweak force acting at the subatomic scale, the missing mass of the Higgs particle, and also the cosmic singularity and dark matter and energy, appear to be closely related to the problem of the quantum texture of space-time and the fluctuations of its underlying geometry. Each region of the space landscape seems to be filled with woven and knotted spacetime networks; for example, spacetime has immaterial curvature and structures, such as topological singularities, and obeys the laws of quantum physics. Thus, it is filled with potential particles, pairs of virtual matter and anti-matter units, and potential properties at the quantum scale. For example, quantum entities (like fields and particles) have both wave (i.e., continuous) and particle (i.e., discrete) properties and behaviors. At the quantum level (precisely, the Planck scale) of space-time such properties and behaviors could emerge from some underlying (dynamic) phase space related to some field theory. Accordingly, these properties and behaviors leave their signature on objects and phenomena in the real Universe. In this paper we consider some conceptual issues of this question.
Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacenters (e.g., Amazon). It considers the cloud services providers leasing ‘space in the cloud’ from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private ‘clouders’ using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms and hospitals storing client data in the cloud) will have to follow rather more stringent regulations.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
This article presents the first systematic analysis of the ethical challenges posed by recommender systems, based on a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders, as opposed to just the receivers of a recommendation, in assessing the ethical impacts of a recommender system.
This work aims to examine the relations between knowledge and truth (in the sense of discovery and disclosedness) within the context of Martin Heidegger's Fundamental Ontology. It first characterizes knowledge as a derivative mode of being-in-the-world as concern, laying out the intentional structure proper to it and making explicit the phenomenological-existential interpretation of the "result" of cognitive comportment (the concepts of substance/eidos), whose correctness is called into question in light of the concepts of modern physics. The treatment of the phenomenon of knowledge undertaken here culminates in the presentation of the "implicit" relations between the mode of being of knowledge and the "problem of truth", present in the analysis of the phenomenon of predicative assertion (Being and Time, §33), on the basis of which the work seeks to indicate the place of Logic within Fundamental Ontology. Next, in the context of the thematization of the existential meaning of truth, the focus falls on the concept of demonstration (Ausweisung), in which the Heideggerian conceptions of knowledge and truth are fully interwoven, since it concerns the description of the truth of knowledge as a mode of assertoric discovery of entities as they are in themselves. The work seeks, first, to show the appropriation of Husserl and Aristotle, as well as the retention of the idea of truth as adequation, embedded in that concept. It proposes that the non-truth (falsity) of predicative assertions (not explicitly thematized by Heidegger in Being and Time) can be thought on the basis of the Husserlian concept of "synthesis of differentiation". It further seeks to clarify what the "in itself" of the entities that are given in demonstration and concealed in differentiation means, and to highlight the problematic aspect of the idea of a givenness of entities "in themselves" within scientific inquiry.
Finally, it presents the discussion of the Heideggerian conception of truth initiated by Tugendhat in 1964 and repeatedly taken up by various philosophers, including Gethmann, whose interpretation is assessed here.
The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap by extrapolating seven ethical factors that are essential for future AI4SG initiatives from the analysis of 27 case studies of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Let us start with some general definitions of the concept of complexity. We take a complex system to be one composed of a large number of parts, whose properties are not fully explained by an understanding of its component parts. Studies of complex systems recognized the importance of “wholeness”, defined as problems of organization (and of regulation), phenomena not resolvable into local events, dynamic interactions manifested in the difference in behaviour of parts when isolated or in a higher configuration, etc.; in short, systems of various orders (or levels) not understandable by investigation of their respective parts in isolation. In a complex system it is essential to distinguish between ‘global’ and ‘local’ properties. Theoretical physicists in the last two decades have discovered that the collective behaviour of a macro-system, i.e. a system composed of many objects, does not change qualitatively when the behaviour of single components is modified slightly. Conversely, it has also been found that the behaviour of single components does change when the overall behaviour of the system is modified. There are many universality classes which describe the collective behaviour of the system, and each class has its own characteristics; the universality classes do not change when we perturb the system. The most interesting and rewarding work consists in finding these universality classes and in spelling out their properties. This conception has been followed in studies done in the last twenty years on second-order phase transitions. The objective, which has been mostly achieved, was to classify all possible types of phase transitions into different universality classes and to compute the parameters that control the behaviour of the system near the transition (or critical or bifurcation) point as a function of the universality class. This point of view is not very different from the one expressed by Thom in the introduction to Structural Stability and Morphogenesis (1975).
It differs from Thom’s program because there is no a priori idea of the mathematical framework which should be used. Indeed, Thom considers only a restricted class of models (ordinary differential equations in low-dimensional spaces), while we have no prejudice regarding which models should be accepted. One of the most interesting and surprising results obtained by studying complex systems is the possibility of classifying the configurations of the system taxonomically. It is well known that a well-founded taxonomy is possible only if the objects we want to classify have some unique properties, i.e. species may be introduced in an objective way only if it is impossible to go continuously from one species to another; in a more mathematical language, we say that the objects must have the property of ultrametricity. More precisely, it was discovered that there are conditions under which a class of complex systems may only exist in configurations that have the ultrametricity property, and consequently they can be classified in a hierarchical way. Indeed, it has been found that this ultrametricity property is shared by the near-optimal solutions of many optimization problems of complex functions, i.e. corrugated landscapes in Kauffman’s language. These results are derived from the study of the spin glass model, but they have wider implications. It is possible that the kind of structures that arise in these cases is present in many other apparently unrelated problems. Before going on with our considerations, we have to keep in mind two main complementary ideas about complexity. (i) According to the prevalent and usual point of view, the essence of complex systems lies in the emergence of complex structures from the non-linear interaction of many simple elements that obey simple rules. Typically, these rules consist of 0–1 alternatives selected in response to the input received, as in many prototypes like cellular automata, Boolean networks, spin systems, etc.
Quite intricate patterns and structures can occur in such systems. However, it must also be said that these are toy systems; the systems occurring in reality rather consist of elements that are individually quite complex themselves. (ii) This brings in a new aspect that seems essential and indispensable to the emergence and functioning of complex systems, namely the coordination of individual agents or elements that are themselves complex at their own scale of operation. This coordination dramatically reduces the degrees of freedom of the participating agents. Even the constituents of molecules, i.e. the atoms, are rather complicated conglomerations of subatomic particles, perhaps ultimately excitations of patterns of superstrings. Genes, the elementary biochemical coding units, are very complex macromolecular strings, as are the metabolic units, the proteins. Neurons, the basic elements of cognitive networks, are themselves cells. In these and other complex systems, it is an important feature that the potential complexity of the behaviour of the individual agents gets dramatically simplified through the global interactions within the system. The individual degrees of freedom are drastically reduced, or, in a more formal terminology, the factual state space of the system is much smaller than the product of the state spaces of the individual elements. That is one key aspect. The other is that on this basis, that is, utilizing the coordination between the activities of its members, the system then becomes able to develop and express a coherent structure at a higher level, that is, an emergent behaviour (and emergent properties) that transcends what each element is individually capable of.
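The ultrametricity property invoked in this discussion has a precise formulation: a distance d is ultrametric when d(x, z) ≤ max(d(x, y), d(y, z)) for every triple of points, a strengthening of the triangle inequality that forces any set of points into a hierarchical, tree-like classification. A minimal sketch of the check (the two distance matrices are invented toy examples, not data from the spin-glass studies mentioned above):

```python
from itertools import permutations

def is_ultrametric(d):
    """True if d[x][z] <= max(d[x][y], d[y][z]) holds for every ordered triple."""
    n = len(d)
    return all(
        d[x][z] <= max(d[x][y], d[y][z])
        for x, y, z in permutations(range(n), 3)
    )

# Tree-like distances: points 0 and 1 form a close pair, point 2 split off earlier.
tree_like = [
    [0, 1, 3],
    [1, 0, 3],
    [3, 3, 0],
]

# Generic "geometric" distances: three points on a line at positions 0, 1, 2.
line = [
    [0, 1, 2],
    [1, 0, 1],
    [2, 1, 0],
]

print(is_ultrametric(tree_like))  # True: classifiable hierarchically
print(is_ultrametric(line))       # False: d(0,2)=2 > max(d(0,1), d(1,2)) = 1
```

The contrast illustrates the point in the text: the first matrix admits an objective hierarchical taxonomy (species do not blend continuously into one another), while the second, though a perfectly good metric, does not.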
The experience of Costa Rica highlights the potential for conflicts between the right to health and fair priority setting. For example, one study found that most favorable rulings by the Costa Rican constitutional court concerning claims for medications under the right to health were either for experimental treatments or for medicines that should have low priority based on health gain per unit of expenditure and severity of disease. In order to better align rulings with priority setting criteria, in 2014, the court initiated a reform in its assessment of claims for medicine. This paper assesses this reform’s impact on the fairness of resource allocation. It finds three effects. First, a reduction in successful claims for experimental medication, which is beneficial. Second, an increase in the success rate of medication lawsuits, which is detrimental because most claims are for extremely cost-ineffective medications. Third, a decline in the number of claims for medicine, which is beneficial because it forestalls such low-priority spending. This paper estimates that, taking all three effects into account, the reform has had a modest net positive impact on overall resource allocation. However, it also argues that there is a need for further reforms to lower the number of claims for low-priority medicines that are granted.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
The article deals with the idea of political peace as grounded in the mechanics of bodies in Thomas Hobbes's Leviathan. It seeks to understand how an egoist presupposition can arrive at an idea of consensus through language.
This work builds a theoretical bridge between Hegel and Marcuse, intending to demonstrate the possibility of concrete historical transformations offered by aesthetic experience. Taking the administered society as its paradigm, it seeks to establish, on the basis of Marcuse and Hegel, a path showing how the individual shapes reason in history, which presupposes a conscious liberating action. In a one-dimensional society, however, there is no opening to other dimensions; one-dimensional reason therefore becomes totalitarian, paralysing history. In this sense, the work seeks to demonstrate the way out through the aesthetic dimension, which affords a negation of the one-dimensionality of modern reason. The two-dimensional individual, i.e. one who is open to reality and to the aesthetic dimension, can shape reason in history and thus transform the world.
This article describes the class aesthetics of a social group known as pirangueiro. Drawing on observational research, grounded in Bourdieu's theories of habitus and field and based on the technique of flânerie, it reconstructs the idea of a subfield of resistance fashion, which stands in opposition to the dominant field of fashion. Through this resistance fashion, the young pirangueiro acquires an element of self-distinction and, at the same time, a criterion of prejudice.
This theoretical and bibliographical work reconstructs the concept of the spontaneous swerve (clinamen) of the atom in Epicurus, as Marx defended it in his doctoral thesis. Its aim is to discover whether the clinamen of the atom is present in Epicurus and, if it is, how it can be thought. The clinamen that appears in Epicurus's surviving texts does not carry the notion of spontaneous swerve as it appears in Lucretius and Marx, as Quartim de Moraes maintains. Nevertheless, on the basis of Epicurus's fundamental premises, the work concludes that, in systematizing his thought, it is necessary to introduce the notion of the spontaneous swerve so as not to reduce his philosophy to determinism and fatalism. The clinamen is not an absurdity in Epicurus's thought but a fundamental question running through his whole philosophy, one that could have been present in this author's lost works, just as Marx thought. Fundamentally, as a philosopher of happiness, Epicurus introduces the clinamen as a swerve away from the unhappy life.
The usual way to try to ground knowing according to contemporary theory of knowledge is: We know something if (1) it’s true, (2) we believe it, and (3) we believe it for the “right” reasons. Floridi proposes a better way. His grounding is based partly on probability theory, and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data-exchange between a data-seeker who needs to know which button to press on a food-dispenser and a data-knower who already knows the correct number. The success criterion, hence the grounding, is whether the seeker’s probability of lunch is indeed increasing (hence uncertainty is decreasing) as a result of the interaction. Floridi also suggests that his philosophy of information casts some light on the problem of consciousness. I’m not so sure.
Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as ‘digital psychiatry’. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.
This article analyses the ethical aspects of multistakeholder recommender systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multistakeholder recommendation. We then consider three research questions: who are the stakeholders in an RS? How are their interests taken into account when formulating a recommendation? And what is the scientific paradigm underlying RSs? Our main finding is that multistakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare economics. We consider and reply to some methodological objections to MRSs on this basis, concluding that the multistakeholder approach offers the resources to understand the normative social dimension of RSs.
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients’ health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively and map the key considerations for policymakers to each of the ethical concerns highlighted.
Vilém Flusser, philosopher of communication, and Luciano Floridi, philosopher of information, have engaged with common subjects, drawing surprisingly similar conclusions in distant eras and for distant audiences. Curiously, despite these common characteristics, their works have almost never been used together. This paper presents Flusser’s concepts of functionaries, informational environment, information recycling, and posthistory as mellontological hypotheses verified in Floridi’s recently proposed realistic neologisms of inforgs, infosphere, e-nvironmentalism, and hyperhistory. Following Plutarch’s literary model of “parallel lives,” the description of the common virtues of an earlier and a more recent persona, I juxtapose the works of the two authors. Through that, their “virtues” are mutually verified and proven diachronic. I also hold that, because of his philosophical approaches to information-oriented subjects, Flusser deserves a place in the history of the Philosophy of Information, and subsequently, that building an interdisciplinary bridge between the philosophies of Information and Communication would be fruitful for the further development of both fields.
This paper aims to reconstruct a possible answer to the classical Newman objection, which has been used countless times to argue against structural realism. The reconstruction starts from the new strand of structural realism authored by Luciano Floridi: informational structural realism. Newman’s objection states that all the propositions comprising mathematical structures are merely trivial truths and can be instantiated by multiple models. This paper examines whether informational structural realism can overcome this objection by analysing the previous attempts to answer it, attempts which either try to save the Ramseyfication of mathematical propositions or give up on it. Informational structural realism attempts a third way, the neo-Kantian way inspired by the work of Ernst Cassirer, while also changing the formalism from a mathematical to an informational one. This paper shows how this original combination of neo-Kantianism, informational formalism and the method of levels of abstraction provides the tools to overcome Newman’s objection.
This paper demonstrates the practical and philosophical strengths of adopting Luciano Floridi’s “general definition of information” (GDI) for use in the information sciences (IS). Many definitions of information have been proposed, but little work has been done to determine which definitions are most coherent or useful. Consequently, doubts have been cast on the necessity and possibility of finding a definition. In response to these doubts, the paper shows how items and events central to IS are adequately described by Floridi’s conception of information, and demonstrates how it helps clarify the muddy theoretical framework resulting from the many previous definitions. To this end, it analyzes definitions, popular in IS, that conceive of information as energy, processes, knowledge, and physical objects. The paper finds that each of these definitions produces problematic or counterintuitive implications that the GDI suitably accounts for. It discusses the role of truth in IS, notes why the GDI is preferable to its truth-requiring variant, and ends with comments about the import of such a theory for IS research and practice.
In The Philosophy of Information, Luciano Floridi presents a theory of “strongly semantic information”, based on the idea that “information encapsulates truth”. Starting with Popper, philosophers of science have developed different explications of the notion of verisimilitude or truthlikeness, construed as a combination of truth and information. Thus, the theory of strongly semantic information and the theory of verisimilitude are intimately tied. Yet, with few exceptions, this link has virtually passed unnoticed. In this paper, we briefly survey both theories and offer a critical comparison of strongly semantic information and related notions, like truth, verisimilitude, and partial truth.
The author takes up J.F. Lyotard's article "A Few Words to Sing", in which the philosopher analyses Sequenza III by Luciano Berio, written for and sung by Cathy Berberian. "A Few Words to Sing" exemplifies Lyotard's engagement with musical subjects "at the limit". In this particular case the author suggests that the analysis fits very well with the categories Lyotard postulated: the figure, and giving voice [to the victim] in opposition to the [purely] aesthetic (resisting the aesthetic). Lyotard's musical interests, perhaps not as prominent as his interest in the image (though the categories of figure and gesture prove to cross the boundaries between the arts easily), also point very clearly to the particular kind of [anti-]aesthetics Lyotard practised, in which giving voice to the victim becomes an essential, if not the fundamental, criterion of art.
This paper reviews the complex, overlapping ideas of two prominent Italian philosophers, Lorenzo Magnani and Luciano Floridi, with the aim of facilitating the nonviolent transformation of self and world, and with a focus on information technologies in mediating this process. In Floridi’s information ethics, problems of consistency arise between self-poiesis, anagnorisis, entropy, evil, and the narrative structure of the world. Solutions come from Magnani’s work in distributed morality, moral mediators, moral bubbles and moral disengagement. Finally, two examples of information technology, one ancient and one new, a Socratic narrative and an information processing model of moral cognition, are offered as mediators for the nonviolent transformation of self and world respectively, while avoiding the tragic requirements inherent in Floridi’s proposal.
Contents: 1. El caso del método científico, Alberto Oliva; 2. Un capítulo de la prehistoria de las ciencias humanas: la defensa por Vico de la tópica, Jorge Alberto Molina; 3. La figura de lo cognoscible y los mundos, Pablo Vélez León; 4. Lebenswelt de Husserl y las neurociencias, Vanessa Fontana; 5. El uso estético del concepto de mundos posibles, Jairo Dias Carvalho; 6. Realismo normativo no naturalista y mundos morales imposibles, Alcino Eduardo Bonella; 7. En la lógica de pragmatismo, Hércules de Araujo Feitosa, Luiz Henrique da Cruz Silvestrini; 8. La lógica, el tiempo y el lenguaje natural: un sistema formal para tiempos de portugués, Carlos Luciano Manholi. / Temas em filosofia contemporânea II / Jonas Rafael Becker Arenhart, Jaimir Conte, Cezar Augusto Mortari (orgs.) – Florianópolis: NEL/UFSC, 2016. 167 p.: tabs. – (Rumos da epistemologia; v. 14); ISBN 978-85-87253-27-9 (print); ISBN 978-85-87253-26-2 (e-book).
European Computing and Philosophy conference, 2–4 July, Barcelona. The Seventh ECAP (European Computing and Philosophy) conference was organized by Jordi Vallverdu at the Autonomous University of Barcelona. The conference started with the IACAP (International Association for Computing and Philosophy) presidential address by Luciano Floridi, focusing on mechanisms of knowledge production in informational networks. The first keynote, delivered by Klaus Mainzer, framed the rest of the conference by elucidating the fundamental role of the complexity of informational structures, which can be analyzed at different levels of organization, giving rise to a variety of possible approaches that converge in this cross-disciplinary and multi-disciplinary research field. Keynotes by Kevin Warwick on the re-embodiment of rats’ neurons into robots, Raymond Turner on syntax and semantics in programming languages, Roderic Guigo on biocomputing sciences and Francesco Subirada on the past and future of supercomputing presented different topics of philosophical as well as practical aspects of computing. Conference tracks included: Philosophy of Information (Patrick Allo), Philosophy of Computer Science (Raymond Turner), Computer and Information Ethics (Johnny Søraker and Alison Adam), Computational Approaches to the Mind (Ruth Hagengruber), IT and Cultural Diversity (Jutta Weber and Charles Ess), Crossroads (David Casacuberta), Robotics, AI & Ambient Intelligence (Thomas Roth-Berghofer), Biocomputing, Evolutionary and Complex Systems (Gordana Dodig Crnkovic and Søren Brier), E-learning, E-science and Computer-Supported Cooperative Work (Annamaria Carusi) and Technological Singularity and Acceleration Studies (Amnon Eden).
This essay introduces the philosophy of legal information (PLI), a response to the radical changes brought about in philosophy by the information revolution. It reviews in some detail the work of Luciano Floridi, an influential advocate for an informational turn in philosophy that he calls the philosophy of information (PI). Floridi proposes that philosophers investigate the conceptual nature of information as it currently exists across multiple disciplines. He shows how a focus on the informational nature of traditional philosophical questions can be transformative for philosophy and for human self-understanding. The philosophy of legal information proposed here views laws as a body of information that is stored, manipulated, and analyzed through multiple methods, including computational methodologies. PLI offers resources for evaluating the ethical and political implications of legal informatics (also known as "legal information systems"). Parts I and II describe Floridi's philosophy of information: Part I introduces the transformation in the concept of information that occurred in the twentieth century through the work of Alan Turing and Claude Shannon, and Part II describes Floridi's approaches to traditional questions in epistemology, ontology, and ethics. Part III applies PI to the analysis of legal positivism, suggesting that PLI is a viable project with the potential to transform the understanding of law in the information age.
Some Anglophone legal theorists look to analytic philosophy for core presuppositions. For example, the epistemological theories of Ludwig Wittgenstein and Willard Quine shape the theories of Dennis Patterson and Brian Leiter, respectively. These epistemologies are anti-foundational, since they reject the kind of certain grounding exemplified in Cartesian philosophy, and coherentist, in that they seek to legitimate truth-claims by reference to entire linguistic systems. While these theories are insightful, the current context of information and communication technologies (ICT) has created new informational concepts and issues. As a result, the analytic epistemologies are increasingly challenged by alternative perspectives. One such alternative is Structural Realism (SR), which is influential in the natural sciences, especially physics. "Informational Structural Realism" (ISR) is a variant of SR introduced by Luciano Floridi. Unlike the coherentist theories, ISR promotes examination of the connections among types of information and informational structures. This is an important shift for legal theory today, since the challenges that ICT presents have to do with pattern recognition across vast domains of diverse data. An informational jurisprudence is now required to understand the issues emerging from ICT.
In this paper I suggest that, over and above the need to explore and understand the technological newness of computer art works, there is a need to address the aesthetic significance of the changes and effects that such technological newness brings about, considering the whole environmental transaction pertaining to new media, including what they can or do offer, what users do or can do with such offerings, and how this whole package is integrated into our living spaces and activities. I argue that, given the primacy of computer-based interaction in new media, the notion of 'ornamentality' indicates the ground-floor aesthetics of new-media environments. I locate ornamentality not only in the logically constitutive principles of new media (hypertextuality and interactivity) but also in their multiform cultural embodiments (decoration as cultural interface). I utilize Kendall Walton's theory of ornamentality in order to construe a puzzle pertaining to the ornamental erosion of information in new-media environments. I argue that insofar as we consider new media to be conduits of 'real life', the excessive density of ornamental devices prevalent in certain new-media environments forces us to conduct our inquiries under conditions of neustic uncertainty, that is, uncertainty concerning the kind of relationship that we, the users, have to the propositional content mediated. I conclude that this puzzle calls our attention to a peculiar interrogatory complexity inherent in any game of knowledge-seeking conducted across the infosphere, one not restricted to the simplest form of data retrieval, especially in mixed-reality environments and when the knowledge sought is embodied mimetically.
The enormous increase in connections between people and the noteworthy enlargement of domains and methods in the sciences have extraordinarily augmented the cardinality of the set of meaningful human symbols. We know that complexity is always on the way to becoming complication, i.e., a non-tractable topic. For this reason scholars engage more and more in attempting to tame plurality and chaos. In this book distinguished scientists, philosophers, and historians of science reflect on the topic from a multidisciplinary point of view. Is it possible to dominate complexity through reductionism? Are there other conceptual instruments useful for taking account of complexity? What is complexity in biology, mathematics, physics, and the philosophy of mind? These are some of the questions addressed in this volume.
On the Nature of Philosophy: A Historical-Pragmatist Point of View. The aim of the paper is to examine the nature of philosophy from the historical-pragmatist point of view. The first part deals with meaning holism and the family resemblance among various exemplifications of philosophy, which are taken as presuppositions of our approach to defining philosophy as an activity. The second part criticizes those approaches that define philosophy as a quasi-science or a super-science. The third part offers a definition of philosophy as a two-way intellectual activity consisting in the outsourcing and insourcing of open questions and solutions.