A "concept" in the sense favoured by Wittgenstein is a paradigm for a transition between parts of a notational system. A concept-determining sentence such as "There is no reddish green" registers the absence of such a transition. This suggests a plausible account of what is perceived in an experiment that was first designed by Crane and Piantanida, who claim to have induced perceptions of reddish green. I shall propose a redescription of the relevant phenomena, invoking only ordinary colour concepts. This redescription is not ruled out by anything the experimenters say. It accounts for certain peculiarities in both their descriptions and their subjects', and suggests that instead of discovering forbidden colours the experimenters introduced a new use of "-ish". Still, there is a point in speaking of "reddish green" in their context, which can be motivated by invoking what Wittgenstein calls a "physiognomy".
This is an English translation of Waldenfels' German essay. Equality and inequality are basic elements of law, justice and politics. Equality integrates each of us into a common sphere by distributing rights, duties and chances among us. Yet equality turns into mere indifference insofar as we become over-integrated into social orders. As differences fade away, experience loses its relief and individuals lose their face. Our critical reflections start from the inevitable paradox of making equal what is not equal. In various ways they refer to Nietzsche's concept of order, to Marx's analysis of money, to Lévinas's ethics of the Other, and to novelists like Dostoevsky and Musil. Our critique turns against two extremes: on the one hand against any sort of normalism fixated on functioning orders, on the other hand against any sort of anomalism dreaming of mere events and permanent ruptures. Responsive phenomenology shows how we are confronted with extraordinary events, which deviate from the ordinary and transgress its borders without leaving the normality of our everyday world behind. The process of equalizing moves between the ordinary and the extraordinary. What makes the difference and resists mere indifference are creative responses, which are to be invented again and again.
This paper addresses the problem of statelessness, a problem which persists despite treaties and judicial decisions elaborating distinct rules to protect stateless persons. I explain why this has been so. Drawing on the work of Bernhard Waldenfels, I argue that international and domestic courts have presupposed a territorial sense of space, a territorial knowledge, and a founding date for the territorial structure of a state-centric international legal community. I then focus on the idea that an impartial third party can resolve a dispute involving stateless persons by deferring to a universal rule. I call this third party the 'rule of law third'. Such a rule, I argue, rests on a presupposed knowledge over stateless persons. The Third takes for granted the territorial boundary of a legal structure, a boundary which excludes the recognition of outsiders to it.
This is, to the best of my knowledge, the first published attempt at a rigorous logical formalization of a passage in Leibniz's Monadology. The method we followed was suggested by Johannes Czermak.
XENOLOGY AND XENOTOPOGRAPHY OF BERNHARD WALDENFELS. The paper strives to adapt Bernhard Waldenfels' xenology and so-called 'xenotopography' for philosophico-literary studies in fantastic world-building, with special concern for the 'portal-quest' model of fantasy and SF. Following Waldenfels' remarks on the nature of the post-Husserlian diastasis of our world [Heimwelt] and the otherworld [Fremdwelt], and acknowledging the consequences of allocating one's attitude towards otherness in the symbolic borderland ['sphere of intermonde'] in between, it is examined whether such a model can occur in fantastic literature and what the consequences of a xenotopographic reconsideration of its basic ontological premises may be. Additionally, the article offers an original xenotopographic model of world-building, applied to three carefully chosen case studies of fantastic worlds: Orson Scott Card's Ender's Game tetralogy, Neil Gaiman's Stardust, and George R. R. Martin's A Song of Ice and Fire. In the end, it is suggested that the xenotopography presented here deeply inspired a postmodern shift in the genres of fantasy and SF, resulting in more ethically conscious representations of otherness and in more concise and comprehensive world-building of the alien.
In observing processes of art perception, two contradictory phenomena come to the fore: a feeling of closeness and of distance at the same time. The phenomenologists Martin Heidegger and Bernhard Waldenfels describe this connection as a mode of being ("Seinsweise") of images that allows new views of the world, whereas the two cultural anthropologists Ernst Cassirer and Hartmut Böhme note that the connection between the two can be seen as the basis of the experience ("Erlebnisweise") of images and therefore serves communication. Here one can see that the knowledge-oriented notions the former promote do not exclude the semiotic notions the latter imply.
According to an often repeated definition, economics is the science of individual choices and their consequences. The emphasis on choice is often used – implicitly or explicitly – to mark a contrast between markets and the state: while the price mechanism in well-functioning markets preserves freedom of choice and still efficiently coordinates individual actions, the state has to rely to some degree on coercion to coordinate individual actions. Since coercion should not be used arbitrarily, coordination by the state needs to be legitimized by the consent of its citizens. The emphasis in economic theory on freedom of choice in the market sphere suggests that legitimization in the market sphere is “automatic” and that markets can thus avoid the typical legitimization problem of the state. In this paper, I shall question the alleged dichotomy between legitimization in the market and in the state. I shall argue that it is the result of a conflation of choice and consent in economics and show how an independent concept of consent makes the need for legitimization of market transactions visible. Footnote 1: For helpful comments and suggestions I am most grateful to Marc Fleurbaey, Alain Marciano, Herlinde Pauer-Studer, Thomas Pogge, Hans Bernhard Schmid, to seminar or conference participants in Aix-Marseille, Tutzing, Paris, and Amsterdam, and to two anonymous referees.
A look at the dynamical concept of space and space-generating processes to be found in Kant, J.F. Herbart and the mathematician Bernhard Riemann's philosophical writings.
This paper discusses two approaches to the relationship between subjectivity and intersubjectivity: the Husserlian one, a transcendental phenomenological investigation of the possibility of subjectivity and intersubjectivity, and the Waldenfelsian one, an ethical phenomenological investigation of day-to-day intersubjective interactions. Both authors claim to give an account of the conditions of possibility of intersubjective interaction. However, Husserl starts with the investigation of the transcendental structure of subjectivity, that is, the fundamental conditions required for the appearance of consciousness. By contrast, Waldenfels looks first at practical interaction and draws conclusions about the deeper structure of subjectivity from the traces he discovers to be characteristic of this interaction. Our interest lies in determining which of the two approaches should be given priority in the investigation of the constitution of intersubjectivity.
International human rights law is profoundly oxymoronic. Certain well-known international treaties claim a universal character for human rights, but international tribunals often interpret and enforce these either narrowly or, if widely, they rely upon sovereign states to enforce the rights against themselves. International lawyers and diplomats have usually tried to resolve the apparent contradiction by pressing for more general rules in the form of treaties, legal doctrines, and institutional procedures. Despite such efforts, aliens remain who are neither legal nor illegal and who thereby slip through a discourse that claims universality. I ask, why does international legal discourse claim a universality of human rights enforceable by impartial, politically neutral tribunals when it also recognises that a state may refuse to recognise some groups as “persons”? I turn to the works of Bernhard Waldenfels for an explanation. To that end, I briefly outline two examples of state-centered human rights treaties. I then reconstruct Waldenfels’ explanation as to how a territorial sense of space needs an alien exterior to the space. The territorial structure assumes time is frozen as of the date of the foundation of the structure. The body of the alien is taken as a biological body. The personality, motives, and actions of the alien are the consequence of the imagination of people inside the territorial boundary. The dominant international legal discourse reinforces and institutionalises such a territorial sense of space and frozen time because the territorial state is considered the primary legal subject of international law. I also retrieve, however, an experiential but concealed sense of space and time. To retrieve this sense of space and time requires that lawyers see the world through the twilight of legality heretofore ignored as pre-legal.
Few have given an extended treatment of the non-statistical sense of normality: a sense captured in sentences like “dogs have four legs,” or “hammers normally have metal heads,” or “it is normal for badgers to take dust baths.” The most direct extant treatment is Bernhard Nickel’s Between Logic and the World, where he claims that the normal or characteristic for a kind is what we can explain by appeal to the right sorts of explanations. Just which explanatory strategies can ground normalities, though, is difficult to determine without inviting circularity into the account. After raising this and other worries for Nickel’s account, I develop my own account according to which normal features are those which are explained by the kind of thing involved.
Endowing artificial systems with explanatory capacities regarding the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress achieved in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (in cases of both successful and unsuccessful output). This is because the classical problem of opacity in artificial neural networks (ANNs) explodes with the adoption of current Deep Learning techniques [LeCun, Bengio, Hinton, 2015]. In this paper we argue that the explanatory deficit of such techniques represents an important problem that limits their adoption in the cognitive modelling and computational cognitive science arena. In particular, we show how current attempts at providing explanations of deep nets' behaviour (see e.g. [Ritter et al. 2017]) are not satisfactory. As a possible way out of this problem, we present two different research strategies. The first strategy aims at dealing with the opacity problem by providing a more abstract interpretation of neural mechanisms and representations. This approach is adopted, for example, by the biologically inspired SPAUN architecture [Eliasmith et al., 2012] and by other proposals suggesting, for instance, the interpretation of neural networks in terms of the Conceptual Spaces framework [Gärdenfors 2000; Lieto, Chella and Frixione, 2017]. All such proposals presuppose that the neural level of representation can be considered somehow irrelevant to the problem of explanation [Lieto, Lebiere and Oltramari, 2017]. In our opinion, pursuing this research direction can still preserve the use of deep learning techniques in artificial cognitive models, provided that novel and additional results in terms of “transparency” are obtained.
The second strategy is somewhat at odds with the previous one and tries to address the explanatory issue without directly solving the “opacity” problem. In this case, the idea is that of resorting to pre-compiled, plausible explanatory models of the world used in combination with deep nets (see e.g. [Augello et al. 2017]). We argue that this research agenda, even if it does not directly fit the explanatory needs of Computational Cognitive Science, can still be useful for providing results in the area of applied AI, shedding light on models of the interaction between low-level and high-level tasks (e.g. between perceptual categorization and explanation) in artificial systems.
From the beginning of the 16th century to the end of the 18th century, no fewer than ten philosophers focused extensively on what are ostensibly Venn's analytical diagrams, as noted by modern historians of logic (Venn, Gardner, Baron, Coumet et al.). But what was the reason for early modern philosophers to use logic or analytical diagrams? Among modern historians of logic one can find two theses which are closely connected to each other: M. Gardner states that since the Middle Ages certain logic diagrams were used just in order to teach “dull-witted students”; logic diagrams were thus merely a means to an end. According to P. Bernhard, the appreciation of logic diagrams did not begin prior to the 1960s; logic diagrams therefore became an object of research in their own right only very late. The paper focuses on the question whether logic, resp. analytical, diagrams were merely means in the history of (early) modern logic or not. In contrast to Gardner, I will argue that logic diagrams were used not only as a tool for “dull-witted students”, but also as a tool of didactic reformers in early modern logic. Predating Bernhard's thesis, I will argue that by the 1820s logic diagrams had already become a value in themselves in Arthur Schopenhauer's lectures on logic, especially in proof theory.
The analyses carried out in this article are based on Bernhard Welte's work Meister Eckhart. Gedanken zu seinen Gedanken. The central subject of research is the idea of Abgeschiedenheit (“isolation”, or detachment). Following Welte's interpretation, it is considered by way of a phenomenological description along two paths: from practical experience, as the modus vivendi of a religious person, and from the theoretical side, as speculative thought. The theoretical considerations consist of an analysis of the concepts of truth and goodness, which Eckhart identifies with the idea of God. Welte shows that at the starting point of his considerations the medieval thinker's concepts are still sunk in the schemes of metaphysical thinking, but that in the later stages of his argumentation he overcomes metaphysical discourse. The purpose of the article, guided by Welte's interpretation, is to show ways of reaching the source forms of religious experience. At the same time, the text raises the problematic use of formulas of mystical union and notes the parallels between the method of phenomenological reduction and the idea of Abgeschiedenheit.
This paper argues that to account for group speech acts, we should adopt a representationalist account of mode/force. Individual and collective subjects do not only represent what they assert or order, for example; by asserting or ordering they also indicate their theoretical or practical positions towards what they assert or order. The ‘Frege point’ cannot establish the received dichotomy of force and propositional content. On the contrary, only the representationalist account allows a satisfactory response to it. It also allows us to give a more satisfactory analysis of the speech act of inviting a joint commitment and to answer two important questions Bernhard Schmid has raised about group speech acts, namely whether there are first-person plural forms of Moore’s paradox and of first-person authority.