This paper introduces Exclusion Logic - a simple modal logic without negation or disjunction. We show that this logic has an efficient decision procedure. We describe how Exclusion Logic can be used as a deontic logic. We compare this deontic logic with Standard Deontic Logic and with more syntactically restricted logics.
If there truly is a proof that no universal halt decider exists on the basis that certain tuples (H, Wm, W) are undecidable, then this very same proof (implemented as a Turing machine) could be used by H to reject some of its inputs. Whenever the hypothetical halt decider cannot derive a formal proof from its input strings and initial state to final states corresponding to the mathematical logic functions Halts(Wm, W) or Loops(Wm, W), halting undecidability has been decided.
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of 'strong' computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam's monograph Representation & Reality (Bradford Books, Cambridge, 1988), which, if correct, has important implications for Turing machine functionalism and the prospect of 'conscious' machines. In the paper, instead of seeking to develop Putnam's claim that "everything implements every finite-state automaton", I will try to establish the weaker result that "everything implements the specific machine Q on a particular input set (x)". Then, equating Q(x) to any putative AI program, I will show that conceding the 'strong AI' thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience and hence that disembodied minds lurk everywhere.
It is argued that Nozick's experience machine thought experiment does not pose a particular difficulty for mental state theories of well-being. While the example shows that we value many things beyond our mental states, this simply reflects the fact that we value more than our own well-being. Nor is a mental state theorist forced to make the dubious claim that we maintain these other values simply as a means to desirable mental states. Valuing more than our mental states is compatible with maintaining that the impact of such values upon our well-being lies in their impact upon our mental lives.
The initial argument presented herein is not significantly original: it is a simple reflection upon a notion of computation originally developed by Putnam and criticised by Chalmers and others. In what follows, instead of seeking to justify Putnam's conclusion that every open system implements every finite-state automaton, and hence that psychological states of the brain cannot be functional states of a computer, I will establish the weaker result that, over a finite time window, every open system implements the trace of FSA Q as it executes program (p) on input (x). If correct, the resulting bold philosophical claim is that phenomenal states, such as feelings and visual experiences, can never be understood or explained functionally.
Interaction between drivers and pedestrians is often facilitated by informal communicative cues, like hand gestures, facial expressions, and eye contact. In the near future, however, when semi- and fully autonomous vehicles are introduced into the traffic system, drivers will gradually assume the role of mere passengers, who are casually engaged in non-driving-related activities and, therefore, unavailable to participate in traffic interaction. In this novel traffic environment, advanced communication interfaces will need to be developed that inform pedestrians of the current state and future behavior of an autonomous vehicle, in order to maximize safety and efficiency for all road users. The aim of the present review is to provide a comprehensive account of empirical work in the field of external human–machine interfaces for autonomous vehicle-to-pedestrian communication. In the great majority of the studies covered, participants clearly benefited from the presence of a communication interface when interacting with an autonomous vehicle. Nevertheless, standardized interface evaluation procedures and optimal interface specifications are still lacking.
In pedestrian detection, occlusions are typically treated as an unstructured source of noise, and explicit occlusion models have lagged behind those for object appearance, which degrades detection performance. In this paper, a hierarchical co-occurrence model is proposed to enhance the semantic representation of a pedestrian. In our proposed hierarchical model, a latent SVM structure is employed to model the spatial co-occurrence relations among the parent–child pairs of nodes as hidden variables for handling partial occlusions. Moreover, the visibility statuses of the pedestrian can be generated by learning co-occurrence relations from positive training data with large numbers of synthetically occluded instances. Finally, based on the proposed hierarchical co-occurrence model, a pedestrian detection algorithm is implemented that incorporates visibility statuses by means of a Random Forest ensemble. Experimental results on three public datasets demonstrate that the proposed algorithm improves the log-average miss rate by 5% for pedestrians with partial occlusions compared with state-of-the-art methods.
The distinction between nature and artifice has been definitive for Western conceptions of the role of humans within their natural environment. But the human must already be separated from nature in order to distinguish between nature and artifice. This separation, in turn, facilitates a classification of knowledge in general, typically cast in terms of a hierarchy of sciences that ascends from the natural sciences to the social (or human) sciences. However, this hierarchy considers nature as a substantial foundation upon which artifice operates and to which it responds. Here I examine three inter-related concepts that, by focusing on events rather than substances, operate beyond the nature–artifice distinction and thereby resist the hierarchical classification of the sciences: Foucault's concept of technology, the concept of milieu as it crosses over historically from physics to biology and anthropology, and Deleuze and Guattari's reconfiguration of the concept of milieu in terms of their concept of machine.
This paper shows how the classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or "toy" model of quantum mechanics over sets (QM/sets). There are two parts. The notion of an "event" is reinterpreted from being an epistemological state of indefiniteness to being an objective state of indefiniteness. And the mathematical framework of finite probability theory is recast as the quantum probability calculus for QM/sets. The point is not to clarify finite probability theory but to elucidate quantum mechanics itself by seeing some of its quantum features in a classical setting.
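The set-level probability calculus the abstract describes can be illustrated concretely: with equiprobable outcomes, an "objectively indefinite" state is just a subset S of the outcome universe, and the probability of an event E in state S is |E ∩ S|/|S|. A minimal sketch (the universe, state, and event below are illustrative, not taken from the paper):

```python
from fractions import Fraction

def prob(event, state):
    """Pr(E|S) = |E ∩ S| / |S| under equiprobable outcomes."""
    return Fraction(len(event & state), len(state))

U = {1, 2, 3, 4, 5, 6}   # universe of equiprobable outcomes (a die)
S = {1, 2, 3, 4}         # an indefinite "state": a subset, not a single point
E = {2, 4, 6}            # the event "even"

print(prob(E, S))        # 1/2
```

The point of the toy calculus is that "collapse" is just conditioning: sharpening the state S by an observed event E replaces S with E ∩ S.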
The article deals with a pivotal conceptual distinction employed in philosophical discussions about global justice. Cosmopolitans claim that arguing from the perspective of moral cosmopolitanism does not necessarily entail defending a global coercive political authority, or a "world-state", and suggest that ambitious political and economic (social) goals implied in moral cosmopolitanism may be achieved via some kind of non-hierarchical, dispersed and/or decentralised institutional arrangements. I argue that insofar as moral cosmopolitans retain "strong" moral claims, this is an untenable position, and that the goals of cosmopolitan justice, as explicated by its major proponents, require nothing less than a global state-like entity with coercive powers. My background ambition is to supplement some existing works questioning the notion of "governance without government" with an argument that goes right to the conceptual heart of cosmopolitan thought. To embed my central theoretical argument in real-world developments, I draw on some recent scholarship regarding the nature of international organizations, the European Union, and transnational democratization. Finally, I suggest that only after curbing moral aspirations in the first place can a more self-consciously moderate position be constructed, one that will carry practical and feasible implications for institutional design.
In this paper, a position control system for a metal cutting machine has been designed and simulated using the Matlab/Simulink toolbox. Analysis of the open-loop response shows that the system needs performance improvement. Static output feedback and full state feedback H2 controllers have been used to improve the performance of the system. The position of the metal cutting machine under the static output feedback and full state feedback H2 controllers has been compared in tracking a set-point position using step and sine-wave input signals, and promising results have been obtained.
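A full state feedback design of the kind the abstract compares can be sketched outside Simulink as well. The double-integrator plant and the gains below are illustrative stand-ins, not the paper's machine model or its H2 synthesis:

```python
import numpy as np

# Illustrative double-integrator plant: x1 = position, x2 = velocity.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[4.0, 2.8]])   # full state feedback gain, placed by hand

def step_response(t_end=5.0, dt=1e-3, r=1.0):
    """Euler-integrate the closed loop x' = (A - B K) x + B k_r r."""
    k_r = 4.0                # feedforward gain chosen so the DC gain is 1
    x = np.zeros((2, 1))
    steps = int(round(t_end / dt))
    for _ in range(steps):
        u = -K @ x + k_r * r
        x = x + dt * (A @ x + B @ u)
    return float(x[0, 0])    # final position

print(round(step_response(), 2))  # settles near the unit set point
```

With these gains the closed-loop poles sit at damping ratio 0.7 and natural frequency 2 rad/s, so the step response settles well within the 5 s window.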
The main claim of this chapter is that planetary defense against asteroids cannot be implemented under a decentralized model of democratic global governance, as espoused elsewhere in this book. All relevant indices point to the necessity of establishing a centralized global political authority with legitimate coercive powers. It remains to be seen, however, whether such a political system can be in any recognizable sense democratic. It seems unconvincing that planetary-wide physical-threat, all-comprehensive macrosecuritization, coupled with deep transformations of international law, global centralization of core decision-making powers, de-stigmatization of nuclear weapons and the like can proceed, succeed, and be implemented in a non-hierarchical international system where planetary defense constitutes only one regime among many, and where states basically remain the decisive actors. Although rationally and scientifically robust, the project suffers from oversimplification, as well as naivety with respect to how both international and domestic politics work. Among other topics, this chapter discusses problems associated with the rule of law and constituent powers, political representation and sources of legitimacy, conditions of multilevel collective action, and limits of theoretical idealization. The general message is that the planetary defense community needs to be more aware of the social and political context of its own enterprise.
This paper connects the hard problem of consciousness to the interpretation of quantum mechanics. It shows that constitutive Russellian pan(proto)psychism (CRP) is compatible with Everett's relative-state (RS) interpretation. Despite targeting different problems, CRP and RS are related, for they both establish symmetry between micro- and macrosystems, and both call for a deflationary account of the Subject. The paper starts from formal arguments that demonstrate the incompatibility of CRP with alternative interpretations of quantum mechanics, followed by showing that RS entails Russellian pan(proto)psychism. Therefore, CRP and RS are mutually supportive. It then provides a unified ontological picture by combining CRP and RS. The challenge faced by CRP, the combination problem, can be resolved by adopting an RS version of quantum mechanics. Technically, this is achieved by a co-consciousness relation capable of explaining the difference between first-person and third-person perspectives. The hierarchical structure of the relation removes any concern about a structural mismatch between the physical and the phenomenal.
The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a defense of strong AI based on machine-state functionalism at the quantum level, or quantum-state functionalism. I consider arguments against strong AI, then summarize some counterarguments I find compelling, including Torkel Franzén's work challenging Roger Penrose's claim, based on Gödel incompleteness, that mathematicians have nonalgorithmic levels of "certainty." Some consequences of strong AI are then considered. A resolution is offered of some problems, including John Searle's Chinese Room problem and the problem of consciousness propagation under isomorphism.
I have been experimenting with meditation for a long time, but just recently I seem to have come across another being in there. It may just be me looking at me, but whatever it is, it is showing me some really interesting arrangements of colored balls. At first, I thought it was just random colors and shapes, but it became very ordered. It was like this being (me?) was trying to talk to me but couldn't, so was showing me some math in pictures. I have gone in many times now and am trying to write things down here. Has anyone else ever seen this? My best guess so far is he is showing me a machine that might be useful to rapidly factor integers. I detail this in the analysis below. There is a lot of extra structure that I am currently disregarding, but is probably also something interesting. I will report more when I know more.
The concept of modes of existence of semiotic entities underlies (post)Greimasian semiotics, yet it seems to have received little attention. Modes of existence can be used in different senses. For Greimas, from the perspective of narrative semiotics, when Michelangelo first receives a block of marble and decides to sculpt the David, his intention is in a virtual mode; as Michelangelo progresses he ends up bringing the David into existence, and his intention comes to the realized mode. In Fontanille's tensive semiotics, however, modes of existence can have to do with how one can narrow or broaden the scope of our apprehension of the David as our eyes look at it in order to produce a meaningful experience. In this work, the perspectives of narrative and tensive semiotics are contrasted both theoretically and practically, applying each to a number of examples. In order to identify all possible modes of existence and all the possibilities of transitioning from one to the other in the examples presented, we resort to the method of finite-state automata from computer science. In the end, we propose a robust narrative account of modes of existence that relies on narrative semiotics for its definition, but into which intent and apprehension from tensive semiotics can be integrated. This work points to the need to establish a syntax of modes of existence, since both Greimas and Fontanille construe them as necessary to account for the production of signification.
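The finite-state-automaton method the abstract invokes can be pictured with a toy transition table. The mode names below follow the Greimasian vocabulary, but the events and transitions are illustrative, not the ones derived in the paper:

```python
# Toy finite-state automaton over modes of existence.
# The transition table is illustrative, not the paper's own analysis.
TRANSITIONS = {
    ("virtualized", "actualize"): "actualized",
    ("actualized", "realize"): "realized",
    ("realized", "potentialize"): "potentialized",
    ("potentialized", "virtualize"): "virtualized",
}

def run(start, events):
    """Follow a sequence of semiotic events from a starting mode."""
    mode = start
    for e in events:
        mode = TRANSITIONS[(mode, e)]
    return mode

# Michelangelo's intention: virtual at first, realized once the David exists.
print(run("virtualized", ["actualize", "realize"]))  # realized
```

Enumerating which (mode, event) pairs appear as keys is exactly the "syntax of modes of existence" the automaton method makes explicit: transitions absent from the table are ill-formed.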
With the advent of computers in the experimental labs, dynamic systems have become a new tool for research on problem solving and decision making. A short review of this research is given and the main features of these systems (connectivity and dynamics) are illustrated. To allow systematic approaches to the influential variables in this area, two formal frameworks (linear structural equations and finite-state automata) are presented. Besides the formal background, the article sets out how the task demands of system identification and system control can be realised in these environments, and how psychometrically acceptable dependent variables can be derived.
Both mindreading and stereotyping are forms of social cognition that play a pervasive role in our everyday lives, yet too little attention has been paid to the question of how these two processes are related. This paper offers a theory of the influence of stereotyping on mental-state attribution that draws on hierarchical predictive coding accounts of action prediction. It is argued that the key to understanding the relation between stereotyping and mindreading lies in the fact that stereotypes centrally involve character-trait attributions, which play a systematic role in the action–prediction hierarchy. On this view, when we apply a stereotype to an individual, we rapidly attribute to her a cluster of generic character traits on the basis of her perceived social group membership. These traits are then used to make inferences about that individual's likely beliefs and desires, which in turn inform inferences about her behavior.
The argument presented in this paper is not a direct attack or defence of the Chinese Room Argument (CRA), but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics. However, in contrast to the CRA's critique of the link between syntax and semantics, this paper will explore the associated link between syntax and physics. The main argument presented here is not significantly original – it is a simple reflection upon that originally given by Hilary Putnam (Putnam 1988) and criticised by David Chalmers and others: instead of seeking to justify Putnam's claim that "every open system implements every Finite-State Automaton (FSA)", and hence that psychological states of the brain cannot be functional states of a computer, I will seek to establish the weaker result that, over a finite time window, every open system implements the trace of a particular FSA Q, as it executes program (p) on input (x). That this result leads to panpsychism is clear as, equating Q(p, x) to a specific Strong AI program that is claimed to instantiate phenomenal states as it executes, and following Putnam's procedure, identical computational (and ex hypothesi phenomenal) states (ubiquitous little 'pixies') can be found in every open physical system.
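The weaker result the abstract aims at turns on the fact that an "implementation" can be read off from a mere relabelling of time-indexed physical states. A toy sketch of the Putnam-style construction (all state names are made up):

```python
# Putnam-style construction (an illustrative toy, not the paper's formalism):
# over a finite time window, any open system passing through distinct physical
# states can be mapped onto the trace of an FSA Q executing program p on input x.
fsa_trace = ["q0", "q1", "q1", "q2"]               # states Q visits on input x

physical_states = ["p_t0", "p_t1", "p_t2", "p_t3"]  # e.g. a rock at t0..t3

# The "implementation" is just a relabelling of the time-indexed states.
mapping = dict(zip(physical_states, fsa_trace))

implemented_trace = [mapping[p] for p in physical_states]
print(implemented_trace == fsa_trace)  # True: the rock "implements" the trace
```

Because such a mapping exists for any sequence of distinct physical states, the construction generalizes to every open system over the window, which is precisely what invites the 'pixies' conclusion.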
I use modal logic and transfinite set-theory to define metaphysical foundations for a general theory of computation. A possible universe is a certain kind of situation; a situation is a set of facts. An algorithm is a certain kind of inductively defined property. A machine is a series of situations that instantiates an algorithm in a certain way. There are finite as well as transfinite algorithms and machines of any degree of complexity (e.g., Turing and super-Turing machines and more). There are physically and metaphysically possible machines. There is an iterative hierarchy of logically possible machines in the iterative hierarchy of sets. Some algorithms are such that machines that instantiate them are minds. So there is an iterative hierarchy of finitely and transfinitely complex minds.
What were the causes of the Copernican Revolution? How did modern science (created by a handful of ambitious intellectuals) manage to force out the old one, created by Aristotle and Ptolemy, rooted in millennial traditions and strongly supported by the Church? What deep internal causes and strong social movements took part in the genesis, development and victory of modern science? The author arrives at a new picture of the Copernican Revolution on the basis of an elaborated model of scientific revolutions that takes into account some recent advances in the philosophy, sociology and history of science. The model was initially invented to describe Einstein's revolution at the beginning of the twentieth century. The model considers the growth of knowledge as the interaction, interpenetration and unification of research programmes springing out of different cultural traditions. Thus, the Copernican Revolution appears as a result of the revelation and (partial) resolution of a dualism: the gap between Ptolemy's mathematical astronomy and Aristotelian qualitative physics. The works of Copernicus, Galileo, Kepler and Newton were all stages in the descent of mathematics from the heavens to the earth and the reciprocal extrapolation of terrestrial physics to the heavens. The model makes it possible to reassess the role of some social factors crucial for the scientific revolution. It is argued that modern science was initially a result of the development of the Christian Weltanschauung. Later, the main support came from the absolute monarchies. In the long run the creators of modern science became the 'apparatchiks' of a 'regime of truth' built into the state machine. Natural science became a part of the ideological state apparatus, providing not only scientific education but the internalization of values crucial for the functioning of the state.
Explorations in Systems Phenomenology in Relation to Ontology, Hermeneutics and the Meta-dialectics of Design

SYNOPSIS A Phenomenological Analysis of Emergent Design is performed based on the foundations of General Schemas Theory. The concept of Sign Engineering is explored in terms of Hermeneutics, Dialectics, and Ontology in order to define Emergent Systems and Metasystems Engineering based on the concept of Meta-dialectics.

ABSTRACT Phenomenology, Ontology, Hermeneutics, and Dialectics will dominate our inquiry into the nature of the Emergent Design of the System and its inverse dual, the Meta-system. This is a speculative dissertation that attempts to produce a philosophical, mathematical, and theoretical view of the nature of Systems Engineering Design. Emergent System Design, i.e., the design of yet unheard of and/or hitherto non-existent Systems and Metasystems, is the focus. This study is a frontal assault on the hard problem of explaining how Engineering produces new things, rather than a repetition or reordering of concepts that already exist. In this work the philosophies of E. Husserl, A. Gurwitsch, M. Heidegger, J. Derrida, G. Deleuze, A. Badiou, G. Hegel, I. Kant and other Continental Philosophers are brought to bear on different aspects of how new technological systems come into existence through the midwifery of Systems Engineering. Sign Engineering is singled out as the most important aspect of Systems Engineering. We will build on the work of Pieter Wisse and extend his theory of Sign Engineering to define Meta-dialectics in the form of Quadralectics and then Pentalectics. Along the way the various ontological levels of Being are explored in conjunction with the discovery that the Quadralectic is related to the possibility of design primarily at the Third Meta-level of Being, called Hyper Being. Design Process is dependent upon the emergent possibilities that appear in Hyper Being.
Hyper Being, termed by Heidegger as Being (Being crossed-out) and termed by Derrida as Differance, also appears as the widest space within the Design Field at the third meta-level of Being and therefore provides the most leverage that is needed to produce emergent effects. Hyper Being is where possibilities appear within our worldview. Possibility is necessary for emergent events to occur. Hyper Being possibilities are extended by Wild Being propensities to allow the embodiment of new things. We discuss how this philosophical background relates to meta-methods such as the Gurevich Abstract State Machine and the Wisse Metapattern methods, as well as real-time architectural design methods as described in the Integral Software Engineering Methodology. One aim of this research is to find the foundation for extending the ISEM methodology to become a general purpose Systems Design Methodology. Our purpose is also to bring these philosophical considerations into the practical realm by examining P. Bourdieu's ideas on the relationship between theoretical and practical reason and M. de Certeau's ideas on practice. The relationship between design and implementation is seen in terms of the Set/Mass conceptual opposition. General Schemas Theory is used as a way of critiquing the dependence of Set based mathematics as a basis for Design. The dissertation delineates a new foundation for Systems Engineering as Emergent Engineering based on General Schemas Theory, and provides an advanced theory of Design based on the understanding of the meta-levels of Being, particularly focusing upon the relationship between Hyper Being and Wild Being in the context of Pure and Process Being.
When does it make sense to act randomly? A persuasive argument from Bayesian decision theory legitimizes randomization essentially only in tie-breaking situations. Rational behaviour in humans, non-human animals, and artificial agents, however, often seems indeterminate, even random. Moreover, rationales for randomized acts have been offered in a number of disciplines, including game theory, experimental design, and machine learning. A common way of accommodating some of these observations is by appeal to a decision-maker's bounded computational resources. Making this suggestion both precise and compelling is surprisingly difficult. Toward this end, I propose two fundamental rationales for randomization, drawing upon diverse ideas and results from the wider theory of computation. The first unifies common intuitions in favour of randomization from the aforementioned disciplines. The second introduces a deep connection between randomization and memory: access to a randomizing device is provably helpful for an agent burdened with a finite memory. Aside from fit with ordinary intuitions about rational action, the two rationales also make sense of empirical observations in the biological world. Indeed, random behaviour emerges more or less where it should, according to the proposal.
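The second rationale (randomness provably helping a finite-memory agent) has a classic concrete instance in approximate counting: Morris's randomized counter tracks a count up to n using only about log log n bits of state, which no deterministic counter with so little memory can do. A sketch, illustrative of the memory-for-randomness trade rather than the paper's own construction:

```python
import random

def morris_count(n_events, seed=0):
    """Morris approximate counter: stores only the small integer x,
    incrementing it with probability 2**-x per event, so that
    2**x - 1 is an unbiased estimate of the event count."""
    random.seed(seed)
    x = 0
    for _ in range(n_events):
        if random.random() < 2.0 ** -x:
            x += 1
    return 2 ** x - 1

est = morris_count(10_000)
print(est)  # a rough, randomized estimate of 10_000
```

The agent never stores the count itself, only its rough logarithm; the coin flips do the work that the missing memory cannot.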
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call "transformational abstraction". Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to "nuisance variation" in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
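The core of "transformational abstraction" (alternating linear filtering with non-linear rectification and pooling to buy tolerance to nuisance variation) can be seen in a few lines of NumPy. The edge-detector kernel and signals below are illustrative, not a trained network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1d_valid(signal, kernel):
    """Plain valid-mode cross-correlation: one linear 'filter' stage."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def layer(signal, kernel, pool=3):
    """Linear filter -> non-linearity -> max pool: one toy round of
    transformational abstraction."""
    a = relu(conv1d_valid(signal, kernel))
    return np.array([a[i:i + pool].max()
                     for i in range(0, len(a) - pool + 1, pool)])

edge = np.array([1.0, -1.0])                  # a hand-set edge detector
x1 = np.array([0, 0, 1, 1, 0, 0, 0, 0.0])     # an "exemplar"
x2 = np.roll(x1, 1)                           # the same exemplar, shifted

print(np.allclose(layer(x1, edge), layer(x2, edge)))  # True
```

The two inputs differ as raw signals, but after one filter–rectify–pool round their representations coincide: the pooling stage has absorbed the positional nuisance variation, which is the mechanism the paper generalizes across many stacked layers.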
According to the Market Failures Approach to business ethics, beyond-compliance duties can be derived by employing the same rationale and arguments that justify state regulation of economic conduct. Very roughly, the idea is that managers have a duty to behave as if they were complying with an ideal regulatory regime ensuring Pareto-optimal market outcomes. Proponents of the approach argue that managers have a professional duty not to undermine the institutional setting that defines their role, namely the competitive market. This answer is inadequate, however, for it is the hierarchical firm, rather than the competitive market, that defines the role of corporate managers and shapes their professional obligations. Thus, if the obligations that the market failures approach generates are to apply to managers, they must do so in an indirect way. I suggest that the obligations the market failures approach generates directly apply to shareholders. Managers, in turn, inherit these obligations as part of their duties as loyal agents.
Hobbes's interest in the power of the Image was programmatic, as suggested by his shifts from optics, to sensationalist psychology, to the strategic use of classical history, exemplified by Thucydides and Homer. It put a great resource at the disposal of the state-propaganda machine, with application to the question of state-management and crowd control.
The quantum computer is considered as a generalization of the Turing machine: the bits are substituted by qubits. In turn, a "qubit" is the generalization of "bit" referring to infinite sets or series. It extends the concept of calculation from finite processes and algorithms to infinite ones, impossible for any Turing machine (such as our computers). However, the concept of the quantum computer meets all the paradoxes of infinity, such as Gödel's incompleteness theorems (1931), etc. A philosophical reflection on how a quantum computer might implement the idea of "infinite calculation" is the main subject.
Pattern recognition is represented as the limit to which an infinite Turing process converges. A Turing machine in which the bits are substituted with qubits is introduced. That quantum Turing machine can recognize two complementary patterns in any data. That ability of universal pattern recognition is interpreted as an intellect characteristic of any quantum computer. The property is valid only within a quantum computer: to utilize it, the observer should be situated inside it. Being outside it, the observer would obtain a quite different result depending on the degree of entanglement of the quantum computer and observer. All the extraordinary properties of a quantum computer are due to its involving a converging infinite computational process, necessarily containing both a continuous advancing calculation and a leap to the limit. Three types of quantum computation can be distinguished according to whether the series is a finite one, an infinite rational, or an irrational number.
Natural argument is represented as the limit to which an infinite Turing process converges. A Turing machine in which the bits are substituted with qubits is introduced. That quantum Turing machine can recognize two complementary natural arguments in any data. That ability of natural-argument recognition is interpreted as an intellect characteristic of any quantum computer. The property is valid only within a quantum computer: to utilize it, the observer should be situated inside it. Being outside it, the observer would obtain a quite different result depending on the degree of entanglement of the quantum computer and observer. All the extraordinary properties of a quantum computer are due to its involving a converging infinite computational process, necessarily containing both a continuous advancing calculation and a leap to the limit. Three types of quantum computation can be distinguished according to whether the series is a finite one, an infinite rational, or an irrational number.
In Gebharter (2014) I suggested a framework for modeling the hierarchical organization of mechanisms. In this short addendum I want to highlight some connections of my approach to the statistics and machine learning literature, as well as some of its limitations not mentioned in the paper.
The purpose of this work is to form a net structure for managing the system of administrative services provision, based on the implementation of subject-to-subject interactions between the state sector and civil society. Methodology. The methodological basis of the investigation is the abstract-logical analysis of theoretical and methodological backgrounds for the management of relations and interactions. For the theoretical generalization and formation of the net structure, we draw on the recommendations of Ukrainian scholars regarding the need to implement subject-to-subject relations in the system of administrative services provision. Results. The investigation confirmed that the hierarchical structure of the state governance system does not allow equal interaction between a subject of provision and a subject of appeal, since these relations involve one-way communication and the feedback channel has a merely formal character. Moreover, the state sector does not treat civil society as a source of methods and ways to develop the system of state governance, in particular the management system of administrative services provision. Practical meaning. The net structure of management will allow implementing subject-to-subject relations in the system, under which the actions of the subject of provision, that is, the state sector, will be directed to the realization of the rights and interests of the subjects of appeal. The latter, in turn, apart from performing all their legislative responsibilities, can carry out activities directed to the development of management activity in the system of administrative services provision and in the whole system of state governance as an integral system of management. Meaning/Distinction.
The proposed model of the net structure will allow involving citizens in the processes of state governance, increasing the impact of the civil sector in the making of state and management decisions and, as a result, confirming subject-to-subject positions in these relations.
The geosciences include a wide spectrum of disciplines ranging from paleontology to climate science, and involve studies of a vast range of spatial and temporal scales, from the deep-time history of microbial life to the future of a system no less immense and complex than the entire Earth. Modeling is thus a central and indispensable tool across the geosciences. Here, we review both the history and current state of model-based inquiry in the geosciences. Research in these fields makes use of a wide variety of models, such as conceptual, physical, and numerical models, and more specifically cellular automata, artificial neural networks, agent-based models, coupled models, and hierarchical models. We note the increasing demands to incorporate biological and social systems into geoscience modeling, challenging the traditional boundaries of these fields. Understanding and articulating the many different sources of scientific uncertainty – and finding tools and methods to address them – has been at the forefront of most research in geoscience modeling. We discuss not only structural model uncertainties, parameter uncertainties, and solution uncertainties, but also the diverse sources of uncertainty arising from the complex nature of geoscience systems themselves. Without an examination of the geosciences, our philosophies of science and our understanding of the nature of model-based science are incomplete.
Soonya Vaada, the prime and significant contribution of Buddhism to Indian philosophical thought, will be scientifically developed and presented. How this scientific understanding helped to sow the seeds of rationalism and its development in Buddhist thought and life will be delineated. Its role in the shaping of Buddhist and other Indian philosophical systems will be discussed. Its relevance and use in the field of cognitive science and in the development of theories of human consciousness and mind will be put forward. The idea of absence as zero in the number system, as vacuum in physics and other natural sciences, and as the state of absence of cognition in mind-machine modeling will be presented. The significance of Soonya Vaada in philosophy, rational social life, the natural sciences and technology, mathematics, and cognitive science will be comprehensively discussed, and a model for human cognition and communication will be arrived at.
A model of human consciousness is presented here in terms of physics and electronics using Upanishadic awareness. The form of Atman proposed in the Upanishads in relation to human consciousness as an oscillating psychic energy-presence, and its virtual or unreal energy reflection maya, responsible for mental energy and mental time-space, are discussed. An analogy with Fresnel’s bi-prism experimental set-up in physical optics is used to state, describe, and understand the form, structure, and function of Atman and maya, the ingredients of human consciousness. A description of the phases of mind in terms of conscious states and the transformation of mental energy is given. Four states of consciousness and four modes of language communication and understanding processes are also given. Implications of the above scientific awareness of Upanishadic wisdom for the modern scientific fields of physiological psychology, cognitive science, mind-machine modeling, and natural language comprehension are suggested.
In this paper Lucas suggests that many of his critics have read carefully neither his exposition nor Penrose’s, so they seek to refute arguments the two never proposed. He therefore offers a brief history of the Gödelian argument as put forward by Gödel, Penrose, and Lucas himself: Gödel argued that either mathematics is incompletable – that is, its axioms can never be comprised in a finite rule, and so the human mind surpasses the power of any finite machine – or there exist absolutely unsolvable diophantine problems, and he suggested that the second disjunct is untenable; on the other side, Penrose proposed an argument similar to Lucas’s but making use of Turing’s theorem. Finally, Lucas expounds his argument again and considers some of the most important objections to it.
The default view in the epistemology of forgetting is that human memory would be epistemically better if we were not so susceptible to forgetting—that forgetting is in general a cognitive vice. In this paper, I argue for the opposed view: normal human forgetting—the pattern of forgetting characteristic of cognitively normal adult human beings—approximates a virtue located at the mean between the opposed cognitive vices of forgetting too much and remembering too much. I argue, first, that, for any finite cognizer, a certain pattern of forgetting is necessary if her memory is to perform its function well. I argue, second, that, by eliminating clutter from her memory store, this pattern of forgetting improves the overall shape of the subject’s total doxastic state. I conclude by reviewing work in psychology which suggests that normal human forgetting approximates this virtuous pattern of forgetting.
This paper investigates the view that digital hypercomputing is a good reason for rejection or re-interpretation of the Church-Turing thesis. After suggesting that such re-interpretation is historically problematic and often involves attacking a straw man (the ‘maximality thesis’), it discusses proposals for digital hypercomputing with Zeno-machines, i.e. computing machines that compute an infinite number of computing steps in finite time, thus performing supertasks. It argues that effective computing with Zeno-machines falls into a dilemma: either they are specified such that they do not have output states, or they are specified such that they do have output states but involve contradiction. Repairs through non-effective methods or special rules for semi-decidable problems are sought, but not found. The paper concludes that hypercomputing supertasks are impossible in the actual world and thus no reason for rejecting the Church-Turing thesis in its traditional interpretation.
When we understand that every potential halt decider must derive a formal mathematical proof from its inputs to its final states, previously undiscovered semantic details emerge. Whenever the potential halt decider cannot derive a formal proof from its input strings to its final states of Halts or Loops, undecidability has been decided. The formal proof involves tracing the sequence of state transitions of the input TMD as syntactic logical consequence inference steps in the formal language of Turing Machine Descriptions.
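The classical diagonal construction that this abstract engages with can be sketched in a few lines. The following is an illustrative Python sketch of the standard undecidability argument, not the paper's own construction; `naive_halts` and `make_confound` are hypothetical names introduced here for illustration.

```python
# Sketch of the diagonal argument: given any claimed halt decider
# halts(prog, arg) -> bool, build a program that inverts the
# decider's verdict about itself.

def make_confound(halts):
    """Return the program that defeats the claimed decider `halts`."""
    def confound(prog):
        if halts(prog, prog):   # decider says prog(prog) halts...
            while True:         # ...so loop forever instead
                pass
        return "halted"         # otherwise halt immediately
    return confound

def naive_halts(prog, arg):
    """A deliberately wrong stand-in decider that always says 'halts'."""
    return True

confound = make_confound(naive_halts)
# naive_halts claims confound(confound) halts, but by construction it
# would then loop forever -- the contradiction at the heart of the
# undecidability proof. (We do not actually call confound(confound).)
```

Any concrete candidate decider substituted for `naive_halts` is wrong on its own `confound`, which is the tuple-based proof the abstract refers to.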
State features are affected by the connection with digital coins. Social systems create their own limits and remain alive according to their internal logic, which does not derive from the system's environment. Social systems are thus operationally and autonomously closed: they interact with their environment, and while there is a general increase in entropy, individual systems work to maintain and preserve their internal order. Autopoietic systems (like the state, with its tendency to maintain inner order with a remarkable degree of independence from the outside world) may be contrasted with allopoietic ones. The result is a state with a finite area of influence, but one recently troubled by the new forms of digital money and the corresponding infrastructure. DOI: 10.13140/RG.2.2.28007.80803.
In his classic book “The Foundations of Statistics” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it, but he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems, without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
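The representation the abstract describes can be made concrete with a toy computation: a probability over states plus utilities over consequences induce expected utilities, which in turn order the acts. Everything below (the states, acts, and numbers) is a hypothetical illustration, not Savage's formal system itself.

```python
# Toy Savage-style setup: acts are functions from states to
# consequences, modeled as dicts.

states = ["rain", "shine"]
prob = {"rain": 0.3, "shine": 0.7}   # a probability over states
utility = {"wet": -5, "dry": 1, "sunburn": -1, "tan": 3}

acts = {
    "umbrella":    {"rain": "dry", "shine": "sunburn"},
    "no_umbrella": {"rain": "wet", "shine": "tan"},
}

def expected_utility(act):
    """Expected utility of an act under `prob` and `utility`."""
    return sum(prob[s] * utility[act[s]] for s in states)

# The preference order over acts is determined by expected utility:
ranked = sorted(acts, key=lambda a: expected_utility(acts[a]), reverse=True)
```

Here `expected_utility(acts["no_umbrella"])` is 0.6 and `expected_utility(acts["umbrella"])` is -0.4, so the agent prefers `no_umbrella`; the representation theorem says a suitable preference relation always arises this way.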
This dissertation is a contribution to formal and computational philosophy. In the first part, we show that by exploiting the parallels between large, yet finite lotteries on the one hand and countably infinite lotteries on the other, we gain insights in the foundations of probability theory as well as in epistemology. Case 1: Infinite lotteries. We discuss how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. The solution boils down to the introduction of infinitesimal probability values, which can be achieved using non-standard analysis. Our solution can be generalized to uncountable sample spaces, giving rise to a Non-Archimedean Probability (NAP) theory. Case 2: Large but finite lotteries. We propose application of the language of relative analysis (a type of non-standard analysis) to formulate a new model for rational belief, called Stratified Belief. This contextualist model seems well-suited to deal with a concept of beliefs based on probabilities ‘sufficiently close to unity’. The second part presents a case study in social epistemology. We model a group of agents who update their opinions by averaging the opinions of other agents. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating. To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. The probability of ending up in an inconsistent belief state turns out to be always smaller than 2%.
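The flavor of the second part can be conveyed by a toy simulation. The model below is a discursive-dilemma-style majority aggregation of my own devising, not the dissertation's averaging model, so the resulting frequency differs from the paper's 2% bound; it only illustrates how aggregating consistent opinions can yield an inconsistent belief state.

```python
import random

# Three agents each hold consistent yes/no opinions on p, q, and p&q.
# Aggregating each issue separately by majority can produce a belief
# state that believes p and q but disbelieves p&q.

def majority(bits):
    """Majority verdict among three agents."""
    return sum(bits) >= 2

random.seed(0)
trials, inconsistent = 10_000, 0
for _ in range(trials):
    agents = []
    for _ in range(3):
        p, q = random.random() < 0.5, random.random() < 0.5
        agents.append((p, q, p and q))   # each agent is consistent
    maj = tuple(majority([a[i] for a in agents]) for i in range(3))
    if maj[2] != (maj[0] and maj[1]):    # aggregate inconsistent?
        inconsistent += 1
rate = inconsistent / trials
```

In this toy model the inconsistency rate comes out near 9%; the dissertation's averaging-based model, with its own update rule, bounds the analogous probability below 2%.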
We introduce and study a variety of modal logics of parallelism, orthogonality, and affine geometries, for which we establish several completeness, decidability, and complexity results, and state a number of related open, and apparently difficult, problems. We also demonstrate that the lack of the finite model property in modal logics for sufficiently rich affine or projective geometries (incl. the real affine and projective planes) is a rather common phenomenon.
In this paper we look at the manual analysis of arguments and how it compares to the current state of automatic argument analysis. These considerations are used to develop a new approach that combines a machine learning algorithm for extracting propositions from text with a topic model for determining argument structure. The results of this method are compared to a manual analysis.
According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles have been published in the ~5000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to be able to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential for assisting investigators in the extraction, management and analysis of these data, information contained in the traditional journal publication is still largely unstructured, free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to gain access to the content of what is being conveyed without significant manual intervention. In order to circumvent these roadblocks and make the most of the output from the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we will explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchical structures, and extensible relational data models that link the information components together in a machine-readable and human-useable framework for data mining purposes.
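The contrast the chapter draws between free-text prose and machine-readable representations can be illustrated with a minimal structured experiment record. The field names below are invented for illustration and do not follow any particular published minimum information standard.

```python
import json

# A hypothetical minimal experiment record: the same information a
# free-text methods section conveys, but in a form a computer can
# query and mine without manual curation.

experiment = {
    "design": "case-control",
    "organism": "Mus musculus",
    "assay": "RNA-seq",
    "factors": [{"name": "treatment", "levels": ["drug", "vehicle"]}],
    "result_summary": {"genes_tested": 12000, "significant": 148},
}

record = json.dumps(experiment, indent=2)   # shareable, parseable form
restored = json.loads(record)               # round-trips losslessly
```

A real standard would additionally fix a controlled vocabulary (an ontology) for values like the organism and assay names, so that records from different labs can be merged for meta-analysis.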
This text is an English translation of those several sections of the original paper in Russian where collection-related issues are considered. The full citation of the original paper is as follows: Pavlinov I.Ya. 2016. [Bioraznoobrazie i biokollektsii: problema sootvetstvia]. In: Pavlinov I.Ya. (comp.). Aspects of Biodiversity. Archives of Zoological Museum of Lomonosov Moscow State University, Vol. 54, Pp. 733–786. The orientation of biology, as a natural science, toward the study and explanation of the similarities and differences between organisms led in the second half of the 20th century to the recognition of a specific subject area of biological exploration, viz. biodiversity (BD). One of the important general scientific prerequisites for this shift was the understanding that (at the level of ontology) the structured diversity of living nature is its fundamental property, on a par with the subjection of some of its manifestations to certain laws. At the level of epistemology, this led to acknowledging that the "diversificationary" approach to the description of living beings is as justifiable as the previously dominant "unificationary" one. This general trend has led to a significant increase in attention to BD. From a pragmatic perspective, its leitmotif was the conservation of BD as a renewable resource, while from a scientific perspective the leitmotif was the study of BD as a specific natural phenomenon. These two points of view are united by recognition of the need for scientific substantiation of a BD conservation strategy, which implies the need for a detailed study of BD itself. At the level of ontology, one of the key problems in the study of BD (leaving aside the question of its genesis) is the determination of its structure, which is interpreted as a manifestation of the structure of the Earth's biota itself.
With this, it is acknowledged that the subject area of empirical exploration is not BD as a whole ("Umgebung") but its particular manifestations ("Umwelts"). It is proposed herewith to recognize, within the latter: fragments of BD (especially taxa and ecosystems), hierarchical levels of BD (primarily within- and interorganismal ones), and aspects of BD (above all taxonomic and meronomic ones). Attention is drawn to a new interpretation of bioinformatics as a discipline that studies the information support of BD explorations. An important part of this support is biocollections. The scientific value of collections means that they make possible both the empirical inferring and the testing (verification) of knowledge about BD. This makes biocollections, in their epistemological status, equivalent to experiments, and so makes studies of BD quite scientific. It is emphasized that the natural objects (naturalia), which are permanently kept in collections, contain primary (objective) information about BD, while information somehow retrieved from them is secondary (subjective). A collection, as an information resource, serves as a research sample in the study of BD. The collection pool, as the totality of all collection materials kept in repositories according to certain standards, can be treated as a general sample, and every single collection as a local sample. The main characteristic of a collection-as-sample is its representativeness; so the basic strategy for development of the collection pool is to maximize its representativeness as a means of ensuring correspondence between the structure of the biocollection pool and that of BD itself. The most fundamental characteristic of a collection, as an information resource, is its scientific significance.
The following three main groups of more particular characteristics are distinguished: the "proper" characteristics of every collection are its meaningfulness, informativeness, reliability, adequacy, documenting, systematicity, volume, structure, uniqueness, stability, and lability; the "external" characteristics of a collection are its resolution, usability, and ethical constituent; the "service" characteristics of a collection are its museofication, storage system security, inclusion in metastructure, and cost. In the contemporary world, development of the biocollection pool, as a specific resource for BD research, requires considerable organizational efforts, including work on its "information support" aimed at demonstrating the necessity of the existence of biocollections.
In Part I we developed a model, called system P, for constructing the physical universe. In the present paper (Part II) we explore the hypothesis that something exists prior to the physical universe; i.e. we suppose that there exists a sequence of projections (and levels) that is prior to the sequence that constructs the physical universe itself. To avoid an infinite regress, this prior sequence must be finite, meaning that the whole chain of creative projections must begin at some primal level which is itself uncreated. So, from this primal level emanates a primal sequence of projections, which yields a first-created system; by definition, there is no creation prior to this first system. Proceeding from this basis, we use the template of our previous work in constructing entities in the physical universe to outline the construction of entities in this first-created system. Next, we seek an interpretation of this first system and its entities. Since our "primal level" is an uncreated state of being from which all creation springs, it draws obvious allusions to the concept of "God". So at this point the model bumps head-on into theology, and we are forced to ask: Is there some metaphysically- or theologically-related work that can help us to interpret this first-created system and its entities? Indeed, such a work, and consequent interpretations, will be put forth --- from which much more then follows.
This essay focuses on the intriguing relationship between mathematics and physical phenomena, arguing that the brain uses a single spatiotemporal-causal objective framework in order to characterize and manipulate basic external data and internal physical and emotional reactive information into more complex thought and knowledge. It is proposed that multiple hierarchical permutations of this single format eventually give rise to increasingly precise visceral meaning. The main thesis overcomes the epistemological complexities of the Frame Problem by asserting that the primal frame of reference – within which chaotic conscious thought ultimately emerges – is essentially a synchronous representation of the four macro properties of existence plus the genetically derived causal objective, and the embodied physical and emotional reactions that even the lowliest cognitive organisms are born with, and which they automatically express as they struggle to exist within an ever-changing and often hostile environment.
Infinite machines (IMs) can do supertasks. A supertask is an infinite series of operations done in some finite time. Whether or not our universe contains any IMs, they are worthy of study as upper bounds on finite machines. We introduce IMs and describe some of their physical and psychological aspects. An accelerating Turing machine (an ATM) is a Turing machine that performs every next operation twice as fast. It can carry out infinitely many operations in finite time. Many ATMs can be connected together to form networks of infinitely powerful agents. A network of ATMs can also be thought of as the control system for an infinitely complex robot. We describe a robot with a dense network of ATMs for its retinas, its brain, and its motor controllers. Such a robot can perform psychological supertasks - it can perceive infinitely detailed objects in all their detail; it can formulate infinite plans; it can make infinitely precise movements. An endless hierarchy of IMs might realize a deep notion of intelligent computing everywhere.
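The ATM's timing can be made concrete with a short sketch. The 1/2^n step durations below are an illustrative assumption matching "performs every next operation twice as fast"; the point is only that the partial sums of the step times converge, so infinitely many operations fit inside a finite interval.

```python
from fractions import Fraction

# If the n-th operation takes 1/2**n seconds (n = 0, 1, 2, ...), the
# total time for the first k operations is a partial geometric sum
# that approaches, but never reaches, 2 seconds.

def elapsed(after_steps):
    """Exact time consumed by the first `after_steps` operations."""
    return sum(Fraction(1, 2 ** n) for n in range(after_steps))

# Every partial sum stays below the limit of 2 seconds, so the whole
# supertask fits inside any interval longer than 2 seconds.
partial_sums = [elapsed(k) for k in (1, 2, 10, 30)]
```

Using exact rationals (`Fraction`) rather than floats keeps the convergence claim free of rounding artifacts: `elapsed(k)` equals exactly 2 - 1/2^(k-1).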
This paper concerns “human symbolic output,” or strings of characters produced by humans in our various symbolic systems; e.g., sentences in a natural language, mathematical propositions, and so on. One can form a set that consists of all of the strings of characters that have been produced by at least one human up to any given moment in human history. We argue that at any particular moment in human history, even at moments in the distant future, this set is finite. But then, given fundamental results in recursion theory, the set will also be recursive, recursively enumerable, axiomatizable, and could be the output of a Turing machine. We then argue that it is impossible to produce a string of symbols that humans could possibly produce but no Turing machine could. Moreover, we show that any given string of symbols that we could produce could also be the output of a Turing machine. Our arguments have implications for Hilbert’s sixth problem and the possibility of axiomatizing particular sciences, they undermine at least two distinct arguments against the possibility of Artificial Intelligence, and they entail that expert systems that are the equals of human experts are possible, and so at least one of the goals of Artificial Intelligence can be realized, at least in principle.
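The recursion-theoretic step in the argument, that any finite set of strings is decidable, can be sketched directly: a lookup table is already a total, always-halting decision procedure. The sample set below is a toy stand-in, not the actual set of human symbolic output.

```python
# Any finite set of strings is trivially recursive: membership can be
# decided by a finite lookup table, so some Turing machine decides it.

HUMAN_OUTPUT_SO_FAR = {"E=mc^2", "Hello, world", "2+2=4"}  # toy stand-in

def decides(s):
    """A total, always-halting membership test for a finite set."""
    return s in HUMAN_OUTPUT_SO_FAR
```

The philosophical weight of the paper lies elsewhere; the sketch only records the elementary fact the argument builds on, that finiteness at each moment guarantees recursiveness at that moment.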
Non-commuting quantities and hidden parameters – Wave-corpuscular dualism and hidden parameters – Local or nonlocal hidden parameters – Phase space in quantum mechanics – Weyl, Wigner, and Moyal – Von Neumann’s theorem about the absence of hidden parameters in quantum mechanics and Hermann – Bell’s objection – Quantum-mechanical and mathematical incommeasurability – Kochen and Specker’s idea about their equivalence – The notion of partial algebra – Embeddability of a qubit into a bit – A quantum computer is not a Turing machine – Is continuality universal? – Diffeomorphism and velocity – Einstein’s general principle of relativity – “Mach’s principle” – The Skolemian relativity of the discrete and the continuous – The counterexample in § 6 of their paper – About the classical tautology which is untrue being replaced by the statements about commeasurable quantum-mechanical quantities – Logical hidden parameters – The undecidability of the hypothesis about hidden parameters – Wigner’s work and Weyl’s previous one – Lie groups, representations, and psi-function – From a qualitative to a quantitative expression of relativity − psi-function, or the discrete by the random – Bartlett’s approach − psi-function as the characteristic function of a random quantity – Discrete and/or continual description – Quantity and its “digitalized projection” – The idea of “velocity−probability” – The notion of probability and the light speed postulate – Generalized probability and its physical interpretation – A quantum description of the macro-world – The period of the associated de Broglie wave and the length of now – Causality equivalently replaced by chance – The philosophy of quantum information and religion – Einstein’s thesis about “the consubstantiality of inertia and weight” – Again about the interpretation of complex velocity – The speed of time – Newton’s law of inertia and Lagrange’s formulation of mechanics – Force and effect – The theory of tachyons and general relativity –
Riesz’s representation theorem – The notion of covariant world line – Encoding a world line by the psi-function – Spacetime and qubit − the psi-function by qubits – About the physical interpretation of both the complex axes of a qubit – The interpretation of the self-adjoint operators’ components – The world line of an arbitrary quantity – The invariance of the physical laws towards quantum object and apparatus – Hilbert space and that of Minkowski – The relationship between the coefficients of the psi-function and the qubits – World line = psi-function + self-adjoint operator – Reality and description – Does a “curved” Hilbert space exist? – The axiom of choice, or when is a flattening of Hilbert space possible? – But why not flatten also pseudo-Riemannian space? – The commutator of conjugate quantities – Relative mass – The strokes of self-movement and its philosophical interpretation – The self-perfection of the universe – The generalization of quantity in quantum physics – An analogy of the Feynman formalism – Feynman and the many-world interpretation – The psi-function of various objects – Countable and uncountable basis – Generalized continuum and arithmetization – Field and entanglement – Function as coding – The idea of a “curved” Descartes product – The environment of a function – Another view of the notion of velocity-probability – Reality and description – Hilbert space as a model both of object and description – The notion of holistic logic – Physical quantity as the information about it – Cross-temporal correlations – The forecasting of the future – Description in separable and inseparable Hilbert space – “Forces” or “miracles” – Velocity or time – The notion of non-finite set – Dasein or Dazeit – The trajectory of the whole – Ontological and onto-theological difference – An analogy of the Feynman and many-world interpretation − the psi-function as physical quantity – Things in the world and instances in time – The generation of the physical by the mathematical – The generalized
notion of observer – Subjective or objective probability – Energy as the change of probability per unit of time – The generalized principle of least action from a new viewpoint – The exception of two dimensions and Fermat’s last theorem.