Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.
Our computational metaphysics group describes its use of automated reasoning tools to study Leibniz’s theory of concepts. We start with a reconstruction of Leibniz’s theory within the theory of abstract objects (henceforth ‘object theory’). Leibniz’s theory of concepts, under this reconstruction, has a non-modal algebra of concepts, a concept-containment theory of truth, and a modal metaphysics of complete individual concepts. We show how the object-theoretic reconstruction of these components of Leibniz’s theory can be represented for investigation by means of automated theorem provers and finite model builders. The fundamental theorem of Leibniz’s theory is derived using these tools.
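As a concrete, if very rough, point of reference for the non-modal algebra of concepts and the concept-containment theory of truth mentioned above, here is a minimal toy sketch in Python. It assumes a naive representation of concepts as sets of primitive "marks"; it is an informal illustration only, not the object-theoretic reconstruction or the theorem-prover encoding actually used by the authors.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A concept identified (naively) with the set of primitive marks it contains."""
    marks: frozenset

    def __add__(self, other):
        # Concept sum: the concept containing the marks of both summands.
        return Concept(self.marks | other.marks)

    def contains(self, other):
        # Concept containment: x contains y iff every mark of y is a mark of x.
        return other.marks <= self.marks

rational = Concept(frozenset({"rational"}))
animal = Concept(frozenset({"animal"}))
human = rational + animal  # "human" as the sum of its marks (an assumed toy example)

# Concept-containment truth condition (toy version): "Humans are rational"
# counts as true because the concept human contains the concept rational.
assert human.contains(rational)
assert not rational.contains(animal)
print("toy containment checks passed")
```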
Human obsolescence is imminent. We are living through an era in which our activity is becoming less and less relevant to our well-being and to the fate of our planet. This trend toward increased obsolescence is likely to continue in the future, and we must do our best to prepare ourselves and our societies for this reality. Far from being a cause for despair, this is in fact an opportunity for optimism. Harnessed in the right way, the technology that hastens our obsolescence can open us up to new utopian possibilities and enable heightened forms of human flourishing.
This work provides proof-search algorithms and automated counter-model extraction for a class of STIT logics. With this, we answer an open problem concerning syntactic decision procedures and cut-free calculi for STIT logics. A new class of cut-free complete labelled sequent calculi G3Ldm^m_n, for multi-agent STIT with at most n-many choices, is introduced. We refine the calculi G3Ldm^m_n through the use of propagation rules and demonstrate the admissibility of their structural rules, resulting in auxiliary calculi Ldm^m_nL. In the single-agent case, we show that the refined calculi Ldm^m_nL derive theorems within a restricted class of (forestlike) sequents, allowing us to provide proof-search algorithms that decide single-agent STIT logics. We prove that the proof-search algorithms are correct and terminate.
A recent wave of academic and popular publications says that utopia is within reach: automation will progress to such an extent and include so many high-skill tasks that much human work will soon become superfluous. The gains from this highly automated economy, authors suggest, could be used to fund a universal basic income (UBI). Today's employees would live off the robots' products and spend their days on intrinsically valuable pursuits. I argue that this prediction is unlikely to come true. Historical precedent speaks against it, but the main problem is that the prediction fundamentally misunderstands how capitalism works—its incentives to increase or decrease production, its principles of income allocation, and the underlying conception of merit.
Automated influence, delivered by digital targeting technologies such as targeted advertising, digital nudges, and recommender systems, has attracted significant interest from empirical researchers, on the one hand, and critical scholars and policymakers on the other. In this paper, we argue for closer integration of these efforts. Critical scholars and policymakers, who focus primarily on the social, ethical, and political effects of these technologies, need empirical evidence to substantiate and motivate their concerns. However, existing empirical research investigating the effectiveness (or lack thereof) of these technologies neglects other morally relevant effects—which can be felt regardless of whether or not the technologies "work" in the sense of fulfilling the promises of their designers. Drawing from the ethics and policy literature, we enumerate a range of questions begging for empirical analysis—the outline of a research agenda bridging these fields—and issue a call to action for more empirical research that takes these urgent ethics and policy questions as its starting point.
Standpoint logic is a recently proposed formalism in the context of knowledge integration, which advocates a multi-perspective approach permitting reasoning with a selection of diverse and possibly conflicting standpoints rather than forcing their unification. In this paper, we introduce nested sequent calculi for propositional standpoint logics—proof systems that manipulate trees whose nodes are multisets of formulae—and show how to automate standpoint reasoning by means of non-deterministic proof-search algorithms. To obtain worst-case complexity-optimal proof-search, we introduce a novel technique in the context of nested sequents, referred to as "coloring," which consists of taking a formula as input, guessing a certain coloring of its subformulae, and then running proof-search in a nested sequent calculus on the colored input. Our technique lets us decide the validity of standpoint formulae in CoNP since proof-search only produces a partial proof relative to each permitted coloring of the input. We show how all partial proofs can be fused together to construct a complete proof when the input is valid, and how certain partial proofs can be transformed into a counter-model when the input is invalid. These "certificates" (i.e. proofs and counter-models) serve as explanations of the (in)validity of the input.
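The coloring technique itself is specific to the authors' nested sequent calculi, but the complexity claim rests on a familiar guess-and-verify pattern. As a rough analogy only, the Python sketch below shows that pattern for plain classical propositional validity: each guessed valuation plays the role of one permitted "certificate"; a successful guess yields a counter-model, and validity amounts to every guess failing. Nothing here reproduces the standpoint calculi or the coloring of subformulae.

```python
from itertools import product

# Formulas are strings (atoms) or tuples: ("not", f), ("and", f, g),
# ("or", f, g), ("->", f, g). This is a self-contained toy, not the
# authors' proof system.

def atoms(f):
    if isinstance(f, str):
        return {f}
    op, *args = f
    return set().union(*(atoms(a) for a in args))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    op, *args = f
    if op == "not":
        return not ev(args[0], v)
    if op == "and":
        return ev(args[0], v) and ev(args[1], v)
    if op == "or":
        return ev(args[0], v) or ev(args[1], v)
    if op == "->":
        return (not ev(args[0], v)) or ev(args[1], v)
    raise ValueError(op)

def validity_certificate(f):
    """Return (True, None) if valid, else (False, falsifying valuation)."""
    props = sorted(atoms(f))
    for bits in product([False, True], repeat=len(props)):
        v = dict(zip(props, bits))
        if not ev(f, v):          # this guess verifies invalidity
            return False, v       # the guess doubles as a counter-model
    return True, None             # every guess failed: the formula is valid

print(validity_certificate(("or", "p", ("not", "p"))))   # (True, None)
print(validity_certificate(("->", "p", "q")))            # (False, {'p': True, 'q': False})
```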
Masculinity seems to play a role in the recruitment and radicalization of lone-wolf terrorists and other violent extremists. In this chapter, we examine multiple dimensions of masculinity in six corpora. We do so via linguistic analysis of the corpora associated with and produced by a range of groups and individuals. In particular, we analyze two corpora from each of: men’s rights groups, male supremacists, and manifestos of male domestic terrorists. Our results indicate that there are four distinct strands of thinking, language, and behavior in these groups and individuals: dominant masculinity, which manifests in domination of both women and other men; subsidiary masculinity, which manifests in resentful reactions to domination by other men; misogyny, which manifests in resentful or outright hateful attitudes and actions towards women; and xenophobia, which manifests in fearful and vengeful reactions to perceived invasion by outsiders, especially foreign men.
REVIEW OF: Automated Development of Fundamental Mathematical Theories by Art Quaife (1992: Kluwer Academic Publishers), 271pp. Using the theorem prover OTTER, Art Quaife has proved four hundred theorems of von Neumann-Bernays-Gödel set theory; twelve hundred theorems and definitions of elementary number theory; dozens of Euclidean geometry theorems; and Gödel's incompleteness theorems. It is an impressive achievement. To gauge its significance and to see what prospects it offers, this review looks closely at the book and the proofs it presents.
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity’s present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
The paper critically discusses an empirical study by Mizrahi & Dickinson 2020, which analyzes the incidence of three types of philosophical arguments in a huge database (JSTOR). Their results are: 1. Deductive arguments were the most common type of argument in philosophy until the end of the 20th century. 2. Around 2008 a shift in methodology occurred, such that inductive arguments now outweigh other types of argument. The paper first criticizes the empirical study as grossly false, and then considers the (very limited) possibilities of recognizing argument types with present computer programs.
Automated Influence is the use of Artificial Intelligence to collect, integrate, and analyse people’s data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of “AI Ethics” in favour of a more structural, political philosophy of AI, we show that the real problem with Automated Influence is the crisis of legitimacy that it precipitates.
This thesis examines the prospects for mechanical procedures that can identify true, complete, universal, first-order logical theories on the basis of a complete enumeration of true atomic sentences. A sense of identification is defined that is more general than those which are usually studied in the learning theoretic and inductive inference literature. Some identification algorithms based on confirmation relations familiar in the philosophy of science are presented. Each of these algorithms is shown to identify all purely universal theories without function symbols. It is demonstrated that no procedure can solve this universal theory inference problem in the more usual senses of identification. The question of efficiency for theory inference systems is addressed, and some definitions of limiting complexity are examined. It is shown that several aspects of obvious strategies for solving the universal theory inference problem are NP-hard. Finally, some non-worst case heuristic search strategies are examined in light of these NP-completeness results. These strategies are based upon an isomorphism between clausal entailments of a certain class and partition lattices, and are applicable to the improvement of earlier work on language acquisition and logical inductive inference.
This book provides a critical reflection on automated science and addresses the question of whether the computational tools we have developed in recent decades are changing the way we humans do science. More concretely: can machines replace scientists in crucial aspects of scientific practice? The contributors to this book rethink and refine some of the main concepts by which science is understood, drawing a fascinating picture of the developments we expect over the next decades of human-machine co-evolution. The volume covers examples from various fields and areas, such as molecular biology, climate modeling, clinical medicine, and artificial intelligence. The explosion of technological tools and drivers for scientific research calls for a renewed understanding of the human character of science. This book aims precisely to contribute to such a renewed understanding of science.
Artificial intelligence (AI) technologies are used in various domains of human activity, and one of these domains is scientific research. Researchers in many scientific areas now try to apply AI technologies to their research and to automate it. These researchers claim that the ‘automation of science’ will liberate people from non-creative tasks in scientific research and radically change the overall state of science and technology, resulting in large-scale innovation. As I see it, the automation of science is remarkable in another respect: since science is one of the most distinctively human activities, the tendency to automate it prompts us to reconsider aspects of our humanity itself. One of these aspects concerns human creativity, on which this article focuses. In this article, I address two questions concerning the automation of science: first, ‘Can AI make creative discoveries?’; second, ‘What implications may the automation of science have for science and society?’. Scientific discovery is said to be one of the most creative phases of scientific research. I show that, though there is no reason in principle why AI could not make creative discoveries, we do not at present know enough about how to realize this. If the automation of science nevertheless proceeds, the cultural value of science as a creative activity might be diminished, and the state of the scientific community and its relationship with society might be altered in undesirable ways. Therefore, I conclude, we need to specify desirable ways of introducing AI technologies into science and devise measures against the drawbacks of automating science.
In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and implicit reasoning involved, for instance, in the packaging of input evidence? In short: What is the optimal degree of ‘automation’? On the positive side: We propose the ability to perform an adequate robustness analysis as the focal criterion, primarily because it directs efforts to what is most important, namely, the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if this would in fact reduce overall accuracy.
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative and policy steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
This chapter considers how the adoption of autonomous weapons systems (AWS) may affect jus ad bellum principles of warfare. In particular, it focuses on the use of AWS in non-international armed conflicts (NIAC). Given the proliferation of NIAC, the development and use of AWS will most likely be attuned to this specific theater of war. As warfare waged by modernized liberal democracies (those most likely to develop and employ AWS at present) increasingly moves toward a model of individualized warfare, how, if at all, will the principles by which we measure the justness of the commencement of such hostilities be affected by the introduction of AWS, and how will such hostilities stack up to current legal agreements surrounding their more traditional engagement? This chapter claims that such considerations give us reason to question the moral and legal necessity of ad bellum proper authority.
One recent priority of the U.S. government is developing autonomous robotic systems. The U.S. Army has funded research to design a metric of evil to support military commanders with ethical decision-making and, in the future, allow robotic military systems to make autonomous ethical judgments. We use this particular project as a case study for efforts that seek to frame morality in quantitative terms. We report preliminary results from this research, describing the assumptions and limitations of a program that assesses the relative evil of two courses of action. We compare this program to other attempts to simulate ethical decision-making, assess possibilities for overcoming the trade-off between input simplification and output reliability, and discuss the responsibilities of users and designers in implementing such programs. We conclude by discussing the implications that this project highlights for the successes and challenges of developing automated mechanisms for ethical decision-making.
This work describes the design of a dynamic road signal that responds to traffic density and to emergencies. The signal switches automatically based on the traffic density detected at the intersection. Traffic congestion is a serious problem in many large cities around the world and has become a nightmare for travellers in these cities. The conventional traffic light system is based on the concept of a fixed time assigned to each side of the junction, which cannot be varied with varying traffic density. In an emergency, however, the timings change according to the distance and the depth of the traffic. When an emergency vehicle is read by the receiver, the system takes over management of the traffic light. The project uses infrared proximity sensors in a line-of-sight configuration across the road to detect traffic density. Vehicle density is measured in several sectors and green times are assigned accordingly. Every crossing of an emergency vehicle logs its data to the cloud, and this log helps with destination acknowledgement. This synchronization will greatly reduce traffic jams.
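To make the scheduling idea concrete, here is a minimal sketch in Python of density-proportional green-time allocation with emergency preemption. The function name, the timing limits, and the preemption rule are illustrative assumptions, not the hardware logic described in the paper.

```python
# Green time per approach is allocated in proportion to the vehicle count
# reported by that approach's infrared sensors; an emergency flag preempts
# the normal cycle in favour of the emergency approach.

MIN_GREEN, MAX_GREEN, CYCLE = 10, 60, 120  # seconds; assumed limits

def allocate_green(densities, emergency=None):
    """densities: dict approach -> vehicle count; emergency: approach name or None."""
    if emergency is not None:
        # Preempt: give the emergency approach an extended green phase.
        return {a: (MAX_GREEN if a == emergency else MIN_GREEN) for a in densities}
    total = sum(densities.values()) or 1
    greens = {}
    for approach, count in densities.items():
        share = CYCLE * count / total            # proportional share of the cycle
        greens[approach] = int(min(MAX_GREEN, max(MIN_GREEN, share)))
    return greens

print(allocate_green({"north": 12, "south": 3, "east": 7, "west": 1}))
print(allocate_green({"north": 12, "south": 3, "east": 7, "west": 1},
                     emergency="south"))
```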
Automated decision making for sentencing is the use of a software algorithm to analyse a convicted offender’s case and deliver a sentence. This chapter reviews the moral arguments for and against employing automated decision making for sentencing and finds that its use is in principle morally permissible. Specifically, it argues that well-designed automated decision making for sentencing will better approximate the just sentence than human sentencers. Moreover, it dismisses common concerns about transparency, privacy and bias as unpersuasive or inapplicable. The chapter also notes that moral disagreement about theories of just sentencing is plausibly resolved by applying the principle of maximising expected moral choiceworthiness, and that automated decision making is better suited to the resulting ensemble model. Finally, the chapter considers the challenge posed by penal populism. The dispiriting conclusion is that although it is in theory morally desirable to use automated decision making for criminal sentencing, it may well be the case that we ought not to try.
Home management and control have seen a great boost from network-enabled digital technology, especially in recent decades. For the purpose of home automation, this technology offers an exciting capability to enhance the connectivity of equipment within the home. With the rapid expansion of the Internet, there is also added potential for the remote control and monitoring of such network-enabled devices. In this paper, we design and implement a fully manageable and secure smart home automation system based on a cloud computing system with an ESP Arduino system. Home security is improved by adding a complete camera system and a GSM communication module that forwards the Arduino output data to a specified external number when no internet provider is available. We use three sensors, for temperature, gas, and motion measurements. The ESP8266 Wi-Fi device is programmed to collect the sensor measurements and transfer them to the cloud server database, which is implemented as a web server using Apache and MySQL. The system achieves a fast response time, so that all readings are updated and displayed spontaneously. The designed system is intended to be an effective, secure, and rapid-response real-time smart home system.
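The firmware itself is not given in the abstract, so the following is only a minimal sketch, in plain Python rather than ESP8266 firmware, of the reporting loop it describes: read the three sensors and push a JSON reading to a cloud endpoint over HTTP. The endpoint URL and the sensor-reading function are placeholders, not the authors' actual interfaces.

```python
import json
import random
import time
import urllib.request

ENDPOINT = "http://example.com/smart-home/readings"   # placeholder URL, not the paper's server

def read_sensors():
    # Stand-ins for the temperature, gas, and motion sensors described above.
    return {
        "temperature_c": round(random.uniform(18, 30), 1),
        "gas_ppm": round(random.uniform(0, 50), 1),
        "motion": random.choice([0, 1]),
        "timestamp": int(time.time()),
    }

def push_reading(reading):
    # POST one JSON reading to the cloud database endpoint.
    data = json.dumps(reading).encode()
    req = urllib.request.Request(
        ENDPOINT, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    print(read_sensors())  # call push_reading(read_sensors()) once a real endpoint exists
```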
In this paper I critique the ethical implications of automating CCTV surveillance. I consider three modes of CCTV with respect to automation: manual, fully automated, and partially automated. In each of these I examine concerns posed by processing capacity, prejudice towards and profiling of surveilled subjects, and false positives and false negatives. While it might seem as if fully automated surveillance is an improvement over the manual alternative in these areas, I demonstrate that this is not necessarily the case. In preference to the extremes, I argue in favour of partial automation in which the system integrates a human CCTV operator with some level of automation. To assess the degree to which such a system should be automated I draw on the further issues of privacy and distance. Here I argue that the privacy of the surveilled subject can benefit from automation, while the distance between the surveilled subject and the CCTV operator introduced by automation can have both positive and negative effects. I conclude that in at least the majority of cases more automation is preferable to less within a partially automated system where this does not impinge on efficacy.
The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority? Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.
Gödel's incompleteness theorems establish the stunning result that mathematics cannot be fully formalized and, further, that any formal system containing a modicum of number or set theory cannot establish its own consistency. Wilfried Sieg and Clinton Field, in their paper Automated Search for Gödel's Proofs, presented automated proofs of Gödel's theorems at an abstract axiomatic level; they used an appropriate expansion of the strategic considerations that guide the search of the automated theorem prover AProS. The representability conditions that allow the syntactic notions of the metalanguage to be represented inside the object language were taken as axioms in the automated proofs. The concrete task I am taking on in this project is to extend the search by formally verifying these conditions. Using a formal metatheory defined in the language of binary trees, the syntactic objects of the metatheory lend themselves naturally to a direct encoding in Zermelo's theory of sets. The metatheoretic notions can then be inductively defined and shown to be representable in the object-theory using appropriate inductive arguments. Formal verification of the representability conditions is the first step towards an automated proof thereof which, in turn, brings the automated verification of Gödel's theorems one step closer to completion.
Do you often find yourself with too many choices? What will freedom be in the twenty-first century? Since the invention of machines and computers, our daily lives seem to have gained time. But in return, they appear increasingly coded, subject to automatisms. From Leibniz to the biometric chip, the author recounts an alternative history of the information age. He shows that "numerism", the computing principle, is an ancestral and necessary human virtue. But an insufficient one. After the every-man-for-himself of postmodernism, a new paradigm is being born, "crealism": the world is our unceasing common creation. The horizon of our political and existential freedom is the Earth as a work of art. Constructed as a philosophical odyssey, curious and laced with humour, this is a treatise on co-creativity for an era whose active forces are fragmented. In it we discover that order and adventure are not incompatible.
This has been made available gratis by the publisher. This piece gives the raison d'être for the development of the converters mentioned in the title. Three reasons are given: one linguistic, one philosophical, and one practical. It is suggested that at least two independent converters are needed. This piece ties together the extended paper "Abstracts from Logical Form I/II" and the short piece providing the comprehensive theory alluded to in the abstract of that extended paper, "Pragmatics, Montague, and 'Abstracts from Logical Form'", by motivating the entire project from beginning to end.
Due to the innovative possibilities of digital technologies, the issue of increasing automation is once again on the agenda – and not only in industry, but also in other branches and sectors of contemporary societies. Although public and scientific discussions about automation seem to raise relevant questions from the "old" debate, such as the replacement of human labor by the introduction of new technologies, the authors focus here on the new contextual quality of these questions. The debate should rethink the relationship between technology and work with regard to quantitative and qualitative changes in work. In this article, our example is the introduction of automation in industry, as reflected in the widely recognized study by Frey and Osborne in 2013. They estimated the expected impacts of future computerization on US labor market outcomes as very high, specifically regarding the number of jobs at risk. Surprisingly, this study was the starting point of an intensive international debate on the impact of technologies on the future of work and the role of technological change in working environments. Thus, according to the authors, "old" questions remain important, but they should be reinterpreted for "new" societal demands and expectations of future models of work.
I am very used to strange books and special people, but Hawkins stands out due to his use of a simple technique for testing muscle tension as a key to the “truth” of any kind of statement whatsoever—i.e., not just to whether the person being tested believes it, but whether it is really true! What is well known is that people will show automatic, unconscious physiological and psychological responses to just about anything they are exposed to—images, sounds, touch, odors, ideas, people. So, muscle reading to find out their true feelings is not radical at all, unlike using it as a dowsing stick (more muscle reading) to do “paranormal science”. Hawkins describes the use of decreasing tension in the muscles of an arm in response to increases in cognitive load, causing the arm to drop under the constant pressure of someone’s fingers. He seems unaware that there is a long-established and vast ongoing research effort in social psychology referred to by such phrases as ‘implicit cognition’ and ‘automaticity’, and that his use of ‘kinesiology’ is one tiny section of it. In addition to muscle tone (infrequently used), social psychologists measure EEG, galvanic skin response, and most frequently verbal responses to words, sentences, images or situations, at times varying from seconds to months after the stimulus. Many, such as Bargh and Wegner, take the results to mean we are automatons who learn and act largely without awareness via S1 (automated System 1), and many others, such as Kihlstrom and Shanks, say these studies are flawed and we are creatures of S2 (deliberative System 2). Though Hawkins seems to have no idea, as in other areas of the descriptive psychology of higher-order thought, the situation regarding “automaticity” is still as chaotic as it was when Wittgenstein described the reasons for the sterility and barrenness of psychology in the 30’s. Nevertheless, this book is an easy read and some therapists and spiritual teachers may find it of use. Those wishing a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019), ‘The Logical Structure of Human Behavior’ (2019), and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
I am very used to strange books and special people, but Hawkins stands out due to his use of a simple technique for testing muscle tension as a key to the “truth” of any kind of statement whatsoever—i.e., not just to whether the person being tested believes it, but whether it is really true! What is well known is that people will show automatic, unconscious physiological and psychological responses to just about anything they are exposed to—images, sounds, touch, odors, ideas, people. So muscle reading to find out their true feelings is not radical at all, unlike using it as a dowsing stick (more muscle reading) to do “paranormal science”. Hawkins describes the use of decreasing tension in the muscles of an arm in response to increases in cognitive load, causing the arm to drop under the constant pressure of someone’s fingers. He seems unaware that there is a long-established and vast ongoing research effort in social psychology referred to by such phrases as ‘implicit cognition’ and ‘automaticity’, and that his use of ‘kinesiology’ is one tiny section of it. In addition to muscle tone (infrequently used), social psychologists measure EEG, galvanic skin response, and most frequently verbal responses to words, sentences, images or situations, at times varying from seconds to months after the stimulus. Many, such as Bargh and Wegner, take the results to mean we are automatons who learn and act largely without awareness via S1, and many others, such as Kihlstrom and Shanks, say these studies are flawed and we are creatures of S2. Though Hawkins seems to have no idea, as in other areas of the descriptive psychology of higher-order thought, the situation regarding “automaticity” is still as chaotic as it was when Wittgenstein described the reasons for the sterility and barrenness of psychology in the 30’s. Nevertheless, this book is an easy read and some therapists and spiritual teachers may find it of use. Those wishing a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my article ‘The Logical Structure of Philosophy, Psychology, Mind and Language as Revealed in Wittgenstein and Searle’ 59p (2016). For all my articles on Wittgenstein and Searle see my e-book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Wittgenstein and Searle’ 367p (2016). Those interested in all my writings in their most recent versions may consult my e-book ‘Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2016’ 662p (2016). All of my papers and books have now been published in revised versions both in ebooks and in printed books: ‘Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2017’ (2017), Amazon ASIN # B071HVC7YP; ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle--Articles and Reviews 2006-2016’ (2017), Amazon ASIN # B071P1RP1B; and ‘Suicidal Utopian Delusions in the 21st century: Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2017’ (2017), Amazon ASIN # B0711R5LGX.
Reuben Hersh confided to us that, about forty years ago, the late Paul Cohen predicted to him that at some unspecified point in the future, mathematicians would be replaced by computers. Rather than focus on computers replacing mathematicians, however, our aim is to consider the (im)possibility of human mathematicians being joined by “artificial mathematicians” in the proving practice—not just as a method of inquiry but as a fellow inquirer.
Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more general ones. We defend this thesis by adapting Marr’s famous 1982 framework for understanding information-processing systems. We show how this framework allows one to situate ethical problems at the appropriate level of abstraction, which in turn can be used to target appropriate interventions.
A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing that typical self-fulfilling prophecies arise due to mistakes about the relationship between a prediction and its object. Such mistakes—along with other mistakes in predicting or in the larger practical endeavor—are easily overlooked when the predictions turn out true. Thus we note that self-fulfilling prophecies prompt no error signals; truth shrouds their mistakes from humans and machines alike. Consequently, self-fulfilling prophecies create several obstacles to accountability for the outcomes they produce. We conclude our critique by showing how failures of accountability, and the associated failures to make corrections, explain the connection between self-fulfilling prophecies and feedback loops. By analyzing the complex relationships between accuracy and other evaluatively significant features of predictions, this article sheds light both on the special case of self-fulfilling prophecies and on the ethics of prediction more generally.
While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well-regulated in morality and law; so there is not much challenging philosophically in having robots be some of these agents — excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law. These are distinguished by whether each conception sees law as unambiguous rules inherently uncontroversial in each application; and I consider the prospects for robotizing law on each. Likewise for the prospects of robotizing moral theorizing and moral decision-making. Finally I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, which robots can participate in only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.
The integration of Internet Protocol and embedded systems can enhance the communication platform. This paper describes emerging smart technologies based on the Internet of Things (IoT) and internet protocols, along with embedded systems, for monitoring and controlling smart devices with the help of WiFi technology and web applications. An internet protocol (IP) address is assigned to the things so that devices can be controlled and operated via a remote network, which facilitates interoperability and end-to-end communication among the various devices connected over the network. HTTP POST and HTTP GET commands supporting a RESTful service are used to ensure the transmission and reception of packets between the IoT gateway and the cloud database. Emerging smart technologies based on the IoT offer features like automation, controllability, interconnectivity, and reliability, which in turn have paved the way for wide acceptance among the masses. The IoT has brought many new emerging technologies into various fields, including our daily lives, industry, the agricultural sector, and many more. The world has experienced explosive growth with the advent of the IoT in recent years, and its potential, as evidenced in our day-to-day lives, is enormous.
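As an illustration of the HTTP POST/GET exchange described above, here is a minimal sketch of the receiving side: a toy REST endpoint, written with Python's standard library, that accepts POSTed device readings and serves them back over GET. It stands in for the cloud-database side of the gateway-to-cloud exchange and assumes nothing about the authors' actual implementation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

READINGS = []   # in-memory stand-in for the cloud database

class DeviceEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        # A gateway POSTs one JSON reading per request.
        length = int(self.headers.get("Content-Length", 0))
        READINGS.append(json.loads(self.rfile.read(length)))
        self.send_response(201)
        self.end_headers()

    def do_GET(self):
        # Any client can GET the stored readings back as JSON.
        body = json.dumps(READINGS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeviceEndpoint).serve_forever()
```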
The somewhat old-fashioned concept of philosophical categories is revived and put to work in automated ontology building. We describe a project harvesting knowledge from Wikipedia’s category network in which the principled ontological structure of Cyc was leveraged to furnish an extra layer of accuracy-checking over and above more usual corrections which draw on automated measures of semantic relatedness.
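Only the harvesting step lends itself to a short illustration; the Python sketch below pulls the members of one Wikipedia category through the public MediaWiki API. The Cyc-based accuracy layer and the semantic-relatedness measures mentioned above are not reproduced here, and the category name is just an example.

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def category_members(category, limit=50):
    # Query the MediaWiki API for the members of one category.
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmlimit": limit,
        "format": "json",
    })
    with urllib.request.urlopen(f"{API}?{params}", timeout=10) as resp:
        data = json.load(resp)
    return [m["title"] for m in data["query"]["categorymembers"]]

print(category_members("Epistemology")[:10])
```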
We argue that lack of direct and conscious control is not, in principle, a reason to be afraid of machines in general and robots in particular: in order to articulate the ethical and political risks of increasing automation one must, therefore, tackle the difficult task of precisely delineating the theoretical and practical limits of sustainable delegation to robots.
For more than a century now, the automation of the means of work has created great apprehension among us. After all, will we all be replaced by machines in the future? Will all forms of labor be automatable? Such questions raise several criticisms in the literature concerned with machine ethics. However, in this study, I will approach this problem from another angle. After all, we can criticize the automation of the means of work in several ways. I invite the reader to entertain the following hypothesis: What if the automation of the means of labor is something beneficial? What if human emancipation does come through our technological development? If the answer is yes, why do we still work so much? I conclude that if automation processes are applied to key points in our social structure, we can emancipate the individual from a reality where we work for no reason.
This dissertation is concerned with a fundamental problem at the heart of Arendt’s The Human Condition—namely, ‘the problem of idleness’. This problem is related to the three types of human Arendt identifies as correlated to the dominant activities in one’s life: animal laborans, homo faber, and the acting person. It explores Arendt’s predictions of an oncoming automation crisis, and the possibility of a corresponding crisis in the production-consumption cycle. The problem of idleness can be understood as the claim that if people are provided freedom from job-holding so that they may pursue other activities, they would likely turn to consumption to occupy their time. I claim that this problem of idleness is important in any consideration of an oncoming automation crisis, especially in relation to Universal Basic Income (UBI) as a solution to such a crisis. I claim that there is a hole in the UBI literature concerning this problem of idleness, and that if left unaddressed it would result both in an ineffective UBI and in a crisis of meaning for the general populace. This dissertation demonstrates what the problem of idleness is, why it is important, and what possible solutions exist. This contributes to the UBI literature by diagnosing and attempting to solve a gap in the literature which I argue would cause practical challenges in the implementation and stability of a UBI system. I also contribute to the Arendtian literature by problematizing traditional readings of Arendt, and offering a reappraisal of her thought on Marx, art, and the social.
Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us to deny responsibility for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high standing could accept responsibility for the actions of autonomous robotic devices—even if that person could not be causally linked to those actions besides this prior agreement. The basic intuition behind our proposal is that humans can impute relations even when no other form of contact can be established. The missed alternative we want to highlight, then, would consist in an exchange: Social prestige in the occupation of a given office would come at the price of signing away part of one's freedoms to a contingent and unpredictable future guided by another agency.
This book explores how robotics and artificial intelligence can enhance human lives but also have unsettling “dark sides.” It examines expanding forms of negativity and anxiety about robots, AI, and autonomous vehicles as our human environments are reengineered for intelligent military and security systems and for optimal workplace and domestic operations. It focuses on the impacts of initiatives to make robot interactions more humanlike and less creepy. It analyzes the emerging resistances against these entities in the wake of omnipresent AI applications. It unpacks efforts by developers to have ethical and social influences on robotics and AI, and confronts the AI hype that is designed to shield the entities from criticism. The book draws from science fiction, dramaturgical, ethical, and legal literatures as well as current research agendas of corporations. Engineers, implementers, and researchers have often encountered users' fears and aggressive actions against intelligent entities, especially in the wake of deaths of humans by robots and autonomous vehicles. The book is an invaluable resource for developers and researchers in the field, as well as curious readers who want to play proactive roles in shaping future technologies.
I am very used to strange books and special people, but Hawkins stands out due to his use of a simple technique for testing muscle tension as a key to the “truth” of any kind of statement whatsoever—i.e., not just to whether the person being tested believes it, but whether it is really true! What is well known is that people will show automatic, unconscious physiological and psychological responses to just about anything they are exposed to—images, sounds, touch, odors, ideas, people. So, muscle reading to find out their true feelings is not radical at all, unlike using it as a dowsing stick (more muscle reading) to do “paranormal science”. Hawkins describes the use of decreasing tension in the muscles of an arm in response to increases in cognitive load, causing the arm to drop in response to the constant pressure of someone’s fingers. He seems unaware that there is a long-established and vast ongoing research effort in social psychology referred to by such phrases as ‘implicit cognition’ and ‘automaticity’, and that his use of ‘kinesiology’ is one tiny section of it. In addition to muscle tone (infrequently used), social psychologists measure EEG, galvanic skin response, and most frequently verbal responses to words, sentences, images or situations, at times varying from seconds to months after the stimulus. Many, such as Bargh and Wegner, take the results to mean we are automatons who learn and act largely without awareness via S1 (automated System 1), and many others, such as Kihlstrom and Shanks, say these studies are flawed and we are creatures of S2 (deliberative System 2). Though Hawkins seems to have no idea, as in other areas of the descriptive psychology of higher-order thought, the situation regarding “automaticity” is still as chaotic as it was when Wittgenstein described the reasons for the sterility and barrenness of psychology in the 30’s. Nevertheless, this book is an easy read and some therapists and spiritual teachers may find it of use. Those wishing a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019) and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: the literature on technological unemployment and workplace automation; the antiwork critique—which I argue gives reasons to embrace technological unemployment; and the philosophical debate about the conditions for meaning in life—which I argue gives reasons for concern.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
Engineers are traditionally regarded as trustworthy professionals who meet exacting standards. In this chapter I begin by explicating our trust relationship towards engineers, arguing that it is a linear but indirect relationship in which engineers “stand behind” the artifacts and technological systems that we rely on directly. The chapter goes on to explain how this relationship has become more complex as engineers have taken on two additional aims: the aim of social engineering to create and steer trust between people, and the aim of creating automated systems that take over human tasks and are meant to invite the trust of those who rely on and interact with them.