The past four decades of research in the social sciences have shed light on two important phenomena. One is that human decision-making is full of predictable errors and biases that often lead individuals to make choices that defeat their own ends (i.e., the bad choice phenomenon), and the other is that individuals’ decisions and behaviors are powerfully shaped by their environment (i.e., the influence phenomenon). Some have argued that it is ethically defensible that the influence phenomenon be utilized to address the bad choice phenomenon. They propose that “choice architects” learn about the various ways in which choices can be influenced and directed by the environment, and then work to design environments, broadly construed, that influence individuals towards choices that make them better off. Those who advocate intentionally creating choice environments that lead people to better choices believe that doing so is ethically permissible because (1) it makes people better off, and (2) it does so in a way that is entirely compatible with individual liberty. The evaluation of these two claims is the main focus of this paper.
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
While nudging has garnered plenty of interdisciplinary attention, the ethics of applying it to climate policy has been little discussed. However, not all ethical considerations surrounding nudging are straightforward to apply to climate nudges. In this article, we overview the state of the debate on the ethics of nudging and highlight themes that are either specific to or particularly important for climate nudges. These include: the justification of nudges that are not self-regarding; how to account for climate change denialists; transparency; knowing the right or best behaviours; justice concerns; and whether the efficacy of nudges is sufficient for nudges to be justified as a response to the climate crisis. We conclude that climate nudges raise distinct ethical questions that ought to be considered in their development.
Amongst philosophers and cognitive scientists, modularity remains a popular choice for an architecture of the human mind, primarily because of the supposed explanatory value of this approach. Modular architectures can vary both with respect to the strength of the notion of modularity and the scope of the modularity of mind. We propose a dilemma for modular architectures, no matter how these architectures vary along these two dimensions. First, if a modular architecture commits to the informational encapsulation of modules, as is the case for modularity theories of perception, then modules are on this account impenetrable. However, we argue that there are genuine cases of the cognitive penetrability of perception and that these cases challenge any strong, encapsulated modular architecture of perception. Second, many recent massive modularity theories weaken the strength of the notion of module, while broadening the scope of modularity. These theories do not require any robust informational encapsulation, and thus avoid the incompatibility with cognitive penetrability. However, the weakened commitment to informational encapsulation greatly weakens the explanatory force of the theory and, ultimately, is conceptually at odds with the core of modularity.
This essay explores a conception of responsibility at work in moral and criminal responsibility. Our conception draws on work in the compatibilist tradition that focuses on the choices of agents who are reasons-responsive and work in criminal jurisprudence that understands responsibility in terms of the choices of agents who have capacities for practical reason and whose situation affords them the fair opportunity to avoid wrongdoing. Our conception brings together the dimensions of normative competence and situational control, and we factor normative competence into cognitive and volitional capacities, which we treat as equally important to normative competence and responsibility. Normative competence and situational control can and should be understood as expressing a common concern that blame and punishment presuppose that the agent had a fair opportunity to avoid wrongdoing. This fair opportunity is the umbrella concept in our understanding of responsibility, one that explains its distinctive architecture.
Nudges are small changes in the presentation of options that make a predictable impact on people's decisions. Proponents of nudges often claim that they are justified as paternalistic interventions that respect autonomy: they lead people to make better choices, while still letting them choose for themselves. However, existing work on nudges ignores the possibility of “hard choices”: cases where a person prefers one option in some respects, and another in other respects, but has no all‐things‐considered preference between the two. In this paper, I argue that many significant medical decisions are hard choices that provide patients with an opportunity to exercise a distinctive sort of “formative autonomy” by settling their preferences and committing themselves to weigh their values in a particular way. Since nudges risk infringing formative autonomy by depriving patients of this opportunity, their use in medical contexts should be sensitive to this risk.
Cass Sunstein and Richard Thaler's proposal that social and legal institutions should steer individuals toward some options and away from others (a stance they dub "libertarian paternalism") has provoked much high-level discussion in both academic and policy settings. Sunstein and Thaler believe that steering, or "nudging," individuals is easier to justify than the bans or mandates that traditional paternalism involves.

This Article considers the connection between libertarian paternalism and the regulation of reproductive choice. I first discuss the use of nudges to discourage women from exercising their right to choose an abortion, or from becoming or remaining pregnant. I then argue that reproductive choice cases illustrate the limitations of libertarian paternalism. Where choices are politicized or intimate, as reproductive choices often are, nudges become not much easier to justify than traditional mandates or prohibitions. Even beyond the context of reproductive choice, it is not obvious how much easier nudges are to justify than bans or mandates.

Part I of this Article briefly introduces Sunstein and Thaler's libertarian paternalism. Part II then turns to the context of reproductive choice. Part II.A reviews restrictions on the right to choose an abortion (particularly post-Casey regulations such as waiting periods, requirements that women receive certain types of information, and requirements that women undergo ultrasound) that pitch themselves as steering choice without entirely closing off the right to choose an abortion. This distinction between nudges and prohibitions echoes Sunstein and Thaler's proposals, but works to subordinate women's choices to the judgment of (often male) experts and administrators; hence my term "libertarian patriarchalism." Part II.B reviews efforts to nudge women (particularly teenagers, HIV-positive women, and others thought to be unsuitable mothers) to avoid pregnancy.
Part III considers the normative implications of nudging reproductive decisions. In Part III.A, I argue that the political nature of reproductive choices presents a problem for nudges. I do so by considering a parallel with voting rights. Empirical research shows that voters are more likely to choose the candidate listed first on the ballot. Yet we do not empower the administrator in charge of ballot design to choose a default rule that nudges individuals toward the candidate he sincerely believes would promote choosers' welfare. Given the political nature of reproductive choices, a policymaker's attempting to nudge reproductive decisionmaking in the direction he prefers (or indeed in any direction) fails to show adequate respect for the chooser's agency. In Part III.B, I offer an argument that targets the use of nudges in the context of pregnancy. Finally, in Part III.C, I argue that nudges do not merely add choices to an existing menu, but change the substantive choices available to individuals and thereby impose more-than-trivial costs on them. I conclude by exploring the implications of my arguments for nudges more generally.
Advances in cognitive and behavioral science show that the way options are presented—commonly referred to as “choice architecture”—strongly influences our decisions: we tend to react to a particular option differently depending on how it is presented. Studies suggest that we often make irrational choices due to the interplay between choice architecture and systematic errors in our reasoning—cognitive biases. Based on this data, Richard Thaler and Cass Sunstein came up with the idea of a "nudge," which they define as a small change in the choice architecture that steers people towards certain choices without limiting their options. Central to the debate on nudging is the question of when, or under what circumstances, it is ethical to nudge someone. Proponents of nudging argue that for ethical nudging, the nudge must be (1) easy to resist and (2) aimed towards the welfare of those nudged (nudgees). In this paper, I argue that this criterion excludes nudges that are both intuitively permissible and broadly accepted. I propose an alternative set of sufficient conditions for ethical nudging that expands the domain for the ethical application of nudges. My “no harm” criterion states that for a nudge to be ethical, the nudge must (1) be easy to resist, and (2) produce no significant harm for the nudgee. I identify three types of such nudges: Choice Architect nudges, Third Party nudges, and “Meh” nudges, and offer examples for each.
This thesis contributes to a better conceptual understanding of how self-organized control works. I begin by analyzing the control problem and its solution space. I argue that the two prominent solutions offered by classical cognitive science (centralized control with rich commands, e.g., the Fodorian central systems) and embodied cognitive science (distributed control with simple commands, such as the subsumption architecture by Rodney Brooks) are merely two positions in a two-dimensional solution space. I outline two alternative positions: one is distributed control with rich commands, defended by proponents of the massive modularity hypothesis; the other is centralized control with simple commands. My goal is to develop a hybrid account that combines aspects of the second alternative position and that of embodied cognitive science (i.e., centralized and distributed controls with simple commands). Before developing my account, I discuss the virtues and challenges of the first three. This discussion results in a set of criteria for successful neural control mechanisms. Then, I develop my account by analyzing neuroscientific models of decision-making and control with the theoretical lenses provided by formal decision and social choice theories. I contend that neural processes can be productively modeled as a collective of agents, and neural self-organization is analogous to democratic self-governance. In particular, I show that the basal ganglia, a set of subcortical structures, contribute to the production of coherent and intelligent behaviors through implementing “democratic” procedures. Unlike the Fodorian central system—which is a micro-managing “neural commander-in-chief”—the basal ganglia are a “central election commission.” They delegate control of habitual behaviors to other distributed control mechanisms.
Yet, when novel problems arise, they engage and determine the result on the basis of simple information (the votes) from across the system, following the principles of neurodemocracy, and exert control with simple commands of inhibition and disinhibition. By actively managing and taking advantage of the wisdom-of-the-crowd effect, these democratic processes enhance the intelligence and coherence of the mind’s final "collective" decisions. I end by defending this account from both philosophical and empirical criticisms and showing that it meets the criteria for a successful solution.
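The "central election commission" analogy can be illustrated with a toy sketch (my own construction for illustration, not a model drawn from the thesis): each candidate action channel accumulates "votes" from distributed sources, every channel is tonically inhibited by default, and only the channel with the strongest aggregate support is released from inhibition.

```python
def select_action(votes, threshold=0.0):
    """Winner-take-all selection by disinhibition: every action channel
    starts tonically inhibited; only the channel with the strongest
    aggregate support (its "votes") is released from inhibition."""
    inhibited = {action: True for action in votes}
    winner = max(votes, key=votes.get)
    if votes[winner] > threshold:
        inhibited[winner] = False  # disinhibit the winning channel
    return [a for a, blocked in inhibited.items() if not blocked]

# simple "votes" arriving from distributed sources across the system
votes = {"reach": 0.2, "withdraw": 0.7, "freeze": 0.4}
print(select_action(votes))  # → ['withdraw']
```

On this sketch, control is exercised only through the simple commands of inhibition and disinhibition, while the evidence that decides the election remains distributed across the voters.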
Toleration is one of the fundamental principles that inform the design of a democratic and liberal society. Unfortunately, its adoption seems inconsistent with the adoption of paternalistically benevolent policies, which represent a valuable mechanism to improve individuals’ well-being. In this paper, I refer to this tension as the dilemma of toleration. The dilemma is not new. It arises when an agent A would like to be tolerant and respectful towards another agent B’s choices but, at the same time, A is altruistically concerned that a particular course of action would harm, or at least not improve, B’s well-being, so A would also like to be helpful and seeks to ensure that B does not pursue such a course of action, for B’s sake and even against B’s consent. In the article, I clarify the specific nature of the dilemma and show that several forms of paternalism, including those based on ethics by design and structural nudging, may not be suitable to resolve it. I then argue that one form of paternalism, based on pro-ethical design, can be compatible with toleration and hence with the respect for B’s choices, by operating only at the informational and not at the structural level of a choice architecture. This provides a successful resolution of the dilemma, showing that tolerant paternalism is not an oxymoron but a viable approach to the design of a democratic and liberal society.
In this article, we apply the literature on the ethics of choice architecture (nudges) to the realm of virtual reality (VR) to point out ethical problems with using VR for empathy-based nudging. Specifically, we argue that VR simulations aiming to enhance empathic understanding of others via perspective-taking will almost always be unethical to develop or deploy. We argue that VR-based empathy enhancement not only faces traditional ethical concerns about nudges (autonomy, welfare, transparency), but also a variant of the semantic variance problem that arises for intersectional perspective-taking. VR empathy simulations deceive and manipulate their users about their experiences. Despite their often laudable goals, such simulations confront significant ethical challenges. In light of these goals and challenges, we conclude by proposing that VR designers shift from designing simulations aimed at producing empathic perspective-taking to designing simulations aimed at generating sympathy. These simulations, we claim, can avoid the most serious ethical issues associated with VR nudges, semantic variance, and intersectionality.
Nudging is the idea that people’s decisions and behaviors can be influenced in predictable, non-coercive ways by making small changes to the choice architecture. In this paper, I differentiate between type-1 nudges and type-2 nudges according to the thinking processes involved in each. With this distinction in hand, I present the libertarian paternalistic criteria for the moral permissibility of intentional nudges. Having done this, I motivate an objection to type-1 nudges. According to this objection, type-1 nudges do not appear to be relevantly different from standard cases of manipulation, and manipulation is morally problematic. While I show that this objection fails, I argue that its evaluation raises a different challenge for Libertarian Paternalism. The libertarian paternalistic criteria fail because they ignore the moral distinction that exists between different kinds of nudges: the distinction between what I call ‘counteractive’ and ‘non-counteractive’ nudges. I end by suggesting a revision of the criteria that avoids the problem.
This volume considers forms of information manipulation and restriction in contemporary society. It explores whether and when manipulation of the conditions of inquiry without the consent of those manipulated is morally or epistemically justified. The contributors provide a wealth of examples of manipulation, and debate whether epistemic paternalism is distinct from other forms of paternalism debated in political theory. Special attention is given to medical practice, science communication, and research in science, technology, and society. Some of the contributors argue that unconsenting interference with or "conceptual engineering" of people’s beliefs and ability to inquire is consistent with, and others that it is inconsistent with, efforts to democratize knowledge and decision-making.
Well-being can be promoted in two ways. Firstly, by affecting the quantity, quality and allocation of bundles of consumption (the Resource Approach), and secondly, by influencing how people benefit from their goods (the Taste Approach). Whereas the former is considered an ingredient of economic analysis, the latter has conventionally not been included in that field. By identifying the gain the Taste Approach might yield, the article questions whether this asymmetry is justified. If successfully exercised, the Taste Approach might not only enable people to raise their well-being, but also provide solutions to a number of issues such as sustainable development and global justice. The author argues that recently developed accounts such as Happiness Economics (HE) and Libertarian Paternalism (LP) both can be considered specifications of the Taste Approach. Furthermore, a third specification is identified: Inexpensive Preference Formation (IPF). Whereas LP suggests that choice architecture should be exercised when rationality fails, IPF holds that governance in certain instances should improve choices even in the absence of such failure.
While autonomous vehicles (AVs) are not designed to harm people, harming people is an inevitable by-product of their operation. How are AVs to deal ethically with situations where harming people is inevitable? Rather than focus on the much-discussed question of what choices AVs should make, we can also ask the much less discussed question of who gets to decide what AVs should do in such cases. Here there are two key options: AVs with a personal ethics setting (PES) or an "ethical knob" that end users can control, or AVs with a mandatory ethics setting (MES) that end users cannot control. Which option, a PES or an MES, is best and why? This chapter argues, by drawing on the choice architecture literature, in favor of a hybrid view that requires mandated default choice settings while allowing for limited end user control.
Philosophers of mind and cognitive scientists have recently taken renewed interest in cognitive penetration, in particular, in the cognitive penetration of perceptual experience. The question is whether cognitive states like belief influence perceptual experience in some important way. Since the possible phenomenon is an empirical one, the strategy for analysis has, predictably, proceeded as follows: define the phenomenon and then, definition in hand, interpret various psychological data. However, different theorists offer different and apparently inconsistent definitions. And so in addition to the usual problems (e.g., definitions being challenged by counterexample), an important result is that different theorists apply their definitions and accordingly get conflicting answers to the question “Is this a genuine case of cognitive penetration?”. This hurdle to philosophical and scientific progress can be remedied, I argue, by returning attention to the alleged consequences of the possible phenomenon. There are three: theory-ladenness of perception in contexts of scientific theory choice, a threat to the general epistemic role of perception, and implications for mental architecture. Any attempt to characterize or define, and then empirically test for, cognitive penetration should be constrained by these consequences. This is a method for interpreting and acquiring experimental data in a way that is agreeable to both sides of the cognitive penetration debate. Put crudely, the question shifts to “Is this a cognitive-perceptual relation that results in (or constitutes) one or more of the relevant consequences?” In answering this question, relative to various data, it may turn out that there is no single unified phenomenon of cognitive penetration. But this should be no matter, since it is the consequences that are of central importance to philosophers and scientists alike.
This selective review explores biologically inspired learning as a model for intelligent robot control and sensing technology on the basis of specific examples. Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence, as explained on the basis of examples from the highly plastic biological neural networks of invertebrates and vertebrates. Its potential for adaptive learning and control without supervision, the generation of functional complexity, and control architectures based on self-organization is brought forward. Learning without prior knowledge based on excitatory and inhibitory neural mechanisms accounts for the process through which survival-relevant or task-relevant representations are either reinforced or suppressed. The basic mechanisms of unsupervised biological learning drive synaptic plasticity and adaptation for behavioral success in living brains with different levels of complexity. The insights collected here point toward the Hebbian model as a choice solution for “intelligent” robotics and sensor systems. Keywords: Hebbian learning; synaptic plasticity; neural networks; self-organization; brain; reinforcement; sensory processing; robot control.
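As a minimal sketch of the Hebbian principle the review describes (the textbook rule with a passive decay term, not an implementation of any specific model in the review): a weight strengthens when pre- and postsynaptic activity coincide, so repeatedly encountered, task-relevant inputs are reinforced while silent inputs are not.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1, decay=0.01):
    # Hebb's rule with passive decay: a weight grows where presynaptic
    # activity x and postsynaptic activity y coincide; decay bounds it.
    return w + lr * y * x - decay * w

w = np.zeros(4)
pattern = np.array([1.0, 0.0, 1.0, 0.0])  # a repeatedly encountered, task-relevant input
for _ in range(500):
    w = hebbian_update(w, pattern, y=1.0)  # the unit fires whenever the pattern occurs

# weights on active inputs converge toward lr / decay; silent inputs stay at zero
print(w.round(2))
```

No supervision signal or prior knowledge is needed: the correlation structure of the input alone decides which connections are reinforced, which is what makes the rule attractive for self-organizing robot control.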
My research into Virtual Reality technology and its central property of immersion has indicated that immersion in Virtual Reality (VR) electronic systems is a significant key to the understanding of contemporary culture as well as considerable aspects of previous culture as detected in the histories of philosophy and the visual arts. The fundamental change in aesthetic perception engendered by immersion, a perception which is connected to the ideal of total-immersion in virtual space, identifies certain shifts in ontology which are relevant to a better understanding of the human being. This understanding was achieved through a broad inquiry into the histories of Virtual Reality, philosophy, and the visual arts and has led to the formulation of an aesthetic theory of immersive consciousness indicative of immersive culture. The primary subject of this discourse, then, is immersion: an experience which will be identified within the dissertation as the indispensable characteristic of Virtual Reality. The understanding of immersion arrived at here will be used to fashion a synchronous theory of art particularly informed by encounters and concepts of immersion into virtuality. To sufficiently address this subject in a scholarly fashion, I have researched, found and accumulated aesthetic and philosophic examples of immersive tendencies, as found within the histories of art and philosophy, which subsequently contributed towards the articulation of what I have come to call immersive consciousness. As a result of formulating such an immersive consciousness, a good deal of the basis for the questioning of the Western ontological tradition has been found in the Western tradition itself when we look with new eyes and ask new uncertain questions. Moreover, this immersive consciousness will be used to propose some abstract questions encircling today's electronic-based culture.
Through the structuring of the argument within the thesis - and overtly within the conclusion - I have articulated a non-teleological creative strategy which provides the basis for an unconstraining integration of noologies (ways of semblancing the thinking process). This strategy provides a means of exemplifying - and for honoring - various methods of thinking. This structuring strategy is consistent with the 'hacker ethic' as defined by Steven Levy, as a demand that access to computers - and anything which might teach us something about the way the world works - should be unlimited and total. To follow this strategy, this dissertation has set out to understand how topical conceptions of virtual immersion connect to pre-existing systems of thought as revealed in art as they have extended out of antecedent ontological self-understandings, historical human self-understandings which have evidenced themselves in the elaboration of technological objectives. To do this I have forged a certain rhizomatic paternity/maternity for Virtual Reality within this dissertation by joining choice immersive examples of simulacra technology into mental connections with the relevant examples culled from the histories of art, architecture, information-technology, sex, myth, space, consciousness and philosophy.
This work develops a philosophically credible and psychologically realisable account of control that is necessary for moral responsibility. We live, think, and act in an environment of subjective uncertainty and limited information. As a result, our decisions and actions are influenced by factors beyond our control. Our ability to act freely is restricted by uncertainty, ignorance, and luck. Through three articles, I develop a naturalistic theory of control for action as a process of error minimisation that extends over time. Thus conceived, control can serve to minimise the influence of luck on action and enable freedom in uncertainty.

In Article 1, Thor Grünbaum and I argue for a psychologically plausible account of the kind of control that is necessary for moral responsibility. We begin by establishing the relationship between agentive contribution and responsibility-level control. One way to determine whether one has the right kind and degree of control to be morally responsible is to track one’s degree of contribution to an action. However, a psychologically plausible account of control in terms of agentive contribution may seem to contradict the types of functional-mechanistic explanations used in cognitive science. In cognitive psychology and cognitive neuroscience, personal-level capacities are explained by a set of sub-personal mechanisms. Often, such explanations leave no room for a contribution by the agent. By integrating insights from theories of cognitive control and incorporating them into a philosophical account of intentions, we propose a way of thinking about the distribution of cognitive control resources as something the agent does.

Article 2 argues that a classic argument concerning luck, originally aimed at libertarianism, generalises beyond any specific theory of free will, and regardless of whether determinism or indeterminism is true. I call this mental luck.
Because we all make decisions under conditions of relative uncertainty and limited information, it is possible for an agent to make decisions that are contrary to their own motivation. In such situations, it may be a matter of luck whether the agent makes the right decision. Mentally lucky decisions are not rationally governed by attitudes within the agent's perspective, and thus, may be indistinguishable from unlucky ones. From the perspective of the agent, such decisions may resemble the outcome of a lottery. Therefore, mental luck poses a challenge to most prominent theories of free action and moral responsibility.

Finally, Article 3 engages with the issue of resultant luck, namely luck in how things turn out. Resultant luck raises a challenge for theories of moral responsibility because its existence suggests that one may be responsible for factors beyond one’s control. Prominent responses to resultant luck lead to a choice between internalism and scepticism. I argue that familiar cases of resultant luck are based on the assumption that actions are events. Instead, I propose an alternative ontology of action as an ongoing goal-directed process with a many-shots structure. Described this way, cases of resultant luck are not representative of ordinary action. The proposal of action as a many-shots process is consistent with predictive coding, a cognitive architecture which centres around error minimisation. Under this framework, cases of resultant luck are no longer failures of action, but rather anticipated errors to be settled within the ordinary process of action.
The term gestalt, when used in the context of general systems theory, assumes the value of “systemic touchstone”, namely a figure of reference useful to categorize the properties or qualities of a set of systems. Typical gestalts used, e.g., in biology, are those based on anatomical or physiological characteristics, which correspond respectively to architectural and organizational design choices in natural and artificial systems. In this paper we discuss three gestalts of general systems theory: behavior, organization, and substance, which refer respectively to the works of Wiener, Boulding, and Leibniz. Our major focus here is the system introduced by the latter. Through a discussion of some of the elements of the Leibnizian system, and by means of several novel interpretations of those elements in terms of today’s computer science, we highlight the debt that contemporary research still owes to this Giant among the giant scholars of the past.
The public constitutes a major stakeholder in the debate about, and resolution of, privacy and ethical issues in Big Data research. This raises the questions of how to take public concerns about Big Data research seriously and how to communicate messages designed to build trust in specific big data projects and the institution of science in general. This chapter explores the implications of various examples of engaging the public in online activities such as Wikipedia that contrast with “Notice and Consent” forms and offers models for scientists to consider when approaching their potential subjects in research. Drawing from Lessig, Code and Other Laws of Cyberspace, the chapter suggests that four main regulators drive the shape of online activity: Code (or Architecture), Laws, Markets, and Norms. Specifically, scientists should adopt best practices in protecting computerized Big Data (Code), remain completely transparent about their data management practices (Law), make smart choices when deploying digital solutions that place a premium on information protection (Market), and, critically, portray themselves to the public as seriously concerned with protecting the privacy of persons and security of data (Norms). The community of Big Data users and collectors should remember that such data are not just “out there” somewhere but are the intimate details of the lives of real persons who have just as deep an interest in protecting their privacy as they do in the good work that is conducted with such data.
This paper develops a semantic solution to the puzzle of Free Choice permission. The paper begins with a battery of impossibility results showing that Free Choice is in tension with a variety of classical principles, including Disjunction Introduction and the Law of Excluded Middle. Most interestingly, Free Choice appears incompatible with a principle concerning the behavior of Free Choice under negation, Double Prohibition, which says that Mary can’t have soup or salad implies Mary can’t have soup and Mary can’t have salad. Alonso-Ovalle 2006 and others have appealed to Double Prohibition to motivate pragmatic accounts of Free Choice. Aher 2012, Aloni 2018, and others have developed semantic accounts of Free Choice that also explain Double Prohibition. This paper offers a new semantic analysis of Free Choice designed to handle the full range of impossibility results involved in Free Choice. The paper develops the hypothesis that Free Choice is a homogeneity effect. The claim possibly A or B is defined only when A and B are homogeneous with respect to their modal status, either both possible or both impossible. Paired with a notion of entailment that is sensitive to definedness conditions, this theory validates Free Choice while retaining a wide variety of classical principles except for the transitivity of entailment. The homogeneity hypothesis is implemented in two different ways, homogeneous alternative semantics and homogeneous dynamic semantics, with interestingly different consequences.
This paper argues that while Heidegger showed the importance of architecture in altering people's modes of being to avoid global ecological destruction, the work of Christopher Alexander offered a far more practical orientation to deal with this problem.
Expanding his collected essays on architectural theory and criticism, Chris Abel pursues his explorations across disciplinary and regional boundaries in search of a deeper understanding of architecture in the evolution of human culture and identity formation. From his earliest writings predicting the computer-based revolution in customised architectural production, through his novel studies on 'tacit knowing' in design or hybridisation in regional and colonial architecture, to his radical theory of the 'extended self', Abel has been a consistently fresh and provocative thinker, contesting both conventions and intellectual fashions. This revised third edition includes a new introduction and six additional chapters by the author covering a broad range of related topics, up to recent concerns with genetic design methods and virtual selves. Together with the former essays, the book presents a unique global perspective on the changing cultural issues and technologies shaping human identities and the built environment in diverse parts of the world, both East and West (from the book cover).
People cannot contemplate a proposition without believing that proposition. A model of belief fixation is sketched and used to explain hitherto disparate, recalcitrant, and somewhat mysterious psychological phenomena and philosophical paradoxes. Toward this end I also contend that our intuitive understanding of the workings of introspection is mistaken. In particular, I argue that propositional attitudes are beyond the grasp of our introspective capacities. We learn about our beliefs from observing our behavior, not from introspecting our stock of beliefs. The model of belief fixation offered in the dissertation poses a novel dilemma for theories of rationality. One might have thought that the ability to contemplate ideas while withholding assent is a necessary condition on rationality. In short, it seems that rational creatures shouldn’t just form their beliefs based on whatever they happen to think. However, it seems that we are creatures that automatically and reflexively form our beliefs based on whatever propositions we happen to consider. Thus, either the rational requirement that states that we must have evidence for our beliefs must be jettisoned or we must accept the conclusion that we are necessarily irrational.
Normative thinking about addiction has traditionally been divided between, on the one hand, a medical model which sees addiction as a disease characterized by compulsive and relapsing drug use over which the addict has little or no control and, on the other, a moral model which sees addiction as a choice characterized by voluntary behaviour under the control of the addict. Proponents of the former appeal to evidence showing that regular consumption of drugs causes persistent changes in the brain structures and functions known to be involved in the motivation of behavior. On this evidence, it is often concluded that becoming addicted involves a transition from voluntary, chosen drug use to non-voluntary compulsive drug use. Against this view, proponents of the moral model provide ample evidence that addictive drug use involves voluntary chosen behaviour. In this paper we argue that although they are right about something, both views are mistaken. We present a third model that rules out neither the view that addictive drug use is compulsive nor the view that it involves voluntarily chosen behaviour.
ABSTRACT: This paper serves two purposes: (i) it can be used by students as an introduction to chapters 1-5 of book iii of the NE; (ii) it suggests an answer to the unresolved question what overall objective this section of the NE has. The paper focuses primarily on Aristotle’s theory of what makes us responsible for our actions and character. After some preliminary observations about praise, blame and responsibility (Section 2), it sets out in detail how all the key notions of NE iii 1-5 are interrelated (Sections 3-9). The setting-out of these interconnections makes it then possible to provide a comprehensive interpretation of the purpose of the passage. Its primary purpose is to explain how agents are responsible for their actions not just insofar as they are actions of this kind or that, but also insofar as they are noble or base: agents are responsible for their actions qua noble or base, because, typically via choice, their character dispositions are a causal factor of those actions (Section 10). The paper illustrates the different ways in which agents can be causes of their actions by means of Aristotle’s four basic types of agents (Section 11). A secondary purpose of NE iii 1-5 is to explain how agents can be held responsible for consequences of their actions (Section 12), in particular for their character dispositions insofar as these are noble or base, i.e. virtues or vices (Section 13). These two goals are not the only ones Aristotle pursues in the passage. But they are the ones Aristotle himself indicates in its first sentence and summarizes in its last paragraph; and the ones that give the passage a systematic unity. The paper also briefly considers the issues of freedom-to-do-otherwise, free choice and free-will in the contexts in which they occur (i.e. in the final paragraphs of Sections 6, 7, 12, 13).
There is fast-growing awareness of the role atmospheres play in architecture. Of equal interest to contemporary architectural practice as it is to aesthetic theory, this 'atmospheric turn' owes much to the work of the German philosopher Gernot Böhme. Atmospheric Architectures: The Aesthetics of Felt Spaces brings together Böhme's most seminal writings on the subject, through chapters selected from his classic books and articles, many of which have hitherto only been available in German. This is the only translated version authorised by Böhme himself, and is the first coherent collection deploying a consistent terminology. It is a work which will provide rich references and a theoretical framework for ongoing discussions about atmospheres and their relations to architectural and urban spaces. Combining philosophy with architecture, design, landscape design, scenography, music, art criticism, and visual arts, the essays together provide a key to the concepts that motivate the work of some of the best contemporary architects, artists, and theorists: from Peter Zumthor, Herzog & de Meuron and Juhani Pallasmaa to Olafur Eliasson and James Turrell. With a foreword by Professor Mark Dorrian and an afterword by Professor David Leatherbarrow, the volume also includes a general introduction to the topic, including coverage of its history, development, areas of application and conceptual apparatus.
Taken collectively, consumer food choices have a major impact on animal lives, human lives, and the environment. But it is far from clear how to move from facts about the power of collective consumer demand to conclusions about what one ought to do as an individual consumer. In particular, even if a large-scale shift in demand away from a certain product (e.g., factory-farmed meat) would prevent grave harms or injustices, it typically does not seem that it will make a difference whether one refrains from purchasing that product oneself. Most present-day food companies operate at too large a scale for a single purchase to make a difference to production decisions. If that is true, then it is not clear what point there is in refraining. This is “the problem of collective impact.” This chapter explores a range of proposals for how to solve this problem.
Several contemporary architects have designed architectural objects that are closely linked to their particular sites. An in-depth study of the relevant relationship holding between those objects and their sites is, however, missing. This paper addresses the issue, arguing that those architectural objects are akin to works of site-specific art. In section (1), I introduce the topic of the paper. In section (2), I critically analyse the debate on the categorisation of artworks as site-specific. In section (3), I apply to architecture the lesson learned from the analysis of the art debate.
A decision maker (DM) selects a project from a set of alternatives with uncertain productivity. After the choice, she observes a signal about productivity and decides how much effort to put in. This paper analyzes the optimal decision problem of the DM who rationally filters information to deal with her post-decision cognitive dissonance. It is shown that the optimal effort level for a project can be affected by unchosen projects in her choice set, and the nature of the choice set-dependence is determined by the signal structure. Some comparative statics of choice set-dependence is also provided. Finally, based on the results, the optimal choice set design is also explored. This paper offers a simple framework to explain the experimental finding in psychology that people’s effort level for a project can be enhanced when the project is chosen by themselves rather than by others.
Causal Decision Theory reckons the choice-worthiness of an option to be completely independent of its evidential bearing on its non-effects. But after one has made a choice this bearing is relevant to future decisions. Therefore it is possible to construct problems of sequential choice in which Causal Decision Theory makes a guaranteed loss. So Causal Decision Theory is wrong. The source of the problem is the idea that agents have a special perspective on their own contemplated actions, from which evidential connections that observers can see are either irrelevant or invisible.
This article connects value-sensitive design to Gibson’s affordance theory: the view that we perceive in terms of the ease or difficulty with which we can negotiate space. Gibson’s ideas offer a nonsubjectivist way of grasping culturally relative values, out of which we develop a concept of political affordances, here understood as openings or closures for social action, often implicit. Political affordances are equally about environments and capacities to act in them. Capacities and hence the severity of affordances vary with age, health, social status and more. This suggests settings are selectively permeable, or what postphenomenologists call multistable. Multistable settings are such that a single physical location shows up differently – as welcoming or hostile – depending on how individuals can act on it. In egregious cases, authoritarian governments redesign politically imbued spaces to psychologically cordon both them and the ideologies they represent. Selective permeability is also orchestrated according to business interests, which is symptomatic of commercial imperatives increasingly dictating what counts as moral and political goods.
Hard Choices. Ruth Chang - 2017 - Journal of the American Philosophical Association 3 (1):1-21.
What makes a choice hard? I discuss and criticize three common answers and then make a proposal of my own. Paradigmatic hard choices are not hard because of our ignorance, the incommensurability of values, or the incomparability of the alternatives. They are hard because the alternatives are on a par; they are comparable, but one is not better than the other, and yet nor are they equally good. So understood, hard choices open up a new way of thinking about what it is to be a rational agent.
The idea that there is a “Number Sense” (Dehaene, 1997) or “Core Knowledge” of number ensconced in a modular processing system (Carey, 2009) has gained popularity as the study of numerical cognition has matured. However, these claims are generally made with little, if any, detailed examination of which modular properties are instantiated in numerical processing. In this article, I aim to rectify this situation by detailing the modular properties on display in numerical cognitive processing. In the process, I review literature from across the cognitive sciences and describe how the evidence reported in these works supports the hypothesis that numerical cognitive processing is modular. I outline the properties that would suffice for deeming a certain processing system a modular processing system. Subsequently, I use behavioral, neuropsychological, philosophical, and anthropological evidence to show that the number module is domain specific, informationally encapsulated, neurally localizable, subject to specific pathological breakdowns, mandatory, fast, and inaccessible at the person level; in other words, I use the evidence to demonstrate that some of our numerical capacity is housed in modular casing.
In democracies citizens are supposed to have some control over the general direction of policy. According to a pretheoretical interpretation of this idea, the people have control if elections and other democratic institutions compel officials to do what the people want, or what the majority want. This interpretation of popular control fits uncomfortably with insights from social choice theory; some commentators—Riker, most famously—have argued that these insights should make us abandon the idea of popular rule as traditionally understood. This article presents a formal theory of popular control that responds to the challenge from social choice theory. It makes precise a sense in which majorities may be said to have control even if the majority preference relation has an empty core. And it presents a simple game-theoretic model to illustrate how majorities can exercise control in this specified sense, even when incumbents are engaged in purely redistributive policymaking and the majority rule core is empty.
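The social-choice result this abstract responds to can be illustrated concretely. The snippet below (a minimal sketch, not the article's own model; the voters and options are hypothetical) shows the classic Condorcet cycle, in which every option is majority-beaten by some other option, so the majority preference relation has an empty core:

```python
# Three voters with cyclic preferences over options a, b, c.
# Each tuple lists one voter's ranking, best first.
profiles = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    return sum(p.index(x) < p.index(y) for p in profiles) > len(profiles) / 2

# The core: options not majority-beaten by any alternative.
core = [x for x in "abc" if not any(majority_prefers(y, x) for y in "abc" if y != x)]
print(core)  # → [] — a beats b, b beats c, c beats a, so the core is empty
```

The article's claim, on this illustration, is that majorities can still exercise a precisely specified kind of control even when `core` is empty.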
During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [35]) adopt a classical symbolic approach, some (e.g. LEABRA [48]) are based on a purely connectionist model, while others (e.g. CLARION [59]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatical representations and reasoning are also available [34]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [24] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors [23] for defending the need for a conceptual, intermediate, representation level between the symbolic and the sub-symbolic one. In particular we focus on the advantages offered by Conceptual Spaces (w.r.t. symbolic and sub-symbolic approaches) in dealing with the problem of compositionality of representations based on typicality traits. Additionally, we argue that Conceptual Spaces could offer a unifying framework for interpreting many kinds of diagrammatic and analogical representations. As a consequence, their adoption could also favor the integration of diagrammatical representation and reasoning in CAs.
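The core idea of Conceptual Spaces can be sketched in a few lines. In this framework, concepts are regions around prototype points in a metric quality space, and an item is categorized by its nearest prototype. The snippet below is a minimal illustration under that assumption only; the quality dimensions and prototype coordinates are hypothetical, not drawn from Gärdenfors's own examples:

```python
import math

# Hypothetical 2-D conceptual space with quality dimensions (flies, sings),
# each scaled to [0, 1]. Concepts are represented by prototype points.
prototypes = {
    "robin":   (0.9, 0.8),
    "penguin": (0.1, 0.2),
}

def classify(item):
    """Categorize an item by its nearest prototype (Euclidean distance)."""
    return min(prototypes, key=lambda c: math.dist(item, prototypes[c]))

print(classify((0.8, 0.7)))  # → robin
```

Typicality then falls out naturally: items closer to a prototype are more typical instances of the concept, which is the property the paper exploits when discussing compositionality based on typicality traits.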
There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace certain kinds of responsibility gap. The argument is based on the idea that human morality is often tragic. We frequently confront situations in which competing moral considerations pull in different directions and it is impossible to perfectly balance these considerations. This heightens the burden of responsibility associated with our choices. We cope with the tragedy of moral choice in different ways. Sometimes we delude ourselves into thinking the choices we make were not tragic; sometimes we delegate the tragic choice to others; sometimes we make the choice ourselves and bear the psychological consequences. Each of these strategies has its benefits and costs. One potential advantage of autonomous machines is that they enable a reduced cost form of delegation. However, we only gain the advantage of this reduced cost if we accept that some techno-responsibility gaps are virtuous.
This paper seeks to refute the claim that architectural value is one and the same value as the artistic value of architecture. As few scholars explicitly endorse this claim, instead tacitly holding it, I term it the implicit claim. Three potential motivations for the implicit claim are offered before it is shown that, contrary to supporting the claim, they set the foundations for considering architectural value and the artistic value of architecture to be distinct. After refuting the potential motivations and offering some counterexamples to the claim, I provide some comments upon the interaction(s) between aesthetic, artistic, and architectural values, which are benefitted and supported by Louise Hanson’s discussion of attributive value in the artistic domain.
Choice experiment (CE) is a questionnaire-based method in which the accuracy of the research questionnaire determines the validity of the research outcomes. Attribute selection is of prime importance in every CE study. If respondents do not understand or do not have a preference for a certain attribute, the attribute non-attendance problem may arise and bias the overall results of the research. Qualitative approaches such as literature review, focus group discussion, and in-depth discussion are commonly applied in CE research. However, especially in the developing-country context, where ethnic and cultural diversity is a challenge in conducting survey-based questionnaires, qualitative methods are not sufficient for selecting attributes. The present study investigates the application of the relative importance index (RII) to respondents’ preferences for attributes in a modal shift study in Klang Valley, Malaysia. Five-point Likert scale questions were employed to elicit respondents’ preferences for the 24 initially selected attributes. The results of this study showed that of the 24 pre-selected attributes, only 18 had RII > 0.5 and could be included in the final CE design. The results of this study could help researchers control for unobserved problems in selecting attributes that could not be discovered through qualitative approaches.
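The RII screening step described above can be sketched in a few lines. The snippet assumes the standard formulation RII = ΣW / (A × N), where W are the Likert weights, A is the highest scale point (5), and N is the number of respondents; the attribute names and response data are hypothetical, not the study's own:

```python
# Relative Importance Index for 5-point Likert responses:
# RII = sum(weights) / (highest_point * n_respondents), in (0, 1].
def rii(responses, highest=5):
    return sum(responses) / (highest * len(responses))

# Hypothetical responses for three candidate attributes.
attributes = {
    "travel_time":  [5, 4, 5, 3, 4],
    "ticket_price": [4, 4, 3, 5, 4],
    "seat_comfort": [2, 3, 2, 2, 3],
}

# Keep only attributes with RII > 0.5 for the final CE design.
retained = {a: round(rii(r), 2) for a, r in attributes.items() if rii(r) > 0.5}
print(retained)  # → {'travel_time': 0.84, 'ticket_price': 0.8}
```

With this cutoff, "seat_comfort" (RII = 0.48) would be dropped, mirroring the paper's reduction from 24 candidate attributes to 18.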
A book review of _Free Choice: A Self-referential Argument_ by J. M. Boyle, Jr., G. Grisez, and O. Tollefsen. The review concerns the pragmatic self-referential argument employed in the book, and points to the fact that the argument is itself self-referentially inconsistent, but on the level of metalogical self-reference.
The concepts of choice, negation, and infinity are considered jointly. The link is the quantity of information interpreted as the quantity of choices measured in units of elementary choice: a bit is an elementary choice between two equally probable alternatives. “Negation” supposes a choice between it and confirmation. Thus quantity of information can be also interpreted as quantity of negations. The disjunctive choice between confirmation and negation as to infinity can be chosen or not in turn: This corresponds to the set-theory or intuitionist approach to the foundation of mathematics and to Peano or Heyting arithmetic. Quantum mechanics can be reformulated in terms of information introducing the concept and quantity of quantum information. A qubit can be equivalently interpreted as that generalization of “bit” where the choice is among an infinite set or series of alternatives. The complex Hilbert space can be represented as both series of qubits and value of quantum information. The complex Hilbert space is that generalization of Peano arithmetic where any natural number is substituted by a qubit. “Negation”, “choice”, and “infinity” can be inherently linked to each other both in the foundation of mathematics and quantum mechanics by the mediation of “information” and “quantum information”.
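The abstract's starting point, that a bit is an elementary choice between two equally probable alternatives, is the standard information-theoretic measure and can be made concrete. The sketch below shows only that classical identity (selecting one of n equiprobable alternatives costs log₂ n elementary binary choices); it does not attempt the paper's quantum generalization:

```python
from math import log2

# Quantity of information as quantity of elementary choices:
# picking one of n equally probable alternatives takes log2(n) bits,
# i.e. log2(n) elementary binary choices.
def bits_of_choice(n_alternatives):
    return log2(n_alternatives)

print(bits_of_choice(2))  # → 1.0  (one elementary choice = one bit)
print(bits_of_choice(8))  # → 3.0  (three successive binary choices)
```

The qubit, on the paper's reading, generalizes this by letting the choice range over an infinite set of alternatives rather than two.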
The main focus of this article is the question of the aesthetics of an unaesthetic ruined space. The author also pays particular attention to the ambivalence that exists in contemporary derelict architecture, referred to as ‘modern ruins’, and tries to show that such locations can be viewed as an ‘in‑between’ space.
A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha’s claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice.
Industrial heritage is a relatively recent phenomenon, a product of the mid-20th century. Industrial heritage represents the culture, historical situation, processes, technologies, and outstanding achievements of each region. Given its value for contemporary architecture, it is necessary to protect it, and the protection of industrial heritage has become an international challenge. One of the prerequisites of protecting industrial heritage is recognizing its value and its position. Proper protection of industrial heritage requires studying and deeply understanding it at a large scale, and recognizing the value of the heritage for the conservation process at a regional scale. Moreover, these kinds of buildings, owing to their large scale, their location at suitable points in the city, their flexible plans (due to their modular structure), and so on, have a great capacity to be transformed to various functions, especially public and urban services. One of the most important values that can be attributed to industrial heritage is the architectural identity of the region. It is therefore necessary to understand the effect of these types of structures through a survey. The aim of this paper is to evaluate the effectiveness of industrial heritage on architectural identity in historic cities. The case study was selected purposively: Dezful and the “MAKINEH of Flour” of Dezful, which has now become part of the mainstream of the bazaar. The variable used to define local identity is the use of this element in addresses: residents direct others to the accurate location of a specific place as being “near the MAKINEH”. The statistical population was selected from among people who live in the eastern Sahrabedar mahallah. The results of the paper show that industrial heritage has the value of architectural and urban identity at the local level.
The results indicate the significance of a comprehensive approach toward re-understanding and redeveloping contemporary buildings and industrial heritage sites.
Where is imagination in imaginative resistance? We seek to answer this question by connecting two ongoing lines of inquiry in different subfields of philosophy. In philosophy of mind, philosophers have been trying to understand imaginative attitudes’ place in cognitive architecture. In aesthetics, philosophers have been trying to understand the phenomenon of imaginative resistance. By connecting these two lines of inquiry, we hope to find mutual illumination of an attitude (or cluster of attitudes) and a phenomenon that have vexed philosophers. Our strategy is to reorient the imaginative resistance literature from the perspective of cognitive architecture. Whereas existing taxonomies of positions in the imaginative resistance literature have focused on disagreements over the source and scope of the phenomenon, our taxonomy focuses on the psychological components necessary for explaining imaginative resistance.
Sometimes citizens disagree about political matters, but a decision must be made. We have two theoretical frameworks for resolving political disagreement. The first is the framework of social choice. In it, our goal is to treat parties to the dispute fairly, and there is no sense in which some are right and the others wrong. The second framework is that of collective decision-making. Here, we do believe that preferences are truth apt, and our moral consideration is owed not to those who disagree but to the community that stands to benefit or suffer from the decision. In this chapter, I describe and explore these two frameworks. I conclude two things. First, analysis of real-world disagreement suggests that collective decision-making is the right way to model politics. In most, possibly even all, political disagreement, all parties believe (if implicitly) that there is an objective standard of correctness. Second, this matter is connected to the concept of pluralism. If pluralism is true, then collective decision-making cannot be applied to some political disagreements. More surprisingly, pluralism may rule out the applicability of social choice theory as well.
Modern buildings do not easily harmonize with other buildings, regardless of whether the latter are also modern. This often-observed fact has not received a satisfactory explanation. To improve on existing explanations, this article first generalizes one of Ortega y Gasset’s observations concerning modern fine art, and then develops a metaphysics of styles that is inspired by work in the philosophy of biology. The resulting explanation is that modern architecture is incapable of developing patterns that facilitate harmonizing, because such patterns would humanize buildings and modern architecture is a homeostatic property cluster with a dehumanizing motive at its core.