The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where crime will occur and who will perpetrate it. Activists and academics have begun to raise critical questions about the use of these tools in policing contexts. In this chapter, I review the emerging critical literature on predictive policing and contribute to it by raising ethical questions about the use of predictive analytics tools to identify potential offenders. Drawing from work on the ethics of profiling, I argue that the much-lauded move from reactive to preemptive policing can mean wrongfully generalizing about individuals, making harmful assumptions about them, instrumentalizing them, and failing to respect them as full ethical persons. I suggest that these problems stem both from the nature of predictive policing tools and from the sociotechnical contexts in which they are implemented.
In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After first clarifying the concept of temporal governmentality, I apply this lens to Chicago Police Department’s Strategic Subject List. This predictive algorithm operates, I argue, through a paranoid logic that aims to preempt future possibilities of crime on the basis of a criminal past codified in historical crime data.
This book provides a comprehensive examination of the police role from within a broader philosophical context. Contending that the police are in the midst of an identity crisis that exacerbates unjustified law enforcement tactics, Luke William Hunt examines various major conceptions of the police—those seeing them as heroes, warriors, and guardians. The book looks at the police role considering the overarching societal goal of justice and seeks to present a synthetic theory that draws upon history, law, society, psychology, and philosophy. Each major conception of the police role is examined in light of how it affects the pursuit of justice, and how it may be contrary to seeking justice holistically and collectively. The book sets forth a conception of the police role that is consistent with the basic values of a constitutional democracy in the liberal tradition. Hunt’s intent is that clarifying the police role will likewise elucidate any constraints upon policing strategies, including algorithmic strategies such as predictive policing. This book is essential reading for thoughtful policing and legal scholars as well as those interested in political philosophy, political theory, psychology, and related areas. Now more than ever, the nature of the police role is a philosophical topic that is relevant not just to police officials and social scientists, but to everyone.
Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic model: subsuming policing into technocratic judgements encoded in algorithms through strategies such as predictive policing (mitigating archetypal bias). This paper begins by considering the normative basis of the relationship between political community and policing. It then examines the justification of reallocative and algorithmic models in light of the relationship between political community and police. Given commitments to the depth and distribution of security—and proscriptions against dehumanizing strategies—the paper concludes that a nonideal-theory priority rule promoting respect for personhood (manifest in community and dignity-promoting policing strategies) is a necessary condition for the justification of the above models.
ABSTRACT: So far in this book, we have examined algorithmic decision systems from three autonomy-based perspectives: in terms of what we owe autonomous agents (chapters 3 and 4), in terms of the conditions required for people to act autonomously (chapters 5 and 6), and in terms of the responsibilities of agents (chapter 7). In this chapter we turn to the ways in which autonomy underwrites democratic governance. Political authority, which is to say the ability of a government to exercise power, may be justifiable or not. Whether it is justified and how it can come to be justified is a question of political legitimacy. Political legitimacy is another way in which autonomy and responsibility are linked. This relationship is the basis of the current chapter, and it is important in understanding the moral salience of algorithmic systems. We will draw the connection as follows. We begin, in section 8.1, by describing two uses of technology: crime predicting technology used to drive policing practices and social media technology used to influence elections (including by Cambridge Analytica and by the Internet Research Agency). In section 8.2 we consider several views of legitimacy and argue for a hybrid version of normative legitimacy based on one recently offered by Fabienne Peter. In section 8.3 we will explain that the connection between political legitimacy and autonomy is that legitimacy is grounded in legitimating processes, which are in turn based on autonomy. Algorithmic systems—among them PredPol and the Cambridge Analytica-Facebook-Internet Research Agency amalgam—can hinder that legitimation process and conflict with democratic legitimacy, as we argue in section 8.4. We will conclude by returning to several cases that serve as through-lines to the book: Loomis, Wagner, and Houston Schools.
Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid them. Furthermore, it condemns reliance on various indexes of distributive injustice, or unchosen properties, as evidence of law-breaking.
The paper examines the justification of warfare. The main thesis is that war is very difficult to justify, and invoking “justice” is not the way to succeed in justifying it. Justification and justness are very different avenues: while the first attempts to explain the nature of war and offer possible schemes of resolution, the second aims to endorse a specific type of warfare as correct and hence allowed – which is the crucial part of “just war theory.” However, “just war theory,” somewhat Manichean in its nature, has very deep flaws. Its final result is the criminalization of war, which reduces warfare to police action, and finally implies a very strange proviso that one side has a right to win. All that endangers the distinction between ius ad bellum and ius in bello, and destroys the collective character of warfare. Justification of war is actually quite different – it starts from the definition of war as a kind of conflict which cannot be solved peacefully, but for which there is mutual understanding that it cannot remain unresolved. The aim of war is not justice, but peace, i.e. either a new articulation of peace, or a restoration of the status quo ante. Additionally, unlike police actions, the result of war cannot be known or assumed in advance, giving war its main feature: the lack of control over the future. Control over the future, predictability, is a feature of peace. This might imply that war is a consequence of failed peace, or of an inability to maintain peace. The explanation of this inability forms the justification of war. Justice is always an important part of it, but justification cannot be reduced to it. The logic contained here refers to ius ad bellum, while ius in bello is relative to various parameters of sensitivity prevalent in a particular time, with the purpose of making warfare more humane and less expensive.
There is increasing concern about “surveillance capitalism,” whereby for-profit companies generate value from data, while individuals are unable to resist (Zuboff 2019). Non-profits using data-enabled surveillance receive less attention. Higher education institutions (HEIs) have embraced data analytics, but the wide latitude that private, profit-oriented enterprises have to collect data is inappropriate for HEIs. HEIs have a fiduciary relationship to students, not a narrowly transactional one (see Jones et al., forthcoming). They are responsible for facets of student life beyond education. In addition to classrooms, learning management systems, and libraries, HEIs manage dormitories, gyms, dining halls, health facilities, career advising, police departments, and student employment. HEIs collect and use student data in all of these domains, ostensibly to understand learner behaviors and contexts, improve learning outcomes, and increase institutional efficiency through “learning analytics” (LA). ID card swipes and Wi-Fi log-ins can track student location, class attendance, use of campus facilities, eating habits, and friend groups. Course management systems capture how students interact with readings, video lectures, and discussion boards. Application materials provide demographic information. These data are used to identify students needing support, predict enrollment demands, and target recruiting efforts. These are laudable aims. However, current LA practices may be inconsistent with HEIs’ fiduciary responsibilities. HEIs often justify LA as advancing student interests, but some projects advance primarily organizational welfare and institutional interests. Moreover, LA advances a narrow conception of student interests while discounting privacy and autonomy. Students are generally unaware of the information collected, do not provide meaningful consent, and express discomfort and resigned acceptance about HEI data practices, especially for non-academic data (see Jones et al., forthcoming). The breadth and depth of student information available, combined with their fiduciary responsibility, create a duty that HEIs exercise substantial restraint and rigorous evaluation in data collection and use.
This chapter begins with an empirical analysis of attitudes towards the law, which, in turn, inspires a philosophical re-examination of the moral status of the rule of law. In Section 2, we empirically analyse relevant survey data from the US. Although the survey, and the completion of our study, preceded the recent anti-police brutality protests sparked by the killing of George Floyd, the relevance of our observations extends to this recent development and its likely reverberations. Consistently with prior studies, we find that people’s ascriptions of legitimacy to the legal system are predicted strongly by their perceptions of the procedural justice and lawfulness of police and court officials’ action. Two factors emerge as significant predictors of people’s compliance with the law: (i) their belief that they have a (content-independent, moral) duty to obey the law (which is one element of legitimacy, as defined here); and (ii) their moral assessment of the content of specific legal requirements (‘perceived moral content of laws’). We also observe an interactive relationship between these two factors. At higher levels of perceived moral content of laws, felt duty to obey is a better predictor of compliance. And, similarly, perceived moral content of laws is a better predictor of compliance at higher levels of felt duty to obey. This suggests that the moral content incorporated in specific laws interacts with the normative force people ascribe to legal authorities by virtue of other qualities, specifically here procedural justice and lawfulness. In Section 3, the focus shifts to a philosophical analysis, whereby we identify a parallel (similarly interactive) modality in the way that form and content mutually affect the value of the rule of law. We advocate a distinctive alternative to two rival approaches in jurisprudential discourse, the first of which claims that Lon Fuller’s eight precepts of legality embody moral qualities not contingent on the law’s content, while the second denies any independent moral value in these eight precepts, viewing them as entirely subservient to the law’s substantive goals. In contrast, on the view put forward here, Fuller’s principles possess (inter alia) an expressive moral quality, but their expressive effect does not materialise in isolation from other, contextual factors. In particular, the extent to which it materialises is partly sensitive to the moral quality of the law’s content.
The paper examines the justification of warfare. The main thesis is that war is very difficult to justify, and invoking “justice” is not the way to succeed in justifying it. Justification and justness (“justice”) are very different avenues: while the first attempts to explain the nature of war and offer possible schemes of resolution (through adequate definitions), the second aims to endorse a specific type of warfare as correct and hence allowed – which is the crucial part of “just war theory.” However, “just war theory,” somewhat Manichean in its nature, has very deep flaws. Its final result is the criminalization of war, which reduces warfare to police action, and finally implies a very strange proviso that one side has a right to win. All that endangers the distinction between ius ad bellum and ius in bello, and destroys the collective character of warfare (reducing it to an incomprehensible individual level, as if a group of people entered a battle in hopes of finding another group of people willing to respond). Justification of war is actually quite different – it starts from the definition of war as a kind of conflict which cannot be solved peacefully, but for which there is mutual understanding that it cannot remain unresolved. The aim of war is not justice, but peace, i.e. either a new articulation of peace, or a restoration of the status quo ante. Additionally, unlike police actions, the result of war cannot be known or assumed in advance, giving war its main feature: the lack of control over the future. Control over the future, predictability (obtained through laws), is a feature of peace. This might imply that war is a consequence of failed peace, or of an inability to maintain peace. The explanation of this inability (which could simply be incompetence, or because peace, as a specific articulation of the distribution of social power, is not tenable anymore) forms the justification of war. Justice is always an important part of it, but justification cannot be reduced to it. The logic contained here refers to ius ad bellum, while ius in bello is relative to various parameters of sensitivity prevalent in a particular time (and expressed in customary and legal rules of warfare), with the purpose of making warfare more humane and less expensive.
Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways to create the safest and simplest form of AI which may work as an AI Nanny, that is, a global surveillance state powered by a Narrow AI, or AI Police. A similar but more limited system has already been implemented in China for the prevention of ordinary crime. AI police will be able to predict the actions of and stop potential terrorists and bad actors in advance. Implementation of such AI police will probably consist of two steps: first, a strategic decisive advantage via Narrow AI created by the intelligence services of a nuclear superpower, and then ubiquitous control over potentially dangerous agents which could create unauthorized artificial general intelligence which could evolve into Superintelligence.
Predictive processing (PP) accounts of perception are unique not merely in that they postulate a unity between perception and imagination. Rather, they are unique in claiming that perception should be conceptualised in terms of imagination and that the two involve an identity of neural implementation. This paper argues against this postulated unity, on both conceptual and empirical grounds. Conceptually, the manner in which PP theorists link perception and imagination belies an impoverished account of imagery as cloistered from the external world in its intentionality, akin to a virtual reality, as well as endogenously generated. Yet this ignores a whole class of imagery whose intentionality is directed on the actual environment—projected mental imagery—and also ignores the fact that imagery may be triggered crossmodally in a bottom-up, stimulus-driven way. Empirically, claiming that imagery and perception share neural circuitry ignores relevant clinical results in this area. These evidence substantial perception/imagery neural dissociations, most notably in the case of aphantasia. Taken together, the arguments here suggest that PP theorists should substantially temper, if not outright abandon, their claim to a perception/imagination unity.
Why does institutional police brutality continue so brazenly? Criminologists and other social scientists typically theorize about the causes of such violence, but less attention is given to normative questions regarding the demands of justice. Some philosophers have taken a teleological approach, arguing that social institutions such as the police exist to realize collective ends and goods based upon the idea of collective moral responsibility. Others have approached normative questions in policing from a more explicit social-contract perspective, suggesting that legitimacy is derived by adhering to (limited) authority. This article examines methodologies within political philosophy for analyzing police injustice. The methodological inquiry leads to an account of how justice constrains the police through both special (or positional) moral requirements that officers assume voluntarily, as well as general moral requirements in virtue of a polity’s commitment to moral, political and legal values beyond law enforcement and crime reduction. The upshot is a conception of a police role that is constrained by justice from multiple foundational stances.
According to the predictive coding theory of cognition (PCT), brains are predictive machines that use perception and action to minimize prediction error, i.e. the discrepancy between bottom–up, externally-generated sensory signals and top–down, internally-generated sensory predictions. Many consider PCT to have an explanatory scope that is unparalleled in contemporary cognitive science and see in it a framework that could potentially provide us with a unified account of cognition. It is also commonly assumed that PCT is a representational theory of sorts, in the sense that it postulates that our cognitive contact with the world is mediated by internal representations. However, the exact sense in which PCT is representational remains unclear; neither is it clear that it deserves such status—that is, whether it really invokes structures that are truly and nontrivially representational in nature. In the present article, I argue that the representational pretensions of PCT are completely justified. This is because the theory postulates cognitive structures—namely action-guiding, detachable, structural models that afford representational error detection—that play genuinely representational functions within the cognitive system.
Diabetes is one of the most common diseases worldwide, and no cure has yet been found for it. Caring for people with diabetes costs a great deal of money every year. The most important issue is therefore that prediction be very accurate and that a reliable method be used for it. One such method is the use of artificial intelligence systems, and in particular Artificial Neural Networks (ANN). In this paper, we used an artificial neural network to predict whether a person is diabetic or not. The criterion was to minimize the error function in neural network training using a neural network model. After training the ANN model, the average error function of the neural network was 0.01 and the accuracy of predicting whether a person is diabetic or not was 87.3%.
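The abstract does not specify the network architecture, feature set, or software used. A minimal sketch of the kind of binary-classification ANN it describes is given below, using scikit-learn's MLPClassifier; the file name "diabetes.csv", the Pima-style numeric features, and the "Outcome" label column are assumptions for illustration only.

```python
# Minimal sketch of a diabetes-prediction ANN (assumed Pima-style CSV with
# numeric feature columns and a binary "Outcome" label; not the paper's exact setup).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("diabetes.csv")                       # hypothetical file name
X, y = df.drop(columns=["Outcome"]), df["Outcome"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)                 # scale features before training
clf = MLPClassifier(hidden_layer_sizes=(12, 8), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

pred = clf.predict(scaler.transform(X_test))
print("test accuracy:", accuracy_score(y_test, pred))  # the paper reports roughly 87.3%
```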
The best arguments against gun control invoke moral rights—it might be good if there were fewer guns in circulation, but there is a moral right to own firearms. Rather than emphasizing the potential benefits of gun control, this paper meets the best arguments on their home turf. I argue that there simply is no moral right to keep guns on one’s person or in one’s residence. In fact, our moral rights support the mutual disarmament of citizens and police.
In 2014, questionable police killings of Eric Garner, Michael Brown, and Tamir Rice sparked mass protests and put policing at the center of national debate. Mass protests erupted again in 2020 after the brutal police killing of George Floyd. These and other incidents have put a spotlight on a host of issues that threaten the legitimacy of policing—excessive force, racial bias, over-policing of marginalized communities, historic injustices that remain unaddressed, and new technology that increases police powers. This introduction gives an overview of these ethical challenges facing police today and the democratic institutions that oversee them. It then outlines the various interdisciplinary perspectives—from Black studies, criminology, history, law, philosophy, political science, and sociology—collected in the volume. Together, these contributions aim to clarify the question of which ethical principles should guide police, where current practices fall short, and what strategies hold the most promise for addressing these failures.
On a ‘dirty hands’ model of undercover policing, such policing inevitably involves situations where whatever the law-enforcement agent does is morally problematic. Christopher Nathan argues against this model. Nathan’s criticism of the model is predicated on the contention that it entails the view, which he considers objectionable, that morally wrongful acts are central to undercover policing. We address this criticism, and some other aspects of Nathan’s discussion of the ‘dirty hands’ model, specifically in relation to legal entrapment to commit a crime. Following János Kis’s work on political morality, we explain three dilemmatic versions of the ‘dirty hands’ model. We show that while two of these are inapplicable to legal entrapment, the third has better prospects. We then argue that, since the third model precludes Nathan’s criticism, a viable ‘dirty hands’ model of legal entrapment remains an open possibility. Finally, we generalize this result, showing that the case of legal entrapment is not special: the result holds good for policing practices more generally, including such routine practices as arrest, detention, and restraint.
What should be a police department's policies and regulations on the use of deadly force? What is the relevance for this of the state law on capital punishment?
The number of police departments carrying Narcan keeps increasing at a fast pace throughout the U.S., as it is considered an effective measure to fight the opioid epidemic. However, there has been strong opposition to the idea of police Narcan use, and in 2018 the nation is still debating it. Though not clearly visible to the public, there are important ethical arguments against police Narcan use which necessarily involve an understanding of the ethical roles and responsibilities of the police as a law enforcement agency and an apprehension of the moral status of non-therapeutic opioid use. The authors of the paper investigate, primarily, the existing ethical controversies surrounding police Narcan use while touching upon the issue of decriminalizing drug policy in the U.S. The authors conclude that the police can carry and administer Narcan without self-contradiction and that policymakers’ investigation of drug decriminalization policy should begin with an understanding of the “common morality” of the American public, the ethical view shared and practiced by the greatest number of people.
A number of the men who would become the 9/11 hijackers were stopped for minor traffic violations. They were pulled over by police officers for speeding or caught by random inspection without a driver’s license. For United States government commissions and the press, these brushes with the law were missed opportunities. For some police officers, though, they were of personal and professional significance. These officers replayed the incidents of contact with the 19 men, which lay bare the uncertainty of every encounter, whether a traffic stop, or with someone taking photos of a landmark. Representatives from law enforcement began to design policies to include local police in national intelligence, with the idea of capitalizing on what patrol officers already do in dealing with the general public. Several initiatives were launched, among these, the Suspicious Activity Reporting Initiative. Routine reporting of suspicious activity was developed into steps for gathering, assessing and sharing terrorism-related information with a larger law enforcement and intelligence network. Through empirical analysis of counterterrorism efforts and recent scholarship on it, this chapter discusses prevention, preemption, and anticipation as three technologies of security, focusing on how each deals with uncertainty. The Suspicious Activity Reporting Initiative, this analysis suggests, is an anticipatory technology which constitutes police officers and intelligence analysts as subjects who work in a mode of uncertainty.
In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers, their location, and the classification of cars according to several categories including: Buying, Maint, Doors, Persons, Lug_boot, Safety, and Overall. The ANN was used to forecast car acceptability. The results showed that the ANN model was able to predict car acceptability with 99.62% accuracy. The Safety factor has the most influence on car acceptability evaluation. The comparative study method is suitable for evaluating car acceptability forecasting and can also be extended to other areas.
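The attribute names in the abstract match the UCI Car Evaluation data, whose features are all categorical. A hedged sketch of that kind of multi-class ANN is shown below; the file name, one-hot encoding, split, and network size are assumptions rather than details from the paper.

```python
# Sketch of car-acceptability classification on UCI Car Evaluation-style data.
# Column names follow the abstract (buying, maint, doors, persons, lug_boot,
# safety, class); the file name and train/test split are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

cols = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
df = pd.read_csv("car.data", names=cols)               # UCI file is headerless

X = pd.get_dummies(df.drop(columns=["class"]))         # one-hot encode categorical features
y = df["class"]                                        # unacc / acc / good / vgood

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1500, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```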
Abstract: Prediction is an application of Artificial Neural Networks (ANN). It is a form of supervised learning, since the input and output attributes are predefined. A multi-layer ANN model is used for training, validating, and testing the data. In this paper, a multi-layer ANN model was used to train and test the mushroom dataset to predict whether a mushroom is edible or poisonous. The mushroom dataset was prepared for training, and 8124 instances were used for the training. JustNN software was used to train and validate the data. The most important attributes of the dataset were identified, and the accuracy of predicting whether a mushroom is edible or poisonous was 99.25%.
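The paper used JustNN; as a rough illustration of the same workflow (train a multi-layer network, then surface the most informative attributes), the sketch below substitutes scikit-learn and permutation importance. The file name, label column, and "p"/"e" class codes are assumptions about how the UCI mushroom data might be stored.

```python
# Sketch of the mushroom edible/poisonous task with a multi-layer network and
# permutation importance standing in for "most important attributes".
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

df = pd.read_csv("mushrooms.csv")                      # hypothetical file; label column "class"
X = pd.get_dummies(df.drop(columns=["class"]))         # all attributes are categorical
y = (df["class"] == "p").astype(int)                   # 1 = poisonous, 0 = edible (assumed coding)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
top = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])[:5]
print("most informative one-hot attributes:", top)
```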
This article offers a normative analysis of some of the most controversial incidents involving police—what I call police-generated killings. In these cases, bad police tactics create a situation where deadly force becomes necessary, becomes perceived as necessary, or occurs unintentionally. Police deserve blame for such killings because they choose tactics that unnecessarily raise the risk of deadly force, thus violating their obligation to prioritize the protection of life. Since current law in the United States fails to ban many bad tactics, police-generated killings often are treated as “lawful but awful.” To address these killings, some call on changes to departmental policies or voluntary reparations by local governments, yet such measures leave in place a troubling gap between ethics and law. I argue that police-generated killings merit legal sanctions by appealing to a relevant analogy: self-generated self-defense, where the person who engages in self-defense started the trouble. The persistent lack of accountability for police-generated killings threatens life, police legitimacy, and trust in democratic institutions. The article closes by identifying tools in law and policy to address this challenge.
After a fatal police shooting in the United States, it is typical for city and police officials to view the family of the deceased through the lens of the law. If the family files a lawsuit, the city and police department consider it their legal right to defend themselves and to treat the plaintiffs as adversaries. However, reparations and the concept of “reparative justice” allow authorities to frame police killings in moral rather than legal terms. When a police officer kills a person who was not liable to this outcome, officials should offer monetary reparations, an apology, and other redress measures to the victim’s family. To make this argument, the article presents a philosophical account of non-liability hailing from self-defense theory, centering the distinction between reasonableness and liability. Reparations provide a non-adversarial alternative to civil litigation after a non-liable person has been killed by a police officer. In cases where the officer nevertheless acted reasonably, “institutional agent-regret” rather than moral responsibility grounds the argument for reparations. Throughout the article, it is argued that there are distinct racial wrongs both when police kill a non-liable black person and when family members of a black victim are treated poorly by officials in the civil litigation process.
Over the past decade, police departments in many countries have experimented with and increasingly adopted the use of police body-worn cameras (PBWCs). This article aims to examine the moral issues raised by the use of PBWCs, and to provide an overall assessment of the conditions under which the use of PBWCs is morally permissible. It first reviews the current evidence for the effects of using PBWCs. On the basis of this review the article sets out a teleological argument for the use of PBWCs. The final two sections of the article review two deontological objections to the use of PBWCs: the idea that use of PBWCs is based on or expresses disrespectful mistrust, and the idea that the use of PBWCs violates a right to privacy. The article argues that neither of these objections is persuasive, and concludes that we should conditionally accept and support the use of PBWCs.
POLICE ETHICS – Abstract – Mark Lauchs. Police are an essential part of the justice system. They are the frontline actors in keeping the peace, social stability and cohesion. Thus good governance relies on honest policing. However, there will always be at least a small group of corrupt police officers, even though Australians are culturally averse to corruption (Khatri, Tsang, & Begley, 2006). There have been many cases where allegations of police corruption have reached the highest levels of a state police force (Blanch, 1982) and, in the case of the Fitzgerald Inquiry (Fitzgerald, 1989), ended in a commissioner being convicted of corruption. Any public official who places their own interests before those of the public corrupts a system in which they are supposed to act as an agent of the public, and undermines the good governance of a society (Lauchs, 2007). Police officers attract offers of corruption because of their ability to enforce or ignore the law. Police who are unethical or in financial stress are vulnerable to offers of illicit payments. Longstanding arrangements of corruption within a police branch can lead to a corruption network between police and criminals. Organised police corruption constitutes “social behaviour, conducted in groups within organisations, that is powerful enough to override the officer’s oath of office, personal conscience, departmental regulations and criminal laws” (Punch, 2000). It is an even greater threat to the community because the damage done has more impact than the sum of the individual acts of corruption. This chapter will discuss the types of police corruption and focus on the organisation as the source of core police culture.
I propose an account of the speech act of prediction that denies that the contents of prediction must be about the future and illuminates the relation between prediction and assertion. My account is a synthesis of two ideas: (i) that what is in the future in prediction is the time of discovery and (ii) that, as Benton and Turri recently argued, prediction is best characterized in terms of its constitutive norms.
Essay exploring the extent to which certain agreements between the police and informants are an affront (both procedurally and substantively) to basic tenets of the liberal tradition in legal and political philosophy.
To automate the examination of massive amounts of sequence data for biological function, it is important to computerize interpretation based on empirical knowledge of sequence-function relationships. For this purpose, we have been constructing an Artificial Neural Network (ANN) by organizing various experimental and computational observations as a collection of ANN models. Here we propose an ANN model which utilizes a dataset from the UCI Machine Learning Repository for predicting the localization sites of proteins. We collected data for 336 proteins with known localization sites and divided them into training data and validating data. It was found that the accuracy rate for predicting protein localization sites in cells is 92.11%. This indicates that the Artificial Neural Network approach is powerful and flexible enough to be used in protein localization site prediction.
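The 336 proteins mentioned match the size of the UCI E. coli protein-localization dataset, which is assumed in the sketch below (name column, seven numeric sequence-derived features, class label). The file name, split proportions, and network size are illustrative assumptions, not details from the paper.

```python
# Sketch of protein localization-site prediction on UCI E. coli-style data
# (whitespace-separated: sequence name, 7 numeric features, localization class).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

cols = ["name", "mcg", "gvh", "lip", "chg", "aac", "alm1", "alm2", "site"]
df = pd.read_csv("ecoli.data", sep=r"\s+", names=cols)  # hypothetical local copy

X, y = df[cols[1:-1]], df["site"]                       # numeric sequence-derived features
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_val, y_val))  # the paper reports 92.11%
```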
In this research, an Artificial Neural Network (ANN) model was developed and tested to predict birth weight. A number of factors were identified that may affect birth weight, such as smoking status, race, age, weight (lbs) at last menstrual period, hypertension, uterine irritability, and number of physician visits in the first trimester, among others; these were used as input variables for the ANN model. A model based on a multi-layer topology was developed and trained using data from birth cases in hospitals. Evaluation on the test dataset shows that the ANN model is capable of correctly predicting the birth weight with 100% accuracy.
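The feature list resembles the classic birth-weight ("birthwt") study data, and the reported accuracy suggests a classification target such as low birth weight; both are assumptions. A hedged sketch of that reading follows, with the file name, column names, and network size chosen purely for illustration.

```python
# Sketch of a birth-weight model read as low-birth-weight classification
# (assumed "birthwt"-style columns; "low" = 1 if birth weight < 2.5 kg).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("birthwt.csv")                        # hypothetical file
features = ["age", "lwt", "race", "smoke", "ptl", "ht", "ui", "ftv"]
X, y = df[features], df["low"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(10, 6), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```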
Building energy consumption is growing steadily and accounts for around 40% of total energy use. Predicting the heating and cooling loads of a building is important in the initial design phase, to find optimal solutions amongst different designs, as well as in the operating phase after the building has been finished, for efficient energy use. In this study, an artificial neural network model was designed and developed for predicting the heating and cooling loads of a building based on a dataset for building energy performance. The main input variables are: relative compactness, roof area, overall height, surface area, glazing area, wall area, glazing area distribution of a building, and orientation; the output variables are the heating and cooling loads of the building. The dataset used for training comprises data published in the literature for 768 residential buildings. The model was trained and validated, the most important factors affecting heating load and cooling load were identified, and the accuracy for the validation was 99.60%.
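The 768 buildings and the listed inputs/outputs match the UCI Energy Efficiency data (eight design features, two targets). Because there are two continuous targets, the sketch below treats the task as multi-output regression with MLPRegressor; the file name, X1–X8/Y1–Y2 column labels, and split are assumptions.

```python
# Sketch of heating/cooling-load prediction as multi-output regression on
# UCI Energy Efficiency-style data (8 design features, 2 load targets).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

df = pd.read_csv("ENB2012_data.csv")                   # hypothetical: X1..X8 inputs, Y1/Y2 targets
X = df[[f"X{i}" for i in range(1, 9)]]                 # compactness, areas, height, orientation, glazing
y = df[["Y1", "Y2"]]                                   # heating and cooling loads

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
reg = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
reg.fit(scaler.transform(X_tr), y_tr)                  # a single MLPRegressor handles both targets

pred = reg.predict(scaler.transform(X_te))
print("R^2 (heating, cooling):", r2_score(y_te, pred, multioutput="raw_values"))
```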
We introduce the predictive processing account of body representation, according to which body representation emerges via a domain-general scheme of (long-term) prediction error minimisation. We contrast this account against one where body representation is underpinned by domain-specific systems, whose exclusive function is to track the body. We illustrate how the predictive processing account offers considerable advantages in explaining various empirical findings, and we draw out some implications for body representation research.
This chapter explores to what extent some core ideas of predictive processing can be applied to the phenomenology of time consciousness. The focus is on the experienced continuity of consciously perceived, temporally extended phenomena (such as enduring processes and successions of events). The main claim is that the hierarchy of representations posited by hierarchical predictive processing models can contribute to a deepened understanding of the continuity of consciousness. Computationally, such models show that sequences of events can be represented as states of a hierarchy of dynamical systems. Phenomenologically, they suggest a more fine-grained analysis of the perceptual contents of the specious present, in terms of a hierarchy of temporal wholes. Visual perception of static scenes not only contains perceived objects and regions but also spatial gist; similarly, auditory perception of temporal sequences, such as melodies, involves not only perceiving individual notes but also slightly more abstract features (temporal gist), which have longer temporal durations (e.g., emotional character or rhythm). Further investigations into these elusive contents of conscious perception may be facilitated by findings regarding its neural underpinnings. Predictive processing models suggest that sensorimotor areas may influence these contents.
Deep learning may transform health care, but model development has largely been dependent on the availability of advanced technical expertise. The aim of this study is to develop a deep learning model to predict gender from retinal fundus images. The proposed model was based on the Xception pre-trained model and was trained on 20,000 retinal fundus images from the Kaggle repository. The dataset was preprocessed and then split into three subsets (training, validation, and testing). After training and cross-validating the proposed model, it was evaluated using the testing dataset. In testing, the area under the receiver operating characteristic curve (AUROC) of the model was 0.99, and precision, recall, F1-score, and accuracy were 96.83%, 96.83%, 96.82%, and 96.83%, respectively. Clinicians are presently unaware of dissimilar retinal feature variants between females and males, stressing the importance of model explainability for the prediction of gender from retinal fundus images. The proposed deep learning model may enable clinician-driven automated discovery of novel insights and disease biomarkers.
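A minimal sketch of the transfer-learning setup described (an ImageNet-pretrained Xception base with a small binary head) is given below in Keras. The directory layout, image size, frozen-base strategy, and training schedule are assumptions; the paper does not report these details.

```python
# Sketch of gender prediction from fundus images via Xception transfer learning.
# Assumed folders: fundus/train/{male,female}, fundus/val/{male,female}.
from tensorflow import keras

base = keras.applications.Xception(weights="imagenet", include_top=False,
                                   input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                                  # freeze pre-trained features first

model = keras.Sequential([
    base,
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation="sigmoid"),        # probability of one gender class
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.AUC(name="auroc")])

prep = keras.applications.xception.preprocess_input
train_ds = keras.utils.image_dataset_from_directory(
    "fundus/train", image_size=(299, 299), batch_size=32, label_mode="binary")
val_ds = keras.utils.image_dataset_from_directory(
    "fundus/val", image_size=(299, 299), batch_size=32, label_mode="binary")

model.fit(train_ds.map(lambda x, y: (prep(x), y)),
          validation_data=val_ds.map(lambda x, y: (prep(x), y)),
          epochs=5)
```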
Abstract: Heart disease is increasing daily at a rapid rate, and it is vital to predict it early. The diagnosis of heart disease is a challenging task: it must be done accurately and proficiently. The aim of this study is to determine which patients are more likely to have heart disease based on a number of medical features. We built a heart disease prediction model to identify whether a person is likely to be diagnosed with heart disease using that person’s medical features. We used many different machine learning algorithms, such as Gaussian Mixture, Nearest Centroid, MultinomialNB, Logistic RegressionCV, Linear SVC, Linear Discriminant Analysis, SGD Classifier, Extra Tree Classifier, Calibrated ClassifierCV, Quadratic Discriminant Analysis, GaussianNB, Random Forest Classifier, ComplementNB, MLP Classifier, BernoulliNB, Bagging Classifier, LGBM Classifier, Ada Boost Classifier, K Neighbors Classifier, Logistic Regression, Gradient Boosting Classifier, Decision Tree Classifier, and Deep Learning, to predict and classify patients with heart disease. This approach was used to determine how the model can improve the accuracy of heart disease prediction in any person. The proposed model performed very well and was able to predict evidence of heart disease in a particular person using Deep Learning and the Random Forest Classifier, which showed good accuracy in comparison to the other classifiers used. The proposed heart disease prediction model enhances medical care and reduces cost. This study provides significant knowledge that can help predict which people have heart disease. The dataset was collected from the Kaggle repository and the model was implemented using Python.
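The core workflow described is a bake-off of many classifiers on one split of a Kaggle heart-disease table. The sketch below reproduces that pattern with a representative subset of the scikit-learn models named in the abstract; the file name "heart.csv" and the "target" label column are assumptions.

```python
# Sketch of the classifier comparison: several models trained on the same split
# and ranked by test accuracy (subset of the algorithms listed in the abstract).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("heart.csv")                           # hypothetical Kaggle export
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=2000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(),
    "neural network (MLP)": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    print(f"{name:22s} accuracy = {pipe.score(X_te, y_te):.3f}")
```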
In this paper I consider Corneliu Porumboiu’s ‘Police, Adjective’ (Romania, 2009) as an instance of a puzzling work of art. Part of what is puzzling about it is the range of extreme responses to it, both positive and negative. I make sense of this puzzlement and try to alleviate it, while considering the film alongside Ludwig Wittgenstein’s arguably puzzling “Lectures on Aesthetics” (from 1938). I use each work to illuminate possible understandings of the other. The upshot is that it is plausible to regard both as engaged, in part, in preparing us to make sense both of themselves, and then also of other works.
Predictive Processing theory, hotly debated in neuroscience, psychology and philosophy, promises to explain a number of perceptual and cognitive phenomena in a simple and elegant manner. In some of its versions, the theory is ambitiously advertised as a new theory of conscious perception. The task of this paper is to assess whether this claim is realistic. We will be arguing that the Predictive Processing theory cannot explain the transition from unconscious to conscious perception in its proprietary terms. The explanations offered by PP theorists mostly concern the preconditions of conscious perception, leaving the genuine material substrate of consciousness untouched.
Debates about terrorism and technology often focus on the potential uses of technology by non-state terrorist actors and by states as forms of counterterrorism. Yet, little has been written about how technology shapes how we think about terrorism. In this chapter I argue that technology, and the language we use to talk about technology, constrains and shapes our understanding of the nature, scope, and impact of terrorism, particularly in relation to state terrorism. After exploring the ways in which technology shapes moral thinking, I use two case studies to demonstrate how technology simultaneously hides and enables terroristic forms of state violence: police control technologies and Unmanned Aerial Vehicles (UAVs), or drones. In both these cases, I argue that features of these technologies, combined with a narrative of precision and efficiency, serve to mask the terroristic nature of the violence that these practices inflict and reinforce the moral exclusion of those against whom these technologies are deployed. In conclusion, I propose that identifying acts of terrorism requires a focus on the impact of technologies of violence (whether they are “high tech” or not) on those most affected, regardless of whether users of these technologies conceive of their actions as terroristic.
In this article, we explore some of the roles of cameras in policing in the United States. We outline the trajectory of key new media technologies, arguing that cameras and social media together generate the ambient surveillance through which graphic violence is now routinely captured and circulated. Drawing on Michel Foucault, we suggest that there are important intersections between this video footage and police subjectivity, and propose to look at two: recruit training at the Washington state Basic Law Enforcement Academy and the Seattle Police Department’s body-worn camera project. We analyze these cases in relation to the major arguments for and against initiatives to increase police use of cameras, outlining what we see as techno-optimistic and techno-pessimistic positions. Drawing on the pragmatism of John Dewey, we argue for a third position that calls for field-based inquiry into the specific co-production of socio-techno subjectivities.
In this paper an Artificial Neural Network (ANN) model for predicting the category of a tumor was developed and tested. From patients’ tests, a number of attributes were obtained that influence the classification of the tumor, such as age, sex, histologic-type, degree-of-diffe, and the status of bone, bone-marrow, lung, pleura, peritoneum, liver, brain, skin, neck, supraclavicular, axillar, mediastinum, and abdominal. These were used as input variables for the ANN model. A model based on a Multilayer Perceptron topology was established and trained using the “primary tumor” dataset, obtained from the University Medical Centre, Institute of Oncology, Ljubljana, Yugoslavia. Test data evaluation shows that the ANN model is able to correctly predict the tumor category with 76.67% accuracy.
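The attributes listed are mostly categorical or binary clinical indicators, so a multilayer perceptron over one-hot encoded inputs is a natural reading of the described model. The sketch below is illustrative only: the CSV file name, the "tumor_site" label column, and the network size are assumptions about how the primary-tumor data might be stored.

```python
# Sketch of the primary-tumor classifier: categorical clinical attributes,
# one-hot encoded and fed to a multilayer perceptron.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("primary_tumor.csv")                   # hypothetical export of the dataset
X = pd.get_dummies(df.drop(columns=["tumor_site"]))     # age, sex, histologic type, organ status, ...
y = df["tumor_site"]                                    # tumor category label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(24,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))          # the paper reports 76.67%
```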
Shared activity is often simply willed into existence by individuals. This poses a problem. Philosophical reflection suggests that shared activity involves a distinctive, interlocking structure of intentions. But it is not obvious how one can form the intention necessary for shared activity without settling what fellow participants will do and thereby compromising their agency and autonomy. One response to this problem suggests that an individual can have the requisite intention if she makes the appropriate predictions about fellow participants. I argue that implementing this predictive strategy risks derailing practical reasoning and preventing one from forming the intention. My alternative proposal for reconciling shared activity with autonomy appeals to the idea of acting directly on another's intention. In particular, I appeal to the entitlement one sometimes has to another's practical judgment, and the corresponding authority the other sometimes has to settle what one is to do.
Inflationary cosmology has been widely accepted due to its successful predictions: for a “generic” initial state, inflation produces a homogeneous, flat bubble with an appropriate spectrum of density perturbations. However, the discovery that inflation is “generically eternal,” leading to a vast multiverse of inflationary bubbles with different low-energy physics, threatens to undermine this account. There is a “predictability crisis” in eternal inflation, because extracting predictions apparently requires a well-defined measure over the multiverse. This has led to discussions of anthropic predictions based on a measure over the multiverse, and an assumption that we are typical observers. I will give a pessimistic assessment of attempts to make predictions in this sense, emphasizing in particular problems that arise even if a unique measure can be found.
Neuroscientists have in recent years turned to building models that aim to generate predictions rather than explanations. This “predictive turn” has swept across domains including law, marketing, and neuropsychiatry. Yet the norms of prediction remain undertheorized relative to those of explanation. I examine two styles of predictive modeling and show how they exemplify the normative dynamics at work in prediction. I propose an account of how predictive models, conceived of as technological devices for aiding decision-making, can come to be adequate for purposes that are defined by both their guiding research questions and their larger social context of application.
Abalones have long been a valuable food source for humans in every area of the world where the species is abundant. Predicting the age of abalone can be done using physical measurements. The age of abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings through a microscope -- a boring and time-consuming task. Other measurements, which are easier to obtain, can instead be used to predict the age of abalone using an Artificial Neural Network (ANN), a branch of Artificial Intelligence. The dataset was collected from the UCI Machine Learning Repository. To predict the age of abalone from physical measurements, a multi-layer ANN model built with the JustNN (JNN) tool is proposed. The proposed model was trained and tested and its accuracy was obtained. The best accuracy rate was 92.22%.
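The UCI abalone data provides sex plus seven numeric measurements, with ring count as the target (age is conventionally rings + 1.5). The paper used JustNN; the sketch below substitutes a scikit-learn regressor to illustrate the same measurement-to-age mapping, and the file name, encoding, and network size are assumptions.

```python
# Sketch of abalone age prediction from physical measurements (UCI column layout).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

cols = ["sex", "length", "diameter", "height", "whole_weight",
        "shucked_weight", "viscera_weight", "shell_weight", "rings"]
df = pd.read_csv("abalone.data", names=cols)            # UCI file is headerless

X = pd.get_dummies(df.drop(columns=["rings"]))          # one-hot encode sex (M/F/I)
y = df["rings"] + 1.5                                   # rings -> estimated age in years

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
reg = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0).fit(X_tr, y_tr)
print("mean absolute error (years):", mean_absolute_error(y_te, reg.predict(X_te)))
```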
This review is republished to illustrate how important this book is to us today, with legislation and case law additions as follows: UK Anti-terrorism, Crime and Security Act 2001; UK Contempt of Court Act 1981; UK Obscene Publications Act 1964; UK Perjury Act 1911; UK Police Misconduct Regulations 1999; UK Prevention of Corruption Act 1996; UK Fraud Act 2006; UK Bribery Act 2010; UK Prevention of Corruption Act 1926; UK Public Bodies Corrupt Practices Act 1889; and UK Terrorism Act 2000. Case law includes: R v Bellum [1989]; R v Dryden [1995]; R v Kiin [1994]; R v Loosley [2002]; the Stephen Lawrence murder case; and Tsang Ping-nam v R [1981].
In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers, their location, and the classification of cars according to several categories including: Buying, Maint, Doors, Persons, Lug_boot, Safety, and Overall. The ANN was used to forecast car acceptability. The results showed that the ANN model was able to predict car acceptability with 99.12% accuracy. The Safety factor has the most influence on car acceptability evaluation. The comparative study method is suitable for evaluating car acceptability forecasting and can also be extended to other areas.
Should we insist on prediction, i.e. on correctly forecasting the future? Or can we rest content with accommodation, i.e. empirical success only with respect to the past? I apply general considerations about this issue to the case of economics. In particular, I examine various ways in which mere accommodation can be sufficient, in order to see whether those ways apply to economics. Two conclusions result. First, an entanglement thesis: the need for prediction is entangled with the methodological role of orthodox economic theory. Second, a conditional predictivism: if we are not committed to orthodox economic theory, then we should demand prediction rather than accommodation – against most current practice.