Results for 'Ml Rovaletti'

74 found
  1. Translations between logical systems: a manifesto.Walter A. Carnielli & Itala M. L. D'Ottaviano - 1997 - Logique Et Analyse 157:67-81.
    The main objective of this descriptive paper is to present the general notion of translation between logical systems as studied by the GTAL research group, as well as its main results, questions, problems and indagations. Logical systems here are defined in the most general sense, as sets endowed with consequence relations; translations between logical systems are characterized as maps which preserve consequence relations (that is, as continuous functions between those sets). In this sense, logics together with translations form a (...)
    9 citations
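    A schematic rendering of the notion sketched in this abstract (my reconstruction from the informal gloss above, not the GTAL group's official definitions): a logical system is a set endowed with a consequence relation, and a translation is a map that preserves that relation.

      % Sketch only: the symbols L1, L2, A1, A2 are my own labels, not notation from the paper.
      \[
        \mathcal{L}_1 = \langle A_1, \vdash_1 \rangle, \qquad
        \mathcal{L}_2 = \langle A_2, \vdash_2 \rangle
        \qquad \text{(sets endowed with consequence relations)}
      \]
      \[
        t : A_1 \to A_2 \ \text{is a translation iff}\quad
        \Gamma \vdash_1 \varphi \;\Longrightarrow\; t[\Gamma] \vdash_2 t(\varphi)
        \quad \text{for all } \Gamma \cup \{\varphi\} \subseteq A_1 .
      \]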
  2. Do ML models represent their targets?Emily Sullivan - forthcoming - Philosophy of Science.
    I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
    1 citation
  3. Link Uncertainty, Implementation, and ML Opacity: A Reply to Tamir and Shech.Emily Sullivan - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation. Routledge. pp. 341-345.
    This chapter responds to Michael Tamir and Elay Shech’s chapter “Understanding from Deep Learning Models in Context.”
  4. Making Intelligence: Ethics, IQ, and ML Benchmarks.Borhane Blili-Hamelin & Leif Hancox-Li - manuscript
    The ML community recognizes the importance of anticipating and mitigating the potential negative impacts of benchmark research. In this position paper, we argue that more attention needs to be paid to areas of ethical risk that lie at the technical and scientific core of ML benchmarks. We identify overlooked structural similarities between human IQ and ML benchmarks. Human intelligence and ML benchmarks share similarities in setting standards for describing, evaluating and comparing performance on tasks relevant to intelligence. This enables us (...)
  5. Modeling Unicorns and Dead Cats: Applying Bressan’s MLν to the Necessary Properties of Non-existent Objects.Tyke Nunez - 2018 - Journal of Philosophical Logic 47 (1):95–121.
    Should objects count as necessarily having certain properties, despite their not having those properties when they do not exist? For example, should a cat that passes out of existence, and so no longer is a cat, nonetheless count as necessarily being a cat? In this essay I examine different ways of adapting Aldo Bressan’s MLν so that it can accommodate an affirmative answer to these questions. Anil Gupta, in The Logic of Common Nouns, creates a number of languages that have (...)
    1 citation
  6. Predicting Students' end-of-term Performances using ML Techniques and Environmental Data.Ahmed Mohammed Husien, Osama Hussam Eljamala, Waleed Bahgat Alwadia & Samy S. Abu-Naser - 2023 - International Journal of Academic Information Systems Research (IJAISR) 7 (10):19-25.
    Abstract: This study introduces a machine learning-based model for predicting student performance using a comprehensive dataset derived from educational sources, encompassing 15 key features and comprising 62,631 student samples. Our five-layer neural network demonstrated remarkable performance, achieving an accuracy of 89.14% and an average error of 0.000715, underscoring its effectiveness in predicting student outcomes. Crucially, this research identifies pivotal determinants of student success, including factors such as socio-economic background, prior academic history, study habits, and attendance patterns, shedding light on the (...)
    1 citation
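    A minimal sketch of the kind of model this abstract describes: a small feed-forward network over 15 tabular features. The layer widths, activation, preprocessing, and the synthetic data below are assumptions for illustration; they are not the authors' code or dataset.

      # Illustrative only: a "five-layer" feed-forward classifier over 15 features.
      # Synthetic placeholder data stands in for the 62,631-student dataset.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 15))                     # 15 engineered features (placeholder)
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # placeholder end-of-term outcome label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      scaler = StandardScaler().fit(X_tr)

      # Input + three hidden layers + output ~ a five-layer network (widths are assumed).
      model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), activation="relu",
                            max_iter=500, random_state=0)
      model.fit(scaler.transform(X_tr), y_tr)
      print("held-out accuracy:", accuracy_score(y_te, model.predict(scaler.transform(X_te))))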
  7. Widening Access to Applied Machine Learning With TinyML.Vijay Reddi, Brian Plancher, Susan Kennedy, Laurence Moroney, Pete Warden, Lara Suzuki, Anant Agarwal, Colby Banbury, Massimo Banzi, Matthew Bennett, Benjamin Brown, Sharad Chitlangia, Radhika Ghosal, Sarah Grafman, Rupert Jaeger, Srivatsan Krishnan, Maximilian Lam, Daniel Leiker, Cara Mann, Mark Mazumder, Dominic Pajak, Dhilan Ramaprasad, J. Evan Smith, Matthew Stewart & Dustin Tingley - 2022 - Harvard Data Science Review 4 (1).
    Broadening access to both computational and educational resources is critical to diffusing machine learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this article, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, applied ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML leverages low-cost and globally (...)
  8. Securing the Internet of Things: A Study on Machine Learning-Based Solutions for IoT Security and Privacy Challenges.Aziz Ullah Karimy & P. Chandrasekhar Reddy - 2023 - Zkg International 8 (2):30-65.
    The Internet of Things (IoT) is a rapidly growing technology that connects and integrates billions of smart devices, generating vast volumes of data and impacting various aspects of daily life and industrial systems. However, the inherent characteristics of IoT devices, including limited battery life, universal connectivity, resource-constrained design, and mobility, make them highly vulnerable to cybersecurity attacks, which are increasing at an alarming rate. As a result, IoT security and privacy have gained significant research attention, with a particular focus on (...)
  9. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges.Joshua Hatherley & Robert Sparrow - 2023 - Journal of the American Medical Informatics Association 30 (2):361-366.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. Target audience: The target audiences for this tutorial are (...)
  10. Mapping Value Sensitive Design onto AI for Social Good Principles.Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1 (3):283–296.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML (...)
    32 citations
  11. An Unconventional Look at AI: Why Today’s Machine Learning Systems are not Intelligent.Nancy Salay - 2020 - In LINKs: The Art of Linking, an Annual Transdisciplinary Review, Special Edition 1, Unconventional Computing. pp. 62-67.
    Machine learning systems (MLS) that model low-level processes are the cornerstones of current AI systems. These ‘indirect’ learners are good at classifying kinds that are distinguished solely by their manifest physical properties. But the more a kind is a function of spatio-temporally extended properties — words, situation-types, social norms — the less likely an MLS will be able to track it. Systems that can interact with objects at the individual level, on the other hand, and that can sustain this interaction, (...)
  12. Are Generics and Negativity about Social Groups Common on Social Media? – A Comparative Analysis of Twitter (X) Data.Uwe Peters & Ignacio Ojea Quintana - forthcoming - Synthese.
    Many philosophers hold that generics (i.e., unquantified generalizations) are pervasive in communication and that when they are about social groups, this may offend and polarize people because generics gloss over variations between individuals. Generics about social groups might be particularly common on Twitter (X). This remains unexplored, however. Using machine learning (ML) techniques, we therefore developed an automatic classifier for social generics, applied it to 1.1 million tweets about people, and analyzed the tweets. While it is often suggested that generics (...)
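    The abstract reports building an automatic classifier for social generics. As a purely illustrative stand-in (not Peters and Ojea Quintana's model, features, or data), a minimal text classifier over hand-labelled examples might look like this:

      # Toy bag-of-words classifier for "generic vs. non-generic" sentences.
      # The example sentences and labels are invented for illustration.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      texts = [
          "Doctors are arrogant",                   # bare generic about a social group
          "Teenagers are lazy",                     # bare generic
          "My doctor was arrogant today",           # particular claim, not a generic
          "Some teenagers in my class are lazy",    # quantified, not a bare generic
      ]
      labels = [1, 1, 0, 0]                         # 1 = generic, 0 = non-generic

      clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
      clf.fit(texts, labels)
      print(clf.predict(["Politicians are dishonest"]))   # classify a new tweet-like sentence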
  13. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted toolkits that (...)
    12 citations
  14. The contest between parsimony and likelihood.Elliott Sober - 2004 - Systematic Biology 53 (4):644-653.
    Maximum Parsimony (MP) and Maximum Likelihood (ML) are two methods for evaluating which phylogenetic tree is best supported by data on the characteristics of leaf objects (which may be species, populations, or individual organisms). MP has been criticized for assuming that evolution proceeds parsimoniously -- that if a lineage begins in state i and ends in state j, the way it got from i to j is by the smallest number of changes. MP has been criticized for needing to assume (...)
    11 citations
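    A toy illustration of the parsimony idea this abstract describes, i.e., scoring a tree by the smallest number of character changes it requires (Fitch's small-parsimony count). The tree and character states below are invented; this is not Sober's example.

      # Minimum number of state changes needed on a fixed binary tree (Fitch counting).
      def fitch_count(tree, states):
          """tree: nested 2-tuples of leaf names; states: dict leaf -> character state.
          Returns (possible root states, minimum number of changes)."""
          if isinstance(tree, str):                  # leaf node
              return {states[tree]}, 0
          left, right = tree
          ls, lc = fitch_count(left, states)
          rs, rc = fitch_count(right, states)
          common = ls & rs
          if common:                                 # children agree: no change at this node
              return common, lc + rc
          return ls | rs, lc + rc + 1                # disagreement forces one more change

      # ((A,B),(C,D)) with states 0,0,1,1: one change suffices on this tree.
      tree = (("A", "B"), ("C", "D"))
      states = {"A": 0, "B": 0, "C": 1, "D": 1}
      print(fitch_count(tree, states))               # -> ({0, 1}, 1)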
  15. Personalized Patient Preference Predictors are Neither Technically Feasible Nor Ethically Desirable.Nathaniel Sharadin - forthcoming - American Journal of Bioethics.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  16. Inductive Risk, Understanding, and Opaque Machine Learning Models.Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
    6 citations
  17. Megaric Metaphysics.Dominic Bailey - 2012 - Ancient Philosophy 32 (2):303-321.
    I examine two startling claims attributed to some philosophers associated with Megara on the Isthmus of Corinth, namely: M1. Something possesses a capacity at t if and only if it is exercising that capacity at t. M2. One can speak of a thing only by using its own proper λόγος. In what follows, I will call the conjunction of M1 and M2 'Megaricism'. The literature on ancient philosophy contains several valuable discussions of M1 and M2 taken individually. (...)
    6 citations
  18. Interpretability and Unification.Adrian Erasmus & Tyler D. P. Brunet - 2022 - Philosophy and Technology 35 (2):1-6.
    In a recent reply to our article, “What is Interpretability?,” Prasetya argues against our position that artificial neural networks are explainable. It is claimed that our indefeasibility thesis—that adding complexity to an explanation of a phenomenon does not make the phenomenon any less explainable—is false. More precisely, Prasetya argues that unificationist explanations are defeasible to increasing complexity, and thus, we may not be able to provide such explanations of highly complex AI models. The reply highlights an important lacuna in our (...)
    2 citations
  19. Ethics as a service: a pragmatic operationalisation of AI ethics.Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - manuscript
    As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the (...)
    6 citations
  20. Görüngüsel Muhafazakarlık: Genel Bakış ve Bazı Yaygın Eleştirilere Alternatif Yanıtlar.Utku Ataş - 2023 - Kilikya Felsefe Dergisi / Cilicia Journal of Philosophy 10 (2):34-52.
    Because it takes the philosophical analysis of rational belief as its subject matter, epistemology attaches central importance to justification. Justification involves identifying the condition, or set of conditions, under which a person has a reason to believe a proposition. The principle of phenomenal conservatism put forward by Michael Huemer, a theory of justification compatible with the moderate/fallibilist foundationalist view that many of our beliefs have non-inferential justification, specifies a condition of this kind. On the PC formulation, if it seems to S that p, then, in the absence of defeaters, S has at least some degree of (...)
  21. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of (...)
    7 citations
  22. Preemption effects in visual search: Evidence for low-level grouping.Ronald A. Rensink & James T. Enns - 1995 - Psychological Review 102 (1):101-130.
    Experiments are presented showing that visual search for Mueller-Lyer (ML) stimuli is based on complete configurations, rather than component segments. Segments easily detected in isolation were difficult to detect when embedded in a configuration, indicating preemption by low-level groups. This preemption—which caused stimulus components to become inaccessible to rapid search—was an all-or-nothing effect, and so could serve as a powerful test of grouping. It is shown that these effects are unlikely to be due to blurring by simple spatial filters at (...)
    33 citations
  23. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.Benjamin Davies & Thomas Douglas - 2022 - In Jesper Ryberg & Julian V. Roberts (eds.), Sentencing and Artificial Intelligence. Oxford: Oxford University Press.
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the (...)
    1 citation
  24. Medical Image Classification with Machine Learning Classifier.Destiny Agboro - forthcoming - Journal of Computer Science.
    In contemporary healthcare, medical image categorization is essential for illness prediction, diagnosis, and therapy planning. The emergence of digital imaging technology has led to a significant increase in research into the use of machine learning (ML) techniques for the categorization of images in medical data. We provide a thorough summary of recent developments in this area in this review, using knowledge from the most recent research and cutting-edge methods. We begin by discussing the unique challenges and opportunities associated with medical image (...)
    1 citation
  25. Persons or datapoints?: Ethics, artificial intelligence, and the participatory turn in mental health research.Joshua August Skorburg, Kieran O'Doherty & Phoebe Friesen - 2024 - American Psychologist 79 (1):137-149.
    This article identifies and examines a tension in mental health researchers’ growing enthusiasm for the use of computational tools powered by advances in artificial intelligence and machine learning (AI/ML). Although there is increasing recognition of the value of participatory methods in science generally and in mental health research specifically, many AI/ML approaches, fueled by an ever-growing number of sensors collecting multimodal data, risk further distancing participants from research processes and rendering them as mere vectors or collections of data points. The (...)
  26. The debate on the ethics of AI in health care: a reconstruction and critical review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  27. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
  28. Ethical Issues in Text Mining for Mental Health.Joshua Skorburg & Phoebe Friesen - forthcoming - In M. Dehghani & R. Boyd (ed.), The Atlas of Language Analysis in Psychology.
    A recent systematic review of Machine Learning (ML) approaches to health data, containing over 100 studies, found that the most investigated problem was mental health (Yin et al., 2019). Relatedly, recent estimates suggest that between 165,000 and 325,000 health and wellness apps are now commercially available, with over 10,000 of those designed specifically for mental health (Carlo et al., 2019). In light of these trends, the present chapter has three aims: (1) provide an informative overview of some of the recent (...)
    2 citations
  29. MACHINE LEARNING IMPROVED ADVANCED DIAGNOSIS OF SOFT TISSUES TUMORS.M. Bavadharani - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):112-123.
    Soft Tissue Tumors (STT) are a type of sarcoma found in tissues that interface, backing, and encompass body structures. Due to their shallow recurrence in the body and their extraordinary variety, they seem, by all accounts, to be heterogeneous when seen through Magnetic Resonance Imaging (MRI). They are effortlessly mistaken for different infections, for example, fibro adenoma mammae, lymphadenopathy, and struma nodosa, and these indicative blunders have an extensive unfavorable impact on the clinical treatment cycle of patients. Analysts have proposed (...)
  30. Axe the X in XAI: A Plea for Understandable AI.Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  31. The purpose of qualia: What if human thinking is not (only) information processing?Martin Korth - manuscript
    Despite recent breakthroughs in the field of artificial intelligence (AI) – or more specifically machine learning (ML) algorithms for object recognition and natural language processing – it seems to be the majority view that current AI approaches are still no real match for natural intelligence (NI). More importantly, philosophers have collected a long catalogue of features which imply that NI works differently from current AI not only in a gradual sense, but in a more substantial way: NI is closely related (...)
  32. Understanding Biology in the Age of Artificial Intelligence.Adham El Shazly, Elsa Lawerence, Srijit Seal, Chaitanya Joshi, Matthew Greening, Pietro Lio, Shantung Singh, Andreas Bender & Pietro Sormanni - manuscript
    Modern life sciences research is increasingly relying on artificial intelligence (AI) approaches to model biological systems, primarily centered around the use of machine learning (ML) models. Although ML is undeniably useful for identifying patterns in large, complex data sets, its widespread application in biological sciences represents a significant deviation from traditional methods of scientific inquiry. As such, the interplay between these models and scientific understanding in biology is a topic with important implications for the future of scientific research, yet it (...)
  33. Gönderim Üzerine.Bertrand Russell & Alper Yavuz - 2015 - Felsefe Tartismalari 49:55-72.
    Definite descriptions are expressions such as "the king of England" or "the capital of China" that begin with a definite article (not voiced in Turkish, but corresponding to "the" in English). In this paper Russell sets out his own theory of descriptions, concerning how definite descriptions should be analysed logically. Russell's claim is that if definite descriptions are analysed correctly, many philosophical puzzles of the kind raised by non-denoting expressions such as "the present king of France" will disappear.
  34. Artificial Intelligence for the Internal Democracy of Political Parties.Claudio Novelli, Giuliano Formisano, Prathm Juneja, Sandri Giulia & Luciano Floridi - manuscript
    The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to the collection of partial data, rare updates, and significant demands on resources. To address these issues, the article suggests that specific data management and Machine Learning (ML) techniques, such as natural language processing (...)
  35. Should the use of adaptive machine learning systems in medicine be classified as research?Robert Sparrow, Joshua Hatherley, Justin Oakley & Chris Bain - forthcoming - American Journal of Bioethics.
    A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory (...)
  36. Vitaminas e minerais na nutrição de bovinos.Joyanne Mirelle de Sousa Ferreira, Cleyton de Almeida Araújo, Rosa Maria dos Santos Pessoa, Glayciane Costa Gois, Fleming Sena Campos, Saullo Laet Almeida Vicente, Angela Maria dos Santos Pessoa, Dinah Correia da Cunha Castro Costa, Paulo César da Silva Azevêdo & Deneson Oliveira Lima - 2023 - Rev Colombiana Cienc Anim. Recia 15 (2):e969.
    ABSTRACT: Feed is the factor that weighs most heavily on the cost of an animal production system. Thus, the use of different feeding strategies remains the great challenge of animal nutrition, especially when taking into account the nutritional requirements of different categories of ruminants, in particular cattle in tropical regions, given that seasonality in forage production directly affects cattle production and leads to shortfalls in meeting the animals' nutritional requirements, mainly in minerals and vitamins. A diet that (...)
  37. Asociación en IA en beneficio de las personas y la sociedad, retos y perspectivas.Fabio Morandín-Ahuerma - 2023 - In Principios normativos para una ética de la Inteligencia Artificial. Puebla, México: Consejo de Ciencia y Tecnología del Estado de Puebla (Concytep). pp. 115-126.
    PAI is the "Partnership on AI to Benefit People and Society," a non-profit organization based in San Francisco, California, that brings together academic and civil-society organizations, technology companies, and media organizations to address substantive questions, mainly about the future of AI, but also other major global challenges such as climate change, food, (...)
  38. The Use of Machine Learning Methods for Image Classification in Medical Data.Destiny Agboro - forthcoming - International Journal of Ethics.
    Integrating medical imaging with computing technologies, such as Artificial Intelligence (AI) and its subsets: Machine learning (ML) and Deep Learning (DL) has advanced into an essential facet of present-day medicine, signaling a pivotal role in diagnostic decision-making and treatment plans (Huang et al., 2023). The significance of medical imaging is escalated by its sustained growth within the realm of modern healthcare (Varoquaux and Cheplygina, 2022). Nevertheless, the volume of medical images is increasing faster than the availability of imaging experts. Biomedical experts (...)
  39. From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap.Tianqi Kou - manuscript
    Two goals - improving the replicability and the accountability of Machine Learning research, respectively - have attracted much attention from the AI ethics and Machine Learning communities. Despite sharing the measure of improving transparency, the two goals are discussed in different registers: replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap - that of holding Machine Learning scientists accountable for Machine Learning harms when they are far from sites of application - this paper (...)
  40. Excavating “Excavating AI”: The Elephant in the Gallery.Michael J. Lyons - 2020 - arXiv 2009:1-15.
    Two art exhibitions, “Training Humans” and “Making Faces,” and the accompanying essay “Excavating AI: The politics of images in machine learning training sets” by Kate Crawford and Trevor Paglen, are making a substantial impact on discourse taking place in the social and mass media networks, and in some scholarly circles. Critical scrutiny reveals, however, a self-contradictory stance regarding informed consent for the use of facial images, as well as serious flaws in their critique of ML training sets. Our analysis underlines the non-negotiability (...)
    2 citations
  41. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs.Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al., 2019 in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive inferences. The (...)
  42. Towards Knowledge-driven Distillation and Explanation of Black-box Models.Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello (eds.), Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)
  43. Human Induction in Machine Learning: A Survey of the Nexus.Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
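    A minimal sketch of the experimental paradigm this abstract describes: split the available data into training and testing sets, fit a model on the former, and measure generalisation on the latter. The dataset, model, and acceptance threshold below are placeholders, not anything from the survey itself.

      # Train/test split and held-out accuracy as a stand-in generalisation measure.
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score

      X, y = load_breast_cancer(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

      model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      model.fit(X_tr, y_tr)
      test_acc = accuracy_score(y_te, model.predict(X_te))

      # The "a posteriori contract": deploy only if held-out accuracy is deemed acceptable.
      ACCEPTABLE = 0.90                              # illustrative threshold, not from the paper
      print(f"held-out accuracy = {test_acc:.3f}; deploy = {test_acc >= ACCEPTABLE}")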
  44. Responding to the Watson-Sterkenburg debate on clustering algorithms and natural kinds.Warmhold Jan Thomas Mollema - manuscript
    In Philosophy and Technology 36, David Watson discusses the epistemological and metaphysical implications of unsupervised machine learning (ML) algorithms. Watson is sympathetic to the epistemological comparison of unsupervised clustering, abstraction and generative algorithms to human cognition and sceptical about ML’s mechanisms having ontological implications. His epistemological commitments are that we learn to identify “natural kinds through clustering algorithms”, “essential properties via abstraction algorithms”, and “unrealized possibilities via generative models” “or something very much like them.” The same issue contains a commentary (...)
  45. Emergence of Ciprofloxacin Resistance among Pseudomonas Aeruginosa Isolated from Burn Patients.M. R. Shakibaie, S. Adeli & Y. Nikian - 2001 - Emergence: Complexity and Organization 26 (3&4).
    Background: Increasing resistance of Pseudomonas aeruginosa to ciprofloxacin in ICU/burn units has created a problem in the treatment of infections caused by this microorganism. Methods: Fifty P. aeruginosa strains were isolated from burn patients hospitalized in the Kerman Hospital during May 1999-April 2000 and were tested for in-vitro sensitivity to different antibiotics by disc diffusion breakpoint assay. The isolates were subjected to minimum inhibitory concentration (MIC) test by agar dilution method. Existence of the plasmids was also investigated in the (...)
  46. Vertrouwen in de geneeskunde en kunstmatige intelligentie.Lily Frank & Michal Klincewicz - 2021 - Podium Voor Bioethiek 3 (28):37-42.
    Artificial intelligence (AI) and systems that work with machine learning (ML) can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions can thus come to take the place of professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
  47. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy.Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to what (...)
  48. Disease Identification using Machine Learning and NLP.S. Akila - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):78-92.
    Artificial Intelligence (AI) technologies are now widely used in a variety of fields to aid with knowledge acquisition and decision-making. Health information systems, in particular, can gain the most from AI advantages. Recently, symptoms-based illness prediction research and manufacturing have grown in popularity in the healthcare business. Several scholars and organisations have expressed an interest in applying contemporary computational tools to analyse and create novel approaches for rapidly and accurately predicting illnesses. In this study, we present a paradigm for assessing (...)
  49. Molla Sadrâ'da Vâcibü'l-Vücûd'un İspatında Burhan-ı Sıddıkîn Proof Of The Truthful In Proving The Necessary Existence In Mullā Sadrā.Sedat Baran - 2020 - Diyanet İlmî Dergi 56 (1):205-224.
    The proof of the truthful (burhan al-siddiqin), which grew out of efforts to prove the existence of the Necessary Existent without relying on contingent beings as intermediaries, was first articulated by Muslim philosophers. Ibn Sina (d. 428/1037), under the influence of al-Farabi, set out a new proof and called it the proof of the truthful. Mulla Sadra (d. 1050/1641) formulated a new proof of the truthful by adopting the principle of the primacy of existence from the Sufis and the principle of gradation from Suhrawardi. This proof rests on several premises: the primacy of existence, its simplicity, its gradation, and the effect's need for a cause. After setting out these premises, he proves the existence of the Necessary Existent without appealing to an infinite regress. His (...)
  50. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains.Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate a patient is at a high risk of opioid abuse while a patient expressly reports oppositely. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi argues that a prescriber who (...)
Showing 1-50 of 74