Results for 'Algorithmic complexity'

997 found
  1. The algorithm audit: Scoring the algorithms that score us. Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
    10 citations
  2. Algorithmic fairness in mortgage lending: from absolute conditions to relational trade-offs. Michelle Seng Ah Lee & Luciano Floridi - 2020 - Minds and Machines 31 (1):165-191.
    To address the rising concern that algorithmic decision-making may reinforce discriminatory biases, researchers have proposed many notions of fairness and corresponding mathematical formalizations. Each of these notions is often presented as a one-size-fits-all, absolute condition; however, in reality, the practical and ethical trade-offs are unavoidable and more complex. We introduce a new approach that considers fairness—not as a binary, absolute mathematical condition—but rather, as a relational notion in comparison to alternative decision-making processes. Using US mortgage lending as an example (...)
    6 citations
  3. Load Balancing of Tasks on Cloud Computing Using Time Complexity of Proposed Algorithm. V. Smrithi & B. K. Tiwari - 2018 - International Journal of Scientific Research and Engineering Trends 4 (6).
    Cloud computing is a developing field favoured by many at present, but its appeal depends largely on its performance, which in turn depends heavily on effective scheduling algorithms and load balancing. In this paper we address this issue and propose an algorithm for the private cloud that achieves high throughput, and one for the public cloud that addresses condition awareness along with performance. To enhance throughput in the private cloud, SJF is used for (...)
  4. Algorithmic Structuring of Cut-free Proofs. Matthias Baaz & Richard Zach - 1993 - In Egon Börger, Hans Kleine Büning, Gerhard Jäger, Simone Martini & Michael M. Richter (eds.), Computer Science Logic. CSL’92, San Miniato, Italy. Selected Papers. Springer. pp. 29–42.
    The problem of algorithmic structuring of proofs in the sequent calculi LK and LKB ( LK where blocks of quantifiers can be introduced in one step) is investigated, where a distinction is made between linear proofs and proofs in tree form. In this framework, structuring coincides with the introduction of cuts into a proof. The algorithmic solvability of this problem can be reduced to the question of k-l-compressibility: "Given a proof of length k , and l ≤ k (...)
  5. Taste and the algorithm. Emanuele Arielli - 2018 - Studi di Estetica 12 (3):77-97.
    Today, a consistent part of our everyday interaction with art and aesthetic artefacts occurs through digital media, and our preferences and choices are systematically tracked and analyzed by algorithms in ways that are far from transparent. Our consumption is constantly documented, and we are then fed back tailored information. We are therefore witnessing the emergence of a complex interrelation between our aesthetic choices, their digital elaboration, and also the production of content and the dynamics of creative processes. All are (...)
    1 citation
  6. Comparative Analysis of the Performance of Popular Sorting Algorithms on Datasets of Different Sizes and Characteristics. Ahmed S. Sabah, Samy S. Abu-Naser, Yasmeen Emad Helles, Ruba Fikri Abdallatif, Faten Y. A. Abu Samra, Aya Helmi Abu Taha, Nawal Maher Massa & Ahmed A. Hamouda - 2023 - International Journal of Academic Engineering Research (IJAER) 7 (6):76-84.
    The efficiency and performance of sorting algorithms play a crucial role in various applications and industries. In this research paper, we present a comprehensive comparative analysis of popular sorting algorithms on datasets of different sizes and characteristics. The aim is to evaluate the algorithms' performance and identify their strengths and weaknesses under varying scenarios. We consider six commonly used sorting algorithms: QuickSort, TimSort, MergeSort, HeapSort, RadixSort, and ShellSort. These algorithms represent a range of approaches and techniques, including divide-and-conquer, hybrid (...)
    1 citation
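The kind of head-to-head comparison this abstract describes is easy to sketch. The following toy harness (my illustration, not the paper's code) times Python's built-in Timsort against a plain top-down mergesort on a single random dataset; a real benchmark would vary the sizes and characteristics of the data as the paper does:

```python
import random
import time

def merge_sort(xs):
    """Plain top-down mergesort: split, recurse, merge."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

data = [random.randint(0, 10**6) for _ in range(10_000)]
expected = sorted(data)

for name, fn in [("TimSort (built-in)", sorted), ("MergeSort", merge_sort)]:
    t0 = time.perf_counter()
    result = fn(data)
    dt = time.perf_counter() - t0
    assert result == expected  # correctness check before trusting the timing
    print(f"{name}: {dt * 1000:.1f} ms")
```

Timings from a single run like this are noisy; repeating each measurement and taking the minimum (as `timeit` does) gives steadier numbers.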
  7. Complexity and information. Panu Raatikainen - 1998 - In Complexity, Information and Incompleteness (doctoral dissertation). Reports from the Department of Philosophy, University of Helsinki, 2/1998.
    "Complexity" is a catchword of certain extremely popular and rapidly developing interdisciplinary new sciences, often called, accordingly, the sciences of complexity. It is often closely associated with another notably popular but ambiguous word, "information"; information, in turn, may be justly called the central new concept of twentieth-century science. Moreover, the notion of information is regularly coupled with a key concept of thermodynamics, viz. entropy. And as if this were not enough, it is quite usual to add (...)
  8. Descriptive Complexity, Computational Tractability, and the Logical and Cognitive Foundations of Mathematics. Markus Pantsar - 2020 - Minds and Machines 31 (1):75-98.
    In computational complexity theory, decision problems are divided into complexity classes based on the amount of computational resources it takes for algorithms to solve them. In theoretical computer science, it is commonly accepted that only functions for solving problems in the complexity class P, solvable by a deterministic Turing machine in polynomial time, are considered to be tractable. In cognitive science and philosophy, this tractability result has been used to argue that only functions in P can feasibly (...)
    5 citations
  9. "Life as Algorithm". S. M. Amadae - 2021 - In Jenny Andersson & Sandra Kemp (eds.), Twenty-First Century Approaches to Literature: Futures.
    This chapter uncovers the complex negotiations for authority in various representations about futures of life which have been advanced by different branches of the sciences, and have culminated in the emerging concept of life as algorithm. It charts the historical shifts in expertise and representations of life, from naturalists, to mathematical modellers, and specialists in computation, and argues that physicists, game theorists, and economists now take a leading role in explaining and projecting futures of life. The chapter identifies Richard Dawkins (...)
  10. Neutrosophic speech recognition Algorithm for speech under stress by Machine learning. Florentin Smarandache, D. Nagarajan & Said Broumi - 2023 - Neutrosophic Sets and Systems 53.
    It is well known that the unpredictable speech production brought on by stress from the task at hand has a significant negative impact on the performance of speech processing algorithms. Speech therapy benefits from being able to detect stress in speech. Speech processing performance suffers noticeably when perceptually produced stress causes variations in speech production. Using the acoustic speech signal to objectively characterize speaker stress is one method for assessing production variances brought on by stress. Real-world complexity and ambiguity (...)
  11. Curious objects: How visual complexity guides attention and engagement. Zekun Sun & Chaz Firestone - 2021 - Cognitive Science: A Multidisciplinary Journal 45 (4):e12933.
    Some things look more complex than others. For example, a crenulate and richly organized leaf may seem more complex than a plain stone. What is the nature of this experience—and why do we have it in the first place? Here, we explore how object complexity serves as an efficiently extracted visual signal that the object merits further exploration. We algorithmically generated a library of geometric shapes and determined their complexity by computing the cumulative surprisal of their internal skeletons—essentially (...)
    5 citations
  12. Rituals and Algorithms: Genealogy of Reflective Faith and Postmetaphysical Thinking. Martin Beck Matuštík - 2019 - European Journal for Philosophy of Religion 11 (4):163-184.
    What happens when mindless symbols of algorithmic AI encounter mindful performative rituals? I return to my criticisms of Habermas’ secularising reading of Kierkegaard’s ethics. Next, I lay out Habermas’ claim that the sacred complex of ritual and myth contains the ur-origins of postmetaphysical thinking and reflective faith. If reflective faith shares with ritual the same origins as communicative interaction does, how do we access these archaic ritual sources of human solidarity in the age of AI?
    1 citation
  13. Integrating Computer Vision Algorithms and Ontologies for Spectator Crowd Behavior Analysis. Davide Conigliaro, Celine Hudelot, Roberta Ferrario & Daniele Porello - 2017 - In Vittorio Murino, Marco Cristani, Shishir Shah & Silvio Savarese (eds.), Group and Crowd Behavior for Computer Vision, 1st Edition. pp. 297-319.
    In this paper, building on these previous works, we propose to go deeper into the understanding of crowd behavior by proposing an approach which integrates ontological models of crowd behavior and dedicated computer vision algorithms, with the aim of recognizing some targeted complex events happening in the playground from the observation of the spectator crowd behavior. In order to do that, we first propose an ontology encoding available knowledge on spectator crowd behavior, built as a specialization of the (...)
  14. Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice. Duncan Purves & Jeremy Davis - 2022 - Public Affairs Quarterly 36 (2):136-162.
    A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public trust (...)
    2 citations
  15. Cognitive and Computational Complexity: Considerations from Mathematical Problem Solving. Markus Pantsar - 2019 - Erkenntnis 86 (4):961-997.
    Following Marr’s famous three-level distinction between explanations in cognitive science, it is often accepted that focus on modeling cognitive tasks should be on the computational level rather than the algorithmic level. When it comes to mathematical problem solving, this approach suggests that the complexity of the task of solving a problem can be characterized by the computational complexity of that problem. In this paper, I argue that human cognizers use heuristic and didactic tools and thus engage in (...)
    7 citations
  16. Neutrosophic Association Rule Mining Algorithm for Big Data Analysis. Mohamed Abdel-Basset, Mai Mohamed, Florentin Smarandache & Victor Chang - 2018 - Symmetry 10 (4):1-19.
    Big Data is a large-sized and complex dataset, which cannot be managed using traditional data processing tools. The mining process for big data is the ability to extract valuable information from these large datasets. Association rule mining is a type of data mining process, which is intended to determine interesting associations between items and to establish a set of association rules whose support is greater than a specific threshold. The classical association rules can only be extracted from binary data where an (...)
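The classical support threshold this abstract builds on can be sketched in a few lines. The snippet below is my illustration (not the paper's neutrosophic algorithm), with hypothetical example transactions; support is just the fraction of transactions containing an itemset:

```python
from itertools import combinations

# Hypothetical market-basket transactions (illustrative only).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

# Keep only item pairs whose support clears a chosen threshold.
threshold = 0.5
frequent = [
    set(pair)
    for pair in combinations({"bread", "milk", "butter"}, 2)
    if support(set(pair), transactions) >= threshold
]
# {"bread","milk"} and {"bread","butter"} each appear in 2 of 4
# transactions (support 0.5) and so clear the threshold.
print(frequent)
```

Rule mining then turns each frequent itemset into candidate rules (e.g. bread → milk) and filters again by confidence.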
  17. Local Complexity Adaptable Trajectory Partitioning via Minimum Message Length. Charles R. Twardy - 2011 - In 18th IEEE International Conference on Image Processing. IEEE.
    We present a minimum message length (MML) framework for trajectory partitioning by point selection, and use it to automatically select the tolerance parameter ε for Douglas-Peucker partitioning, adapting to local trajectory complexity. By examining a range of ε for synthetic and real trajectories, it is easy to see that the best ε does vary by trajectory, and that the MML encoding makes sensible choices and is robust against Gaussian noise. We use it to explore the identification of micro-activities within (...)
  18. Probable General Intelligence algorithm. Anton Venglovskiy - manuscript
    Contains a description of a generalized and constructive formal model for the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
  19. Neutrosophic Treatment of the Modified Simplex Algorithm to find the Optimal Solution for Linear Models. Maissam Jdid & Florentin Smarandache - 2023 - International Journal of Neutrosophic Science 23.
    Science is the basis for managing the affairs of life and human activities, and living without knowledge is a form of wandering and a kind of loss. Using scientific methods helps us understand the foundations of choice, decision-making, and adopting the right solutions when solutions abound and options are numerous. Operational research is considered the best that scientific development has provided because its methods depend on the application of scientific methods in solving complex issues and the optimal use of available (...)
  20. Environmental Variability and the Emergence of Meaning: Simulational Studies across Imitation, Genetic Algorithms, and Neural Nets. Patrick Grim - 2006 - In Angelo Loula & Ricardo Gudwin (eds.), Artificial Cognition Systems. Idea Group. pp. 284-326.
    A crucial question for artificial cognition systems is what meaning is and how it arises. In pursuit of that question, this paper extends earlier work in which we show the emergence of simple signaling in biologically inspired models using arrays of locally interactive agents. Communities of "communicators" develop in an environment of wandering food sources and predators using any of a variety of mechanisms: imitation of successful neighbors, localized genetic algorithms and partial neural net training on successful neighbors. Here we (...)
  21. Contents, vehicles, and complex data analysis in neuroscience. Daniel C. Burnston - 2020 - Synthese 199 (1-2):1617-1639.
    The notion of representation in neuroscience has largely been predicated on localizing the components of computational processes that explain cognitive function. On this view, which I call “algorithmic homuncularism,” individual, spatially and temporally distinct parts of the brain serve as vehicles for distinct contents, and the causal relationships between them implement the transformations specified by an algorithm. This view has a widespread influence in philosophy and cognitive neuroscience, and has recently been ably articulated and defended by Shea. Still, I (...)
    2 citations
  22. Irreversibility and Complexity. Lapin Yair - manuscript
    Complexity is a relatively new field of study that is still heavily influenced by philosophy. However, with the advent of modern computing, it has become easier to conduct thorough investigations of complex systems using computational simulations. Despite significant progress, there remain certain characteristics of complex systems that are difficult to comprehend. To better understand these features, information can be applied using simple models of complex systems. The concepts of Shannon's information theory, Kolmogorov complexity, and logical depth are helpful (...)
  23. Compositionality and Complexity in Multiple Negation. Francis Corblin - 1995 - Logic Journal of the IGPL 3 (2-3):449-471.
    This paper considers negative triggers and the interpretation of simple sentences containing more than one occurrence of those items. In the most typical interpretations those sentences have more negative expressions than negations in their semantic representation. It is first shown that this compositionality problem remains in current approaches. A principled algorithm for deriving the representation of sentences with multiple negative quantifiers in a DRT framework is then introduced. The algorithm is under the control of an on-line check-in, keeping the (...)
    2 citations
  24. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration. Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability, and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with through typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  25. A Beginner’s Guide to Crossing the Road: Towards an Epistemology of Successful Action in Complex Systems. Ragnar van der Merwe & Alex Broadbent - forthcoming - Interdisciplinary Science Reviews.
    Crossing the road within the traffic system is an example of an action human agents perform successfully day-to-day in complex systems. How do they perform such successful actions given that the behaviour of complex systems is often difficult to predict? The contemporary literature contains two contrasting approaches to the epistemology of complex systems: an analytic and a post-modern approach. We argue that neither approach adequately accounts for how successful action is possible in complex systems. Agents regularly perform successful actions without (...)
  26. The future of condition based monitoring: risks of operator removal on complex platforms. Marie Oldfield, Murray McMonies & Ella Haig - 2022 - AI and Society 2:1-12.
    Complex systems are difficult to manage, operate and maintain. This is why we see teams of highly specialised engineers in industries such as aerospace, nuclear and subsurface. Condition based monitoring is also employed to maximise the efficiency of extensive maintenance programmes instead of using periodic maintenance. A level of automation is often required in such complex engineering platforms in order to effectively and safely manage them. Advances in Artificial Intelligence related technologies have offered greater levels of automation but this potentially (...)
  27. An Improbable God Between Simplicity and Complexity: Thinking about Dawkins’s Challenge. Philippe Gagnon - 2013 - International Philosophical Quarterly 53 (4):409-433.
    Richard Dawkins has popularized an argument that he thinks sound for showing that there is almost certainly no God. It rests on the assumptions (1) that complex and statistically improbable things are more difficult to explain than those that are not and (2) that an explanatory mechanism must show how this complexity can be built up from simpler means. But what justifies claims about the designer’s own complexity? One comes to a different understanding of order and of simplicity (...)
  28. Information of the chassis and information of the program in synthetic cells. Antoine Danchin - 2009 - Systems and Synthetic Biology 3:125-134.
    Synthetic biology aims at reconstructing life to put to the test the limits of our understanding. It is based on premises similar to those which permitted invention of computers, where a machine, which reproduces over time, runs a program, which replicates. The underlying heuristics explored here is that an authentic category of reality, information, must be coupled with the standard categories, matter, energy, space and time to account for what life is. The use of this still elusive category permits us (...)
    4 citations
  29. A compromise between reductionism and non-reductionism. Eray Özkural - 2007 - In Carlos Gershenson, Diederik Aerts & Bruce Edmonds (eds.), Worldviews, Science, and Us: Philosophy and Complexity. World Scientific. pp. 285.
    This paper investigates the seeming incompatibility of reductionism and non-reductionism in the context of the complexity sciences. I review algorithmic information theory for this purpose. I offer two physical metaphors to form a better understanding of algorithmic complexity, and I briefly discuss its advantages, shortcomings and applications. Then, I revisit the non-reductionist approaches in philosophy of mind, which are often arguments from ignorance to counter physicalism. A new approach called mild non-reductionism is proposed, which reconciles the necessities (...)
  30. Computable bi-embeddable categoricity. Luca San Mauro, Nikolay Bazhenov, Ekaterina Fokina & Dino Rossegger - 2018 - Algebra and Logic 5 (57):392-396.
    We study the algorithmic complexity of isomorphic embeddings between computable structures.
  31. On interpreting Chaitin's incompleteness theorem. Panu Raatikainen - 1998 - Journal of Philosophical Logic 27 (6):569-586.
    The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be good (...)
    12 citations
  32. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that (...)
    4 citations
  33. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. (...)
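The P/NP asymmetry this abstract describes can be made concrete with subset-sum (my example, not from the chapter): checking a proposed certificate takes time polynomial in the input, while the naive search for one examines up to 2^n subsets:

```python
from itertools import combinations

def verify_subset_sum(nums, target, subset_indices):
    """NP-style verification: linear in the input, hence polynomial time."""
    return sum(nums[i] for i in subset_indices) == target

def solve_subset_sum(nums, target):
    """Brute-force search: in the worst case tries all 2^n index subsets."""
    n = len(nums)
    for r in range(n + 1):
        for idxs in combinations(range(n), r):
            if verify_subset_sum(nums, target, idxs):
                return idxs
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve_subset_sum(nums, 9)          # a set of indices summing to 9
print(cert, verify_subset_sum(nums, 9, cert))
```

Whether the exponential search can always be replaced by a polynomial-time one is exactly the open P vs. NP question the abstract refers to.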
  34. What We Informationally Owe Each Other. Alan Rubel, Clinton Castro & Adam Pham - forthcoming - In Algorithms & Autonomy: The Ethics of Automated Decision Systems. Cambridge: Cambridge University Press. pp. 21-42.
    One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integral (...)
  35. What is data ethics? Luciano Floridi & Mariarosaria Taddeo - 2016 - Philosophical Transactions of the Royal Society A 374 (2083).
    This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information (...)
    49 citations
  36. On the Solvability of the Mind-Body Problem. Jan Scheffel - manuscript
    The mind-body problem is analyzed in a physicalist perspective. By combining the concepts of emergence and algorithmic information theory in a thought experiment employing a basic nonlinear process, it is shown that epistemically strongly emergent properties may develop in a physical system. Turning to the significantly more complex neural network of the brain it is subsequently argued that consciousness is epistemically emergent. Thus reductionist understanding of consciousness appears not possible; the mind-body problem does not have a reductionist solution. The (...)
    3 citations
  37. Intractability and the use of heuristics in psychological explanations. Iris van Rooij, Cory Wright & Todd Wareham - 2012 - Synthese 187 (2):471-487.
    Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
    13 citations
  38. Logically possible machines. Eric Steinhart - 2002 - Minds and Machines 12 (2):259-280.
    I use modal logic and transfinite set-theory to define metaphysical foundations for a general theory of computation. A possible universe is a certain kind of situation; a situation is a set of facts. An algorithm is a certain kind of inductively defined property. A machine is a series of situations that instantiates an algorithm in a certain way. There are finite as well as transfinite algorithms and machines of any degree of complexity (e.g., Turing and super-Turing machines and more). (...)
    7 citations
  39. A fresh look at research strategies in computational cognitive science: The case of enculturated mathematical problem solving. Regina E. Fabry & Markus Pantsar - 2019 - Synthese 198 (4):3221-3263.
    Marr’s seminal distinction between computational, algorithmic, and implementational levels of analysis has inspired research in cognitive science for more than 30 years. According to a widely-used paradigm, the modelling of cognitive processes should mainly operate on the computational level and be targeted at the idealised competence, rather than the actual performance of cognisers in a specific domain. In this paper, we explore how this paradigm can be adopted and revised to understand mathematical problem solving. The computational-level approach applies methods (...)
    9 citations
  40. Probability and Randomness. Antony Eagle - 2016 - In Alan Hájek & Christopher Hitchcock (eds.), The Oxford Handbook of Probability and Philosophy. Oxford: Oxford University Press. pp. 440-459.
    Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property (...)
    6 citations
  41. The unsolvability of the mind-body problem liberates the will. Jan Scheffel - manuscript
    The mind-body problem is analyzed in a physicalist perspective. By combining the concepts of emergence and algorithmic information theory in a thought experiment employing a basic nonlinear process, it is argued that epistemically strongly emergent properties may develop in a physical system. A comparison with the significantly more complex neural network of the brain shows that also consciousness is epistemically emergent in a strong sense. Thus reductionist understanding of consciousness appears not possible; the mind-body problem does not have a (...)
  42. Wittgenstein and the Status of Contradictions. Louis Caruana - 2004 - In A. Coliva & E. Picardi (eds.), Wittgenstein Today. Padova: Poligrafo. pp. 223-232.
    Ludwig Wittgenstein, in the "Remarks on the Foundations of Mathematics", often refers to contradictions as deserving special study. He is said to have predicted that there will be mathematical investigations of calculi containing contradictions and that people will pride themselves on having emancipated themselves from consistency. This paper examines a way of taking this prediction seriously. It starts by demonstrating that the easy way of understanding the role of contradictions in a discourse, namely in terms of pure convention within a (...)
  43. Liberalism and Automated Injustice.Chad Lee-Stronach - 2024 - In Duncan Ivison (ed.), Research Handbook on Liberalism. Cheltenham: Edward Elgar Publishing.
    Many of the benefits and burdens we might experience in our lives — from bank loans to bail terms — are increasingly decided by institutions relying on algorithms. In a sense, this is nothing new: algorithms — instructions whose steps can, in principle, be mechanically executed to solve a decision problem — are at least as old as allocative social institutions themselves. Algorithms, after all, help decision-makers to navigate the complexity and variation of whatever domains they are designed for. (...)
  44. Exploring Randomness.Panu Raatikainen - 2001 - Notices of the AMS 48 (9):992-6.
    Review of "Exploring Randomness" (2001) and "The Unknowable" (1999) by Gregory Chaitin.
  45. Information, learning and falsification.David Balduzzi - 2011
    There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out [2]. The third, (...)
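The first two notions in the abstract above can be sketched in code, with the caveat that true Kolmogorov complexity is uncomputable: a minimal, hypothetical illustration uses compressed length (here via Python's standard `zlib`, one arbitrary choice of compressor) as a crude upper bound on description length, and computes Shannon self-information as −log₂ p.

```python
import math
import zlib

def compression_length(s: bytes) -> int:
    # Crude, computable upper bound on Kolmogorov complexity:
    # the length of one particular compressed encoding of s.
    # (True K(s) is uncomputable; this is only a proxy.)
    return len(zlib.compress(s, 9))

def self_information(p: float) -> float:
    # Shannon self-information, in bits, of observing an event
    # with probability p: the number of binary alternatives ruled out.
    return -math.log2(p)

orderly = b"ab" * 500                                         # highly regular string
irregular = bytes((i * 97 + 31) % 256 for i in range(1000))   # far less patterned

# A regular string admits a much shorter description than an irregular one.
print(compression_length(orderly) < compression_length(irregular))  # True

# Observing a 1-in-8 event yields exactly 3 bits of information.
print(self_information(1 / 8))  # 3.0
```

The comparison, not the absolute byte counts, is the point: compressor overhead makes short strings incomparable, but the gap between orderly and disorderly inputs tracks the intuition behind algorithmic information.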
  46. The Logical Structure of the Cognitive Mechanisms Guiding Psychological Development.George Osborne - 1995 - Dissertation, University of Cambridge
    This thesis presents a model of cognitive development inspired by Piaget's "Genetic Epistemology". It is observed that the epigenetic process described by Piaget possesses mechanisms and behaviour that characterise complex adaptive systems. A model of bipedal motion based on the "Bucket Brigade" algorithm of Holland is presented to explore this relationship.
  47. God is Random: A Novel Argument for the Existence of God.Serkan Zorba - 2016 - European Journal of Science and Theology 12 (1):51-67.
    Applying the concepts of Kolmogorov-Chaitin complexity and Turing’s uncomputability from the computability and algorithmic information theories to the irreducible and incomputable randomness of quantum mechanics, a novel argument for the existence of God is presented. Concepts of ‘transintelligence’ and ‘transcausality’ are introduced, and from them, it is posited that our universe must be epistemologically and ontologically an open universe. The proposed idea also proffers a new perspective on the nonlocal nature and the infamous wave-function-collapse problem of quantum mechanics.
  48. The crucial roles of biodiversity loss belief and perception in urban residents’ consumption attitude and behavior towards animal-based products.Minh-Hoang Nguyen, Tam-Tri Le, Thomas Jones & Quan-Hoang Vuong - manuscript
    Products made from animal fur and skin have been a major part of human civilization. However, in modern society, the unsustainable consumption of these products – often considered luxury goods – has many negative environmental impacts. This study explores how people’s perceptions of biodiversity affect their attitudes and behaviors toward consumption. To investigate the information process more deeply, we add the moderating effect of beliefs about biodiversity loss. Following the Bayesian Mindsponge Framework (BMF) analytics, we use mindsponge-based reasoning for constructing conceptual models (...)
  49. Toward an Ethics of AI Assistants: an Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom (...)
  50. The crucial roles of biodiversity loss belief and perception in urban residents’ consumption attitude and behavior towards animal-based products.Nguyen Minh-Hoang, Tam-Tri Le, Thomas E. Jones & Quan-Hoang Vuong - manuscript
    Products made from animal fur and skin have been a major part of human civilization. However, in modern society, the unsustainable consumption of these products – often considered luxury goods – has many negative environmental impacts. This study explores how people’s perceptions of biodiversity affect their attitudes and behaviors toward consumption. To investigate the information process more deeply, we add the moderating effect of beliefs about biodiversity loss. Following the Bayesian Mindsponge Framework (BMF) analytics, we use mindsponge-based reasoning for constructing conceptual models (...)
1 — 50 / 997