Citations
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias (2004). Ethics and Information Technology 6 (3): 175-183. (176 citations)
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Agent, Action, and Reason. Donald Davidson (1971). In Robert Williams Binkley, Richard N. Bronaugh & Ausonio Marras (eds.), Agent, Action, and Reason. Toronto: University of Toronto Press. (59 citations)
  • Brain to computer communication: Ethical perspectives on interaction models. Guglielmo Tamburrini (2009). Neuroethics 2 (3): 137-149. (21 citations)
    Brain Computer Interfaces (BCIs) enable one to control peripheral ICT and robotic devices by processing brain activity on-line. The potential usefulness of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged taking into account technological imminence and pervasiveness of BCI technologies. By focussing on imminent technological developments, ethical reflection is informatively grounded into realistic protocols of brain-to-computer communication. In particular, (...)
  • Ethical monitoring of brain-machine interfaces. Federica Lucivero & Guglielmo Tamburrini (2008). AI and Society 22 (3): 449-460. (8 citations)
    The ethical monitoring of brain-machine interfaces (BMIs) is discussed in connection with the potential impact of BMIs on distinguishing traits of persons, changes of personal identity, and threats to personal autonomy. It is pointed out that philosophical analyses of personhood are conducive to isolating an initial thematic framework for this ethical monitoring problem, but a contextual refinement of this initial framework depends on applied ethics analyses of current BMI models and empirical case-studies. The personal autonomy-monitoring problem is approached by identifying (...)
  • Can humans perceive their brain states? Boris Kotchoubey, Andrea Kübler, Ute Strehl, Herta Flor & Niels Birbaumer (2002). Consciousness and Cognition 11 (1): 98-113. (9 citations)
    Although the brain enables us to perceive the external world and our body, it remains unknown whether brain processes themselves can be perceived. Brain tissue does not have receptors for its own activity. However, the ability of humans to acquire self-control of brain processes indicates that the perception of these processes may also be achieved by learning. In this study patients learned to control low-frequency components of their EEG: the so-called slow cortical potentials (SCPs). In particular "probe" sessions, the patients (...)