Abstract
Care is more than dispensing pills or cleaning beds. It is about responding to the whole patient. What is called "bedside manner" in medical personnel is the quality of treating the patient not as a mechanism but as a being, much like the caregiver, with desires, ideas, dreams, aspirations, and the full gamut of mental and emotional character. As automata are designed to meet a growing functional need in care, the pressure is on making them more humanlike so that they can carry out that function more effectively. The question becomes not merely whether a care automaton can achieve good bedside manner but whether the patient will feel deceived by its humanlikeness. It seems the device must be designed either to display explicit, mere human "likeness," which likely undermines its bedside-manner potential, or to fool the patient completely. Neither option is attractive. This article examines the social problems of designing maximally humanlike care automata and how those problems may begin to erode the human rights of users and patients. It then investigates the alternatives for addressing this problem, whether through industrial and professional self-regulation or public-policy initiatives. Finally, it frames the problem in a broader historical perspective, in terms of previous bans, moratoria, and other means of controlling hazardous and potentially rights-violating techniques and materials.