Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems

In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282 (2019)
Abstract
Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.