Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments

AI and Society (March 2021): 1-20

Abstract

Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches has presented the challenge of defining cross-cultural fairness. This paper presents a simple method that uses word vectors to determine whether a verb is fair or unfair. It draws on the relational social ontologies already inherent in word embeddings and thus requires no training. The plausibility of the approach rests on two premises: first, that individuals consider acts fair if they would be willing to accept those acts done to themselves; second, that such a construal is ontologically reflected in word embeddings, by virtue of their ability to capture the dimensions of such a perception. These dimensions are responsibility vs. irresponsibility, gain vs. loss, reward vs. sanction, and joy vs. pain, combined as a single vector. The paper finds it possible to quantify and qualify a verb as fair or unfair by calculating the cosine similarity of the verb's embedding against FairVec, a vector that represents the above dimensions. We apply this to GloVe and Word2Vec embeddings. Testing on a list of verbs produces an F1 score of 95.7, which is improved to 97.0. Lastly, the method's applicability to sentence measurement is demonstrated.
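The scoring step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embeddings below are toy three-dimensional vectors standing in for GloVe/Word2Vec vectors, and FairVec is assumed here to be built as the mean of the difference vectors along the four named dimension pairs (the paper's exact construction may differ).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings for illustration only (stand-ins for real GloVe/Word2Vec vectors).
emb = {
    "responsibility":   np.array([0.9, 0.1, 0.2]),
    "irresponsibility": np.array([-0.8, 0.0, 0.1]),
    "gain":             np.array([0.7, 0.3, 0.0]),
    "loss":             np.array([-0.6, 0.2, 0.1]),
    "reward":           np.array([0.8, 0.2, -0.1]),
    "sanction":         np.array([-0.7, 0.1, 0.0]),
    "joy":              np.array([0.9, 0.0, 0.3]),
    "pain":             np.array([-0.9, 0.1, 0.2]),
}

# Assumed construction: FairVec as the mean of the positive-minus-negative
# difference vectors over the abstract's four dimensions.
pairs = [("responsibility", "irresponsibility"), ("gain", "loss"),
         ("reward", "sanction"), ("joy", "pain")]
fair_vec = np.mean([emb[pos] - emb[neg] for pos, neg in pairs], axis=0)

def fairness_score(verb_vec, threshold=0.0):
    """Score a verb embedding against FairVec; positive cosine -> 'fair'."""
    score = cosine_similarity(verb_vec, fair_vec)
    return score, ("fair" if score > threshold else "unfair")

# Hypothetical verb vectors, again purely illustrative.
help_vec = np.array([0.85, 0.15, 0.1])
steal_vec = np.array([-0.8, 0.1, 0.05])
```

With these toy vectors, `fairness_score(help_vec)` labels the verb "fair" and `fairness_score(steal_vec)` labels it "unfair"; with real embeddings the threshold would be tuned on labelled verbs.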

Author's Profile

Ahmed Izzidien
Cambridge University

Analytics

Added to PP
2021-03-24
