
Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings

2019-05-13
Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, Alan W Black

Abstract

Online texts – across genres, registers, domains, and styles – are riddled with human stereotypes, expressed in overt or subtle ways. Word embeddings trained on these texts perpetuate and amplify these stereotypes, and propagate biases to machine learning models that use word embeddings as features. In this work, we propose a method to debias word embeddings in multiclass settings such as race and religion, extending the work of Bolukbasi et al. (2016) from binary settings such as binary gender. Next, we propose a novel methodology for the evaluation of multiclass debiasing. We demonstrate that our multiclass debiasing is robust and maintains efficacy on standard NLP tasks.
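For a concrete picture of the hard-debiasing idea the abstract extends to the multiclass case, here is a minimal NumPy sketch: estimate a bias subspace by PCA over definitional word vectors centered on the mean of each defining set (e.g. a set of religion terms), then project words off that subspace. The function names, the `embeddings` dictionary, and the choice of subspace dimension `k` are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def bias_subspace(embeddings, defining_sets, k=2):
    """Estimate a k-dimensional bias subspace from multiclass defining sets,
    e.g. [["jew", "christian", "muslim"], ...].
    `embeddings` is assumed to map word -> 1-D np.ndarray of equal length."""
    centered = []
    for words in defining_sets:
        vecs = np.stack([embeddings[w] for w in words])
        centered.extend(vecs - vecs.mean(axis=0))   # center each set on its own mean
    centered = np.stack(centered)
    # principal directions of the centered vectors span the bias subspace
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                                   # (k, d) orthonormal basis

def hard_debias(vec, basis):
    """Remove the component of `vec` lying in the bias subspace, then renormalize."""
    proj = basis.T @ (basis @ vec)
    out = vec - proj
    return out / np.linalg.norm(out)
```

In this sketch, `bias_subspace` would be applied once over the defining sets for a given bias class, and `hard_debias` to each word vector that should be neutral with respect to that class; the paper's full method and its equalization step are described in the linked PDF.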

URL

http://arxiv.org/abs/1904.04047

PDF

http://arxiv.org/pdf/1904.04047
