
Racial Bias in Hate Speech and Abusive Language Detection Datasets

2019-05-29
Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber

Abstract

Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. We train classifiers on these datasets and compare the predictions of these classifiers on tweets written in African-American English with those written in Standard American English. The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field, they will therefore have a disproportionate negative impact on African-American social media users. Consequently, these systems may discriminate against the groups who are often the targets of the abuse we are trying to detect.
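The audit the abstract describes, training a classifier on annotated tweets and then comparing how often it flags tweets from each dialect group as abusive, can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the file names (`annotated_tweets.csv`, `aae_tweets.csv`, `sae_tweets.csv`), the column layout, and the TF-IDF plus logistic regression model are all assumptions standing in for the paper's five datasets and trained classifiers.

```python
# Sketch of a dialect-disparity audit (hypothetical data files and model;
# the paper's datasets, labels, and classifiers differ).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: tweets annotated as abusive (1) or not (0).
train = pd.read_csv("annotated_tweets.csv")  # columns: text, label

# A simple bag-of-words classifier stands in for the trained models.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train["text"], train["label"])

# Hypothetical evaluation sets: tweets identified (e.g., by a dialect
# model) as African-American English vs. Standard American English.
aae = pd.read_csv("aae_tweets.csv")["text"]
sae = pd.read_csv("sae_tweets.csv")["text"]

# The disparity of interest: the rate at which each group is flagged.
rate_aae = clf.predict(aae).mean()
rate_sae = clf.predict(sae).mean()
print(f"Predicted abusive rate - AAE: {rate_aae:.3f}, SAE: {rate_sae:.3f}")
```

A gap between the two printed rates on otherwise comparable tweets is the kind of systematic disparity the paper reports across all five datasets.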

URL

http://arxiv.org/abs/1905.12516

PDF

http://arxiv.org/pdf/1905.12516

