
Imagining an Engineer: On GAN-Based Data Augmentation Perpetuating Biases

2018-11-09
Niharika Jain, Lydia Manikonda, Alberto Olmo Hernandez, Sailik Sengupta, Subbarao Kambhampati

Abstract

The use of synthetic data generated by Generative Adversarial Networks (GANs) has become a popular way to perform data augmentation in many applications. While practitioners celebrate this as an economical way to obtain more data for training downstream classifiers, it is not clear that they recognize the inherent pitfalls of the technique. In this paper, we exhort practitioners not to derive a false sense of security against data biases from data augmentation. To drive this point home, we show that, starting with a dataset of head-shots of engineering researchers, GAN-based augmentation “imagines” synthetic engineers, most of whom have masculine features and white skin color (as inferred from a human subject study conducted on Amazon Mechanical Turk). This demonstrates how biases inherent in the training data are reinforced, and sometimes even amplified, by GAN-based data augmentation; it should serve as a cautionary tale for lay practitioners.

URL

https://arxiv.org/abs/1811.03751

PDF

https://arxiv.org/pdf/1811.03751
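
The augmentation technique the abstract warns about amounts to sampling images from a generator trained on the original (biased) data and mixing them into the training set. Below is a minimal, illustrative sketch of that pipeline in PyTorch; it is not the authors' implementation, and the toy generator architecture, `LATENT_DIM`, and the `augment_with_gan` helper are assumptions made purely for illustration.

```python
# Minimal sketch of GAN-based data augmentation (illustrative only; not the
# authors' pipeline). Assumes a generator has already been trained on the
# real image set; a tiny DCGAN-style generator stands in for it here.
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the noise vector fed to the generator (assumed)


class Generator(nn.Module):
    """Toy DCGAN-style generator producing 64x64 RGB images."""

    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),            # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),            # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),             # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(True),             # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                     # 64x64, values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)


def augment_with_gan(real_images, generator, n_synthetic, device="cpu"):
    """Append GAN-generated samples to a batch of real images.

    Any skew in the data the generator was trained on (e.g. gender or
    skin-color imbalance) is reproduced in the synthetic samples, which is
    exactly the pitfall the paper warns about.
    """
    generator.eval()
    with torch.no_grad():
        z = torch.randn(n_synthetic, LATENT_DIM, 1, 1, device=device)
        fake_images = generator(z)
    return torch.cat([real_images.to(device), fake_images], dim=0)


if __name__ == "__main__":
    gen = Generator()                  # in practice: load trained weights
    real = torch.randn(8, 3, 64, 64)   # stand-in for real head-shots
    augmented = augment_with_gan(real, gen, n_synthetic=8)
    print(augmented.shape)             # torch.Size([16, 3, 64, 64])
```

Because the synthetic images are drawn from a distribution fit to the original data, the augmented set inherits (and, per the paper, can amplify) whatever demographic bias the original head-shots contained.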

