Abstract
Automated detection of cervical cancer cells or cell clumps has the potential to significantly reduce error and increase productivity in cervical cancer screening. However, most traditional methods rely on accurate cell segmentation and discriminative hand-crafted feature extraction. Recently, deep learning-based methods have emerged that train convolutional neural networks (CNNs) to classify image patches, but they are computationally expensive. In this paper we propose to exploit contemporary object detection methods for cervical cancer detection. To deal with the limited number of training samples, we integrate a comparison classifier into a state-of-the-art two-stage object detection method; it classifies proposals by comparing them with reference images of each category. In addition, we propose to learn the reference images of the background from the data instead of choosing them manually by heuristic rules. This architecture, called the Comparison detector, shows significant improvement on the small-size dataset, achieving a mean Average Precision (mAP) of 26.3% and an Average Recall (AR) of 35.7%, both about 20 points higher than the baseline model. Moreover, the Comparison detector achieves the same mAP as the current state-of-the-art model when trained on the medium-size dataset, and improves AR by 4 points. Our method is promising for the development of automation-assisted cervical cancer screening systems.
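The comparison-based classification the abstract describes can be illustrated with a minimal sketch: a proposal's feature vector is scored against learned reference embeddings, one (or more) per category plus a learned background reference, and the similarities are turned into class probabilities. This is a simplified NumPy illustration under assumptions, not the paper's actual implementation; the function name, cosine similarity, and softmax scoring are all illustrative choices.

```python
import numpy as np

def comparison_classify(proposal_feat, class_refs, bg_ref, temperature=1.0):
    """Hypothetical sketch of a comparison classifier head.

    proposal_feat: 1-D feature vector of a region proposal.
    class_refs:    list of 1-D reference embeddings, one per category.
    bg_ref:        learned background reference embedding.
    Returns a probability vector; index 0 is background,
    indices 1.. are the cell categories.
    """
    refs = np.vstack([bg_ref] + list(class_refs))  # row 0 = background
    # Cosine similarity between the proposal and each reference embedding.
    refs_n = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    prop_n = proposal_feat / np.linalg.norm(proposal_feat)
    sims = refs_n @ prop_n
    # Softmax over similarities (temperature is an assumed hyperparameter).
    logits = sims / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()
```

For example, a proposal whose feature vector lies close to one category's reference embedding receives the highest probability for that category, while proposals resembling the learned background reference are suppressed.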
URL
http://arxiv.org/abs/1810.05952