Abstract
In radiology, radiologists not only detect lesions in medical images but also describe them with various attributes such as their type, location, size, shape, and intensity. While these lesion attributes are rich and useful in many downstream clinical applications, how to extract them from radiology reports has been less studied. This paper outlines a novel deep learning method to automatically extract attributes of lesions of interest from clinical text. Unlike classical CNN models, we integrate the multi-head self-attention mechanism to handle long-distance information in the sentence and to jointly correlate different portions of the sentence representation subspaces in parallel. Evaluation on an in-house corpus demonstrates that our method achieves high performance, with 0.848 precision, 0.788 recall, and 0.815 F-score. The new method and the constructed corpus will enable us to build automatic systems with a higher-level understanding of the radiological world.
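The abstract names multi-head self-attention as the key addition to a CNN-based extractor but gives no implementation details. A minimal NumPy sketch of the mechanism itself (generic scaled dot-product attention split across heads; all weight shapes and the toy input are illustrative assumptions, not the paper's configuration) might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Scaled dot-product self-attention over a single sentence.

    x: (seq_len, d_model) token representations.
    w_q, w_k, w_v, w_o: (d_model, d_model) projection matrices
    (hypothetical shapes; the paper does not specify its dimensions).
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project tokens to queries, keys, and values.
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    # Split the model dimension into heads: (num_heads, seq_len, d_head).
    def split(t):
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)

    # Each head attends over all positions in parallel, so distant
    # tokens interact directly (the "long-distance" property).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)          # (num_heads, seq_len, seq_len)
    out = attn @ v                           # (num_heads, seq_len, d_head)

    # Merge heads back and apply the output projection.
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o

# Toy usage with random weights (illustration only).
rng = np.random.default_rng(0)
seq_len, d_model, heads = 5, 8, 2
x = rng.standard_normal((seq_len, d_model))
ws = [rng.standard_normal((d_model, d_model)) for _ in range(4)]
y = multi_head_self_attention(x, *ws, num_heads=heads)
print(y.shape)  # (5, 8): one refined vector per token
```

In the paper's setting, the attention output would feed the downstream attribute classifier alongside (or in place of) fixed-window CNN features, letting the model relate a lesion mention to attribute words anywhere in the sentence.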
URL
http://arxiv.org/abs/1904.13018