papers AI Learner

Generating Continuous Representations of Medical Texts

2018-05-15
Graham Spinks, Marie-Francine Moens

Abstract

We present an architecture that generates medical texts while learning an informative, continuous representation with discriminative features. During training, the input to the system is a dataset of captions for medical X-rays. The acquired continuous representations are of particular interest for many machine learning techniques, where the discrete and high-dimensional nature of textual input is an obstacle. We use an Adversarially Regularized Autoencoder to create realistic text in both an unconditional and a conditional setting. We show that this technique is applicable to medical texts, which often contain syntactic and domain-specific shorthand. A quantitative evaluation shows that we achieve a lower model perplexity than a traditional LSTM generator.
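The core idea of an Adversarially Regularized Autoencoder is that an encoder maps a discrete token sequence to a continuous code, a decoder reconstructs the tokens from that code, and a GAN (a generator producing fake codes plus a critic scoring codes) regularizes the code space so it is smooth and samplable. The following is a minimal NumPy sketch of these components and their losses; all names, sizes, the mean-pooling encoder, and the linear critic are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, CODE = 50, 16, 8           # toy vocabulary / embedding / code sizes

E_emb = rng.normal(size=(VOCAB, DIM))  # token embeddings (encoder input layer)
W_enc = rng.normal(size=(DIM, CODE))   # projection to the continuous code
W_dec = rng.normal(size=(CODE, VOCAB)) # decoder: code -> token logits
W_gen = rng.normal(size=(CODE, CODE))  # generator: noise -> fake code
w_cri = rng.normal(size=CODE)          # linear critic scoring codes

def encode(tokens):
    """Mean-pool token embeddings, then project to a continuous code."""
    return E_emb[tokens].mean(axis=0) @ W_enc

def decode_logits(code):
    """Token logits from a code (shared across positions in this toy sketch)."""
    return code @ W_dec

def critic(code):
    """Scalar critic score for a code vector."""
    return float(w_cri @ code)

caption = rng.integers(0, VOCAB, size=12)           # a toy "caption" of token ids
real_code = encode(caption)                         # continuous representation
fake_code = np.tanh(rng.normal(size=CODE) @ W_gen)  # generator output from noise

# WGAN-style critic objective: score real (encoded) codes high, fake ones low.
critic_loss = -(critic(real_code) - critic(fake_code))

# Autoencoder reconstruction loss: cross-entropy of the true tokens under the
# decoder's (log-softmaxed) logits.
logits = decode_logits(real_code)
log_probs = logits - np.log(np.exp(logits).sum())
recon_loss = -log_probs[caption].mean()
```

In a full training loop, the reconstruction loss updates the encoder and decoder, the critic loss updates the critic, and the generator (and, in some ARAE variants, the encoder) is updated adversarially against the critic, which is what shapes the continuous code space the abstract refers to.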

URL

https://arxiv.org/abs/1805.05691

PDF

https://arxiv.org/pdf/1805.05691
