Abstract
This report describes our submission to OffensEval 2019 (SemEval-2019 Task 6). We first present the classifier we implemented, the input data used, and the preprocessing performed. We then critically evaluate our performance: we achieved macro-averaged F1-scores of 0.76, 0.68, and 0.54 on Task A, Task B, and Task C, respectively, which we believe reflects the level of sophistication of the models implemented. Finally, we discuss the difficulties encountered and possible future improvements. Our code can be found at https://goo.gl/mdtuwF
URL
http://arxiv.org/abs/1903.08734