
SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing

2018-08-19
Taku Kudo, John Richardson

Abstract

This paper describes SentencePiece, a language-independent subword tokenizer and detokenizer designed for Neural-based text processing, including Neural Machine Translation. It provides open-source C++ and Python implementations for subword units. While existing subword segmentation tools assume that the input is pre-tokenized into word sequences, SentencePiece can train subword models directly from raw sentences, which allows us to make a purely end-to-end and language independent system. We perform a validation experiment of NMT on English-Japanese machine translation, and find that it is possible to achieve comparable accuracy to direct subword training from raw sentences. We also compare the performance of subword training and segmentation with various configurations. SentencePiece is available under the Apache 2 license at https://github.com/google/sentencepiece.

URL

https://arxiv.org/abs/1808.06226

PDF

https://arxiv.org/pdf/1808.06226
