Towards Robust Neural Machine Translation
Abstract
Small perturbations in the input can severely distort intermediate representations and thus impact translation quality of neural machine translation (NMT) models. In this paper, we propose to improve the robustness of NMT models with adversarial stability training. The basic idea is to make both the encoder and decoder in NMT models robust against input perturbations by enabling them to behave similarly for the original input and its perturbed counterpart. Experimental results on Chinese-English, English-German and English-French translation tasks show that our approaches can not only achieve significant improvements over strong NMT systems but also improve the robustness of NMT models.
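The abstract states the core idea only at a high level. As a rough, non-authoritative sketch of that idea, the toy PyTorch code below trains on both the original source and a perturbed copy, and adds a penalty that pulls the perturbed encoder states toward the original ones so encoder and decoder "behave similarly" for both inputs. All names here (`ToyNMT`, `stability_loss`, `alpha`, `beta`) are illustrative rather than from the paper, and the plain L2 penalty is a simplification: the paper's actual adversarial stability training uses an adversarially learned objective, not this fixed loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy encoder-decoder, a stand-in for a real NMT model.
class ToyNMT(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def encode(self, src):
        # Encoder states for a batch of source sentences: (B, S, dim).
        h, _ = self.enc(self.emb(src))
        return h

    def decode_loss(self, enc_states, tgt):
        # Condition the decoder on the final encoder state (no attention,
        # for brevity) and teacher-force on the target prefix.
        h0 = enc_states[:, -1:, :].transpose(0, 1).contiguous()
        d, _ = self.dec(self.emb(tgt[:, :-1]), h0)
        logits = self.out(d)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               tgt[:, 1:].reshape(-1))

def stability_loss(model, src, src_pert, tgt, alpha=1.0, beta=1.0):
    # Assumes the perturbation preserves length (e.g. word substitutions),
    # so the two encoder outputs have matching shapes.
    enc = model.encode(src)
    enc_pert = model.encode(src_pert)
    # Translation losses for the original and the perturbed input ...
    l_orig = model.decode_loss(enc, tgt)
    l_pert = model.decode_loss(enc_pert, tgt)
    # ... plus a stability term pushing the perturbed encoding toward the
    # original one (detached, so only the perturbed branch is adjusted).
    l_inv = F.mse_loss(enc_pert, enc.detach())
    return l_orig + alpha * l_pert + beta * l_inv

# Usage with random data: perturb one source position, then backprop.
model = ToyNMT()
src = torch.randint(0, 1000, (4, 10))
src_pert = src.clone()
src_pert[:, 3] = torch.randint(0, 1000, (4,))  # word-substitution perturbation
tgt = torch.randint(0, 1000, (4, 12))
loss = stability_loss(model, src, src_pert, tgt)
loss.backward()
```

Detaching the original encoding in the stability term is one possible design choice: it treats the clean representation as the anchor and only moves the perturbed one toward it, rather than letting the penalty collapse both representations together.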
Venue
ACL 2018
Publication Year
2018
Authors
Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu