Tanmoy Chakraborty, Ramasuri Narayanam
EMNLP 2016
In this paper, we propose a novel fine-tuning algorithm for the recently introduced multi-way, multilingual neural machine translation model that enables zero-resource machine translation. When used together with novel many-to-one translation strategies, we empirically show that this fine-tuning algorithm allows the multi-way, multilingual model to translate a zero-resource language pair (1) as well as a single-pair neural translation model trained with up to 1M direct parallel sentences of the same language pair and (2) better than a pivot-based translation strategy, while keeping only one additional copy of attention-related parameters.
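As a rough illustration of the contrast drawn in the abstract, the sketch below compares pivot-based translation (two supervised hops through a bridge language) with direct zero-resource decoding in a single multi-way model. The MultiWayNMT class and its translate() method are hypothetical stand-ins for illustration only, not the paper's actual interface.

    # A minimal sketch, assuming a hypothetical MultiWayNMT interface;
    # not the paper's implementation.

    class MultiWayNMT:
        """Stand-in for a multi-way, multilingual NMT model."""

        def translate(self, sentence: str, src: str, tgt: str) -> str:
            # A real model would run encoder/decoder inference here.
            return f"[{src}->{tgt}] {sentence}"

    def pivot_translate(model, sentence, src, pivot, tgt):
        # Two supervised hops through a pivot language (e.g. English):
        # errors compound and decoding cost doubles.
        mid = model.translate(sentence, src=src, tgt=pivot)
        return model.translate(mid, src=pivot, tgt=tgt)

    def zero_resource_translate(model, sentence, src, tgt):
        # One direct hop on a pair never seen with parallel data,
        # which the proposed fine-tuning makes viable.
        return model.translate(sentence, src=src, tgt=tgt)

    model = MultiWayNMT()
    print(pivot_translate(model, "hola mundo", src="es", pivot="en", tgt="fr"))
    print(zero_resource_translate(model, "hola mundo", src="es", tgt="fr"))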
Jian Ni, Radu Florian
EMNLP 2016
Gakuto Kurata, Bowen Zhou, et al.
EMNLP 2016
Yaser Al-Onaizan, Kishore Papineni
COLING/ACL 2006