Co-regularized alignment for unsupervised domain adaptation
Abhishek Kumar, Kahini Wadhawan, et al.
NeurIPS 2018
This work builds on existing approaches to text style transfer, the task of changing the style of a sentence without changing its content. The task has many potential applications, from rewriting scientific paper headlines for a general audience to making offensive language more suitable for children. We focus specifically on sentiment transfer because large datasets are available for it. Text style transfer is uniquely difficult because parallel data, sentence-for-sentence translations between styles, is hard to find for training. In this work, we begin by training an encoder neural network, using an adversarial approach, to produce representations of text that encode only content, not style. We then introduce a new partly-shared decoder architecture to turn these representations back into sentences, aiming for a better balance between content preservation and style modification. We find that a partly-shared decoder yields content preservation and style modification scores between those of different baselines, suggesting potential utility depending on the desired application.
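To make the described setup concrete, below is a minimal PyTorch sketch of the three pieces the abstract names: a content encoder trained adversarially against a style discriminator, and a partly-shared decoder whose body is shared across styles but whose output heads are style-specific. All module names, hyperparameters, and the flipped-sign adversarial loss are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Encodes a sentence into a (hopefully) style-free content vector."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return h.squeeze(0)                      # (batch, hidden_dim)

class StyleDiscriminator(nn.Module):
    """Adversary that tries to predict style from the content code."""
    def __init__(self, hidden_dim=256, n_styles=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_styles))

    def forward(self, z):
        return self.net(z)

class PartlySharedDecoder(nn.Module):
    """Shared recurrent body with one output projection per style."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256, n_styles=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.shared_rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.style_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, vocab_size) for _ in range(n_styles)])

    def forward(self, z, tokens, style_id):
        out, _ = self.shared_rnn(self.embed(tokens), z.unsqueeze(0))
        return self.style_heads[style_id](out)   # (batch, seq, vocab)

# One simplified training step on a toy batch.
enc = ContentEncoder(10000)
dec = PartlySharedDecoder(10000)
disc = StyleDiscriminator()
ce = nn.CrossEntropyLoss()
opt_model = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

tokens = torch.randint(0, 10000, (8, 20))        # toy token ids
styles = torch.randint(0, 2, (8,))               # toy style labels

z = enc(tokens)

# 1) Update the discriminator to predict style from the content code.
d_loss = ce(disc(z.detach()), styles)
opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

# 2) Update encoder/decoder: reconstruct the sentence while *maximizing*
#    the adversary's loss (a simple flipped-sign stand-in for gradient reversal).
logits = dec(z, tokens[:, :-1], style_id=0)
rec_loss = ce(logits.reshape(-1, 10000), tokens[:, 1:].reshape(-1))
adv_loss = -ce(disc(z), styles)
loss = rec_loss + 0.1 * adv_loss
opt_model.zero_grad(); loss.backward(); opt_model.step()
```

At transfer time one would encode a sentence and decode it through the output head of the target style; the shared body is what the abstract refers to as the partly-shared component.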
Payel Das, Tom Sercu, et al.
Nature Biomedical Engineering
Jatin Ganhotra, Robert Moore, et al.
EMNLP 2020
Tom Sercu, Sebastian Gehrmann, et al.
DGS@ICLR Workshop 2019