Conference paper
An automatic approach for document-level topic model evaluation
Shraey Bhatia, Jey Han Lau, et al.
CoNLL 2017
The Conference on Computational Natural Language Learning (CoNLL) features a shared task in which participants train and test their learning systems on the same data sets. In 2017, one of the two tasks was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on the input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and evaluation methodology, describe the data preparation, report and analyze the main results, and provide a brief categorization of the approaches taken by the participating systems.