Cross-Lingual Content Scoring
We investigate the feasibility of cross-lingual content scoring, a scenario where the training and test data in an automatic scoring task come from two different languages. Cross-lingual scoring can contribute to educational equality by allowing answers in multiple languages. Training a model in one language and applying it to another might also help to overcome data sparsity issues by reusing trained models from other languages. As there is no suitable dataset available for this new task, we create a comparable bilingual corpus by extending the English ASAP dataset with German answers. Our experiments with cross-lingual scoring based on machine-translating either the training or the test data show a considerable drop in scoring quality.
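To make the two experimental setups mentioned above concrete, the following is a minimal sketch of "translate-train" (machine-translate the training data into the test language) versus "translate-test" (machine-translate the test data into the training language). It is an illustration only: the translate stub, the bag-of-words classifier, and the toy answers are assumptions for readability, not the authors' actual pipeline or the ASAP/German data.

```python
# Sketch of the two cross-lingual scoring setups: translate-train vs. translate-test.
# Assumptions: a placeholder MT function and a simple TF-IDF + logistic regression scorer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def translate(texts, source_lang, target_lang):
    """Placeholder for a machine-translation call (e.g. an external MT service).
    Returns the input unchanged here so the sketch stays runnable."""
    return texts


# Toy data standing in for scored short answers (label = content score).
english_answers = ["the cell membrane controls transport", "plants need sunlight"]
english_scores = [2, 1]
german_answers = ["die Zellmembran steuert den Transport", "Pflanzen brauchen Licht"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())

# Setup 1 (translate-train): translate the training answers into the test
# language, train on the translations, then score the original test answers.
model.fit(translate(english_answers, "en", "de"), english_scores)
print(model.predict(german_answers))

# Setup 2 (translate-test): train on the original training answers and
# translate the test answers into the training language before scoring.
model.fit(english_answers, english_scores)
print(model.predict(translate(german_answers, "de", "en")))
```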