The Influence of Variance in Learner Answers on Automatic Content Scoring

Automatic content scoring is an important application in the area of automatic educational assessment. Short texts written by learners are scored based on their content, while spelling and grammar mistakes are usually ignored. The difficulty of automatically scoring such texts varies with the variance within the learner answers. In this paper, we first discuss factors that influence variance in learner answers, so that practitioners can better estimate whether automatic scoring might be applicable to their usage scenario. We then compare the two main paradigms in content scoring, (i) similarity-based and (ii) instance-based methods, and discuss how well each can deal with the variance-inducing factors described above.
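To make the distinction between the two paradigms concrete, here is a minimal sketch using simple bag-of-words features. The reference answer, training answers, and threshold are purely illustrative; a similarity-based scorer compares a learner answer against a reference answer, while an instance-based scorer predicts from previously scored learner answers (a 1-nearest-neighbor rule in this sketch).

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words feature vector as a token-count dictionary."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def similarity_score(answer, reference, threshold=0.5):
    """Similarity-based: accept if the answer is close to a reference answer."""
    return 1 if cosine(bow(answer), bow(reference)) >= threshold else 0

def instance_score(answer, scored_answers):
    """Instance-based: copy the score of the most similar scored answer (1-NN)."""
    vec = bow(answer)
    best = max(scored_answers, key=lambda pair: cosine(vec, bow(pair[0])))
    return best[1]

# Hypothetical reference answer and scored training answers.
reference = "water evaporates because of the heat of the sun"
training = [("the sun heats the water so it evaporates", 1),
            ("water disappears into the ground", 0)]

answer = "the heat of the sun makes water evaporate"
print(similarity_score(answer, reference))  # → 1
print(instance_score(answer, training))     # → 1
```

Real systems use richer features and learned models, but the structural difference is the same: the first needs only a reference answer, the second needs a body of already-scored learner answers, which is why the two paradigms react differently to answer variance.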




Use and reproduction:
This work may be used under a Creative Commons Attribution 4.0 License (CC BY 4.0).