Reducing Annotation Efforts in Supervised Short Answer Scoring
Automated short answer scoring is increasingly used to give students timely feedback about their learning progress. Building scoring models comes with high costs, as state-of-the-art methods using supervised learning require large amounts of hand-annotated data. We analyze the potential of recently proposed methods for semi-supervised learning based on clustering. We find that all examined methods (centroids, all clusters, selected pure clusters) are mainly effective for very short answers and do not generalize well to several-sentence responses.
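To make the clustering-based strategies concrete, the following is a minimal sketch of the "centroids" idea under assumed choices (TF-IDF features, k-means, and made-up example answers and scores; the paper's actual pipeline may differ): unlabeled answers are clustered, a human scores only the answer nearest each cluster centroid, and that score is propagated to the rest of the cluster.

```python
# Hedged sketch of centroid-based label propagation for short answer scoring.
# Assumptions: TF-IDF features, k-means clustering, hypothetical answers/scores.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

# Hypothetical unlabeled student answers.
answers = [
    "The plant uses sunlight to make food.",
    "Photosynthesis converts light into chemical energy.",
    "The mitochondria produce energy for the cell.",
    "Cells get energy from mitochondria.",
]

# Embed answers and group them into k clusters.
X = TfidfVectorizer().fit_transform(answers)
k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# "Centroids" strategy: a human annotates only the answer closest to each
# cluster centroid; that score is then propagated to every cluster member.
centroid_idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
human_scores = {0: 1, 1: 0}  # hypothetical scores given to the k centroid answers

propagated = [human_scores[km.labels_[i]] for i in range(len(answers))]
print(list(zip(answers, propagated)))
```

The "all clusters" and "selected pure clusters" variants differ in how many clusters are propagated from and how they are filtered; this sketch only illustrates the shared propagation step, not the selection criteria.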