Automated feedback at task level: error analysis or worked-out examples – which type is more effective?
This paper reports on a small-scale quantitative study, conducted at a middle school in Germany, that compared the effects of two types of feedback on the reactivation of procedural skills with fractions. Tasks and feedback were implemented in a STACK-based digital learning environment that allowed randomization of the numerical and graphical elements of each task, as well as automated analysis of student responses to every numerical or graphical variation. Owing to the small sample, the observations could not be statistically verified, but they nevertheless point to an unexpected result: low achievers in particular seem to benefit more from error-analysis feedback than from feedback providing fully worked-out solutions. If confirmed, this finding suggests that error-based feedback is more effective than worked-out examples for reactivating and practising skills.