Automated analysis of software artefacts: a use case in e-assessment
Automated grading and feedback generation for programming and modeling exercises has become a common means of supporting teachers and students at universities and schools. Tools used in this context employ general software engineering techniques for the analysis of software artefacts. Experience with the current state of the art shows good results, but also reveals a gap between the potential power of such techniques and the power actually exploited in current e-assessment systems. This thesis contributes to closing this gap by developing and evaluating approaches that are more universal than those currently in use and that provide novel means of feedback generation. The thesis shows that these approaches can be applied effectively and efficiently to the mass validation of exercise submissions, and that they yield high feedback quality as perceived by students.