Prototype-Based Knowledge Representation for Improved Human-Robot Interaction
We propose a knowledge representation based on prototype theory in order to improve human-robot interaction. Since robots are becoming increasingly important in our everyday lives, they might one day be used to do household chores, for example in the kitchen. In order to tidy up, however, robots have to be able to find the places where items belong; thus they need to categorise objects. We develop a paradigm that mimics human categorisation in order to provide flexible, human-like solutions. To identify a suitable realisation of prototype theory, we implemented and augmented the approach by Hampton and the one described by Minda and Smith, and compared their performance. We found that prototype models represent similarities between objects well. Furthermore, we found that the approach described by Minda and Smith is preferable to Hampton's, although on the whole the two do not differ greatly. We outline how the approach by Minda and Smith, as augmented by us, could contribute to a human-like knowledge representation. One remaining question is to what extent the proposed knowledge representation reflects human categorisation and whether the resulting behaviour of a robot is intuitively understandable for users.
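For illustration only, the following is a minimal sketch of how a prototype model in the spirit of the one described by Minda and Smith could assign an object to a category by its similarity to learned prototypes. The feature encoding, the attention weights, the exponential similarity function, and all names (e.g. categorise, prototypes) are illustrative assumptions and not the implementation evaluated in this work.

import math

# Sketch of prototype-based categorisation, loosely following a
# multiplicative prototype model (exponential decay of a weighted
# distance to the category prototype). Values below are toy assumptions.

def distance(item, prototype, weights, r=1):
    """Weighted Minkowski distance between an item and a category prototype."""
    return sum(w * abs(x - p) ** r
               for x, p, w in zip(item, prototype, weights)) ** (1 / r)

def similarity(item, prototype, weights, c=1.0):
    """Similarity decays exponentially with distance to the prototype."""
    return math.exp(-c * distance(item, prototype, weights))

def categorise(item, prototypes, weights):
    """Assign the item to the category with the most similar prototype."""
    sims = {name: similarity(item, proto, weights)
            for name, proto in prototypes.items()}
    total = sum(sims.values())
    # Normalised similarities can be read as graded category membership.
    return max(sims, key=sims.get), {name: s / total for name, s in sims.items()}

# Example: toy feature vectors (size, elongation, has_handle) for kitchen objects.
prototypes = {
    "cup":   [0.3, 0.4, 1.0],
    "plate": [0.5, 0.1, 0.0],
}
weights = [1.0, 1.0, 1.0]   # equal attention to all features
item = [0.35, 0.35, 0.9]    # an unseen object to be tidied away
print(categorise(item, prototypes, weights))

In such a sketch, the graded membership values could be one way for a robot to express how confident it is about where an unseen object belongs.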