Credibility Attributions in Human-Robot Interactions: Empirical Investigations on Users’ Perceptions of Social Robots as a Source of Information
This dissertation aims to improve the understanding of the credibility attribution process toward social robots. Their intended use as information-sharing agents in educational institutions, retail stores, or at home creates an apparent need to comprehend how users evaluate the suitability of social robots as a source of information. This applies in particular because credibility is not a static feature of a robot but an ascription that depends on users’ perceptions. Misattributions can therefore occur that either impair the credibility of well-designed robotic systems or produce unjustifiably high credibility ratings of questionably programmed robots. The potential influence of highly credible robots thus has severe implications if it is used to manipulate people’s behavior in favor of a third party’s interests or if it determines how much of the robot’s informational content is consumed. Understanding the credibility attribution process toward social robots is therefore crucial in order to guide users away from misattributions of credibility as well as from unfavorable behavioral decisions. Based on theoretical arguments, the credibility attribution process is assumed to be influenced not only by the features of the robot itself, such as its design or mentalizing abilities, but also by user-related characteristics, such as a person’s reliance on technology, and by the context in which an interaction takes place, for example, a setting in which a robot recommends a company’s own products. Additionally, the attribution of credibility is connected with the way a robot’s information is used in subsequent behavioral decisions. These decisions can be as simple as whether to purchase a product recommended by a robot or whether to continue an interaction with it.
Since all three factors (robot-related, user-related, and context-related) are supposed to explain the outcome of people’s credibility evaluations as well as the behavioral counterpart of these evaluations, all of them were examined in this research project. In three experimental studies, people’s credibility ratings of social robots were measured after exposure to various manipulations expected to affect these evaluations. Drawing on the phenomena of algorithm aversion and algorithm appreciation, the first online study (N = 338) had participants listen to either true or false news reports presented by a human, a humanoid social robot, or a non-human-like smart speaker, allowing comparisons of credibility attributions across sources. Because participants ascribed significantly lower credibility and theory of mind abilities to the robots and smart speakers than to the human sources, the second, laboratory-based study (N = 200) experimentally manipulated the degree of a robot’s mentalizing abilities in order to analyze whether demonstrating such abilities supports the attribution of credibility. This was combined with a manipulation of external manipulative intent, which was either present or absent during the interaction with the study’s robot. It was hypothesized that more advanced mentalizing abilities would positively affect credibility attributions and that this relation would reverse when an organization used the robot to maximize its own benefits; however, none of these influences was supported by the empirical data, neither in explicit nor in indirect measures. The robot’s mentalizing abilities did, however, significantly affect its empathy ratings, with higher mentalizing abilities being related to higher levels of ascribed empathy.
Finally, the last laboratory experiment (N = 115) focused on how a social robot’s behavioral and persona adaptations from one person to another, when experienced by users, influence their (credibility) evaluations of the robot. Assuming that many human-robot interactions will take place in multi-user scenarios, a robot’s adaptation process will become highly visible and thus potentially problematic if the resulting adaptations are too pronounced and mutually inconsistent. Despite these deductions, the statistical results of this study detected differences neither in people’s evaluations of the robot nor in their interaction time with it across the three conditions, each demonstrating one kind of potential adaptation (no, behavioral, or persona adaptation). Although some of these results contradict previous empirical findings or their underlying theories, the overall contribution of this dissertation lies in its analyses of the different multifaceted credibility challenges social robots will potentially face. On the one hand, the empirical data collected for this dissertation point to a difference between devices and human sources in terms of credibility and theory of mind attributions, regardless of the content presented. On the other hand, robots’ credibility ratings proved very robust against various potential influences, as neither the demonstration of a robot’s mentalizing abilities, nor the perceived manipulative intent of an organization using the robot, nor the experienced switch between its conflicting personas caused different user evaluations or behaviors. In sum, while a social robot’s credibility was found to be at best equal to, and sometimes lower than, that of other sources, this credibility rating is difficult to increase and at the same time does not necessarily protect users from falling for manipulative strategies for which the robot might be used.
As a result, these findings demonstrate the need for carefully planned human-robot interactions and for increased user awareness of manipulation attempts, so that social robots can reliably support meaningful information-sharing tasks without their credibility being undermined or misused.