Explanatory visualization for transparent user modeling and recommendation
This dissertation investigates how explainable recommender systems (RSs) can be reconceptualized through a human-centered lens, with a particular emphasis on explanatory visualizations as a means of enhancing the transparency of both user modeling and recommendation processes. The primary aim of this research is to advance the domain of Human-Centered Explainable RS by addressing persistent gaps in the literature, including the opacity of RS algorithms, the absence of systematic design principles for visual explanations, limited attention to user model transparency, and a lack of personalization in explanatory interactions.

We propose a novel approach to visual explanations in RSs in which explanations are interactive, personalized, and layered across three levels of detail (basic, intermediate, and advanced), addressing diverse user goals, needs, and preferences. This approach is informed by well-established design practices from the human-computer interaction (HCI) and information visualization fields. To operationalize this explanation model, the dissertation presents RIMA (a transparent Recommendation and Interest Modeling Application), a system specifically developed to deliver explainable user models and recommendations in the domain of scientific publications.

A series of qualitative and quantitative user studies was conducted to evaluate the proposed explanations. Using thematic analysis and statistical techniques, the studies explored the impact of different explanation scopes and levels of detail on various explanation aims, including scrutability, transparency, trust, and satisfaction. The findings indicate that the proposed explanation strategies tend to enhance users’ perceptions of system transparency and understanding, although these outcomes are also shaped by factors such as individual user characteristics and the explanation scope.

The contributions of this dissertation are anchored in three core pillars: (1) visual explanation, through the development of systematic and interactive visualizations that make both the recommendation and user modeling processes transparent; (2) transparent user modeling, by introducing the EDUSS framework for self-actualization, which enables users to access, scrutinize, and manage their profiles; and (3) personalized explanation, which adopts a user-driven approach by giving users control over the explanation process through various intelligibility types (What, What-if, Why, and How), and by personalizing the explanation to their preferences by adapting the level of detail to users’ characteristics and goals and adjusting the explanation’s scope to focus on the input, process, or output. From these contributions, we derived design guidelines to support researchers and practitioners in systematically designing visual explanations based on a (G, U, D) triplet that links explanation goals (G), user personal characteristics (U), and design choices (D) to produce effective explanations of both the user model and the recommendation. Together, these contributions offer a pathway toward more human-centered and transparent RSs.