Towards hybrid moral responsibility by allocating human tasks to virtual agents and the effects on users' perception
Through technologies such as ChatGPT, virtual agents (VAs) have not only become accessible to many individuals but are also increasingly deployed in a wide range of business processes and tasks. VAs are computer-based systems that mimic human behavior and, in some cases, even human appearance. Despite their numerous advantages, deploying VAs can raise ethical concerns, such as discrimination in hiring or a lack of transparency when VAs are used in medical treatment. Ethical issues can originate with humans, for example when recruiters discriminate during the hiring process, or with a VA whose algorithm disadvantages certain groups on the basis of historical data. This dissertation aimed to explore how moral responsibility can be allocated between humans and VAs so that, together, they reach better moral decisions than either actor could achieve alone. It also investigated how VAs are perceived when they take over human tasks that carry moral responsibility. The findings indicate that VAs can serve as proactive guides in moral decision-making, yet ultimate responsibility remains with human actors. This work also provides concrete approaches for allocating moral responsibility from various perspectives of normative ethics. Furthermore, the results suggest that explicability and self-disclosure influence the perception of VAs only in specific contexts, and that VAs not clearly labeled as non-human actors create uncertainty among users. The dissertation contributes theoretically by elucidating how organizations can allocate moral responsibility to arrive at improved moral decisions for affected individuals, groups, or society, and it deepens the understanding of how VAs are perceived.