PT Unknown
AU Kari, M
TI Situated and Semantics-Aware Mixed Reality
PD 08
PY 2024
DI 10.17185/duepublico/82367
LA en
DE Extended Reality <Computer Science>; Augmented Reality; Mixed Reality; Virtual Reality; Spatial Computing; Situated Computing; Scene Understanding; Human-Machine Communication; Machine Vision; Computer Vision; Human-Computer Interaction; Computer Graphics; Real-Time System
AB Computers have moved from the basement onto the user's desk, into their pocket, and around their wrist. Not only did they physically converge on the user's space of action, but sensors such as CMOS, GPS, NFC, and LiDAR also began to provide access to the environment, thus situating them in the user's direct reality. However, the user and their computer exhibit fundamentally different perceptual processes, resulting in substantially different internal representations of reality. Augmented and Mixed Reality systems, as the most advanced situated computing devices of today, are mostly centered around geometric representations of the world around them, such as planes, point clouds, or meshes. This way of thinking has proven useful in tackling classical problems such as tracking and visual coherence. However, little attention has been directed toward a semantic and user-oriented understanding of the world and the specific situational parameters a user is facing, despite the fact that humans characterize their environment not only by geometries but also by objects, the relationships between objects, their meanings, and their affordances. Furthermore, humans observe other humans and how they interact with objects and other people, all of which together shapes intent, desire, and behavior. The resulting gap between the human-perceived and the machine-perceived world impedes the computer's potential to seamlessly integrate with the user's reality.
Instead, the computer continues to exist as a separate device, in its own reality, reliant on user input to align its functionality with the user's objectives. This dissertation on Situated and Semantics-Aware Mixed Reality works toward Augmented and Mixed Reality experiences, applications, and interactions that close this gap and blend seamlessly with the user's space and mind. It aims to integrate interactions and experiences with the physicality of the world by building an understanding of both the user and their environment in dynamic situations across space and time. Methodologically, the research presented in this dissertation concerns the design, implementation, and evaluation of novel, distributed, semantics-aware, spatial, interactive systems, and, to this end, the application of state-of-the-art computer vision for scene reconstruction and understanding, together enabling Situated and Semantics-Aware Mixed Reality.
ER