Towards a Human-Robot Interaction Design for People with Motor Disabilities by Enhancing the Visual Space

People with motor disabilities experience physical limitations that affect not only their activities of daily living but also their integration into the labor market. Human-Robot Collaboration presents opportunities to enhance human capabilities and counter physical limitations through different interaction paradigms and technological devices. However, little is known about the needs, expectations, and perspectives of people with motor disabilities within a human-robot collaborative work environment.

In this thesis, we aim to shed light on the perspectives of people with motor disabilities when designing a teleoperation concept that could enable them to perform manipulation tasks in a manufacturing environment. First, we report the concerns of people with motor disabilities, social workers, and caregivers about including a collaborative robotic arm in assembly lines. Second, we identify specific opportunities and potential challenges in hands-free interaction design for robot control. Third, we present a multimodal hands-free interaction concept for robot control that uses augmented reality to display the user interface. In addition, we propose a feedback concept that provides augmented visual cues to help robot operators gain a better perception of the location of objects in the workspace and improve performance in pick-and-place tasks.

We present our contributions through six studies with people with and without disabilities, and the empirical findings are reported in eight publications. Publications I, II, and IV extend the research efforts of designing human-robot collaborative spaces for people with motor disabilities. Publication III sheds light on the reasoning behind hands-free modality choices, and Publication VIII evaluates a hands-free teleoperation concept with an individual with motor disabilities. Publications V–VIII explore augmented reality to present a user interface that facilitates hands-free robot control and uses augmented visual cues to address depth perception issues, thus improving performance in pick-and-place tasks.

Our findings can be summarized as follows. We point out concerns grouped into three themes: the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. Additionally, we provide five lessons learned from the pragmatic use of participatory design with people with motor disabilities: (1) approach participants through different channels and allow for multidisciplinarity in the research team, (2) consider the relationship between social dependencies when selecting a participatory design technique, (3) plan for early exposure to robots and other technology, (4) take all opinions into account in design sessions, and (5) acknowledge that ethical implications go beyond consent. We also introduce findings about the nature of modality choices in hands-free interaction, which point to the user’s own abilities and individual experiences as determining factors in interaction evaluation. Finally, we present and evaluate a possible hands-free multimodal interaction design for robot control using augmented reality and augmented visual cues. We propose that augmented visual cues can improve depth perception and performance in pick-and-place tasks. Accordingly, we evaluated our visual cue designs by taking into account depth-related variables (the target’s distance and pose) and subjective certainty. Our results highlight that shorter distances and a clear pose lead to higher success rates, faster grasping times, and higher certainty. In addition, we re-designed our augmented visual cues considering visualization techniques and monocular cues that could enhance the visual space for robot teleoperation. Our results demonstrate that our augmented visual cues can assist robot control and increase accuracy in pick-and-place tasks.

In conclusion, our findings on people with motor disabilities in a human-robot collaborative workplace, a hands-free multimodal interaction design, and augmented visual cues extend the knowledge about using mixed reality in human-robot interaction. Further, these contributions have the potential to promote future research on designing inclusive environments for people with disabilities.



