Pose Estimation for Facilitating Movement Learning from Online Videos

Published in International Conference on Advanced Visual Interfaces, 2020

There exists a multitude of online video tutorials teaching physical movements such as exercises. Yet users lack support for verifying the accuracy of their movements when following such videos and have to rely on their own perception. To address this, we developed a web-based application that performs human pose estimation on both the online video and the user's webcam feed, then provides different types of visual feedback to the user. Our study suggests that overlaying the user's skeleton on the user's camera feed improved performance, whereas the user's skeleton on its own or the trainer's skeleton overlaid on the trainer video offered limited benefits. We believe that our application demonstrates the potential to enhance the learning of physical movements from online videos and provides a basis for designing suitable visualizations in other guidance systems.
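
The paper does not detail the implementation, but a minimal sketch of how browser-side skeleton-overlay feedback on a webcam feed could work, assuming TensorFlow.js pose detection (MoveNet) and hypothetical `webcam`/`overlay` page elements rather than the authors' actual code, might look like this:

```typescript
// Hypothetical sketch: draw an estimated skeleton over a live webcam feed.
// Assumes the @tensorflow-models/pose-detection package (MoveNet model);
// element ids and the confidence threshold are illustrative assumptions.
import '@tensorflow/tfjs-backend-webgl';
import * as poseDetection from '@tensorflow-models/pose-detection';

async function runOverlay(): Promise<void> {
  // Assumed page elements: a <video> showing the webcam and a <canvas>
  // positioned directly on top of it.
  const video = document.getElementById('webcam') as HTMLVideoElement;
  const canvas = document.getElementById('overlay') as HTMLCanvasElement;
  const ctx = canvas.getContext('2d')!;

  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  const model = poseDetection.SupportedModels.MoveNet;
  const detector = await poseDetection.createDetector(model);
  // Pairs of keypoint indices that form the limbs of the skeleton.
  const edges = poseDetection.util.getAdjacentPairs(model);

  async function frame(): Promise<void> {
    const poses = await detector.estimatePoses(video);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    if (poses.length > 0) {
      const kp = poses[0].keypoints;
      ctx.strokeStyle = 'lime';
      ctx.lineWidth = 3;
      // Draw each limb as a line between two sufficiently confident keypoints.
      for (const [i, j] of edges) {
        if ((kp[i].score ?? 0) > 0.3 && (kp[j].score ?? 0) > 0.3) {
          ctx.beginPath();
          ctx.moveTo(kp[i].x, kp[i].y);
          ctx.lineTo(kp[j].x, kp[j].y);
          ctx.stroke();
        }
      }
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

runOverlay();
```

The same estimation step could be applied to the tutorial video element to obtain the trainer's skeleton for the other feedback conditions described above.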

Recommended citation: Tharatipyakul, Atima, Kenny TW Choo, and Simon T. Perrault. "Pose estimation for facilitating movement learning from online videos." Proceedings of the International Conference on Advanced Visual Interfaces. 2020.