Utilization of Mediapipe Posture Recognition for the Usage in Estimating ASD Children Engagement Interacting with QTrobot

Authors

  • M. F. El-Muhammady Department of Mechatronics Engineering, International Islamic University Malaysia, 53100 Kuala Lumpur, Malaysia
  • A. S. Ghazali Department of Mechatronics Engineering, International Islamic University Malaysia, 53100 Kuala Lumpur, Malaysia
  • M. K. Anwar Department of Mechatronics Engineering, International Islamic University Malaysia, 53100 Kuala Lumpur, Malaysia
  • H. M. Yusof Department of Mechatronics Engineering, International Islamic University Malaysia, 53100 Kuala Lumpur, Malaysia
  • S. N. Sidek Department of Mechatronics Engineering, International Islamic University Malaysia, 53100 Kuala Lumpur, Malaysia

DOI:

https://doi.org/10.15282/mekatronika.v6i2.10754

Keywords:

HRI, HCI, ASD, QTrobot, MediaPipe

Abstract

Imitation is one of the most important learning skills and develops naturally in typically developing (TD) children at a young age. Unfortunately, this skill is often lacking in children diagnosed with Autism Spectrum Disorder (ASD). To enhance the imitation skills of children with ASD for a better social life, this paper proposes to develop a robust gesture recognition system and embed it in a therapy robot called QTrobot. The paper discusses the use of MediaPipe posture recognition as part of estimating the engagement of children with ASD. MediaPipe posture recognition achieved average accuracies of 96% and 60% when the subject faced the camera directly and at 60 degrees away from the camera, respectively. Further enhancements were made to embed the selected gesture recognition algorithm into QTrobot to develop efficient human-robot interaction (HRI). With twenty healthy adult participants, the enhanced algorithm achieved an average accuracy of 94.33% at an average frame rate of 10.5 frames per second in recognizing the five selected gestures imitated by the participants: T pose, Strong pose, Super pose, Victory pose, and V pose. In addition, the participants reported a useful and enjoyable interaction with the robot based on a 5-point Likert scale Technology Acceptance Model (TAM) questionnaire.
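To make the recognition pipeline concrete, the sketch below shows how MediaPipe's Python Pose solution can drive rule-based gesture classification of the kind described above. It is a minimal illustration under stated assumptions: the T-pose rule, the 160-degree elbow-angle threshold, the 0.08 wrist-height tolerance, and the webcam capture (standing in for QTrobot's camera) are illustrative choices, not the authors' exact implementation.

import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
L = mp_pose.PoseLandmark

def joint_angle(a, b, c):
    # Angle at landmark b (degrees) between segments b->a and b->c.
    ang = math.degrees(math.atan2(c.y - b.y, c.x - b.x)
                       - math.atan2(a.y - b.y, a.x - b.x))
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

def is_t_pose(lm):
    # Illustrative rule: both elbows nearly straight and both wrists at
    # roughly shoulder height (thresholds are assumptions, not the paper's).
    left_straight = joint_angle(lm[L.LEFT_SHOULDER], lm[L.LEFT_ELBOW], lm[L.LEFT_WRIST]) > 160
    right_straight = joint_angle(lm[L.RIGHT_SHOULDER], lm[L.RIGHT_ELBOW], lm[L.RIGHT_WRIST]) > 160
    left_level = abs(lm[L.LEFT_WRIST].y - lm[L.LEFT_SHOULDER].y) < 0.08
    right_level = abs(lm[L.RIGHT_WRIST].y - lm[L.RIGHT_SHOULDER].y) < 0.08
    return left_straight and right_straight and left_level and right_level

cap = cv2.VideoCapture(0)  # webcam stands in for QTrobot's camera here
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks and is_t_pose(results.pose_landmarks.landmark):
            print("T pose detected")
cap.release()

The remaining gestures (Strong, Super, Victory, and V pose) could be classified with analogous joint-angle rules over the same 33 pose landmarks that MediaPipe Pose returns per frame.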

References

Alabdulkareem, A., Alhakbani, N., & Al-Nafjan, A. (2022). A systematic review of research on robot-assisted therapy for children with autism. Sensors, 22(3), 944. https://doi.org/10.3390/s22030944

Chiurco, A., Frangella, J., Longo, F., Nicoletti, L., Padovano, A., Solina, V., Mirabelli, G., & Citraro, C. (2022). Real-time detection of worker’s emotions for advanced human-robot interaction during collaborative tasks in smart factories. Procedia Computer Science, 200, 1875–1884. https://doi.org/10.1016/j.procs.2022.01.388

Datta, A. K., Datta, M., & Banerjee, P. K. (2015). Face detection and recognition techniques. In Face detection and recognition (pp. 45–66). CRC Press. https://doi.org/10.1201/b19349-8

Kopp, T., Baumgartner, M., & Kinkel, S. (2020). Success factors for introducing industrial human-robot interaction in practice: an empirically driven framework. International Journal of Advanced Manufacturing Technology, 685–704. https://doi.org/10.1007/s00170-020-06398-0

Kukil. (2022, November 18). Building a Poor Body Posture Detection and Alert System using MediaPipe. LearnOpenCV. Retrieved October 11, 2023, from https://learnopencv.com/building-a-body-posture-analysis-system-using-mediapipe/

Lugaresi, C., Tang, J., Nash, H., McClanahan, C., Uboweja, E., Hays, M., Zhang, F., Chang, C.-L., Yong, M. G., Lee, J., Chang, W.-T., Hua, W., Georg, M., & Grundmann, M. (2019). MediaPipe: A Framework for Building Perception Pipelines. http://arxiv.org/abs/1906.08172

Ma, J., Ma, L., Ruan, W., Chen, H., & Feng, J. (2022). A Wushu Posture Recognition System Based on MediaPipe. 10–13. https://doi.org/10.1109/tcs56119.2022.9918744

Olaronke, I., Oluwaseun, O., & Rhoda, I. (2017). State Of The Art: A Study of Human-Robot Interaction in Healthcare. International Journal of Information Engineering and Electronic Business, 9(3), 43–55. https://doi.org/10.5815/ijieeb.2017.03.06

Savin, A. V., Sablina, V. A., & Nikiforov, M. B. (2021). Comparison of Facial Landmark Detection Methods for Micro-Expressions Analysis. 2021 10th Mediterranean Conference on Embedded Computing, MECO 2021, 7–10. https://doi.org/10.1109/MECO52532.2021.9460191

Sheridan, T. B. (2016). Human-Robot Interaction. Human Factors, 58(4), 525–532. https://doi.org/10.1177/0018720816644364

Yanco, H. A., & Drury, J. (2004). Classifying human-robot interaction: An updated taxonomy. Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, 3, 2841–2846. https://doi.org/10.1109/ICSMC.2004.1400763

Zhang. (2022). Application of Google MediaPipe Pose Estimation Using A Single Camera. California State Polytechnic University, Pomona. https://scholarworks.calstate.edu/downloads/n009w777f

Published

2024-09-06

How to Cite

[1]
M. F. El-Muhammady, A. S. Ghazali, M. K. Anwar, H. M. Yusof, and S. N. Sidek, “Utilization of Mediapipe Posture Recognition for the Usage in Estimating ASD Children Engagement Interacting with QTrobot”, Mekatronika: J. Intell. Manuf. Mechatron., vol. 6, no. 2, pp. 1–11, Sep. 2024.

Section

Original Article