Deep Learning Based Human Presence Detection
DOI: https://doi.org/10.15282/mekatronika.v2i2.6768

Keywords: Deep Learning, Computer Vision, YOLOv4, CrowdHuman, FPS

Abstract
Human detection and tracking are increasingly demanded across various industries. Concern over human safety has inhibited the deployment of advanced and collaborative robotics, mainly owing to the dimensionality limitations of present safety sensing. This study entails the development of a deep learning-based human presence detector for deployment in smart factory environments to overcome these limitations. The objective is to develop a suitable human presence detector based on a state-of-the-art YOLO variant that achieves real-time detection with high inference accuracy for feasible deployment at TT Vision Holdings Berhad. The study covers the fundamentals of modern deep learning-based object detectors and the methods used to accomplish the human presence detection task. The YOLO family of object detectors has revolutionized computer vision and object detection and has evolved continuously since its inception; its most recent variants at present are YOLOv4 and YOLOv4-Tiny. These models were acquired, pre-trained, and benchmarked on the public CrowdHuman dataset at the preliminary stage. Trained on CrowdHuman for 4000 iterations, YOLOv4 and YOLOv4-Tiny achieved a mean Average Precision of 78.21% at 25 FPS and 55.59% at 80 FPS, respectively. The models were then fine-tuned on a custom CCTV dataset, which improved precision to 88.08% at 25 FPS and 77.70% at 80 FPS, respectively. The final evaluation justified YOLOv4 as the most feasible model for deployment.
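The mean Average Precision figures quoted above are the standard metric for scoring detectors such as YOLOv4. As a minimal sketch of how a single class's Average Precision is computed (assuming the common Pascal VOC all-point interpolation; the function name and toy data below are illustrative, not from the paper):

```python
import numpy as np

def average_precision(tp, fp, n_gt):
    """All-point interpolated AP from per-detection true/false-positive
    flags (sorted by descending confidence) and the ground-truth count."""
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(fp)
    recall = tp_cum / n_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # Pad the curve, then make precision monotonically non-increasing
    # by sweeping a running maximum from right to left.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]
    # Sum the area under the stepwise precision-recall curve.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Toy example: 4 detections (3 correct, 1 spurious) against 3 ground-truth
# boxes; a real evaluation derives tp/fp from IoU matching per image.
ap = average_precision(tp=[1, 1, 0, 1], fp=[0, 0, 1, 0], n_gt=3)
```

The mAP reported for the CrowdHuman and CCTV benchmarks would then be the mean of this quantity over all evaluated classes.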
License
Copyright (c) 2020 Venketaramana Balachandran, Muhammad Nur Aiman Shapiee, Ahmad Fakhri Ab. Nasir, Mohd Azraai Mohd Razman, Anwar P.P. Abdul Majeed
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.