PHASEY: A Contrastive Learning Approach for Enhanced Human Gait Phases Recognition
- 1 Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- 2 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- 3 Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
Abstract
Human gait has gained much attention in behavioral biometrics as it possesses unique and distinctive characteristics. Gait phases, which describe the different patterns of human walking, are significant for analyzing and understanding an individual's movement. Hence, the identification of gait phases is important for the accurate determination and interpretation of walking patterns in applications ranging from healthcare and security to rehabilitation. This study proposes an efficient model, Precision Human Gait Activity Segmentation for Gait Phases Recognition using YOLOv9 (PHASEY), combined with a contrastive learning method that localizes and recognizes the stance and swing gait phases more efficiently and accurately. The proposed PHASEY model localizes walking gait-phase patterns and distinguishes the movement patterns within each phase. It uses CSPDarknet53 as its backbone, which is further trained to identify the swing and stance gait phases from silhouette images. The PHASEY model has three prime components: backbone, neck, and head. The backbone performs feature extraction, the neck provides visualization of those features through Grad-CAM, and the head is responsible for gait phase classification. By training CSPDarknet53 within the PHASEY model, the accuracy, Intersection over Union (IoU), and inference time were measured across different epochs. The experimental results show that the model attained its highest accuracy of 0.9907 at epoch 50. A comparison of YOLO models showed that YOLOv9 achieved the highest accuracy of 94.8%, with a precision of 93.1%, recall of 91.9%, and IoU of 87.8%. By applying this real-time object detection model to determine the phases of the gait cycle, the approach demonstrated exceptional performance in both localization and classification across different subjects.
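The IoU metric reported in the abstract measures how well a predicted gait-phase bounding box overlaps the ground-truth box. A minimal sketch of the standard computation is shown below; the function name and example coordinates are illustrative, not taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Example: two 10x10 boxes overlapping in a 5x5 region
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a fixed threshold (commonly 0.5), which is how detector-level precision and recall figures such as those above are derived.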
DOI: https://doi.org/10.3844/jcssp.2025.1795.1810
Copyright: © 2025 Urvashi, Deepak Kumar, Vinay Kukreja and Ayush Dogra. This is an open access article distributed under the terms of the
Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- Swing Gait Phase
- Stance Gait Phase
- Pretrained Network
- You Only Look Once (YOLO)