Title: UTAL-GNN: Unsupervised temporal action localization using graph neural networks
Authors: Badatya, Bikash Kumar; Baghel, Vipul; Hegde, Ravi S.
Type: Conference Paper
Date issued: 2025-08-14
Date available: 2026-02-25
DOI: 10.1109/ICIPW68931.2025.11386231
Scopus ID: 2-s2.0-105035615977
URI: https://repository.iitgn.ac.in/handle/IITG2025/34680
Language: en-US
Keywords: Sports Analytics; Skeleton-based Action Localization; Graph Convolution; Representation Learning; Interpretability

Abstract:
Fine-grained action localization in untrimmed sports videos presents a significant challenge due to rapid and subtle motion transitions over short durations. Existing supervised and weakly supervised solutions often rely on extensive annotated datasets and high-capacity models, making them computationally intensive and less adaptable to real-world scenarios. In this work, we introduce a lightweight, unsupervised, skeleton-based action localization pipeline that leverages spatio-temporal graph neural representations. Our approach pre-trains an Attention-based Spatio-Temporal Graph Convolutional Network (ASTGCN) on a pose-sequence denoising task with blockwise partitions, enabling it to learn intrinsic motion dynamics without any manual labeling. At inference, we define a novel Action Dynamics Metric (ADM), computed directly from low-dimensional ASTGCN embeddings, which detects motion boundaries by identifying inflection points in its curvature profile. Our method achieves a mean Average Precision (mAP) of 82.66% and an average localization latency of 29.09 ms on the DSV Diving dataset, matching state-of-the-art supervised performance while maintaining computational efficiency. Furthermore, it generalizes robustly to unseen, in-the-wild diving footage without retraining, demonstrating its practical applicability for lightweight, real-time action analysis systems in embedded or dynamic environments.
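
The abstract's boundary-detection idea — finding inflection points in the curvature profile of a dynamics signal derived from per-frame embeddings — can be illustrated with a minimal sketch. The exact ADM formula is not given in this record, so the function name, the embedding-distance proxy for the ADM, and all parameters below are assumptions for illustration only, not the authors' implementation:

```python
import numpy as np

def locate_action_boundaries(embeddings, smooth=5, rel_thresh=0.5):
    """Hypothetical sketch of curvature-based boundary detection.

    `embeddings` is a (T, D) array of per-frame pose embeddings
    (e.g. from a pre-trained ASTGCN). The paper's ADM definition is
    not reproduced here; the distance between consecutive embeddings
    stands in as a simple per-frame dynamics signal.
    """
    # 1. Per-frame dynamics proxy: how far the embedding moves per frame.
    adm = np.linalg.norm(np.diff(embeddings, axis=0), axis=1)
    # 2. Box-filter smoothing to suppress pose-estimation jitter.
    adm = np.convolve(adm, np.ones(smooth) / smooth, mode="same")
    # 3. Second difference approximates the curvature of the 1-D profile.
    curvature = np.abs(np.diff(adm, n=2))
    # Zero out edge samples distorted by the zero-padded convolution.
    curvature[:smooth] = 0.0
    curvature[-smooth:] = 0.0
    # 4. Inflection points appear as local peaks of curvature magnitude.
    mid = curvature[1:-1]
    peaks = (mid > curvature[:-2]) & (mid >= curvature[2:]) \
        & (mid > rel_thresh * curvature.max())
    # +3 shifts curvature indices back to approximate frame indices.
    return np.where(peaks)[0] + 3
```

On a synthetic sequence whose embedding drifts slowly and then abruptly speeds up, the detected boundaries cluster around the speed transition; the thresholds and smoothing width here are arbitrary placeholders, not values from the paper.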