Abstract: In the field of computer vision-driven ice hockey analytics, one of the most challenging and least studied tasks is goalie pose estimation. Unlike general human pose estimation, goalie pose estimation is considerably more complex: it requires detecting not only the keypoints of the goalie's joints, which are concealed under thick padding and a mask, but also a large number of non-human keypoints corresponding to the oversized leg pads and gloves, the stick, and the hockey net. To tackle this challenge, we introduce GoalieNet, a multi-stage deep neural network for jointly estimating the pose of the goalie, their equipment, and the net. Experimental results using NHL benchmark data demonstrate that the proposed GoalieNet achieves an average accuracy of 84% across all keypoints, with 22 of the 29 keypoints detected at more than 80% accuracy. This indicates that such a joint pose estimation approach is a promising research direction.
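The abstract describes jointly estimating 29 keypoints spanning the goalie, equipment, and net. The following is a minimal, hedged sketch of how such a joint keypoint head could be expressed as per-keypoint heatmap prediction followed by argmax decoding; the layer sizes, class `KeypointHead`, and the decoding scheme are illustrative assumptions, not GoalieNet's actual architecture.

```python
# Hedged sketch: heatmap-based prediction of 29 keypoints (goalie joints,
# leg pads, gloves, stick, and net), assuming a shared backbone feature map.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 29  # total keypoints reported in the abstract

class KeypointHead(nn.Module):
    """Predicts one heatmap per keypoint from a shared feature map (assumed design)."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, NUM_KEYPOINTS, kernel_size=1),  # one channel per keypoint
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.conv(features)  # (B, 29, H, W) heatmaps

def decode_keypoints(heatmaps: torch.Tensor) -> torch.Tensor:
    """Take the argmax of each heatmap as the (x, y) keypoint location."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1)
    idx = flat.argmax(dim=-1)                          # (B, 29) flattened indices
    ys = torch.div(idx, w, rounding_mode="floor")
    xs = idx % w
    return torch.stack([xs, ys], dim=-1)               # (B, 29, 2) pixel coordinates

# Usage with random features standing in for a backbone's output:
feats = torch.randn(1, 256, 64, 64)
coords = decode_keypoints(KeypointHead()(feats))
print(coords.shape)  # torch.Size([1, 29, 2])
```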
Abstract: Recognizing actions in ice hockey using computer vision is challenging due to bulky equipment and inadequate image quality. A novel two-stream framework is designed to improve hockey action recognition accuracy using three main components. First, pose is estimated via the Part Affinity Fields model to extract meaningful cues from the player. Second, optical flow (using LiteFlowNet) is used to extract temporal features. Third, the pose and optical flow streams are fused and passed to fully-connected layers to estimate the hockey player's action. A novel publicly available dataset named HARPET (Hockey Action Recognition Pose Estimation, Temporal) was created, composed of image sequences annotated with hockey players' actions and poses, including their hockey sticks as an extension of human body pose. The work makes three main contributions. (1) The novel two-stream architecture achieves 85% action recognition accuracy, with the inclusion of optical flow increasing accuracy by about 10%. (2) The unique localization of hand-held objects (e.g., hockey sticks) as part of pose increases accuracy by about 13%. (3) For pose estimation, a bigger and more general dataset, MSCOCO, is successfully used for transfer learning to a smaller and more specific dataset, HARPET, achieving a PCKh of 87%.
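The abstract states that the pose and optical-flow streams are fused and passed to fully-connected layers to predict the action. The snippet below is a minimal sketch of such a fusion head, assuming concatenation-based fusion; the feature dimensions, the number of action classes, and the class name `TwoStreamActionHead` are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: concatenation-based fusion of pose and optical-flow features
# followed by fully-connected layers that output action logits.
import torch
import torch.nn as nn

class TwoStreamActionHead(nn.Module):
    def __init__(self, pose_dim: int = 256, flow_dim: int = 256, num_actions: int = 3):
        super().__init__()
        # Fully-connected layers applied to the fused (concatenated) streams.
        self.classifier = nn.Sequential(
            nn.Linear(pose_dim + flow_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_actions),
        )

    def forward(self, pose_feat: torch.Tensor, flow_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([pose_feat, flow_feat], dim=-1)  # fuse the two streams
        return self.classifier(fused)                      # action logits

# Usage with random features standing in for the pose and optical-flow streams:
pose_feat = torch.randn(4, 256)   # e.g., encoded keypoints incl. the stick
flow_feat = torch.randn(4, 256)   # e.g., pooled LiteFlowNet features
logits = TwoStreamActionHead()(pose_feat, flow_feat)
print(logits.shape)  # torch.Size([4, 3])
```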