Abstract: This research represents a pioneering application of automated pose estimation from drone data to study elephant behavior in the wild, using video footage captured in Samburu National Reserve, Kenya. The study evaluates two pose estimation workflows: DeepLabCut, known for its application in laboratory settings and emerging wildlife fieldwork, and YOLO-NAS-Pose, a newly released pose estimation model not previously applied to wildlife behavioral studies. Both models were trained to analyze elephant herd behavior, focusing on low-resolution subjects ($\sim$50 pixels) and detecting key points such as the head, spine, and ears of multiple elephants within a frame. Both workflows demonstrated acceptable pose-estimation quality on the test set, enabling automated detection of basic behaviors crucial for studying elephant herd dynamics. On the metrics selected for pose estimation evaluation on the test set -- root mean square error (RMSE), percentage of correct keypoints (PCK), and object keypoint similarity (OKS) -- the YOLO-NAS-Pose workflow outperformed DeepLabCut. YOLO-NAS-Pose also exceeded DeepLabCut in object detection evaluation. This approach introduces a novel method for wildlife behavioral research, including the burgeoning field of wildlife drone monitoring, with significant implications for wildlife conservation.
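The three evaluation metrics named in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function names, the choice of a fixed pixel threshold for PCK, and the COCO-style per-keypoint constants for OKS are assumptions, and all keypoints are assumed visible.

```python
import numpy as np

def rmse(pred, gt):
    # Root mean square error of per-keypoint Euclidean distances (pixels).
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=-1))))

def pck(pred, gt, threshold):
    # Percentage of correct keypoints: fraction of predictions whose
    # distance to ground truth is below `threshold` pixels (assumed fixed
    # threshold; papers often normalize by a body-length measure instead).
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float(np.mean(dists < threshold))

def oks(pred, gt, area, k):
    # COCO-style object keypoint similarity, assuming every keypoint is
    # labeled and visible. `area` is the object's segment area (pixels^2)
    # and `k` is the per-keypoint falloff constant.
    d2 = np.sum((pred - gt) ** 2, axis=-1)
    return float(np.mean(np.exp(-d2 / (2.0 * area * k ** 2))))

# Hypothetical usage: two keypoints, perfect prediction.
gt = np.array([[0.0, 0.0], [10.0, 10.0]])
pred = gt.copy()
print(rmse(pred, gt))                                  # 0.0
print(pck(pred, gt, threshold=5.0))                    # 1.0
print(oks(pred, gt, area=2500.0, k=np.array([0.05, 0.05])))  # 1.0
```

For perfect predictions all three metrics reach their best values (RMSE 0, PCK 1, OKS 1); lower RMSE and higher PCK/OKS indicate better pose estimation, which is the sense in which YOLO-NAS-Pose outperformed DeepLabCut.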