In this study, we present a pragmatic, lightweight pose estimation model that achieves real-time inference on low-power embedded devices. On the COCO test set, the model reaches 94.5% of the accuracy of the state-of-the-art HRNet (256x192 input) at only 3.8% of its computational cost. The model adopts an encoder-decoder architecture that is carefully downsized for efficiency. In particular, we focused on optimizing the deconvolution layers and observed that reducing their channel counts significantly lowers computational resource consumption without degrading accuracy. We also incorporated recent model-agnostic techniques, such as DarkPose and distillation training, to maximize the efficiency of our model, and applied model quantization to exploit multi/mixed-precision hardware features. Our FP16 model (COCO AP 70.0) runs at ~60 fps on an NVIDIA Jetson AGX Xavier and ~200 fps on an NVIDIA Quadro RTX 6000.
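To make the channel-reduction idea concrete, the sketch below builds a small deconvolution decoder head in PyTorch with narrow deconvolution layers followed by a 1x1 convolution that produces per-keypoint heatmaps, plus an optional FP16 conversion. This is a minimal illustration under assumed settings: the class name DeconvHead, the channel widths (128, 64, 32), and the encoder feature size are hypothetical, not the paper's exact configuration.

```python
# Minimal sketch (PyTorch assumed) of a channel-reduced deconvolution decoder head.
# DeconvHead, the channel widths (128, 64, 32), and the input feature shape are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class DeconvHead(nn.Module):
    def __init__(self, in_channels=1280, deconv_channels=(128, 64, 32), num_joints=17):
        super().__init__()
        layers = []
        prev = in_channels
        for ch in deconv_channels:  # far fewer channels than the common 256-channel deconv layers
            layers += [
                nn.ConvTranspose2d(prev, ch, kernel_size=4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
            ]
            prev = ch
        self.deconv = nn.Sequential(*layers)
        self.final = nn.Conv2d(prev, num_joints, kernel_size=1)  # one heatmap per keypoint

    def forward(self, x):
        return self.final(self.deconv(x))

if __name__ == "__main__":
    head = DeconvHead().eval()
    feat = torch.randn(1, 1280, 8, 6)   # e.g. encoder features from a 256x192 input
    with torch.no_grad():
        heatmaps = head(feat)           # -> (1, 17, 64, 48) keypoint heatmaps
    # FP16 inference (as used for the reported throughput numbers) requires a GPU:
    if torch.cuda.is_available():
        head_fp16 = head.cuda().half()
        with torch.no_grad():
            heatmaps_fp16 = head_fp16(feat.cuda().half())
    print(heatmaps.shape)
```

Each deconvolution doubles the spatial resolution, so three layers upsample the 8x6 encoder feature map to the 64x48 heatmap resolution typical for 256x192 inputs; shrinking the channel widths shrinks both the deconvolution FLOPs and the parameter count roughly quadratically in the channel ratio.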