Egocentric action recognition is essential for healthcare and assistive technologies that rely on egocentric cameras, because it enables automatic and continuous monitoring of activities of daily living (ADLs) without requiring any conscious effort from the user. This study explores the feasibility of using 2D hand and object pose information for egocentric action recognition. While the current literature focuses on 3D hand pose information, our work shows that 2D skeleton data is a promising approach for hand-based action classification: it may offer privacy benefits and may be less computationally demanding. Using a state-of-the-art transformer-based method to classify pose sequences, the study achieves 94% validation accuracy, outperforming existing solutions. Accuracy on the test subset drops to 76%, indicating that generalization still needs improvement. This research highlights the potential of 2D hand and object pose information for action recognition and offers a promising alternative to 3D-based methods.
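To make the described pipeline concrete, the sketch below shows one plausible way a transformer encoder could classify sequences of 2D hand/object keypoints. This is a minimal illustration, not the study's actual implementation: the class name, keypoint count, number of classes, and all hyperparameters are assumptions chosen for readability.

```python
# Minimal sketch (assumed architecture, not the authors' code): a transformer
# encoder that classifies clips represented as sequences of 2D keypoints.
import torch
import torch.nn as nn

class PoseTransformerClassifier(nn.Module):
    def __init__(self, num_keypoints=21, num_classes=37,
                 d_model=128, nhead=4, num_layers=4, max_len=128):
        super().__init__()
        # Each frame is a flattened vector of (x, y) keypoint coordinates.
        self.embed = nn.Linear(num_keypoints * 2, d_model)
        # Learned positional embedding so the encoder is order-aware in time.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):
        # x: (batch, frames, num_keypoints * 2) of 2D coordinates
        h = self.encoder(self.embed(x) + self.pos[:, :x.size(1)])
        # Mean-pool over time, then project to action-class logits.
        return self.head(h.mean(dim=1))

# Example: a batch of 8 clips, 64 frames each, 21 keypoints per frame.
model = PoseTransformerClassifier()
logits = model(torch.randn(8, 64, 21 * 2))
print(logits.shape)  # torch.Size([8, 37])
```

Because the input is only keypoint coordinates rather than video frames, a model of this shape processes far fewer values per clip than a pixel-based network, which is the intuition behind the privacy and compute claims above.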