Human emotion recognition plays a crucial role in facilitating seamless interaction between humans and computers. In this paper, we present our methodology for the Valence-Arousal (VA) Estimation Challenge, the Expression Recognition Challenge, and the Action Unit (AU) Detection Challenge of the 8th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW). Our approach introduces a novel framework for enhancing continuous emotion recognition: we fine-tune the CLIP model on the Aff-Wild2 dataset, which provides annotated expression labels, yielding a robust and efficient visual feature extractor. To further boost continuous emotion recognition performance, we incorporate Temporal Convolutional Network (TCN) modules alongside Transformer Encoder modules in our architecture. With these components, our model outperforms the baseline, recognizing human emotions with greater accuracy and efficiency.
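As a rough illustration of the architecture described above, the sketch below shows how per-frame CLIP features could be passed through a TCN and a Transformer encoder before task-specific heads for the VA, expression, and AU tracks. This is a minimal sketch under assumed settings, not the authors' implementation: the feature dimension, number of layers, kernel sizes, and output sizes (e.g., 12 AUs, 8 expression classes) are illustrative assumptions.

```python
# Minimal sketch of a CLIP-feature -> TCN -> Transformer pipeline.
# All layer sizes and head dimensions are illustrative assumptions,
# not the paper's exact configuration.
import torch
import torch.nn as nn


class TemporalBlock(nn.Module):
    """One dilated 1D-conv block, the building unit of a TCN."""

    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        padding = (kernel_size - 1) * dilation // 2  # preserve sequence length
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=padding, dilation=dilation)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.conv(x))) + x  # residual connection


class EmotionModel(nn.Module):
    """TCN + Transformer encoder over a sequence of CLIP visual features."""

    def __init__(self, feat_dim: int = 512, num_aus: int = 12,
                 num_expressions: int = 8):
        super().__init__()
        # TCN: stacked dilated convolutions capture local temporal context.
        self.tcn = nn.Sequential(*[TemporalBlock(feat_dim, dilation=2 ** i)
                                   for i in range(3)])
        # Transformer encoder models longer-range temporal dependencies.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One head per challenge track (hypothetical output sizes).
        self.va_head = nn.Linear(feat_dim, 2)          # valence, arousal in [-1, 1]
        self.expr_head = nn.Linear(feat_dim, num_expressions)
        self.au_head = nn.Linear(feat_dim, num_aus)

    def forward(self, clip_feats: torch.Tensor) -> dict:
        # clip_feats: (batch, seq_len, feat_dim) frame-level CLIP embeddings,
        # assumed to come from a fine-tuned CLIP visual encoder.
        x = self.tcn(clip_feats.transpose(1, 2)).transpose(1, 2)
        x = self.encoder(x)
        return {
            "va": torch.tanh(self.va_head(x)),  # per-frame VA estimates
            "expression": self.expr_head(x),    # per-frame class logits
            "au": self.au_head(x),              # per-frame AU logits
        }


# Usage: a batch of 4 clips, 64 frames each, 512-dim CLIP features per frame.
model = EmotionModel()
out = model(torch.randn(4, 64, 512))
print({k: v.shape for k, v in out.items()})
```

In this sketch, the TCN supplies local temporal smoothing via dilated convolutions while the Transformer encoder captures longer-range dependencies across the clip; the three heads reflect the three ABAW tracks the paper addresses.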