Prompt learning is a promising method for adapting pre-trained vision-language models (VLMs) to various downstream tasks by learning a set of text embeddings. A challenge inherent to these methods is poor generalization, because the learned text embeddings become invalid for unseen tasks. A straightforward way to bridge this gap is to freeze the text embeddings in prompts, but this leaves the model with too little capacity to adapt VLMs to downstream tasks. To address this dilemma, we propose to introduce an External Layer (EnLa) on the text branch and learnable visual embeddings on the visual branch for adapting VLMs to downstream tasks. The learnable external layer is built upon the valid embeddings of pre-trained CLIP. This design balances the learning capabilities of the two branches. To align the textual and visual features, we propose a novel two-pronged approach: i) we introduce optimal transport as the discrepancy metric to align the vision and text modalities, and ii) we introduce a novel strengthening feature to enhance the interaction between these two modalities. Extensive experiments show that our method performs favorably on 4 types of representative tasks across 11 datasets compared with existing prompt learning methods.
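To make the cross-modal alignment concrete, the sketch below illustrates how entropic optimal transport (Sinkhorn iterations) can serve as a discrepancy between sets of text-branch and visual-branch features. It is a minimal illustration under assumptions of our own (cosine cost, uniform marginals, the names `sinkhorn` and `ot_alignment_loss`, and the chosen shapes and hyperparameters), not the exact implementation described in the paper.

```python
import torch
import torch.nn.functional as F


def sinkhorn(cost, eps=0.1, n_iters=50):
    """Entropic-regularized OT plan via Sinkhorn iterations.

    cost: (B, M, N) pairwise cost between M text features and N visual features.
    Returns a transport plan of the same shape with uniform marginals.
    """
    B, M, N = cost.shape
    mu = torch.full((B, M), 1.0 / M, device=cost.device)   # uniform text marginal
    nu = torch.full((B, N), 1.0 / N, device=cost.device)   # uniform visual marginal
    K = torch.exp(-cost / eps)                              # Gibbs kernel
    v = torch.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (torch.bmm(K, v.unsqueeze(-1)).squeeze(-1) + 1e-9)
        v = nu / (torch.bmm(K.transpose(1, 2), u.unsqueeze(-1)).squeeze(-1) + 1e-9)
    return u.unsqueeze(-1) * K * v.unsqueeze(1)             # plan, shape (B, M, N)


def ot_alignment_loss(text_feats, visual_feats, eps=0.1):
    """OT-based discrepancy between text and visual feature sets.

    text_feats:   (B, M, D) text-branch features.
    visual_feats: (B, N, D) visual-branch features.
    """
    text_feats = F.normalize(text_feats, dim=-1)
    visual_feats = F.normalize(visual_feats, dim=-1)
    cost = 1.0 - torch.bmm(text_feats, visual_feats.transpose(1, 2))  # cosine cost
    plan = sinkhorn(cost, eps)
    return (plan * cost).sum(dim=(1, 2)).mean()              # expected transport cost


# Example usage with dummy features (batch of 8, 4 text and 49 visual tokens, dim 512).
loss = ot_alignment_loss(torch.randn(8, 4, 512), torch.randn(8, 49, 512))
```

In this reading, minimizing the expected transport cost pulls the two modalities' feature distributions together while the learnable external layer and visual embeddings provide the adaptation capacity; the specific cost function and marginals used in the paper may differ.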