In cross-domain few-shot learning, the core issue is that a model trained on source tasks from source domains cannot generalize well to target tasks from the target domain, especially when the domain shift is large. Motivated by the observation that the domain shift between training tasks and target tasks is usually reflected in their style variation, we propose Task Augmented Meta-Learning (TAML), which performs style transfer-based task augmentation to improve domain generalization. First, Multi-Task Interpolation (MTI) is introduced to perform feature-level fusion across tasks with different styles, making more diverse styles available. Furthermore, a novel task augmentation strategy called Multi-Task Style Transfer (MTST) is proposed to perform style transfer on existing tasks, encouraging the model to learn discriminative, style-independent features. Finally, a Feature Modulation (FM) module is introduced to inject random styles and further improve generalization. The proposed TAML increases the style diversity of training tasks and thus contributes to training a model with stronger domain generalization ability. Its effectiveness is demonstrated through theoretical analysis and thorough experiments on two popular cross-domain few-shot benchmarks.
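
To make the style-transfer idea concrete, here is a minimal sketch of how style can be transferred between feature maps by swapping channel-wise statistics, in the spirit of adaptive instance normalization. This is an illustrative assumption for exposition, not the paper's exact MTST formulation; the function name and the NumPy setting are hypothetical.

```python
import numpy as np

def adain_style_transfer(content, style, eps=1e-5):
    """Replace the channel-wise statistics of `content` with those of `style`.

    content, style: feature maps of shape (C, H, W).
    Returns a feature map carrying the content's structure in the style's
    statistics -- one simple way to realize style transfer on features.
    """
    # Per-channel mean and std over spatial dimensions
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize content, then re-scale with the style's statistics
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean
```

Applied across episodes, such a transform exposes the learner to the same semantic content under many styles, which is the intuition behind augmenting tasks for domain generalization.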