Federated Learning (FL) facilitates distributed model training while protecting users' privacy. When labels are unavailable for a new user's data, knowledge transfer in FL allows a learned global model to adapt quickly to the new samples. Multi-source domain adaptation in FL aims to improve the model's generality in a target domain by learning domain-invariant features across different clients. In this paper, we propose Federated Knowledge Alignment (FedKA), which aligns features from different clients with those of the target task. We identify two types of negative transfer that arise in multi-source domain adaptation under FL and demonstrate how FedKA alleviates them with a global feature disentangler enhanced by embedding matching. To further facilitate representation learning for the target task, we devise a federated voting mechanism that provides labels for target-domain samples via a consensus of queried local models, and we fine-tune the global model with these labeled samples. Extensive experiments, including an ablation study, on the Digit-Five image classification task and the Amazon Review text sentiment classification task show that FedKA can be combined with existing FL algorithms to improve the generality of the learned model on a new task.
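For concreteness, the federated voting step can be sketched as follows: each local client model predicts a label for an unlabeled target-domain sample, and the majority prediction is kept as a pseudo-label when enough clients agree. This is a minimal illustrative sketch, not the paper's exact implementation; the function name `federated_vote` and the agreement threshold `min_agreement` are assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def federated_vote(local_models, target_batch, min_agreement=0.5):
    """Hypothetical sketch of federated voting: assign pseudo-labels to
    unlabeled target-domain samples by majority vote over the predictions
    of the local client models. Samples whose winning label falls below
    the agreement threshold are discarded."""
    votes = []
    with torch.no_grad():
        for model in local_models:
            model.eval()
            logits = model(target_batch)            # (batch, num_classes)
            votes.append(logits.argmax(dim=1))      # each client's vote
    votes = torch.stack(votes)                      # (num_clients, batch)

    # Per-sample vote counts for each class: (batch, num_classes)
    counts = F.one_hot(votes, num_classes=logits.size(1)).sum(dim=0)
    agreement, pseudo_labels = counts.max(dim=1)

    # Keep only samples on which a sufficient fraction of clients agree
    keep = agreement.float() / len(local_models) >= min_agreement
    return target_batch[keep], pseudo_labels[keep]
```

Filtering out low-agreement samples, as the threshold does here, is one plausible way to limit pseudo-label noise before fine-tuning the global model on the retained samples.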