Federated learning (FL) enables decentralized model training while preserving privacy. Recently, integrating Foundation Models (FMs) into FL has boosted performance but also introduced a novel backdoor attack mechanism: attackers can exploit an FM's generative capabilities to embed backdoors into the synthetic data it produces for model fusion, subsequently infecting all client models through knowledge sharing without ever participating in the long-running FL process. This attack renders existing FL backdoor defenses ineffective, as they primarily detect anomalies among client updates, whereas under this attack client updates appear uniformly poisoned, leaving no outlier to flag. Our work proposes a novel data-free defense strategy that constrains abnormal activations in the hidden feature space during model aggregation on the server. The activation constraints, optimized on synthetic data alongside FL training, mitigate the attack while barely affecting model performance, since the model parameters themselves remain untouched. Extensive experiments demonstrate its effectiveness against both this novel attack and classic backdoor attacks, outperforming existing defenses while maintaining model performance.
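
To make the defense concrete, the sketch below illustrates one plausible realization of server-side activation constraining, assuming a PyTorch model split into a feature extractor and a classifier head. All identifiers here (ActivationClamp, calibrate_bounds, margin) are hypothetical illustrations, not the paper's actual implementation: per-channel activation bounds are estimated on clean synthetic data, then applied to clamp the abnormally large hidden activations that backdoor triggers tend to induce, without modifying any model parameters.

```python
import torch
import torch.nn as nn


class ActivationClamp(nn.Module):
    """Clamp hidden activations to per-channel bounds learned from clean
    synthetic data. Only activations are constrained; model weights are
    left untouched, so clean-input behavior is barely affected."""

    def __init__(self, lower: torch.Tensor, upper: torch.Tensor):
        super().__init__()
        self.register_buffer("lower", lower)
        self.register_buffer("upper", upper)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Suppress out-of-range activations (typical of trigger inputs).
        return torch.clamp(x, min=self.lower, max=self.upper)


@torch.no_grad()
def calibrate_bounds(feature_extractor: nn.Module,
                     synthetic_loader,
                     margin: float = 1.5) -> ActivationClamp:
    """Estimate per-channel activation ranges on synthetic data.
    `margin` (a hypothetical hyperparameter) widens the range so that
    benign inputs are rarely clipped."""
    feature_extractor.eval()
    feats = []
    for x, _ in synthetic_loader:
        feats.append(feature_extractor(x))  # shape: (batch, channels)
    feats = torch.cat(feats, dim=0)
    mean, std = feats.mean(dim=0), feats.std(dim=0)
    return ActivationClamp(mean - margin * std, mean + margin * std)


# Usage sketch: after aggregating client models on the server, insert the
# clamp between the feature extractor and the classifier head.
# clamp = calibrate_bounds(global_model.features, synthetic_loader)
# defended = nn.Sequential(global_model.features, clamp, global_model.head)
```

Because the constraint is a simple, parameter-free layer calibrated on the same synthetic data already used for model fusion, it can be re-estimated each round alongside FL training at negligible cost.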