Multi-Source-Free Unsupervised Domain Adaptation (MSFDA) aims to transfer knowledge from multiple well-labeled source domains to an unlabeled target domain, using source models instead of source data. Existing MSFDA methods are limited by the assumption that each source domain provides only a single model and that all models share a uniform architecture. This paper introduces a new MSFDA setting: Model-Agnostic Multi-Source-Free Unsupervised Domain Adaptation (MMDA), which allows diverse source models with varying architectures and places no restriction on their number. While MMDA holds promising potential, incorporating numerous source models carries a high risk of including undesired models, which highlights the source model selection problem. To address it, we first provide a theoretical analysis of this problem. We reveal two fundamental selection principles, the transferability principle and the diversity principle, and introduce a selection algorithm that integrates them. Then, since measuring transferability is challenging, we propose a novel Source-Free Unsupervised Transferability Estimation (SUTE). This formulation enables the assessment and comparison of transferability across multiple source models with different architectures under domain shift, without requiring access to any target labels or source data. Based on the above, we introduce a new framework to address MMDA. Specifically, we first conduct source model selection based on the proposed selection principles. Subsequently, we design two modules to aggregate knowledge from the included models and recycle useful knowledge from the excluded models. These modules enable us to leverage source knowledge efficiently and effectively, thereby supporting the learning of a discriminative target model via adaptation. We validate the effectiveness of our method through extensive experiments and demonstrate that it achieves state-of-the-art performance.
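For intuition only, the sketch below shows one way the two selection principles could be combined in a greedy selection step. The scoring functions `transferability` and `pairwise_diversity`, the trade-off weight, and the greedy strategy are hypothetical stand-ins for illustration; they are not the paper's SUTE or its selection algorithm.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): greedily pick
# source models by trading off a transferability score against diversity with
# respect to the models already selected.
from typing import Callable, List, Sequence


def select_source_models(
    models: Sequence[object],
    transferability: Callable[[object], float],             # higher = more transferable
    pairwise_diversity: Callable[[object, object], float],  # higher = more diverse
    trade_off: float = 0.5,
    budget: int = 3,
) -> List[int]:
    """Return indices of up to `budget` models balancing the two principles."""
    selected: List[int] = []
    candidates = list(range(len(models)))
    while candidates and len(selected) < budget:
        def gain(i: int) -> float:
            t = transferability(models[i])
            # Diversity relative to the current selection; zero for the first pick.
            d = (
                min(pairwise_diversity(models[i], models[j]) for j in selected)
                if selected
                else 0.0
            )
            return t + trade_off * d

        best = max(candidates, key=gain)
        selected.append(best)
        candidates.remove(best)
    return selected
```

In this toy formulation, the first model is chosen purely by its transferability score, and later picks are rewarded for differing from those already selected; the actual integration of the two principles follows the paper's analysis rather than this heuristic.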