Meta-solver approaches exploit a number of individual solvers to potentially build a better solver. To assess the performance of meta-solvers, one can simply adopt the metrics typically used for individual solvers (e.g., runtime or solution quality) or employ more specific evaluation metrics (e.g., measuring how close the meta-solver gets to its virtual best performance). In this paper, drawing on recently published works, we provide an overview of different performance metrics for evaluating (meta-)solvers, highlighting their strengths and weaknesses.