State-of-the-art recommender systems (RS) mostly rely on complex deep neural network (DNN) architectures, which makes it difficult to provide explanations alongside RS decisions. Prior research has shown that providing explanations with recommended items helps users make informed decisions and improves their trust in an otherwise uninterpretable black-box system. In model-agnostic explainable recommendation, system designers deploy a separate explanation model that takes the decision model's outputs as input and generates explanations to meet the goal of persuasiveness. In this work, we explore the task of ranking textual rationales (supporting evidence) for model-agnostic explainable recommendation. Most existing rationale ranking algorithms use only rationale IDs and interaction matrices to build latent factor representations; the semantic information within the textual rationales is not learned effectively. We argue that this design is suboptimal, as the semantic information within the textual rationales could be used to better profile user preferences and item features. To bridge this gap, we propose Semantic-Enhanced Bayesian Personalized Explanation Ranking (SE-BPER), a model that effectively combines interaction and semantic information. SE-BPER first initializes the latent factor representations with contextualized embeddings generated by a transformer model, then optimizes them with the interaction data. Extensive experiments show that this methodology improves rationale ranking performance while simplifying model training (fewer hyperparameters and faster convergence). We conclude that the optimal way to combine semantic and interaction information remains an open question in the task of rationale ranking.
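To make the core idea concrete, the sketch below shows one plausible reading of the approach: rationale latent factors are initialized from contextualized transformer embeddings and then fine-tuned with a BPR-style pairwise ranking loss over (user, item, rationale) triples. This is a minimal illustration under stated assumptions, not the authors' implementation; the class name `SEBPER`, the scoring function, the choice of sentence-transformers encoder, and the sample data are all hypothetical.

```python
# A minimal sketch of the SE-BPER idea. Assumptions (not confirmed by the
# abstract): a BPR-style pairwise loss over (user, item, rationale) triples,
# and sentence-transformers for the contextualized embeddings.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer


class SEBPER(nn.Module):
    def __init__(self, num_users, num_items, rationale_embs):
        super().__init__()
        dim = rationale_embs.size(1)
        # User and item latent factors are randomly initialized ...
        self.user = nn.Embedding(num_users, dim)
        self.item = nn.Embedding(num_items, dim)
        # ... while rationale factors start from transformer embeddings (the
        # "semantic initialization" the abstract describes) and are then
        # optimized on interaction data rather than kept frozen.
        self.rationale = nn.Embedding.from_pretrained(rationale_embs, freeze=False)

    def score(self, u, i, r):
        # Score rationale r for the (user, item) pair; an inner-product form
        # is assumed here for illustration.
        q = self.user(u) + self.item(i)
        return (q * self.rationale(r)).sum(-1)

    def bpr_loss(self, u, i, r_pos, r_neg):
        # Pairwise objective: an observed rationale should outscore a
        # randomly sampled negative one for the same (user, item) pair.
        diff = self.score(u, i, r_pos) - self.score(u, i, r_neg)
        return -torch.nn.functional.logsigmoid(diff).mean()


# Contextualized embeddings from a pretrained transformer (the specific
# encoder is an assumption; the abstract only says "transformer model").
encoder = SentenceTransformer("all-MiniLM-L6-v2")
rationale_texts = ["Great battery life", "Comfortable fit", "Fast shipping"]
embs = torch.tensor(encoder.encode(rationale_texts))

model = SEBPER(num_users=100, num_items=50, rationale_embs=embs)
loss = model.bpr_loss(torch.tensor([0]), torch.tensor([3]),
                      torch.tensor([0]), torch.tensor([2]))
loss.backward()  # gradients flow into user, item, and rationale factors
```

Under this reading, the semantic initialization gives the optimizer a warm start, which is one way the reported benefits (fewer hyperparameters, faster convergence) could arise relative to training latent factors from scratch.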