Recent advances in artificial intelligence (AI) and robotics have drawn attention to the need for AI systems and robots to be understandable to human users. The explainable AI (XAI) and explainable robots literature aims to enhance human understanding and human-robot team performance by providing users with relevant information about how AI systems and robots behave. In parallel, the human factors literature has long addressed important considerations that contribute to human performance, including human trust in autonomous systems. In this paper, drawing from the human factors literature, we discuss three important trust-related considerations for the design of explainable robot systems: the bases of trust, trust calibration, and trust specificity. We further detail existing and potential metrics for assessing human trust in robotic systems based on the explanations those robots provide.