As social media becomes ubiquitous, the consumption of information from these platforms has increased. A serious problem that has emerged alongside this growth is the propagation of rumors. Rumor identification is therefore a critical task, with significant implications for the economy, democracy, and public health and safety. In this paper, we tackle the problem of automated rumor detection in social media by designing a modular, explainable architecture that uses both latent and handcrafted features and can be expanded to accommodate as many new classes of features as desired. This approach allows the end user not only to determine whether a piece of information on social media is genuine or a rumor, but also to receive an explanation of how the algorithm arrived at its conclusion. Using attention mechanisms, we interpret the relative importance of each feature as well as the relative importance of the feature classes themselves. The advantage of this design is that the architecture can be extended with new handcrafted features as they become available, and it supports systematic testing to determine the relative influence of these features on the final decision. Extensive experiments on popular datasets and benchmarking against eleven contemporary algorithms show that our approach performs significantly better in terms of F-score and accuracy while remaining interpretable.
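To make the two-level attention idea concrete, the sketch below is a minimal illustration, not the paper's actual implementation: it assumes additive attention, a binary rumor/non-rumor head, and hypothetical names such as `FeatureClassAttention` and `ExplainableRumorClassifier`. It shows how attention weights can be read off at both the individual-feature level and the feature-class level for interpretability.

```python
# Minimal sketch of two-level attention over feature classes.
# All module names, dimensions, and the additive scoring form are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class FeatureClassAttention(nn.Module):
    """Scores a set of feature vectors and returns their attention-weighted sum."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, feats: torch.Tensor):
        # feats: (batch, n_features, dim)
        weights = torch.softmax(self.score(feats).squeeze(-1), dim=-1)  # (batch, n_features)
        pooled = (weights.unsqueeze(-1) * feats).sum(dim=1)             # (batch, dim)
        return pooled, weights


class ExplainableRumorClassifier(nn.Module):
    """Attention within each feature class, then attention across classes.

    The returned weights expose which features and which classes
    contributed most to the rumor / non-rumor decision.
    """

    def __init__(self, dim: int, n_feature_classes: int):
        super().__init__()
        self.within = nn.ModuleList(
            FeatureClassAttention(dim) for _ in range(n_feature_classes)
        )
        self.across = FeatureClassAttention(dim)
        self.head = nn.Linear(dim, 2)  # rumor vs. non-rumor

    def forward(self, feature_groups):
        # feature_groups: list of tensors, each (batch, n_features_i, dim)
        pooled, within_w = zip(*(attn(g) for attn, g in zip(self.within, feature_groups)))
        class_stack = torch.stack(pooled, dim=1)     # (batch, n_classes, dim)
        doc_vec, class_w = self.across(class_stack)  # class-level importances
        return self.head(doc_vec), within_w, class_w


# Usage: one latent feature group (e.g., encoder outputs) plus two
# hypothetical handcrafted groups, all projected to a common dimension.
model = ExplainableRumorClassifier(dim=64, n_feature_classes=3)
groups = [torch.randn(8, 20, 64), torch.randn(8, 5, 64), torch.randn(8, 7, 64)]
logits, feature_weights, class_weights = model(groups)
print(class_weights)  # relative importance of each feature class per example
```

Because every pooled representation is a convex combination of its inputs, the softmax weights are directly interpretable as relative importances, and new handcrafted feature classes can be added simply by appending another within-class attention module.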