Recent research in deep learning methodology has produced a variety of complex modelling techniques in computer vision (CV) that match or even surpass human performance. Although these black-box deep learning models achieve astounding results, they lack the interpretability and transparency that are critical for including them in sensitive decision-support systems involving human supervision. Hence, the development of explainable techniques for computer vision (XCV) has recently attracted increasing attention. Within XCV, Class Activation Maps (CAMs) have become widely recognized and used for providing insight into the decision-making process of deep learning models. This work presents a comprehensive overview of the evolution of Class Activation Map methods over time, explores the metrics used for evaluating CAMs, and introduces auxiliary techniques for improving the saliency of these methods. The overview concludes by proposing potential avenues for future research in this evolving field.
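To make the central concept concrete: in the original CAM formulation (Zhou et al., 2016), the saliency map for a class is the classifier-weighted sum of the last convolutional feature maps of a network ending in global average pooling. The minimal PyTorch sketch below illustrates this computation; the choice of torchvision's ResNet-18 and the hook on `layer4` are illustrative assumptions, not a prescription from this survey.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# ResNet-18 ends in global average pooling followed by a linear layer,
# so the original CAM formulation applies directly (assumed model choice).
model = models.resnet18(weights="IMAGENET1K_V1").eval()

features = {}
def hook(module, inp, out):
    features["maps"] = out  # last conv feature maps, shape (1, K, H, W)

model.layer4.register_forward_hook(hook)

image = torch.randn(1, 3, 224, 224)  # placeholder input
with torch.no_grad():
    logits = model(image)
target_class = logits.argmax(dim=1).item()

# CAM: weighted sum of the feature maps, using the fc weights of the target class
fc_weights = model.fc.weight[target_class]                 # shape (K,)
cam = torch.einsum("k,khw->hw", fc_weights, features["maps"][0])
cam = F.relu(cam)                                          # optional: keep positive evidence only
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
cam = F.interpolate(cam[None, None], size=(224, 224),      # upsample to input resolution
                    mode="bilinear", align_corners=False)[0, 0]
```

The resulting `cam` tensor can be overlaid on the input image as a heatmap; later methods surveyed here (e.g., gradient-based variants such as Grad-CAM) replace the fixed classifier weights with other weighting schemes but keep this weighted-sum structure.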