In the past few years, artificial intelligence (AI) techniques have been applied across almost all verticals of human life. However, the results generated by AI models often lack explainability. AI models frequently appear as a black box in which developers are unable to explain or trace back the reasoning behind a specific decision. Explainable AI (XAI) is a rapidly growing field of research that helps to extract information and visualize results with optimal transparency. The present study provides an extensive review of the use of XAI in cybersecurity. Cybersecurity enables the protection of systems, networks, and programs from different types of attacks, and XAI has immense potential for predicting such attacks. The paper first provides a brief overview of cybersecurity and the various forms of attack. It then discusses traditional AI techniques and their associated challenges, which motivate the use of XAI in various applications. XAI implementations from research projects and industry are also presented. Finally, the lessons learnt from these applications are highlighted, serving as a guide for future research.