Abstract: The performance of large language models (LLMs) across a broad range of domains has been impressive, yet LLMs have been critiqued for being unable to reason about their own processes and the conclusions they derive. Such reasoning matters both for explaining the conclusions drawn and for determining a plan or strategy for their approach. This paper explores current research on symbolic reasoning and LLMs, asking whether an LLM can inherently provide some form of reasoning or whether supporting components are necessary, and, where there is evidence of a reasoning capability, whether it is confined to a specific domain or is a general capability. In addition, this paper aims to identify current research gaps and future trends in LLM explainability, presenting a review of the literature, identifying current research on the topic, and suggesting areas for future work.