Abstract: Commonsense reasoning is a basic ingredient of human intelligence, enabling people to deduce conclusions from observations of their surroundings. Large language models (LLMs) are emerging as potent tools that are increasingly capable of performing human-level tasks. The recent release of GPT-4 and its demonstrated success on tasks that are challenging even for humans, such as medical and bar examinations, has increased confidence in LLMs as instruments of intelligence. Although the GPT-4 report presents results on some commonsense reasoning tasks, a comprehensive assessment of GPT-4 on commonsense reasoning, particularly on existing well-established datasets, is still missing. In this study, we evaluate GPT-4's performance on a set of commonsense reasoning questions from the widely used CommonsenseQA dataset, along with tools from cognitive psychology. In doing so, we examine how GPT-4 processes and integrates commonsense knowledge with contextual information, providing insight into the underlying cognitive processes that enable it to generate commonsense responses. We show that GPT-4 exhibits a high level of accuracy in answering commonsense questions, outperforming its predecessors GPT-3 and GPT-3.5. GPT-4 achieves 83% accuracy on CommonsenseQA, whereas the original study reported human accuracy of 89% on the same data. Although GPT-4 falls short of human performance, this is a substantial improvement over the 56.5% achieved by the best language model in the original CommonsenseQA study. Our results strengthen existing assessments of, and confidence in, GPT-4's commonsense reasoning abilities, which have significant potential to revolutionize the field of AI by enabling machines to bridge the gap between human and machine reasoning.
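For concreteness, the following is a minimal sketch of the kind of multiple-choice accuracy evaluation summarized above. It assumes the publicly available Hugging Face `commonsense_qa` dataset, and `query_model` is a hypothetical placeholder standing in for whatever GPT-4 API access is used; this is an illustration of the evaluation setup, not the study's exact code.

```python
# Minimal sketch of a CommonsenseQA accuracy evaluation (illustrative only).
# Assumes the Hugging Face "commonsense_qa" dataset; `query_model` is a
# hypothetical stand-in for an actual GPT-4 API call.
from datasets import load_dataset


def query_model(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to the model, return its reply."""
    raise NotImplementedError("Wire this up to your LLM API of choice.")


def format_question(example) -> str:
    """Render a CommonsenseQA item as a lettered multiple-choice prompt."""
    options = "\n".join(
        f"{label}. {text}"
        for label, text in zip(example["choices"]["label"],
                               example["choices"]["text"])
    )
    return (f"Question: {example['question']}\n{options}\n"
            "Answer with the letter of the best option only.")


def evaluate(split: str = "validation", limit: int | None = None) -> float:
    data = load_dataset("commonsense_qa", split=split)
    if limit is not None:
        data = data.select(range(limit))
    correct = 0
    for example in data:
        reply = query_model(format_question(example)).strip().upper()
        # Take the first A-E letter in the reply as the predicted answer.
        predicted = next((ch for ch in reply if ch in "ABCDE"), "")
        correct += predicted == example["answerKey"]
    return correct / len(data)  # e.g. ~0.83 would match the reported result
```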