Artificial Intelligence (AI) is increasingly used to develop systems that produce intelligent solutions. However, a major concern is whether humans will trust such systems. To establish trust in an AI system, users need to understand the reasoning behind its solutions; the system should therefore be able to explain and justify its output. In this paper, we use argumentation to provide explanations in the domain of AI planning. We present argument schemes for creating arguments that explain a plan and its components, together with a set of critical questions that allow interaction between the arguments and enable the user to obtain further information about the key elements of the plan. Finally, we present some properties of the plan arguments.