Single image de-hazing is a challenging problem that is far from solved. Most current solutions require paired image datasets containing both hazy images and their corresponding haze-free ground-truth images. In reality, however, lighting conditions and other factors can produce a range of haze-free images that could serve as ground truth for a given hazy image, and no single ground-truth image can capture that range. This limits the scalability and practicality of paired image datasets in real-world applications. In this paper, we focus on unpaired single image de-hazing, relying neither on ground-truth images nor on a physical scattering model. We reduce image de-hazing to an image-to-image translation problem and propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN). The generator network of Dehaze-GLCGAN combines an encoder-decoder architecture with residual blocks to better recover the haze-free scene. We also employ a global-local discriminator structure to handle spatially varying haze. Through an ablation study, we demonstrate how different components contribute to the performance of the proposed network. Our extensive experiments on three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM while being trained on less data than competing methods.