Although GAN-based methods have achieved considerable success in recent years, they have been less successful at generating discrete data. The main challenge for these methods is passing the gradient from the discriminator to the generator when the generator's outputs are discrete. Despite several attempts to alleviate this problem, none of the existing GAN-based methods has outperformed a generative RNN trained simply by maximum likelihood, when evaluated with measures that assess both the quality and the diversity of the generated samples. In this paper, we propose a new adversarial framework for generating discrete data that does not require passing the gradient to the generator. In the proposed method, both the generator and the discriminator can be updated straightforwardly. Moreover, we leverage the discreteness of the data to model the data distribution explicitly, which ensures that the generated distribution is properly normalized and, consequently, establishes the convergence properties of the proposed method. Experimental results generally show the superiority of the proposed DGSAN method over other GAN-based approaches for generating discrete sequential data.