Network embedding aims to learn a function that maps nodes into a Euclidean space, supporting multiple downstream analysis tasks on networks. However, noisy information in real-world networks and the overfitting problem both negatively impact the quality of the embedding vectors. To tackle these problems, researchers apply Adversarial Training for Network Embedding (AdvTNE) and achieve state-of-the-art performance. Unlike mainstream methods that introduce perturbations on the network structure or the data features, AdvTNE directly perturbs the model parameters, which offers a new opportunity to understand the mechanism behind it. In this paper, we explain AdvTNE theoretically from an optimization perspective. Considering the power-law property of networks and the optimization objective, we analyze the reason for its excellent results. Based on this analysis, we propose a new activation function to enhance the performance of AdvTNE. We conduct extensive experiments on four real networks to validate the effectiveness of our method on node classification and link prediction. The results demonstrate that our method is superior to state-of-the-art baseline methods.