As concerns over data privacy intensify, unlearning in Graph Neural Networks (GNNs) has emerged as a prominent research frontier. This concept is pivotal in enforcing the right to be forgotten, which entails the selective removal of specific data from trained GNNs upon user request. Our research focuses on edge unlearning, a task of particular relevance to real-world applications. Current state-of-the-art approaches such as GNNDelete can eliminate the influence of specific edges, yet our research reveals a critical limitation in these approaches, which we term over-forgetting: the unlearning process inadvertently removes information well beyond the targeted data, leading to a significant decline in prediction accuracy on the remaining edges. To address this issue, we identify the loss functions of GNNDelete as the primary source of the over-forgetting phenomenon. Furthermore, our analysis suggests that loss functions may not be essential for effective edge unlearning. Building on these insights, we simplify GNNDelete to develop Unlink-to-Unlearn (UtU), a novel method that performs unlearning exclusively by unlinking the forget edges from the graph structure. Our extensive experiments demonstrate that UtU delivers privacy protection on par with that of a retrained model while preserving high accuracy in downstream tasks. Specifically, UtU upholds over 97.3% of the retrained model's privacy protection capabilities and 99.8% of its link prediction accuracy. Meanwhile, UtU incurs only constant computational cost, underscoring its advantage as a highly lightweight and practical edge unlearning solution.
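To make the core idea concrete, the sketch below illustrates what "unlinking the forget edges from the graph structure" could look like in practice. It is a minimal illustration, not the authors' released implementation: it assumes a PyTorch Geometric-style `edge_index` tensor of shape `[2, E]`, and the function name `unlink_edges` is hypothetical.

```python
import torch

def unlink_edges(edge_index: torch.Tensor, forget_edges: torch.Tensor) -> torch.Tensor:
    """Remove the forget edges (in both directions) from a [2, E] edge_index.

    edge_index:   [2, E] edges of the trained graph
    forget_edges: [2, F] edges requested for unlearning
    Returns the pruned edge_index used for all subsequent message passing.
    """
    # Encode each (src, dst) pair as a single integer key for fast membership tests.
    num_nodes = int(max(edge_index.max(), forget_edges.max())) + 1
    keys = edge_index[0] * num_nodes + edge_index[1]

    # Drop both directions so the undirected link is fully removed.
    forget_both = torch.cat([forget_edges, forget_edges.flip(0)], dim=1)
    forget_keys = forget_both[0] * num_nodes + forget_both[1]

    # Keep only edges whose key does not match any forget edge.
    mask = ~torch.isin(keys, forget_keys)
    return edge_index[:, mask]

# Hypothetical usage: forget the undirected edge (1, 2), then run the trained
# GNN on the pruned graph with its parameters left untouched.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
forget = torch.tensor([[1], [2]])
pruned = unlink_edges(edge_index, forget)
```

Consistent with the abstract, no loss-based fine-tuning or parameter update is involved in this sketch; the trained model is simply evaluated on the pruned graph, which is what keeps the computational cost constant.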