Implicit Neural Representation (INR) has emerged as an effective method for unsupervised image denoising. However, INR models are typically overparameterized; consequently, they are prone to overfitting during learning, yielding suboptimal and sometimes still-noisy results. To tackle this problem, we propose a general recipe for regularizing INR models in image denoising. Specifically, during learning we iteratively replace the supervision signal with the mean of the current prediction and the supervision signal. We theoretically prove that this simple iterative substitution gradually enhances the signal-to-noise ratio of the supervision signal, thereby benefiting the learning of INR models. Our experimental results demonstrate that the proposed approach effectively regularizes INR models, relieving overfitting and boosting image denoising performance.
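To make the iterative substitution concrete, the following is a minimal sketch of how the recipe could be wired into a coordinate-MLP denoising loop. The network architecture, learning rate, iteration counts, and the `update_every` schedule are illustrative assumptions, not the paper's exact configuration; only the supervision update `target = 0.5 * (prediction + target)` reflects the described technique.

```python
import torch
import torch.nn as nn

# Small coordinate MLP standing in for the INR (hypothetical architecture).
class CoordMLP(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        return self.net(coords)

def denoise(noisy_img, iters=2000, lr=1e-3, update_every=100):
    """Fit an INR to a noisy image, periodically replacing the supervision
    signal with the mean of the current prediction and that signal."""
    h, w = noisy_img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

    target = noisy_img.reshape(-1, 1).clone()   # supervision signal y_t
    model = CoordMLP()
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    for step in range(iters):
        pred = model(coords)
        loss = ((pred - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Iterative substitution: y_{t+1} = (prediction + y_t) / 2,
        # intended to raise the SNR of the supervision over time.
        if (step + 1) % update_every == 0:
            with torch.no_grad():
                target = 0.5 * (model(coords) + target)

    with torch.no_grad():
        return model(coords).reshape(h, w)
```

In this sketch the substitution is applied every `update_every` steps rather than at every iteration; how often the supervision is refreshed is a design choice left open by the abstract.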