Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning, where graph contrastive learning (GCL) is trained with self-supervision signals derived from data-data pairs. These pairs are generated by augmentations that apply stochastic functions to the original graph. We argue that some features can be more critical than others depending on the downstream task, and that applying stochastic functions uniformly corrupts the influential features, leading to diminished accuracy. To address this issue, we introduce Feature Based Adaptive Augmentation (FebAA), which identifies and preserves potentially influential features and corrupts the remaining ones. We implement FebAA as a plug-and-play layer and use it with the state-of-the-art methods Deep Graph Contrastive Learning (GRACE) and Bootstrapped Graph Latents (BGRL). FebAA improves the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
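To make the idea concrete, the following is a minimal sketch of feature-based adaptive masking for one augmented view, assuming PyTorch. The importance criterion (`feature_importance`), the drop-rate scaling, and all function names are illustrative assumptions; the abstract does not specify how FebAA scores or corrupts features.

```python
import torch

def feature_importance(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical importance score: mean absolute value of each feature
    # column across all nodes (FebAA's actual criterion may differ).
    return x.abs().mean(dim=0)

def adaptive_feature_mask(x: torch.Tensor, drop_rate: float = 0.3) -> torch.Tensor:
    """Corrupt less influential feature columns; preserve influential ones."""
    imp = feature_importance(x)                       # (num_features,)
    weights = imp.max() - imp                         # low importance -> high drop weight
    probs = drop_rate * weights / weights.mean().clamp(min=1e-12)
    probs = probs.clamp(max=0.95)                     # never drop a column with certainty
    keep = torch.rand_like(probs) >= probs            # True = keep this feature column
    return x * keep.to(x.dtype)                       # broadcast mask over all nodes

# Usage: generate one augmented view for a contrastive pair.
x = torch.randn(2708, 1433)                           # e.g., a Cora-sized feature matrix
x_view = adaptive_feature_mask(x, drop_rate=0.3)
```

The key design choice this sketch illustrates is that the drop probability of each feature column varies inversely with its estimated importance, rather than being uniform across features.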