Contrastive learning (CL) has emerged as a dominant technique for unsupervised representation learning: it embeds augmented versions of the anchor close to each other (positive samples) while pushing the embeddings of other samples (negative samples) apart. As recent works have revealed, CL can benefit from hard negative samples (negative samples that are difficult to distinguish from the anchor). However, we observe only minor improvements, or even performance drops, when we adopt existing hard negative mining techniques in Graph Contrastive Learning (GCL). We find that in GCL many of the hard negative samples that are similar to the anchor are in fact false negatives (samples from the same class as the anchor). This differs from CL in computer vision and leads to the unsatisfactory performance of existing hard negative mining techniques in GCL. To eliminate this bias, we propose Debiased Graph Contrastive Learning (DGCL), a novel and effective method that estimates the probability of each negative sample being a true negative. With this probability, we devise two schemes (i.e., DGCL-weight and DGCL-mix) to boost the performance of GCL. Empirically, DGCL outperforms or matches previous unsupervised state-of-the-art results on several benchmarks and even exceeds the performance of supervised baselines.
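To make the weighting idea concrete, the sketch below illustrates how an estimated true-negative probability could down-weight likely false negatives in an InfoNCE-style objective, in the spirit of DGCL-weight. This is a minimal hypothetical illustration, not the authors' implementation: the function name `dgcl_weight_loss`, the tensor shapes, and the way `true_neg_prob` is obtained are all assumptions.

```python
import torch
import torch.nn.functional as F

def dgcl_weight_loss(anchor, positive, negatives, true_neg_prob, temperature=0.5):
    """
    Weighted InfoNCE-style contrastive loss (illustrative sketch).

    anchor:        (d,)   embedding of the anchor graph/node
    positive:      (d,)   embedding of the augmented (positive) view
    negatives:     (K, d) embeddings of K candidate negative samples
    true_neg_prob: (K,)   estimated probability that each negative is a
                          true negative (i.e., not from the anchor's class)
    """
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)

    pos_sim = torch.exp(anchor @ positive / temperature)   # scalar
    neg_sim = torch.exp(negatives @ anchor / temperature)  # (K,)

    # Down-weight likely false negatives: negatives with a low probability
    # of being true negatives contribute less to the denominator.
    weighted_neg = (true_neg_prob * neg_sim).sum()

    return -torch.log(pos_sim / (pos_sim + weighted_neg))


# Example usage (hypothetical shapes): 128-dim embeddings, 16 candidate negatives.
anchor = torch.randn(128)
positive = torch.randn(128)
negatives = torch.randn(16, 128)
true_neg_prob = torch.full((16,), 0.9)  # placeholder for the estimated probabilities
loss = dgcl_weight_loss(anchor, positive, negatives, true_neg_prob)
```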