This paper proposes the Doubly Compressed Momentum-assisted Stochastic Gradient Tracking algorithm (DoCoM-SGT) for communication-efficient decentralized learning. DoCoM-SGT uses two compression steps per communication round while simultaneously tracking the averaged iterates and stochastic gradients. Furthermore, DoCoM-SGT incorporates a momentum-based technique to reduce the variance of the gradient estimates. We show that, for non-convex objective functions, DoCoM-SGT finds a solution $\bar{\theta}$ in $T$ iterations satisfying $\mathbb{E} [ \| \nabla f(\bar{\theta}) \|^2 ] = {\cal O}( 1 / T^{2/3} )$, and we provide competitive convergence rate guarantees for other function classes. Numerical experiments on synthetic and real datasets validate the efficacy of our algorithm.
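To make the high-level description concrete, the sketch below simulates a DoCoM-style update loop on a toy decentralized least-squares problem. It combines the three ingredients named above: compressed gossip on both the local iterates and the gradient trackers (so two compression steps per round), gradient tracking, and a momentum-based gradient estimator. The top-$k$ compressor, ring mixing matrix, CHOCO-style gossip with reference copies, STORM-style momentum, and all step sizes are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5, 20, 30                  # agents, dimension, samples per agent
A = [rng.standard_normal((m, d)) for _ in range(n)]
b = [rng.standard_normal(m) for _ in range(n)]

def grad_batch(i, theta_i, idx):
    """Minibatch gradient of agent i's least-squares loss at theta_i."""
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ theta_i - bi) / len(idx)

def top_k(x, k=5):
    """Top-k sparsification, standing in for the compression operator."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

# Doubly stochastic mixing matrix for a ring topology (assumed).
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25
I = np.eye(n)

eta, beta, gamma = 0.01, 0.2, 0.3    # step size, momentum, consensus step (assumed)
theta = np.zeros((n, d))             # local iterates
v = np.array([grad_batch(i, theta[i], np.arange(m)) for i in range(n)])
g = v.copy()                         # gradient trackers
h_theta = np.zeros((n, d))           # reference copies enabling compressed gossip
h_g = np.zeros((n, d))

for t in range(300):
    theta_prev = theta.copy()

    # (i) descend along the tracked gradient, then compressed gossip on iterates
    theta = theta - eta * g
    h_theta += np.array([top_k(theta[i] - h_theta[i]) for i in range(n)])
    theta = theta + gamma * (W - I) @ h_theta

    # (ii) momentum-based gradient estimate (STORM-style), shared minibatch per agent
    v_new = np.empty_like(v)
    for i in range(n):
        idx = rng.choice(m, size=5, replace=False)
        v_new[i] = (grad_batch(i, theta[i], idx)
                    + (1 - beta) * (v[i] - grad_batch(i, theta_prev[i], idx)))

    # (iii) gradient-tracking update, then compressed gossip on the trackers
    g = g + v_new - v
    h_g += np.array([top_k(g[i] - h_g[i]) for i in range(n)])
    g = g + gamma * (W - I) @ h_g
    v = v_new

theta_bar = theta.mean(axis=0)
full_grad = np.mean([A[i].T @ (A[i] @ theta_bar - b[i]) / m for i in range(n)], axis=0)
print("||grad f(theta_bar)||^2 =", np.sum(full_grad ** 2))
```

The printed quantity is the squared gradient norm at the averaged iterate $\bar{\theta}$, i.e. the stationarity measure referenced in the ${\cal O}(1/T^{2/3})$ bound above; the specific hyperparameter values are placeholders chosen only so the toy run is stable.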