Existing state-of-the-art disparity estimation works mostly build a 4D concatenation volume and regress disparity with a very deep 3D convolutional neural network, which is inefficient due to high memory consumption and slow inference speed. In this paper, we propose a network named EDNet for efficient disparity estimation. Specifically, we construct a combination volume that incorporates contextual information from the concatenation volume and feature similarity measurements from the correlation volume. The combination volume can be aggregated by 2D convolutions, which require less running memory. We further propose a spatial-attention-based residual learning module to generate attention-aware residual features. Because the residual learning process concentrates specifically on inaccurate regions, accurate disparity correction can be provided even in low-texture regions. Extensive experiments on the Scene Flow and KITTI datasets show that our network outperforms previous 3D-convolution-based works and achieves state-of-the-art performance with significantly faster speed and lower memory consumption, demonstrating the effectiveness of our proposed method.
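To make the volume construction concrete, the following is a minimal NumPy sketch of how a correlation volume and a 4D concatenation volume can be built from left/right feature maps and fused into a single volume whose disparity axis is folded into the channel dimension, so that it can be aggregated with 2D convolutions. All function names, tensor layouts, and the simple channel-wise fusion here are illustrative assumptions; the paper's actual feature compression and fusion design may differ.

```python
import numpy as np

def correlation_volume(f_left, f_right, max_disp):
    """Feature-similarity volume of shape (max_disp, H, W).

    f_left, f_right: (C, H, W) feature maps from the left/right images.
    For each candidate disparity d, the right features are shifted by d
    and compared to the left features via a channel-wise mean product.
    """
    C, H, W = f_left.shape
    corr = np.zeros((max_disp, H, W), dtype=f_left.dtype)
    for d in range(max_disp):
        corr[d, :, d:] = (f_left[:, :, d:] * f_right[:, :, :W - d]).mean(axis=0)
    return corr

def concat_volume(f_left, f_right, max_disp):
    """4D concatenation volume of shape (2C, max_disp, H, W).

    Left features and disparity-shifted right features are stacked along
    the channel axis, preserving contextual information.
    """
    C, H, W = f_left.shape
    vol = np.zeros((2 * C, max_disp, H, W), dtype=f_left.dtype)
    for d in range(max_disp):
        vol[:C, d, :, d:] = f_left[:, :, d:]
        vol[C:, d, :, d:] = f_right[:, :, :W - d]
    return vol

def combination_volume(f_left, f_right, max_disp):
    """Fuse both volumes into a 3D tensor suitable for 2D convolutions.

    The disparity axis of the 4D volume is folded into the channel axis
    (hypothetical fusion; shown only to illustrate why 2D aggregation
    becomes possible once the volume is 3D).
    """
    corr = correlation_volume(f_left, f_right, max_disp)   # (D, H, W)
    cat = concat_volume(f_left, f_right, max_disp)         # (2C, D, H, W)
    cat2d = cat.reshape(-1, *cat.shape[2:])                # (2C*D, H, W)
    return np.concatenate([corr, cat2d], axis=0)           # (D + 2C*D, H, W)
```

With C=4 feature channels and max_disp=3, the fused volume has 3 + 2·4·3 = 27 channels, i.e. a standard 3D tensor that 2D convolutions can process directly, whereas the raw 4D concatenation volume would require 3D convolutions.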