Leveraging disparity information from both the left and right views is crucial for stereo disparity estimation. The left-right consistency check is an effective way to enhance disparity estimates by referring to information from the opposite view. However, the conventional left-right consistency check is an isolated, heavily hand-crafted post-processing step. This paper proposes a novel left-right comparative recurrent model that performs left-right consistency checking jointly with disparity estimation. At each recurrent step, the model produces disparity results for both views and then performs an online left-right comparison to identify mismatched regions that are likely to contain erroneously labeled pixels. A soft attention mechanism leverages the resulting learned error maps to guide the model to selectively focus on refining the unreliable regions at the next recurrent step. In this way, the generated disparity maps are progressively improved by the left-right comparative recurrent model. Extensive evaluations on the KITTI 2015, Scene Flow and Middlebury benchmarks validate the effectiveness of the proposed model and demonstrate that it achieves state-of-the-art stereo disparity estimation results.
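To make the underlying consistency criterion concrete, the following is a minimal NumPy sketch of a left-right check that produces a per-pixel error map and a soft weighting over it. The function name, the rounding-based warp, and the exponential soft-weighting are illustrative assumptions, not the paper's learned formulation; in the proposed model the error maps and attention are produced by the network itself.

```python
import numpy as np

def lr_error_map(disp_left, disp_right, tau=1.0):
    """Soft left-right consistency error map (illustrative sketch).

    For a left-image pixel (y, x) with disparity d, the corresponding
    right-image pixel is (y, x - d); a consistent pair should predict
    (nearly) the same disparity. The absolute disparity difference acts
    as a per-pixel error signal; pixels whose correspondence falls
    outside the image (e.g. occlusions) are treated as maximally
    inconsistent.
    """
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Warp: x-coordinate of the match in the right view.
    xr = np.rint(xs - disp_left).astype(int)
    valid = (xr >= 0) & (xr < w)
    err = np.full((h, w), np.inf)
    err[valid] = np.abs(disp_left[valid] - disp_right[ys[valid], xr[valid]])
    # Soft attention weight in [0, 1): large error -> weight near 1,
    # so refinement concentrates on unreliable regions.
    attention = 1.0 - np.exp(-err / tau)
    return err, attention
```

On a perfectly consistent pair the error is zero wherever the warp stays in-bounds, so the attention weights vanish there and concentrate entirely on the out-of-bounds (occluded) columns.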