The matching function for the problem of stereo reconstruction or optical flow has traditionally been designed as a function of the distance between the features describing matched pixels. This approach works under the assumption that the appearance of pixels in two stereo cameras or in two consecutive video frames does not change dramatically. However, this might not be the case if we try to match pixels over a large interval of time. In this paper we propose a method that learns the matching function and automatically finds the space of allowed changes in visual appearance, such as those due to motion blur, chromatic distortions, different colour calibration, or seasonal changes. Furthermore, it automatically learns the importance of the matching scores of contextual features at different relative locations and scales. The proposed classifier gives reliable estimates of pixel disparities even without any form of regularization. We evaluated our method on two standard problems, stereo matching on the KITTI outdoor dataset and optical flow on the Sintel dataset, as well as on the newly introduced TimeLapse change detection dataset. Our algorithm obtained very promising results, comparable to the state of the art.
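To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of a learned matching function of this kind: per-pixel match scores are feature differences sampled at a set of contextual offsets and scales, and a logistic classifier learns how much each location and scale contributes to the matching decision. All names here (OFFSETS, SCALES, match_features, train_matcher) are illustrative assumptions, and the paper's actual features and learner may differ.

```python
# Hypothetical sketch of a learned matching function with contextual
# features at several relative locations and scales. Not the authors' code.
import numpy as np

OFFSETS = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2)]  # contextual positions
SCALES = [1, 2]                                        # feature pyramid levels


def match_features(feat_left, feat_right, y, x, d):
    """Stack |difference| scores between pixel (y, x) in the left image and
    its candidate match (y, x - d) in the right image, over all contextual
    offsets and scales. feat_* are lists of (H, W, C) arrays, one per scale."""
    scores = []
    for s, (fl, fr) in enumerate(zip(feat_left, feat_right)):
        for dy, dx in OFFSETS:
            yy = np.clip(y // SCALES[s] + dy, 0, fl.shape[0] - 1)
            xl = np.clip(x // SCALES[s] + dx, 0, fl.shape[1] - 1)
            xr = np.clip((x - d) // SCALES[s] + dx, 0, fr.shape[1] - 1)
            scores.append(np.abs(fl[yy, xl] - fr[yy, xr]).mean())
    return np.array(scores)


def train_matcher(X, labels, lr=0.1, epochs=200):
    """Logistic regression over stacked match scores (X: N x F, labels: 0/1
    for incorrect/correct correspondences). The learned weights encode the
    importance of each contextual location and scale."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted match probability
        g = p - labels                           # logistic-loss gradient
        w -= lr * X.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b
```

Under this sketch, inference would keep, for each pixel, the candidate disparity with the highest predicted match probability; this mirrors the claim above that the learned classifier alone, without any regularization, already yields reliable disparity estimates.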