Researchers in functional neuroimaging mostly rely on activation coordinates to formulate their hypotheses. We propose instead to use the full statistical images to define regions of interest (ROIs). This paper presents two machine learning approaches, transfer learning and selection transfer, which we compare on their ability to identify the patterns common to brain activation maps obtained from two functional tasks. We provide a preliminary quantification of these similarities and show that selection transfer makes it possible to set a spatial scale, yielding ROIs that are more specific to the context of interest than those obtained with transfer learning. In particular, selection transfer outlines well-known regions such as the Visual Word Form Area when discriminating between different visual tasks.
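
To make the distinction between the two compared approaches concrete, the following is a minimal, purely illustrative sketch, not the paper's actual pipeline: a transfer-learning baseline carries a whole classifier trained on one task's statistical maps over to a second task, while a selection-transfer style approach carries over only a voxel selection at a chosen spatial scale and refits within it. All array names, dimensions, and the choice of scikit-learn estimators and feature selectors are assumptions made for illustration.

```python
# Illustrative sketch only: compares a plain transfer-learning baseline with a
# selection-transfer style approach on (toy) full statistical maps.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 5000                              # toy dimensions
maps_task_a = rng.standard_normal((n_subjects, n_voxels))    # stat maps, task A
labels_a = rng.integers(0, 2, n_subjects)                    # condition labels, task A
maps_task_b = rng.standard_normal((n_subjects, n_voxels))    # stat maps, task B
labels_b = rng.integers(0, 2, n_subjects)                    # condition labels, task B

# Transfer learning: fit a whole-brain classifier on task A and
# evaluate it directly on task B.
clf_a = LogisticRegression(max_iter=1000).fit(maps_task_a, labels_a)
print("transfer accuracy:", clf_a.score(maps_task_b, labels_b))

# Selection-transfer style: only the voxel selection (a candidate ROI at a
# chosen spatial scale k) is transferred; a new classifier is cross-validated
# on task B restricted to those voxels.
k = 200                                                      # spatial scale: voxels kept
selector = SelectKBest(f_classif, k=k).fit(maps_task_a, labels_a)
roi_mask = selector.get_support()                            # boolean ROI over voxels
acc_b = cross_val_score(LogisticRegression(max_iter=1000),
                        maps_task_b[:, roi_mask], labels_b, cv=5).mean()
print("selection-transfer accuracy:", acc_b)
```

In this toy setting the parameter `k` plays the role of the spatial scale mentioned above: smaller values yield tighter, more context-specific candidate ROIs, while larger values approach the whole-brain transfer baseline.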