Neuroimaging techniques have proven useful for studying brain activity. This paper uses magnetoencephalography (MEG) data, provided by the Human Connectome Project (HCP), in combination with several deep artificial neural network models to perform brain decoding. More specifically, we investigate to what extent the task performed by a subject can be inferred from their MEG data. Three models are proposed and examined: one based on compact convolutions, one combining convolutional and long short-term memory (LSTM) layers, and a multi-view model that fuses the outputs of the two preceding stream networks. These models exploit the spatio-temporal structure of the MEG data to learn new representations that are used to decode the relevant tasks across subjects. To emphasize the most relevant features of the input signals, two attention mechanisms, namely self-attention and global attention, are incorporated into all the models. Experimental results for cross-subject multi-class classification on the studied MEG dataset show that the inclusion of attention improves the generalization of the models across subjects.
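
To make the architectural idea concrete, the following is a minimal, illustrative PyTorch sketch of a convolutional + LSTM decoder with self-attention over time for multichannel MEG signals. It is not the paper's exact configuration: the sensor count (248), number of task classes (4), layer sizes, and the class name `CNNLSTMAttentionDecoder` are placeholder assumptions for illustration only.

```python
# Illustrative sketch only: a CNN-LSTM decoder with self-attention over time
# for multichannel MEG signals. Channel count, class count, and layer sizes
# are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class CNNLSTMAttentionDecoder(nn.Module):
    def __init__(self, n_channels=248, n_classes=4, hidden=64):
        super().__init__()
        # Temporal convolution that mixes sensor channels into feature maps
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm1d(hidden),
            nn.ELU(),
            nn.MaxPool1d(4),
        )
        # LSTM captures longer-range temporal dependencies
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Self-attention over the LSTM output sequence
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, n_channels, time)
        h = self.conv(x)              # (batch, hidden, time/4)
        h = h.transpose(1, 2)         # (batch, time/4, hidden)
        h, _ = self.lstm(h)           # (batch, time/4, hidden)
        h, _ = self.attn(h, h, h)     # self-attention across time steps
        return self.classifier(h.mean(dim=1))  # pool over time, then classify

# Example: a batch of 8 trials, 248 MEG channels, 400 time samples
logits = CNNLSTMAttentionDecoder()(torch.randn(8, 248, 400))
print(logits.shape)  # torch.Size([8, 4])
```

In a multi-view variant, two such streams (e.g. a compact convolutional network and a CNN-LSTM) would each produce their own representation, and a fusion layer would combine them before classification.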