Abstract: In the field of behavior-related brain computation, raw neural population activities must be meaningfully aligned despite the drastic distribution shifts between recordings. This alignment is non-trivial, since most neural population activities take the form of multivariate time series. An instrumental framework in neuroscience posits that trial-based neural population activities arise from low-dimensional latent dynamics, and focusing on these latent dynamics greatly facilitates the alignment procedure. Despite considerable progress, existing methods usually ignore the intrinsic spatio-temporal structure within latent dynamics, which degrades the dynamics structures and the overall performance after alignment. To tackle this problem, we propose a method that leverages the expressiveness of diffusion models to relieve these issues. Specifically, the latent dynamics structures of the source domain are first extracted by the diffusion model; these structures are then recovered on the target domain through a maximum-likelihood alignment procedure. We first demonstrate the effectiveness of our method on a synthetic dataset. When applied to neural recordings from the primate motor cortex, under both cross-day and inter-subject settings, our method consistently preserves the spatio-temporal structure of the latent dynamics and outperforms existing approaches in alignment quality.
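The two-stage idea can be illustrated in a few lines. Below is a minimal sketch, assuming a simple multivariate Gaussian fitted to source latents as a stand-in for the diffusion prior; the toy data, shapes, and the linear alignment map are hypothetical and not the paper's actual architecture.

```python
# Minimal sketch of the two-stage alignment idea. A Gaussian fitted to
# source latents stands in for the diffusion prior; all names, shapes,
# and the toy data are hypothetical, not the paper's implementation.
import torch

torch.manual_seed(0)

# Toy latent states (trials x latent dims); target is a shifted version.
source = torch.randn(200, 10)
target = source @ torch.randn(10, 10) + 0.5

# Stage 1: capture source latent structure (Gaussian stand-in for the prior).
mu = source.mean(0)
cov = torch.cov(source.T) + 1e-3 * torch.eye(10)  # jitter for stability
prior = torch.distributions.MultivariateNormal(mu, cov)

# Stage 2: learn a linear map that maximizes target likelihood under the prior.
A = torch.nn.Parameter(torch.eye(10))
b = torch.nn.Parameter(torch.zeros(10))
opt = torch.optim.Adam([A, b], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = -prior.log_prob(target @ A + b).mean()  # maximum-likelihood alignment
    loss.backward()
    opt.step()
```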
Abstract: Deep unsupervised domain adaptation (UDA) has recently received increasing attention from researchers. However, existing methods are computationally intensive due to the cost of the Convolutional Neural Networks (CNNs) that most of them adopt, and to date there is no effective network compression method for accelerating these models. In this paper, we propose a unified Transfer Channel Pruning (TCP) approach for accelerating UDA models. TCP compresses a deep UDA model by pruning less important channels while simultaneously learning transferable features by reducing the cross-domain distribution divergence. It therefore reduces the impact of negative transfer and maintains competitive performance on the target task. To the best of our knowledge, TCP is the first approach that aims at accelerating deep UDA models. TCP is validated on two benchmark datasets (Office-31 and ImageCLEF-DA) with two common backbone networks (VGG16 and ResNet50). Experimental results demonstrate that TCP achieves comparable or better classification accuracy than competing methods while significantly reducing the computational cost. Specifically, with VGG16 we obtain even higher accuracy after pruning 26% of the floating-point operations (FLOPs); with ResNet50 we also obtain higher accuracy on half of the tasks after pruning 12% of the FLOPs. We hope that TCP will open a new door for future research on accelerating transfer learning models.
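As a rough illustration of transfer-aware channel pruning, the sketch below ranks convolutional channels by an L1-norm importance proxy, prunes the weakest quarter, and computes a linear-time MMD-style penalty between source and target features for fine-tuning; the importance criterion and the mmd_linear helper are illustrative assumptions, not TCP's exact formulation.

```python
# Illustrative transfer-aware channel pruning: L1 importance proxy plus an
# MMD-style penalty. These are assumptions for the sketch, not TCP's exact
# criterion.
import torch
import torch.nn as nn

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)

# Channel importance: L1 norm of each output channel's filter weights.
importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # shape (32,)
keep = importance.argsort(descending=True)[:24]             # prune 25% of channels

# Build the pruned layer by copying the retained filters and biases.
pruned = nn.Conv2d(16, 24, kernel_size=3, padding=1)
pruned.weight.data = conv.weight.data[keep].clone()
pruned.bias.data = conv.bias.data[keep].clone()

def mmd_linear(f_src, f_tgt):
    """Linear-time MMD estimate between mean source/target features."""
    delta = f_src.mean(0) - f_tgt.mean(0)
    return delta.dot(delta)

# During fine-tuning, the total loss would combine the task loss with this
# cross-domain penalty on features of the pruned network.
src_feat = pruned(torch.randn(8, 16, 14, 14)).flatten(1)
tgt_feat = pruned(torch.randn(8, 16, 14, 14)).flatten(1)
transfer_penalty = mmd_linear(src_feat, tgt_feat)
```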