Abstract: Multivariate time series classification is an increasingly relevant topic in machine learning. Existing methods focus either on identifying important local sequence fragments or on modeling global long-range dependencies, but they often neglect the information obtained by fusing global and local features. In this work, we propose a dual-attention network (DA-Net) that extracts both local and global features for multivariate time series classification. DA-Net consists of two distinct layers: the Squeeze-Excitation Window Attention (SEWA) layer and the Sparse Self-Attention within Windows (SSAW) layer. Based on these two layers, DA-Net can mine the essential local sequence fragments that are critical for establishing global long-range dependencies.
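To make the dual-attention idea named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it pairs a window-restricted self-attention layer (in the spirit of SSAW) with a squeeze-and-excitation gate over window descriptors (in the spirit of SEWA). All module names, shapes, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch only; the real SEWA/SSAW layers may differ in detail.
import torch
import torch.nn as nn


class WindowSelfAttention(nn.Module):
    """Self-attention restricted to non-overlapping windows along time (SSAW-style)."""

    def __init__(self, dim: int, window: int, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim); time is assumed divisible by the window size.
        b, t, d = x.shape
        w = self.window
        xw = x.reshape(b * (t // w), w, d)      # split the series into local windows
        out, _ = self.attn(xw, xw, xw)          # attend only within each window
        return out.reshape(b, t, d)


class SqueezeExciteWindows(nn.Module):
    """Squeeze-and-excitation gating over window-level descriptors (SEWA-style)."""

    def __init__(self, dim: int, window: int, reduction: int = 4):
        super().__init__()
        self.window = window
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(),
            nn.Linear(dim // reduction, dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        w = self.window
        xw = x.reshape(b, t // w, w, d)
        desc = xw.mean(dim=2)                   # squeeze: one descriptor per window
        scale = self.gate(desc).unsqueeze(2)    # excite: per-window channel gate
        return (xw * scale).reshape(b, t, d)


class DualAttentionBlock(nn.Module):
    """Composes the local (window attention) and global (window gating) views."""

    def __init__(self, dim: int = 64, window: int = 8):
        super().__init__()
        self.local_attn = WindowSelfAttention(dim, window)
        self.window_gate = SqueezeExciteWindows(dim, window)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.window_gate(self.local_attn(x))


# Example: a batch of 16 multivariate series with 64 time steps and 64 channels.
x = torch.randn(16, 64, 64)
print(DualAttentionBlock()(x).shape)  # torch.Size([16, 64, 64])
```

The intent of the sketch is to show how a local, window-limited attention and a global, window-reweighting gate can be stacked so that locally salient fragments inform a sequence-wide representation; the actual DA-Net layers should be taken from the paper itself.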