Time-series classification is essential across diverse domains, including medical diagnosis, industrial monitoring, financial forecasting, and human activity recognition. The Rocket algorithm has emerged as a simple yet powerful method, achieving state-of-the-art performance through random convolutional kernels applied to time-series data, followed by a non-linear transformation. Its architecture approximates a one-hidden-layer convolutional neural network while eliminating parameter training, ensuring computational efficiency. Despite its empirical success, fundamental questions about its theoretical foundations remain largely unexplored. We bridge theory and practice by formalizing Rocket's random convolutional filters within the compressed sensing framework, proving that random projections preserve discriminative patterns in time-series data. This analysis reveals relationships between kernel parameters and input signal characteristics, enabling more principled approaches to algorithm configuration. Moreover, we demonstrate that its non-linearity, based on the proportion of positive values after convolution, captures the inherent sparsity of time-series data. Our theoretical investigation also proves that Rocket satisfies two critical properties: translation invariance and noise robustness. These findings enhance interpretability and provide guidance for parameter optimization in extreme cases, advancing both theoretical understanding and practical application of time-series classification.
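To make the mechanism under study concrete, the following is a minimal sketch (not the authors' code) of a Rocket-style transform: random dilated convolutional kernels followed by the proportion-of-positive-values (PPV) non-linearity referred to above. Kernel sampling follows the original Rocket recipe (lengths from {7, 9, 11}, mean-centred Gaussian weights, uniform bias, exponentially sampled dilation); padding and the exact feature count are simplified for brevity.

```python
# Minimal Rocket-style transform: random kernels + PPV pooling.
# Illustrative sketch only; the reference implementation differs in detail.
import numpy as np

rng = np.random.default_rng(0)

def sample_kernel(input_length):
    """Sample one random kernel (weights, bias, dilation) a la Rocket."""
    length = rng.choice([7, 9, 11])
    weights = rng.standard_normal(length)
    weights -= weights.mean()                 # mean-centred weights
    bias = rng.uniform(-1, 1)
    # Dilation sampled on a log scale so the receptive field
    # never exceeds the input length.
    max_exp = np.log2((input_length - 1) / (length - 1))
    dilation = int(2 ** rng.uniform(0, max_exp))
    return weights, bias, dilation

def apply_kernel(x, weights, bias, dilation):
    """Dilated 1-D convolution, then PPV and max pooling."""
    span = (len(weights) - 1) * dilation
    out = np.array([
        np.dot(weights, x[i : i + span + 1 : dilation]) + bias
        for i in range(len(x) - span)
    ])
    ppv = np.mean(out > 0)                    # proportion of positive values
    return ppv, out.max()

# Example: transform one noisy sine wave with 100 random kernels,
# yielding a 200-dimensional feature vector for a linear classifier.
x = np.sin(np.linspace(0, 10, 150)) + 0.1 * rng.standard_normal(150)
features = np.concatenate([
    apply_kernel(x, *sample_kernel(len(x))) for _ in range(100)
])
print(features.shape)  # (200,)
```

In the full algorithm these feature vectors, computed for every training series, feed a simple linear classifier (e.g. ridge regression); the PPV statistic is the non-linearity whose connection to signal sparsity is analyzed in this work.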