In this paper, we propose a set of transform-based neural network layers as an alternative to the $3\times3$ Conv2D layers in Convolutional Neural Networks (CNNs). The proposed layers can be implemented using orthogonal transforms such as the Discrete Cosine Transform (DCT) and the Hadamard Transform (HT), as well as the biorthogonal Block Wavelet Transform (BWT). Taking advantage of the corresponding convolution theorems, convolutional filtering is performed in the transform domain using element-wise multiplications. Trainable soft-thresholding layers, which remove noise in the transform domain, provide the nonlinearity of the proposed layers. Compared to the Conv2D layer, which is spatial-agnostic and channel-specific, the proposed layers are location-specific and channel-specific. They significantly reduce the number of parameters and multiplications while improving the accuracy of regular ResNets on the ImageNet-1K classification task. Furthermore, the proposed layers, together with a batch normalization layer, can be inserted before the global average pooling layer of a conventional ResNet as an additional layer, improving classification accuracy at a negligible cost in parameters and computation.
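To make the mechanism concrete, the following is a minimal NumPy sketch of a forward pass through one such layer, assuming a per-channel orthogonal 2D DCT; the names `transform_domain_layer`, `weights`, and `thresholds` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(x, t):
    # S_t(x) = sign(x) * max(|x| - t, 0): shrinks small (noisy) coefficients
    # toward zero, which is the source of the layer's nonlinearity.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def transform_domain_layer(x, weights, thresholds):
    """Hypothetical forward pass of a DCT-based layer (sketch only).

    x:          (H, W, C) input feature map
    weights:    (H, W, C) trainable transform-domain filter
    thresholds: (H, W, C) trainable soft-threshold values
    """
    X = dctn(x, axes=(0, 1), norm="ortho")        # forward 2D DCT per channel
    Y = soft_threshold(X * weights, thresholds)   # element-wise filtering + denoising
    return idctn(Y, axes=(0, 1), norm="ortho")    # back to the spatial domain
```

Because each transform-domain coefficient in each channel has its own weight and threshold in this sketch, the layer is location-specific and channel-specific, in contrast to a Conv2D kernel, which slides the same weights over all spatial positions.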