It has been shown that perfectly trained networks exhibit a drastic reduction in performance when presented with distorted images. The Streaming Network (STNet) is a novel architecture capable of robust classification of distorted images while being trained only on undistorted images. This distortion robustness is enabled by sparse inputs and isolated parallel streams with decoupled weights. Recent results show that STNet is robust to 20 types of noise and distortion. STNet achieves state-of-the-art performance in the classification of low-light images while being much smaller than other networks. In this paper, we construct STNets using scaled versions (the number of filters in each layer is reduced by a factor of n) of popular networks such as VGG16, ResNet50, and MobileNetV2 as parallel streams. These new STNets are tested on several datasets. Our results indicate that the new, more efficient (fewer FLOPs) STNets exhibit accuracy equal to or higher than that of the original networks. Given the diversity of datasets and networks used in our tests, we conclude that this new type of STNet is an efficient tool for robust classification of distorted images.
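The construction summarized above (several isolated, width-scaled streams with decoupled weights, each fed a sparse copy of the input, fused only at the classification head) can be sketched as follows. This is a minimal illustration, assuming TensorFlow/Keras, a simplified VGG-like stream body standing in for the scaled VGG16/ResNet50/MobileNetV2 backbones, and random pixel masking as one plausible realization of the sparse inputs; the function names (`scaled_stream`, `build_stnet`) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal STNet-style sketch (assumptions: TF/Keras backend, VGG-like stream
# body as a stand-in for scaled VGG16/ResNet50/MobileNetV2, random masking
# as one possible form of sparse input).
import tensorflow as tf
from tensorflow.keras import layers, Model


def random_mask(t, keep_prob=0.5):
    """Zero out a random subset of pixels to produce a sparse input copy."""
    mask = tf.cast(tf.random.uniform(tf.shape(t)[:3]) < keep_prob, t.dtype)
    return t * mask[..., None]


def scaled_stream(x, scale, name):
    """One isolated stream: a VGG-like stack whose filter counts are divided
    by `scale`; each stream has its own (decoupled) weights."""
    for i, filters in enumerate([64, 128, 256]):
        x = layers.Conv2D(filters // scale, 3, padding="same",
                          activation="relu", name=f"{name}_conv{i}")(x)
        x = layers.MaxPooling2D(2, name=f"{name}_pool{i}")(x)
    return layers.GlobalAveragePooling2D(name=f"{name}_gap")(x)


def build_stnet(input_shape=(224, 224, 3), num_streams=3,
                scale=4, num_classes=10):
    inputs = layers.Input(shape=input_shape)
    features = []
    for s in range(num_streams):
        # Each stream receives its own independently masked (sparse) input.
        sparse = layers.Lambda(random_mask, name=f"sparse{s}")(inputs)
        features.append(scaled_stream(sparse, scale, name=f"stream{s}"))
    # Streams stay isolated until they are fused at the classification head.
    merged = layers.Concatenate()(features)
    outputs = layers.Dense(num_classes, activation="softmax")(merged)
    return Model(inputs, outputs)


model = build_stnet()
model.summary()
```

Because each stream's filter counts are divided by the scaling factor, the combined model can have fewer FLOPs than a single full-width backbone while retaining several independent views of the (sparsified) input.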