Convolutional neural networks (CNNs) have become the state-of-the-art tool for challenging problems in computer vision and image processing. Since the convolution operator is linear, several generalizations have been proposed to improve the performance of CNNs. One way to increase the capability of the convolution operator is to apply an activation function to its inner-product operation. In this paper, we introduce PowerLinear activation functions, which are based on the polynomial kernel generalization of the convolution operator. EvenPowLin functions are the main branch of the PowerLinear activation functions. This class of activation functions saturates in neither the positive nor the negative input region, and it activates negative inputs with the same magnitude as positive ones. These properties make EvenPowLin activation functions suitable for the first layer of CNN architectures, where they can learn complex features of input images. Additionally, CNN models using EvenPowLin activation functions classify inverted grayscale images as accurately as the original grayscale images, significantly outperforming commonly used activation functions.
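The following is a minimal sketch of an even-power activation with the properties described above: it is an even function, so negative inputs are activated with the same magnitude as positive ones, and it is unbounded (non-saturating) on both sides. The form f(x) = |x|^p with a fixed exponent p, and the names EvenPowerActivation and first_block, are illustrative assumptions for this sketch, not the paper's exact EvenPowLin parameterization. It also illustrates why such an activation helps with inverted images: for a zero-centered grayscale input, inversion is approximately a sign flip, a bias-free convolution is linear, and an even activation then yields identical first-layer responses for an image and its inversion.

```python
# Illustrative sketch only; the exact EvenPowLin definition is given in the paper.
import torch
import torch.nn as nn

class EvenPowerActivation(nn.Module):
    """Even-power activation sketch: f(x) = |x| ** p.

    f(-x) == f(x), so negative and positive inputs are activated with
    the same magnitude, and f grows without bound in |x| (no saturation
    in either input region).
    """
    def __init__(self, p: float = 2.0):
        super().__init__()
        self.p = p  # assumed fixed here; could instead be a learnable parameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.abs().pow(self.p)

# First conv block with a bias-free convolution: conv(-x) == -conv(x),
# so the even activation maps a zero-centered image and its inversion
# to the same feature maps.
first_block = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1, bias=False),
    EvenPowerActivation(p=2.0),
)

x = torch.randn(1, 1, 28, 28)                    # zero-centered grayscale input
assert torch.allclose(first_block(x), first_block(-x))
```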