Abstract: This work investigates the structure and representation capacity of $sinusoidal$ MLPs, which have recently shown promising results in encoding low-dimensional signals. This success can be attributed to their smoothness and high representation capacity. The former allows the use of the network's derivatives during training, enabling regularization. However, defining the architecture and initializing its parameters to achieve a desired capacity remain empirical tasks. This work provides theoretical and experimental results justifying the capacity property of sinusoidal MLPs and offers control mechanisms for their initialization and training. We approach this from a Fourier-series perspective and link the training to the model's spectrum. Our analysis is based on a $harmonic$ expansion of the sinusoidal MLP, which shows that the composition of sinusoidal layers produces a large number of new frequencies, expressed as integer linear combinations of the input frequencies (the weights of the input layer). We use this novel $identity$ to initialize the input neurons, which act as a sampling of the signal spectrum. We also note that each hidden neuron produces the same frequencies, with amplitudes completely determined by the hidden weights. Finally, we give an upper bound for these amplitudes, which results in a $bounding$ scheme for the network's spectrum during training.
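As a brief illustrative sketch (not part of the formal statement above), one standard route to such a harmonic expansion is the Jacobi--Anger identity. Assuming, for simplicity, a single input neuron $\sin(\omega x)$ feeding a hidden neuron with weight $a$ and no bias, it gives
\[
\sin\bigl(a\sin(\omega x)\bigr) \;=\; 2\sum_{k\ge 0} J_{2k+1}(a)\,\sin\bigl((2k+1)\,\omega x\bigr),
\]
where $J_{n}$ denotes the Bessel function of the first kind. The hidden neuron thus emits the odd harmonics $(2k+1)\omega$ of the input frequency, with amplitudes $J_{2k+1}(a)$ determined entirely by the hidden weight; in the general case with several input frequencies, the same mechanism produces integer linear combinations of those frequencies, with amplitudes given by products of Bessel factors.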