Abstract: Problems of linear system identification have closed-form solutions, e.g., using least-squares or maximum-likelihood methods on input-output data. However, even the seemingly simplest problems of nonlinear system identification present additional difficulties related to the optimisation of a furrowed (non-convex) error surface. Such cases include the Hammerstein plant, which is typically represented by a bilinear model based on a polynomial or Fourier expansion of its nonlinear element. Wiener plants, by contrast, are genuinely nonlinear in the parameters, which further complicates the optimisation. Neural network models and the related optimisers are, however, well-suited to represent and solve nonlinear problems. Unfortunately, the data available for nonlinear system identification may be too diverse to support an accurate and consistent model representation. This diversity may stem from different impulse responses and nonlinear functions arising in different measurements of (different) plants. We therefore propose multikernel neural network models that represent nonlinear plants with a subset of trainable weights shared across measurements and another subset of plant-specific (i.e., multikernel) weights that adapts to the characteristics of individual measurements. We demonstrate that neural network models can in this way be fitted to diverse data, which is not possible with some standard methods of nonlinear system identification. For model testing, the shared weights of the trained model are reused to support the identification and representation of unseen plant measurements, while the plant-specific weights are readjusted to fit the test data.
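To make the split between shared and plant-specific weights more concrete, the following is a minimal sketch of one possible multikernel model in PyTorch, assuming a Hammerstein-like structure (a shared static nonlinearity followed by a measurement-specific FIR kernel). It is not the implementation used in the paper; names such as MultikernelModel, measurement_id and kernel_len are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultikernelModel(nn.Module):
    """Hammerstein-style sketch: a nonlinearity shared across all measurements,
    followed by a plant-specific (per-measurement) linear FIR kernel."""

    def __init__(self, num_measurements: int, kernel_len: int = 32, hidden: int = 16):
        super().__init__()
        # Shared weights: a small MLP modelling the memoryless nonlinear element.
        self.shared_nonlinearity = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        # Plant-specific ("multikernel") weights: one FIR kernel per measurement.
        self.kernels = nn.Parameter(0.01 * torch.randn(num_measurements, 1, kernel_len))

    def forward(self, x: torch.Tensor, measurement_id: int) -> torch.Tensor:
        # x: (batch, 1, time) -- input signal of one measurement.
        z = self.shared_nonlinearity(x.transpose(1, 2)).transpose(1, 2)
        kernel = self.kernels[measurement_id : measurement_id + 1]  # (1, 1, kernel_len)
        # Filter the nonlinearly transformed signal with the plant-specific kernel.
        y = F.conv1d(z, kernel, padding=kernel.shape[-1] - 1)
        return y[..., : x.shape[-1]]

# At test time the shared weights are reused (frozen) and only the
# plant-specific kernels are re-optimised on the unseen measurement:
model = MultikernelModel(num_measurements=8)
for p in model.shared_nonlinearity.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam([model.kernels], lr=1e-3)
```

The final lines mirror the testing procedure described above: the shared parameters stay fixed while only the plant-specific kernel is readjusted to the test data.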
Abstract: Neural network modeling is a key technology in science and research and a platform for deploying algorithms to systems. In wireless communications, system modeling plays a pivotal role in interference cancellation, with particularly high accuracy requirements for the elimination of self-interference in full-duplex relays. This paper therefore investigates the potential of neural network architectures for identifying and representing the self-interference channel. The approach is promising because of its ability to cope with nonlinear representations, but the variability of the channel characteristics is a first obstacle to the straightforward application of data-driven neural networks. We therefore propose architectures with a touch of "adaptivity" to accomplish successful training. For reproducibility of the results and for further investigations with possibly stronger models and enhanced performance, we document and share our data.