Abstract: Deep learning methods have recently been used to construct non-linear codes for the additive white Gaussian noise (AWGN) channel with feedback. However, there is limited understanding of how these black-box codes with many learned parameters use feedback. This study aims to uncover the fundamental principles underlying the first deep-learned feedback code, known as Deepcode, which is based on an RNN architecture. We build an interpretable model of Deepcode by analyzing the influence length of the inputs and approximating the non-linear dynamics of the original black-box RNN encoder. Numerical experiments demonstrate that our interpretable model -- which includes both an encoder and a decoder -- achieves performance comparable to Deepcode while offering an interpretation of how it employs feedback for error correction.
Abstract: A formal framework is given for the characterizability, by postulates, of a class of belief revision operators defined using minimization over a class of partial preorders. It is shown that, for partial orders, characterizability implies a definability property of the class of partial orders in monadic second-order logic. Based on a non-definability result for a class of partial orders, an example is given of a non-characterizable class of revision operators. This appears to be the first non-characterizability result in belief revision.