Abstract: The paper explores the capability of continuous-time recurrent neural networks to store and recall precisely timed spike patterns. We show (by numerical experiments) that this is indeed possible: within some range of parameters, any random score of spike trains (for all neurons in the network) can be robustly memorized and autonomously reproduced with stable, accurate relative timing of all spikes, with probability close to one. We also demonstrate associative recall under noisy conditions. In these experiments, the required synaptic weights are computed offline, to satisfy a template that encourages temporal stability.
Abstract: Normals with unknown variance (NUV) can represent many useful priors, including $L_p$ norms and other sparsifying priors, and they blend well with linear-Gaussian models and Gaussian message passing algorithms. In this paper, we elaborate on the recently proposed discretizing NUV priors, and we propose new NUV representations of half-space constraints and box constraints. We then demonstrate the use of such NUV representations with exemplary applications in model predictive control, with a variety of constraints on the input, the output, or the internal state of the controlled system. In such applications, the computations boil down to iterations of Kalman-type forward-backward recursions, with a complexity (per iteration) that is linear in the planning horizon. In consequence, this approach can handle long planning horizons, which distinguishes it from the prior art. For nonconvex constraints, this approach has no claim to optimality, but it is empirically very effective.
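The sparsifying effect of a plain NUV prior can be illustrated with a small sketch (not taken from these papers): alternating between a linear-Gaussian MAP step and the closed-form variance update $s^2 = \hat{x}^2$ yields sparse estimates. The function name and the direct matrix solve are illustrative assumptions; the papers instead use Gaussian message passing with linear complexity.

```python
import numpy as np

def nuv_sparse_estimate(A, y, sigma2=1e-2, n_iter=50, eps=1e-12):
    """Sparse estimate of x in y = A x + noise, using a plain NUV prior.

    Alternates between (i) a MAP estimate of x with per-component prior
    variances s2 (a ridge-type linear-Gaussian solve) and (ii) the
    closed-form NUV variance update s2 = x^2, which promotes sparsity.
    Minimal sketch only; a direct solve replaces message passing.
    """
    m, n = A.shape
    s2 = np.ones(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        # MAP step: minimize ||y - A x||^2 / sigma2 + sum_i x_i^2 / s2_i
        W = A.T @ A / sigma2 + np.diag(1.0 / np.maximum(s2, eps))
        x = np.linalg.solve(W, A.T @ y / sigma2)
        s2 = x**2  # NUV variance update: components near zero get pinned there
    return x
```

In a noiseless overdetermined test, the iteration drives the zero components of the estimate essentially to zero while leaving the active components unbiased.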
Abstract: The paper considers the calibration of control-bounded analog-to-digital converters. It is demonstrated that variations of the analog frontend can be addressed by calibrating the digital estimation filter. In simulations (both behavioral and transistor level) of a leapfrog analog frontend, the proposed calibration method restores essentially the nominal performance. Moreover, with digital-filter calibration in mind, the paper reformulates the design problem of control-bounded converters and thereby clarifies the role of sampling, desired filter shape, and nominal conversion error.
Abstract: Normals with unknown variance (NUV) can represent many useful priors and blend well with Gaussian models and message passing algorithms. NUV representations of sparsifying priors have long been known, and NUV representations of binary (and M-level) priors have been proposed very recently. In this document, we propose NUV representations of half-space constraints and box constraints, which makes it possible to add such constraints to any linear Gaussian model, with any of the previously known NUV priors, without affecting the computational tractability.
Abstract: Priors with a NUV representation (normal with unknown variance) have mostly been used for sparsity. In this paper, a novel NUV prior is proposed that effectively binarizes. While such a prior may have many uses, in this paper, we explore its use for discrete-level control (with M $\geq$ 2 levels) including, in particular, a practical scheme for digital-to-analog conversion. The resulting computations, for each planning period, amount to iterating forward-backward Gaussian message passing recursions (similar to Kalman smoothing), with a complexity (per iteration) that is linear in the planning horizon. In consequence, the proposed method is not limited to a short planning horizon and can therefore outperform "optimal" methods. A preference for sparse level switches can easily be incorporated.
Abstract: The paper proposes a new method to determine a binary control signal for an analog linear system such that the state, or some output, of the system follows a given target trajectory. The method can also be used for digital-to-analog conversion. The heart of the proposed method is a new binary-enforcing NUV prior (normal with unknown variance). The resulting computations, for each planning period, amount to iterating forward-backward Gaussian message passing recursions (similar to Kalman smoothing), with a complexity (per iteration) that is linear in the planning horizon. In consequence, the proposed method is not limited to a short planning horizon.
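The forward-backward Gaussian recursions recurring in these abstracts are structurally similar to a Kalman filter followed by a Rauch-Tung-Striebel smoothing pass. A minimal sketch of one such sweep for a generic linear state-space model (assumed model and function names; not the papers' exact recursions), illustrating why the cost per iteration is linear in the horizon:

```python
import numpy as np

def rts_smoother(A, C, Q, R, m0, P0, ys):
    """Kalman filter + Rauch-Tung-Striebel smoother for x[k+1] = A x[k] + w,
    y[k] = C x[k] + v. One forward and one backward pass, so the cost is
    linear in the number of time steps N (the planning horizon)."""
    N, n = len(ys), m0.shape[0]
    ms_f = np.zeros((N, n)); Ps_f = np.zeros((N, n, n))  # filtered
    ms_p = np.zeros((N, n)); Ps_p = np.zeros((N, n, n))  # predicted
    m, P = m0, P0
    for k, y in enumerate(ys):                 # forward (filtering) pass
        mp = A @ m                             # predict
        Pp = A @ P @ A.T + Q
        S = C @ Pp @ C.T + R                   # innovation covariance
        K = Pp @ C.T @ np.linalg.inv(S)        # Kalman gain
        m = mp + K @ (y - C @ mp)              # update
        P = Pp - K @ S @ K.T
        ms_p[k], Ps_p[k] = mp, Pp
        ms_f[k], Ps_f[k] = m, P
    ms, Ps = ms_f.copy(), Ps_f.copy()
    for k in range(N - 2, -1, -1):             # backward (smoothing) pass
        G = Ps_f[k] @ A.T @ np.linalg.inv(Ps_p[k + 1])
        ms[k] = ms_f[k] + G @ (ms[k + 1] - ms_p[k + 1])
        Ps[k] = Ps_f[k] + G @ (Ps[k + 1] - Ps_p[k + 1]) @ G.T
    return ms, Ps
```

In the NUV-based methods, one such sweep is performed per iteration, with the (unknown) prior variances updated between sweeps.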
Abstract: This paper studies the capability of a recurrent neural network model to memorize random dynamical firing patterns by a simple local learning rule. Two modes of learning/memorization are considered: The first mode is strictly online, with a single pass through the data, while the second mode uses multiple passes through the data. In both modes, the learning is strictly local (quasi-Hebbian): At any given time step, only the weights between the neurons firing (or supposed to be firing) at the previous time step and those firing (or supposed to be firing) at the present time step are modified. The main result of the paper is an upper bound on the probability that the single-pass memorization is not perfect. It follows that the memorization capacity in this mode asymptotically scales like that of the classical Hopfield model (which, in contrast, memorizes static patterns). However, multiple-pass memorization is shown to achieve a higher capacity (with a nonvanishing number of bits per connection/synapse). These mathematical findings may be helpful for understanding the functions of short-term memory and long-term memory in neuroscience.
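The flavor of such a quasi-Hebbian, strictly local rule can be conveyed by a small sketch (illustrative only; the sign convention and the recall dynamics below are assumptions, not the paper's exact parameterization): each transition of the firing sequence is written into the weights by a local outer-product update that touches only the columns of neurons active at the previous step.

```python
import numpy as np

def memorize_sequence(X, rounds=1):
    """Quasi-Hebbian memorization of a firing sequence X (T x n, entries in {0,1}).

    For each transition t-1 -> t, only the weights out of neurons active
    at t-1 are changed (a strictly local rule): they are strengthened
    toward neurons that should fire at t and weakened toward the others.
    Sketch only; not the paper's exact rule.
    """
    T, n = X.shape
    W = np.zeros((n, n))
    for _ in range(rounds):                  # rounds > 1 = multiple-pass mode
        for t in range(1, T):
            pre = X[t - 1]                   # neurons that fired at t-1
            post = 2 * X[t] - 1              # +1 if should fire at t, else -1
            W += np.outer(post, pre)         # touches only columns with pre == 1
    return W

def recall(W, x0, T):
    """Run the network forward from the initial pattern x0 (assumed threshold dynamics)."""
    xs = [x0]
    for _ in range(T - 1):
        xs.append((W @ xs[-1] > 0).astype(int))
    return np.array(xs)
```

For a short sequence in a sufficiently large network, a single pass already reproduces the stored sequence exactly when started from its first pattern.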
Abstract: A factor-graph representation of quantum-mechanical probabilities (involving any number of measurements) is proposed. Unlike standard statistical models, the proposed representation uses auxiliary variables (state variables) that are not random variables. All joint probability distributions are marginals of some complex-valued function $q$, and it is demonstrated how the basic concepts of quantum mechanics relate to factorizations and marginals of $q$.