Abstract:Sensory stimuli in animals are encoded into spike trains by neurons, offering advantages such as sparsity, energy efficiency, and high temporal resolution. This paper presents a signal processing framework that deterministically encodes continuous-time signals into biologically feasible spike trains, and addresses questions regarding the representable signal classes and reconstruction bounds. The framework considers encoding of a signal through spike trains generated by an ensemble of neurons using a convolve-then-threshold mechanism with various convolution kernels. A closed-form solution to the inverse problem, from spike trains to signal reconstruction, is derived in the Hilbert space of shifted kernel functions, ensuring a sparse representation of a generalized Finite Rate of Innovation (FRI) class of signals. Additionally, inspired by real-time processing in biological systems, an efficient iterative version of the optimal reconstruction is formulated that considers only a finite window of past spikes, ensuring robustness of the technique to ill-conditioned encoding; convergence guarantees of the windowed reconstruction to the optimal solution are then provided. Experiments on a large audio dataset demonstrate excellent reconstruction accuracy at spike rates as low as one-fifth of the Nyquist rate, while showing a clear competitive advantage over state-of-the-art sparse coding techniques in the low spike rate regime.
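The convolve-then-threshold encoding mechanism mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's exact model: the Gaussian kernel, threshold value, and hard refractory reset below are all hypothetical stand-ins for the ensemble of kernels and the precise reset dynamics.

```python
import numpy as np

def encode(signal, dt, kernel, threshold, refractory):
    """Convolve-then-threshold encoding (illustrative sketch).

    A spike is emitted whenever the running convolution of the signal
    with the neuron's kernel reaches the threshold; the neuron is then
    silenced for a refractory interval (a simplification of the
    reset dynamics a real model would use).
    """
    conv = np.convolve(signal, kernel)[: len(signal)] * dt
    spikes, last = [], -np.inf
    for i, v in enumerate(conv):
        t = i * dt
        if v >= threshold and t - last >= refractory:
            spikes.append(t)
            last = t
    return np.array(spikes), conv

# usage: encode a low-frequency tone with a single Gaussian kernel
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
signal = np.sin(2 * np.pi * 3 * t)
kern_t = np.arange(0.0, 0.2, dt)
kernel = np.exp(-((kern_t - 0.1) ** 2) / (2 * 0.02**2))
spikes, conv = encode(signal, dt, kernel, threshold=0.01, refractory=0.05)
```

The resulting spike train is sparse by construction: spikes appear only where the filtered signal is supra-threshold, at least one refractory interval apart.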
Abstract:The characterization of neural responses to sensory stimuli is a central problem in neuroscience. Spike-triggered average (STA), an influential technique, has been used to extract optimal linear kernels in a variety of animal subjects. However, when the model assumptions are not met, it can lead to misleading and imprecise results. We introduce a technique, called spike-triggered descent (STD), which can be used alone or in conjunction with STA to increase precision and yield success in scenarios where STA fails. STD works by simulating a model neuron that learns to reproduce the observed spike train. Learning is achieved via parameter optimization that relies on a metric induced on the space of spike trains modeled as a novel inner product space. This technique can precisely learn higher order kernels using limited data. Kernels extracted from a Locusta migratoria tympanal nerve dataset demonstrate the strength of this approach.
Abstract:In many animal sensory pathways, the transformation from external stimuli to spike trains is essentially deterministic. In this context, a new mathematical framework for coding and reconstruction, based on a biologically plausible model of the spiking neuron, is presented. The framework considers encoding of a signal through spike trains generated by an ensemble of neurons via a standard convolve-then-threshold mechanism. Neurons are distinguished by their convolution kernels and threshold values. Reconstruction is posited as a convex optimization minimizing energy. Formal conditions under which perfect reconstruction of the signal from the spike trains is possible are then identified in this setup. Finally, a stochastic gradient descent mechanism is proposed to achieve these conditions. Simulation experiments are presented to demonstrate the strength and efficacy of the framework.
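The energy-minimizing reconstruction has a particularly transparent form in the special case of a single Gaussian kernel: each spike pins the inner product of the signal with a shifted kernel to the neuron's threshold, and the minimum-norm signal satisfying these constraints lies in the span of the shifted kernels. The sketch below assumes that single-kernel case; the kernel width, spike times, and threshold values are hypothetical.

```python
import numpy as np

def reconstruct(spike_times, thresholds, sigma, t_grid):
    """Minimum-energy reconstruction from spikes (illustrative sketch).

    Assuming a single Gaussian kernel K(t) = exp(-t^2 / (2 sigma^2)),
    a spike at t_j encodes the constraint <x, K(. - t_j)> = threshold_j.
    The minimum-norm interpolant is
        x_hat(t) = sum_j a_j K(t - t_j),   with  G a = thresholds,
    where G is the Gram matrix of the shifted kernels, which for
    Gaussians has the closed form below.
    """
    tj = np.asarray(spike_times, dtype=float)
    d = tj[:, None] - tj[None, :]
    G = sigma * np.sqrt(np.pi) * np.exp(-d**2 / (4 * sigma**2))
    a = np.linalg.solve(G, np.asarray(thresholds, dtype=float))
    shifted = np.exp(-(t_grid[:, None] - tj[None, :])**2 / (2 * sigma**2))
    return shifted @ a

# usage: hypothetical spike times and thresholds, one kernel width
t_grid = np.arange(-0.2, 0.8, 1e-4)
x_hat = reconstruct([0.1, 0.3, 0.5], [0.01, 0.02, 0.015], 0.05, t_grid)
```

By construction the reconstructed signal reproduces each encoding constraint exactly (up to numerical precision), which is what "perfect reconstruction" requires on the constraint set.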
Abstract:We address the problem of learning feedback control where the controller is a network constructed solely of deterministic spiking neurons. In contrast to previous investigations that were based on a spike rate model of the neuron, the control signal here is determined by the precise temporal positions of spikes generated by the output neurons of the network. We model the problem formally as a hybrid dynamical system comprised of a closed loop between a plant and a spiking neuron network. We derive a novel synaptic weight update rule via which the spiking neuron network controller learns to hold process variables at desired set points. The controller achieves its learning objective based solely on access to the plant's process variables and their derivatives with respect to changing control signals; in particular, it requires no internal model of the plant. We demonstrate the efficacy of the rule by applying it to the classical control problem of the cart-pole (inverted pendulum) and a model of fish locomotion. Experiments show that the proposed controller has a stability region comparable to a traditional PID controller, its trajectories differ qualitatively from those of a PID controller, and in many instances the controller achieves its objective using very sparse spike train outputs.
Abstract:Despite recent advances, estimating optical flow remains a challenging problem in the presence of illumination change, large occlusions or fast movement. In this paper, we propose a novel optical flow estimation framework which can provide accurate dense correspondence and occlusion localization through a multi-scale generalized plane matching approach. In our method, we regard the scene as a collection of planes at multiple scales, and for each such plane, compensate motion in consensus to improve match quality. We estimate the planar distortion of square patches using a robust plane model detection method and iteratively apply a plane matching scheme within a multi-scale framework. During the flow estimation process, our enhanced plane matching method also clearly localizes the occluded regions. In experiments on the MPI-Sintel dataset, our method robustly estimated optical flow from given noisy correspondences, and also revealed the occluded regions accurately. Compared to other state-of-the-art optical flow methods, our method shows accurate occlusion localization, comparable optical flow quality, and better thin object detection.
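The robust plane model detection step can be illustrated with a generic RANSAC fit of an affine motion model to a patch's correspondences. This is a standard sketch of robust model detection, not the paper's exact scheme; the function name, iteration count, and inlier tolerance are all illustrative.

```python
import numpy as np

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """Fit an affine motion model to matched points, rejecting outliers.

    src, dst: (N, 2) arrays of matched coordinates within a patch.
    Returns the 2x3 affine matrix and a boolean inlier mask.
    Illustrative RANSAC sketch; parameters are not from the paper.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])          # homogeneous coords
    best_inl = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)  # minimal sample
        try:
            A = np.linalg.solve(X[idx], dst[idx])   # (3, 2) model
        except np.linalg.LinAlgError:
            continue                                # degenerate sample
        err = np.linalg.norm(X @ A - dst, axis=1)
        inl = err < tol
        if inl.sum() > best_inl.sum():
            best_inl = inl
    # refit on the consensus set
    A, *_ = np.linalg.lstsq(X[best_inl], dst[best_inl], rcond=None)
    return A.T, best_inl

# usage: recover patch motion despite gross outliers (synthetic data)
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (50, 2))
A_true = np.array([[1.01, 0.02, 0.5], [-0.02, 0.99, 1.0]])
dst = np.hstack([src, np.ones((50, 1))]) @ A_true.T
dst[:10] += 30.0                                    # corrupt ten matches
A_est, inliers = ransac_affine(src, dst)
```

Points inconsistent with the dominant planar motion end up outside the consensus set, which is the same signal the paper exploits to localize occluded regions.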
Abstract:We derive a synaptic weight update rule for learning temporally precise spike train to spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models of spiking. The derivation is founded on two innovations. First, an error functional is proposed that compares the spike train emitted by the output neuron of the network to the desired spike train by way of their putative impact on a virtual postsynaptic neuron. This formulation sidesteps the need for spike alignment and leads to closed form solutions for all quantities of interest. Second, virtual assignment of weights to spikes rather than synapses enables a perturbation analysis of individual spike times and synaptic weights of the output as well as all intermediate neurons in the network, which yields the gradients of the error functional with respect to the said entities. Learning proceeds via a gradient descent mechanism that leverages these quantities. Simulation experiments demonstrate the efficacy of the proposed learning framework. The experiments also highlight asymmetries between synapses on excitatory and inhibitory neurons.
Abstract:Spike Timing Dependent Plasticity (STDP) is a Hebbian-like synaptic learning rule. STDP has strong experimental support and depends on precise input and output spike timings. In this paper we show that, under a biologically plausible spiking regime, slight variability in spike timing leads to drastically different evolution of synaptic weights when their dynamics are governed by the additive STDP rule.
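The additive STDP rule for a single pre/post spike pair can be sketched as follows; the amplitude and time-constant values are illustrative, not taken from the paper. The example shows the sensitivity the abstract refers to: a few milliseconds of jitter flips potentiation into depression.

```python
import numpy as np

def additive_stdp(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                  tau_plus=0.02, tau_minus=0.02, w_max=1.0):
    """Additive (weight-independent) STDP for one pre/post spike pair.

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses
    (simultaneous spikes are treated as depression here).  The update
    magnitude decays exponentially with |dt| and, being additive, does
    not depend on the current weight.  Parameter values are illustrative.
    """
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_plus)
    else:
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))

# a 10 ms swap in spike order reverses the direction of the update
w0 = 0.5
w_pot = additive_stdp(w0, t_pre=0.100, t_post=0.105)  # pre leads: w grows
w_dep = additive_stdp(w0, t_pre=0.105, t_post=0.100)  # post leads: w shrinks
```

Because the additive rule lacks weight dependence, repeated updates of this kind can drive weights toward the bounds, which is why small timing variability can produce drastically different weight trajectories.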
Abstract:We prove a novel result wherein the density function of the gradients---corresponding to the density function of the derivatives in one dimension---of a thrice differentiable function S (obtained via a random variable transformation of a uniformly distributed random variable) defined on a closed, bounded interval \Omega \subset R is accurately approximated by the normalized power spectrum of \phi = exp(iS/\tau) as the free parameter \tau \to 0. The result is shown using the well-known stationary phase approximation and standard integration techniques, and requires a proper ordering of limits. Experimental results provide anecdotal visual evidence corroborating the result.
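The result admits a quick numerical check. For S(x) = sin(x) on [0, 1], the derivative S'(x) = cos(x) of a uniform sample has density 1/|sin(x)|, which diverges as the derivative value approaches 1; the power spectrum of \phi = exp(iS/\tau) should therefore peak at frequencies f with 2\pi\tau f near 1. The choices of S, \tau, and the sample count below are illustrative.

```python
import numpy as np

# Sample phi = exp(i S / tau) for S(x) = sin(x) on [0, 1) and locate
# the peak of its power spectrum.  The frequency axis is converted to
# derivative values via v = 2*pi*tau*f, so the spectrum should
# concentrate on [cos(1), 1], peaking near v = 1 where the density
# of S'(x) = cos(x) diverges.  tau and n are illustrative.
tau, n = 1e-3, 2**16
x = np.arange(n) / n                        # uniform samples of [0, 1)
phi = np.exp(1j * np.sin(x) / tau)
power = np.abs(np.fft.fft(phi))**2          # normalization is irrelevant
freqs = np.fft.fftfreq(n, d=1.0 / n)        # cycles per unit length
pos = freqs > 0
f_peak = freqs[pos][np.argmax(power[pos])]
deriv_at_peak = 2 * np.pi * tau * f_peak    # should be close to max S' = 1
```

As \tau shrinks, the spectral peak sharpens around the divergence of the derivative density, consistent with the stationary phase argument.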