Abstract: We consider the problem of estimating self-exciting generalized linear models from limited binary observations, where the history of the process serves as the covariate. We analyze the performance of two classes of estimators, namely the $\ell_1$-regularized maximum likelihood and greedy estimators, for a canonical self-exciting process and characterize the sampling tradeoffs required for stable recovery in the non-asymptotic regime. Our results extend those of compressed sensing for linear and generalized linear models with i.i.d. covariates to settings with highly inter-dependent covariates. We further provide simulation studies as well as an application to real spiking data from the mouse lateral geniculate nucleus and the ferret retinal ganglion cells, both of which agree with our theoretical predictions.
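To make the estimation setup concrete, the following is a minimal sketch, not the authors' code, of the $\ell_1$-regularized maximum likelihood estimator for a self-exciting Bernoulli GLM with a logistic link, where the covariate at each time is the recent spiking history. The history length p, step size, penalty weight lam, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_self_exciting_glm(y, p, lam=0.1, step=0.05, n_iter=2000):
    """Estimate (mu, theta) by proximal gradient descent on the
    l1-penalized negative Bernoulli log-likelihood.
    y : binary spike train (numpy array of length n); p : history length."""
    n = len(y)
    # History covariates: x_t = (y_{t-1}, ..., y_{t-p}) for t = p, ..., n-1.
    X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    yt = y[p:]
    mu, theta = 0.0, np.zeros(p)
    for _ in range(n_iter):
        eta = mu + X @ theta                # linear predictor
        prob = 1.0 / (1.0 + np.exp(-eta))   # logistic link
        resid = prob - yt
        mu -= step * resid.mean()           # baseline term: unpenalized
        grad = X.T @ resid / len(yt)        # gradient of the neg. log-likelihood
        theta = soft_threshold(theta - step * grad, step * lam)
    return mu, theta
```

The soft-thresholding step is what promotes sparsity in the history dependence; a greedy alternative of the kind analyzed above would instead add one history coefficient per iteration by largest gradient magnitude.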
Abstract: We consider the problem of estimating the parameters of a linear univariate autoregressive (AR) model with sub-Gaussian innovations from a limited sequence of consecutive observations. Assuming that the parameters are compressible, we analyze the performance of the $\ell_1$-regularized least squares estimator as well as a greedy estimator of the parameters, and characterize the sampling trade-offs required for stable recovery in the non-asymptotic regime. In particular, we show that for a fixed sparsity level, stable recovery of the AR parameters is possible when the number of samples scales sub-linearly with the AR order. Our results improve over existing sampling complexity requirements for AR estimation using the LASSO when the sparsity level scales faster than the square root of the model order. We further derive sufficient conditions on the sparsity level that guarantee the minimax optimality of the $\ell_1$-regularized least squares estimate. Applying these techniques to simulated data as well as real-world datasets of crude oil prices and traffic speeds confirms our predicted theoretical performance gains in terms of estimation accuracy and model selection.
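The following is a minimal sketch of the $\ell_1$-regularized least squares (LASSO) estimator applied to a single consecutive AR(p) sample path, solved here by ISTA for illustration; the penalty lam and iteration count are assumptions, and the analysis above concerns the estimator itself rather than any particular solver.

```python
import numpy as np

def lasso_ar(x, p, lam=0.1, n_iter=5000):
    """Estimate sparse AR(p) coefficients theta in
    x_t = sum_k theta_k x_{t-k} + w_t by ISTA on
    (1/2n)||y - X theta||^2 + lam * ||theta||_1."""
    n = len(x) - p
    # Lagged covariates: row t of X is (x_{t-1}, ..., x_{t-p}).
    X = np.column_stack([x[p - k - 1:p - k - 1 + n] for k in range(p)])
    y = x[p:]
    step = n / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of the gradient
    theta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y) / n    # gradient of the quadratic loss
        z = theta - step * grad
        # Soft-thresholding: proximal step for the l1 penalty.
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return theta
```

Note that, unlike the i.i.d. design of standard compressed sensing, the rows of X here are overlapping windows of the same sample path, which is exactly the dependence the non-asymptotic analysis must handle.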
Abstract: In this paper, we consider linear state-space models with compressible innovations and convergent transition matrices in order to model spatiotemporally sparse transient events. We perform parameter and state estimation using a dynamic compressed sensing framework and develop an efficient solution consisting of two nested Expectation-Maximization (EM) algorithms. Under suitable sparsity assumptions on the innovations, we prove recovery guarantees and derive confidence bounds for the state estimates. We provide simulation studies as well as an application to spike deconvolution from calcium imaging data, which verify our theoretical results and show significant improvement over existing algorithms.
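As a concrete illustration of the dynamic compressed sensing viewpoint, the following sketches the inner state-estimation step only: MAP estimation of sparse innovations $w_t$ in $x_t = A x_{t-1} + w_t$, $y_t = C x_t + v_t$ by proximal gradient descent. This is a simplified stand-in, not the paper's nested EM solution; A, C, lam, step, and the iteration count are assumptions, with A taken convergent (spectral radius below one) as in the model class above.

```python
import numpy as np

def map_sparse_innovations(y, A, C, lam=0.5, step=0.01, n_iter=3000):
    """Minimize (1/2) sum_t ||y_t - C x_t||^2 + lam * sum_t ||w_t||_1
    over the innovations w, where x_t = A x_{t-1} + w_t and x_0 = w_0.
    y : (T, m) observations; A : (d, d); C : (m, d). Returns (x, w)."""
    T = y.shape[0]
    d = A.shape[0]

    def forward(w):
        # States implied by the current innovation sequence.
        x = np.zeros((T, d))
        x[0] = w[0]
        for t in range(1, T):
            x[t] = A @ x[t - 1] + w[t]
        return x

    w = np.zeros((T, d))
    for _ in range(n_iter):
        x = forward(w)
        # Backward (adjoint) recursion: gradient of the quadratic loss in w,
        # r_t = C^T (C x_t - y_t) + A^T r_{t+1}, with grad w.r.t. w_t = r_t.
        grad = np.zeros((T, d))
        r = np.zeros(d)
        for t in range(T - 1, -1, -1):
            r = C.T @ (C @ x[t] - y[t]) + A.T @ r
            grad[t] = r
        # Proximal step: soft-thresholding enforces sparse innovations.
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return forward(w), w
```

In the full method described above, a step of this kind would sit inside an outer EM loop that also re-estimates the model parameters; the sketch fixes A and C for brevity.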
Abstract: We consider the problem of estimating the sparse time-varying parameter vectors of a point process model in an online fashion, where the observations and inputs respectively consist of binary and continuous time series. We construct a novel objective function by incorporating a forgetting-factor mechanism into the point process log-likelihood to enforce adaptivity, and employ $\ell_1$-regularization to capture the sparsity. We provide a rigorous analysis of the maximizers of the objective function, which extends the guarantees of compressed sensing to our setting. We construct two recursive filters for online estimation of the parameter vectors based on proximal optimization techniques, as well as a novel filter for recursive computation of statistical confidence regions. Simulation studies reveal that our algorithms outperform several existing point process filters in terms of trackability, goodness-of-fit, and mean square error. Finally, we apply our filtering algorithms to experimentally recorded spiking data from the ferret primary auditory cortex during attentive behavior in a click-rate discrimination task. Our analysis provides new insights into the time course of the spectrotemporal receptive field plasticity of auditory neurons.
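To fix ideas, the following is a minimal sketch of an adaptive $\ell_1$-regularized point process filter of the flavor described above: binary observations with a logistic conditional intensity, a forgetting factor beta that discounts past log-likelihood contributions, and one proximal gradient step per sample. This is a simplified stand-in rather than either of the paper's two filters, and beta, lam, and step are illustrative assumptions.

```python
import numpy as np

def adaptive_point_process_filter(X, n, beta=0.98, lam=0.05, step=0.1):
    """X : (T, p) continuous input covariates; n : (T,) binary spike
    indicators. Returns the (T, p) trajectory of sparse parameter
    estimates theta_t tracked online."""
    T, p = X.shape
    theta = np.zeros(p)
    g = np.zeros(p)                 # forgetting-weighted gradient accumulator
    path = np.zeros((T, p))
    for t in range(T):
        rate = 1.0 / (1.0 + np.exp(-X[t] @ theta))   # logistic conditional intensity
        # Exponentially discounted accumulation of per-sample
        # negative log-likelihood gradients (the forgetting mechanism).
        g = beta * g + (rate - n[t]) * X[t]
        # Proximal (soft-thresholding) step enforces sparsity of theta.
        z = theta - step * (1.0 - beta) * g
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        path[t] = theta
    return path
```

The factor (1 - beta) normalizes the effective window length 1/(1 - beta) of the discounted sum, so that the step size plays the same role as in a batch proximal gradient iteration.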