Abstract: In target tracking and sensor fusion contexts it is not unusual to deal with a large number of Gaussian densities that encode the available information (multiple hypotheses), as in applications where many sensors, affected by clutter or multimodal noise, take measurements on the same scene. In such cases reduction procedures must be implemented to limit the computational load. In some situations all available information must be fused into a single hypothesis, and this is usually done by computing the barycenter of the set. However, this computation depends strongly on the chosen dissimilarity measure, and in most cases it must be performed numerically, since the barycenter admits a closed form only in a few special cases. Constraints such as the requirement that the covariance be symmetric and positive definite make the numerical computation of the barycenter of a set of Gaussians difficult. In this work, Fixed-Point Iterations (FPI) are presented for the computation of barycenters according to several dissimilarity measures, providing a useful toolbox for the fusion/reduction of Gaussian sets in applications where specific dissimilarity measures are required.
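As a concrete illustration of this kind of fixed-point iteration, the well-known update for the barycenter of Gaussians under the 2-Wasserstein distance can be sketched as follows. This is only one of the dissimilarity measures the abstract refers to; the function name, initialization, and stopping rule below are illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein_barycenter(means, covs, weights, n_iter=100, tol=1e-10):
    """Barycenter of weighted Gaussians under the 2-Wasserstein distance.

    The barycenter mean is the weighted average of the means; the barycenter
    covariance S is the fixed point of
        S = sum_i w_i (S^{1/2} Sigma_i S^{1/2})^{1/2}
    computed here by repeated substitution, starting from the arithmetic
    mean of the covariances (an illustrative initialization)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu = sum(wi * np.asarray(m, float) for wi, m in zip(w, means))
    S = sum(wi * np.asarray(C, float) for wi, C in zip(w, covs))
    for _ in range(n_iter):
        R = np.real(sqrtm(S))              # S^{1/2}; sqrtm may carry tiny imaginary parts
        R_inv = np.linalg.inv(R)
        T = sum(wi * np.real(sqrtm(R @ np.asarray(C, float) @ R))
                for wi, C in zip(w, covs))
        S_next = R_inv @ T @ T @ R_inv     # enforces the fixed-point map
        S_next = 0.5 * (S_next + S_next.T)  # keep the iterate exactly symmetric
        if np.linalg.norm(S_next - S) < tol:
            S = S_next
            break
        S = S_next
    return mu, S
```

Note how the symmetry/positive-definiteness constraint mentioned above is handled implicitly: each iterate is a congruence of a symmetric positive semidefinite sum, with an explicit symmetrization to absorb floating-point drift.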
Abstract: An approximate mean square error (MSE) expression for the performance analysis of implicitly defined estimators of non-random parameters is proposed. An implicitly defined estimator (IDE) declares the minimizer/maximizer of a selected cost/reward function as the parameter estimate. The maximum likelihood (ML) and least squares estimators are among the well-known examples of this class. In this paper, an exact MSE expression for implicitly defined estimators with a symmetric and unimodal objective function is given. It is shown that the expression reduces to the Cramer-Rao lower bound (CRLB) and the misspecified CRLB in the large sample size regime for ML and misspecified ML estimation, respectively. The expression is shown to yield the Ziv-Zakai bound (without the valley-filling function) when it is used in a Bayesian setting, that is, when an a priori distribution is assigned to the unknown parameter. In addition, an extension of the suggested expression to the case of nuisance parameters is studied, and some approximations are given to ease the computations for this case. Numerical results indicate that the suggested MSE expression not only predicts the estimator performance in the asymptotic region, but is also applicable to threshold region analysis, even for IDEs whose objective functions do not satisfy the symmetry and unimodality assumptions. Advantages of the suggested MSE expression are its conceptual simplicity and its relatively straightforward numerical calculation, owing to the reduction of the estimation problem to a binary hypothesis testing problem, similar to the use of Ziv-Zakai bounds in random parameter estimation problems.
Abstract: In this paper, a Bayesian inference technique based on a Taylor series approximation of the logarithm of the likelihood function is presented. The proposed approximation is devised for the case where the prior distribution belongs to the exponential family of distributions. The logarithm of the likelihood function is linearized with respect to the sufficient statistic of the prior distribution in the exponential family, so that the posterior takes the same exponential family form as the prior. Similarities between the proposed method and the extended Kalman filter for nonlinear filtering are illustrated. Furthermore, an extended target measurement update is derived for target models in which the target extent is represented by a random matrix with an inverse Wishart distribution. The approximate update covers the important case where the spread of measurements is due to the target extent as well as to the measurement noise in the sensor.
Abstract: We propose a greedy mixture reduction algorithm which is capable of pruning mixture components as well as merging them, based on the Kullback-Leibler divergence (KLD). The algorithm is distinct from the well-known Runnalls' KLD-based method since it is not restricted to merging operations. The capability of pruning (in addition to merging) gives the algorithm the ability to preserve the peaks of the original mixture during the reduction. Analytical approximations are derived to circumvent the computational intractability of the KLD, resulting in a computationally efficient method. The proposed algorithm is compared with Runnalls' and Williams' methods in two numerical examples, using both simulated and real-world data. The results indicate that the performance and computational complexity of the proposed approach make it an efficient alternative to existing mixture reduction methods.
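For context on the baseline referenced above: Runnalls' method repeatedly merges the pair of components with the smallest KLD-based cost, where the merge preserves the first two moments of the pair and the cost is an upper bound on the KLD increase caused by that merge. A minimal sketch of those two ingredients (function names are illustrative, and this shows the baseline merge step only, not the proposed pruning-capable algorithm):

```python
import numpy as np

def merge_moment_match(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components
    (w_i, m_i, P_i): the result matches the pair's total weight,
    mean, and covariance, including the spread-of-means term."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d = (m1 - m2).reshape(-1, 1)
    P = (w1 * P1 + w2 * P2) / w + (w1 * w2 / w**2) * (d @ d.T)
    return w, m, P

def runnalls_cost(w1, m1, P1, w2, m2, P2):
    """Runnalls' upper bound on the KLD between the mixture before and
    after merging components i and j:
      0.5 * [ (w1+w2) ln|P_merged| - w1 ln|P1| - w2 ln|P2| ]."""
    _, _, P = merge_moment_match(w1, m1, P1, w2, m2, P2)
    return 0.5 * ((w1 + w2) * np.log(np.linalg.det(P))
                  - w1 * np.log(np.linalg.det(P1))
                  - w2 * np.log(np.linalg.det(P2)))
```

A greedy reduction loop would evaluate `runnalls_cost` over all component pairs, merge the cheapest pair, and repeat until the desired mixture size is reached; the abstract's contribution is to enlarge this action set with pruning.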