Abstract:In many applications, knowledge of the sound pressure transfer to the eardrum is important. The transfer is highly influenced by the shape of the ear canal and its acoustic properties, such as the acoustic impedance at the eardrum. Invasive procedures to measure the sound pressure at the eardrum are usually elaborate or costly. In this work, we propose a numerical method to estimate the transfer impedance at the eardrum, given only input impedance measurements at the ear canal entrance, using one-dimensional first-order finite elements and the Nelder-Mead optimization algorithm. Estimates of the area function of the ear canal and of the acoustic impedance at the eardrum are obtained. Results are validated through numerical simulations on ten different ear canal geometries and three different acoustic impedances at the eardrum, using synthetically generated data from three-dimensional finite element simulations.
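To illustrate the fitting step, here is a minimal sketch of estimating an area function and a (purely resistive) eardrum impedance from input impedance data with SciPy's Nelder-Mead optimizer. It replaces the paper's one-dimensional finite elements by a segmented lossless transmission-line model; the parameter names, the residual, and the starting point are assumptions, not the paper's own choices.

```python
import numpy as np
from scipy.optimize import minimize

RHO, C = 1.2, 343.0  # air density [kg/m^3] and speed of sound [m/s]

def input_impedance(areas, seg_len, z_ear, freqs):
    """Input impedance of a duct modelled as uniform cylindrical segments
    (lossless transmission lines), terminated by the eardrum impedance."""
    k = 2 * np.pi * freqs / C                    # wavenumbers
    z = np.full(len(freqs), z_ear, dtype=complex)
    for area in reversed(areas):                 # walk from eardrum to entrance
        z0 = RHO * C / area                      # characteristic impedance
        t = np.tan(k * seg_len)
        z = z0 * (z + 1j * z0 * t) / (z0 + 1j * z * t)
    return z

def misfit(params, freqs, z_meas, n_seg, canal_len):
    *log_areas, log_zr = params                  # log-areas + real eardrum load
    z_model = input_impedance(np.exp(log_areas), canal_len / n_seg,
                              np.exp(log_zr), freqs)
    return np.mean(np.abs(np.log(z_model / z_meas)) ** 2)

# freqs, z_meas: measured frequencies [Hz] and complex input impedances.
# Hypothetical starting point: 8 segments of 44 mm^2 and a 1e8 Pa.s/m^3 load.
# x0 = np.append(np.log(np.full(8, 4.4e-5)), np.log(1e8))
# res = minimize(misfit, x0, args=(freqs, z_meas, 8, 0.025),
#                method="Nelder-Mead")
```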
Abstract:In the framework of prediction with expert advice, we consider a recently introduced kind of regret bound: bounds that depend on the effective rather than the nominal number of experts. In contrast to the NormalHedge bound, which mainly depends on the effective number of experts but also weakly depends on the nominal one, we obtain a bound that does not contain the nominal number of experts at all. We use the defensive forecasting method and introduce an application of defensive forecasting to multivalued supermartingales.
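For reference, the NormalHedge weighting that the bound above is compared with can be sketched as follows (after Chaudhuri, Freund and Hsu); the paper's own algorithm is based on defensive forecasting instead, and the fixed bisection budget here is an arbitrary choice.

```python
import numpy as np

def normalhedge_weights(regrets):
    """Compute one round of NormalHedge weights from the cumulative
    regrets to the experts; losses are assumed bounded in [0, 1]."""
    r = np.maximum(regrets, 0.0)
    if not r.any():                              # no positive regret: play uniform
        return np.full(len(r), 1.0 / len(r))
    # Bisect for the scale c > 0 with mean_i exp(r_i^2 / (2c)) = e.
    lo, hi = 1e-12, max(float(r.max()) ** 2, 1.0)
    for _ in range(100):
        c = 0.5 * (lo + hi)
        if np.mean(np.exp(r ** 2 / (2 * c))) > np.e:
            lo = c                               # c too small
        else:
            hi = c
    w = (r / c) * np.exp(r ** 2 / (2 * c))
    return w / w.sum()
```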
Abstract:The note presents a modified proof of a loss bound for the exponentially weighted average forecaster with time-varying potential. The regret term of the algorithm is upper-bounded by \sqrt{n \ln N} (uniformly in n), where N is the number of experts and n is the number of steps.
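A minimal sketch of the forecaster in question, with the textbook time-varying learning rate eta_t = sqrt(8 ln N / t) (the constant in the note's analysis may differ); the square loss is only an example of a convex, bounded loss.

```python
import numpy as np

def ewa(expert_preds, outcomes, loss=lambda p, y: (p - y) ** 2):
    """Exponentially weighted average forecaster with time-varying
    learning rate. expert_preds has shape (n, N)."""
    n, N = expert_preds.shape
    cum_loss = np.zeros(N)                       # experts' cumulative losses
    preds = np.empty(n)
    for t in range(n):
        eta = np.sqrt(8 * np.log(N) / (t + 1))
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for stability
        preds[t] = (w / w.sum()) @ expert_preds[t]      # weighted average
        cum_loss += loss(expert_preds[t], outcomes[t])
    return preds
```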
Abstract:We study prediction with expert advice in the setting where the losses are accumulated with some discounting, so that the impact of old losses may gradually vanish. We generalize the Aggregating Algorithm and the Aggregating Algorithm for Regression to this case, propose a suitable new variant of the exponential weights algorithm, and prove the respective loss bounds.
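As an illustration, an exponential weights variant over discounted losses can be sketched as below; the constant discount factor alpha is an assumption (the setting allows step-dependent discounting), and eta is left as a free parameter.

```python
import numpy as np

def discounted_hedge(expert_losses, alpha=0.99, eta=1.0):
    """Exponential weights over discounted cumulative losses: old losses
    are damped by alpha every round, so their impact gradually vanishes.
    expert_losses has shape (n, N)."""
    n, N = expert_losses.shape
    disc_loss = np.zeros(N)
    weights = np.empty((n, N))
    for t in range(n):
        w = np.exp(-eta * (disc_loss - disc_loss.min()))
        weights[t] = w / w.sum()
        disc_loss = alpha * disc_loss + expert_losses[t]  # discounting step
    return weights
```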
Abstract:We apply the method of defensive forecasting, based on the use of game-theoretic supermartingales, to prediction with expert advice. In the traditional setting of a countable number of experts and a finite number of outcomes, the Defensive Forecasting Algorithm is very close to the well-known Aggregating Algorithm: not only the performance guarantees but also the predictions are the same for these two methods of fundamentally different nature. We also discuss a new setting where the experts can give advice conditional on the learner's future decision. Both algorithms can be adapted to the new setting and give the same performance guarantees as in the traditional setting. Finally, we outline an application of defensive forecasting to a setting with several loss functions.
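For the special case of binary outcomes and the (mixable) log loss, the Aggregating Algorithm's prediction is exactly the weighted mixture of the experts' probabilities and its weight update is Bayes' rule, which makes the coincidence of predictions easy to check numerically; a minimal sketch:

```python
import numpy as np

def aggregating_algorithm_logloss(expert_probs, outcomes):
    """Aggregating Algorithm for binary log loss with eta = 1.
    expert_probs[t, i] is expert i's probability of outcome 1 at step t."""
    n, N = expert_probs.shape
    w = np.full(N, 1.0 / N)                      # uniform prior over experts
    preds = np.empty(n)
    for t in range(n):
        preds[t] = w @ expert_probs[t]           # mixture prediction
        like = np.where(outcomes[t] == 1,
                        expert_probs[t], 1.0 - expert_probs[t])
        w *= like                                # Bayes / AA weight update
        w /= w.sum()
    return preds
```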
Abstract:The paper deals with on-line regression settings with signals belonging to a Banach lattice. Our algorithms work in a semi-online setting where all the inputs are known in advance and the outcomes are revealed step by step. We apply the Aggregating Algorithm to construct a prediction method whose cumulative loss over all the input vectors is comparable with the cumulative loss of any linear functional on the Banach lattice. As a by-product, we obtain an algorithm that takes signals from an arbitrary domain; its cumulative loss is comparable with the cumulative loss of any predictor function from Besov and Triebel-Lizorkin spaces. We describe several applications of our setting.
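The classical finite-dimensional instance of the Aggregating Algorithm for Regression (the Vovk-Azoury-Warmuth forecaster) fits the semi-online setting above, since the prediction uses the current input; the Banach-lattice construction in the paper is more general than this sketch.

```python
import numpy as np

def aar_predict(X, y_past, a=1.0):
    """Aggregating Algorithm for Regression: ridge-like prediction that
    also regularises with the current input. X holds the inputs x_1..x_T
    as rows (the last row is the current one); y_past holds y_1..y_{T-1}."""
    X = np.asarray(X, dtype=float)
    A = a * np.eye(X.shape[1]) + X.T @ X         # includes the current x_T
    b = X[:-1].T @ np.asarray(y_past, dtype=float)
    return b @ np.linalg.solve(A, X[-1])
```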
Abstract:We introduce a new protocol for prediction with expert advice in which each expert evaluates the learner's performance and his own using a loss function that may change over time and may be different from the loss functions used by the other experts. The learner's goal is to perform better than, or not much worse than, each expert, as evaluated by that expert, for all experts simultaneously. If the loss functions used by the experts are all proper scoring rules and all mixable, we show that the defensive forecasting algorithm enjoys the same performance guarantee as that attainable by the Aggregating Algorithm in the standard setting, which is known to be optimal. This result is also applied to the case of "specialist" (or "sleeping") experts. In this case, the defensive forecasting algorithm reduces to a simple modification of the Aggregating Algorithm.
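The "specialists" modification mentioned above can be sketched as a single weight update in which asleep experts are charged the learner's own loss, so their weights keep pace with the learner's; the linear (mixture) loss used for the learner is a simplifying assumption, not the paper's construction.

```python
import numpy as np

def sleeping_hedge_step(w, awake, expert_losses, eta=1.0):
    """One round of exponential weights with sleeping experts: predict
    with the awake experts only, then charge each asleep expert the
    learner's own loss so its relative weight is unchanged."""
    p = np.where(awake, w, 0.0)
    p /= p.sum()                                 # distribution over awake experts
    learner_loss = p @ np.where(awake, expert_losses, 0.0)
    charged = np.where(awake, expert_losses, learner_loss)
    w = w * np.exp(-eta * charged)
    return w / w.sum(), p
```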
Abstract:We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution μ by the algorithmic complexity of μ. Here we assume that we are at a time t>1 and have already observed x=x_1...x_t. We bound the future prediction performance on x_{t+1}x_{t+2}... by a new variant of the algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition, in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.