Abstract: This paper focuses on the optimal unimodal transformation of the score outputs of a univariate learning model under linear loss functions. We demonstrate that the optimal mapping between score values and the target region is a rectangular function. To produce this optimal rectangular fit for the observed samples, we propose a sequential approach that can update its estimate with each incoming sample. Our approach has logarithmic time complexity per iteration and is optimally efficient.
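As a minimal illustrative sketch (not the paper's sequential, logarithmic-time method), a batch version of the rectangular fit can be cast as a maximum-subarray scan over the sorted scores; the function names, the binary labels and the per-sample loss weighting below are assumptions made for illustration only.

# Batch sketch of a rectangular fit: predict 1 on a score interval [a, b].
# Under a linear (per-sample weighted) loss this reduces to finding the
# contiguous run of sorted scores with maximal net gain (Kadane's scan).
import numpy as np

def rectangular_fit(scores, labels, pos_gain=1.0, neg_cost=1.0):
    """Return (a, b): the score interval on which predicting 1 is optimal."""
    order = np.argsort(scores)
    s, y = np.asarray(scores, dtype=float)[order], np.asarray(labels)[order]
    # Per-sample gain of including that score inside the interval.
    gain = np.where(y == 1, pos_gain, -neg_cost)
    best, cur, start, best_range = 0.0, 0.0, 0, (0, -1)
    for i, g in enumerate(gain):
        if cur <= 0:
            cur, start = g, i
        else:
            cur += g
        if cur > best:
            best, best_range = cur, (start, i)
    lo, hi = best_range
    return (s[lo], s[hi]) if hi >= lo else (None, None)

a, b = rectangular_fit([0.1, 0.4, 0.5, 0.9], [0, 1, 1, 0])  # -> (0.4, 0.5)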
Abstract: We investigate the nonlinear regression problem under L2 (squared) loss. Traditional nonlinear regression models often result in non-convex optimization problems with respect to the parameter set. We show that a convex nonlinear regression model exists for the traditional least squares problem, which is a promising step towards designing more complex systems with easier-to-train models.
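One concrete route to such convexity (a general principle, not necessarily the specific model proposed in the paper) is a model that is nonlinear in the input but linear in its parameters, so the squared-loss objective stays convex in the parameters; the basis functions below are an arbitrary illustrative choice.

# Nonlinear-in-x, linear-in-w regression: the L2 objective is convex in w,
# so the global minimizer is found by ordinary least squares.
import numpy as np

def features(x):
    # Fixed nonlinear basis; the particular choice is illustrative.
    return np.stack([np.ones_like(x), x, np.sin(x), np.exp(-x**2)], axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

Phi = features(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # convex problem, closed-form fit
y_hat = Phi @ w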
Abstract: This study presents an effective global optimization technique designed for multivariate functions that are H\"older continuous. Unlike traditional methods that construct lower bounding proxy functions, this algorithm employs a predetermined query creation rule that makes it computationally superior. The algorithm's performance is assessed using the average or cumulative regret, which also implies a bound for the simple regret and reflects the overall effectiveness of the approach. The results show that, with appropriate parameters, the algorithm attains an average regret bound of $O(T^{-\frac{\alpha}{n}})$ for optimizing a H\"older continuous target function with H\"older exponent $\alpha$ in an $n$-dimensional space within a given time horizon $T$. We demonstrate that this bound is minimax optimal.
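The simplest example of a predetermined query creation rule is a uniform grid, which already attains the $O(T^{-\frac{\alpha}{n}})$ rate for $\alpha$-H\"older functions; the sketch below illustrates that rule only and is not the paper's algorithm, whose query rule may differ.

# A predetermined query rule: a uniform grid over [0,1]^n, queried in order.
import itertools
import numpy as np

def grid_queries(n, T):
    """Yield up to T query points on a uniform grid in [0, 1]^n."""
    k = max(1, int(T ** (1.0 / n)))  # points per dimension
    axes = [np.linspace(0.0, 1.0, k) for _ in range(n)]
    for i, point in enumerate(itertools.product(*axes)):
        if i >= T:
            return
        yield np.array(point)

f = lambda x: -np.sum((x - 0.3) ** 2) ** 0.4   # a toy Hölder-continuous target
queries = list(grid_queries(n=2, T=100))
best = max(queries, key=f)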
Abstract: Our study focuses on determining the optimal weight window for a weighted moving average smoother under squared loss. We show that there exists an optimal weight window that is symmetric around its center. We then study the class of tapered weight windows, whose weights decrease away from the center. We formulate the corresponding least squares problem as a quadratic program and, finally, as a projection of the origin onto a convex polytope. Additionally, we provide analytical solutions for the optimal window when certain conditions on the input data are met.
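A minimal sketch of the quadratic program above, under the assumption of nonnegative window weights summing to one (a convex polytope): projected gradient descent with Euclidean projection onto the simplex. The constraint set, step size and symmetrization step are illustrative choices, not the paper's exact formulation.

# Fit a symmetric, nonnegative weight window of odd length 2k+1 by projected
# gradient on the least-squares smoothing objective.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def fit_window(x, y, k=5, steps=500, lr=1e-3):
    """Least-squares window for smoothing x toward target y (lr illustrative)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    T = len(x)
    # Design matrix: column j holds x shifted so row t sees the window at t.
    A = np.stack([np.roll(x, j - k) for j in range(2 * k + 1)], axis=1)
    A, y = A[k:T - k], y[k:T - k]          # drop wrap-around boundary rows
    w = np.full(2 * k + 1, 1.0 / (2 * k + 1))
    for _ in range(steps):
        grad = A.T @ (A @ w - y)
        w = project_simplex(w - lr * grad)
        w = 0.5 * (w + w[::-1])            # keep the window symmetric
    return w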
Abstract: Our research deals with the optimization version of the set partition problem, where the objective is to minimize the absolute difference between the sums of the two disjoint partitions. Although this problem is known to be NP-hard and requires exponential time to solve in general, we propose a less demanding version whose goal is to find a locally optimal solution. In our approach, we consider local optimality with respect to any movement of at most two elements. To this end, we develop an algorithm that can generate a locally optimal solution in at most $O(N^2)$ time and $O(N)$ space. Our algorithm can handle arbitrary input precisions and does not require positive or integer inputs; hence, it can be applied in various problem scenarios with ease.
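The local optimality criterion can be illustrated with a naive local search that applies improving one-element moves and two-element swaps until none exists (the full criterion covers any relocation of up to two elements). This sweep only illustrates the notion; the paper's algorithm reaches such a local optimum within the stated $O(N^2)$ time, which this simple version does not guarantee.

# Naive local search for the two-way partition problem.
def local_partition(nums):
    """Split nums into two lists, locally optimal under the moves below."""
    a, b = list(nums), []
    diff = lambda sa, sb: abs(sa - sb)
    improved = True
    while improved:
        improved = False
        sa, sb = sum(a), sum(b)
        # One-element moves, in both directions.
        for side, other, s_from, s_to in ((a, b, sa, sb), (b, a, sb, sa)):
            for x in side:
                if diff(s_from - x, s_to + x) < diff(sa, sb):
                    side.remove(x); other.append(x)
                    improved = True
                    break
            if improved:
                break
        if improved:
            continue
        # Two-element swaps across the partition.
        for x in a:
            for y in b:
                if diff(sa - x + y, sb - y + x) < diff(sa, sb):
                    a.remove(x); b.remove(y)
                    a.append(y); b.append(x)
                    improved = True
                    break
            if improved:
                break
    return a, b

a, b = local_partition([1.7, 3.2, 4.4, 2.8, 6.1])  # works for arbitrary reals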
Abstract: We study the adversarial online learning problem and create a completely online algorithmic framework that has data-dependent regret guarantees in both full expert feedback and bandit feedback settings. We study the expected performance of our algorithm against general comparators, which makes it applicable to a wide variety of problem scenarios. Our algorithm works from a universal prediction perspective, and the performance measure used is the expected regret against arbitrary comparator sequences, i.e., the difference between our cumulative loss and that of a competing loss sequence. The competition class can be designed to include fixed arm selections, switching bandits, contextual bandits, periodic bandits or any other competition of interest. The sequences in the competition class are generally determined by the specific application at hand and should be designed accordingly. Our algorithm neither uses nor needs any preliminary information about the loss sequences and is completely online. Its performance bounds are data dependent: any affine transform of the losses has no effect on the normalized regret.
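For reference, a standard Exp3-style baseline for the adversarial bandit setting is sketched below; it is included only as a point of comparison, since the paper's data-dependent, affine-invariant framework is more general than this fixed-learning-rate scheme with losses assumed in $[0,1]$.

# Exp3-style bandit algorithm: exponential weights on importance-weighted
# loss estimates built from the single observed arm per round.
import numpy as np

def exp3(loss_fn, K, T, eta=0.1, rng=None):
    """Play K arms for T rounds against adversarial losses in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    weights = np.ones(K)
    total_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()
        arm = rng.choice(K, p=probs)
        loss = loss_fn(t, arm)               # only the chosen arm's loss is seen
        total_loss += loss
        estimate = loss / probs[arm]         # importance-weighted loss estimate
        weights[arm] *= np.exp(-eta * estimate)
    return total_loss

total = exp3(lambda t, k: (t + k) % 2, K=3, T=1000)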
Abstract: In this work, we propose a meta algorithm that can solve a multivariate global optimization problem using univariate global optimizers. Although univariate global optimization receives less attention than the multivariate case, which is more emphasized in academia and industry, we show that it is still relevant and can be used directly to solve multivariate optimization problems. We also provide the corresponding regret bounds in terms of the time horizon $T$ and the average regret of the univariate optimizer, provided that the univariate optimizer has regret guarantees that are robust against nonnegative noise.
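One simple way to reuse a univariate optimizer on a multivariate problem is cyclic coordinate descent, sketched below with a toy grid search standing in for the univariate global optimizer. This is an illustration only: the paper's meta algorithm and its regret guarantees are not reproduced here, and plain coordinate descent need not locate the global optimum in general.

# Cyclic coordinate descent built on a univariate optimizer.
import numpy as np

def univariate_opt(g, lo=0.0, hi=1.0, budget=64):
    """Placeholder univariate minimizer: dense grid search on [lo, hi]."""
    xs = np.linspace(lo, hi, budget)
    return xs[np.argmin([g(x) for x in xs])]

def coordinate_meta_opt(f, n, sweeps=10):
    x = np.full(n, 0.5)
    for _ in range(sweeps):
        for i in range(n):
            # Freeze all coordinates but i; hand the slice to the 1-D optimizer.
            x[i] = univariate_opt(
                lambda v: f(np.concatenate([x[:i], [v], x[i + 1:]])))
    return x

x_star = coordinate_meta_opt(lambda x: np.sum((x - 0.3) ** 2), n=3)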
Abstract: We study the prediction with expert advice setting, where the aim is to produce a decision by combining the decisions generated by a set of experts, e.g., independently running algorithms. We achieve the minimax optimal dynamic regret under the prediction with expert advice setting, i.e., we can compete against time-varying (not necessarily fixed) combinations of expert decisions in an optimal manner. Our end-algorithm is truly online, with no prior information such as the time horizon or loss ranges, which are commonly used by algorithms in the literature. Both our regret guarantees and the minimax lower bounds are derived under the general consideration that the expert losses can have time-varying properties and are possibly unbounded. Our algorithm can be adapted to restrictive scenarios regarding both loss feedback and decision making. Our guarantees are universal, i.e., our end-algorithm can provide a regret guarantee against any competitor sequence in a minimax optimal manner with logarithmic complexity. To our knowledge, for the prediction with expert advice problem, our algorithms are the first to produce such universally optimal, adaptive and truly online guarantees with no prior knowledge.
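The classical exponentially weighted average forecaster for full expert feedback is sketched below as the static-regret baseline that such universal, dynamic-regret algorithms generalize; its fixed learning rate and bounded losses are exactly the restrictions the paper removes.

# Hedge / exponentially weighted average forecaster (full feedback).
import numpy as np

def hedge(expert_losses, eta=0.5):
    """expert_losses: (T, K) array of losses in [0, 1]; returns mixture losses."""
    T, K = expert_losses.shape
    weights = np.ones(K)
    mixture = np.empty(T)
    for t in range(T):
        probs = weights / weights.sum()
        mixture[t] = probs @ expert_losses[t]      # loss of the weighted mixture
        weights *= np.exp(-eta * expert_losses[t])
    return mixture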
Abstract: We investigate an auto-regressive formulation for the problem of smoothing time series, obtained by manipulating the inherent objective function of the traditional moving mean smoothers. Not only do the auto-regressive smoothers enforce a higher degree of smoothing, but they are also just as efficient as the traditional moving means and can be optimized accordingly with respect to the input dataset. Interestingly, the auto-regressive models result in moving means with exponentially tapered windows.
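A minimal sketch of the equivalence noted in the last sentence: a first-order auto-regressive smoother $y_t = \beta y_{t-1} + (1-\beta) x_t$ unrolls into a moving mean whose window weights $(1-\beta)\beta^k$ taper exponentially with lag $k$ (the first-order form and the value of $\beta$ are illustrative choices).

# First-order auto-regressive smoother, as efficient as a moving mean.
import numpy as np

def ar_smooth(x, beta=0.8):
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = beta * y[t - 1] + (1 - beta) * x[t]
    return y

x = np.random.default_rng(0).standard_normal(500).cumsum()
y = ar_smooth(x)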
Abstract: In this work, we propose an efficient minimax optimal global optimization algorithm for multivariate Lipschitz continuous functions. To evaluate the performance of our approach, we utilize the average regret instead of the traditional simple regret, which, as we show, is not suitable for use in multivariate non-convex optimization because of the inherent hardness of the problem itself. Since we study the average regret of the algorithm, our results directly imply a bound for the simple regret as well. Instead of constructing lower bounding proxy functions, our method utilizes a predetermined query creation rule, which makes it computationally superior to the Piyavskii-Shubert variants. We show that our algorithm achieves an average regret bound of $O(L\sqrt{n}T^{-\frac{1}{n}})$ for the optimization of an $n$-dimensional $L$-Lipschitz continuous objective in a time horizon $T$, which we show to be minimax optimal.
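The step from the average regret to the simple regret claimed here is immediate, since the best query is at least as good as the average one: with the nonnegative instantaneous regret $r_t$ of the $t$-th query,
\[
\min_{1 \le t \le T} r_t \;\le\; \frac{1}{T}\sum_{t=1}^{T} r_t \;=\; O\!\left(L\sqrt{n}\,T^{-\frac{1}{n}}\right).
\]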