Abstract: We consider a variant of the online semi-definite programming (OSDP) problem: the decision space consists of semi-definite matrices with bounded $\Gamma$-trace norm, a generalization of the trace norm defined by a positive definite matrix $\Gamma$. To solve this problem, we employ the follow-the-regularized-leader (FTRL) algorithm with a $\Gamma$-dependent log-determinant regularizer. We then apply the generalized setting and the proposed algorithm to online matrix completion (OMC) and to online similarity prediction with side information. In particular, we reduce online matrix completion to the generalized OSDP problem, with the side information represented by the matrix $\Gamma$. Consequently, our regret bound for the generalized OSDP problem yields an optimal mistake bound for OMC, removing the logarithmic factor in previously known bounds.
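For concreteness, here is a minimal sketch of the FTRL update in this generalized setting; the precise definition of the $\Gamma$-trace norm, the form of the regularizer, and the parameters below are illustrative assumptions, not the paper's exact formulation:
$$X_t = \operatorname*{argmin}_{X \in \mathcal{K}_\Gamma} \Big( \eta \sum_{s=1}^{t-1} \langle L_s, X \rangle + R_\Gamma(X) \Big), \qquad \mathcal{K}_\Gamma = \{ X \succeq 0 : \|X\|_\Gamma \le \tau \}, \quad \text{\% sketch: constants assumed}$$
where one natural choice is $\|X\|_\Gamma = \operatorname{tr}(\Gamma X \Gamma)$, which recovers the trace norm $\operatorname{tr}(X)$ of a positive semi-definite $X$ at $\Gamma = I$, and $R_\Gamma(X) = -\ln\det(\Gamma X \Gamma + \epsilon I)$ is a $\Gamma$-dependent log-determinant regularizer with learning rate $\eta > 0$ and shift $\epsilon > 0$.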
Abstract: We consider online linear optimization over symmetric positive semi-definite matrices, which has various applications including online collaborative filtering. The problem is formulated as a repeated game between the algorithm and the adversary, where in each round $t$ the algorithm and the adversary choose matrices $X_t$ and $L_t$, respectively, and the algorithm then suffers a loss given by the Frobenius inner product of $X_t$ and $L_t$. The goal of the algorithm is to minimize the cumulative loss. Algorithms for this problem can be designed within the standard Follow the Regularized Leader (FTRL) framework, where the choice of the regularization function is crucial for obtaining a good performance guarantee. We show that log-determinant regularization works better than other popular regularization functions when the loss matrices $L_t$ are all sparse. Using this property, we show that our algorithm achieves an optimal performance guarantee for online collaborative filtering. The technical contribution of the paper is a new technique for deriving performance bounds that exploits the strong convexity of the log-determinant with respect to the loss matrices, whereas previous analyses define strong convexity with respect to a norm; intuitively, skipping the norm-based analysis yields the improved bound. Moreover, we apply our method to online linear optimization over vectors and show that FTRL with the Burg entropy regularizer, the vector-case analogue of the log-determinant regularizer, also works well.
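As an illustration, a hedged sketch of the FTRL update with the log-determinant regularizer over a decision set $\mathcal{K}$ of symmetric positive semi-definite matrices; the shift $\epsilon I$, which keeps the determinant finite on the boundary of the semi-definite cone, and the exact decision set are assumptions of this sketch rather than the paper's stated formulation:
$$X_t = \operatorname*{argmin}_{X \in \mathcal{K}} \Big( \eta \sum_{s=1}^{t-1} \operatorname{tr}(L_s^\top X) - \ln\det(X + \epsilon I) \Big), \quad \text{\% sketch: } \mathcal{K}, \epsilon \text{ assumed}$$
where $\operatorname{tr}(L_s^\top X)$ is the Frobenius inner product and $\eta > 0$ is the learning rate. In the vector case, the analogous update replaces the log-determinant with the Burg entropy regularizer $-\sum_i \ln x_i$ over the coordinates of the decision vector.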