Abstract: The nonconvex formulation of the matrix completion problem has received significant attention in recent years due to its affordable complexity compared to the convex formulation. Gradient descent (GD) is a simple yet efficient baseline algorithm for solving nonconvex optimization problems. The success of GD combined with random initialization has been witnessed in many different problems, in both theory and practice. However, previous works on matrix completion require either careful initialization or regularizers to prove the convergence of GD. In this work, we study rank-1 symmetric matrix completion and prove that GD converges to the ground truth when a small random initialization is used. We show that within a logarithmic number of iterations, the trajectory enters the region where local convergence occurs. We provide an upper bound on the initialization size that is sufficient to guarantee convergence, and we show that a larger initialization can be used as more samples become available. We observe that the implicit regularization effect of GD plays a critical role in the analysis: along the entire trajectory, it prevents each entry of the iterate from becoming much larger than the others.
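Below is a minimal sketch of the described setting, assuming a unit-norm ground truth $x^\star$ with $M = x^\star x^{\star\top}$, a symmetric Bernoulli($p$) observation mask, and hypothetical choices of step size `eta` and initialization size `alpha`. It illustrates GD with small random initialization, not the paper's exact parameterization or proof regime.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, eta, alpha = 100, 0.3, 0.1, 1e-6     # alpha: small initialization size

x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)           # unit-norm ground truth
M = np.outer(x_star, x_star)               # rank-1 symmetric target
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask, 1).T  # symmetric observation set Omega

x = alpha * rng.standard_normal(n)         # small random initialization
for _ in range(3000):
    residual = mask * (np.outer(x, x) - M) # P_Omega(x x^T - M)
    x = x - (eta / p) * (residual @ x)     # gradient step (constant absorbed into eta)

# GD may converge to +x_star or -x_star; report the distance to the closer one.
print(min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star)))
```

With `alpha` this small, the iterate stays near the origin for roughly logarithmically many steps while its alignment with $x^\star$ grows geometrically, matching the two-phase behavior described in the abstract.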
Abstract: We study hypergraph clustering under the weighted $d$-uniform hypergraph stochastic block model ($d$-WHSBM), where an edge consisting of $d$ nodes has a higher expected weight when all $d$ nodes belong to the same community than when the nodes come from different communities. We propose a new hypergraph clustering algorithm, a convex relaxation of the truncated maximum likelihood estimator (CRTMLE), that can handle the relatively sparse, high-dimensional regime of the $d$-WHSBM with community sizes of different orders. We provide performance guarantees for this algorithm under a unified framework covering different parameter regimes, and show that it achieves order-wise optimality or matches the best existing results for approximately balanced community sizes. We also establish the first recovery guarantees for the setting with a growing number of communities of unbalanced sizes.
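The paper's exact CRTMLE formulation is not reproduced here; the following is only a generic semidefinite-relaxation sketch in the same spirit, under hypothetical assumptions: a 3-uniform hypergraph, two planted communities, and Bernoulli edge weights with higher in-community mean. Hyperedge weights are folded into a pairwise similarity matrix before the convex step.

```python
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, d = 12, 3
labels = rng.integers(0, 2, n)                 # two hypothetical communities

# Fold d-uniform hyperedge weights into a pairwise similarity matrix A.
A = np.zeros((n, n))
for e in itertools.combinations(range(n), d):
    same = len(set(labels[list(e)])) == 1
    w = rng.binomial(1, 0.9 if same else 0.2)  # heavier weight in-community
    for i, j in itertools.combinations(e, 2):
        A[i, j] += w
        A[j, i] += w

# Relax the cluster-membership matrix X to the PSD cone.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1, X >= 0]
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints)
prob.solve()

# Round: nodes i and j are declared co-clustered when X[i, j] is large.
print(np.round(X.value, 1))
```

The entrywise and diagonal constraints shown here are the standard ones for SDP-based clustering; the truncation step and the constraints encoding unbalanced community sizes in the actual CRTMLE may differ.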
Abstract: Crowdsourcing systems have emerged as an effective platform for labeling data and classifying objects at relatively low cost by exploiting non-expert workers. To ensure reliable recovery of unknown labels with as few queries as possible, we consider an effective query type that asks for a "group attribute" of a chosen subset of objects. In particular, we consider the problem of classifying $m$ binary labels with XOR queries that ask whether the number of objects having a given attribute in the chosen subset of size $d$ is even or odd. The subset size $d$, which we call the query degree, can vary across queries. Since a worker needs to make more effort to answer a query of higher degree, we consider a noise model in which the accuracy of a worker's answer depends on both the worker's reliability and the query degree $d$. For this general model, we characterize the information-theoretic limit on the optimal number of queries needed to reliably recover the $m$ labels, in terms of a given combination of degree-$d$ queries and noise parameters. Furthermore, we propose an efficient inference algorithm that achieves this limit even when the noise parameters are unknown.
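The following is an illustrative sketch of the query and noise model only, assuming hypothetical degree-dependent accuracies `p_d`; the simple majority-vote decoding at the end is for illustration and is not the paper's inference algorithm (which also handles unknown noise parameters).

```python
import numpy as np

rng = np.random.default_rng(2)
m = 50
labels = rng.integers(0, 2, m)           # ground-truth binary labels

def xor_query(subset, p_correct):
    """Return the parity of labels on `subset`, flipped w.p. 1 - p_correct."""
    parity = int(labels[subset].sum() % 2)
    return parity if rng.random() < p_correct else 1 - parity

# Accuracy decays as the query degree d grows (higher-degree queries
# demand more effort from a worker); these values are hypothetical.
p_d = {1: 0.9, 2: 0.8, 4: 0.7}

# Example: estimate the label of object 0 from repeated degree-1 queries
# via majority vote.
answers = [xor_query(np.array([0]), p_d[1]) for _ in range(25)]
estimate = int(np.mean(answers) > 0.5)
print(estimate == labels[0])
```

Degree-$d$ queries with $d > 1$ return noisy parities of $d$ labels, so any practical decoder must combine answers across overlapping subsets rather than vote per object as above.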