In this thesis, we settle the computational complexity of some fundamental questions in polynomial optimization. These include the questions of (i) finding a local minimum, (ii) testing local minimality of a point, and (iii) deciding attainment of the optimal value. Our results characterize the complexity of these three questions for all degrees of the defining polynomials that were left open by prior literature. Regarding (i) and (ii), we show that unless P=NP, there cannot be a polynomial-time algorithm that finds a point within Euclidean distance $c^n$ (for any constant $c$) of a local minimum of an $n$-variate quadratic program. By contrast, we show that a local minimum of a cubic polynomial can be found efficiently by semidefinite programming (SDP). We prove that second-order points of cubic polynomials (i.e., points satisfying the second-order necessary conditions for local optimality) admit an efficient semidefinite representation, even though critical points of cubic polynomials are NP-hard to find. We also give an efficiently checkable necessary and sufficient condition for local minimality of a point of a cubic polynomial.

Regarding (iii), we prove that testing whether a quadratically constrained quadratic program with a finite optimal value has an optimal solution is NP-hard. We further show that testing coercivity of the objective function, testing compactness of the feasible set, and testing the Archimedean property associated with the description of the feasible set are all NP-hard. We also give a new characterization of coercive polynomials that lends itself to a hierarchy of SDPs.

In our final chapter, we present an SDP relaxation for finding approximate Nash equilibria in bimatrix games. We show that for a symmetric game, a $1/3$-Nash equilibrium can be efficiently recovered from any rank-2 solution to this relaxation. We also propose SDP relaxations for NP-hard problems related to Nash equilibria, such as the problem of finding the highest welfare achievable under any Nash equilibrium.
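To fix ideas, one standard lifting on which such a relaxation can be based is sketched below; this is a generic formulation, and the relaxation studied in the chapter may carry additional valid constraints. A pair of mixed strategies $(x,y) \in \Delta_m \times \Delta_n$ is a Nash equilibrium of the bimatrix game $(A,B)$ if and only if $x^\top A y \geq (Ay)_i$ for all $i$ and $x^\top B y \geq (B^\top x)_j$ for all $j$. Replacing the bilinear terms with a matrix variable $P$ standing in for $xy^\top$ (and $X$, $Y$ for $xx^\top$, $yy^\top$) yields the SDP constraints
\[
M \;=\; \begin{pmatrix} 1 & x^\top & y^\top \\ x & X & P \\ y & P^\top & Y \end{pmatrix} \succeq 0, \qquad \langle A, P \rangle \geq (Ay)_i \;\;\forall i, \qquad \langle B, P \rangle \geq (B^\top x)_j \;\;\forall j,
\]
together with the simplex constraints $x, y \geq 0$ and $\mathbf{1}^\top x = \mathbf{1}^\top y = 1$. Rank-one feasible solutions of this SDP correspond exactly to Nash equilibria of the game.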