Min-max optimization with a nonconvex-nonconcave objective function $f: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ arises in many areas, including optimization, economics, and deep learning. The nonconvexity-nonconcavity of $f$ means that the problem of finding a global $\varepsilon$-min-max point cannot be solved in $\mathrm{poly}(d, \frac{1}{\varepsilon})$ evaluations of $f$. Thus, most algorithms seek a certain notion of local min-max point where, roughly speaking, each player optimizes her payoff in a local sense. However, the classes of local min-max solutions that prior algorithms seek are guaranteed to exist only under very strong assumptions on $f$, such as convexity or monotonicity. We propose a notion of a greedy equilibrium point for min-max optimization and prove that such a point exists for any function that, together with its first three derivatives, is bounded. Informally, we say that a point $(x^\star, y^\star)$ is an $\varepsilon$-greedy min-max equilibrium point of a function $f: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ if $y^\star$ is a second-order local maximum of $f(x^\star,\cdot)$ and, roughly, $x^\star$ is a local minimum of a greedy optimization version of the function $\max_y f(x,y)$, which can be estimated efficiently by greedy algorithms. Existence follows from an algorithm that, from any starting point, converges to such a point in a number of gradient and function evaluations that is polynomial in $\frac{1}{\varepsilon}$, the dimension $d$, and the bounds on $f$ and its first three derivatives. Our results do not require convexity, monotonicity, or special starting points.
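As an illustrative sketch only, the requirement that $y^\star$ be an approximate second-order local maximum of $f(x^\star,\cdot)$ can be written as standard approximate first- and second-order optimality conditions in $y$; the tolerance scalings below (e.g., $\varepsilon$ and $\sqrt{\varepsilon}$) are placeholders and not necessarily those of the formal definition.
% Illustrative tolerances; the formal definition may use different scalings.
\[
  \big\|\nabla_y f(x^\star, y^\star)\big\| \le \varepsilon,
  \qquad
  \nabla_y^2 f(x^\star, y^\star) \preceq \sqrt{\varepsilon}\, I,
\]
that is, the gradient of $f(x^\star,\cdot)$ at $y^\star$ is small and its Hessian has no significantly positive eigenvalue.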