We show hardness of improperly learning halfspaces in the agnostic model, based on worst-case lattice problems, e.g., approximating shortest vectors within polynomial factors. In particular, we show that under this assumption there is no efficient algorithm that outputs any binary hypothesis, not necessarily a halfspace, achieving misclassification error better than $\frac{1}{2} - \epsilon$, even if the optimal misclassification error is as small as $\delta$. Here, $\epsilon$ can be smaller than the inverse of any polynomial in the dimension and $\delta$ can be as small as $\exp\left(-\Omega\left(\log^{1-c}(d)\right)\right)$, where $0 < c < 1$ is an arbitrary constant and $d$ is the dimension. Previous hardness results [Daniely16] for this problem were based on average-case complexity assumptions, specifically, variants of Feige's random 3SAT hypothesis. Our work gives the first hardness result for this problem based on a worst-case complexity assumption. It is inspired by a sequence of recent works showing hardness of learning well-separated Gaussian mixtures based on worst-case lattice problems.
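For concreteness, the parameter regime can be paraphrased as follows; the notation $\operatorname{sign}(\langle w, x\rangle)$ for halfspaces and the quantifier structure below are our illustration of the statement, not a quotation of the paper's formal theorem:
\[
\begin{aligned}
&\text{there is a distribution } D \text{ on } \mathbb{R}^d \times \{\pm 1\} \text{ with }
\min_{w \in \mathbb{R}^d} \Pr_{(x,y)\sim D}\bigl[\operatorname{sign}(\langle w, x\rangle) \neq y\bigr] \le \delta,\\
&\text{yet, under the lattice assumption, no polynomial-time algorithm outputs a hypothesis } h \text{ with }
\Pr_{(x,y)\sim D}\bigl[h(x) \neq y\bigr] \le \tfrac{1}{2} - \epsilon,\\
&\text{where } \delta = \exp\bigl(-\Omega\bigl(\log^{1-c}(d)\bigr)\bigr) \text{ and } \epsilon \text{ may even be smaller than } 1/\mathrm{poly}(d).
\end{aligned}
\]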