We study the problem of contextual search, a multidimensional generalization of binary search that captures many problems in contextual decision-making. In contextual search, a learner tries to learn the value of a hidden vector $v \in [0,1]^d$. Every round, the learner is provided an adversarially chosen context $u_t \in \mathbb{R}^d$, submits a guess $p_t$ for the value of $\langle u_t, v\rangle$, learns whether $p_t < \langle u_t, v\rangle$, and incurs loss $\ell(\langle u_t, v\rangle, p_t)$ (for some loss function $\ell$). The learner's goal is to minimize their total loss over the course of $T$ rounds. We present an algorithm for the contextual search problem under the symmetric loss function $\ell(\theta, p) = |\theta - p|$ that achieves $O_{d}(1)$ total loss. We also present a new algorithm for the dynamic pricing problem (which can be realized as a special case of the contextual search problem) that achieves $O_{d}(\log \log T)$ total loss, improving on the previous best known upper bound of $O_{d}(\log T)$ and matching the known lower bounds (up to a polynomial dependence on $d$). Both algorithms make significant use of ideas from the field of integral geometry, most notably the notion of intrinsic volumes of a convex set. To the best of our knowledge, this is the first application of intrinsic volumes to algorithm design.
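To make the interaction protocol concrete, here is a minimal simulation sketch of a contextual search round. The learner below is *not* the $O_d(1)$ algorithm of this paper; it is a simple baseline that maintains the knowledge set as a polytope of halfspace constraints and guesses the midpoint of the feasible interval for $\langle u_t, v\rangle$, which illustrates only the feedback structure. The random contexts stand in for the adversarial ones, and all names (`rng`, `total_loss`, etc.) are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical simulation of the contextual search protocol described above.
# The midpoint learner here is a naive baseline, not the paper's algorithm.

rng = np.random.default_rng(0)
d, T = 3, 50
v = rng.random(d)             # hidden vector v in [0,1]^d

A, b = [], []                 # accumulated halfspace constraints A x <= b
total_loss = 0.0

for t in range(T):
    u = rng.normal(size=d)    # context (adversarial in the paper; random here)
    u /= np.linalg.norm(u)

    A_ub = np.array(A) if A else None
    b_ub = np.array(b) if b else None
    bounds = [(0.0, 1.0)] * d

    # Extremes of <u, x> over the current knowledge set, via two LPs.
    lo = linprog(u, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun
    hi = -linprog(-u, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

    p = (lo + hi) / 2.0       # guess the midpoint of the feasible interval
    theta = float(u @ v)
    total_loss += abs(theta - p)   # symmetric loss |theta - p|

    if p < theta:             # feedback: guess too low  ->  <u, v> >= p
        A.append(-u); b.append(-p)
    else:                     # guess too high (or exact) -> <u, v> <= p
        A.append(u); b.append(p)

print(f"total symmetric loss after {T} rounds: {total_loss:.4f}")
```

This baseline halves the width of the knowledge set only in the queried direction, so it does not achieve the constant total loss of the intrinsic-volumes-based algorithm; it serves purely to fix the round-by-round protocol in the reader's mind.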