Learning rates for regularized least-squares algorithms are typically expressed with respect to the excess risk or, equivalently, the $L_2$-norm. For some applications, however, guarantees with respect to stronger norms, such as the $L_\infty$-norm, are desirable. We address this problem by establishing learning rates for a continuous scale of norms that interpolate between the $L_2$-norm and the RKHS norm. As a byproduct we derive $L_\infty$-norm learning rates, and in the case of Sobolev RKHSs we obtain Sobolev norm learning rates, which may in addition imply $L_\infty$-norm rates for some derivatives. In all cases, we do not need to assume that the target function is contained in the RKHS. Finally, we show that in many cases the derived rates are minimax optimal.
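For concreteness, one standard way to realize such a scale of norms, assuming a bounded kernel $k$ whose integral operator $T_k$ on $L_2(\nu)$ admits an eigenvalue/eigenfunction expansion $(\mu_i, e_i)_{i \ge 1}$ (notation introduced here only as an illustrative sketch), is via the power spaces
\[
  [\mathcal{H}]^{\gamma}
  := \Bigl\{ \textstyle\sum_{i \ge 1} a_i \mu_i^{\gamma/2} e_i \; : \; \sum_{i \ge 1} a_i^2 < \infty \Bigr\},
  \qquad
  \Bigl\| \textstyle\sum_{i \ge 1} a_i \mu_i^{\gamma/2} e_i \Bigr\|_{[\mathcal{H}]^{\gamma}}
  := \Bigl( \textstyle\sum_{i \ge 1} a_i^2 \Bigr)^{1/2},
  \qquad \gamma \in [0,1],
\]
so that $\gamma = 0$ recovers (a closed subspace of) $L_2(\nu)$, $\gamma = 1$ recovers the RKHS $\mathcal{H}$, and intermediate values of $\gamma$ yield the continuous scale of norms referred to above.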