Invariance, defined in a general sense, has been one of the most effective priors for representation learning. Direct factorization of parametric models is feasible only for a narrow range of invariances, while regularization approaches, despite their improved generality, lead to nonconvex optimization. In this work, we develop a convex representation learning algorithm for a variety of generalized invariances that can be modeled as semi-norms. The resulting algorithm is much more efficient than methods based on Haar kernels or distributional robustness. Our approach rests on Euclidean embeddings of kernel representers in a semi-inner-product space, and experimental results confirm its effectiveness in learning invariant representations and making accurate predictions.
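For concreteness, here is one illustrative (not paper-specific) way an invariance can be encoded as a semi-norm; the symbols $f$, $\mathcal{H}$, $G$, and $T_g$ are assumptions introduced only for this sketch. Given a family of transformations $\{T_g\}_{g \in G}$ acting on inputs, one may penalize the worst-case change of a function $f$ in a normed space $\mathcal{H}$:
\[
\Omega(f) \;=\; \sup_{g \in G} \,\bigl\lVert f \circ T_g - f \bigr\rVert_{\mathcal{H}}.
\]
Here $\Omega$ is absolutely homogeneous and satisfies the triangle inequality, yet $\Omega(f) = 0$ for every $f$ invariant under all $T_g$, so it is a semi-norm rather than a norm; penalizing $\Omega(f)$ thus biases learning toward invariant representations without imposing a hard constraint.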