A key assumption in the theory of adaptive control for nonlinear systems is that the uncertainty of the system can be expressed in the linear span of a set of known basis functions. While this assumption leads to efficient algorithms, verifying it in practice can be difficult, particularly for complex systems. Here we leverage connections between reproducing kernel Hilbert spaces, random Fourier features, and universal approximation theory to propose a computationally tractable algorithm for both adaptive control and adaptive prediction that does not rely on a linearly parameterized unknown. Specifically, we approximate the unknown dynamics with a finite expansion in \emph{random} basis functions, and we provide an explicit guarantee on the number of random features needed to track a desired trajectory with high probability. Remarkably, our explicit bounds depend only \emph{polynomially} on the underlying parameters of the system, which allows our proposed algorithms to scale efficiently to high-dimensional systems. We study a setting in which the unknown dynamics splits into a component that can be modeled through available physical knowledge of the system and a component that lives in a reproducing kernel Hilbert space. Our algorithms simultaneously adapt over the parameters of both the physical basis functions and the random features, learning both components of the dynamics online.
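To make the random feature idea concrete, the sketch below shows a finite expansion in random Fourier features for the Gaussian (RBF) kernel. It is an illustrative example only, not the paper's adaptive algorithm: the feature count, bandwidth, the stand-in target function, and the offline least-squares fit of the weights are all assumed choices for visualization; the paper instead adapts such weights online alongside physical basis-function parameters.

```python
# Illustrative sketch (assumed setup, not the paper's algorithm): approximating an
# unknown scalar function with a finite expansion in random Fourier features for
# the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
import numpy as np

rng = np.random.default_rng(0)

d, D, sigma = 2, 200, 1.0  # input dim, number of random features, kernel bandwidth (assumed)

# Random feature map: phi(x) = sqrt(2/D) * cos(W x + b),
# with rows of W drawn from N(0, I/sigma^2) and b uniform on [0, 2*pi].
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(X):
    """Map inputs X (n x d) to random Fourier features (n x D)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Stand-in for the unknown RKHS component of the dynamics.
f_true = lambda X: np.sin(X[:, 0]) * np.exp(-X[:, 1] ** 2)

# Offline least-squares fit of the expansion weights, just to show how well a
# finite random expansion can represent f on a sampled region.
X_train = rng.uniform(-2.0, 2.0, size=(500, d))
alpha, *_ = np.linalg.lstsq(phi(X_train), f_true(X_train), rcond=None)

X_test = rng.uniform(-2.0, 2.0, size=(100, d))
err = np.max(np.abs(phi(X_test) @ alpha - f_true(X_test)))
print(f"max approximation error with {D} random features: {err:.3e}")
```

In this sketch the approximation error shrinks as the number of random features D grows, which mirrors the abstract's claim that a sufficiently large (but only polynomially sized) random expansion suffices for tracking with high probability.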