This thesis rigorously studies fundamental reinforcement learning (RL) methods under modern practical considerations, including robust RL, distributional RL, and offline RL with neural function approximation. The thesis first prepares the reader with an overview of RL and the key technical background in statistics and optimization. In each of these settings, the thesis motivates the problems to be studied, reviews the current literature, develops computationally efficient algorithms with provable guarantees, and concludes with future research directions. The thesis makes fundamental contributions to the three settings above, algorithmically, theoretically, and empirically, while remaining relevant to practice.