In this work, we study policy-based methods for solving the reinforcement learning problem, where off-policy sampling and linear function approximation are employed for policy evaluation, and various policy update rules, including natural policy gradient (NPG), are considered for policy improvement. To solve the policy evaluation sub-problem in the presence of the deadly triad, we propose a generic algorithmic framework of multi-step TD-learning with generalized importance sampling ratios, which includes two specific algorithms: the $\lambda$-averaged $Q$-trace and the two-sided $Q$-trace. The generic algorithm is single-time-scale, has provable finite-sample guarantees, and overcomes the high-variance issue in off-policy learning. As for the policy update, we provide a universal analysis that uses only the contraction and monotonicity properties of the Bellman operator to establish geometric convergence under various policy update rules. Importantly, by viewing NPG as an approximate way of implementing policy iteration, we establish the geometric convergence of NPG without introducing regularization and without resorting to the mirror-descent-type analysis used in the existing literature. Combining the geometric convergence of the policy update with the finite-sample analysis of the policy evaluation, we establish, for the first time, an overall $\mathcal{O}(\epsilon^{-2})$ sample complexity for finding an optimal policy (up to a function approximation error) using policy-based methods under off-policy sampling and linear function approximation.
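As a purely illustrative sketch of the kind of update the framework is meant to cover (not a statement of the exact algorithms, whose ratio definitions are given in the main text), one may picture an $n$-step off-policy TD update in the spirit of the V-trace/Retrace family, with generic ratio functions $c$ and $\rho$ standing in for the generalized importance sampling ratios; the symbols $c$, $\rho$, $\phi$, $w$, and the step size $\alpha$ below are notational assumptions for this sketch only:
\[
w \leftarrow w + \alpha\,\phi(S_t,A_t)\sum_{i=t}^{t+n-1}\gamma^{\,i-t}\Bigl(\prod_{j=t+1}^{i}c(S_j,A_j)\Bigr)\rho(S_i,A_i)\Bigl(R_i+\gamma\sum_{a}\pi(a\mid S_{i+1})\,\phi(S_{i+1},a)^\top w-\phi(S_i,A_i)^\top w\Bigr),
\]
where $\phi$ denotes the feature map of the linear function approximation, $w$ the weight vector, $\pi$ the target policy, and the trajectory $(S_t,A_t,R_t,\ldots)$ is generated by the behavior policy.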