Recursive least-squares algorithms often use forgetting factors as a heuristic to adapt to non-stationary data streams. %
The first contribution of this paper rigorously characterizes the effect of forgetting factors for a class of online Newton algorithms. %
For exp-concave and strongly convex objectives, the algorithms achieve a dynamic regret of $\max\{O(\log T), O(\sqrt{TV})\}$, where $V$ is a bound on the path length of the comparison sequence. %
In particular, we show how classic recursive least-squares with a forgetting factor achieves this dynamic regret bound. %
By varying $V$, we obtain a trade-off between static and dynamic regret. %
In order to obtain more computationally efficient algorithms, our second contribution is a novel gradient descent step size rule for strongly convex functions. %
Our gradient descent rule recovers the dynamic regret bounds described above. %
For smooth problems, we can also obtain static regret of $O(T^{1-\beta})$ and dynamic regret of $O(T^\beta V^*)$, where $\beta \in (0,1)$ and $V^*$ is the path length of the sequence of minimizers. %
By varying $\beta$, we obtain a trade-off between static and dynamic regret.
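For reference, here is a minimal sketch of the classic recursive least-squares recursion with a forgetting factor, in its standard textbook form; the symbol $\gamma \in (0,1]$ for the forgetting factor, the initialization $P_0 = \delta^{-1} I$, and the gain notation $g_t$ are choices made here for illustration, not notation taken from this paper. Given observations $(x_t, y_t)$, the estimate $\hat{\theta}_t$ minimizing the exponentially weighted squared loss $\sum_{s=1}^{t} \gamma^{t-s} (y_s - x_s^\top \theta)^2$ (plus the regularization induced by $P_0$) is updated recursively as
\begin{align*}
e_t &= y_t - x_t^\top \hat{\theta}_{t-1}, \\
g_t &= \frac{P_{t-1} x_t}{\gamma + x_t^\top P_{t-1} x_t}, \\
\hat{\theta}_t &= \hat{\theta}_{t-1} + g_t e_t, \\
P_t &= \gamma^{-1}\bigl(P_{t-1} - g_t x_t^\top P_{t-1}\bigr).
\end{align*}
Smaller $\gamma$ discounts old data faster, which helps track a drifting minimizer at the cost of slower convergence on stationary data; $\gamma = 1$ recovers standard recursive least-squares.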