We prove that two popular linear contextual bandit algorithms, OFUL and Thompson Sampling, can be made efficient using Frequent Directions, a deterministic online sketching technique. More precisely, we show that a sketch of size $m$ allows an $\mathcal{O}(md)$ update time for both algorithms, as opposed to the $\Omega(d^2)$ required by their non-sketched versions (where $d$ is the dimension of the context vectors). When the selected contexts span a subspace of dimension at most $m$, we show that this computational speedup is accompanied by an improved regret of order $m\sqrt{T}$ for sketched OFUL and of order $m\sqrt{dT}$ for sketched Thompson Sampling (ignoring log factors in both cases). Conversely, when the dimension of the span exceeds $m$, the regret bounds become of order $(1+\varepsilon_m)^{3/2}d\sqrt{T}$ for OFUL and of order $\big((1+\varepsilon_m)d\big)^{3/2}\sqrt{T}$ for Thompson Sampling, where $\varepsilon_m$ is bounded by the sum of the tail eigenvalues not covered by the sketch. Experiments on real-world datasets corroborate our theoretical results.
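To make the $\mathcal{O}(md)$ update cost concrete, the following is a minimal NumPy illustration of the Frequent Directions primitive referred to above, not of the sketched OFUL or Thompson Sampling algorithms themselves. It uses the standard doubled-buffer variant of Frequent Directions, in which an SVD of a $2m \times d$ buffer is amortized over $m$ insertions, so each incoming context costs $\mathcal{O}(md)$ on average; the function name \texttt{frequent\_directions} and its interface are illustrative choices, not taken from the paper.

\begin{verbatim}
import numpy as np

def frequent_directions(rows, m):
    """Stream the rows of an (n, d) matrix into an (m, d) Frequent
    Directions sketch S with S.T @ S approximating rows.T @ rows,
    up to an error controlled by the tail eigenvalues."""
    _, d = rows.shape
    # Double-width buffer: the SVD below runs once every m insertions,
    # so the amortized cost per incoming row is O(m d).
    S = np.zeros((2 * m, d))
    next_free = 0  # index of the next empty buffer row

    def shrink(S):
        # Rotate to the right singular basis and subtract the (m+1)-th
        # squared singular value, which zeroes out at least m rows.
        _, sigma, Vt = np.linalg.svd(S, full_matrices=False)
        delta = sigma[m] ** 2 if len(sigma) > m else 0.0
        shrunk = np.sqrt(np.maximum(sigma ** 2 - delta, 0.0))
        out = np.zeros_like(S)
        out[: len(sigma)] = shrunk[:, None] * Vt
        return out

    for x in rows:
        if next_free == 2 * m:   # buffer full: shrink back to m directions
            S = shrink(S)
            next_free = m
        S[next_free] = x
        next_free += 1
    return shrink(S)[:m]         # final m x d sketch
\end{verbatim}

For a stream of $T$ contexts in $\mathbb{R}^d$ this maintains the sketch in $\mathcal{O}(mdT)$ total time, whereas exact rank-one updates of the $d \times d$ correlation matrix used by non-sketched OFUL and Thompson Sampling cost $\Omega(d^2)$ per step.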