In this paper, we implement and evaluate a two-stage retrieval pipeline for a course recommender system that ranks courses for skill-occupation pairs. The in-production recommender system BrightFit provides course recommendations from multiple sources. Some of the course descriptions are long and noisy, while retrieval and ranking in an online system have to be highly efficient. We develop a two-stage retrieval pipeline with a RankT5 model finetuned on MS MARCO as the re-ranker. We compare two summarizers for course descriptions: a LongT5 model that we finetuned for the task, and a generative LLM (Vicuna) with in-context learning. We experiment with quantization to reduce the size of the ranking model and to increase inference speed. We evaluate our rankers on two newly labelled datasets, with an A/B test, and with a user questionnaire. On the two labelled datasets, our proposed two-stage ranking with automatic summarization achieves a substantial improvement over the in-production (BM25) ranker: nDCG@10 scores improve from 0.482 to 0.684 on one dataset and from 0.447 to 0.844 on the other. We also achieve a 40% speed-up by using a quantized version of RankT5. The improved ranking quality is confirmed by a questionnaire completed by 29 respondents, but not by the A/B test: in the A/B test, a higher click-through rate was observed for the BM25 ranking than for the proposed two-stage retrieval. We conclude that T5-based re-ranking and summarization for online course recommendation can be far more effective than single-step lexical retrieval, and that quantization markedly speeds up RankT5 inference. In the online evaluation, however, factors other than relevance, such as the speed and interpretability of the retrieval results, play a role, as do individual preferences.
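As a rough illustration of the two-stage setup described above (not the paper's implementation), the sketch below pairs a BM25 first stage with a cross-encoder re-ranker finetuned on MS MARCO. The rank_bm25 and sentence-transformers libraries, the MiniLM checkpoint, and the toy course corpus are assumptions standing in for the production retriever and the finetuned RankT5 model.

```python
# Minimal sketch of two-stage retrieval: BM25 candidate generation
# followed by cross-encoder re-ranking. The libraries, checkpoint, and
# corpus below are illustrative stand-ins, not the paper's components.
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

courses = [
    "Introduction to Python programming for data analysis.",
    "Advanced welding techniques for the manufacturing industry.",
    "Project management fundamentals for team leads.",
    "Machine learning basics: regression, classification, evaluation.",
]

# Stage 1: lexical retrieval with BM25 over whitespace-tokenized text.
bm25 = BM25Okapi([doc.lower().split() for doc in courses])

query = "data analysis with python"
bm25_scores = bm25.get_scores(query.lower().split())

# Keep the top-k candidates (k=3 here; a real system would use more).
k = 3
candidates = sorted(range(len(courses)),
                    key=lambda i: bm25_scores[i], reverse=True)[:k]

# Stage 2: re-rank candidates with a cross-encoder finetuned on MS MARCO
# (a stand-in for RankT5, whose finetuned checkpoint is assumed private).
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
ce_scores = reranker.predict([(query, courses[i]) for i in candidates])

# Final ranking by cross-encoder score, highest first.
for idx, score in sorted(zip(candidates, ce_scores),
                         key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {courses[idx]}")
```

In the pipeline described in the abstract, the documents passed to the re-ranker would be the automatically summarized course descriptions rather than the raw, noisy text, which keeps the re-ranking input short enough for efficient online inference.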