Personalized federated learning (PFL) aims to harness the collective wisdom of clients' data to build models tailored to each client's data distribution. Existing works offer personalization primarily to clients who participated in the FL process, making it hard to serve new clients who were absent from training or arrive afterward. In this paper, we propose FedBasis, a novel PFL framework to tackle such a deficiency. FedBasis learns a small set of shareable ``basis'' models, which can be linearly combined to form personalized models for clients. For a new client in particular, only a small set of combination coefficients, not the model weights, needs to be learned. This design makes FedBasis more parameter-efficient, robust, and accurate than other competitive PFL baselines, especially in the low-data regime, without increasing the inference cost. To demonstrate its applicability, we also present a more practical PFL testbed for image classification, featuring larger data discrepancies across clients in both the image and label spaces as well as more faithful training and test splits.
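As a concrete reading of this formulation (the notation below is ours, not the paper's, and we assume the bases are combined in weight space): given $K$ shared basis models $\phi_1, \dots, \phi_K$, client $i$'s personalized model is
\[
\theta_i \;=\; \sum_{k=1}^{K} c_{i,k}\, \phi_k,
\]
so a new client only needs to fit its $K$-dimensional coefficient vector $c_i = (c_{i,1}, \dots, c_{i,K})$ while the bases $\phi_k$ remain fixed, which is why the per-client learning cost is small.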