The evolution of Outfit Recommendation (OR) in the fashion domain has progressed through two distinct phases: Pre-defined Outfit Recommendation and Personalized Outfit Composition. Despite these advancements, both phases are constrained by the pool of existing fashion products, limiting their ability to meet users' diverse fashion needs. The emergence of AI-generated content paves the way for OR to overcome this constraint, demonstrating the potential for personalized outfit generation. To this end, we introduce a novel task named Generative Outfit Recommendation (GOR), which aims to synthesize a set of fashion images and assemble them into visually compatible outfits customized to individual users. The primary objectives of GOR are high fidelity, compatibility, and personalization of the generated outfits. To achieve these objectives, we propose DiFashion, a generative outfit recommender model that harnesses the strong generative capabilities of diffusion models to produce multiple fashion images in parallel. Three types of conditions are designed to guide this parallel generation process, and Classifier-Free Guidance is employed to strengthen the alignment between the generated images and their conditions. We apply DiFashion to both personalized Fill-In-The-Blank and GOR tasks and conduct extensive experiments on the iFashion and Polyvore-U datasets. Quantitative and human-involved qualitative evaluations demonstrate the superiority of DiFashion over competitive baselines.
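Since the abstract highlights Classifier-Free Guidance (CFG) as the mechanism for aligning generated images with their conditions, the sketch below shows the standard CFG step used in diffusion samplers: the conditional and unconditional noise predictions are blended at each denoising step. This is a minimal illustration under assumed interfaces; the function name `cfg_noise_pred`, the guidance weight `w`, and the toy model are hypothetical and not DiFashion's actual implementation.

```python
import torch

def cfg_noise_pred(eps_model, x_t, t, cond, w=7.5):
    """Classifier-free guidance: extrapolate the conditional noise
    prediction away from the unconditional (null-condition) one,
    which sharpens alignment with the condition as w increases.

    eps_model: a noise-prediction network eps(x_t, t, cond), where
    cond=None denotes the unconditional (null) input (assumed API).
    """
    eps_c = eps_model(x_t, t, cond)   # condition-aware prediction
    eps_u = eps_model(x_t, t, None)   # unconditional prediction
    return eps_u + w * (eps_c - eps_u)

# Toy usage with a stand-in model, purely for shape-checking.
if __name__ == "__main__":
    toy = lambda x, t, c: torch.zeros_like(x) if c is None else 0.1 * x
    x = torch.randn(1, 3, 64, 64)           # one noisy latent/image
    eps = cfg_noise_pred(toy, x, t=torch.tensor([50]), cond="shirt")
    print(eps.shape)                        # torch.Size([1, 3, 64, 64])
```

At w = 0 the step reduces to unconditional sampling and at w = 1 to plain conditional sampling; larger weights trade sample diversity for stronger condition adherence, which is why CFG is a natural fit when fidelity to user and compatibility conditions matters.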