Abstract: In the long-tailed recognition field, the Decoupled Training paradigm has demonstrated remarkable effectiveness among various methods. This paradigm decouples the training process into two separate stages: representation learning and classifier re-training. Previous works have attempted to improve both stages simultaneously, making it difficult to isolate the effect of classifier re-training. Furthermore, recent empirical studies have demonstrated that simple regularization can yield strong feature representations, emphasizing the need to reassess existing classifier re-training methods. In this study, we revisit classifier re-training methods based on a unified feature representation and re-evaluate their performance. We propose a new metric, Logits Magnitude, as a superior measure of model performance, replacing the commonly used Weight Norm. However, since the new metric is hard to optimize directly during training, we introduce a suitable approximate invariant called Regularized Standard Deviation. Based on the two newly proposed metrics, we prove that reducing the absolute value of Logits Magnitude when it is nearly balanced can effectively decrease errors and disturbances during training, leading to better model performance. Motivated by these findings, we develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class. LORT divides the original one-hot label into a small true-label probability and large negative-label probabilities distributed across the remaining classes. Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist2018.
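As a rough illustration of the label retargeting described above, the following PyTorch-style sketch splits each one-hot label into a small true-class probability and spreads the remaining probability mass uniformly over the negative classes. The function names and the `true_prob` hyperparameter are illustrative assumptions, not the paper's exact formulation or released code.

```python
import torch
import torch.nn.functional as F

def lort_targets(labels: torch.Tensor, num_classes: int, true_prob: float = 0.1) -> torch.Tensor:
    """Split each one-hot label: a small probability `true_prob` goes to the
    true class, and the remaining mass is spread uniformly over the negatives."""
    neg_prob = (1.0 - true_prob) / (num_classes - 1)
    targets = torch.full((labels.size(0), num_classes), neg_prob, device=labels.device)
    targets.scatter_(1, labels.unsqueeze(1), true_prob)
    return targets

def lort_loss(logits: torch.Tensor, labels: torch.Tensor, true_prob: float = 0.1) -> torch.Tensor:
    """Cross-entropy against the retargeted soft labels; note that no per-class
    sample counts are required, consistent with the prior-free claim above."""
    targets = lort_targets(labels, logits.size(1), true_prob)
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

In effect, this resembles label smoothing with an unusually large smoothing factor; the exact split between the true-label and negative-label probabilities is a design choice left to the paper.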
Abstract: Virtual try-on is a critical image synthesis task that aims to transfer clothes from one image to another while preserving the details of both the person and the clothes. While many existing methods rely on Generative Adversarial Networks (GANs) to achieve this, visible flaws can still occur, particularly at high resolutions. Recently, the diffusion model has emerged as a promising alternative for generating high-quality images in various applications. However, simply using the clothes as a condition to guide the diffusion model's inpainting is insufficient to maintain the details of the clothes. To overcome this challenge, we propose an exemplar-based inpainting approach that leverages a warping module to guide the diffusion model's generation effectively. The warping module performs initial processing on the clothes, which helps preserve their local details. We then combine the warped clothes with the clothes-agnostic person image and add noise to form the input of the diffusion model. Additionally, the warped clothes are used as a local condition at each denoising step to ensure that the resulting output retains as much detail as possible. Our approach, named Diffusion-based Conditional Inpainting for Virtual Try-ON (DCI-VTON), effectively harnesses the power of the diffusion model, and the incorporation of the warping module helps produce high-quality and realistic virtual try-on results. Experimental results on VITON-HD demonstrate the effectiveness and superiority of our method.
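The input composition and conditioning scheme described above can be sketched roughly as follows. The module names, the `add_noise` scheduler hook, the `step` update, and the `local_cond` interface are placeholders assumed for illustration, not the released DCI-VTON implementation.

```python
import torch

def prepare_input(warped_clothes: torch.Tensor,
                  agnostic_person: torch.Tensor,
                  cloth_mask: torch.Tensor,
                  timestep: torch.Tensor,
                  add_noise) -> torch.Tensor:
    """Paste the warped clothes onto the clothes-agnostic person image,
    then apply forward-diffusion noise at the given timestep."""
    composite = agnostic_person * (1 - cloth_mask) + warped_clothes * cloth_mask
    return add_noise(composite, timestep)

def denoise(model, noisy_input, warped_clothes, timesteps, step):
    """Reverse process: the warped clothes are re-injected as a local
    condition at every denoising step to retain garment detail."""
    x = noisy_input
    for t in reversed(timesteps):
        pred = model(x, t, local_cond=warped_clothes)  # conditional noise prediction
        x = step(pred, t, x)                           # scheduler update (placeholder)
    return x
```

The key design point, as the abstract states, is that the warped garment enters twice: once in the noised composite that initializes generation, and again as a per-step local condition.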