Abstract: The virtual fitting room is a challenging yet useful feature for e-commerce platforms and fashion designers. Existing works can detect only a few types of fashion items, and they perform poorly when changing the texture and style of the selected items. In this project, we propose a novel approach to address these problems. We first use Mask R-CNN to locate the regions of different fashion items, and then use Neural Style Transfer to change the style of the selected items. Our dataset is composed of images from the PaperDoll dataset with annotations provided by eBay's ModaNet. We trained eight models, and our best model massively outperforms the baseline models both quantitatively and qualitatively, achieving 68.72% mAP and 0.2% ASDR.
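The core compositing step implied by this pipeline, i.e. applying a style-transferred texture only inside an instance mask, can be sketched as below. This is an illustrative sketch, not the authors' implementation: `apply_style_to_mask` is a hypothetical helper, and the mask is assumed to come from a detector such as Mask R-CNN while the stylized image is assumed to come from a Neural Style Transfer model.

```python
import numpy as np

def apply_style_to_mask(original, stylized, mask, alpha=1.0):
    """Composite stylized pixels into the region selected by a binary mask.

    original, stylized: HxWx3 float arrays in [0, 1]
    mask: HxW boolean array (e.g. one instance mask from Mask R-CNN)
    alpha: blend strength; 1.0 replaces the masked region entirely
    """
    m = mask.astype(float)[..., None] * alpha  # HxWx1, broadcasts over RGB
    return original * (1.0 - m) + stylized * m

# Toy example: a 4x4 black image where the top-left 2x2 block is the
# detected "garment" region; the stylized image is plain white.
original = np.zeros((4, 4, 3))
stylized = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

result = apply_style_to_mask(original, stylized, mask)
# Only the masked pixels take on the stylized texture.
```

In practice the blend would run on full-resolution images, with `alpha` softened near the mask boundary to avoid hard seams.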
Abstract: In this paper, we present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high-definition (HD) maps. Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous radial velocity measurements. However, several factors make the fusion of lidar and radar information challenging, such as the relatively low angular resolution of radar measurements, their sparsity, and the lack of exact time synchronization with lidar. To overcome these challenges, we propose an efficient spatio-temporal radar feature extraction scheme which achieves state-of-the-art performance on multiple large-scale datasets. Further, by incorporating radar information, we show a 52% reduction in prediction error for objects with high acceleration and a 16% reduction in prediction error for objects at longer range.
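A common way to feed sparse, asynchronous radar returns to a network, which the abstract's feature extraction scheme builds on, is to scatter them into a bird's-eye-view (BEV) grid with per-cell velocity and time-offset channels. The sketch below is a minimal illustration under assumed parameters (grid size, cell size, channel layout), not the paper's actual scheme; `rasterize_radar_sweep` is a hypothetical helper.

```python
import numpy as np

def rasterize_radar_sweep(points, radial_vel, time_offsets, grid_size=8, cell=1.0):
    """Scatter sparse radar returns into a BEV grid.

    points: list of (x, y) positions in meters
    radial_vel: per-point instantaneous radial velocity (m/s)
    time_offsets: per-point offset from the lidar sweep time (s),
        compensating for the lack of exact lidar/radar synchronization
    Returns a 3 x grid_size x grid_size array with channels:
        0: occupancy count, 1: mean radial velocity, 2: mean time offset
    """
    grid = np.zeros((3, grid_size, grid_size))
    for (x, y), v, t in zip(points, radial_vel, time_offsets):
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[0, i, j] += 1
            grid[1, i, j] += v
            grid[2, i, j] += t
    occ = np.maximum(grid[0], 1)  # avoid division by zero in empty cells
    grid[1] /= occ
    grid[2] /= occ
    return grid

# Two returns fall in cell (0, 0); one falls in cell (5, 5).
points = [(0.5, 0.5), (0.7, 0.2), (5.0, 5.0)]
grid = rasterize_radar_sweep(points, [2.0, 4.0, -1.0], [0.0, 0.1, 0.0])
```

Averaging velocity and time offset per cell keeps the representation dense and fixed-size, so the grid can be concatenated with lidar BEV features and consumed by a standard convolutional backbone.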