Abstract: Recommender systems frequently face the issue of user preference shifts: if user preference has shifted over time, user representations become outdated and lead to inappropriate recommendations. To address this issue, existing work focuses on learning robust representations or predicting the shifting pattern, but it lacks a comprehensive view of the underlying reasons for preference shifts. To understand these shifts, we abstract a causal graph that describes the generation procedure of user interaction sequences. Assuming user preference is stable within a short period, we treat the interaction sequence as a set of chronological environments. From the causal graph, we find that changes in some unobserved factors (e.g., becoming pregnant) cause preference shifts between environments. Besides, the fine-grained user preference over categories sparsely affects the interactions with different items. Inspired by the causal graph, our key considerations for handling preference shifts lie in modeling the interaction generation procedure by: 1) capturing the preference shifts across environments for accurate preference prediction, and 2) disentangling the sparse influence from user preference to interactions for accurate effect estimation of preference. To this end, we propose a Causal Disentangled Recommendation (CDR) framework, which captures preference shifts via a temporal variational autoencoder and learns the sparse influence from multiple environments. Specifically, an encoder infers the unobserved factors from user interactions, while a decoder models the interaction generation process. In addition, we introduce two learnable matrices to disentangle the sparse influence from user preference to interactions. Lastly, we devise a multi-objective loss to optimize CDR. Extensive experiments on three datasets demonstrate the superiority of CDR.
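A minimal PyTorch sketch of the mechanism summarized above: a temporal VAE whose encoder infers latent factors per environment and whose decoder maps category-level preference to items through a learnable matrix with a soft sparsity mask. All module choices, shapes, and names below are illustrative assumptions, not the authors' implementation:

# Sketch of a temporal VAE with a learnable sparse-influence matrix (illustrative only).
import torch
import torch.nn as nn

class TemporalVAESketch(nn.Module):
    def __init__(self, n_items, n_categories, latent_dim=64, hidden_dim=128):
        super().__init__()
        # Encoder: infer unobserved factors from interactions across environments.
        self.encoder = nn.GRU(n_items, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: map latent factors to category-level preference, then to items.
        self.to_pref = nn.Linear(latent_dim, n_categories)
        # Learnable matrix whose (soft) sparsity restricts which categories influence which items.
        self.influence = nn.Parameter(torch.randn(n_categories, n_items) * 0.01)

    def forward(self, env_interactions):
        # env_interactions: (batch, n_envs, n_items) multi-hot interactions per environment.
        h, _ = self.encoder(env_interactions)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # reparameterization trick
        pref = torch.softmax(self.to_pref(z), dim=-1)             # preference over categories
        mask = torch.sigmoid(self.influence)                      # soft sparse influence
        logits = pref @ mask                                      # predicted item scores
        return logits, mu, logvar, mask

A multi-objective loss in this spirit would combine an interaction reconstruction term, a KL term on (mu, logvar), and an L1 penalty on the influence mask to encourage sparsity.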
Abstract: We present a Non-parametric Network for 3D point cloud analysis, Point-NN, which consists of purely non-learnable components: farthest point sampling (FPS), k-nearest neighbors (k-NN), and pooling operations, together with trigonometric functions. Surprisingly, it performs well on various 3D tasks, requiring no parameters or training, and even surpasses existing fully trained models. Starting from this basic non-parametric model, we propose two extensions. First, Point-NN can serve as a base architectural framework to construct Parametric Networks by simply inserting linear layers on top. Given the superior non-parametric foundation, the derived Point-PN exhibits a favorable performance-efficiency trade-off with only a few learnable parameters. Second, Point-NN can serve as a plug-and-play module for already trained 3D models during inference. It captures complementary geometric knowledge and enhances existing methods on different 3D benchmarks without re-training. We hope our work sheds light on understanding 3D point clouds with non-parametric methods. Code is available at https://github.com/ZrrSkywalker/Point-NN.
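A minimal, parameter-free sketch in the spirit of the components named above: FPS, k-NN grouping, trigonometric position embeddings, and pooling. The embedding dimension, frequencies, and single-stage design are simplifying assumptions; the actual Point-NN is hierarchical:

# Non-parametric point-cloud descriptor built only from FPS, k-NN, trigonometric
# embeddings, and pooling (illustrative sketch, no learnable parameters).
import torch

def farthest_point_sample(xyz, m):
    # xyz: (N, 3) -> indices of m farthest-sampled points.
    N = xyz.shape[0]
    idx = torch.zeros(m, dtype=torch.long)
    dist = torch.full((N,), float('inf'))
    farthest = torch.randint(0, N, (1,)).item()
    for i in range(m):
        idx[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(-1)
        dist = torch.minimum(dist, d)
        farthest = torch.argmax(dist).item()
    return idx

def trig_embed(xyz, dim=36):
    # Sinusoidal embedding of 3D coordinates (dim must be divisible by 6).
    freqs = 2.0 ** torch.arange(dim // 6, dtype=torch.float32)          # (dim/6,)
    angles = xyz.unsqueeze(-1) * freqs                                  # (N, 3, dim/6)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)   # (N, dim)

def point_nn_features(xyz, m=128, k=16):
    # Sample centers, group neighbors by k-NN, embed relative coordinates
    # trigonometrically, then pool locally and globally.
    centers = xyz[farthest_point_sample(xyz, m)]                  # (m, 3)
    d = torch.cdist(centers, xyz)                                 # (m, N)
    knn_idx = d.topk(k, largest=False).indices                    # (m, k)
    rel = xyz[knn_idx] - centers.unsqueeze(1)                     # (m, k, 3)
    emb = trig_embed(rel.reshape(-1, 3)).reshape(m, k, -1)        # (m, k, dim)
    local = emb.max(dim=1).values + emb.mean(dim=1)               # (m, dim)
    return local.max(dim=0).values                                # global descriptor

Such a training-free descriptor could then be compared, e.g., by cosine similarity, against descriptors of labeled training samples to produce predictions without any learned parameters.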
Abstract: Performance on standard 3D point cloud benchmarks has plateaued, leading to oversized models and complex network designs that yield only fractional improvements. We present an alternative that enhances existing deep neural networks without any redesign or extra parameters, termed the Spatial-Neighbor Adapter (SN-Adapter). Building on any trained 3D network, we utilize its learned encoding capability to extract features of the training dataset and summarize them as prototypical spatial knowledge. For a test point cloud, the SN-Adapter retrieves its k nearest neighbors (k-NN) from the pre-constructed spatial prototypes and linearly interpolates the k-NN prediction with that of the original 3D network. By providing complementary characteristics, the proposed SN-Adapter serves as a plug-and-play module that economically improves performance in a non-parametric manner. More importantly, the SN-Adapter generalizes effectively to various 3D tasks, including shape classification, part segmentation, and 3D object detection, demonstrating its superiority and robustness. We hope our approach offers a new perspective on point cloud analysis and facilitates future research.
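A minimal sketch of the interpolation described above: retrieve the k nearest prototypes from pre-extracted training features and blend their weighted label vote with the trained network's own prediction. Variable names, the distance metric, and the weighting scheme are illustrative assumptions:

# k-NN retrieval over spatial prototypes plus linear interpolation with the
# original network's prediction (illustrative sketch for classification).
import torch
import torch.nn.functional as F

def sn_adapter_predict(test_feat, proto_feats, proto_labels, net_logits,
                       num_classes, k=16, alpha=0.5):
    # test_feat:    (D,)   feature of the test point cloud from the trained 3D net
    # proto_feats:  (M, D) features of the training set ("spatial prototypes")
    # proto_labels: (M,)   integer labels of the prototypes
    # net_logits:   (C,)   the original network's prediction for the test sample
    dists = torch.cdist(test_feat[None], proto_feats)[0]          # (M,) distances
    knn = dists.topk(k, largest=False)                            # k nearest prototypes
    weights = torch.softmax(-knn.values, dim=0)                   # closer -> larger weight
    knn_probs = torch.zeros(num_classes).scatter_add_(
        0, proto_labels[knn.indices], weights)                    # weighted k-NN vote
    net_probs = F.softmax(net_logits, dim=0)
    return alpha * net_probs + (1 - alpha) * knn_probs            # linear interpolation

Because the prototypes are extracted once from the training set, this adds no trainable parameters and no re-training, only a nearest-neighbor lookup at inference time.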
Abstract: Pre-training on massive image data has become the de facto standard for learning robust 2D representations. In contrast, due to expensive data acquisition and annotation, the paucity of large-scale 3D datasets severely hinders the learning of high-quality 3D features. In this paper, we propose an alternative that obtains superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders, named I2P-MAE. During self-supervised pre-training, we leverage well-learned 2D knowledge to guide 3D masked autoencoding, which reconstructs the masked point tokens with an encoder-decoder architecture. Specifically, we first utilize off-the-shelf 2D models to extract multi-view visual features of the input point cloud, and then conduct two types of image-to-point learning schemes on top. First, we introduce a 2D-guided masking strategy that keeps semantically important point tokens visible to the encoder. Compared to random masking, the network can better concentrate on significant 3D structures and recover the masked tokens from key spatial cues. Second, we enforce these visible tokens to reconstruct the corresponding multi-view 2D features after the decoder. This enables the network to effectively inherit high-level 2D semantics learned from rich image data for discriminative 3D modeling. Aided by our image-to-point pre-training, the frozen I2P-MAE, without any fine-tuning, achieves 93.4% accuracy for linear SVM on ModelNet40, competitive with the fully trained results of existing methods. By further fine-tuning on ScanObjectNN's hardest split, I2P-MAE attains state-of-the-art 90.11% accuracy, +3.68% over the second best, demonstrating superior transfer capability. Code will be available at https://github.com/ZrrSkywalker/I2P-MAE.
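A minimal sketch of the 2D-guided masking idea: score each point token by the saliency of its aggregated multi-view 2D features, keep the highest-scoring tokens visible to the encoder, and mask the rest. The saliency definition (feature norm), the top-k selection rule, and the 60% mask ratio are illustrative assumptions rather than the paper's exact procedure:

# 2D-guided masking sketch: semantically salient tokens stay visible (illustrative only).
import torch

def two_d_guided_mask(token_2d_feats, mask_ratio=0.6):
    # token_2d_feats: (T, D) multi-view 2D features aggregated per point token.
    T = token_2d_feats.shape[0]
    saliency = token_2d_feats.norm(dim=-1)                 # (T,) per-token importance score
    num_visible = max(1, int(T * (1 - mask_ratio)))
    visible_idx = saliency.topk(num_visible).indices       # keep the most salient tokens
    mask = torch.ones(T, dtype=torch.bool)                 # True = masked token
    mask[visible_idx] = False
    return visible_idx, mask

The visible tokens would then pass through the encoder, while the decoder is trained both to reconstruct the masked point tokens and to regress the multi-view 2D features of the visible ones.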