Abstract: Visual search plays an essential role in e-commerce. To meet users' search demands and improve the shopping experience at Alibaba, visual search relevance for real-shot images has become the bottleneck. The traditional visual search paradigm is usually based on supervised learning with labeled data. However, it requires large-scale categorical labels from expensive human annotation, which limits its applicability and often fails to distinguish real-shot images. In this paper, we propose to discover Virtual IDs from user click behavior to improve visual search relevance at Alibaba. As a purely click-data-driven approach, we collect various types of click data for training deep networks without any human annotation. In particular, Virtual IDs are learned as classification supervision with co-click embedding, which exploits image relationships derived from user co-click behavior to guide category prediction and feature learning. Concretely, we deploy a Virtual ID Category Network that integrates first-clicks and switch-clicks as a regularizer. Incorporating triplet and list constraints, the Virtual ID Feature Network is trained in a joint classification and ranking manner. Benefiting from the exploration of user click data, our networks encode richer supervision and better distinguish real-shot images in terms of both category and feature. To validate our method for visual search relevance, we conduct an extensive set of offline and online experiments on collected real-shot images. We consistently achieve better results across all components compared with alternative and state-of-the-art methods.
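A minimal sketch (in PyTorch) of the joint classification-and-ranking training described above, assuming a generic CNN backbone: the class name, the co-click triplet sampling, and the margin and loss weight are illustrative assumptions rather than the paper's actual implementation.

import torch.nn as nn
import torch.nn.functional as F

class VirtualIDFeatureNet(nn.Module):
    # Hypothetical network: a backbone produces an embedding used for
    # retrieval, and a linear head predicts the click-derived Virtual ID.
    def __init__(self, backbone, feat_dim, num_virtual_ids):
        super().__init__()
        self.backbone = backbone
        self.classifier = nn.Linear(feat_dim, num_virtual_ids)

    def forward(self, x):
        feat = F.normalize(self.backbone(x), dim=1)  # retrieval embedding
        logits = self.classifier(feat)               # Virtual ID prediction
        return feat, logits

def joint_loss(model, anchor, positive, negative, virtual_id_labels,
               margin=0.2, rank_weight=1.0):
    # Classification term: cross-entropy against Virtual ID labels.
    feat_a, logits = model(anchor)
    cls_loss = F.cross_entropy(logits, virtual_id_labels)
    # Ranking term: triplet loss, with the positive taken as a co-clicked
    # image and the negative as a non-co-clicked image.
    feat_p, _ = model(positive)
    feat_n, _ = model(negative)
    rank_loss = F.triplet_margin_loss(feat_a, feat_p, feat_n, margin=margin)
    return cls_loss + rank_weight * rank_loss

In this reading, the triplet term stands in for the pairwise part of the ranking objective; the list constraints mentioned in the abstract would add a listwise term over ranked click lists.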
Abstract: As emojis are widely used on social media, people not only use a single emoji to express emotions or refer to things but also combine multiple emojis to represent complicated emotions, concepts, or activities. In this work, we study how emoji combinations, i.e., consecutive emoji sequences, are used like a new language. We propose a novel algorithm called Retrieval Strategy to predict which emoji combination follows a given short text as context. Our algorithm treats emoji combinations as phrases in a language, ranking candidate emoji combinations the way words are retrieved from a dictionary. We show that our algorithm substantially improves the F1 score from 0.141 to 0.204 on the emoji combination prediction task.
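The retrieval idea can be sketched as follows: treat each emoji combination as a dictionary entry with a vector representation, and rank all entries by similarity to an embedding of the text context. The embedding choices, similarity measure, and candidate list below are illustrative assumptions, not the paper's actual method.

import numpy as np

def rank_combinations(context_vec, combo_vecs, combos, top_k=5):
    # Rank candidate emoji combinations by cosine similarity to the context.
    c = context_vec / np.linalg.norm(context_vec)
    m = combo_vecs / np.linalg.norm(combo_vecs, axis=1, keepdims=True)
    scores = m @ c
    order = np.argsort(-scores)[:top_k]
    return [(combos[i], float(scores[i])) for i in order]

# Usage with toy embeddings (in practice, context_vec would come from a text
# encoder and combo_vecs from composing per-emoji vectors):
combos = ["😂😂😂", "🎉🎂🎁", "❤️🔥"]
rng = np.random.default_rng(0)
combo_vecs = rng.normal(size=(len(combos), 64))
context_vec = rng.normal(size=64)
print(rank_combinations(context_vec, combo_vecs, combos, top_k=2))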