Abstract: Opinion phrase extraction is one of the key tasks in fine-grained sentiment analysis. While opinion expressions can be generic subjective expressions, aspect-specific opinion expressions involve both the aspect and the opinion expression within the original sentence context. In this work, we formulate the task as an instance of token-level sequence labeling. When multiple aspects are present in a sentence, detecting opinion phrase boundaries becomes difficult, and the label of each word depends not only on the surrounding words but also on the concerned aspect. We propose a neural network architecture with a bidirectional LSTM (Bi-LSTM) and a novel attention mechanism. The Bi-LSTM layer learns sequential patterns among the words without requiring any hand-crafted features. The attention mechanism captures the importance of context words for a particular aspect's opinion expression, when multiple aspects are present in a sentence, via location- and content-based memory. A Conditional Random Field (CRF) model is incorporated in the final layer to explicitly model the dependencies among the output labels. Experimental results on a Hotel dataset from Tripadvisor.com show that our approach outperforms several state-of-the-art baselines.
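The abstract mentions attention over context words driven by both location and content, but gives no formulas. As an illustration only, here is a minimal NumPy sketch of one plausible scoring scheme (the function name, the additive content-plus-location score, and the decay rate `gamma` are all assumptions, not the authors' actual model):

```python
import numpy as np

def aspect_attention(hidden_states, aspect_vec, aspect_pos, gamma=0.5):
    """Toy content- plus location-based attention over Bi-LSTM outputs.

    hidden_states: (n_words, d) per-word hidden vectors (random stand-ins here)
    aspect_vec:    (d,) vector representing the aspect term
    aspect_pos:    index of the aspect word in the sentence
    Content score: dot product of each word's state with the aspect vector.
    Location score: penalty growing with distance from the aspect word,
    so words near the concerned aspect receive higher weight.
    """
    n_words = hidden_states.shape[0]
    content = hidden_states @ aspect_vec                       # (n_words,)
    location = -gamma * np.abs(np.arange(n_words) - aspect_pos)
    scores = content + location
    weights = np.exp(scores - scores.max())                    # stable softmax
    weights /= weights.sum()
    return weights

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))   # 5 words, 8-dim hidden states
a = rng.standard_normal(8)        # aspect representation
w = aspect_attention(H, a, aspect_pos=2)
```

The resulting weights form a distribution over the sentence's words; in a full model they would reweight the Bi-LSTM states before the CRF layer.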
Abstract: Stickers are popularly used in messaging apps such as Hike to visually express a nuanced range of thoughts and utterances and to convey exaggerated emotions. However, discovering the right sticker at the right time in a chat, from a large and ever-expanding pool of stickers, can be cumbersome. In this paper, we describe a system for recommending stickers as users chat, based on what the user is typing and the conversational context. We decompose the sticker recommendation problem into two steps. First, we predict the next message that the user is likely to send in the chat. Second, we substitute the predicted message with an appropriate sticker. The majority of Hike's users transliterate messages from their native language to English. This leads to numerous orthographic variations of the same message and thus complicates message prediction. To address this issue, we cluster messages that have the same meaning and predict the message cluster instead of the message. We experiment with different approaches to training embeddings for chat messages and study their efficacy in learning similar dense representations for messages that have the same intent. We propose a novel hybrid message prediction model that can run with low latency on low-end phones with severe computational limitations.
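The abstract describes grouping orthographic variants of transliterated messages into clusters and predicting the cluster rather than the raw message. As an illustration only, this toy sketch clusters variants by a crude normalization rule (lowercasing, dropping punctuation, squeezing repeated letters); the actual system learns dense embeddings, so the normalization below is purely a hypothetical stand-in:

```python
import re
from collections import defaultdict

def normalize(msg):
    """Collapse common orthographic variation in transliterated chat text:
    lowercase, drop punctuation/digits, squeeze repeated letters ('hoo' -> 'ho')."""
    msg = re.sub(r"[^a-z ]", "", msg.lower())
    msg = re.sub(r"(.)\1+", r"\1", msg)
    return msg.strip()

def cluster_messages(messages):
    """Group messages whose normalized forms coincide into one cluster."""
    clusters = defaultdict(list)
    for m in messages:
        clusters[normalize(m)].append(m)
    return dict(clusters)

# Hypothetical chat messages: three variants of one greeting, two of another.
msgs = ["kaise ho", "kaisee hoo", "Kaise ho?", "good morning", "goood morning"]
clusters = cluster_messages(msgs)
```

A cluster-level predictor then only has to rank these canonical keys, after which any sticker mapped to the predicted cluster can be recommended.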