Abstract: Conversational User Interfaces such as Voice Assistants (VAs) are hugely popular. Yet they are designed to be monolingual by default, lacking support for, or sensitivity to, the bilingual dialogue experience. In this provocation paper, we highlight the language production challenges faced by bilingual users in VA interaction. We argue that, by facilitating phenomena seen in bilingual interaction, such as code-switching, we can foster a more inclusive and improved user experience for bilingual users. We also explore ways this might be achieved, through support for multi-language recognition as well as sensitivity to users' code-switching preferences in speech output.
Abstract: Neural language modelling has advanced the state of the art in a range of downstream Natural Language Processing (NLP) tasks. One such area is open-domain dialog modelling, where neural dialog models based on GPT-2, such as DialoGPT, have shown promising performance in single-turn conversation. However, such neural dialog models have been criticized for generating responses which, although they may be relevant to the previous human turn, tend to quickly dissipate human interest and descend into trivial conversation. One reason for this is the lack of an explicit conversation strategy in human-machine conversation. Humans employ a range of conversation strategies while engaging in conversation; one key social strategy is self-disclosure (SD), the phenomenon of revealing information about oneself to others. Social penetration theory (SPT) proposes that communication between two people moves from shallow to deeper levels as the relationship progresses, primarily through self-disclosure. Disclosure helps create rapport among the participants in a conversation. In this paper, a Self-Disclosure Enhancement Architecture (SDEA) is introduced that utilizes a Self-Disclosure Topic Model (SDTM) during the inference stage of a neural dialog model to re-rank response candidates, enhancing self-disclosure in the model's single-turn responses.
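The abstract does not give implementation details, but the core idea is inference-time re-ranking of generated candidates by a self-disclosure score. The sketch below illustrates this pattern under assumptions: it samples candidate responses from DialoGPT via the Hugging Face transformers library, and `score_self_disclosure` is a hypothetical placeholder standing in for the SDTM-based scoring that SDEA actually uses.

```python
# Minimal sketch of inference-time re-ranking for self-disclosure.
# NOTE: score_self_disclosure is a placeholder heuristic, not the SDTM used in SDEA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")


def score_self_disclosure(text: str) -> float:
    """Placeholder scorer: fraction of first-person tokens as a crude proxy.
    In SDEA this role would be played by the Self-Disclosure Topic Model (SDTM)."""
    first_person = {"i", "my", "me", "mine", "myself"}
    tokens = text.lower().split()
    return sum(t in first_person for t in tokens) / max(len(tokens), 1)


def rerank_by_disclosure(user_turn: str, num_candidates: int = 8) -> str:
    # Generate several candidate responses by sampling from the dialog model.
    input_ids = tokenizer.encode(user_turn + tokenizer.eos_token, return_tensors="pt")
    outputs = model.generate(
        input_ids,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=40,
        num_return_sequences=num_candidates,
        pad_token_id=tokenizer.eos_token_id,
    )
    candidates = [
        tokenizer.decode(out[input_ids.shape[-1]:], skip_special_tokens=True)
        for out in outputs
    ]
    # Re-rank candidates and return the one with the highest (placeholder) score.
    return max(candidates, key=score_self_disclosure)


print(rerank_by_disclosure("How was your weekend?"))
```

In practice the re-ranking criterion would combine the model's own likelihood with the self-disclosure score rather than relying on the disclosure score alone; the weighting is not specified in the abstract.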