Abstract: A growing body of work has focused on text classification methods for detecting the increasing amount of hate speech posted online. This progress has been limited to a select number of highly resourced languages, causing detection systems either to under-perform or to not exist in limited-data contexts. This is largely caused by a lack of training data, which is expensive to collect and curate in these settings. In this work, we propose a data augmentation approach that addresses the scarcity of data for online hate speech detection in limited-data contexts using synthetic data generation techniques. Given a handful of hate speech examples in a high-resource language such as English, we present three methods to synthesize new examples of hate speech data in a target language that retain the hate sentiment of the original examples but transfer the hate targets. We apply our approach to generate training data for hate speech classification tasks in Hindi and Vietnamese. Our findings show that a model trained on synthetic data performs comparably to, and in some cases outperforms, a model trained only on the samples available in the target domain. This method can be adopted to bootstrap hate speech detection models from scratch in limited-data contexts. As the growth of social media within these contexts continues to outstrip response efforts, this work furthers our capacity to detect, understand, and respond to hate speech.
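The core augmentation idea described above, keeping the hateful sentiment of a seed example while swapping in context-relevant targets, can be sketched as a simple placeholder substitution. This is a minimal illustration only: the function names, the placeholder convention, and the benign example strings are assumptions, not the paper's actual implementation, which also involves translation into the target language.

```python
# Hypothetical sketch of sentiment-preserving target transfer.
# A seed example carries a [TARGET] placeholder; synthesis swaps in
# targets relevant to the new (limited-data) context.

def transfer_target(example: str, source_target: str, new_target: str) -> str:
    """Swap the hate target while keeping the sentiment-bearing text."""
    return example.replace(source_target, new_target)

def augment(examples, source_target, context_targets):
    """Produce one synthetic example per context-relevant target."""
    return [
        transfer_target(example, source_target, target)
        for example in examples
        for target in context_targets
    ]

# Benign placeholder seed, used only to show the mechanics.
seed = ["[TARGET] should not be allowed here"]
synthetic = augment(seed, "[TARGET]", ["group_a", "group_b"])
print(synthetic)  # two new examples, same sentiment, new targets
```

In the paper's setting, a translation step into the target language (e.g., Hindi or Vietnamese) would follow this substitution before the synthetic examples are used for training.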
Abstract: Several changes occur in the brain in response to voluntary and involuntary activities performed by a person. The ability to retrieve data from the brain within a time window provides a basis for in-depth analyses that offer insight into what changes occur in the brain during its decision-making processes. In this work, we present the technical description and software implementation of an electroencephalography (EEG) based intelligent communication system. We use dry EEG sensors to read brainwave data in real time, from which we compute the likelihood that a person has made a voluntary eye blink, and we use that decision to trigger buttons on a user interface to produce text via a modification of the T9 algorithm. Our results indicate that EEG-based technology can be effectively applied to facilitate speech for people with severe speech and muscular disabilities, providing a foundation for future work in the area.
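The blink-to-button pipeline described above can be sketched as a threshold test over a window of EEG samples that, when triggered, selects the currently highlighted T9 key. This is a hedged sketch under stated assumptions: the threshold value, window representation, and key-selection interface are illustrative, not the paper's actual parameters or sensor API.

```python
# Hypothetical sketch: amplitude-threshold blink detection driving a
# T9-style key selection. Threshold and window size are assumptions.

def is_blink(window, threshold=80.0):
    """Flag a voluntary blink when peak amplitude exceeds the threshold."""
    return max(abs(sample) for sample in window) > threshold

# Standard T9 keypad groups (truncated here for brevity).
T9_KEYS = {"2": "abc", "3": "def", "4": "ghi"}

def select_on_blink(window, highlighted_key):
    """Return the highlighted key's letters if a blink is detected."""
    return T9_KEYS[highlighted_key] if is_blink(window) else None

print(select_on_blink([5.0, 120.0, 3.0], "2"))  # "abc"
print(select_on_blink([5.0, 10.0, 3.0], "2"))   # None
```

A real system would replace the raw threshold with a likelihood estimate over streaming sensor data, as the abstract describes, but the control flow from detection to key selection is the same.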