The University of Chicago
Abstract: As a type of figurative language, an East Asian idiom condenses rich cultural background into only a few characters. Translating such idioms is challenging for human translators, who often resort to choosing a context-aware translation from an existing list of candidates. However, compiling a dictionary of candidate translations demands much time and creativity even from expert translators. To alleviate this burden, we evaluate whether GPT-4 can help generate high-quality translations. Based on automatic evaluations of faithfulness and creativity, we first identify Pareto-optimal prompting strategies that can outperform the translation engines from Google and DeepL. Then, at a low cost, our context-aware translations yield far more high-quality translations per idiom than the human baseline. We open-source all code and data to facilitate further research.
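A minimal sketch of the kind of prompting this abstract describes, assuming the OpenAI Python SDK; the model name, prompt wording, example idiom, and context are illustrative assumptions, not the paper's actual setup.

# Ask GPT-4 for several context-aware candidate translations of a Chinese idiom.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

idiom = "画蛇添足"  # "to draw a snake and add feet", i.e. to spoil something by overdoing it
context = "He kept revising the finished report until it read worse than before."

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0.9,  # higher temperature to encourage creative variants
    n=5,              # request several candidate translations at once
    messages=[
        {"role": "system",
         "content": "You are a literary translator of Chinese idioms into English."},
        {"role": "user",
         "content": f"Context: {context}\n"
                    f"Translate the idiom {idiom} so that it fits this context. "
                    f"Give one concise English rendering."},
    ],
)

for choice in response.choices:
    print(choice.message.content.strip())

Sampling several candidates per idiom is what makes a faithfulness/creativity trade-off measurable: each candidate can be scored separately and only the Pareto-optimal prompting strategies kept.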
Abstract: Masked language models pick up gender biases during pre-training. Such biases are usually attributed to a particular model architecture and its pre-training corpora, with the implicit assumption that other variations in the pre-training process, such as the choice of random seed or stopping point, have no effect on the biases measured. However, we show that severe fluctuations exist at the fundamental level of individual templates, invalidating this assumption. Further, contrary to the intuition of how humans acquire biases, these fluctuations are not correlated with the certainty of the predicted pronouns or with the profession frequencies in the pre-training corpora. We release our code and data to benefit future research.
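A minimal sketch of template-level bias measurement with a masked language model, assuming the Hugging Face transformers fill-mask pipeline; the model checkpoint and toy templates are illustrative assumptions, not the paper's template set.

# Read off per-template pronoun probabilities from a masked LM; repeating this
# across seeds or checkpoints is what exposes the template-level fluctuations.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The nurse said that [MASK] would be late.",
    "The engineer said that [MASK] would be late.",
]

for template in templates:
    # Restrict predictions to the two pronouns of interest.
    scores = {d["token_str"]: d["score"]
              for d in fill(template, targets=["he", "she"])}
    # The gap between P(he) and P(she) is one simple per-template bias score.
    print(template, scores)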
Abstract: Idioms are an important language phenomenon in Chinese, but idiom translation is notoriously hard. Current machine translation models perform poorly on idiom translation, and idioms are sparse in many translation datasets. We present PETCI, a parallel English translation dataset of Chinese idioms, aiming to improve idiom translation by both humans and machines. The dataset is built by leveraging both human and machine effort. Baseline generation models show an unsatisfactory ability to improve translations, but structure-aware classification models perform well at distinguishing good translations. Furthermore, the size of PETCI can be easily increased without expertise. Overall, PETCI can be helpful to both language learners and machine translation systems.