Abstract: Speech-driven gesture synthesis is a field of growing interest in virtual human creation. A critical challenge, however, is the inherently intricate one-to-many mapping between speech and gestures. Previous studies have explored generative models and achieved significant progress, yet most synthesized gestures remain noticeably less natural than human motion. This paper presents DiffMotion, a novel speech-driven gesture synthesis architecture based on diffusion models. The model comprises an autoregressive temporal encoder and a denoising diffusion probabilistic module. The encoder extracts the temporal context of the speech input and historical gestures. The diffusion module learns a parameterized Markov chain that gradually converts a simple distribution into a complex one and generates gestures conditioned on the accompanying speech. Objective and subjective evaluations against baselines confirm that our approach produces natural and diverse gesticulation and demonstrate the benefits of diffusion-based models for speech-driven gesture synthesis.
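To make the diffusion component concrete, the following is a minimal, illustrative PyTorch sketch of one conditional denoising training step, in the spirit of a denoising diffusion probabilistic model whose noise predictor is conditioned on an encoder context built from speech and past gestures. It is not the authors' DiffMotion implementation: the module name GestureDenoiser, the tensor dimensions, and the linear noise schedule are assumptions made here for illustration.

# Illustrative sketch only; names and dimensions are assumptions, not DiffMotion code.
import torch
import torch.nn as nn

class GestureDenoiser(nn.Module):
    """Predicts the noise added to a gesture frame, conditioned on a
    temporal context vector derived from speech and past gestures."""
    def __init__(self, pose_dim=45, ctx_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + ctx_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, noisy_pose, t, context):
        # t is the diffusion step index, normalized and broadcast per sample.
        t_embed = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([noisy_pose, t_embed, context], dim=-1))

# Standard DDPM-style linear noise schedule.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, pose, context):
    """One denoising training step: corrupt the pose at a random step t and
    regress the injected noise from the noisy pose plus the speech context."""
    b = pose.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(pose)
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noisy_pose = a_bar.sqrt() * pose + (1.0 - a_bar).sqrt() * noise
    pred = model(noisy_pose, t, context)
    return nn.functional.mse_loss(pred, noise)

# Toy usage with random tensors standing in for real data.
model = GestureDenoiser()
pose = torch.randn(8, 45)          # one gesture frame per sample
context = torch.randn(8, 128)      # encoder output (speech + past gestures)
loss = training_step(model, pose, context)
loss.backward()

At generation time the learned reverse chain would start from Gaussian noise and denoise step by step, each step conditioned on the same speech context, which is how the simple distribution is gradually converted into the complex gesture distribution described above.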
Abstract: The Proximity Distribution Kernel (PDK) is an effective method for bag-of-features based image representation. In this paper, we investigate the soft assignment of visual words to image features for proximity distributions. A visual-word contribution function is proposed to model ambiguous proximity distributions, and three such distributions are developed from three ambiguous contribution functions. Experiments are conducted on both classification and retrieval of medical image data sets. The results show that the proposed methods perform better than, or comparably to, state-of-the-art bag-of-features based image representation methods.
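As an illustration of soft assignment for proximity distributions, the following NumPy sketch accumulates an ambiguous proximity distribution under an assumed Gaussian contribution function. The function names (soft_assignment, ambiguous_proximity_distribution), the sigma parameter, and the neighborhood size r are assumptions made here for illustration, not the paper's exact formulation.

# Illustrative sketch only; the Gaussian contribution function is an assumption.
import numpy as np

def soft_assignment(features, codebook, sigma=1.0):
    """Contribution of each local feature to every visual word, modeled as a
    normalized Gaussian of the feature-to-word distance (soft assignment)."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)   # rows sum to 1

def ambiguous_proximity_distribution(features, positions, codebook, r=5, sigma=1.0):
    """For every visual-word pair (i, j), accumulate the soft contributions of
    word j among the r spatially nearest neighbors of features that
    contribute to word i."""
    k = codebook.shape[0]
    contrib = soft_assignment(features, codebook, sigma)     # (n, k)
    hist = np.zeros((k, k))
    for a in range(features.shape[0]):
        # spatial r-nearest neighbors of feature a (excluding itself)
        dists = np.linalg.norm(positions - positions[a], axis=1)
        neighbors = np.argsort(dists)[1:r + 1]
        for b in neighbors:
            hist += np.outer(contrib[a], contrib[b])
    return hist / max(hist.sum(), 1e-12)

# Toy usage: 50 random descriptors at random image positions, 8 visual words.
rng = np.random.default_rng(0)
features = rng.normal(size=(50, 16))
positions = rng.uniform(size=(50, 2))
codebook = rng.normal(size=(8, 16))
pd = ambiguous_proximity_distribution(features, positions, codebook)
print(pd.shape)   # (8, 8)

Replacing hard assignment with such a contribution function lets a feature that lies near several visual-word centers spread its vote across them, which is the sense in which the resulting proximity distribution is "ambiguous".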