Abstract: Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user's perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances in the past ten years, gesture recognition has lagged behind. Sign languages (SL) are the most accomplished forms of gestural communication, so their automatic analysis is a real challenge, one closely tied to their lexical and syntactic levels of organization. Statements in sign language are of significant interest in the Automatic Natural Language Processing (ANLP) domain. In this work we address sign language recognition, in particular of French Sign Language (FSL). FSL has its own specificities, such as the simultaneity of several parameters, the important role of facial expression and movement, and the use of space for proper utterance organization. Unlike speech, FSL events occur both sequentially and simultaneously; the computational processing of FSL is therefore more complex than that of spoken languages. We present a novel approach based on Hidden Markov Models (HMMs) to reduce recognition complexity.
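As a minimal sketch of how HMM-based sign recognition can work, the Python snippet below scores an observation sequence against one HMM per sign with the scaled forward algorithm and returns the most likely sign. The model parameters and the quantization of FSL features into discrete observation symbols are assumptions made for illustration; the abstract does not specify the actual model structure used.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow.
    obs: list of observation symbol indices (quantized gesture features)
    pi:  (N,) initial state probabilities
    A:   (N, N) transitions, A[i, j] = P(state j | state i)
    B:   (N, M) emissions,   B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale at every step
    return loglik

def classify(obs, models):
    """Pick the sign whose HMM assigns the highest likelihood.
    models maps a sign label to its (pi, A, B) triple (hypothetical)."""
    return max(models, key=lambda sign: forward_loglik(obs, *models[sign]))
```

In practice one such model would be trained per lexical sign (e.g. with Baum-Welch) and an unknown gesture sequence assigned to the best-scoring model; handling the simultaneity of FSL parameters would require richer features or parallel channels than this single-stream sketch shows.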
Abstract: Modern computational linguistic software cannot yet reproduce important aspects of sign language translation. A review of existing systems shows that most automatic sign language translation systems ignore many of these aspects when generating animation, so the interpretation loses part of the intended meaning. Our goals are: to translate written text from any language into ASL animation; to model as much of the raw information as possible using machine learning and computational techniques; and to produce natural-looking, understandable ASL animations in a more adapted and expressive form. Our methods include linguistic annotation of the input text and semantic orientation to generate facial expressions. We use genetic algorithms coupled with learning/recognition systems to produce the most natural form, and we rely on fuzzy logic to detect emotion and to compute the degree of interpolation between facial expressions. Finally, we present a new expressive language, Text Adapted Sign Modeling Language (TASML), that captures as many aspects of natural sign language interpretation as possible. This paper is organized as follows: the next section presents, based on experimentation, the effect of the Space/Time/SVO form on the comprehension of ASL animation. Section 3 describes our technical considerations. Section 4 presents the general approach we adopted to develop our tool. Finally, we give some perspectives and future work.
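To illustrate the fuzzy-logic step, the sketch below maps a detected emotion intensity to an interpolation degree between a neutral and a target facial expression using triangular membership functions and a weighted-average defuzzification. The fuzzy partition and the representative blend weights are hypothetical choices for illustration, since the abstract does not give the actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def blend_degree(intensity):
    """Map an emotion intensity in [0, 1] to a fuzzy interpolation degree
    between the neutral and the target facial expression."""
    # Hypothetical fuzzy sets over intensity: low / medium / high.
    low  = tri(intensity, -0.5, 0.0, 0.5)
    mid  = tri(intensity,  0.0, 0.5, 1.0)
    high = tri(intensity,  0.5, 1.0, 1.5)
    # Defuzzify: weighted average of representative blend weights.
    return (low * 0.1 + mid * 0.5 + high * 0.9) / (low + mid + high)

# blend_degree(0.0) -> 0.1, blend_degree(0.5) -> 0.5, blend_degree(1.0) -> 0.9
```

The resulting degree would drive the morph between expression key poses in the animation, giving a smooth, graded facial expression rather than a hard switch between discrete emotions.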
Abstract: This work aims to design a statistical machine translation system from English text to American Sign Language (ASL). The system is based on the Moses toolkit, with some modifications, and the results are synthesized through a 3D avatar for interpretation. First, we translate the input text into gloss, a written form of ASL. Second, we pass the output to the WebSign plug-in to play the signs. The contributions of this work are the use of a new language pair, English/ASL, and an improvement to statistical machine translation based on string matching using the Jaro distance.
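For reference, the Jaro similarity mentioned above is a standard string-matching measure; the self-contained sketch below computes it, though the abstract does not detail exactly where in the Moses pipeline the matching is applied.

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity in [0, 1]; e.g. jaro("MARTHA", "MARHTA") ~= 0.944."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1          # matching window half-width
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):                 # find matching characters
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    transpositions, k = 0, 0                   # count out-of-order matches
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len1 + matches / len2
            + (matches - transpositions) / matches) / 3.0
```

A plausible use, consistent with the abstract, is matching out-of-vocabulary English words or glosses against the training lexicon so that near-identical strings still receive a translation.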