Modern computational linguistics software still fails to capture important aspects of sign language translation. A review of prior work shows that most automatic sign language translation systems ignore many of these aspects when generating animation, so the resulting interpretation loses part of the original meaning. Our goals are: to translate written text from any source language into ASL animation; to model as much of the raw information as possible using machine learning and computational techniques; and to produce natural-looking, understandable, and expressive ASL animations. Our method combines linguistic annotation of the input text with semantic orientation to generate facial expressions. We use genetic algorithms coupled with learning and recognition systems to produce the most natural form, and we rely on fuzzy logic for emotion detection, which yields the degree of interpolation between facial expressions. Finally, we introduce a new expressive language, the Text Adapted Sign Modeling Language (TASML), which describes the aspects needed for a natural sign language interpretation.

This paper is organized as follows: the next section presents an experimental study of the effect of Space/Time/SVO structure on the comprehension of ASL animation. Section 3 describes our technical considerations. Section 4 presents the general approach we adopted to develop our tool. Finally, we give some perspectives and future work.
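To make the fuzzy-logic idea mentioned above more concrete, the following is a minimal sketch, not the system described in this paper: it assumes a single emotion-intensity input in [0, 1], three illustrative fuzzy sets (low, medium, high), and hypothetical facial-control vectors `neutral_pose` and `target_pose`; the defuzzified value is used as the degree of interpolation between the two expressions.

```python
# Illustrative sketch only (assumed names and rule values, not the authors' implementation):
# fuzzy membership of an emotion intensity drives the blend weight between
# a neutral face and a target facial expression.

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def interpolation_degree(emotion_intensity):
    """Map an emotion intensity in [0, 1] to a blend weight via three fuzzy rules."""
    low = triangular_membership(emotion_intensity, 0.0, 0.0, 0.5)
    medium = triangular_membership(emotion_intensity, 0.0, 0.5, 1.0)
    high = triangular_membership(emotion_intensity, 0.5, 1.0, 1.0)
    # Defuzzify with a weighted average of assumed rule outputs (weak / half / full blend).
    total = low + medium + high
    return (low * 0.2 + medium * 0.5 + high * 1.0) / total if total else 0.0

def blend_expression(neutral_pose, target_pose, emotion_intensity):
    """Linearly interpolate facial-control values by the fuzzy blend weight."""
    t = interpolation_degree(emotion_intensity)
    return [(1 - t) * n + t * e for n, e in zip(neutral_pose, target_pose)]

# Example: a moderately strong emotion blended between two facial control vectors.
print(blend_expression([0.0, 0.1], [1.0, 0.8], 0.6))
```

In a full system the blend weight would be applied per facial channel (eyebrows, mouth, head tilt) rather than to a flat vector, but the principle of mapping a fuzzy emotion degree to an interpolation factor is the same.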