Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user's perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances over the past ten years, gesture recognition has lagged behind. Sign Languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one that extends to their lexical and syntactic levels of organization. Utterances in sign language have attracted significant interest in the Automatic Natural Language Processing (ANLP) domain. In this work, we address sign language recognition, in particular French Sign Language (FSL). FSL has its own specificities, such as the simultaneity of several parameters, the important role of facial expression and movement, and the use of space to organize the utterance. Unlike speech, FSL events occur both sequentially and simultaneously; the computational processing of FSL is therefore more complex than that of spoken languages. We present a novel approach based on Hidden Markov Models (HMM) to reduce the recognition complexity.
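To make the role of HMMs concrete, the sketch below is a generic illustration (not the method proposed in this work): it scores an observation sequence against a discrete HMM with the forward algorithm, the basic operation behind likelihood-based classification of isolated signs, where one HMM is trained per sign and the model with the highest likelihood wins. All model sizes and probability values are purely illustrative.

```python
import numpy as np

def forward_log_likelihood(obs, start_prob, trans_prob, emit_prob):
    """Log-likelihood log P(obs | HMM) via the scaled forward algorithm.

    obs        : sequence of observation symbol indices, length T
    start_prob : (N,)   initial state distribution
    trans_prob : (N, N) state transition matrix, rows sum to 1
    emit_prob  : (N, M) emission matrix over M discrete symbols
    """
    alpha = start_prob * emit_prob[:, obs[0]]      # alpha_1(i)
    log_lik = 0.0
    for t in range(1, len(obs)):
        scale = alpha.sum()                        # rescale to avoid underflow
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ trans_prob * emit_prob[:, obs[t]]
    return log_lik + np.log(alpha.sum())

# Toy example: one 2-state HMM over 3 observation symbols
# (in practice there would be one such model per sign).
obs = [0, 1, 1, 2]
pi  = np.array([0.6, 0.4])
A   = np.array([[0.7, 0.3],
                [0.2, 0.8]])
B   = np.array([[0.5, 0.4, 0.1],
                [0.1, 0.3, 0.6]])
print(forward_log_likelihood(obs, pi, A, B))
```

In a recognition setting, the sequence of extracted gesture features would be scored against every sign model in this way and the best-scoring model selected; handling FSL's simultaneous parameters requires richer modeling than this single-channel sketch.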