A large amount of user feedback has been collected over the years, and many feedback analysis models have been developed, most of them focusing on the English language. Recognizing and classifying feedback is both challenging and crucial in languages that lack the corpora and tools commonly employed in Natural Language Processing (e.g., vocabulary corpora and sentence structure rules). In this paper, we study feedback classification in the Mongolian language using two different word embeddings for deep learning and compare the results of the proposed approaches. We use feedback data written in Cyrillic collected from 2012 to 2018. The results indicate that word embeddings trained on our own dataset improve the proposed deep learning model, achieving the best accuracies of 80.1% and 82.7% on the two classification tasks.
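As a rough illustration of the pipeline described above, and not the paper's exact architecture, the following sketch trains Word2Vec embeddings on an in-domain Cyrillic feedback corpus with gensim and feeds them into a small Keras classifier; the corpus, labels, and hyperparameters are placeholders.

```python
# A minimal sketch (assumed setup, not the authors' model): in-domain word
# embeddings + a small deep learning classifier for feedback text.
import numpy as np
from gensim.models import Word2Vec
import tensorflow as tf

# Toy tokenized feedback sentences (placeholder for the Cyrillic feedback corpus).
corpus = [
    ["сайн", "үйлчилгээ", "байна"],
    ["маш", "муу", "хариулт", "өгсөн"],
]
labels = np.array([1, 0])  # e.g. positive / negative feedback

# 1. Train word embeddings on the dataset itself.
w2v = Word2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=20)

# 2. Build a word index and an embedding matrix for the Keras model (index 0 = padding).
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}
emb = np.zeros((len(vocab) + 1, 100))
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

x = tf.keras.preprocessing.sequence.pad_sequences(
    [[vocab[w] for w in sent] for sent in corpus], maxlen=10
)

# 3. A small classifier on top of the pretrained, frozen embeddings.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        input_dim=emb.shape[0], output_dim=100,
        embeddings_initializer=tf.keras.initializers.Constant(emb),
        trainable=False,
    ),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=3, verbose=0)
```

The same classifier can be paired with externally pretrained embeddings instead of the in-domain ones, which is the kind of comparison the abstract refers to.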