Being able to express our thoughts, feelings, and ideas to one another is essential for human survival and development. A considerable portion of the population faces communication barriers in environments where hearing is the primary means of communication, which adversely affects their daily activities. An effective automatic sign language recognition system can significantly reduce this barrier. To address the issue, we propose a large-scale dataset, the Multi-View Bangla Sign Language dataset (MV-BSL), which consists of 115 glosses and 350 isolated words in 15 different categories. Furthermore, we build a recurrent neural network (RNN) with an attention-based bidirectional gated recurrent unit (Bi-GRU) architecture that models the temporal dynamics of the pose information of an individual communicating through sign language. Human pose information has proven effective in analyzing sign patterns because it ignores a signer's body appearance and environmental information while capturing the true movement information; this makes the proposed model simpler and faster while achieving state-of-the-art accuracy.
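
To make the described architecture concrete, the following is a minimal PyTorch sketch of an attention-based Bi-GRU classifier over per-frame pose keypoints. It is an illustration under stated assumptions, not the authors' exact implementation: the class name `PoseBiGRU`, the keypoint count, the hidden size, and the use of simple additive attention with softmax pooling over time are all assumptions; only the output size of 350 classes follows the 350 isolated words in MV-BSL.

```python
# Minimal sketch (not the authors' exact code) of an attention-based
# Bi-GRU classifier over per-frame pose keypoints. The keypoint count,
# hidden size, and attention form are illustrative assumptions.
import torch
import torch.nn as nn


class PoseBiGRU(nn.Module):
    def __init__(self, num_keypoints=33, hidden_size=128, num_classes=350):
        super().__init__()
        in_dim = num_keypoints * 2  # (x, y) per keypoint, flattened per frame
        self.bigru = nn.GRU(in_dim, hidden_size,
                            batch_first=True, bidirectional=True)
        # Additive attention: score each time step, then softmax over time.
        self.attn = nn.Linear(2 * hidden_size, 1)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, frames, num_keypoints * 2) pose-keypoint sequence
        h, _ = self.bigru(x)                    # (batch, frames, 2*hidden)
        scores = self.attn(h)                   # (batch, frames, 1)
        weights = torch.softmax(scores, dim=1)  # attention over time steps
        context = (weights * h).sum(dim=1)      # weighted temporal pooling
        return self.classifier(context)         # (batch, num_classes)


# Usage: a batch of 4 clips, 60 frames each, 33 (x, y) keypoints per frame.
model = PoseBiGRU()
logits = model(torch.randn(4, 60, 33 * 2))
print(logits.shape)  # torch.Size([4, 350])
```

Operating on pose keypoints rather than raw video keeps the input dimension small, which is consistent with the abstract's claim that pose-based modeling yields a simpler and faster model.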