The natural exponential function is widely used in modeling many engineering and scientific systems. It is also an integral part of many neural network activation functions, such as the sigmoid, tanh, ELU, and RBF. Dedicated hardware accelerators and processors are designed for faster execution of such applications, and these accelerators can benefit immensely from an optimal implementation of the exponential function. For most applications, this can be achieved by exploiting the fact that the exponential function is evaluated far more often over the negative domain than over the positive domain. This paper presents an optimized implementation of the exponential function for variable-precision fixed-point negative inputs. The implementation presented here significantly reduces the number of multipliers and adders, and it is further optimized using a mixed word-length implementation of the series expansion. The reductions in area and power consumption exceed 30% and 50%, respectively, over a previous equivalent method.
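To make the general approach concrete, the following is a minimal software sketch of evaluating e^{-x} for a non-negative fixed-point x via range reduction (a small lookup table for the integer part combined with a truncated Taylor series for the fractional part). The Q16.16 format, the 16-entry table size, and the low-order series are illustrative assumptions only; the abstract does not specify the paper's actual word lengths, series order, or mixed word-length scheme.

```c
#include <stdint.h>
#include <stdio.h>

/* Q16.16 fixed-point: 16 integer bits, 16 fractional bits (assumed format). */
typedef int32_t q16_16;
#define Q_ONE (1 << 16)

/* Multiply two Q16.16 values; 64-bit intermediate avoids overflow. */
static q16_16 q_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16);
}

/* e^{-k} for integer k = 0..15, rounded to Q16.16.
 * (Hypothetical table sizing; the paper's input range is not stated.) */
static const q16_16 exp_int_lut[16] = {
    65536, 24109, 8870, 3263, 1200, 442, 162, 60,
    22, 8, 3, 1, 0, 0, 0, 0
};

/* e^{-x} for x >= 0 in Q16.16 (i.e., the negative-domain exponential),
 * split as e^{-x} = e^{-k} * e^{-f} with k = floor(x), f in [0,1).
 * The fractional factor uses a 4-term Taylor series,
 * e^{-f} ~= 1 - f + f^2/2 - f^3/6, which is low-order for brevity;
 * a hardware design would tune the series order and word lengths. */
q16_16 q_exp_neg(q16_16 x) {
    uint32_t k = (uint32_t)(x >> 16);   /* integer part of x */
    q16_16 f = x & (Q_ONE - 1);         /* fractional part of x */
    if (k >= 16) return 0;              /* result underflows to zero */

    q16_16 f2 = q_mul(f, f);
    q16_16 f3 = q_mul(f2, f);
    q16_16 series = Q_ONE - f + f2 / 2 - f3 / 6;

    return q_mul(exp_int_lut[k], series);
}

int main(void) {
    q16_16 x = (q16_16)(1.5 * Q_ONE);   /* evaluate e^{-1.5} */
    printf("e^-1.5 ~= %f\n", q_exp_neg(x) / (double)Q_ONE);
    return 0;
}
```

Restricting the input to the negative domain is what enables this structure: the result always lies in (0, 1], so the integer-part table is tiny and the series argument stays in [0, 1), where few terms (and, in the paper's design, shorter word lengths for the higher-order terms) suffice.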