3D hand pose estimation is a long-standing challenge in both the robotics and computer vision communities due to its inherent depth ambiguity and often severe self-occlusion. Recently, jointly estimating hand pose and shape, in addition to the hand skeleton, has attracted increasing attention. State-of-the-art methods adopt a model-free approach, estimating the vertices of the hand mesh directly and achieving superior accuracy compared to traditional model-based methods that directly regress the parameters of a parametric hand model. However, due to the large number of mesh vertices to estimate, these methods often suffer from slow inference. We propose an efficient variant of the previously proposed image-to-lixel (I2L) approach for estimating hand meshes from images. Leveraging recent developments in efficient neural architectures, we significantly reduce the computational complexity without sacrificing estimation accuracy. Furthermore, we introduce an inverse kinematics (IK) network that translates the estimated hand mesh into a biomechanically feasible set of joint rotation parameters, which is necessary for applications that use pose estimation to control robotic hands. Finally, we propose an optional post-processing module that refines the rotation and shape parameters to compensate for the error introduced by the IK network. Our Lite I2L Mesh Net achieves state-of-the-art joint and mesh estimation accuracy with less than $13\%$ of the total computational complexity of the original I2L hand mesh estimator. Adding the IK network and the post-processing module improves accuracy slightly at a small computational cost and, more importantly, provides the kinematic parameters required for robotic applications.