In Spiking Neural Networks (SNNs), learning rules are based on neuron spiking behavior: spikes are generated when a neuron's membrane potential exceeds its firing threshold, and the timing of those spikes encodes vital information. However, the threshold is generally treated as a hyperparameter, and an incorrect choice can leave neurons that do not spike for large portions of the training process, hindering the effective rate of learning. Inspired by homeostatic mechanisms in biological neurons, this work (Rouser) presents a study to rouse training-inactive neurons and improve SNN training through an in-loop adaptive threshold learning mechanism. Rouser's adaptive threshold allows dynamic adjustment based on input data and network hyperparameters, influencing spike timing and improving training. This study focuses primarily on investigating the significance of learning neuron thresholds alongside weights in SNNs. We evaluate the performance of Rouser on the spatiotemporal datasets NMNIST, DVS128 and Spiking Heidelberg Digits (SHD), compare our results with state-of-the-art SNN training techniques, and discuss the strengths and limitations of our approach. Our results suggest that promoting the threshold from a hyperparameter to a trainable parameter effectively addresses the issue of dead neurons during training, yielding a more robust training algorithm with improved training convergence, increased test accuracy, and substantial reductions in the number of training epochs needed to achieve viable accuracy. Rouser achieves up to 70% lower training latency while providing up to 2% higher accuracy than state-of-the-art SNNs with similar network architectures on the neuromorphic datasets NMNIST, DVS128 and SHD.
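
To make the central idea concrete, the sketch below illustrates how a firing threshold can be promoted from a fixed hyperparameter to a per-neuron trainable parameter in a leaky integrate-and-fire (LIF) layer, with a surrogate gradient so that thresholds receive updates alongside weights. This is a minimal illustration of the technique, not the authors' implementation; the names (`LIFLayer`, `SurrogateSpike`), the fast-sigmoid surrogate, and the soft-reset rule are all assumptions made for this example.

```python
# Minimal sketch (assumed, not the paper's code): a LIF layer whose
# firing threshold theta is an nn.Parameter learned jointly with weights.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate gradient
    in the backward pass, so both weights and thresholds receive gradients."""

    @staticmethod
    def forward(ctx, v_minus_theta):
        ctx.save_for_backward(v_minus_theta)
        return (v_minus_theta > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_theta,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: 1 / (1 + |x|)^2 (an assumed choice).
        surrogate = 1.0 / (1.0 + torch.abs(v_minus_theta)) ** 2
        return grad_output * surrogate

class LIFLayer(nn.Module):
    def __init__(self, in_features, out_features, beta=0.9, theta_init=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.beta = beta  # membrane leak factor (kept as a hyperparameter)
        # The key change: the threshold is a trainable per-neuron parameter.
        self.theta = nn.Parameter(torch.full((out_features,), theta_init))

    def forward(self, x_seq):
        # x_seq: (time_steps, batch, in_features) binary event tensor
        v = torch.zeros(x_seq.shape[1], self.fc.out_features,
                        device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            v = self.beta * v + self.fc(x_t)          # leaky integration
            s = SurrogateSpike.apply(v - self.theta)  # compare to learned theta
            v = v - s * self.theta                    # soft reset by threshold
            spikes.append(s)
        return torch.stack(spikes)

# Usage: thresholds update in the same optimizer step as weights, so a
# neuron whose theta is too high can lower it and resume spiking.
layer = LIFLayer(in_features=34 * 34 * 2, out_features=128)  # NMNIST-like input
optim = torch.optim.Adam(layer.parameters(), lr=1e-3)
x = (torch.rand(25, 8, 34 * 34 * 2) < 0.1).float()  # dummy event stream
out = layer(x)
loss = (out.mean() - 0.05) ** 2  # toy firing-rate loss for demonstration
loss.backward()
optim.step()  # updates both layer.fc.* and layer.theta
```

Because `theta` sits inside `layer.parameters()`, gradient descent can lower the threshold of a neuron that never fires (its surrogate gradient is still nonzero near threshold), which is the "rousing" behavior the abstract describes.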