The joint detection and classification of RF signals has been a critical problem in the field of wideband RF spectrum sensing. Recent advances in deep learning have transformed this field, most notably through the application of state-of-the-art computer vision algorithms such as YOLO (You Only Look Once) and DETR (Detection Transformer) to spectrogram images. This paper focuses on optimizing the preprocessing stage to enhance the performance of these computer vision models. Specifically, we investigate the generation of training spectrograms via the classical Short-Time Fourier Transform (STFT), examining four STFT parameters: FFT size, window type, window length, and overlap ratio. Our study aims to maximize the mean average precision (mAP) scores of YOLOv10 models in detecting and classifying various digital modulation signals within a congested spectrum environment. First, our results reveal that additional zero padding in the FFT does not improve detection and classification accuracy and only introduces unnecessary computational cost. Second, our results indicate that there exists an optimal window size that balances the trade-off between time and frequency resolution, with performance losses of approximately 10% and 30% when the window size deviates from the optimum by factors of four and eight, respectively. Third, regarding the choice of window function, the Hamming window yields the best performance, with non-optimal windows resulting in up to a 10% accuracy loss. Finally, we observed a 10% performance gap in accuracy between using 10% and 90% overlap. These findings highlight the potential for significant performance improvements through optimized spectrogram parameters when applying computer vision models to the problem of wideband RF spectrum sensing.
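To make the four studied parameters concrete, the following is a minimal Python sketch of how a training spectrogram might be generated from complex IQ samples with `scipy.signal.stft`; the sample rate, signal content, and parameter values shown are illustrative placeholders, not the paper's actual settings.

```python
import numpy as np
from scipy.signal import stft

# Illustrative values for the four STFT parameters studied:
# FFT size, window type, window length, and overlap ratio.
fs = 20e6                 # assumed sample rate of the wideband capture (placeholder)
window_length = 256       # window length in samples
overlap_ratio = 0.5       # fraction of the window that overlaps
nfft = 256                # FFT size; nfft == window_length means no extra zero padding

# Synthetic placeholder signal: a single complex tone plus noise.
# A real pipeline would load captured IQ samples instead.
t = np.arange(100_000) / fs
iq = np.exp(2j * np.pi * 2e6 * t) \
     + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

f, tt, Zxx = stft(
    iq,
    fs=fs,
    window="hamming",                              # window type
    nperseg=window_length,                         # window length
    noverlap=int(overlap_ratio * window_length),   # overlap ratio
    nfft=nfft,                                     # FFT size (zero padding if > nperseg)
    return_onesided=False,                         # complex IQ -> two-sided spectrum
)

# Convert to a dB-scaled magnitude image, centered in frequency,
# suitable for rendering as the input to a YOLO-style detector.
spectrogram_db = 20 * np.log10(np.abs(np.fft.fftshift(Zxx, axes=0)) + 1e-12)
```

Under this sketch, sweeping `nfft`, `window`, `window_length` (`nperseg`), and `overlap_ratio` (`noverlap`) corresponds directly to the four preprocessing parameters examined in the paper.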