In this study, we conduct a comprehensive benchmark of Safe Reinforcement Learning (Safe RL) algorithms for the task of vision-driven river following by an Unmanned Aerial Vehicle (UAV) in a Unity-based, photo-realistic simulation environment. We empirically validate the effectiveness of a semantic-augmented image encoding method, assessing its superiority in terms of relative entropy and the quality of water-pixel reconstruction. Selecting the encoding dimension based on reconstruction loss yields a more compact state representation, which facilitates the training of Safe RL policies. Across all benchmarked Safe RL algorithms, we find that First Order Constrained Optimization in Policy Space (FOCOPS) achieves the best balance between reward acquisition and safety compliance. Notably, our results reveal that on-policy algorithms consistently outperform their off-policy and model-based counterparts in both the training and testing environments. Importantly, the benchmarking outcomes and the vision encoding methodology extend beyond UAVs and are applicable to Autonomous Surface Vehicles (ASVs) navigating autonomously in confined waters.