This work proposes, for the first time, to utilize an ordinary smartphone -- a popular assistive gadget -- to design a novel, non-invasive method for self-monitoring of one's hydration level on a scale of 1 to 4. The proposed method involves recording a short video of a fingertip using the smartphone camera. A photoplethysmography (PPG) signal is then extracted from the video data; this signal captures fluctuations in peripheral blood volume that reflect changes in a person's hydration level over time. To train and evaluate the artificial intelligence models, a custom multi-session labeled dataset was constructed by collecting video-PPG data from 25 fasting subjects during the month of Ramadan in 2023. Using this dataset, we solve two distinct problems: 1) binary classification (whether a person is hydrated or dehydrated); and 2) four-class classification (whether a person is fully hydrated, mildly dehydrated, moderately dehydrated, or extremely dehydrated). For both classification problems, we feed the pre-processed and augmented PPG data to a number of machine learning (ML), deep learning (DL), and transformer models, which achieve very high accuracy, in the range of 95% to 99%. We also propose an alternative method in which high-dimensional PPG time-series data are fed to a DL model for feature extraction, then to the t-SNE method for feature selection and dimensionality reduction, and then to a number of ML classifiers that perform the dehydration-level classification. Finally, we interpret the decisions of the developed deep learning model using the SHAP-based explainable artificial intelligence framework. The proposed method allows rapid, do-it-yourself, at-home testing of one's hydration level; it is cost-effective, and thus in line with Sustainable Development Goals 3 and 10 of the United Nations, and a step forward toward patient-centric healthcare systems, smart homes, and the smart cities of the future.
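As an illustration of the video-PPG extraction step, the minimal sketch below averages the red-channel intensity of each frame of a fingertip video and band-pass filters the result to the cardiac band. The channel choice, filter order, and cut-off frequencies are illustrative assumptions, not the exact pipeline used in this work.

```python
# Minimal sketch: extract a raw PPG signal from a fingertip video.
# Assumes the fingertip fully covers the camera lens; the red-channel
# choice and the 0.5-4 Hz band are common conventions, not the paper's
# exact settings.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def extract_ppg(video_path):
    """Return the per-frame mean red intensity (raw PPG) and frame rate."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV stores frames as BGR, so index 2 is the red channel,
        # where blood-volume pulsations are most visible.
        samples.append(frame[:, :, 2].mean())
    cap.release()
    return np.asarray(samples), fps

def bandpass(ppg, fps, lo=0.5, hi=4.0):
    """Keep the cardiac band (~0.5-4 Hz, i.e., roughly 30-240 bpm)."""
    b, a = butter(2, [lo, hi], btype="band", fs=fps)
    return filtfilt(b, a, ppg - ppg.mean())
```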
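The alternative pipeline (DL features, then t-SNE, then ML classifiers) can be sketched as follows. The random-forest classifier, t-SNE perplexity, and train/test split are chosen purely for illustration; the work does not prescribe these exact settings. Note also that scikit-learn's t-SNE offers no transform() for unseen samples, so the embedding here is computed on the full feature matrix before the split.

```python
# Illustrative sketch of the alternative pipeline:
# deep features -> t-SNE dimensionality reduction -> classical ML classifier.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def tsne_classify(deep_features, labels):
    """deep_features: (n_samples, n_dims) activations from a trained DL model."""
    # Embed the high-dimensional features into 2-D with t-SNE.
    embedded = TSNE(n_components=2, perplexity=30,
                    random_state=0).fit_transform(deep_features)
    X_tr, X_te, y_tr, y_te = train_test_split(
        embedded, labels, test_size=0.2, stratify=labels, random_state=0)
    # Any classical ML classifier can stand in here; a random forest is
    # used only as an example.
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```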