In this study, a novel open-source brain-computer interface (BCI) platform was developed to decode scalp electroencephalography (EEG) signals associated with sustained attention. EEG signals were collected with a wireless headset during a sustained visual attention task in which participants viewed composite images of superimposed scenes and faces and were instructed to respond only to the relevant subcategory while ignoring the irrelevant one. Seven volunteers participated in the experiment. The collected data were analyzed using event-related potentials (ERPs), the Hilbert transform, and the wavelet transform to extract temporal and spectral features. For each participant, personalized Support Vector Machine (SVM) and Random Forest (RF) models with tuned hyperparameters were trained on these features to decode the participant's attentional state towards the face and scene stimuli. The SVM models performed better, achieving an average accuracy of 80\% and an area under the curve (AUC) of 0.86, versus an average accuracy of 78\% and an AUC of 0.80 for the RF models. This work suggests potential applications in the evaluation of visual attention and in the future development of closed-loop brainwave regulation systems.
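As a minimal illustrative sketch of such a decoding pipeline (not the authors' actual implementation), the following Python example combines Hilbert-envelope and wavelet-energy features with per-participant SVM and RF classifiers tuned by cross-validated grid search. The choice of libraries (SciPy, PyWavelets, scikit-learn), the sampling rate, channel count, `db4` wavelet, hyperparameter grids, and the synthetic placeholder data are all assumptions for illustration; the ERP features described in the abstract are omitted for brevity.

\begin{verbatim}
import numpy as np
import pywt
from scipy.signal import hilbert
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

def extract_features(epoch):
    """Temporal/spectral features for one epoch (channels x samples)."""
    feats = []
    for ch in epoch:
        # Temporal feature: mean Hilbert envelope (instantaneous amplitude).
        feats.append(np.abs(hilbert(ch)).mean())
        # Spectral features: log-energy of each wavelet sub-band
        # ('db4' wavelet and decomposition level are illustrative choices).
        for c in pywt.wavedec(ch, 'db4', level=4):
            feats.append(np.log1p(np.sum(c ** 2)))
    return np.array(feats)

# Synthetic placeholder data standing in for one participant's epoched EEG:
# 200 epochs, 8 channels, 1 s at an assumed 250 Hz sampling rate.
rng = np.random.default_rng(0)
X_epochs = rng.standard_normal((200, 8, 250))
y = rng.integers(0, 2, 200)  # 0 = scene-attended, 1 = face-attended

X = np.array([extract_features(e) for e in X_epochs])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Personalized models with cross-validated hyperparameter tuning
# (the grids below are illustrative, not the paper's search space).
svm = GridSearchCV(SVC(probability=True),
                   {'C': [0.1, 1, 10], 'gamma': ['scale', 0.01]},
                   cv=5).fit(X_tr, y_tr)
rf = GridSearchCV(RandomForestClassifier(random_state=0),
                  {'n_estimators': [100, 300], 'max_depth': [None, 10]},
                  cv=5).fit(X_tr, y_tr)

# Evaluate with the same metrics reported in the study: accuracy and AUC.
for name, model in [('SVM', svm), ('RF', rf)]:
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          'acc=%.2f' % accuracy_score(y_te, model.predict(X_te)),
          'AUC=%.2f' % roc_auc_score(y_te, proba))
\end{verbatim}

On real, participant-specific epochs the same structure would be applied per volunteer, which is what makes the models "personalized": each participant's classifier is fit and tuned only on that participant's features.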