Intracranial EEG (iEEG) recording, characterized by high spatial and temporal resolution and a superior signal-to-noise ratio (SNR), enables the development of precise brain-computer interface (BCI) systems for neural decoding. However, the invasive nature of the procedure significantly limits the availability of iEEG datasets in terms of both the number of participants and the duration of recorded sessions. To address this limitation, we propose a single-participant machine learning model optimized for decoding iEEG signals. The model employs 18 key features and operates in two modes: best channel and combined channel. The combined-channel mode integrates spatial information from multiple brain regions, leading to superior classification performance. Evaluations across three datasets -- Music Reconstruction, Audio Visual, and AJILE12 -- demonstrate that the combined-channel mode consistently outperforms the best-channel mode across all classifiers. In the best-performing cases, Random Forest achieved an F1 score of 0.81 ± 0.05 on the Music Reconstruction dataset and 0.82 ± 0.10 on the Audio Visual dataset, while XGBoost achieved an F1 score of 0.84 ± 0.08 on the AJILE12 dataset. Furthermore, analysis of brain-region contributions in the combined-channel mode revealed that the model identifies task-relevant brain regions consistent with physiological expectations and effectively combines data from electrodes in these regions to achieve high performance. These findings highlight the potential of integrating spatial information across brain regions to improve task decoding, offering new avenues for advancing BCI systems and neurotechnological applications.
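To make the two evaluation modes concrete, the sketch below contrasts best-channel selection (score each channel's feature matrix separately and keep the best) with combined-channel classification (concatenate features across channels). This is a minimal illustration under stated assumptions, not the paper's implementation: the helper names `best_channel_f1` and `combined_channel_f1` are hypothetical, the specific 18 features and classifier hyperparameters are not shown, and scikit-learn's cross-validation is used as a stand-in for the actual evaluation protocol.

```python
# Minimal sketch of best-channel vs. combined-channel evaluation.
# Assumes one (n_trials, n_features) feature matrix per iEEG channel;
# feature extraction and dataset loading are omitted (hypothetical inputs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def best_channel_f1(channel_features, labels, cv=5):
    """Best-channel mode: cross-validate each channel alone, keep the best."""
    scores = []
    for X in channel_features:  # one (n_trials, n_features) matrix per channel
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        f1 = cross_val_score(clf, X, labels, cv=cv, scoring="f1_macro")
        scores.append(f1.mean())
    best = int(np.argmax(scores))
    return best, scores[best]

def combined_channel_f1(channel_features, labels, cv=5):
    """Combined-channel mode: concatenate features from all channels,
    pooling spatial information across brain regions."""
    X = np.hstack(channel_features)  # (n_trials, n_channels * n_features)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    f1 = cross_val_score(clf, X, labels, cv=cv, scoring="f1_macro")
    return f1.mean(), f1.std()

# Toy usage with synthetic data (3 channels, 18 features each):
rng = np.random.default_rng(0)
channels = [rng.normal(size=(120, 18)) for _ in range(3)]
y = rng.integers(0, 2, size=120)
print(best_channel_f1(channels, y))
print(combined_channel_f1(channels, y))
```

Feature importances from the fitted combined-channel classifier (e.g., `clf.feature_importances_`, grouped by channel) would be one natural way to attribute performance to brain regions, in the spirit of the region-contribution analysis described above.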