Abstract: Parkinson's disease (PD), the second most prevalent neurodegenerative disorder worldwide, frequently presents with speech impairments in its early stages. Recent advances in Artificial Intelligence (AI), particularly deep learning (DL), have significantly enhanced PD diagnosis through the analysis of speech data. Nevertheless, research progress is restricted by the limited availability of publicly accessible speech-based PD datasets, primarily due to privacy and ethical concerns. This review covers the latest DL-based AI approaches for speech-based PD classification, focusing on the performance, available resources and associated challenges of 33 scientific works published between 2020 and March 2024. These DL approaches are categorized into end-to-end (E2E) learning, transfer learning (TL) and deep acoustic feature (DAF) extraction. Among E2E approaches, Convolutional Neural Networks (CNNs) are prevalent, though Transformers are increasingly popular. E2E approaches face challenges such as limited data and computational resources, especially when Transformers are used. TL addresses these issues by providing more robust PD diagnosis and better generalizability across languages. DAF extraction aims to improve the explainability and interpretability of results by examining how deep features influence both other DL approaches and more traditional machine learning (ML) methods; however, it often underperforms compared with E2E and TL approaches. This review also discusses unresolved issues related to bias, explainability and privacy, highlighting the need for future research.