Multilingual Large Language Models (LLMs) have recently demonstrated strong capabilities across a wide range of tasks, achieving state-of-the-art performance with few-shot or zero-shot prompting. While these models have been extensively studied on tasks whose inputs are assumed to be in a single language, far less attention has been paid to their performance when inputs involve code-switching (CSW). In this paper, we present an extensive empirical study of various multilingual LLMs, benchmarking their performance on three tasks: sentiment analysis, machine translation, and word-level language identification. Our findings indicate that although multilingual LLMs show promising results on certain tasks under zero-/few-shot prompting, their performance still falls short on average of that of smaller finetuned models. We argue that LLMs that are "multilingual" are not necessarily code-switching compatible, and that extensive future research is required to fully bridge this gap.