Time series data plays a critical role across diverse domains such as healthcare, energy, and finance, where tasks like classification, anomaly detection, and forecasting are essential for informed decision-making. Recently, large language models (LLMs) have gained prominence for their ability to process complex data and extract meaningful patterns. This study investigates whether LLMs are effective for time series analysis by comparing their performance with that of non-LLM-based approaches across three tasks: classification, anomaly detection, and forecasting. Through a series of experiments using GPT4TS and autoregressive models, we evaluate both approaches on benchmark datasets, assessing their accuracy, precision, and ability to generalize. Our findings indicate that while LLM-based methods excel at specific tasks such as anomaly detection, their benefits are less pronounced in others, such as forecasting, where simpler models sometimes perform comparably or better. This work highlights the role of LLMs in time series analysis and lays the groundwork for future studies that systematically explore their applications and limitations in handling temporal data.