Natural language processing (NLP) tasks, such as sentiment analysis and text synthesis, can compromise users' privacy and demand significant on-device computational resources. Centralized learning (CL) at the network edge offers an energy-efficient alternative, yet it requires the collection of raw user data, which compromises privacy. While federated learning (FL) preserves privacy, it imposes a high computational energy burden on resource-constrained user devices. We introduce split learning (SL) as an energy-efficient, privacy-preserving alternative tiny machine learning (TinyML) scheme and compare it to FL and CL in the presence of Rayleigh fading and additive noise. Our results show that SL reduces processing power and CO2 emissions while maintaining high accuracy, whereas FL offers a balanced trade-off between efficiency and privacy. Hence, this study provides insights into deploying energy-efficient, privacy-preserving NLP models on edge devices.