We propose an innovative token representation and update method for a new ultra-small language model: the Wave Network. Specifically, we use a \textbf{complex vector} to represent each token, encoding both the global and local semantics of the input text. A \textbf{complex vector} consists of two components: a magnitude vector representing the \textit{global semantics} of the input text, and a phase vector capturing the \textit{relationships between individual tokens and the global semantics}. Experiments on the AG News text classification task demonstrate that, when generating complex vectors from randomly initialized token embeddings, our single-layer Wave Network achieves 90.91\% accuracy with wave interference and 91.66\% with wave modulation -- outperforming a single Transformer layer using BERT pre-trained embeddings by 19.23\% and 19.98\%, respectively, and approaching the accuracy of the pre-trained and fine-tuned BERT base model (94.64\%). Moreover, with wave modulation, the Wave Network reduces video memory usage by 77.34\% and training time by 85.62\% compared to BERT base. In summary, our 2.4-million-parameter model achieves text classification accuracy comparable to that of a 100-million-parameter BERT model.
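As a minimal illustrative sketch (the exact construction is not given in this abstract; the $L_2$-norm magnitude, arctangent phase, and complex addition/multiplication below are assumptions for concreteness), the representation can be written as
\[
\mathbf{Z}_j = \mathbf{G} \odot e^{\,i\boldsymbol{\alpha}_j},
\qquad
G_k = \sqrt{\textstyle\sum_{j} w_{jk}^{2}},
\qquad
\alpha_{jk} = \arctan\!\frac{w_{jk}}{\sqrt{G_k^{2} - w_{jk}^{2}}},
\]
where $w_{jk}$ denotes the $k$-th dimension of token $j$'s embedding, $\mathbf{G}$ is the shared magnitude (global-semantics) vector, and $\boldsymbol{\alpha}_j$ is the phase vector relating token $j$ to the global semantics. Under this sketch, wave interference would correspond to element-wise complex addition $\mathbf{Z}_1 + \mathbf{Z}_2$, and wave modulation to element-wise complex multiplication $\mathbf{Z}_1 \odot \mathbf{Z}_2$.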