Abstract: As wireless communication advances toward the 6G era, the demand for ultra-reliable, high-speed, and ubiquitous connectivity is driving the exploration of new degrees of freedom (DoFs) in communication systems. Among the key enabling technologies, Movable Antennas (MAs) integrated into Flexible Cylindrical Arrays (FCLAs) have shown great potential for optimizing wireless communication by providing spatial flexibility. This paper proposes an optimization framework that leverages the dynamic mobility of FCLAs to improve communication rates and overall system performance. By employing Fractional Programming (FP) for the alternating optimization of beamforming and antenna positions, the system enhances throughput and resource utilization. Additionally, a novel Constrained Grid Search-Based Adaptive Moment Estimation algorithm (CGS-Adam) is introduced to optimize antenna positions while adhering to antenna spacing constraints. Extensive simulations validate that the proposed system, utilizing movable antennas, significantly outperforms traditional fixed-antenna optimization, achieving up to a 31\% performance gain in general scenarios. The integration of FCLAs into wireless networks represents a promising solution for future 6G systems, offering improved coverage, energy efficiency, and flexibility.
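The abstract only names CGS-Adam; its exact objective and projection rule are not given here. The following is a minimal, hypothetical sketch of the general idea it suggests: an Adam-style ascent step on antenna positions followed by a grid-search projection that enforces a minimum antenna spacing. The rate surrogate, grid, step sizes, and spacing rule below are all assumptions for illustration, not the paper's method.

```python
# Hypothetical CGS-Adam-style sketch: Adam update on antenna positions,
# then a grid-search projection onto a minimum-spacing constraint.
import numpy as np

def rate_objective(pos):
    # Placeholder surrogate for the achievable rate as a function of positions.
    return -np.sum((pos - np.linspace(0.0, 1.0, pos.size)) ** 2)

def numerical_grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def project_spacing(pos, d_min, grid):
    # Grid search: snap each position (in sorted order) to the nearest grid
    # point that keeps at least d_min from the previous antenna.
    out = np.empty_like(pos)
    last = -np.inf
    for i, p in enumerate(np.sort(pos)):
        feasible = grid[grid >= last + d_min]
        if feasible.size == 0:            # no room left on the grid; reuse the last point
            feasible = grid[-1:]
        out[i] = feasible[np.argmin(np.abs(feasible - p))]
        last = out[i]
    return out

def cgs_adam(pos, d_min=0.05, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, iters=200):
    grid = np.linspace(0.0, 1.0, 201)           # candidate positions (assumed normalized aperture)
    m, v = np.zeros_like(pos), np.zeros_like(pos)
    for t in range(1, iters + 1):
        g = numerical_grad(rate_objective, pos)  # ascend the rate surrogate
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        pos = pos + lr * m_hat / (np.sqrt(v_hat) + eps)
        pos = project_spacing(pos, d_min, grid)  # enforce spacing via grid search
    return pos

print(cgs_adam(np.sort(np.random.rand(4))))
```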
Abstract: Despite the growing global demand for large language models (LLMs) that serve users from diverse linguistic backgrounds, most cutting-edge LLMs remain predominantly English-centric. This creates a performance gap across languages, restricting access to advanced AI services for non-English speakers. Current methods for enhancing multilingual capabilities largely rely on data-driven post-training techniques, such as multilingual instruction tuning or continual pre-training. However, these approaches face significant challenges, including the scarcity of high-quality multilingual datasets and limited gains in multilingual capability; they often suffer from off-target issues and catastrophic forgetting of central-language abilities. To this end, we propose Lens, a novel approach that enhances the multilingual capabilities of LLMs by leveraging their internal language representation spaces. Specifically, Lens operates by manipulating the hidden representations within the language-agnostic and language-specific subspaces of the top layers of LLMs. Using the central language as a pivot, the target language is drawn closer to it within the language-agnostic subspace, allowing it to inherit well-established semantic representations. Meanwhile, in the language-specific subspace, the representations of the target and central languages are pushed apart, enabling the target language to express itself distinctly. Extensive experiments on one English-centric and two multilingual LLMs demonstrate that Lens effectively improves multilingual performance without sacrificing the original central-language capabilities of the backbone model, achieving superior results with far fewer computational resources than existing post-training approaches.
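To make the pull/push operation described above concrete, here is an illustrative sketch (not the authors' released implementation) of adjusting a hidden state toward a central-language pivot within an assumed language-agnostic subspace and away from it within an assumed language-specific subspace. The subspace bases and the step sizes alpha/beta are placeholders introduced for this example.

```python
# Illustrative sketch of a Lens-style hidden-state adjustment.
import numpy as np

def adjust_hidden_state(h, pivot, A, S, alpha=0.5, beta=0.5):
    """h, pivot: (d,) hidden states; A, S: (d, k) orthonormal subspace bases."""
    # Project both states onto the two subspaces.
    h_agn, p_agn = A @ (A.T @ h), A @ (A.T @ pivot)
    h_spec, p_spec = S @ (S.T @ h), S @ (S.T @ pivot)
    # Pull toward the pivot in the language-agnostic subspace,
    # push away from it in the language-specific subspace.
    return h + alpha * (p_agn - h_agn) + beta * (h_spec - p_spec)

d, k = 16, 4
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.normal(size=(d, k)))   # assumed language-agnostic basis
S, _ = np.linalg.qr(rng.normal(size=(d, k)))   # assumed language-specific basis
h_new = adjust_hidden_state(rng.normal(size=d), rng.normal(size=d), A, S)
print(h_new.shape)
```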