Abstract: Near-field channel estimation is a fundamental challenge in sixth-generation (6G) wireless communication, where extremely large antenna arrays (ELAA) enable near-field communication (NFC) but introduce significant signal processing complexity. Traditional model-based methods suffer from high computational costs and limited scalability in large-scale ELAA systems, while existing learning-based approaches often lack robustness across diverse channel conditions. To overcome these limitations, we propose the Residual Attention Convolutional Neural Network (RACNN), which integrates convolutional layers with self-attention mechanisms to enhance feature extraction by focusing on key regions within the CNN feature maps. Experimental results show that RACNN outperforms both traditional and learning-based methods, including XLCNet, across various scenarios, particularly in mixed far-field and near-field conditions. Notably, in these challenging settings, RACNN achieves a normalized mean square error (NMSE) of 4.8×10⁻³ at an SNR of 20 dB, making it a promising solution for near-field channel estimation in 6G.
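To make the architectural idea concrete, the following is a minimal PyTorch sketch of a residual attention block that combines convolutional feature extraction with self-attention over the spatial positions of the feature map, in the spirit of RACNN. The layer widths, the attention placement, and the estimator head are illustrative assumptions, not the exact architecture reported in the paper.

```python
# Sketch only: a residual attention block that applies self-attention over the
# positions of a CNN feature map, plus a toy estimator head. All sizes are
# assumptions for illustration.
import torch
import torch.nn as nn


class ResidualAttentionBlock(nn.Module):
    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Self-attention treats each spatial position of the feature map as a
        # token, letting the block emphasize the most informative regions.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv(x)                               # (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)   # attention over positions
        tokens = self.norm(tokens + attn_out)             # inner residual + norm
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return torch.relu(x + feat)                       # outer residual connection


class RACNNSketch(nn.Module):
    """Toy estimator: real/imag channel observation in, denoised estimate out."""
    def __init__(self, channels: int = 64, num_blocks: int = 3):
        super().__init__()
        self.stem = nn.Conv2d(2, channels, kernel_size=3, padding=1)   # 2 = Re/Im
        self.blocks = nn.Sequential(*[ResidualAttentionBlock(channels)
                                      for _ in range(num_blocks)])
        self.head = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.head(self.blocks(self.stem(y)))


if __name__ == "__main__":
    # Example: a batch of 4 noisy channel observations on a 64x64 antenna/subcarrier grid.
    y = torch.randn(4, 2, 64, 64)
    h_hat = RACNNSketch()(y)
    print(h_hat.shape)  # torch.Size([4, 2, 64, 64])
```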
Abstract: Massive Multiple-Input Multiple-Output (massive MIMO) technology stands as a cornerstone of 5G and beyond. Despite the remarkable advancements offered by massive MIMO, the extremely large number of antennas introduces challenges during the channel estimation (CE) phase. In this paper, we propose a single-step Deep Neural Network (DNN) for CE, termed Iterative Sequential DNN (ISDNN), inspired by recent developments in data detection algorithms. ISDNN is a DNN built on the projected gradient descent algorithm for the CE problem, whose iterations are transformed into network layers via the deep unfolding method. Furthermore, we introduce the structured channel ISDNN (S-ISDNN), which extends ISDNN to incorporate side information such as signal directions and antenna array configurations for enhanced CE. Simulation results highlight that ISDNN significantly outperforms another DNN-based CE method (DetNet) in terms of training time (13%), running time (4.6%), and accuracy (0.43 dB). S-ISDNN trains even faster than ISDNN, though its overall performance still requires further improvement.
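As a rough illustration of the deep-unfolding idea behind ISDNN, the sketch below unrolls a few projected gradient descent iterations for a pilot-aided linear channel model y = A h + n, with one trainable step size per layer. The real-valued model, the number of layers, and the tanh "projection" are assumptions chosen for a self-contained example; the paper's exact layer structure, side information handling, and training setup may differ.

```python
# Sketch only: deep-unfolded projected gradient descent for channel estimation.
# Each unfolded iteration becomes a layer with its own trainable step size.
import torch
import torch.nn as nn


class UnfoldedPGDEstimator(nn.Module):
    def __init__(self, num_layers: int = 8):
        super().__init__()
        # One trainable step size per unfolded iteration (layer).
        self.steps = nn.Parameter(0.1 * torch.ones(num_layers))

    def forward(self, y: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # y: (B, M) received pilot observations, A: (B, M, N) pilot/measurement
        # matrix; real-valued here for simplicity (stack Re/Im in practice).
        h = torch.zeros(A.shape[0], A.shape[2], device=y.device)
        for step in self.steps:
            residual = y - torch.bmm(A, h.unsqueeze(-1)).squeeze(-1)        # y - A h
            grad = torch.bmm(A.transpose(1, 2), residual.unsqueeze(-1)).squeeze(-1)
            h = h + step * grad          # gradient step toward the least-squares fit
            h = torch.tanh(h)            # soft "projection" onto a bounded set (assumption)
        return h


if __name__ == "__main__":
    B, M, N = 16, 32, 16                 # batch, pilot length, channel dimension
    A = torch.randn(B, M, N) / M ** 0.5
    h_true = torch.tanh(torch.randn(B, N))
    y = torch.bmm(A, h_true.unsqueeze(-1)).squeeze(-1) + 0.01 * torch.randn(B, M)
    h_hat = UnfoldedPGDEstimator()(y, A)
    nmse = ((h_hat - h_true).pow(2).sum() / h_true.pow(2).sum()).item()
    print(f"NMSE before any training: {nmse:.3f}")
```

In an actual training loop, the step sizes (and any additional per-layer weights) would be learned end-to-end by minimizing the NMSE between the network output and the true channel over a dataset of channel realizations.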