Abstract: The pinching-antenna system is a novel flexible-antenna technology that can not only combat large-scale path loss but also reconfigure the antenna array in a flexible manner. The key idea of pinching antennas is to apply small dielectric particles along a waveguide of arbitrary length, so that they can be positioned close to users to avoid significant large-scale path loss. This paper investigates graph neural network (GNN) enabled transmit design for the joint optimization of antenna placement and power allocation in pinching-antenna systems. We formulate the downlink communication system equipped with pinching antennas as a bipartite graph and propose a graph attention network (GAT) based model, termed bipartite GAT (BGAT), to solve an energy efficiency (EE) maximization problem. With tailored readout processes, the BGAT guarantees a feasible solution, which also facilitates unsupervised training. Numerical results demonstrate the effectiveness of pinching antennas in enhancing the system EE, as well as the optimality, scalability, and computational efficiency of the proposed BGAT.
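This abstract highlights two mechanisms: attention-based message passing over a bipartite graph and tailored readouts that make feasibility hold by construction. Below is a minimal PyTorch sketch of one possible realization, assuming a single attention head, linear readout heads, and power allocation across antennas; the layer sizes and names are illustrative assumptions, not the paper's exact BGAT.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteAttention(nn.Module):
    """One round of attention-based message passing from user nodes to antenna nodes."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from antenna embeddings
        self.k = nn.Linear(dim, dim)  # keys from user embeddings
        self.v = nn.Linear(dim, dim)  # values from user embeddings

    def forward(self, h_ant, h_usr):
        # h_ant: (N_ant, dim), h_usr: (N_usr, dim)
        scores = self.q(h_ant) @ self.k(h_usr).T / h_ant.shape[-1] ** 0.5
        attn = F.softmax(scores, dim=-1)     # each antenna attends over all users
        return h_ant + attn @ self.v(h_usr)  # residual update

class FeasibleReadout(nn.Module):
    """Readout keeping positions on the waveguide and powers within the budget."""
    def __init__(self, dim, waveguide_len, power_budget):
        super().__init__()
        self.pos_head = nn.Linear(dim, 1)
        self.pow_head = nn.Linear(dim, 1)
        self.L, self.P = waveguide_len, power_budget

    def forward(self, h_ant):
        x = self.L * torch.sigmoid(self.pos_head(h_ant)).squeeze(-1)     # in [0, L]
        p = self.P * F.softmax(self.pow_head(h_ant).squeeze(-1), dim=0)  # sums to P
        return x, p
```

Because the sigmoid and softmax bound the outputs by construction, any forward pass yields a feasible placement and power allocation, which is what allows the unsupervised (objective-only) training mentioned above.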
Abstract: This paper proposes a graph neural network (GNN) enabled power allocation scheme for non-orthogonal multiple access (NOMA) networks. In particular, a downlink scenario is considered in which one base station serves multiple users over several subchannels; since there are fewer subchannels than users, some users must share a subchannel via NOMA. Our goal is to maximize the system energy efficiency subject to the rate requirement of each user and the overall power budget. We propose a deep learning based approach, termed NOMA net (NOMANet), to address the considered problem. NOMANet is GNN-based and maps channel state information to the desired power allocation scheme for all subchannels. Multi-head attention and residual/dense connections are adopted to enhance feature extraction. The output of NOMANet is guaranteed to be feasible via a customized activation function and the penalty method. Numerical results show that NOMANet, trained in an unsupervised manner, achieves performance close to that of the successive convex approximation method with an inference speed about $700$ times faster. Moreover, NOMANet is scalable to both the number of users and the number of subchannels.
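The feasibility mechanism can be illustrated with a short sketch: a customized output activation guarantees the power budget by construction, while the rate requirements enter the unsupervised loss through a penalty term. The learned budget fraction, penalty weight lam, and circuit power p_circuit below are illustrative assumptions, not NOMANet's exact design.

```python
import torch
import torch.nn.functional as F

def feasible_powers(logits, p_max):
    """Map raw outputs (one per user) to nonnegative powers with sum <= p_max."""
    frac = torch.sigmoid(logits.mean())        # learned fraction of the budget
    return p_max * frac * F.softmax(logits, dim=-1)

def training_loss(rates, powers, r_min, p_circuit, lam=10.0):
    """Negative energy efficiency plus a penalty for violated rate requirements."""
    ee = rates.sum() / (powers.sum() + p_circuit)
    penalty = F.relu(r_min - rates).sum()      # zero once every QoS rate is met
    return -ee + lam * penalty
```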
Abstract: This paper investigates graph neural network (GNN)-enabled beamforming design for interference channels. We propose a model termed interference channel GNN (ICGNN) to solve a quality-of-service constrained energy efficiency maximization problem. The ICGNN is two-stage: the direction and power parts of the beamforming vectors are learned separately but trained jointly via unsupervised learning. By making the feature dimensionality independent of the number of transceiver pairs, the ICGNN is scalable with the number of transceiver pairs. To further improve its performance, a hybrid maximum ratio transmission and zero-forcing scheme reduces the output ports, a feature enhancement module unifies the two types of links into one type, a subgraph representation improves message passing efficiency, and multi-head attention with residual connections facilitates feature extraction. Furthermore, we present an over-the-air distributed implementation of the ICGNN. Ablation studies validate the effectiveness of the key components of the ICGNN. Numerical results also demonstrate its capability to achieve near-optimal performance with an average inference time of less than 0.1 ms. The scalability of the ICGNN to unseen problem sizes is evaluated and further enhanced by transfer learning with limited fine-tuning cost. Results for both the centralized and distributed implementations are reported.
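A hedged sketch of the two-stage output construction follows, assuming each beamformer is a learned convex combination of the maximum ratio transmission (MRT) and zero-forcing (ZF) directions scaled by a learned power. The shapes, function names, and shared channel matrix are illustrative assumptions (the interference-channel case would apply the same construction per transmitter), not the ICGNN's exact ports.

```python
import torch

def _unit(w, eps=1e-12):
    return w / w.norm(dim=-1, keepdim=True).clamp_min(eps)

def hybrid_beamformers(H, alpha, p):
    """H: (K, M) complex with rows h_k^H; alpha in [0, 1] and p >= 0 are (K,) outputs."""
    w_mrt = _unit(H.conj())               # matched-filter directions
    w_zf = _unit(torch.linalg.pinv(H).T)  # interference-nulling directions
    d = _unit(alpha[:, None] * w_mrt + (1 - alpha[:, None]) * w_zf)
    return torch.sqrt(p)[:, None] * d     # power stage scales the directions
```

Reducing the network output to one scalar pair (alpha_k, p_k) per link, instead of 2M real values per beamformer, is what "reduces the output ports" in the hybrid scheme above.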
Abstract: This paper investigates deep learning based approaches for simultaneous wireless information and power transfer (SWIPT). Quality-of-service (QoS) constrained sum-rate maximization problems are formulated for power-splitting (PS) receivers and time-switching (TS) receivers, respectively, and solved by a unified graph neural network (GNN) based model termed SWIPT net (SWIPTNet). To improve the performance of SWIPTNet, we first propose a single-type output method to reduce the learning complexity and facilitate the satisfaction of the QoS constraints, and then utilize the Laplace transform to enhance the input features with structural information. We also adopt multi-head attention and layer connections to enhance feature extraction. Furthermore, we present transfer learning for the SWIPTNet between PS and TS receivers. Ablation studies show the effectiveness of the key components of the SWIPTNet. Numerical results demonstrate the capability of SWIPTNet to achieve near-optimal performance with millisecond-level inference, which is much faster than traditional optimization algorithms, and show the effectiveness of transfer learning through fast convergence and improved expressive capability.
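As a worked illustration of the quantities an unsupervised loss for the PS case would need, the sketch below computes per-user rates and harvested energy under a common linear power-splitting model: a fraction rho of the received power goes to information decoding and 1 - rho to energy harvesting. The symbols sigma2 (antenna noise), delta2 (processing noise), and eta (harvesting efficiency) are assumptions, not the paper's exact formulation.

```python
import torch

def ps_rates_and_energy(H, W, rho, sigma2=1e-10, delta2=1e-10, eta=0.5):
    """H: (K, M) channels h_k; W: (K, M) beamformers w_k; rho: (K,) PS ratios."""
    P = (H.conj() @ W.T).abs() ** 2        # P[k, j] = |h_k^H w_j|^2
    sig = P.diagonal()                     # desired-signal powers
    itf = P.sum(-1) - sig                  # multi-user interference
    rate = torch.log2(1 + rho * sig / (rho * (itf + sigma2) + delta2))
    energy = eta * (1 - rho) * (P.sum(-1) + sigma2)  # linear EH model
    return rate, energy
```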
Abstract: The emerging fluid antenna system (FAS) brings a new dimension, namely the antenna positions, to combat deep fading, but simultaneously introduces challenges for the transmit design. This paper proposes an ``unsupervised learning to optimize'' paradigm to optimize the FAS. In particular, we formulate sum-rate and energy efficiency (EE) maximization problems for a multi-user multiple-input single-output (MU-MISO) FAS and solve them with a two-stage graph neural network (GNN), where the first stage infers the antenna positions and the second stage infers the beamforming vectors. The outputs of the two stages are jointly fed into an unsupervised loss function to train the two-stage GNN. Numerical results demonstrate the advantages of the FAS for performance improvement and of the two-stage GNN for real-time and scalable optimization. Moreover, the two stages can function separately.
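The end-to-end unsupervised training idea can be sketched as follows: stage 1 outputs antenna positions, a differentiable channel model maps positions to channels, stage 2 outputs beamformers, and a single objective backpropagates through both stages. The names stage1, stage2, and channel_fn are placeholders, and the sum-rate loss shown would be replaced analogously for the EE objective.

```python
import torch

def sum_rate(H, W, sigma2=1e-10):
    """Standard MU-MISO sum rate; H, W: (K, M) with rows h_k and w_k."""
    P = (H.conj() @ W.T).abs() ** 2
    sig = P.diagonal()
    return torch.log2(1 + sig / (P.sum(-1) - sig + sigma2)).sum()

def train_step(stage1, stage2, channel_fn, features, optimizer):
    pos = stage1(features)       # stage 1: antenna positions
    H = channel_fn(pos)          # differentiable position-dependent channels
    W = stage2(features, pos)    # stage 2: beamforming vectors
    loss = -sum_rate(H, W)       # unsupervised objective: maximize sum rate
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return -loss.item()
```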
Abstract: On e-commerce platforms, a full advertising image is composed of a background image and marketing taglines, so automatic ad image design reduces human costs and plays a crucial role. For user convenience, we propose a novel automatic framework named Product-Centric Advertising Image Design (PAID). PAID takes the product foreground image, the required taglines, and the target size as input and creates an ad image automatically. PAID consists of four sequential stages: prompt generation, layout generation, background image generation, and graphics rendering, with a dedicated expert model trained for each sub-task. A visual language model (VLM) based prompt generation model produces a background prompt that matches the product. The layout generation model jointly predicts the text and image layout according to the background prompt, product, and taglines to achieve the best harmony. An SDXL-based layout-controlled inpainting model is trained to generate an aesthetic background image. Previous ad image design methods take a background image as input and then predict the layout of the taglines, which restricts the spatial layout due to the fixed image content; PAID instead reorders the stages to produce an unrestricted layout. To support the PAID framework, we create two high-quality datasets, PITA and PIL. Extensive experimental results show that PAID creates more visually pleasing advertising images than previous methods.
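A hedged skeleton of the four-stage pipeline is sketched below; each argument stands in for one of the trained expert models, and all function names and signatures are illustrative placeholders rather than the released implementation.

```python
from dataclasses import dataclass

@dataclass
class AdRequest:
    product_image: "Image"  # product foreground (e.g., RGBA cutout)
    taglines: list          # marketing text to render
    target_size: tuple      # (width, height) of the final ad

def design_ad(req: AdRequest, prompt_model, layout_model, inpaint_model, renderer):
    # Stage 1: a VLM produces a background prompt that matches the product.
    prompt = prompt_model(req.product_image)
    # Stage 2: jointly predict product placement and tagline boxes *before*
    # the background exists, so the layout is not constrained by fixed content.
    layout = layout_model(prompt, req.product_image, req.taglines, req.target_size)
    # Stage 3: SDXL-based, layout-controlled inpainting fills in the background.
    background = inpaint_model(prompt, req.product_image, layout, req.target_size)
    # Stage 4: deterministic graphics rendering draws the taglines on top.
    return renderer(background, req.taglines, layout)
```

The design choice to run layout prediction before background generation is what distinguishes this ordering from prior background-first methods.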
Abstract: Most facial expression recognition (FER) models are trained on large-scale expression data with centralized learning. Unfortunately, collecting large amounts of centralized expression data is difficult in practice due to the privacy concerns surrounding facial images. In this paper, we investigate FER under the framework of personalized federated learning, a valuable and practical decentralized setting for real-world applications. To this end, we develop a novel uncertainty-Aware label refineMent on hYpergraphs (AMY) method. For local training, each local model consists of a backbone, an uncertainty estimation (UE) block, and an expression classification (EC) block. In the UE block, we leverage a hypergraph to model complex high-order relationships among expression samples and incorporate these relationships into the uncertainty features. A personalized uncertainty estimator is then introduced to estimate reliable uncertainty weights for the samples on each local client. In the EC block, we perform label propagation on the hypergraph, obtaining high-quality refined labels for retraining the expression classifier. In this way, we effectively alleviate heterogeneous sample uncertainty across clients and learn a robust personalized FER model on each client. Experimental results on two challenging real-world facial expression databases show that our method consistently outperforms several state-of-the-art methods, indicating the superiority of hypergraph modeling for uncertainty estimation and label refinement in personalized federated FER. The source code will be released at https://github.com/mobei1006/AMY.
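For readers unfamiliar with label propagation on hypergraphs, the sketch below shows the standard symmetrically normalized propagation operator over a binary incidence matrix; AMY's exact variant, hyperedge construction, and uncertainty weighting may differ.

```python
import torch

def hypergraph_propagate(Hinc, Y, steps=10, alpha=0.9):
    """Hinc: (n, m) incidence, Hinc[i, e] = 1 if sample i is in hyperedge e.
    Y: (n, c) initial soft labels; returns refined label distributions."""
    Dv = Hinc.sum(1).clamp_min(1e-12)   # vertex degrees
    De = Hinc.sum(0).clamp_min(1e-12)   # hyperedge degrees
    Hn = Hinc / Dv.sqrt()[:, None]      # Dv^{-1/2} H
    A = (Hn / De[None, :]) @ Hn.T       # Dv^{-1/2} H De^{-1} H^T Dv^{-1/2}
    F_lbl = Y.clone()
    for _ in range(steps):
        F_lbl = alpha * A @ F_lbl + (1 - alpha) * Y  # propagate, anchor to Y
    return F_lbl.softmax(dim=-1)
```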
Abstract: Automatic X-ray prohibited item detection is vital for public safety. Existing deep learning-based methods all assume that the annotations of the training X-ray images are correct. However, obtaining correct annotations is extremely hard, if not impossible, for large-scale X-ray images, where item overlapping is ubiquitous. As a result, X-ray images are easily contaminated with noisy annotations, leading to performance deterioration of existing methods. In this paper, we address the challenging problem of training a robust prohibited item detector under noisy annotations (including both category noise and bounding box noise) from the novel perspective of data augmentation, and propose an effective label-aware mixed patch paste augmentation method (Mix-Paste). Specifically, for each item patch, we mix several item patches with the same category label from different images and replace the original patch in the image with the mixed patch. In this way, the probability that the generated image contains the correct prohibited item is increased. Meanwhile, the mixing process mimics item overlapping, enabling the model to learn the characteristics of X-ray images. Moreover, we design an item-based large-loss suppression (LLS) strategy to suppress the large losses corresponding to potentially positive predictions of additional items introduced by the mixing operation. We show the superiority of our method on X-ray datasets under noisy annotations. In addition, we evaluate our method on the noisy MS-COCO dataset to showcase its generalization ability. These results clearly indicate the great potential of data augmentation for handling noisy annotations. The source code is released at https://github.com/wscds/Mix-Paste.
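The core mixing operation can be sketched as follows, assuming same-category patches are resized to the original box and combined with random convex weights; the paper's exact mixing rule, patch count, and weighting may differ, and cv2/NumPy are used here purely for illustration.

```python
import numpy as np
import cv2

def mix_paste(image, box, same_class_patches, rng=None):
    """image: HxWx3 uint8; box: (x1, y1, x2, y2); patches: list of HxWx3 arrays."""
    rng = np.random.default_rng() if rng is None else rng
    x1, y1, x2, y2 = box
    h, w = y2 - y1, x2 - x1
    stack = [image[y1:y2, x1:x2].astype(np.float32)]
    for p in same_class_patches:                 # same category label, other images
        stack.append(cv2.resize(p, (w, h)).astype(np.float32))
    weights = rng.dirichlet(np.ones(len(stack)))  # random convex combination
    mixed = sum(wt * s for wt, s in zip(weights, stack))
    out = image.copy()
    out[y1:y2, x1:x2] = mixed.clip(0, 255).astype(np.uint8)
    return out
```

Because at least one mixed-in patch is likely to be a genuine instance of the labeled category, the pasted region better matches its (possibly noisy) label while visually mimicking the item overlap typical of X-ray imagery.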
Abstract: Mental health risk prediction is a growing field in the speech community, but many studies are based on small corpora. This study illustrates, in a controlled design, how variations in train and test set sizes impact performance. Using a corpus of over 65K labeled data points, we provide results from a fully crossed design of different train/test size combinations. Two model types are included, one based on language and the other on speech acoustics, both using methods current in this domain. An age-mismatched test set is also included. Results show that (1) test sets below 1K samples yielded noisy results, even for larger training set sizes; (2) training set sizes of at least 2K samples were needed for stable results; (3) NLP and acoustic models behaved similarly under train/test size variations; and (4) the mismatched test set showed the same patterns as the matched test set. Additional factors are discussed, including label priors, model strength and pre-training, unique speakers, and data lengths. While no single study can specify exact size requirements, these results demonstrate the need for appropriately sized train and test sets in future studies of mental health risk prediction from speech and language.
Abstract: Machine learning models for speech-based depression classification offer promise for health care applications. Despite growing work on depression classification, little is understood about how the length of the speech input impacts model performance. We analyze results for speaker-independent depression classification using a corpus of over 1400 hours of speech from a human-machine health screening application. We examine performance as a function of response input length for two NLP systems that differ in overall performance. Results for both systems show that performance depends on the natural length, the elapsed length, and the ordering of the response within a session. The systems share a minimum length threshold but differ in their response saturation threshold, which is higher for the better system. At saturation, it is better to pose a new question to the speaker than to continue the current response. These and additional reported results suggest how applications can be better designed to both elicit and process optimal input lengths for depression classification.