Abstract: The evolution of colour vision is captivating, as it reveals the adaptive strategies of extinct species while simultaneously inspiring innovations in modern imaging technology. In this study, we present a simplified model of visual transduction in the retina, introducing a novel opsin layer. We quantify evolutionary pressures by measuring machine vision recognition accuracy on colour images shaped by specific opsins. Building on this, we develop an evolutionary conservation optimisation algorithm to reconstruct the spectral sensitivity of opsins, enabling mutation-driven adaptations to more effectively spot fruits or predators. This model condenses millions of years of evolution into seconds on a GPU, providing an experimental framework for testing long-standing hypotheses in evolutionary biology, such as the vision of early mammals, primate trichromacy arising from gene duplication, the retention of colour blindness, the blue shift of fish rod opsins, and the presence of multiple rod opsins in bioluminescent environments. Moreover, the model enables speculative explorations of hypothetical species, such as organisms with eyes adapted to the conditions on Mars. Our findings suggest a minimalist yet effective approach to task-specific camera filter design, optimising the spectral response function to meet application-driven demands. The code will be made publicly available upon acceptance.
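As a rough illustration of how such an opsin layer and mutation loop might fit together, here is a minimal sketch assuming a Gaussian parameterization of spectral sensitivity, 10 nm sampling of the visible band, and a toy contrast proxy in place of the paper's machine-vision recognizer; all names and parameters below are hypothetical, not the paper's implementation:

```python
# Minimal sketch: an "opsin layer" renders a hyperspectral image through
# Gaussian spectral sensitivities, and mutation-driven hill climbing shifts
# the peak wavelengths to maximize a downstream recognition proxy.
import numpy as np

WAVELENGTHS = np.linspace(400, 700, 31)  # nm, 10-nm sampling (assumed)

def opsin_response(peaks, width=40.0):
    """Gaussian spectral sensitivity curve for each opsin peak (nm)."""
    return np.exp(-0.5 * ((WAVELENGTHS[None, :] - np.asarray(peaks)[:, None]) / width) ** 2)

def render(hyperspectral, peaks):
    """Project an (H, W, 31) hyperspectral image onto the opsin channels."""
    S = opsin_response(peaks)                   # (n_opsins, 31)
    return np.tensordot(hyperspectral, S.T, 1)  # (H, W, n_opsins)

def recognition_score(image):
    # Placeholder for the machine-vision recognizer used as the fitness signal;
    # per-channel contrast stands in for classification accuracy here.
    return float(np.mean(np.std(image, axis=(0, 1))))

def evolve(hyperspectral, peaks, generations=200, step=5.0,
           rng=np.random.default_rng(0)):
    """Hill-climbing stand-in for the conservation-constrained optimizer."""
    best = recognition_score(render(hyperspectral, peaks))
    for _ in range(generations):
        mutant = peaks + rng.normal(0, step, size=len(peaks))  # point mutations
        score = recognition_score(render(hyperspectral, mutant))
        if score > best:
            peaks, best = mutant, score
    return peaks, best
```

For instance, `evolve(cube, np.array([430.0, 530.0, 560.0]))` would hill-climb three peak wavelengths on a hyperspectral cube `cube` of shape (H, W, 31).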
Abstract: Research on leveraging big artificial intelligence model (BAIM) technology to drive the intelligent evolution of wireless networks is emerging. However, since the breakthrough in generalization brought about by BAIM techniques has mainly occurred in natural language processing, a clear technical roadmap for efficiently applying BAIM techniques to wireless systems, with their many additional peculiarities, is still lacking. To this end, this paper first reviews recent research on BAIM for wireless and assesses the current state of the field. Then, it analyzes and compares the differences between language intelligence and wireless intelligence on multiple levels, including scientific foundations, core usages, and technical details. It highlights the necessity and scientific significance of developing BAIM technology in a wireless-native way, as well as the new issues that must be considered in concrete technical implementations. Finally, by synthesizing the evolutionary laws of language models with the particularities of wireless systems, this paper provides several instructive methodologies for developing wireless-native BAIMs.
Abstract: Do neural network models of vision learn brain-aligned representations because they share architectural constraints and task objectives with biological vision or because they learn universal features of natural image processing? We characterized the universality of hundreds of thousands of representational dimensions from visual neural networks with varied construction. We found that networks with varied architectures and task objectives learn to represent natural images using a shared set of latent dimensions, despite appearing highly distinct at a surface level. Next, by comparing these networks with human brain representations measured with fMRI, we found that the most brain-aligned representations in neural networks are those that are universal and independent of a network's specific characteristics. Remarkably, each network can be reduced to fewer than ten of its most universal dimensions with little impact on its representational similarity to the human brain. These results suggest that the underlying similarities between artificial and biological vision are primarily governed by a core set of universal image representations that are convergently learned by diverse systems.
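The analysis pipeline implied here could be approximated as follows; this is a sketch under assumed definitions (a regression-based universality score and a Spearman representational-similarity comparison), not the authors' exact procedure:

```python
# Sketch: score each network dimension by how well other networks can predict
# it ("universality"), keep the top-k dimensions, and compare the reduced
# representation to brain data via representational similarity analysis (RSA).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def universality(dim, other_nets):
    """Mean R^2 of predicting one dimension (n_images,) from other networks' features."""
    scores = []
    for F in other_nets:                              # each F: (n_images, n_features)
        beta, *_ = np.linalg.lstsq(F, dim, rcond=None)
        resid = dim - F @ beta
        scores.append(1.0 - resid.var() / dim.var())
    return float(np.mean(scores))

def brain_alignment(features, brain, other_nets, k=10):
    """Keep the k most universal dimensions, then run RSA against fMRI responses."""
    u = np.array([universality(features[:, j], other_nets)
                  for j in range(features.shape[1])])
    reduced = features[:, np.argsort(u)[-k:]]         # top-k universal dimensions
    rdm_net = pdist(reduced, metric="correlation")    # network RDM over images
    rdm_brain = pdist(brain, metric="correlation")    # brain RDM over images
    rho, _ = spearmanr(rdm_net, rdm_brain)
    return rho
```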
Abstract: The integration of Large Language Models (LLMs) with Knowledge Representation Learning (KRL) signifies a pivotal advancement in the field of artificial intelligence, enhancing the ability to capture and utilize complex knowledge structures. This synergy leverages the advanced linguistic and contextual understanding capabilities of LLMs to improve the accuracy, adaptability, and efficacy of KRL, thereby expanding its applications and potential. Despite the increasing volume of research focused on embedding LLMs within the domain of knowledge representation, a thorough review that examines the fundamental components and processes of these enhanced models is conspicuously absent. Our survey addresses this by categorizing these models based on three distinct Transformer architectures, and by analyzing experimental data from various KRL downstream tasks to evaluate the strengths and weaknesses of each approach. Finally, we identify and explore potential future research directions in this emerging yet underexplored domain, proposing pathways for continued progress.
Abstract: Recently, studies have shown the potential of integrating field-type iterative methods with deep learning (DL) techniques for solving inverse scattering problems (ISPs). In this article, we propose a novel Variational Born Iterative Network, namely VBIM-Net, to solve full-wave ISPs with significantly improved flexibility and inversion quality. The proposed VBIM-Net emulates the alternating updates of the total electric field and the contrast in the variational Born iterative method (VBIM) through multiple layers of subnetworks. We embed the calculation of the contrast variation into each subnetwork, converting the scattered-field residual into an approximate contrast variation and then enhancing it with a U-Net, thereby avoiding the requirement, found in existing approaches, that the measurement dimension match the grid resolution. The total field and contrast output by each layer are supervised in the loss function of VBIM-Net, which guarantees the physical interpretability of the subnetworks' variables. In addition, we design a training scheme with extra noise to enhance the model's stability. Extensive numerical results on both synthetic and experimental data verify the inversion quality, generalization ability, and robustness of the proposed VBIM-Net. This work may provide new inspiration for the design of efficient field-type DL schemes.
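To make the layer-wise structure concrete, the sketch below shows one unrolled VBIM-Net-style layer in PyTorch; the back-projection operator, the omission of the total-field update, and the small CNN standing in for the U-Net are all assumptions of this sketch, not the paper's design:

```python
# One schematic unrolled layer: scattered-field residual -> approximate
# contrast variation -> CNN-refined contrast update.
import torch
import torch.nn as nn

class VBIMLayerSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN standing in for the U-Net that enhances the raw variation.
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, contrast, residual, back_proj):
        # back_proj (assumed given) maps the receiver-domain scattered-field
        # residual onto the imaging grid, which is what decouples the
        # measurement dimension from the grid resolution. The alternating
        # total-field update of VBIM is omitted in this sketch.
        delta = back_proj(residual)            # (B, 1, H, W) approximate variation
        return contrast + self.refine(delta)   # enhanced contrast update
```

Supervising the output of every such layer in the loss, as the abstract describes, would then amount to summing a reconstruction term over all layer outputs rather than only the final one.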
Abstract: How can the pilot overhead required for channel estimation be reduced? How can channel dynamics and error propagation in channel prediction be handled? To jointly address these two critical issues in next-generation transceiver design, in this paper we propose a novel framework named channel deduction for high-dimensional channel acquisition in multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Specifically, it makes use of the outdated channel information of past time slots, performs coarse estimation of the current channel with a relatively small number of pilots, and then fuses the two to obtain a complete representation of the present channel. The rationale is to align the current channel representation both to the latent channel features within the past samples and to the coarse estimate of the current channel at the pilot positions, which, in a sense, behaves as a complementary combination of estimation and prediction and thus reduces the overall overhead. To fully exploit the highly nonlinear correlations across the time, space, and frequency domains, we resort to learning-based implementations. Using the highly efficient complex-domain multilayer perceptron (MLP)-Mixer for cross space-frequency domain representation and recurrence-based or attention-based mechanisms for the past-present interaction, we design two different channel deduction neural networks (CDNets). We provide a general procedure for data collection, training, and deployment to standardize the application of CDNets. Comprehensive experimental evaluations of accuracy, robustness, and efficiency demonstrate the superiority of the proposed approach, which reduces the pilot overhead by up to 88.9% compared to state-of-the-art estimation approaches and enables continuous operation even under unknown user movement and error propagation.
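A minimal sketch of the deduction idea, with assumed tensor shapes, a GRU standing in for the recurrence-based past-present interaction, and real-valued tensors standing in for complex channel data, might look like this:

```python
# Conceptual channel-deduction network: a recurrent branch summarizes outdated
# channels from past slots, a feed-forward branch lifts the coarse pilot
# estimate, and a fusion head emits the full current channel.
import torch
import torch.nn as nn

class CDNetSketch(nn.Module):
    def __init__(self, dim_full=512, dim_pilot=64, hidden=256):
        super().__init__()
        self.past = nn.GRU(dim_full, hidden, batch_first=True)   # past-present interaction
        self.coarse = nn.Linear(dim_pilot, hidden)               # coarse pilot estimate branch
        self.fuse = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                  nn.GELU(), nn.Linear(hidden, dim_full))

    def forward(self, past_channels, pilot_estimate):
        # past_channels: (B, T, dim_full) outdated CSI from T past slots;
        # pilot_estimate: (B, dim_pilot) coarse estimate at the pilot positions.
        _, h = self.past(past_channels)
        z = torch.cat([h[-1], self.coarse(pilot_estimate)], dim=-1)
        return self.fuse(z)   # (B, dim_full) deduced current channel
```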
Abstract: In multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems, representing the whole channel based only on partial subchannels can significantly reduce the channel acquisition overhead. For such a channel mapping task, inspired by the intrinsic coupling across the space and frequency domains, this letter proposes to use interleaved learning with partial antenna and subcarrier characteristics to represent the whole MIMO-OFDM channel. Specifically, we design a complex-domain multilayer perceptron (MLP)-Mixer (CMixer), which utilizes two kinds of complex-domain MLP modules to learn the space and frequency characteristics respectively and then interleaves them to couple the learned properties. The complex-domain computation facilitates learning on the complex-valued channel data, while the interleaving tightens the coupling of the space and frequency domains. These two designs jointly reduce the learning burden, making the physics-inspired CMixer more effective at channel representation learning than existing data-driven approaches. Simulations show that the proposed scheme brings 4.6 to 10 dB gains in mapping accuracy over existing schemes under different settings. Besides, ablation studies show the necessity of complex-domain computation as well as the extent to which the interleaved learning matches the channel properties.
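The core block could plausibly look like the following sketch; the layer sizes are assumptions, and the complex linear map is realized with two real-valued layers:

```python
# CMixer-style block: a complex-valued linear layer applied alternately along
# the antenna (space) and subcarrier (frequency) axes, interleaving the two.
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """(a + ib)(Wr + iWi) realized with two real-valued linear maps."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.wr, self.wi = nn.Linear(d_in, d_out), nn.Linear(d_in, d_out)

    def forward(self, x):  # x: complex tensor, mixing applied to the last axis
        a, b = x.real, x.imag
        return torch.complex(self.wr(a) - self.wi(b), self.wi(a) + self.wr(b))

class CMixerBlock(nn.Module):
    def __init__(self, n_ant=32, n_sc=64):
        super().__init__()
        self.space_mlp = ComplexLinear(n_ant, n_ant)   # mixes across antennas
        self.freq_mlp = ComplexLinear(n_sc, n_sc)      # mixes across subcarriers

    def forward(self, h):                              # h: (B, n_ant, n_sc) complex
        h = h + self.space_mlp(h.transpose(1, 2)).transpose(1, 2)  # space mixing
        h = h + self.freq_mlp(h)                                   # frequency mixing
        return h
```

Stacking such blocks interleaves space and frequency mixing, which is what couples the two domains in the learned representation.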
Abstract: Knowledge graphs generally suffer from incompleteness, which can be alleviated by completing the missing information. Deep knowledge convolutional embedding models based on neural networks are currently popular methods for knowledge graph completion. However, most existing methods use external convolution kernels and traditional plain convolution processes, which limits the feature interaction capability of the model. In this paper, we propose ConvD, a novel dynamic convolutional embedding model for knowledge graph completion, which directly reshapes the relation embeddings into multiple internal convolution kernels, improving on the external convolution kernels of traditional convolutional embedding models. The internal convolution kernels can effectively augment the feature interaction between relation embeddings and entity embeddings, thus enhancing the model's embedding performance. Moreover, we design a prior-knowledge-optimized attention mechanism that assigns different contribution weight coefficients to the multiple relation convolution kernels for dynamic convolution, further improving the expressiveness of the model. Extensive experiments on various datasets show that our proposed model consistently outperforms state-of-the-art baseline methods, with average improvements ranging from 11.30% to 16.92% across all model evaluation metrics. Ablation experiments verify the effectiveness of each component module of ConvD.
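A schematic rendering of the dynamic-convolution scoring function is given below; the kernel count, kernel size, and the form of the attention head are assumptions of this sketch:

```python
# Dynamic convolution for KG completion: the relation embedding is reshaped
# into several internal 1D kernels, weighted by attention over kernels, and
# slid over the entity embedding to produce a triple plausibility score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConvScore(nn.Module):
    def __init__(self, dim=200, n_kernels=8, ksize=5):
        super().__init__()
        self.n_kernels, self.ksize = n_kernels, ksize
        self.to_kernels = nn.Linear(dim, n_kernels * ksize)  # relation -> internal kernels
        self.attn = nn.Linear(dim, n_kernels)                # attention over kernels
        self.score = nn.Linear(dim, 1)

    def forward(self, ent, rel):                  # ent, rel: (B, dim)
        B, dim = ent.shape
        kernels = self.to_kernels(rel).view(B * self.n_kernels, 1, self.ksize)
        weights = torch.softmax(self.attn(rel), dim=-1)      # (B, K) contribution weights
        # Fold the batch into channels so each sample is convolved with its own kernels.
        feats = F.conv1d(ent.view(1, B, dim), kernels,
                         padding=self.ksize // 2, groups=B).view(B, self.n_kernels, dim)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)   # attention-weighted sum
        return self.score(torch.relu(fused)).squeeze(-1)     # triple plausibility score
```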
Abstract: Recently, big artificial intelligence (AI) models represented by ChatGPT have brought about an incredible revolution. With a pre-trained big AI model (BAIM) in a given field, numerous downstream tasks can be accomplished with only few-shot or even zero-shot learning, exhibiting state-of-the-art performance. As widely envisioned, big AI models are expected to rapidly penetrate major intelligent services and applications and to run at low unit cost and high flexibility. In 6G wireless networks, to fully enable intelligent communication, sensing, and computing, apart from providing other intelligent wireless services and applications, it is of vital importance to design and deploy certain wireless BAIMs (wBAIMs). However, investigations into architecture design and system evaluation for wBAIMs are still lacking. In this paper, we provide a comprehensive discussion as well as in-depth prospects on the demand, design, and deployment aspects of wBAIMs. We opine that wBAIMs will be a key recipe for 6G wireless networks to build highly efficient, sustainable, versatile, and extensible wireless intelligence for numerous promising visions. We then present the core characteristics and principles that guide the design of wBAIMs, and discuss the key aspects of developing wBAIMs by identifying the differences between existing BAIMs and the emerging wBAIMs. Finally, related research directions and potential solutions are outlined.
Abstract: In this paper, we propose an innovative learning-based channel prediction scheme that achieves higher prediction accuracy while relaxing the requirements on the large amount and strict sequential format of channel data. Inspired by the idea of the neural ordinary differential equation (Neural ODE), we first prove, by analyzing the physical process of electromagnetic wave propagation within a varying space, that the channel prediction problem can be modeled as an ODE problem with a known initial value. We then design a novel physics-inspired spatial channel gradient network (SCGNet), which represents the derivative of the varying channel as a special neural network and can produce the gradients at any relative displacement needed for solving the ODE. With the SCGNet, the static channel at any location served by the base station can be accurately inferred through consecutive propagation and integration. Finally, we design an efficient recurrent positioning algorithm, based on some prior knowledge of user mobility, to obtain the velocity vector, and propose an approximate Doppler compensation method to reconstruct the instantaneous angular-delay domain channel. Only discrete historical channel data are needed for training, and only a few fresh channel measurements are needed for prediction, which ensures the scheme's practicability.
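A toy rendering of this ODE view is sketched below; the fixed-step Euler solver, the network size, and the displacement parameterization are assumptions standing in for the paper's SCGNet and solver:

```python
# ODE view of channel prediction: a gradient network returns the channel
# derivative with respect to displacement, and the channel at a new location
# is obtained by integrating from a known initial value.
import torch
import torch.nn as nn

class ChannelGradientNet(nn.Module):
    """dH/ds as a function of the current channel and the moving direction."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 3, 128), nn.Tanh(),
                                 nn.Linear(128, dim))

    def forward(self, h, direction):               # h: (B, dim), direction: (B, 3)
        return self.net(torch.cat([h, direction], dim=-1))

def integrate(grad_net, h0, direction, distance, steps=50):
    """Fixed-step Euler solve of dH/ds = f(H, u) from the known initial channel h0."""
    h, ds = h0, distance / steps
    for _ in range(steps):
        h = h + ds * grad_net(h, direction)        # propagate along the displacement
    return h
```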