Abstract:Objective: Ensuring precise motion tracking in MRI-guided radiotherapy (MRIgRT) is crucial for delivering effective treatments. This study improves motion tracking accuracy in MRIgRT through an automatic real-time tracking method that leverages an enhanced Tracking-Learning-Detection (ETLD) framework coupled with automatic segmentation. Methods: We developed a novel MRIgRT motion tracking method by integrating two primary components: the ETLD framework and an improved Chan-Vese model (ICV), together named ETLD+ICV. The TLD framework was upgraded to suit real-time cine MRI, including advanced image preprocessing, no-reference image quality assessment, an enhanced median-flow tracker, and a refined detector with dynamic search region adjustments. Additionally, the ICV was incorporated to achieve precise coverage of the target volume; it refines the segmented region frame by frame using the tracking results, with key parameters optimized. Tested on 3.5D MRI scans from 10 patients with liver metastases, our method delivers the precise tracking and accurate segmentation vital for MRIgRT. Results: An evaluation of 106,000 frames across 77 treatment fractions revealed sub-millimeter tracking errors of less than 0.8 mm, with over 99% precision and 98% recall for all subjects, underscoring the robustness and efficacy of the ETLD. Moreover, ETLD+ICV yielded a Dice global score of more than 82% for all subjects, demonstrating the proposed method's extensibility and precise target volume coverage. Conclusions: This study developed an automatic real-time motion tracking method for MRIgRT that markedly surpasses current methods. The novel method not only delivers exceptional precision in tracking and segmentation but also adapts well to clinical demands, positioning it as a valuable tool for improving the efficacy of radiotherapy treatments.
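To make the segmentation-refinement step concrete, the following minimal sketch shows how a tracker-supplied bounding box could seed a per-frame Chan-Vese segmentation using scikit-image. It is an illustrative stand-in, not the authors' ETLD+ICV implementation; the function name `refine_target_mask`, the padding parameter, and the regularization values are assumptions.

```python
# Minimal sketch: refine a per-frame segmentation inside a tracked bounding box
# with a standard Chan-Vese active contour (stand-in for the improved Chan-Vese).
import numpy as np
from skimage.segmentation import chan_vese

def refine_target_mask(frame, bbox, pad=5):
    """frame: 2-D cine-MRI slice; bbox: (x, y, w, h) box from the tracker, in pixels."""
    x, y, w, h = bbox
    y0, y1 = max(0, y - pad), min(frame.shape[0], y + h + pad)
    x0, x1 = max(0, x - pad), min(frame.shape[1], x + w + pad)
    roi = frame[y0:y1, x0:x1].astype(float)
    roi = (roi - roi.min()) / (np.ptp(roi) + 1e-8)               # normalize intensities to [0, 1]
    mask_roi = chan_vese(roi, mu=0.1, lambda1=1.0, lambda2=1.0)  # level-set segmentation of the ROI
    mask = np.zeros(frame.shape, dtype=bool)
    mask[y0:y1, x0:x1] = mask_roi
    return mask
```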
Abstract:Unmanned aerial vehicles (UAVs) are desirable platforms for time-efficient and cost-effective task execution, and 3-D path planning is a key challenge for their task decision-making. This paper proposes an improved multi-objective evolutionary algorithm based on decomposition (MOEA/D) with an adaptive areal weight adjustment (AAWA) strategy to trade off the total flight path length against the terrain threat. AAWA is designed to improve the diversity of the solutions. More specifically, AAWA first removes a crowded individual and its weight vector from the current population and then adds a sparse individual from the external elite population to the current population. To enable the newly added individual to evolve towards the sparser area of the population in the objective space, its weight vector is constructed from the objective function values of its neighbors. The effectiveness of MOEA/D-AAWA is validated in twenty synthetic scenarios with different numbers of obstacles and in four realistic scenarios, in comparison with three other classical methods.
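As an illustration of the weight-adjustment step, the sketch below removes the most crowded individual, adds the sparsest external-archive member, and builds its weight vector from objective values. The nearest-neighbor crowding measure, the bi-objective setting, and the normalization by distance to the ideal point are assumptions rather than the paper's exact AAWA rules.

```python
# Minimal sketch of an adaptive weight adjustment step for a bi-objective MOEA/D.
import numpy as np

def adjust_weights(pop_F, weights, archive_F, ideal):
    """pop_F: (N,2) objectives of the current population; weights: (N,2) weight vectors;
    archive_F: (M,2) objectives of the external elite archive; ideal: (2,) ideal point."""
    # 1) remove the most crowded individual and its weight vector
    d = np.linalg.norm(pop_F[:, None, :] - pop_F[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    crowded = np.argmin(d.min(axis=1))            # smallest nearest-neighbor distance
    pop_F = np.delete(pop_F, crowded, axis=0)
    weights = np.delete(weights, crowded, axis=0)
    # 2) add the sparsest archive member (largest distance to the remaining population)
    d2 = np.linalg.norm(archive_F[:, None, :] - pop_F[None, :, :], axis=-1)
    f_new = archive_F[np.argmax(d2.min(axis=1))]
    # 3) build its weight vector from objective values relative to the ideal point
    inv = 1.0 / np.maximum(f_new - ideal, 1e-12)
    w_new = inv / inv.sum()
    return np.vstack([pop_F, f_new]), np.vstack([weights, w_new])
```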
Abstract:Metaheuristics are popular in various fields and have attracted much attention in the scientific and industrial communities. In recent years, the number of new metaheuristic names has been growing continuously. Generally, the inventors attribute the novelty of these new algorithms to inspirations from biology, human behaviors, physics, or other phenomena. In addition, these new algorithms show competitive performance when compared against basic versions of other metaheuristics on classical benchmark problems without shift/rotation. In this study, we exhaustively tabulate more than 500 metaheuristics. To comparatively evaluate the performance of recent competitive variants and newly proposed metaheuristics, 11 newly proposed metaheuristics and 4 variants of established metaheuristics are comprehensively compared on the CEC2017 benchmark suite. We also investigate whether these algorithms have a search bias toward the center of the search space. The results show that the newly proposed EBCM (effective butterfly optimizer with covariance matrix adaptation) algorithm performs comparably to the 4 well-performing variants of established metaheuristics and possesses similar properties and behaviors, such as convergence, diversity, and exploration-exploitation trade-offs, in many respects. The performance of all 15 algorithms tends to deteriorate under certain transformations, although the 4 state-of-the-art variants are less affected by transformations such as shifting the global optimum away from the center of the search space. It should be noted that, except for EBCM, the other 10 new algorithms, proposed mostly during 2019-2020, are inferior to the well-performing 2017 variants of differential evolution and evolution strategy in terms of convergence speed and global search ability on the CEC2017 functions.
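The center-bias check can be illustrated with a toy experiment: a sampler concentrated around the origin looks strong when the optimum sits at the center of the search space but degrades once the optimum is shifted. The sketch below is a hypothetical setup with a shifted sphere function, not the CEC2017 benchmark code.

```python
# Toy center-bias probe: compare a center-biased sampler on an unshifted vs. shifted optimum.
import numpy as np

def shifted_sphere(x, shift):
    return np.sum((x - shift) ** 2)

rng = np.random.default_rng(0)
dim, bounds = 10, (-100.0, 100.0)
shift = rng.uniform(bounds[0] * 0.8, bounds[1] * 0.8, dim)   # optimum moved away from the center

# A center-biased "optimizer" (random sampling around the origin) as a stand-in.
samples = rng.normal(0.0, 10.0, size=(10000, dim))
best_unshifted = min(shifted_sphere(s, np.zeros(dim)) for s in samples)
best_shifted = min(shifted_sphere(s, shift) for s in samples)
print(best_unshifted, best_shifted)   # a large gap indicates a bias toward the center
```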
Abstract:This paper studies deep reinforcement learning (DRL) for the task scheduling problem of multiple unmanned aerial vehicles (UAVs). Current approaches generally use exact and heuristic algorithms to solve the problem, but the computation time increases rapidly as the task scale grows, and heuristic rules need to be designed manually. As a self-learning method, DRL can obtain a high-quality solution quickly without hand-engineered rules. However, the huge decision space makes the training of DRL models unstable for large-scale tasks. In this work, to address the large-scale problem, we develop a divide-and-conquer framework (DCF) to decouple the original problem into a task allocation subproblem and a UAV route planning subproblem, which are solved in the upper and lower layers, respectively. Based on DCF, a double-layer deep reinforcement learning approach (DL-DRL) is proposed, where an upper-layer DRL model is designed to allocate tasks to appropriate UAVs and a lower-layer DRL model [i.e., the widely used attention model (AM)] is applied to generate viable UAV routes. Since the upper-layer model determines the input data distribution of the lower-layer model, and its reward is calculated via the lower-layer model during training, we develop an interactive training strategy (ITS) in which the whole training process consists of pre-training, intensive training, and alternate training. Experimental results show that our DL-DRL outperforms mainstream learning-based and most traditional methods and is competitive with the state-of-the-art heuristic method [i.e., OR-Tools], especially on large-scale problems. The strong generalizability of DL-DRL is also verified by applying a model trained on one problem size to larger ones. Furthermore, an ablation study demonstrates that our ITS achieves a compromise between model performance and training duration.
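The divide-and-conquer structure can be sketched as two cooperating layers. In the sketch below, the upper-layer allocator and lower-layer router are simple placeholder heuristics standing in for the paper's DRL models, and all function names are hypothetical.

```python
# Minimal sketch of the two-layer decomposition only (placeholder heuristics, not DRL models).
import numpy as np

def allocate_tasks(tasks, n_uavs):
    """Upper layer: assign each task to a UAV (placeholder: split tasks by angular order)."""
    angles = np.arctan2(tasks[:, 1], tasks[:, 0])
    order = np.argsort(angles)
    return [order[i::n_uavs] for i in range(n_uavs)]

def plan_route(depot, tasks):
    """Lower layer: order the assigned tasks (placeholder: nearest-neighbor tour)."""
    remaining, route, cur = list(range(len(tasks))), [], depot
    while remaining:
        nxt = min(remaining, key=lambda i: np.linalg.norm(tasks[i] - cur))
        route.append(nxt)
        cur = tasks[nxt]
        remaining.remove(nxt)
    return route

tasks = np.random.rand(30, 2) * 100
assignments = allocate_tasks(tasks, n_uavs=3)
routes = [plan_route(np.zeros(2), tasks[idx]) for idx in assignments]
```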
Abstract:Remembering and forgetting mechanisms are two sides of the same coin in the human learning-memory system. Inspired by human brain memory mechanisms, modern machine learning systems have sought to endow machines with lifelong learning capability through better remembering, while treating forgetting as an antagonist to overcome. Nevertheless, this idea might capture only half of the picture. Recently, a growing number of researchers have argued that a brain is born to forget, i.e., forgetting is a natural and active process that supports abstract, rich, and flexible representations. This paper presents a learning model with an active forgetting mechanism for artificial neural networks. The active forgetting mechanism (AFM) is introduced to a neural network via a "plug-and-play" forgetting layer (P\&PF), consisting of groups of inhibitory neurons with an Internal Regulation Strategy (IRS) that adjusts their own extinction rate via lateral inhibition and an External Regulation Strategy (ERS) that adjusts the extinction rate of excitatory neurons via inhibition. Experimental studies show that the P\&PF offers surprising benefits: a self-adaptive structure, strong generalization, long-term learning and memory, and robustness to data and parameter perturbation. This work sheds light on the importance of forgetting in the learning process and offers new perspectives on the underlying mechanisms of neural networks.
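A minimal PyTorch sketch of the "plug-and-play" idea is given below: a learnable per-channel extinction gate attenuates excitatory activations and can be dropped between existing layers. The IRS/ERS lateral-inhibition details of the P\&PF are deliberately simplified away, so this is an assumption-laden illustration rather than the paper's layer.

```python
# Sketch of a plug-and-play forgetting layer: one learnable extinction rate per channel.
import torch
import torch.nn as nn

class ForgettingLayer(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        # one inhibitory "extinction rate" per excitatory channel
        self.extinction = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        keep = torch.sigmoid(-self.extinction)   # in (0, 1); larger extinction -> more forgetting
        return x * keep

# Usage: drop the layer into an existing network between two layers.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), ForgettingLayer(256), nn.Linear(256, 10))
out = net(torch.randn(8, 784))
```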
Abstract:The vehicle routing problem (VRP) is a typical discrete combinatorial optimization problem, and many models and algorithms have been proposed to solve the VRP and its variants. Although existing approaches have contributed a lot to the development of this field, they are either limited in problem size or require manual intervention in choosing parameters. To tackle these difficulties, many studies have considered learning-based optimization (LBO) algorithms to solve the VRP. This paper reviews recent advances in this field and divides relevant approaches into end-to-end approaches and step-by-step approaches. We design three sets of experiments to fairly evaluate the performance of four representative LBO algorithms and conclude that combining heuristic search can effectively improve the learning ability and sample efficiency of LBO models. Finally, we point out that the research trend for LBO algorithms is to solve large-scale, multi-constraint problems from the real world.
Abstract:Graph neural networks (GNNs) have achieved great success in many graph-based tasks. Much work has been dedicated to empowering GNNs with adaptive locality, which enables the importance of neighboring nodes to the target node to be measured by a node-specific mechanism. However, current node-specific mechanisms are deficient in distinguishing the importance of nodes in the topology structure. We believe that the structural importance of neighboring nodes is closely related to their importance in aggregation. In this paper, we introduce discrete graph curvature (the Ricci curvature) to quantify the strength of the structural connection between pairwise nodes, and we propose the Curvature Graph Neural Network (CGNN), which effectively improves the adaptive locality ability of GNNs by leveraging the structural property of graph curvature. To improve the adaptability of curvature to various datasets, we explicitly transform curvature into the weights of neighboring nodes through a Negative Curvature Processing Module and a Curvature Normalization Module. We then conduct numerous experiments on various synthetic and real-world datasets. The results on synthetic datasets show that CGNN effectively exploits topology structure information and improves performance significantly. CGNN also outperforms the baselines on 5 dense node classification benchmark datasets. This study deepens the understanding of how to utilize advanced topology information and assign the importance of neighboring nodes from the perspective of graph curvature, and it encourages us to bridge the gap between graph theory and neural networks.
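To illustrate how edge curvature might be turned into aggregation weights, the sketch below maps precomputed Ollivier-Ricci curvatures through a monotone positive transform and normalizes them per node. CGNN's actual Negative Curvature Processing and Curvature Normalization Modules are more elaborate; the helper name and the exponential transform are assumptions.

```python
# Sketch: convert precomputed edge curvatures into per-node normalized aggregation weights.
import numpy as np

def curvature_to_weights(edges, curvature, num_nodes):
    """edges: list of undirected (u, v) pairs; curvature: dict {(u, v): Ricci curvature}."""
    weights = {}
    for u, v in edges:
        w = np.exp(curvature[(u, v)])          # maps negative curvature to a small positive weight
        weights[(u, v)] = weights[(v, u)] = w
    # normalize per source node over its incident edges
    deg_sum = np.zeros(num_nodes)
    for (u, v), w in weights.items():
        deg_sum[u] += w
    return {(u, v): w / deg_sum[u] for (u, v), w in weights.items()}
```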
Abstract:There is hardly a general solver that is efficient for scheduling problems, owing to their diversity and complexity. In this study, we develop a two-stage framework in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined to deal efficiently with complex scheduling problems. The scheduling problem is solved in two stages: a finite Markov decision process (MDP) and a mixed-integer programming process. This offers a novel and general paradigm that combines RL with OR approaches and leverages their respective strengths: the MDP narrows down the search space of the original problem through an RL method, while the mixed-integer programming process is handled by an OR algorithm. The two stages are performed iteratively and interactively until the termination criterion is met. Based on this idea, two implementations of the combined RL and OR method are put forward. The agile Earth observation satellite scheduling problem is selected as an example to demonstrate the effectiveness of the proposed scheduling framework and methods. The convergence and generalization capability of the methods are verified on the training scenarios, while the efficiency and accuracy are tested on 50 untrained scenarios. The results show that the proposed algorithms can stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems. In addition, RL-based optimization algorithms are found to have stronger scalability than non-learning algorithms. This work reveals the advantage of combining reinforcement learning with heuristic methods or mathematical programming for solving complex combinatorial optimization problems.
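The two-stage pattern can be sketched generically: a policy stage narrows the candidate set, then an exact stage solves the reduced problem. The components below (a ratio-based scoring rule and brute-force enumeration under a capacity budget) are placeholders for the paper's RL policy and mixed-integer programming solver, not the satellite-scheduling models themselves.

```python
# Minimal sketch of the two-stage idea with placeholder components.
import itertools
import random

def policy_stage(tasks, k):
    """Stage 1 (stands in for the RL/MDP step): keep the k most promising tasks."""
    return sorted(tasks, key=lambda t: t["reward"] / t["duration"], reverse=True)[:k]

def exact_stage(candidates, capacity):
    """Stage 2 (stands in for the OR/MIP step): exact search over the reduced set."""
    best, best_val = (), 0
    for r in range(len(candidates) + 1):
        for subset in itertools.combinations(candidates, r):
            if sum(t["duration"] for t in subset) <= capacity:
                val = sum(t["reward"] for t in subset)
                if val > best_val:
                    best, best_val = subset, val
    return best, best_val

tasks = [{"id": i, "reward": random.randint(1, 10), "duration": random.randint(1, 5)} for i in range(50)]
schedule, value = exact_stage(policy_stage(tasks, k=12), capacity=20)
```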
Abstract:One of the key problems of graph neural networks (GNNs) is how to describe the importance of neighbor nodes in the aggregation process for learning node representations. A class of GNNs addresses this problem by learning implicit weights to represent the importance of neighbor nodes; we call these implicit GNNs, e.g., the Graph Attention Network. The basic idea of implicit GNNs is to introduce graph information with special properties followed by Learnable Transformation Structures (LTS), which encode the importance of neighbor nodes in a data-driven way. In this paper, we argue that the LTS makes the special properties of graph information disappear during the learning process, rendering the graph information unhelpful for learning node representations. We call this phenomenon Graph Information Vanishing (GIV). We also find that the LTS maps different graph information to highly similar results. To validate these two points, we design two sets of 70 random experiments on five implicit GNN methods and seven benchmark datasets, using a random permutation operator to disrupt the order of the graph information and replacing the graph information with random values. We find that randomization does not affect model performance in 93\% of the cases, with the remaining roughly 7\% of cases incurring an average accuracy loss of 0.5\%. Moreover, the cosine similarity of the outputs generated by the LTS from different graph information exceeds 99\% in 81\% of cases. These experimental results support the existence of GIV in implicit GNNs and imply that existing implicit GNN methods do not make good use of graph information. The relationship between graph information and the LTS should be rethought to ensure that graph information is actually used in learning node representations.
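The randomization check can be illustrated with a toy LTS: compare the attention-like weights produced from the original graph information with those produced after shuffling it or replacing it with random values. The linear-plus-softmax map below is a hypothetical stand-in, not any of the five evaluated models.

```python
# Toy randomization experiment: does a simple LTS produce similar outputs for different inputs?
import numpy as np

rng = np.random.default_rng(0)

def lts(graph_info, W):
    """Toy learnable transformation structure: per-node scores over neighbors -> softmax weights."""
    scores = graph_info @ W                                # (nodes, neighbors)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

graph_info = rng.random((100, 5, 8))     # 100 nodes, 5 neighbors each, 8-dim structural information
W = rng.normal(size=8)
original = lts(graph_info, W)
shuffled = lts(rng.permuted(graph_info, axis=1), W)      # disrupt the order of the graph information
randomized = lts(rng.random((100, 5, 8)), W)             # replace it with random values
print(cosine(original, shuffled), cosine(original, randomized))
```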
Abstract:Nonlinear equation systems (NESs) are widely used in real-world problems, but they are difficult to solve due to their nonlinearity and multiple roots. Evolutionary algorithms (EAs) are one class of methods for solving NESs, given their global search capability and their ability to locate multiple roots of an NES simultaneously within one run. Currently, the majority of research on using EAs to solve NESs focuses on transformation techniques and on improving the performance of the EAs themselves. By contrast, this study investigates the problem-domain knowledge of NESs and proposes incorporating a variable reduction strategy (VRS) into EAs to solve NESs. VRS makes full use of the expressions of an NES, using some variables (i.e., core variables) to represent other variables (i.e., reduced variables) through the variable relationships that exist in the equation system. This makes it possible to eliminate some variables and equations and to shrink the decision space, thereby reducing the complexity of the problem and improving the search efficiency of the EAs. To test the effectiveness of VRS in dealing with NESs, this paper integrates VRS into two existing state-of-the-art EA methods (i.e., MONES and DRJADE). Experimental results show that, with the assistance of VRS, the EA methods produce significantly better results than the original methods and other compared methods.
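A worked toy example of variable reduction: for the system {x1 + x2 - 3 = 0, x1^2 + x2^2 - 5 = 0}, the first equation expresses the reduced variable x2 as 3 - x1 in terms of the core variable x1, so the search collapses to one dimension. The sketch below uses dense sampling as a stand-in for the EA; the system itself is illustrative and not taken from the paper.

```python
# Variable reduction on a toy NES: eliminate x2 and one equation, then search over x1 only.
import numpy as np

def reduced_residual(x1):
    x2 = 3.0 - x1                          # variable relationship from the first equation
    return np.abs(x1 ** 2 + x2 ** 2 - 5.0) # residual of the remaining equation

# A 1-D search (dense sampling as a stand-in for the EA) recovers both roots: x1 = 1 and x1 = 2.
grid = np.linspace(-10.0, 10.0, 200001)
roots_x1 = grid[reduced_residual(grid) < 1e-3]
```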