Abstract: The study of Graph Neural Networks (GNNs) has received considerable interest in the past few years. By extending deep learning to graph-structured data, GNNs can solve a diverse set of tasks in fields including social science, chemistry, and medicine. The development of GNN architectures has largely focused on improving empirical performance on tasks such as node or graph classification. However, a recent line of work has instead sought GNN architectures with desirable theoretical properties, studying their expressive power and designing architectures that maximize it. While there is no consensus on the best way to define the expressiveness of a GNN, it can be viewed from several well-motivated perspectives. Perhaps the most natural approach is to study the universal approximation properties of GNNs, much as has been done extensively for MLPs. Another direction focuses on the extent to which GNNs can distinguish between different graph structures, relating this ability to graph isomorphism testing. Finally, a GNN's ability to compute graph properties such as graph moments has been suggested as yet another form of expressiveness. These definitions are complementary and have yielded different recommendations for GNN architecture choices. In this paper, we give an overview of the notion of "expressive power" of GNNs and provide insights into GNN design choices.
Abstract: Multi-modal magnetic resonance imaging (MRI) is a crucial method for analyzing the human brain. It is commonly used to diagnose diseases and to inform treatment decisions, for instance when checking for gliomas in the brain. Because gliomas vary widely in severity and detectability, diagnosing them properly is one of the most challenging and significant analysis tasks in modern medicine. Our primary focus is on evaluating different approaches for segmenting brain tumors in multi-modal MRI scans. Since the quantity and variability of the training data have long been considered crucial for developing excellent models, we also experiment with Knowledge Distillation techniques.