Abstract: Directed evolution, a widely used engineering strategy, faces the obstacle of identifying desired mutants within a massive space of candidate modifications. While deep learning methods learn protein contexts to establish feasible search spaces, many existing models are computationally demanding and fail to predict how specific mutations will affect a protein's sequence or function. This research introduces a lightweight graph representation learning scheme that efficiently analyzes the microenvironment of wild-type proteins and recommends practical higher-order mutations specific to the user-specified protein and function of interest. Our method enables continuous improvement of the inference model with limited computational resources and only a few hundred mutational training samples, resulting in accurate predictions of variant effects that exhibit near-perfect correlation with the ground truth across deep mutational scanning assays of 19 proteins. Its affordability and accessibility to both computer scientists and biochemical laboratories make our solution a practical choice for the community.
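For intuition, below is a minimal sketch, in PyTorch, of the kind of lightweight graph scoring of a wild-type residue microenvironment described above. The class name LightMicroenvGNN, the one-hot residue features, the placeholder adjacency, and the single-value fitness readout are illustrative assumptions and do not reproduce the paper's actual model.

```python
# Minimal sketch (not the authors' implementation): a lightweight GNN that
# scores a candidate mutation from a wild-type residue microenvironment graph.
# Node features, graph construction, and the fitness head are assumptions.
import torch
import torch.nn as nn

class LightMicroenvGNN(nn.Module):
    def __init__(self, node_dim=21, hidden=64):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))  # scalar variant-effect score

    def forward(self, x, adj):
        # x:   (num_residues, node_dim) one-hot amino-acid features of the microenvironment
        # adj: (num_residues, num_residues) row-normalized contact adjacency
        h = torch.relu(self.embed(x))
        h = torch.relu(h + adj @ self.msg(h))   # one round of neighborhood aggregation
        return self.readout(h.mean(dim=0))      # pooled score for this variant

# Toy usage: score a hypothetical 8-residue microenvironment.
x = torch.zeros(8, 21)
x[torch.arange(8), torch.randint(0, 21, (8,))] = 1.0
adj = torch.eye(8)  # placeholder adjacency; a real graph would use spatial contacts
print(LightMicroenvGNN()(x, adj))
```

Training such a small scoring head on a few hundred labeled variants, as the abstract describes, amounts to standard supervised fine-tuning at modest computational cost.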
Abstract: Advances in deep learning models have revolutionized the study of biomolecular systems and their mechanisms. Graph representation learning, in particular, is important for accurately capturing the geometric information of biomolecules at different levels. This paper presents a comprehensive review of the methodologies used to represent biological molecules and systems as computer-recognizable objects, such as sequences, graphs, and surfaces. It then examines how geometric deep learning models, with an emphasis on graph-based techniques, can analyze biomolecule data to enable drug discovery, protein characterization, and biological system analysis. The review concludes with an overview of the current state of the field, highlighting existing challenges and potential future research directions.
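As a small, self-contained illustration of one representation class such a review surveys (molecules as graphs), the sketch below converts a SMILES string into node and edge lists with RDKit. The helper mol_to_graph and its minimal atomic-number features are hypothetical and not drawn from any specific method in the review.

```python
# Illustrative only: atoms become nodes (atomic numbers as features) and
# bonds become undirected edges, yielding a graph a GNN could consume.
from rdkit import Chem

def mol_to_graph(smiles):
    mol = Chem.MolFromSmiles(smiles)
    nodes = [atom.GetAtomicNum() for atom in mol.GetAtoms()]
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]
    return nodes, edges

print(mol_to_graph("CCO"))  # ethanol: ([6, 6, 8], [(0, 1), (1, 2)])
```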
Abstract: Gigapixel medical images provide massive amounts of data, comprising both morphological textures and spatial information, to be mined. Given the large data scale in histology, deep learning methods play an increasingly significant role as feature extractors. Existing solutions rely heavily on convolutional neural networks (CNNs) for global pixel-level analysis, leaving the underlying local geometric structure, such as the interactions among cells in the tumor microenvironment, unexplored. The topological structure of medical images, which has been shown to be closely related to tumor evolution, can be well characterized by graphs. To obtain a more comprehensive representation for downstream oncology tasks, we propose a fusion framework that enhances the global image-level representation captured by CNNs with the geometry of cell-level spatial information learned by graph neural networks (GNNs). The fusion layer optimizes the integration of complementary features from global images and cell graphs. Two fusion strategies are developed: one based on an MLP, which is simple yet effective after fine-tuning, and the other based on a Transformer, which excels at fusing multiple networks. We evaluate our fusion strategies on histology datasets curated from large patient cohorts of colorectal and gastric cancers for three biomarker prediction tasks. Both models outperform plain CNNs or GNNs, achieving a consistent AUC improvement of more than 5% across various network backbones. The experimental results demonstrate the necessity of combining image-level morphological features with cell-level spatial relations in medical image analysis. Code is available at https://github.com/yiqings/HEGnnEnhanceCnn.
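To make the MLP-based fusion idea concrete, here is a minimal PyTorch sketch assuming precomputed image-level and cell-graph embeddings; the class name MLPFusion, the feature dimensions, and the two-class head are illustrative assumptions, not the released implementation at the repository above.

```python
# Minimal sketch of MLP fusion: concatenate a CNN image embedding with a
# pooled GNN cell-graph embedding and classify the fused vector.
import torch
import torch.nn as nn

class MLPFusion(nn.Module):
    def __init__(self, cnn_dim=512, gnn_dim=128, hidden=256, num_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(cnn_dim + gnn_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, cnn_feat, gnn_feat):
        # cnn_feat: (B, cnn_dim) image-level embedding from a CNN backbone
        # gnn_feat: (B, gnn_dim) pooled cell-graph embedding from a GNN
        return self.fuse(torch.cat([cnn_feat, gnn_feat], dim=-1))

# Toy usage with random tensors standing in for real backbone outputs.
logits = MLPFusion()(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])
```

The Transformer-based strategy mentioned in the abstract would replace this simple concatenation with attention over the two feature streams.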