CHIMERE
Abstract: Cardiovascular diseases (CVD) remain a leading health concern and contribute significantly to global mortality. While clinical advances have reduced CVD mortality, accurately identifying individuals who would benefit from preventive interventions remains an open challenge in preventive cardiology. Current guideline-recommended CVD risk prediction models rely on a limited set of traditional risk factors or on quantitative biomarkers derived from CT imaging, and their predictive accuracy and applicability remain limited. Conversely, end-to-end deep learning methods for CVD risk prediction from CT images often fail to provide transparent, explainable decision grounds for assisting physicians. In this work, we propose a novel joint representation that integrates discrete quantitative biomarkers and continuous deep features extracted from chest CT scans. Our approach starts with a deep CVD risk classification model that captures comprehensive continuous deep features, while clinically established quantitative biomarkers are obtained in parallel via segmentation models. In the joint representation stage, an instance-wise feature-gating mechanism aligns the continuous and discrete features, followed by a soft instance-wise feature interaction mechanism that fosters independent and effective feature interaction for the final CVD risk prediction. Our method substantially improves CVD risk prediction and offers a contribution analysis of each individual biomarker, which is important for assisting physicians' decision-making. We validated our method on a public chest low-dose CT dataset and a private external chest standard-dose CT cohort comprising 17,207 CT volumes from 6,393 unique subjects, achieving AUCs of 0.875 and 0.843, respectively.
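The abstract does not specify the gating architecture; as a rough illustration only, a minimal sketch of one plausible instance-wise feature-gating and soft feature-interaction layer (module names and dimensions are hypothetical, not the authors' implementation) might look like:

```python
import torch
import torch.nn as nn

class GatedJointRepresentation(nn.Module):
    """Hypothetical sketch: gate continuous deep features with discrete
    biomarkers, then let the two streams softly interact for risk prediction."""

    def __init__(self, deep_dim: int, n_biomarkers: int, hidden: int = 64):
        super().__init__()
        # instance-wise gate: one weight per deep feature, conditioned on the biomarkers
        self.gate = nn.Sequential(
            nn.Linear(n_biomarkers, deep_dim),
            nn.Sigmoid(),
        )
        # soft interaction: learned mixing of gated deep features and biomarkers
        self.interact = nn.Sequential(
            nn.Linear(deep_dim + n_biomarkers, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # CVD risk logit
        )

    def forward(self, deep_feat: torch.Tensor, biomarkers: torch.Tensor) -> torch.Tensor:
        gated = deep_feat * self.gate(biomarkers)       # instance-wise gating
        joint = torch.cat([gated, biomarkers], dim=-1)  # joint representation
        return self.interact(joint)                     # risk prediction

# usage sketch: batch of 4 subjects, 256 deep features, 8 quantitative biomarkers
model = GatedJointRepresentation(deep_dim=256, n_biomarkers=8)
risk_logits = model(torch.randn(4, 256), torch.randn(4, 8))
```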
Abstract: Shortwave-infrared (SWIR) spectral information, ranging from 1 {\mu}m to 2.5 {\mu}m, breaks the limitations of traditional color cameras in acquiring scene information and has been used in many fields. However, conventional SWIR hyperspectral imaging systems face challenges due to their bulky setups and low acquisition speed. In this work, we introduce a snapshot SWIR hyperspectral imaging system based on a metasurface filter, together with a filter selection method that achieves the lowest correlation coefficient among the selected filters. The system has the advantages of small size and snapshot imaging. We propose a novel inter- and intra-prior learning unfolding framework to achieve high-quality SWIR hyperspectral image reconstruction, which bridges the gap between prior learning and cross-stage information interaction. We also design an adaptive feature transfer mechanism that adaptively transfers the contextual correlation of multi-scale encoder features to prevent loss of detail in the decoder. Experimental results demonstrate that our method reconstructs HSI at high speed with superior performance over existing methods.
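The abstract describes selecting metasurface filters so that their spectral responses are minimally correlated, but does not give the selection criterion; a simple greedy selection over pairwise correlation coefficients (a sketch with hypothetical array names, not the authors' algorithm) could look like:

```python
import numpy as np

def select_filters(responses: np.ndarray, k: int) -> list[int]:
    """Greedy sketch: from `responses` (n_filters x n_wavelengths), pick `k`
    filters whose transmission spectra are as weakly correlated as possible."""
    corr = np.abs(np.corrcoef(responses))  # |correlation| between filter spectra; diagonal is 1
    # seed with the least-correlated pair
    i, j = np.unravel_index(np.argmin(corr), corr.shape)
    selected = [int(i), int(j)]
    while len(selected) < k:
        candidates = [c for c in range(len(corr)) if c not in selected]
        # add the filter whose worst-case correlation with the chosen set is smallest
        best = min(candidates, key=lambda c: corr[c, selected].max())
        selected.append(best)
    return selected

# usage: 100 candidate filters sampled at 300 wavelengths, keep 16
chosen = select_filters(np.random.rand(100, 300), k=16)
```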
Abstract: The exponential growth of scientific literature requires effective management and extraction of valuable insights. While existing scientific search engines excel at delivering search results from relational databases, they often neglect the analysis of collaborations between scientific entities and the evolution of ideas, as well as in-depth analysis of the content of scientific publications. Representing heterogeneous graphs and effectively measuring, analyzing, and mining them pose significant challenges. To address these challenges, we present AceMap, an academic system designed for knowledge discovery through academic graphs. We describe advanced database construction techniques that build the comprehensive AceMap database from large-scale academic publications containing rich visual, textual, and numerical information. AceMap also employs innovative visualization, quantification, and analysis methods to explore associations and logical relationships among academic entities. AceMap introduces large-scale academic network visualization techniques centered on nebular graphs, providing a comprehensive view of academic networks from multiple perspectives. In addition, AceMap proposes a unified metric based on structural entropy to quantitatively measure the knowledge content of different academic entities. Moreover, AceMap provides advanced analysis capabilities, including tracing the evolution of academic ideas through citation relationships and concept co-occurrence, and generating concise summaries informed by this evolutionary process. Finally, AceMap uses machine reading methods to generate potential new ideas at the intersection of different fields. Exploring the integration of large language models and knowledge graphs is a promising direction for future research in idea evolution. Please visit \url{https://www.acemap.info} for further exploration.
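The abstract does not define the structural entropy metric AceMap adapts; for orientation only, the classical one-dimensional structural entropy of a graph $G=(V,E)$ with $m$ edges and node degrees $d_v$ (a standard form in the structural-information literature, not necessarily AceMap's exact metric) is
$$ \mathcal{H}^{1}(G) \;=\; -\sum_{v \in V} \frac{d_v}{2m}\,\log_2 \frac{d_v}{2m}. $$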
Abstract: Compared with CINE phase contrast MRI (CINE-PC), echo-planar imaging phase contrast (EPI-PC) can achieve real-time quantification of blood flow, at the cost of a lower SNR. In this study, a pulsatile physical model simulating the cerebral vasculature was used to verify the accuracy of EPI-PC. The EPI-PC imaging time was 62 ms per image at a 100x60 spatial resolution. The reconstructed EPI-PC flow curve was extracted with in-house post-processing software. Comparison with the CINE-PC flow curve showed that EPI-PC provides the average flow with less than 3% error and that its flow curve is similar in shape to the CINE-PC flow curve.
Abstract: How breathing interacts with the CSF is still debated. A new phase contrast MRI sequence based on echo-planar imaging (EPI-PC) can now produce a velocity map continuously over several minutes, roughly every 100 ms. We did not find in the literature any quantitative evaluation of the change in CSF stroke volume during breathing. The aim of this work is to quantify changes in CSF dynamics in the aqueduct and in the spinal canal over the breathing and cardiac cycles using EPI-PC.
Abstract: Cerebral arterial blood flow (CABF) can be investigated within a few seconds, without any synchronization, using real-time phase contrast MRI. Significant changes in CABF were found between expiration and inspiration during normal breathing in healthy volunteers. Synopsis (100/100): Real-time phase contrast MRI was applied to investigate cerebral arterial blood flow (CABF) during normal breathing in healthy volunteers. We developed a novel time-domain analysis method to quantify the effect of normal breathing on several CABF parameters. We found a delay between the respiratory signal recorded by the belt sensor and the breathing-frequency component present in the reconstructed arterial blood flows. During expiration, the mean CABF flow rate increased by 4.4$\pm$1.7%, the CABF stroke volume increased by 9.8$\pm$3.1%, and the duration of the CABF cardiac period increased by 8.1$\pm$3%.
Abstract: Flow 2.0 is an end-to-end, easy-to-use software package that allows quick, robust, and accurate batch processing of real-time phase contrast data and multivariate analysis of the effect of respiration on cerebral fluid circulation. Synopsis (99/100): Real-time phase contrast sequences (RT-PC) have potential value as a scientific and clinical tool for quantifying the effects of respiration on the cerebral circulation. To simplify their complicated post-processing, we developed the Flow 2.0 software, which provides a complete post-processing workflow including DICOM data conversion, image segmentation, image processing, data extraction, background field correction, antialiasing filtering, signal processing and analysis, and a novel time-domain method for quantifying the effect of respiration on the cerebral circulation. This end-to-end software allows quick, robust, and accurate batch processing of RT-PC data and multivariate analysis of the effects of respiration on the cerebral circulation.
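The abstract lists the workflow steps but not their implementation; a minimal, hypothetical sketch of such an RT-PC post-processing stage (function names, scaling, and the segmentation placeholder are illustrative assumptions, not Flow 2.0's API) might be organized as:

```python
import numpy as np

def process_rtpc_series(phase_images: np.ndarray, venc: float, dt: float) -> dict:
    """Hypothetical RT-PC post-processing sketch: phase maps -> velocity -> flow curve.
    phase_images: (time, height, width) phase data in radians; venc in cm/s; dt in s."""
    velocity = phase_images / np.pi * venc              # scale phase to velocity (cm/s)
    # crude background-field correction: subtract the per-frame median (static tissue)
    velocity -= np.median(velocity, axis=(1, 2), keepdims=True)
    # placeholder vessel segmentation: keep the highest mean-|velocity| pixels
    mean_speed = np.abs(velocity).mean(axis=0)
    roi = mean_speed > np.percentile(mean_speed, 99)
    pixel_area_cm2 = 0.01                                # illustrative pixel area
    flow = velocity[:, roi].sum(axis=1) * pixel_area_cm2 # flow curve in mL/s
    time = np.arange(len(flow)) * dt
    return {"time": time, "flow": flow}
```

A real pipeline would then band-pass the flow curve around the cardiac and respiratory frequencies before the time-domain analysis described above.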
Abstract: Loss functions are an essential part of modern data-driven approaches, such as bi-level training schemes and machine learning. In this paper we propose a loss function consisting of an $r$-order (an)isotropic total variation semi-norm $TV^r$, $r\in \mathbb{R}^+$, defined via the Riemann-Liouville (R-L) fractional derivative. We focus on studying key theoretical properties of such loss functions, such as lower semi-continuity and compactness with respect to both the function and the order of derivative $r$.
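The abstract does not spell out the definitions; for orientation, the standard left Riemann-Liouville fractional derivative of order $r>0$ on $(a,b)$, with $n=\lceil r \rceil$, is
$$ D^{r}_{a+}u(x) \;=\; \frac{1}{\Gamma(n-r)}\,\frac{d^{n}}{dx^{n}} \int_{a}^{x} \frac{u(t)}{(x-t)^{\,r-n+1}}\,dt, $$
and one natural fractional total variation semi-norm built from it (a common choice, not necessarily the authors' exact definition) is
$$ TV^{r}(u) \;=\; \int_{\Omega} \bigl| D^{r}_{a+} u(x) \bigr| \, dx. $$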
Abstract: Divide-and-conquer is a general strategy for dealing with large-scale problems. It is typically applied to generate ensemble instances, which potentially limits the problem size it can handle. Additionally, the data are often divided by random sampling, which may be suboptimal. To address these concerns, we propose the $DC^2$ algorithm. Instead of ensemble instances, we produce structure-preserving signature pieces to be assembled and conquered. $DC^2$ achieves the efficiency of sampling-based large-scale kernel methods while enabling parallel multicore or clustered computation. The data partition and subsequent compression are unified by recursive random projections. Empirically, dividing the data by random projections induces smaller mean squared approximation errors than conventional random sampling. The power of $DC^2$ is demonstrated by our clustering algorithm $rpfCluster^+$, which is as accurate as some of the fastest approximate spectral clustering algorithms while maintaining a running time close to that of K-means clustering. Analysis of $DC^2$ applied to spectral clustering shows that the loss in clustering accuracy due to data division and reduction is upper bounded by the data approximation error, which vanishes with recursive random projections. Owing to its easy implementation and flexibility, we expect $DC^2$ to be applicable to general large-scale learning problems.
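The abstract contrasts partitioning by recursive random projections with random sampling; as a rough illustration of the former (not the authors' $DC^2$ implementation), a recursive split that projects the data onto a random direction and cuts at the median could look like:

```python
import numpy as np

def rp_partition(X: np.ndarray, max_leaf: int, seed=None) -> list[np.ndarray]:
    """Sketch: recursively split the rows of X by random projections until each
    piece holds at most `max_leaf` points; returns index arrays for the pieces."""
    rng = np.random.default_rng(seed)

    def split(idx: np.ndarray) -> list[np.ndarray]:
        if len(idx) <= max_leaf:
            return [idx]
        w = rng.standard_normal(X.shape[1])      # random projection direction
        proj = X[idx] @ w
        cut = np.median(proj)                    # split at the median projection
        left, right = idx[proj <= cut], idx[proj > cut]
        if len(left) == 0 or len(right) == 0:    # degenerate split; stop recursing
            return [idx]
        return split(left) + split(right)

    return split(np.arange(len(X)))

# usage: partition 10,000 points in R^20 into pieces of at most 500 points
pieces = rp_partition(np.random.randn(10_000, 20), max_leaf=500, seed=0)
```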