Abstract: With the rapid development of computer vision and deep learning, significant advancements have been made in 3D vision, particularly in autonomous driving, robotic perception, and augmented reality. 3D point cloud data, as a crucial representation of 3D information, has gained widespread attention. However, the vast scale and complexity of point cloud data present significant challenges for loading and processing, and traditional algorithms struggle to handle large-scale datasets. The diversity of storage formats for point cloud datasets (e.g., PLY, XYZ, BIN) adds complexity to data handling and results in inefficiencies in data preparation. Although binary formats like BIN and NPY have been used to speed up data access, they still do not fully address the time-consuming data loading and processing phase. To overcome these challenges, we propose the .PcRecord format, a unified data storage solution designed to reduce storage occupation and accelerate the processing of point cloud data. We also introduce a high-performance data processing pipeline equipped with multiple modules. By leveraging a multi-stage parallel pipeline architecture, our system optimizes the use of computational resources, significantly improving processing speed and efficiency. This paper details the implementation of this system and demonstrates its effectiveness in addressing the challenges of handling large-scale point cloud datasets. On average, our system achieves performance improvements of 6.61x (ModelNet40), 2.69x (S3DIS), 2.23x (ShapeNet), 3.09x (KITTI), 8.07x (SUN RGB-D), and 5.67x (ScanNet) with GPU, and 6.9x, 1.88x, 1.29x, 2.28x, 25.4x, and 19.3x with Ascend.
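The abstract's two central ideas, a single binary record format and a multi-stage parallel pipeline, can be illustrated with a minimal sketch. The details of .PcRecord are not given here, so the record layout (a length-prefixed blob of packed xyz floats), the function names, and the producer/worker split below are all illustrative assumptions, not the paper's actual design.

```python
import queue
import struct
import threading

def pack_record(points):
    """Pack a list of (x, y, z) floats into one length-prefixed binary blob.
    (Illustrative stand-in for a unified record format like .PcRecord.)"""
    payload = b"".join(struct.pack("<3f", *p) for p in points)
    return struct.pack("<I", len(points)) + payload

def unpack_record(blob):
    """Inverse of pack_record: recover the list of (x, y, z) tuples."""
    (n,) = struct.unpack_from("<I", blob, 0)
    return [struct.unpack_from("<3f", blob, 4 + 12 * i) for i in range(n)]

def pipeline(clouds, num_workers=2):
    """Two-stage parallel pipeline: a producer stage packs records while
    worker stages decode them concurrently, overlapping the two phases."""
    records = queue.Queue()
    results, lock = [], threading.Lock()

    def producer():
        for cloud in clouds:
            records.put(pack_record(cloud))
        for _ in range(num_workers):
            records.put(None)  # one poison pill per worker

    def worker():
        while True:
            blob = records.get()
            if blob is None:
                break
            decoded = unpack_record(blob)
            with lock:
                results.append(decoded)

    threads = [threading.Thread(target=producer)]
    threads += [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In a real system the stages would read from disk and feed a training loop; here the queue simply demonstrates how a multi-stage design lets packing and decoding proceed in parallel. Note that worker ordering is nondeterministic.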
Abstract: Dynamic functional connectivity networks (dFCNs) based on rs-fMRI have demonstrated tremendous potential for brain function analysis and brain disease classification. Recently, studies have applied deep learning techniques (i.e., convolutional neural networks, CNNs) to dFCN classification and achieved better performance than traditional machine learning methods. Nevertheless, previous deep learning methods usually perform successive convolutional operations on the input dFCNs to obtain high-order brain network aggregation features, extracting them from each sliding window with a series of splits, which may neglect non-linear correlations among different regions and the sequential nature of the information. Thus, important high-order sequence information in dFCNs, which could further improve classification performance, is ignored in these studies. More recently, inspired by the great success of the Transformer in natural language processing and computer vision, some work has emerged on applying the Transformer to brain disease diagnosis based on rs-fMRI data. Although the Transformer is capable of capturing non-linear correlations, it does not capture local spatial feature patterns, and because of its parallel computation it struggles to model the temporal dimension even when equipped with a positional encoding technique. To address these issues, we propose a self-attention (SA) based convolutional recurrent network (SA-CRN) learning framework for brain disease classification with rs-fMRI data. The experimental results on a public dataset (i.e., ADNI) demonstrate the effectiveness of our proposed SA-CRN method.
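The self-attention component referred to above is standard scaled dot-product attention over a sequence of window features. The sketch below shows only that generic mechanism, not the paper's actual SA-CRN architecture; for brevity the learned query/key/value projections are omitted (Q = K = V = X), which is an assumption for illustration.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a (T, d) sequence X,
    with projections omitted (Q = K = V = X) for illustration."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)              # (T, T) pairwise similarities
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X                         # each output mixes all windows
```

Because every output position attends to every input position, this captures non-linear correlations across the whole sequence, which is exactly the property the abstract says convolution-only pipelines lack; the CRN part of SA-CRN would then supply the local spatial and temporal modelling.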
Abstract: Practices in the built environment have become more digitalized with the rapid development of modern design and construction technologies. However, the need of practitioners and scholars to gather complex professional knowledge in the built environment has not yet been met. In this paper, more than 80,000 paper abstracts in the built environment field were obtained to build a knowledge graph, a knowledge base that stores entities and their connective relations in a graph-structured data model. To ensure the retrieval accuracy of the entities and relations in the knowledge graph, two well-annotated datasets were created, containing 2,000 and 1,450 instances across 29 relation types for the named entity recognition task and the relation extraction task, respectively. These two tasks were solved by two BERT-based models trained on the proposed datasets. Both models attained an accuracy above 85% on these two tasks. More than 200,000 high-quality relations and entities were obtained by applying these models to all of the abstract data. Finally, this knowledge graph is presented through a self-developed visualization system that reveals relations between various entities in the domain. Both the source code and the annotated dataset can be found here: https://github.com/HKUST-KnowComp/BEKG.
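A knowledge graph of the kind described is, at its core, a store of (head entity, relation, tail entity) triples produced by the NER and relation extraction models. The minimal sketch below illustrates that data model with an adjacency-list store; the class, method names, and example triples are illustrative assumptions and do not reflect BEKG's actual schema.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store: maps each head entity to its (relation, tail) edges."""

    def __init__(self):
        self.out_edges = defaultdict(list)

    def add(self, head, relation, tail):
        """Record one extracted (head, relation, tail) triple."""
        self.out_edges[head].append((relation, tail))

    def neighbors(self, head, relation=None):
        """Return tail entities linked from `head`, optionally filtered by relation."""
        return [tail for (rel, tail) in self.out_edges[head]
                if relation is None or rel == relation]

# Hypothetical triples of the sort an extraction pipeline might emit.
kg = KnowledgeGraph()
kg.add("concrete", "used_in", "bridge construction")
kg.add("concrete", "composed_of", "cement")
```

A visualization system such as the one described would traverse exactly this kind of adjacency structure to render entities as nodes and relations as labelled edges.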