Abstract:Early detection of surgical complications allows for timely therapy and proactive risk mitigation. Machine learning (ML) can be leveraged to identify and predict patient risks for postoperative complications. We developed and validated a novel surgical Variational Autoencoder (surgVAE) that predicts postoperative complications by uncovering intrinsic patterns via cross-task and cross-cohort representation learning. This retrospective cohort study used data from the electronic health records of adult surgical patients over four years (2018-2021). Six key postoperative complications for cardiac surgery were assessed: acute kidney injury, atrial fibrillation, cardiac arrest, deep vein thrombosis or pulmonary embolism, blood transfusion, and other intraoperative cardiac events. We compared the prediction performance of surgVAE against widely used ML models and advanced representation learning and generative models under 5-fold cross-validation. 89,246 surgeries (49% male, median (IQR) age: 57 (45-69)) were included, with 6,502 in the targeted cardiac surgery cohort (61% male, median (IQR) age: 60 (53-70)). surgVAE demonstrated superior performance over existing ML solutions across all postoperative complications of cardiac surgery patients, achieving a macro-averaged AUPRC of 0.409 and a macro-averaged AUROC of 0.831, which were 3.4% and 3.7% higher, respectively, than those of the best alternative method (ranked by AUPRC). Model interpretation using Integrated Gradients highlighted key risk factors based on preoperative variable importance. surgVAE showed excellent discriminatory performance for predicting postoperative complications while addressing the challenges of data complexity, small cohort sizes, and low-frequency positive events. surgVAE enables data-driven prediction of patient risk and prognosis while enhancing the interpretability of patient risk profiles.
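Integrated Gradients, used above for model interpretation, attributes a prediction to input features by integrating the model's gradient along a straight-line path from a baseline to the input. A minimal NumPy sketch, using a toy quadratic risk model as a stand-in for surgVAE (the weights, inputs, and baseline below are purely illustrative):

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=100):
    """Approximate Integrated Gradients attributions for a scalar model.

    f_grad: function returning the gradient of the model output w.r.t. inputs.
    x: input vector; baseline: reference input (e.g. an all-zeros profile).
    """
    alphas = (np.arange(steps) + 0.5) / steps           # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)  # straight-line path
    grads = np.array([f_grad(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

# Toy "risk model": f(x) = sum(w * x^2) with analytic gradient 2*w*x.
w = np.array([0.5, 1.0, 2.0])
f = lambda x: np.sum(w * x ** 2)
f_grad = lambda x: 2.0 * w * x

x = np.array([1.0, -1.0, 0.5])
baseline = np.zeros(3)
attr = integrated_gradients(f_grad, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum())
```

The final check reflects the defining completeness property of Integrated Gradients: the per-feature attributions sum to the difference between the model output at the input and at the baseline.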
Abstract:The rapid and accurate detection of biochemical compositions in fish is a crucial real-world task that facilitates optimal utilization and extraction of high-value products in the seafood industry. Raman spectroscopy provides a promising solution for quickly and non-destructively analyzing the biochemical composition of fish by associating Raman spectra with biochemical reference data using machine learning regression models. This paper investigates different regression models to address this task and proposes a new design of Convolutional Neural Networks (CNNs) for jointly predicting water, protein, and lipid yields. To the best of our knowledge, we are the first to conduct a successful study employing CNNs to analyze the biochemical composition of fish based on a very small Raman spectroscopic dataset. Our approach combines a tailored CNN architecture with a comprehensive data preparation procedure, effectively mitigating the challenges posed by extreme data scarcity. The results demonstrate that our CNN significantly outperforms two state-of-the-art CNN models and multiple traditional machine learning models, paving the way for accurate and automated analysis of fish biochemical composition.
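As a rough illustration of the kind of model involved (not the paper's actual architecture), a 1-D convolutional layer over a spectrum, followed by pooling and a joint three-output regression head, can be sketched in NumPy. All shapes and weights here are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D cross-correlation of a spectrum with a bank of kernels."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (L-k+1, k)
    return windows @ kernels.T                                # (L-k+1, n_filters)

def tiny_spectral_cnn(spectrum, kernels, head_w, head_b):
    h = np.maximum(conv1d(spectrum, kernels), 0.0)  # ReLU feature maps
    pooled = h.mean(axis=0)                         # global average pooling
    return pooled @ head_w + head_b                 # joint water/protein/lipid outputs

spectrum = rng.standard_normal(512)          # mock Raman spectrum (512 wavenumbers)
kernels = rng.standard_normal((8, 7)) * 0.1  # 8 filters of width 7 (illustrative)
head_w = rng.standard_normal((8, 3)) * 0.1
head_b = np.zeros(3)
y = tiny_spectral_cnn(spectrum, kernels, head_w, head_b)
print(y.shape)  # (3,)
```

Joint prediction of the three yields with a shared feature extractor, as sketched here, lets correlated targets regularise each other, which matters when only a handful of training spectra exist.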
Abstract:Manual pruning of radiata pine trees presents significant safety risks due to their substantial height and the challenging terrains in which they thrive. To address these risks, this research proposes the development of a drone-based pruning system equipped with specialized pruning tools and a stereo vision camera, enabling precise detection and trimming of branches. Deep learning algorithms, including YOLO and Mask R-CNN, are employed to ensure accurate branch detection, while the Semi-Global Block Matching (SGBM) algorithm is integrated to provide reliable distance estimation. The synergy between these techniques facilitates the precise identification of branch locations and enables efficient, targeted pruning. Experimental results demonstrate that the combined implementation of YOLO and SGBM enables the drone to accurately detect branches and measure their distances from the drone. This research not only improves the safety and efficiency of pruning operations but also makes a significant contribution to the advancement of drone technology in the automation of agricultural and forestry practices, laying a foundational framework for further innovations in environmental management.
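Once SGBM supplies a disparity for a detected branch, the distance follows from standard stereo triangulation. A minimal sketch with assumed camera parameters (the focal length, baseline, and disparity below are hypothetical, not the system's calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo triangulation: distance = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 12 cm stereo baseline,
# and a 28 px disparity measured on a branch by SGBM.
print(round(depth_from_disparity(28.0, 700.0, 0.12), 3))  # 3.0 (metres)
```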
Abstract:The goal of few-shot learning is to generalize and achieve high performance on new unseen learning tasks, where each task has only a limited number of examples available. Gradient-based meta-learning attempts to address this challenge by learning how to learn new tasks, embedding inductive biases informed by prior learning experiences into the components of the learning algorithm. In this work, we build upon prior research and propose Neural Procedural Bias Meta-Learning (NPBML), a novel framework designed to meta-learn task-adaptive procedural biases. Our approach consolidates recent advancements in meta-learned initializations, optimizers, and loss functions by learning them simultaneously and adapting them to each individual task to maximize the strength of the learned inductive biases. This imbues each learning task with a unique set of procedural biases which are specifically designed and selected to attain strong learning performance in only a few gradient steps. The experimental results show that by meta-learning the procedural biases of a neural network, we can induce strong inductive biases towards a distribution of learning tasks, enabling robust learning performance across many well-established few-shot learning benchmarks.
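The inner/outer structure of gradient-based meta-learning can be illustrated on scalar quadratic tasks, where the meta-learned initialization is one simple procedural bias. This is a MAML-style toy sketch, not the NPBML framework itself; the task targets and learning rates are illustrative:

```python
def adapt(w, t, inner_lr=0.1):
    """Inner loop: one gradient step on the task loss L(w) = (w - t)^2."""
    return w - inner_lr * 2.0 * (w - t)

def maml_train(task_targets, meta_lr=0.05, inner_lr=0.1, steps=2000):
    """Outer loop: meta-learn the initialization w0 so that a single
    inner step performs well on every task (targets are illustrative)."""
    w0 = 0.0
    for _ in range(steps):
        grad = 0.0
        for t in task_targets:
            w_adapted = adapt(w0, t, inner_lr)
            # Chain rule through the inner step: d(w_adapted)/d(w0) = 1 - 2*inner_lr
            grad += 2.0 * (w_adapted - t) * (1.0 - 2.0 * inner_lr)
        w0 -= meta_lr * grad / len(task_targets)
    return w0

w0 = maml_train([1.0, 3.0])
print(round(w0, 2))  # the meta-learned init settles between the task optima
```

The outer gradient flows through the inner adaptation step, so the initialization is optimized for post-adaptation performance rather than for the tasks directly; NPBML extends this idea to optimizers and loss functions as well.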
Abstract:This study examines 5G base station antenna array performance for self-interference reduction. A line-of-sight signal channel model and a Rayleigh channel model are developed, and the corresponding channel capacity calculations are presented. This is preliminary material for the study; further results and conclusions will be presented in future work.
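The two capacity calculations referred to above can be sketched numerically: the Shannon capacity of the line-of-sight (AWGN) channel, and a Monte-Carlo estimate of the ergodic capacity under Rayleigh fading. The SNR and sample count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def awgn_capacity(snr_linear):
    """Shannon capacity of the line-of-sight (AWGN) channel, in bits/s/Hz."""
    return np.log2(1.0 + snr_linear)

def rayleigh_ergodic_capacity(snr_linear, n=200_000):
    """Monte-Carlo ergodic capacity under Rayleigh fading:
    E[log2(1 + |h|^2 * SNR)] with h ~ CN(0, 1)."""
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    return np.mean(np.log2(1.0 + np.abs(h) ** 2 * snr_linear))

snr = 10.0  # 10 dB in linear scale
print(awgn_capacity(snr))              # ≈ 3.46 bits/s/Hz
print(rayleigh_ergodic_capacity(snr))  # lower: fading costs capacity (Jensen)
```

By Jensen's inequality (log is concave and E[|h|^2] = 1), the Rayleigh ergodic capacity is always below the AWGN capacity at the same average SNR, which the Monte-Carlo estimate reproduces.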
Abstract:This paper introduces a novel approach to experimentally characterize effective human skin permittivity at sub-Terahertz (sub-THz) frequencies, specifically from $140$~to $210$~GHz, utilizing a quasi-optical measurement system. To ensure accurate measurement of the reflection coefficients of human skin, a planar, rigid, and thick reference plate made of a low-loss dielectric is utilized to flatten the human skin surface. A permittivity characterization method is proposed not only to reduce permittivity estimation deviations resulting from pressure effects on the phase displacements of the skin during measurements, but also to ensure repeatability of the measurements. In practical permittivity characterizations, the complex permittivities of the finger, palm, and arm of seven volunteers show small standard deviations across repeated measurements, while varying significantly across different skin regions and between persons. The proposed measurement system holds significant potential for future skin permittivity estimation in sub-THz bands, facilitating further studies on human-electromagnetic-wave interactions based on the measured permittivity values.
Abstract:In recent years, genetic programming (GP)-based evolutionary feature construction has achieved significant success. However, a primary challenge with evolutionary feature construction is its tendency to overfit the training data, resulting in poor generalization on unseen data. In this research, we draw inspiration from PAC-Bayesian theory and propose using sharpness-aware minimization in function space to discover symbolic features that exhibit robust performance within a smooth loss landscape in the semantic space. By optimizing sharpness in conjunction with cross-validation loss, and by designing a sharpness reduction layer, the proposed method effectively mitigates the overfitting problem of GP, especially when dealing with a limited number of instances or in the presence of label noise. Experimental results on 58 real-world regression datasets show that our approach outperforms standard GP as well as six state-of-the-art complexity measurement methods for GP in controlling overfitting. Furthermore, the ensemble version of GP with sharpness-aware minimization demonstrates superior performance compared to nine fine-tuned machine learning and symbolic regression algorithms, including XGBoost and LightGBM.
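The core sharpness-aware minimization step, shown here in its generic parameter-space form rather than the paper's function-space formulation, perturbs the parameters toward the locally worst case before taking the descent step. A NumPy sketch on a toy quadratic loss (the loss, radius, and learning rate are illustrative):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization step: ascend to the (approximate)
    worst-case neighbour within radius rho, then descend using the
    gradient evaluated there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    return w - lr * grad_fn(w + eps)

# Toy loss L(w) = ||w||^2 with gradient 2w (a stand-in for a real loss surface).
grad_fn = lambda w: 2.0 * w
w = np.array([2.0, -1.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w))  # settles near the (flat) minimum at the origin
```

Because the descent direction is taken at the perturbed point, minima whose neighbourhoods have high loss (sharp minima) are penalised, which is the mechanism the abstract leverages against overfitting.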
Abstract:This manuscript proposes a method for characterizing the complex permittivity of human finger skin based on an open-ended waveguide covered with a thin dielectric sheet at sub-terahertz frequencies. The measurement system is initially analyzed through full-wave simulations with a detailed finger model. Next, the model is simplified by replacing the finger with an infinite sheet of human skin to compute the forward electromagnetic problem underlying the permittivity characterization. Following this, a radial basis function network is trained to solve the inverse problem. Finally, the complex permittivities of finger skin are characterized for 10 volunteers. The variations in complex relative permittivity across different individuals and skin regions are analyzed at 140~GHz, revealing a maximum deviation of $\pm 0.7$ for both the real and imaginary parts. Repeated measurements at the same location on the finger demonstrate good repeatability, with a relative estimation uncertainty $<\pm 1\%$.
Abstract:This manuscript presents a novel method for characterizing the permittivities of low-loss dielectric slabs at sub-terahertz (sub-THz) frequencies, specifically above 100 GHz, using a quasi-optical system. The algorithm is introduced with detailed derivations, and the measurement sensitivity is analyzed through simulations. Subsequently, the method's validity is established via simulations, demonstrating high accuracy (error <0.1% for the loss tangent) for a 30 mm thick plate material and relatively lower accuracy (error <5% for the loss tangent) for a 6 mm thick plate material. Notably, this accuracy surpasses that of the approach presented in [1] when the same window width is used to extract signals. Furthermore, a comparison between the permittivities of plexiglass with a 30 mm thickness characterized by the proposed method and the approach in [1] reveals a maximum difference of 0.011 in the dielectric constant and 0.00071 in the loss tangent from 140 to 220 GHz. Finally, the relative complex permittivities of plexiglass at 142.86 GHz obtained by both methods are compared with the reference values provided in [2], exhibiting differences of 0.06 in the dielectric constant.
Abstract:Manifold learning techniques play a pivotal role in machine learning by revealing lower-dimensional embeddings within high-dimensional data, thus enhancing both the efficiency and interpretability of data analysis. However, a notable challenge with current manifold learning methods is their lack of explicit functional mappings, which are crucial for explainability in many real-world applications. Genetic programming (GP), known for its interpretable functional tree-based models, has emerged as a promising approach to address this challenge. Previous research leveraged multi-objective GP to balance manifold quality against embedding dimensionality, producing functional mappings across a range of embedding sizes. Yet, these mapping trees often became complex, hindering explainability. In response, in this paper we introduce Genetic Programming for Explainable Manifold Learning (GP-EMaL), a novel approach that directly penalises tree complexity. Our new method maintains high manifold quality while significantly enhancing explainability, and also allows customisation of complexity measures, such as symmetry balancing, scaling, and node complexity, catering to diverse application needs. Our experimental analysis demonstrates that GP-EMaL matches the performance of the existing approach in most cases while using simpler, smaller, and more interpretable tree structures. This advancement marks a significant step towards achieving interpretable manifold learning.
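The kind of node-level complexity penalty described above can be illustrated with a small recursive scorer over expression trees. The per-operator costs below are hypothetical, chosen only to show nonlinear operators being penalised more heavily; they are not GP-EMaL's actual weights:

```python
# Each tree node is a tuple (op, *children); terminals are plain strings.
# Hypothetical per-operator costs (illustrative, not GP-EMaL's weights):
OP_COST = {"add": 1, "sub": 1, "mul": 2, "div": 3, "sin": 4}

def complexity(node):
    """Recursively score a GP mapping tree: each terminal costs 1, each
    operator costs its table entry plus the cost of its subtrees."""
    if not isinstance(node, tuple):
        return 1
    op, *children = node
    return OP_COST[op] + sum(complexity(c) for c in children)

# (x0 * x1) + sin(x2): add(1) + mul(2) + two terminals + sin(4) + one terminal
tree = ("add", ("mul", "x0", "x1"), ("sin", "x2"))
print(complexity(tree))  # 10
```

Adding such a score as a search objective (or penalty) biases evolution toward smaller, more linear trees, which is the route to explainability the abstract describes.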