Abstract:Recent advancements in machine learning, particularly in deep learning and object detection, have significantly improved performance in various tasks, including image classification and synthesis. However, challenges persist, particularly in acquiring labeled data that accurately represents specific use cases. In this work, we propose an automatic pipeline for generating synthetic image datasets using Stable Diffusion, an image synthesis model capable of producing highly realistic images. We leverage YOLOv8 for automatic bounding box detection and quality assessment of synthesized images. Our contributions include demonstrating the feasibility of training image classifiers solely on synthetic data, automating the image generation pipeline, and describing the computational requirements for our approach. We evaluate the usability of different modes of Stable Diffusion and achieve a classification accuracy of 75%.
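As a hedged illustration of the generate-and-filter loop this abstract describes, the following Python sketch couples a Stable Diffusion pipeline with a pretrained YOLOv8 detector; the checkpoints ("runwayml/stable-diffusion-v1-5", "yolov8n.pt"), the prompt, and the confidence threshold are illustrative assumptions, not the authors' exact configuration.

import torch
from diffusers import StableDiffusionPipeline
from ultralytics import YOLO

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
detector = YOLO("yolov8n.pt")  # pretrained detector used as box annotator and quality gate

def generate_labeled_image(prompt, target_class, conf_threshold=0.5):
    """Synthesize one image and keep it only if the detector confidently
    finds the target class; returns (image, [x1, y1, x2, y2]) or None."""
    image = pipe(prompt).images[0]
    result = detector(image)[0]
    for box in result.boxes:
        if detector.names[int(box.cls)] == target_class and float(box.conf) >= conf_threshold:
            return image, box.xyxy[0].tolist()
    return None  # reject images the detector cannot verify

sample = generate_labeled_image("a photo of a golden retriever in a park", "dog")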
Abstract:Experimental exploration of high-cost systems with safety constraints, common in engineering applications, is a challenging endeavor. Data-driven models offer a promising solution, but acquiring the requisite data remains expensive and potentially unsafe. Safe active learning techniques are therefore essential, enabling the learning of high-quality models from few expensive data points while maintaining a high level of safety. This paper introduces a safe active learning framework tailored to time-varying systems, addressing drift, seasonal changes, and complexities due to dynamic behavior. The proposed Time-aware Integrated Mean Squared Prediction Error (T-IMSPE) method minimizes the posterior variance over current and future states, thereby optimizing information gathering in the time domain as well. Empirical results on toy and real-world examples highlight T-IMSPE's advantages in model quality. State-of-the-art Gaussian processes are compatible with T-IMSPE. Our theoretical contributions include a clear delineation of which Gaussian process kernels, domains, and weighting measures are suitable for T-IMSPE, and beyond that, for its non-time-aware predecessor IMSPE.
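To make the acquisition idea concrete, here is a toy NumPy sketch of a T-IMSPE-style criterion, assuming a product RBF kernel on time-augmented inputs (x, t) and a uniform weighting measure over a grid spanning the present and a short future horizon; the paper's characterization of admissible kernels and measures is not reproduced.

import numpy as np

def rbf(A, B, ls=np.array([1.0, 2.0])):
    """Product RBF kernel on (x, t) inputs with separate length scales."""
    d = (A[:, None, :] - B[None, :, :]) / ls
    return np.exp(-0.5 * (d ** 2).sum(-1))

def mean_posterior_var(train, grid, noise=1e-2):
    """Average GP posterior variance over the weighted evaluation grid."""
    K = rbf(train, train) + noise * np.eye(len(train))
    Ks = rbf(grid, train)
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, np.linalg.inv(K), Ks)
    return var.mean()  # uniform weighting measure over the grid

rng = np.random.default_rng(0)
t_now = 5.0
X = np.column_stack([rng.uniform(0, 1, 8), rng.uniform(0, t_now, 8)])  # past data
# Evaluation grid covers the present and a horizon of future times:
xs, ts = np.meshgrid(np.linspace(0, 1, 20), np.linspace(t_now, t_now + 3, 10))
grid = np.column_stack([xs.ravel(), ts.ravel()])
# Candidate measurements can only be taken at the current time t_now:
candidates = np.column_stack([np.linspace(0, 1, 30), np.full(30, t_now)])

scores = [mean_posterior_var(np.vstack([X, c[None]]), grid) for c in candidates]
x_next = candidates[int(np.argmin(scores))]  # T-IMSPE-style choice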
Abstract:In this paper, we explore the optimization of metal recycling with a focus on real-time differentiation between alloys of copper and aluminium. Spectral data, obtained through Prompt Gamma Neutron Activation Analysis (PGNAA), is utilized for classification. The study compares data from two detectors, cerium bromide (CeBr$_{3}$) and high purity germanium (HPGe), considering their energy resolution and sensitivity. We test various data generation, preprocessing, and classification methods, with the Maximum Likelihood Classifier (MLC) and the Conditional Variational Autoencoder (CVAE) yielding the best results. The study also highlights the impact of different detector types on classification accuracy, with CeBr$_{3}$ excelling at short measurement times and HPGe performing better over longer durations. The findings underline the importance of selecting the appropriate detector and methodology based on the specific application requirements.
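A minimal sketch of the maximum-likelihood classification idea, assuming Poisson counting statistics per energy bin and synthetic templates as stand-ins for real PGNAA reference spectra:

import numpy as np

def poisson_loglik(counts, template):
    """Poisson log-likelihood of observed bin counts under expected counts
    (the constant log(n!) term is dropped, as it is class-independent)."""
    lam = np.clip(template, 1e-12, None)  # avoid log(0) in empty bins
    return float(np.sum(counts * np.log(lam) - lam))

def classify(counts, templates):
    """Return the class whose expected spectrum best explains the counts."""
    scores = {name: poisson_loglik(counts, tpl) for name, tpl in templates.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
templates = {"Cu-alloy": rng.uniform(1, 10, 256), "Al-alloy": rng.uniform(1, 10, 256)}
observed = rng.poisson(templates["Cu-alloy"] * 0.1)  # short, noisy measurement
print(classify(observed, {k: v * 0.1 for k, v in templates.items()}))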
Abstract:Model selection aims to find the best model in terms of accuracy, interpretability or simplicity, preferably all at once. In this work, we focus on evaluating the model performance of Gaussian process models, i.e., finding a metric that provides the best trade-off between all those criteria. While previous work considers metrics like the likelihood, AIC or dynamic nested sampling, these either lack performance or have significant runtime issues, which severely limits their applicability. We address these challenges by introducing multiple metrics based on the Laplace approximation, where we overcome a severe inconsistency occurring in the naive application of the Laplace approximation. Experiments show that our metrics are comparable in quality to the gold standard dynamic nested sampling without compromising computational speed. Our model selection criteria enable significantly faster, high-quality model selection of Gaussian process models.
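For intuition, the following toy sketch computes a Laplace-approximation evidence estimate, $\log Z \approx \log p(y\mid\hat\theta) + \log p(\hat\theta) + \frac{d}{2}\log(2\pi) - \frac{1}{2}\log\det H$, at the MAP $\hat\theta$ of a deliberately simple stand-in posterior; the paper's GP-specific metrics and consistency fix are not reproduced here.

import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, y):
    # toy stand-in: Gaussian likelihood with unknown mean, standard normal prior
    # (data-dependent normalization constants are omitted; they cancel when
    # comparing models on the same data)
    return 0.5 * np.sum((y - theta[0]) ** 2) + 0.5 * theta[0] ** 2

def laplace_log_evidence(y, d=1, eps=1e-4):
    res = minimize(neg_log_posterior, x0=np.zeros(d), args=(y,))
    theta_map = res.x
    # finite-difference Hessian (1-D here; use a full Hessian in general)
    f = lambda t: neg_log_posterior(np.array([t]), y)
    h = (f(theta_map[0] + eps) - 2 * f(theta_map[0]) + f(theta_map[0] - eps)) / eps ** 2
    return -res.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.log(h)

y = np.random.default_rng(2).normal(1.0, 1.0, 20)
print(laplace_log_evidence(y))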
Abstract:In industrial manufacturing, numerous tasks of visually inspecting or detecting specific objects exist that are currently performed manually or by classical image processing methods. Therefore, introducing recent deep learning models to industrial environments holds the potential to increase productivity and enable new applications. However, gathering and labeling sufficient data is often intractable, complicating the implementation of such projects. Hence, image synthesis methods are commonly used to generate synthetic training data from 3D models and annotate them automatically, although this results in a sim-to-real domain gap. In this paper, we investigate the sim-to-real generalization performance of standard object detectors on the complex industrial application of terminal strip object detection. Combining domain randomization and domain knowledge, we created an image synthesis pipeline for automatically generating the training data. Moreover, we manually annotated 300 real images of terminal strips for the evaluation. The results show that it is crucial for the objects of interest to have the same scale in both domains. Nevertheless, under optimized scaling conditions, the sim-to-real performance difference in mean average precision amounts to 2.69 % for RetinaNet and 0.98 % for Faster R-CNN, qualifying this approach for industrial requirements.
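As a hedged illustration of the scale-matching lesson, this sketch pastes rendered object crops into synthetic backgrounds at pixel sizes drawn from the real domain's empirical scale distribution; the crop, background, and scale values are placeholders, not the paper's pipeline.

import numpy as np
from PIL import Image

rng = np.random.default_rng(3)

def paste_with_real_scale(background, obj_crop, real_scales_px):
    """Resize a rendered object crop to a width drawn from the empirical
    real-image scale distribution, then paste it at a random position."""
    target = int(rng.choice(real_scales_px))  # match real pixel sizes
    w, h = obj_crop.size
    obj = obj_crop.resize((target, int(target * h / w)))
    x = int(rng.integers(0, background.width - obj.width))
    y = int(rng.integers(0, background.height - obj.height))
    background.paste(obj, (x, y))
    bbox = (x, y, x + obj.width, y + obj.height)
    return background, bbox  # image plus auto-generated annotation

bg = Image.new("RGB", (640, 480), "gray")
crop = Image.new("RGB", (100, 60), "orange")  # stand-in for a 3D render
img, box = paste_with_real_scale(bg, crop, real_scales_px=[80, 120, 160])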
Abstract:Active learning of physical systems must commonly respect practical safety constraints, which restrict the exploration of the design space. Gaussian processes (GPs) and their calibrated uncertainty estimates are widely used for this purpose. In many technical applications the design space is explored via continuous trajectories, along which safety needs to be assessed. This is particularly challenging for strict safety requirements in GP methods, as they typically require computationally expensive Monte Carlo sampling of high quantiles. We address these challenges by providing provable safety bounds based on the adaptively sampled median of the supremum of the posterior GP. Our method significantly reduces the number of samples required for estimating high safety probabilities, resulting in faster evaluation without sacrificing accuracy or exploration speed. The effectiveness of our safe active learning approach is demonstrated through extensive simulations and validated on a real-world engine example.
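The following toy NumPy sketch computes the core quantity, the median of the supremum of a GP posterior along a discretized trajectory; estimating a median needs far fewer Monte Carlo samples than estimating an extreme quantile directly, but the adaptive sampling scheme and the derivation of provable bounds from this median are left to the paper.

import numpy as np

rng = np.random.default_rng(4)

def posterior_sup_median(mean, cov, n_samples=200):
    """Sample posterior paths on the trajectory points, take each path's
    supremum, and return the empirical median of those suprema."""
    paths = rng.multivariate_normal(mean, cov, size=n_samples)
    return float(np.median(paths.max(axis=1)))

# Trajectory discretized into 50 points; illustrative posterior moments:
t = np.linspace(0, 1, 50)
mean = 0.2 * np.sin(2 * np.pi * t)
cov = 0.05 * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.1 ** 2)
cov += 1e-8 * np.eye(len(t))  # jitter for numerical stability

safety_threshold = 1.0
med_sup = posterior_sup_median(mean, cov)
print("trajectory considered safe:", med_sup < safety_threshold)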
Abstract:This paper addresses the challenge of detecting anomalies in cellular networks in an interpretable way and proposes a new approach using variational autoencoders (VAEs) that learn interpretable representations of the latent space for each Key Performance Indicator (KPI) in the dataset. This enables the detection of anomalies based on reconstruction loss and Z-scores. We ensure the interpretability of the anomalies by enriching the representations with centroids (c) obtained via the K-means algorithm, which also enhances representation learning. We evaluate the performance of the model by analyzing patterns in the latent dimension for specific KPIs, thereby demonstrating both the interpretability of the representations and the detected anomalies. The proposed framework offers a fast and autonomous solution for detecting anomalies in cellular networks and showcases the potential of deep learning-based algorithms in handling big data.
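A minimal sketch of the stated detection rule: z-score the reconstruction error of each sample against errors measured on normal data and flag large deviations. The lambda below is a crude stand-in for a trained VAE's encode-decode map, which would project inputs toward the learned normal manifold.

import numpy as np

def anomaly_scores(X, reconstruct, ref_errors):
    """Z-score each sample's reconstruction error against a reference
    distribution of errors from normal training data."""
    err = np.mean((X - reconstruct(X)) ** 2, axis=1)
    return (err - ref_errors.mean()) / ref_errors.std()

rng = np.random.default_rng(5)
normal = rng.normal(0, 1, (1000, 8))        # per-KPI feature vectors
reconstruct = lambda X: np.zeros_like(X)     # stand-in: maps inputs to the normal mean
ref_errors = np.mean((normal - reconstruct(normal)) ** 2, axis=1)

test = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(4, 1, (5, 8))])
z = anomaly_scores(test, reconstruct, ref_errors)
print("anomalous sample indices:", np.where(np.abs(z) > 3)[0])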
Abstract:Partial differential equations (PDEs) are important tools to model physical systems, and building them into machine learning models is an important way of incorporating physical knowledge. Given any system of linear PDEs with constant coefficients, we propose a family of Gaussian process (GP) priors, which we call EPGP, such that all realizations are exact solutions of this system. We apply the Ehrenpreis-Palamodov fundamental principle, which works like a non-linear Fourier transform, to construct GP kernels mirroring standard spectral methods for GPs. Our approach can infer probable solutions of linear PDE systems from any data, such as noisy measurements or initial and boundary conditions. Constructing EPGP priors is algorithmic, generally applicable, and comes with a sparse version (S-EPGP) that learns the relevant spectral frequencies and works better for big data sets. We demonstrate our approach on three families of PDE systems, the heat equation, the wave equation, and Maxwell's equations, where we improve upon the state of the art in computation time and precision, in some experiments by several orders of magnitude.
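As a hedged illustration for the 1-D heat equation u_t = u_xx: the features exp(-xi^2 t) sin(xi x) and exp(-xi^2 t) cos(xi x) are exact solutions, so any linear combination fitted to data is one too. The fixed frequencies below stand in for the spectral frequencies S-EPGP would learn, and the regularized least-squares fit stands in for the GP posterior mean.

import numpy as np

rng = np.random.default_rng(6)
xi = np.linspace(0.5, 8.0, 16)  # spectral frequencies (assumed, not learned)

def features(x, t):
    """Each column solves u_t = u_xx exactly."""
    decay = np.exp(-xi[None, :] ** 2 * t[:, None])
    return np.hstack([decay * np.sin(xi[None, :] * x[:, None]),
                      decay * np.cos(xi[None, :] * x[:, None])])

# Noisy measurements of a true solution u(x, t) = exp(-4t) sin(2x):
x = rng.uniform(0, np.pi, 40)
t = rng.uniform(0, 0.5, 40)
y = np.exp(-4 * t) * np.sin(2 * x) + rng.normal(0, 0.01, 40)

Phi = features(x, t)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ y)

# The fitted function is itself an exact heat-equation solution:
xq, tq = np.full(5, 1.0), np.linspace(0, 0.5, 5)
print(features(xq, tq) @ w)            # predictions at x = 1.0
print(np.exp(-4 * tq) * np.sin(2.0))   # ground truth for comparison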
Abstract:There is a pressing market demand to minimize the test time of Prompt Gamma Neutron Activation Analysis (PGNAA) spectra measurement machines, so that they can function as instant material analyzers, e.g. to classify waste samples instantaneously and determine the best recycling method based on the detected composition of the test sample. This article introduces a new deep learning classification approach that strives to reduce the test time of the PGNAA machine. We propose both Random Sampling Methods (RSM) and Class Activation Maps (CAM) to generate "downsized" samples and train the CNN model continuously. RSM aims to reduce the measuring time within a sample, while CAM filters out the less important energy ranges of the downsized samples. We shorten the overall PGNAA measuring time to 2.5 seconds while maintaining an accuracy of around 96.88 % on our dataset of 12 different species of substances. Compared with classifying different species of materials, substances composed of the same elements require more test time (a higher sample count rate) to achieve good accuracy. For example, the classification of copper alloys requires nearly 24 seconds of test time to reach 98 % accuracy.
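A small sketch of the "downsizing" idea behind RSM under a Poisson counting assumption: a shorter measurement can be emulated from a long one by binomially thinning each energy bin, which yields cheap training samples at many virtual test times.

import numpy as np

rng = np.random.default_rng(7)

def downsize_spectrum(full_counts, time_fraction):
    """Keep each recorded count independently with probability
    time_fraction, emulating a shorter measurement window (thinning
    a Poisson spectrum gives a Poisson spectrum with scaled rate)."""
    return rng.binomial(full_counts, time_fraction)

full = rng.poisson(200.0, size=1024)       # stand-in for a long PGNAA spectrum
short = downsize_spectrum(full, 2.5 / 60)  # emulate a 2.5 s slice of a 60 s run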
Abstract:For reasons of environmental, economic and political sustainability, recycling processes are becoming increasingly important, aiming at a much higher use of secondary raw materials. Currently, no method for the non-destructive online analysis of heterogeneous materials is available to the copper and aluminium industries. Prompt Gamma Neutron Activation Analysis (PGNAA) has the potential to overcome this challenge. A difficulty when using PGNAA for real-time classification arises from the small amount of noisy data obtained from short-term measurements. In this case, classical evaluation methods based on a detailed peak-by-peak analysis fail. Therefore, we propose to view spectral data as probability distributions. We can then classify materials using the maximum log-likelihood with respect to kernel density estimates and use discrete sampling to optimize hyperparameters. For measurements of pure aluminium alloys, we achieve near-perfect classification in under 0.25 seconds.
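A compact sketch of the spectra-as-distributions idea, with synthetic event energies standing in for calibrated PGNAA data: fit a kernel density estimate per reference class and classify a short measurement by the summed log-likelihood of its events (the bandwidth here uses SciPy's default rule rather than the paper's discrete-sampling hyperparameter optimization).

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)
# Reference measurements: detected event energies per (hypothetical) alloy class
refs = {"AlMg": rng.normal(1.7, 0.2, 50_000),
        "AlSi": rng.normal(2.2, 0.2, 50_000)}
kdes = {name: gaussian_kde(events) for name, events in refs.items()}

def classify(events):
    """Maximum log-likelihood over class-wise kernel density estimates."""
    ll = {name: float(np.sum(kde.logpdf(events))) for name, kde in kdes.items()}
    return max(ll, key=ll.get)

short_measurement = rng.normal(1.7, 0.2, 300)  # few events, emulating ~0.25 s
print(classify(short_measurement))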