Abstract:We present a novel algorithm explicitly tailored to estimate motion from time series of 3D images of concrete. Such volumetric images are usually acquired by computed tomography and can capture, for example, in situ tests or more complex processes like self-healing. Our algorithm is specifically designed to tackle the challenge of large scale in situ investigations of concrete. That means it can cope not only with big images, but also with the discontinuous displacement fields that often occur in in situ tests of concrete. We show the superior performance of our algorithm, especially regarding plausibility and time efficient processing. The core of the algorithm is a novel multiscale representation based on morphological wavelets. We use two examples for validation: a classical in situ test on refractory concrete and a three point bending test on normal concrete. We show that for both applications structural changes like crack initiation can already be found at low scales -- a central achievement of our algorithm.
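To make the multiscale idea concrete, here is a minimal sketch of one level of a simple morphological (min-based) wavelet decomposition of a 3D volume. The 2x2x2 block scheme, the use of numpy, and the function name are illustrative assumptions; the paper's actual wavelet construction may differ.

```python
# Sketch only: one level of a morphological (min-) wavelet decomposition,
# assuming 2x2x2 blocks on a 3D numpy volume. Not the paper's exact transform.
import numpy as np

def morphological_wavelet_level(volume):
    """Return a blockwise-minimum approximation and the detail coefficients."""
    z, y, x = volume.shape
    v = volume[: z - z % 2, : y - y % 2, : x - x % 2]   # crop to even size
    blocks = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2, v.shape[2] // 2, 2)
    approx = blocks.min(axis=(1, 3, 5))                  # coarse scale (erosion-like)
    detail = v - np.kron(approx, np.ones((2, 2, 2), dtype=v.dtype))  # information lost
    return approx, detail

vol = np.random.rand(64, 64, 64).astype(np.float32)      # stand-in for a CT volume
approx, detail = morphological_wavelet_level(vol)         # iterate on 'approx' for a full pyramid
```

Iterating the decomposition on the approximation yields the multiscale pyramid; coarse-to-fine processing of this kind is what allows structural changes to show up already at low scales.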
Abstract:In theory, quantum computers can process data using a remarkably reduced number of qubits compared to conventional bits. However, recent experiments have indicated that retrieving an image from its quantum encoded version is currently feasible only for very small image sizes. Despite this constraint, variational quantum machine learning algorithms can still be employed in the current noisy intermediate-scale quantum (NISQ) era. An example is a hybrid quantum machine learning approach for edge detection. In our study, we present an application of quantum transfer learning for detecting cracks in gray value images. We compare the performance and training time of PennyLane's standard qubits with IBM's qasm_simulator and real backends, offering insights into their execution efficiency.
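As a rough illustration of where the quantum part sits in such a hybrid pipeline, the following sketch defines a small variational circuit in PennyLane that could act as the trainable quantum head on top of classically extracted features. The device name default.qubit, the embedding, and the layer template are assumptions for illustration, not the exact configuration used in the study.

```python
# Sketch only: a small variational circuit as the quantum head of a hybrid
# transfer-learning classifier. Configuration details are illustrative.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)   # swap for an IBM backend plugin to compare

@qml.qnode(dev)
def quantum_head(features, weights):
    qml.AngleEmbedding(features, wires=range(n_qubits))          # encode pre-extracted features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))     # trainable entangling layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]  # fed into a final classical layer

weights = np.random.uniform(0, np.pi, size=(2, n_qubits), requires_grad=True)
print(quantum_head(np.array([0.1, 0.5, 0.9, 0.3]), weights))
```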
Abstract:Scattering networks yield powerful and robust hierarchical image descriptors which do not require lengthy training and which work well with very little training data. However, they rely on sampling the scale dimension. Hence, they become sensitive to scale variations and are unable to generalize to unseen scales. In this work, we define an alternative feature representation based on the Riesz transform. We detail and analyze the mathematical foundations behind this representation. In particular, it inherits scale equivariance from the Riesz transform and completely avoids sampling of the scale dimension. Additionally, the number of features in the representation is reduced by a factor of four compared to scattering networks. Nevertheless, our representation performs comparably well for texture classification, with an interesting addition: scale equivariance. Our method yields superior performance when dealing with scales outside of those covered by the training dataset. The usefulness of the equivariance property is demonstrated on a digit classification task, where accuracy remains stable even for scales four times larger than the one chosen for training. As a second example, we consider the classification of textures.
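The basic building block of the representation is the first-order Riesz transform, which can be computed in the Fourier domain. Below is a minimal numpy sketch of this operation on a 2D image; the full feature construction of the paper (higher orders, nonlinearities, pooling) is omitted.

```python
# Sketch only: first-order Riesz transform of a 2D image in the Fourier domain,
# (R_j f)^(xi) = -i * xi_j / |xi| * f^(xi).
import numpy as np

def riesz_transform(img):
    """Return the two first-order Riesz components (R_1 f, R_2 f)."""
    f_hat = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]   # frequencies along axis 0
    v = np.fft.fftfreq(img.shape[1])[None, :]   # frequencies along axis 1
    norm = np.sqrt(u ** 2 + v ** 2)
    norm[0, 0] = 1.0                            # avoid division by zero at the DC component
    r1 = np.real(np.fft.ifft2(-1j * u / norm * f_hat))
    r2 = np.real(np.fft.ifft2(-1j * v / norm * f_hat))
    return r1, r2

r1, r2 = riesz_transform(np.random.rand(128, 128))
```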
Abstract:Scale invariance of an algorithm refers to its ability to treat objects equally independently of their size. For neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, neural networks may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, which is a scale equivariant operation. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider detecting and segmenting cracks in tomographic images of concrete. In this context, 'scale' refers to the crack thickness, which may vary strongly even within the same sample. To prove its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real tomographic images featuring a wide range of crack widths. An additional experiment is carried out on the MNIST Large Scale data set.
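For reference, the standard definition of the Riesz transform and the scale equivariance property it provides (notation here is generic, not necessarily the paper's):

```latex
\widehat{R_j f}(\xi) = -\,i\,\frac{\xi_j}{\lVert \xi \rVert}\,\widehat{f}(\xi),
\qquad j = 1, \dots, d,
\qquad\text{and for the dilation } f_s(x) := f(x/s),\ s > 0:\quad
R_j f_s = (R_j f)_s .
```

Because the transform commutes with rescaling, a network built from Riesz operations responds to a rescaled input with a correspondingly rescaled output, which is the mechanism behind generalization to unseen crack widths.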
Abstract:Elongated anisotropic Gaussian filters are used for the orientation estimation of fibers. For computed tomography images that are noisy, coarsely resolved, and of low contrast, they are the method of choice, even though they are efficient only in virtual 2D slices. However, minor inaccuracies in the anisotropic Gaussian filters can carry over to the orientation estimation. Therefore, we propose a modified algorithm for 2D anisotropic Gaussian filters and show that it improves their precision. Applied to synthetic images of fiber bundles, the modified filter is more accurate and more robust to noise. Finally, we demonstrate the effectiveness of our approach by applying it to real-world images of sheet molding compounds.
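For orientation, a plain (unmodified) oriented anisotropic Gaussian filter can be sketched as follows: a Gaussian elongated along a chosen direction is sampled on a grid and convolved with the image. Parameter names and the direct convolution are illustrative; the paper's contribution is precisely a modified, more accurate variant of such filters.

```python
# Sketch only: a basic oriented anisotropic 2D Gaussian filter by direct
# sampling and convolution. The paper proposes a more precise variant.
import numpy as np
from scipy.ndimage import convolve

def anisotropic_gaussian_kernel(sigma_long, sigma_short, theta, radius):
    """Gaussian kernel elongated along direction theta (in radians)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    xr = np.cos(theta) * x + np.sin(theta) * y    # coordinates along the long axis
    yr = -np.sin(theta) * x + np.cos(theta) * y   # coordinates along the short axis
    k = np.exp(-0.5 * ((xr / sigma_long) ** 2 + (yr / sigma_short) ** 2))
    return k / k.sum()

img = np.random.rand(256, 256)                     # stand-in for a 2D slice of a CT image
kernel = anisotropic_gaussian_kernel(6.0, 1.5, np.pi / 6, radius=15)
response = convolve(img, kernel, mode="nearest")   # strong response where fibers align with theta
```

Evaluating such filters for a bank of directions and taking the angle of maximal response per pixel is the usual route to a local orientation estimate.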
Abstract:The homogeneity of filter media is important for material selection and quality control, along with the specific weight (nominal grammage) and the distribution of the local weight. Cloudiness or formation is a concept used to describe deviations from homogeneity in filter media. We suggest deriving the cloudiness index from the power spectrum of the relative local areal weight, integrated over a selected frequency range. The power spectrum captures the energy density in a broad spectral range. Moreover, under certain conditions, the structure of a nonwoven is fully characterized by the areal weight, the variance of the local areal weight, and the power spectrum. Consequently, the power spectrum is the quantity that exclusively reflects the cloudiness. Here, we address questions arising from practical application. The most prominent is the choice of the spectral band. It certainly depends on the characteristic "size of the clouds", but is limited by the size and lateral resolution of the images. We show that the cloudiness index based on the power spectrum of the relative local areal weight is theoretically well founded and can be robustly measured from image data. Choosing the spectral band appropriately makes it possible to capture the cloudiness that is either visually perceived or found to be decisive for product properties. The index is thus well suited as the basis of a technical standard.
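A minimal sketch of such an index: compute the power spectrum of the relative local areal weight and integrate it over a chosen frequency band. The band limits, the normalization, and the random stand-in data are assumptions for illustration only.

```python
# Sketch only: cloudiness index as the power spectrum of the relative local
# areal weight integrated over a spectral band. Normalization is illustrative.
import numpy as np

def cloudiness_index(weight_map, f_low, f_high, pixel_size=1.0):
    rel = weight_map / weight_map.mean() - 1.0                # relative fluctuation around the mean
    spec = np.abs(np.fft.fftshift(np.fft.fft2(rel))) ** 2 / rel.size
    fy = np.fft.fftshift(np.fft.fftfreq(rel.shape[0], d=pixel_size))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(rel.shape[1], d=pixel_size))[None, :]
    band = (np.hypot(fx, fy) >= f_low) & (np.hypot(fx, fy) <= f_high)  # chosen "cloud size" band
    return spec[band].sum()

weight = 1.0 + 0.1 * np.random.randn(512, 512)                 # stand-in for a local areal weight image
print(cloudiness_index(weight, f_low=0.01, f_high=0.1))
```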
Abstract:Edges are image locations where the gray value intensity changes suddenly. They are among the most important features for understanding and segmenting an image. Edge detection is a standard task in digital image processing, solved for example using filtering techniques. However, the amount of data to be processed grows rapidly and pushes even supercomputers to their limits. Quantum computing promises exponentially lower memory usage in terms of the number of qubits compared to the number of classical bits. In this paper, we propose a hybrid method for quantum edge detection based on the idea of a quantum artificial neuron. Our method can be practically implemented on quantum computers, especially on those of the current noisy intermediate-scale quantum era. We compare six variants of the method to reduce the number of circuits and thus the time required for quantum edge detection. Taking advantage of the scalability of our method, we can practically detect edges in images considerably larger than previously possible.
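For contrast with the quantum approach, here is a minimal sketch of the classical filtering baseline mentioned above: gradient-based edge detection with Sobel filters. This only illustrates the classical task; it is not the paper's quantum neuron method, and the threshold value is an arbitrary choice.

```python
# Sketch only: classical gradient-based edge detection as a baseline.
import numpy as np
from scipy import ndimage

def sobel_edges(img, threshold=0.1):
    gx = ndimage.sobel(img, axis=1)       # horizontal gray value changes
    gy = ndimage.sobel(img, axis=0)       # vertical gray value changes
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max()
    return magnitude > threshold           # binary edge map

edges = sobel_edges(np.random.rand(64, 64))
```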
Abstract:Cloudiness or formation is a concept routinely used in industry to address deviations from homogeneity in nonwovens and papers. Measuring a cloudiness index based on image data is a common task in industrial quality assurance. The two most popular ways of quantifying cloudiness are based on the power spectrum or the correlation function on the one hand, and the Laplacian pyramid on the other. Here, we recall the mathematical basis of the first approach comprehensively, derive a cloudiness index, and demonstrate its practical estimation. We prove that the Laplacian pyramid, as well as other quantities characterizing cloudiness such as the range of interaction and the intensity of small-angle scattering, is very closely related to the power spectrum. Finally, we show that the power spectrum can easily be measured by image analysis and carries more information than the alternatives.
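As a complement to the spectral-band index sketched above, the Laplacian pyramid alternative can be sketched as follows: each band-pass level of the pyramid contributes a variance, giving a scale-resolved cloudiness measure. The level count, downscale factor, and normalization are illustrative assumptions.

```python
# Sketch only: Laplacian-pyramid-based cloudiness, one variance per band-pass level.
import numpy as np
from skimage.transform import pyramid_laplacian

def laplacian_cloudiness(weight_map, max_layer=4):
    rel = weight_map / weight_map.mean()                      # relative local areal weight
    levels = pyramid_laplacian(rel, max_layer=max_layer, downscale=2)
    return [float(np.var(level)) for level in levels]         # one value per frequency band

weight = 1.0 + 0.1 * np.random.randn(512, 512)
print(laplacian_cloudiness(weight))
```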
Abstract:Concrete is the standard construction material for buildings, bridges, and roads. As safety plays a central role in the design, monitoring, and maintenance of such constructions, it is important to understand the cracking behavior of concrete. Computed tomography captures the microstructure of building materials and enables the study of crack initiation and propagation. Manual segmentation of crack surfaces in large 3d images is not feasible. In this paper, automatic crack segmentation methods for 3d images are reviewed and compared. Classical image processing methods (edge detection filters, template matching, minimal path and region growing algorithms) and learning methods (convolutional neural networks, random forests) are considered and tested on semi-synthetic 3d images. Their performance strongly depends on parameter selection, which should be adapted to the gray value distribution of the images and the geometric properties of the concrete. In general, the learning methods perform best, in particular for thin cracks and low gray value contrast.
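To give a flavor of the classical, filter-based family, here is a minimal sketch using a Hessian-based ridge filter from scikit-image to enhance thin dark structures in a 3d image, followed by a global threshold. The specific filter (Frangi), parameters, and threshold are illustrative assumptions and not necessarily among the exact methods compared in the paper.

```python
# Sketch only: Hessian-based ridge enhancement plus thresholding as a simple
# classical crack segmentation candidate. Parameters are illustrative.
import numpy as np
from skimage.filters import frangi

def crack_candidates(volume, sigmas=(1, 2, 3), threshold=0.05):
    # cracks appear as thin dark (low gray value) structures -> black_ridges=True
    ridge = frangi(volume, sigmas=sigmas, black_ridges=True)
    return ridge > threshold

vol = np.random.rand(64, 64, 64)           # stand-in for a semi-synthetic CT image of concrete
mask = crack_candidates(vol)
```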
Abstract:In image processing, the amount of data to be processed grows rapidly, in particular when imaging methods yield images of more than two dimensions or time series of images. Thus, efficient processing is a challenge, as data sizes may push even supercomputers to their limits. Quantum image processing promises to encode images with a number of qubits that grows only logarithmically with the number of classical pixels. In theory, this is huge progress, but so far not many experiments have been conducted in practice, in particular on real backends. Often, the precise conversion of classical data to quantum states, the exact implementation, and the interpretation of the measurements in the classical context are challenging. We investigate these practical questions in this paper. In particular, we study the feasibility of the Flexible Representation of Quantum Images (FRQI). Furthermore, we check experimentally what the limit is in the current noisy intermediate-scale quantum era, i.e., up to which size an image can be encoded, both on simulators and on real backends. Finally, we propose a method for simplifying the circuits needed for the FRQI. With our modification, the number of gates needed, especially of the error-prone controlled-NOT gates, is reduced. As a consequence, the size of manageable images increases.
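For concreteness, the textbook FRQI preparation for a 2x2 gray value image can be written in Qiskit as follows: two position qubits in uniform superposition and one color qubit rotated by an angle proportional to each pixel's gray value. This is the unsimplified construction; the circuit reductions proposed in the paper start from circuits of this kind. The example pixel values are arbitrary.

```python
# Sketch only: FRQI encoding of a 2x2 image (2 position qubits + 1 color qubit).
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import RYGate

pixels = np.array([0.0, 0.25, 0.5, 1.0])     # gray values in [0, 1], row-major 2x2 image
thetas = pixels * np.pi / 2                   # map gray values to angles in [0, pi/2]

qc = QuantumCircuit(3)                        # qubits 0, 1: position; qubit 2: color
qc.h([0, 1])                                  # uniform superposition over pixel positions
for i, theta in enumerate(thetas):
    # rotate the color qubit, controlled on the position qubits being in state |i>
    # (pixel-index-to-basis-state mapping follows Qiskit's little-endian convention)
    qc.append(RYGate(2 * theta).control(2, ctrl_state=i), [0, 1, 2])

print(qc.draw())
```

Each multi-controlled rotation decomposes into several CNOTs, which illustrates why reducing the number of controlled-NOT gates directly affects the manageable image size.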