Abstract: Implementation of a fast, robust, and fully-automated pipeline for crystal structure determination and underlying strain mapping of crystalline materials is important for many technological applications. Scanning electron nanodiffraction offers a route to measuring strain maps with good accuracy and high spatial resolution. However, the technique is limited, particularly in thick samples, where multiple scattering of the electron beam introduces strong nonlinearities into the diffraction signal. Deep learning methods have the potential to invert these complex signals, but previous implementations are often trained only on specific crystal systems or a small subset of the crystal structure and microscope parameter phase space. In this study, we implement a Fourier-space, complex-valued deep neural network, FCU-Net, to invert highly nonlinear electron diffraction patterns into the corresponding quantitative structure factor images. We trained FCU-Net on over 200,000 unique simulated dynamical diffraction patterns spanning many different combinations of crystal structures, orientations, thicknesses, microscope parameters, and common experimental artifacts. We evaluated the trained FCU-Net model against simulated and experimental 4D-STEM diffraction datasets, where it substantially outperforms conventional analysis methods. Our simulated diffraction pattern library, implementation of FCU-Net, and trained model weights are freely available in open-source repositories, and can be adapted to many different diffraction measurement problems.
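The core architectural idea named in this abstract, a network that operates on complex-valued data in Fourier space, can be illustrated with a minimal sketch. The code below is not the authors' FCU-Net implementation; it only shows one common way to build a complex-valued convolution from two real-valued convolutions and apply it to a Fourier-transformed diffraction pattern. All layer sizes, names, and the toy input are illustrative assumptions.

```python
# Minimal sketch of a complex-valued convolution block applied in Fourier space
# (assumed PyTorch workflow; not the published FCU-Net code).
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real Conv2d layers:
    (a + ib) * (Wr + iWi) = (a*Wr - b*Wi) + i(a*Wi + b*Wr)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x):
        # x is a complex tensor; operate on real and imaginary parts separately
        real = self.conv_r(x.real) - self.conv_i(x.imag)
        imag = self.conv_i(x.real) + self.conv_r(x.imag)
        return torch.complex(real, imag)

# Toy example: move one diffraction pattern into Fourier space and apply the block
pattern = torch.rand(1, 1, 128, 128)        # placeholder for a simulated CBED intensity
fourier = torch.fft.fft2(pattern)           # complex-valued Fourier representation
block = ComplexConv2d(1, 8)
features = block(fourier)
print(features.shape)                       # torch.Size([1, 8, 128, 128])
```

In a U-Net-style model, blocks of this kind would replace the usual real-valued convolutions in the encoder and decoder paths; the sketch shows only the elementary building block.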
Abstract: Three-dimensional electron tomography is used to understand the structure and properties of samples in chemistry, materials science, geoscience, and biology. With the recent development of high-resolution detectors and algorithms that can account for multiple-scattering events, thicker samples can be examined at finer resolution, resulting in larger reconstruction volumes than previously possible. In this work, we propose a distributed computing framework that reconstructs large volumes by decomposing a projected tilt-series into smaller datasets, such that sub-volumes can be reconstructed simultaneously on separate compute nodes of a cluster. We demonstrate our method by reconstructing a multiple-scattering layered clay (montmorillonite) sample at high resolution from a large field-of-view phase contrast transmission electron microscopy tilt-series dataset.
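The decomposition described here can be sketched for the simplest single-axis, parallel-beam geometry, where rows perpendicular to the tilt axis reconstruct independently and can therefore be assigned to separate workers. The code below is an assumption of the general idea, not the authors' framework: a local process pool and scikit-image's filtered back-projection stand in for the cluster scheduler and the multiple-scattering-aware solver.

```python
# Minimal sketch: split a tilt-series into slabs along the tilt-axis direction and
# reconstruct each sub-volume independently (illustrative assumptions throughout).
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from skimage.transform import iradon   # requires a recent scikit-image

def reconstruct_slab(args):
    slab, angles = args                 # slab: (n_tilts, rows_in_slab, n_cols)
    # Each row perpendicular to the tilt axis is an independent sinogram.
    return np.stack([
        iradon(slab[:, r, :].T, theta=angles, filter_name="ramp")
        for r in range(slab.shape[1])
    ])

def reconstruct_distributed(tilt_series, angles, n_workers=4):
    # Decompose the projected tilt-series into smaller datasets...
    slabs = np.array_split(tilt_series, n_workers, axis=1)
    # ...and reconstruct the sub-volumes simultaneously on separate workers.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        sub_volumes = list(pool.map(reconstruct_slab, [(s, angles) for s in slabs]))
    return np.concatenate(sub_volumes, axis=0)   # reassemble the full volume

if __name__ == "__main__":
    angles = np.linspace(-60, 60, 61)            # tilt angles in degrees (placeholder)
    tilt_series = np.random.rand(61, 64, 128).astype(np.float32)
    volume = reconstruct_distributed(tilt_series, angles)
    print(volume.shape)                          # (64, 128, 128)
```

On a cluster, each slab would be sent to a different node rather than a local process, and the per-slab solver would be replaced by one that models multiple scattering; the decomposition and reassembly steps are the part this sketch is meant to convey.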
Abstract: Phase contrast transmission electron microscopy (TEM) is a powerful tool for imaging the local atomic structure of materials. Because of its high dose efficiency, TEM has been used heavily in studies of defect structures in 2D materials such as monolayer graphene. However, phase contrast imaging can produce complex nonlinear contrast, even for weakly-scattering samples. It is therefore difficult to develop fully-automated analysis routines for phase contrast TEM studies using conventional image processing tools. For automated analysis of large sample regions of graphene, one of the key problems is segmentation of the structure of interest from unwanted structures such as surface contaminant layers. In this study, we compare the performance of a conventional Bragg filtering method with a deep learning routine based on the U-Net architecture. We show that the deep learning method is more general, simpler to apply in practice, and produces more accurate and robust results than the conventional algorithm. We provide easily-adaptable source code for all results in this paper, and discuss potential applications for deep learning in fully-automated TEM image analysis.
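The conventional baseline named in this abstract, Bragg filtering, can be outlined with a short sketch: keep only the signal near the lattice reflections in Fourier space, back-transform, and threshold the amplitude to separate clean lattice from contamination. The code below is my assumption of this general approach, not the paper's exact routine; the annulus position, radius, and threshold are placeholders that would depend on the sampling and the graphene lattice constant.

```python
# Minimal sketch of Bragg filtering for lattice/contamination segmentation
# (illustrative parameters; not the paper's implementation).
import numpy as np

def bragg_filter_mask(image, q_center=0.2, q_width=0.05, threshold=0.5):
    """Return a boolean mask of 'lattice-like' pixels from a phase contrast image."""
    ny, nx = image.shape
    fft = np.fft.fftshift(np.fft.fft2(image))
    qy, qx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    q = np.sqrt(qx**2 + qy**2)
    # Annular mask around the first-order reflections (radius assumed, in cycles/pixel)
    annulus = (q > q_center) & (q < q_center + q_width)
    filtered = np.fft.ifft2(np.fft.ifftshift(fft * annulus))
    amplitude = np.abs(filtered)
    amplitude /= amplitude.max() + 1e-12
    return amplitude > threshold    # True where the lattice signal is strong

# Toy usage on a random image (a real workflow would load an experimental micrograph)
mask = bragg_filter_mask(np.random.rand(256, 256))
print(mask.shape, mask.dtype)       # (256, 256) bool
```

The U-Net alternative compared in the paper replaces this hand-tuned frequency mask and threshold with a learned pixel-wise classifier, which is why it generalizes better across imaging conditions.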