Abstract: One of the most significant challenges in statistical signal processing and machine learning is obtaining a generative model that can produce samples from large-scale data distributions, such as images and speech. The Generative Adversarial Network (GAN) is an effective method to address this problem. GANs provide a way to learn deep representations without extensive labeled training data. This approach has attracted the attention of many researchers in computer vision because it can generate large amounts of data without precise modeling of the probability density function (PDF). In a GAN, the generative model is estimated via a competitive process in which the generator and discriminator networks are trained simultaneously: the generator learns to produce plausible data, while the discriminator learns to distinguish fake data created by the generator from real data samples. Given the rapid growth of GANs over the last few years and their application in various fields, it is necessary to study these networks carefully. In this paper, after introducing the main concepts and theory of GANs, two new deep generative models are compared, and the evaluation metrics used in the literature and the challenges of GANs are explained. Moreover, the most remarkable GAN architectures are categorized and discussed. Finally, the essential applications in computer vision are examined.
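The adversarial process described above can be summarized in a short sketch. The following minimal PyTorch training loop is illustrative only; the network sizes, optimizers, and hyperparameters are assumptions for the example and do not correspond to any specific architecture surveyed in the paper. The discriminator is updated to separate real samples from generated ones, and the generator is updated to fool it.

```python
# Minimal, illustrative GAN training step (assumed toy architecture and hyperparameters).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical sizes, e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Discriminator: learn to separate real samples from generated ones.
    z = torch.randn(batch_size, latent_dim)
    fake_batch = generator(z).detach()
    loss_d = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: learn to produce samples the discriminator labels as real.
    z = torch.randn(batch_size, latent_dim)
    loss_g = bce(discriminator(generator(z)), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```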
Abstract: The diagnosis of cancer is performed mainly by pathologists' visual analysis, through examining the morphology of tissue slices and the spatial arrangement of cells. If the microscopic image of a specimen is not stained, it appears colorless and textureless; chemical staining is therefore required to create contrast and help identify specific tissue components. Because of differences in chemicals, scanners, section thicknesses, and laboratory protocols during tissue preparation, similar tissues often vary significantly in appearance. This variability in staining, in addition to interpretive disparity among pathologists, is one of the main challenges in designing robust and flexible systems for automated analysis. To address staining color variation, several stain-normalization methods have been proposed. In our proposed method, a Stain-to-Stain Translation (STST) approach is used for stain normalization of Hematoxylin and Eosin (H&E)-stained histopathology images; it learns not only the specific color distribution but also preserves the corresponding histopathological pattern. We perform the translation process based on the pix2pix framework, which uses conditional generative adversarial networks (cGANs). Our approach showed excellent results, both quantitatively and qualitatively, against state-of-the-art methods. We have made the source code publicly available.
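As a rough illustration of the pix2pix-style conditional GAN objective underlying this kind of stain-to-stain translation, the sketch below combines an adversarial term with an L1 term on paired images. Here `G` (e.g., a U-Net generator) and `D` (e.g., a PatchGAN discriminator) are assumed to be defined elsewhere, and `lambda_l1` is the weight commonly used with pix2pix; this is a sketch of the general objective, not the exact training code released with the paper.

```python
# Illustrative pix2pix-style cGAN losses for translating one stain appearance
# to another; G and D are assumed to exist (e.g., U-Net and PatchGAN).
import torch
import torch.nn as nn

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # L1 weight commonly used with pix2pix (assumed here)

def generator_loss(G, D, source_stain, target_stain):
    """Adversarial term plus an L1 term that pulls the translated image toward
    the target stain while keeping the underlying tissue structure."""
    fake_target = G(source_stain)
    pred_fake = D(torch.cat([source_stain, fake_target], dim=1))
    adv = adv_loss(pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_l1 * l1_loss(fake_target, target_stain), fake_target

def discriminator_loss(D, source_stain, target_stain, fake_target):
    """As in pix2pix, the discriminator judges (input, output) image pairs."""
    pred_real = D(torch.cat([source_stain, target_stain], dim=1))
    pred_fake = D(torch.cat([source_stain, fake_target.detach()], dim=1))
    return 0.5 * (adv_loss(pred_real, torch.ones_like(pred_real)) +
                  adv_loss(pred_fake, torch.zeros_like(pred_fake)))
```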
Abstract: Shape is one of the main features in content-based image retrieval (CBIR). This paper proposes a new shape signature. In this technique, the features of each shape are extracted based on the four sides of the rectangle that covers the shape. The proposed technique is Fourier-based and is invariant to translation, scaling, and rotation. The retrieval performance of several commonly used Fourier-based signatures and the proposed four-sides-distance (FSD) signature has been tested on the MPEG-7 database. Experimental results show that the FSD signature performs better than those signatures.
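For context, the sketch below shows how a generic Fourier-based shape signature is typically made invariant to translation, scaling, and rotation. It uses the common centroid-distance signature purely as a stand-in, since the exact construction of the four-sides-distance features is not detailed in the abstract; the FSD signature would replace the centroid-distance step with distances derived from the four sides of the bounding rectangle.

```python
# Illustrative Fourier-based shape descriptor and retrieval (not the FSD itself).
import numpy as np

def fourier_descriptor(boundary_xy, n_coeffs=32):
    """boundary_xy: (N, 2) array of ordered boundary points of one shape."""
    pts = np.asarray(boundary_xy, dtype=float)
    centroid = pts.mean(axis=0)
    # Centroid distance is unaffected by translation and rotation of the shape.
    signature = np.linalg.norm(pts - centroid, axis=1)
    # FFT magnitudes remove dependence on the boundary starting point.
    spectrum = np.abs(np.fft.fft(signature))
    # Dividing by the DC component normalizes the overall scale.
    return spectrum[1:n_coeffs + 1] / spectrum[0]

def retrieve(query_descriptor, database_descriptors):
    """Rank database shapes by Euclidean distance to the query descriptor."""
    dists = [np.linalg.norm(query_descriptor - d) for d in database_descriptors]
    return np.argsort(dists)
```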