Abstract: Wireless capsule endoscopy is one of the most advanced non-invasive methods for the examination of the gastrointestinal tract. An intelligent computer-aided diagnostic system for detecting gastrointestinal abnormalities such as polyps, bleeding, and inflammation is highly exigent in wireless capsule endoscopy image analysis. Abnormalities differ greatly in shape, size, color, and texture, and some appear visually similar to normal regions, which poses a challenge in designing a binary classifier owing to intra-class variation. In this study, a hybrid convolutional neural network is proposed for abnormality detection that extracts a rich pool of meaningful features from wireless capsule endoscopy images using a variety of convolution operations. It consists of three parallel convolutional neural networks, each with a distinctive feature learning capability. The first network utilizes depthwise separable convolution, while the second employs a cosine-normalized convolution operation. A novel meta-feature extraction mechanism is introduced in the third network to extract patterns from statistical information computed over the features generated by the first and second networks and by its own previous layer. The network trio effectively handles intra-class variance and efficiently detects gastrointestinal abnormalities. The proposed hybrid convolutional neural network model is trained and tested on two widely used publicly available datasets. The test results demonstrate that the proposed model outperforms six state-of-the-art methods, with 97% and 98% classification accuracy on the KID and Kvasir-Capsule datasets, respectively. Cross-dataset evaluation results also demonstrate the generalization performance of the proposed model.
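The two specialized convolution types named above can be illustrated with a short sketch. This is not the authors' exact architecture (the meta-feature branch and the layer configuration are not reproduced here); it is a minimal PyTorch rendering, under stated assumptions, of a depthwise separable convolution block and a cosine-normalized convolution in which each response is the cosine similarity between an L2-normalized input patch and an L2-normalized kernel.

```python
# Illustrative sketch only, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class CosineNormConv(nn.Module):
    """Convolution whose response is the cosine similarity between each
    L2-normalized input patch and an L2-normalized kernel."""
    def __init__(self, in_ch, out_ch, k=3, eps=1e-8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.k, self.eps = k, eps

    def forward(self, x):
        # Normalize each kernel to unit L2 norm.
        w = self.weight / (self.weight.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + self.eps)
        # Patch-wise L2 norm of the input, computed with an all-ones kernel.
        ones = torch.ones(1, x.size(1), self.k, self.k, device=x.device)
        patch_norm = torch.sqrt(F.conv2d(x * x, ones, padding=self.k // 2) + self.eps)
        return F.conv2d(x, w, padding=self.k // 2) / patch_norm
```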
Abstract: Thermal infrared (IR) images represent the heat patterns emitted from hot objects and do not consider the energy reflected from an object. Objects, living or non-living, emit different amounts of IR energy according to their body temperature and characteristics. Humans are homeothermic and hence capable of maintaining a nearly constant body temperature under different surrounding temperatures. Face recognition from thermal IR images should therefore focus on the changes of temperature along facial blood vessels. These temperature changes can be regarded as texture features of the images, and the wavelet transform is a very good tool for analyzing multi-scale and multi-directional texture. The wavelet transform is also used for image dimensionality reduction, removing redundancies while preserving the original features of the image. Since facial images are normally large, the wavelet transform is applied before image similarity is measured. This paper therefore describes an efficient approach to human face recognition from thermal IR images based on the wavelet transform. The system consists of three steps. In the first step, the human thermal IR face image is preprocessed and only the face region is cropped from the entire image. Secondly, the Haar wavelet is used to extract the low-frequency band from the cropped face region. Lastly, the test images are classified against the training images based on these low-frequency components. The proposed approach is tested on a number of human thermal infrared face images created at our own laboratory and on the Terravic Facial IR Database. Experimental results indicate that thermal infrared face images can be recognized effectively by the proposed system. A maximum recognition accuracy of 95% has been achieved.
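A minimal sketch of the wavelet step and a matching rule is given below, assuming PyWavelets, grey-scale face crops resized to a common size, and a simple nearest-neighbour distance as the classifier (the abstract does not fix the exact matching rule).

```python
# Minimal sketch under the assumptions stated above.
import numpy as np
import pywt

def ll_band(face, levels=2):
    """Return the low-frequency (LL) sub-band of a cropped face image."""
    coeffs = face.astype(np.float64)
    for _ in range(levels):
        coeffs, _ = pywt.dwt2(coeffs, 'haar')  # keep LL, discard LH/HL/HH
    return coeffs

def classify(test_face, train_faces, train_labels):
    """Assign the label of the training face whose LL band is closest."""
    t = ll_band(test_face).ravel()
    dists = [np.linalg.norm(t - ll_band(f).ravel()) for f in train_faces]
    return train_labels[int(np.argmin(dists))]
```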
Abstract: In this paper, a thermal infrared face recognition system for human identification and verification using blood perfusion data and a backpropagation feed-forward neural network is proposed. The system consists of three steps. In the first step, the face region is cropped from the 24-bit colour input images. Secondly, face features are extracted from the cropped region; these are taken as the input of the backpropagation feed-forward neural network in the third step, where classification and recognition are carried out. The proposed approach is tested on a number of human thermal infrared face images created at our own laboratory. Experimental results reveal a high degree of performance.
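As a hedged sketch of the classification stage only, a feed-forward network trained with backpropagation can be set up as below; `extract_features` is a hypothetical placeholder for the blood-perfusion feature extraction applied to the cropped face region, and the layer sizes are illustrative, not those of the paper.

```python
# Sketch of the classifier stage only, using scikit-learn's MLPClassifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_recognizer(cropped_faces, labels, extract_features):
    # Build the feature matrix from the cropped face regions.
    X = np.array([extract_features(f) for f in cropped_faces])
    # Feed-forward network trained with backpropagation; sizes are illustrative.
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000)
    clf.fit(X, labels)
    return clf
```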
Abstract: Thermal infrared (IR) images focus on changes of temperature distribution over facial muscles and blood vessels. These temperature changes can be regarded as texture features of the images. A comparative study of face recognition methods working in the thermal spectrum is carried out in this paper, in which two local-matching methods based on the Haar wavelet transform and the Local Binary Pattern (LBP) are analyzed. The wavelet transform is a good tool for analyzing multi-scale, multi-directional changes of texture, while local binary patterns are a type of feature widely used for classification in computer vision. Firstly, the human thermal IR face image is preprocessed and only the face region is cropped from the entire image. Secondly, two different approaches are used to extract features from the cropped face region. In the first approach, the training and test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH band sub-images are created for each face image. A total confidence matrix is then formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band. For LBP feature extraction, each face image in the training and test datasets is divided into 161 sub-images, each of size 8×8 pixels. For each such sub-image, LBP features are extracted and concatenated in a row-wise manner. PCA is performed separately on each feature set for dimensionality reduction. Finally, two different classifiers are used to classify the face images: a multi-layer feed-forward neural network and a minimum distance classifier. Experiments have been performed on the database created at our own laboratory and on the Terravic Facial IR Database.
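The LBP branch can be sketched as follows, assuming scikit-image and scikit-learn; the per-block histogram binning, the block traversal, and the PCA component count are illustrative choices and do not reproduce the exact 161-block layout of the paper.

```python
# Illustrative sketch of block-wise LBP features followed by PCA.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_block_features(face, block=8, n_bins=256):
    """Concatenate LBP histograms computed over non-overlapping 8x8 blocks."""
    h, w = face.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            codes = local_binary_pattern(face[r:r + block, c:c + block], P=8, R=1)
            hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
            feats.append(hist)
    return np.concatenate(feats).astype(np.float64)

def reduce_features(train_feats, n_components=50):
    """Fit PCA on the stacked training features; component count is illustrative."""
    pca = PCA(n_components=n_components)
    return pca, pca.fit_transform(train_feats)
```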
Abstract: In this paper, an efficient approach to human face recognition based on minutiae points in thermal face images is proposed. The thermogram of the human face is captured by a thermal infrared camera. Image processing methods are used to pre-process the captured thermogram, from which different physiological features based on blood perfusion data are extracted. Blood perfusion data are related to the distribution of blood vessels under the facial skin. In the present work, three different methods have been used to obtain the blood perfusion image: bit-plane slicing with the medial axis transform, morphological erosion with the medial axis transform, and Sobel edge operators. The distribution of blood vessels is unique for each person, so a set of minutiae points extracted from the blood perfusion data of a human face should be unique for that face. Two different methods are discussed for extracting minutiae points from the blood perfusion data. For feature extraction, the entire face image is partitioned into equal-sized blocks and the number of minutiae points in each block is counted to construct the final feature vector; the size of the feature vector is therefore equal to the total number of blocks considered. A five-layer feed-forward backpropagation neural network is used as the classification tool. A number of experiments were conducted to evaluate the performance of the proposed face recognition methodologies with varying block sizes on the database created at our own laboratory. The first method was found to outperform the other two, producing an accuracy of 97.62% with block size 16×16 for bit-plane 4.
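The first blood-perfusion extraction route named above (bit-plane slicing followed by the medial axis transform) can be sketched as below, assuming an 8-bit thermogram and scikit-image; the bit-plane index defaults to the reported best value of 4.

```python
# Sketch of one of the three perfusion-image routes described in the abstract.
import numpy as np
from skimage.morphology import medial_axis

def perfusion_skeleton(thermogram, bit_plane=4):
    """Slice one bit-plane of the 8-bit thermogram and skeletonise it."""
    plane = (thermogram.astype(np.uint8) >> bit_plane) & 1   # bit-plane slicing
    skeleton = medial_axis(plane.astype(bool))               # medial axis transform
    return skeleton
```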
Abstract: This paper describes an efficient approach to human face recognition based on blood perfusion data from infrared face images. Blood perfusion data are characterized by the regional blood flow in human tissue and therefore do not depend entirely on the surrounding temperature. These data bear great potential for deriving discriminating facial thermograms for better classification and recognition of face images in comparison with optical image data. Blood perfusion data are related to the distribution of blood vessels under the facial skin. The distribution of blood vessels is unique for each person, so a set of minutiae points extracted from the blood perfusion data of a human face should be unique for that face. There may be several such minutiae point sets for a single face, but all of them correspond to that particular face only. The entire face image is partitioned into equal-sized blocks and the number of minutiae points in each block is counted to construct the final feature vector; the size of the feature vector is therefore equal to the total number of blocks considered. For classification, a five-layer feed-forward backpropagation neural network has been used. A number of experiments were conducted on the database created at our own laboratory to evaluate the performance of the proposed face recognition system with varying block sizes. A maximum recognition accuracy of 91.47% has been achieved with block size 8×8.
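The block-wise feature vector construction shared by this and the previous approach can be sketched as follows; `minutiae` is assumed to be a list of (row, col) points already extracted from the perfusion image, and the block size is a parameter as in the experiments.

```python
# Minimal sketch of the block-count feature vector described above.
import numpy as np

def block_minutiae_counts(minutiae, img_shape, block=8):
    """Count minutiae per non-overlapping block; returns the feature vector."""
    rows, cols = img_shape[0] // block, img_shape[1] // block
    counts = np.zeros((rows, cols), dtype=np.int32)
    for r, c in minutiae:
        br, bc = min(r // block, rows - 1), min(c // block, cols - 1)
        counts[br, bc] += 1
    return counts.ravel()  # length equals the total number of blocks
```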