Abstract: Biometric recognition technology has become widely integrated into daily life due to the growing emphasis on information security. In this domain, multimodal biometrics, which combines multiple biometric traits, overcomes limitations of unimodal systems such as susceptibility to spoof attacks and failure to adapt to changes over time. This paper proposes a novel multimodal biometric recognition system that applies deep learning to iris and palmprint modalities. The approach begins with a novel Modified Firefly Algorithm with Lévy Flights (MFALF) that optimizes the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm, effectively enhancing image contrast. Feature selection is then carried out with a unique hybrid of ReliefF and Moth Flame Optimization (MFOR) to retain the most informative features. For classification, we employ two parallel approaches: first, a novel Preactivated Inverted ResNet (PIR) architecture; second, a transfer-learning DenseNet whose learning rate and dropout parameters are fine-tuned by a metaheuristic hybrid of the innovative Johnson Flower Pollination Algorithm and the Rainfall Optimization Algorithm (JFPA-ROA). Finally, a score-level fusion strategy combines the outputs of the two classifiers, yielding a robust and accurate multimodal biometric recognition system. Performance is assessed in terms of accuracy, the Detection Error Tradeoff (DET) curve, Equal Error Rate (EER), and total training time. Tested on the CASIA Palmprint, MMU, BMPD, and IIT datasets, the proposed multimodal architecture achieves 100% recognition accuracy, outperforming unimodal iris and palmprint identification approaches.
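The score-level fusion step described above can be illustrated with a minimal sketch: a weighted-sum rule over min-max-normalized classifier scores. The function names, the fusion weight, and the normalization choice here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def score_level_fusion(scores_a, scores_b, w=0.5):
    """Weighted-sum score-level fusion of two classifiers' per-class scores.

    scores_a, scores_b: 1-D arrays of match scores, one entry per identity.
    w: weight given to the first classifier (hypothetical default of 0.5).
    Returns the index of the identity with the highest fused score.
    """
    def minmax(s):
        # Normalize scores to [0, 1] so the two classifiers are comparable.
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = w * minmax(scores_a) + (1 - w) * minmax(scores_b)
    return int(np.argmax(fused))
```

Because fusion operates on scores rather than raw features, each classifier (here, the PIR and DenseNet branches) can be trained and tuned independently before their outputs are combined.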
Abstract: Earthquake prediction, forecasting the future occurrence of this highly uncertain calamity, has been a challenging research area for many decades. In this paper, several parametric and non-parametric features were calculated, with the non-parametric features derived from the parametric ones. Eight seismic features were computed using the Gutenberg-Richter law, the total recurrence, and the seismic energy release. Criteria such as maximum relevance and minimum redundancy were then applied to select the pertinent features. These features, along with others, served as input to an Extreme Learning Machine (ELM) regression model. Magnitude and time data spanning five decades from the Assam-Guwahati region were used to build this model for magnitude prediction. Testing accuracy and testing speed were computed, with the Root Mean Squared Error (RMSE) as the evaluation metric. The results confirm that the ELM scales better, with much faster training and testing (up to a thousand times faster) than traditional Support Vector Machines; the testing RMSE came out to be around $0.097$. To further test the model's robustness, magnitude-time data from California were used to compute the seismic indicators, which were fed into an ELM and then tested on the Assam-Guwahati region. The model proves robust and can be deployed in early warning systems, which remain a major part of disaster response and management.
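An ELM regressor of the kind described reduces to a single random hidden layer followed by a closed-form least-squares fit of the output weights, which is what makes its training so much faster than an iteratively optimized SVM. A minimal sketch follows; the hidden-layer size, tanh activation, and function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Train an ELM: random input weights, pseudoinverse output weights.

    X: (n_samples, n_features) input matrix (e.g. seismic indicators).
    y: (n_samples,) regression target (e.g. magnitude).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # least-squares output fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict targets for new inputs with the trained ELM."""
    return np.tanh(X @ W + b) @ beta
```

Since training is a single pseudoinverse solve rather than gradient-based optimization, the large speed advantage over SVMs reported above is consistent with the ELM formulation.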