Biometric recognition technology has seen widespread integration into daily life, driven by the growing emphasis on information security. In this domain, multimodal biometrics, which combines multiple biometric traits, overcomes limitations of unimodal systems such as susceptibility to spoofing attacks and failure to adapt to changes in a trait over time. This paper proposes a novel multimodal biometric recognition system that applies deep learning to iris and palmprint modalities. The approach begins with a novel Modified Firefly Algorithm with L\'evy Flights (MFALF), used to optimize the parameters of Contrast Limited Adaptive Histogram Equalization (CLAHE) and thereby effectively enhance image contrast. Feature selection is then carried out with a hybrid of ReliefF and Moth Flame Optimization (MFOR) to retain the most informative features. For classification, we employ two parallel branches: the first introduces a novel Preactivated Inverted ResNet (PIR) architecture, and the second applies transfer learning with a DenseNet architecture whose learning rate and dropout parameters are fine-tuned by a hybrid of the novel Johnson Flower Pollination Algorithm and the Rainfall Optimization Algorithm (JFPA-ROA). Finally, a score-level fusion strategy combines the outputs of the two classifiers, yielding a robust and accurate multimodal biometric recognition system. Performance is assessed in terms of accuracy, the Detection Error Tradeoff (DET) curve, Equal Error Rate (EER), and total training time. Tested on the CASIA Palmprint, MMU, BMPD, and IIT datasets, the proposed multimodal architecture achieves 100% recognition accuracy, outperforming unimodal iris and palmprint identification approaches.
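The abstract only outlines the MFALF-CLAHE preprocessing step. As a minimal illustrative sketch (not the authors' algorithm), the snippet below tunes CLAHE's clip limit with a simplified firefly-style search augmented by L\'evy-flight perturbations, using image entropy as a stand-in contrast fitness; the parameter ranges, fitness choice, and attraction rule are all assumptions.

```python
import math
import cv2
import numpy as np

def entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit grayscale image (assumed contrast fitness)."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    hist = hist[hist > 0]
    return float(-(hist * np.log2(hist)).sum())

def levy_step(scale: float = 0.1) -> float:
    """One Levy-flight step drawn with Mantegna's algorithm (beta = 1.5)."""
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma)
    v = np.random.normal(0.0, 1.0)
    return scale * u / abs(v) ** (1 / beta)

def firefly_clahe_clip(img: np.ndarray, n_fireflies: int = 10,
                       n_iters: int = 20) -> float:
    """Search CLAHE's clip limit with a simplified firefly scheme.

    Each candidate ("firefly") is a clip-limit value; every candidate is
    attracted toward the best one found so far and perturbed by a Levy step.
    """
    def fitness(clip: float) -> float:
        clahe = cv2.createCLAHE(clipLimit=float(clip), tileGridSize=(8, 8))
        return entropy(clahe.apply(img))

    clips = np.random.uniform(1.0, 5.0, n_fireflies)  # assumed search range
    best_clip, best_fit = float(clips[0]), -np.inf
    for _ in range(n_iters):
        scores = np.array([fitness(c) for c in clips])
        i = int(scores.argmax())
        if scores[i] > best_fit:
            best_clip, best_fit = float(clips[i]), float(scores[i])
        steps = np.array([levy_step() for _ in clips])
        clips = np.clip(clips + 0.5 * (best_clip - clips) + steps, 1.0, 5.0)
    return best_clip
```

This simplified variant attracts all fireflies toward the single best candidate rather than using the full pairwise attraction of the canonical firefly algorithm; it is meant only to convey the shape of the optimization loop.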
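Likewise, the abstract names score-level fusion without specifying the rule. A common choice, shown here purely as an assumed sketch, is min-max normalization of each branch's match scores followed by a weighted sum; the fusion weight is illustrative, not the authors' value.

```python
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw match scores to [0, 1] so the two classifiers are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(pir_scores, densenet_scores, w_pir: float = 0.5) -> np.ndarray:
    """Weighted-sum score-level fusion of two classifier outputs.

    pir_scores, densenet_scores: per-identity match scores from the PIR
    branch and the JFPA-ROA-tuned DenseNet branch.
    w_pir: fusion weight for the PIR branch (illustrative value).
    """
    s1 = min_max_normalize(np.asarray(pir_scores, dtype=float))
    s2 = min_max_normalize(np.asarray(densenet_scores, dtype=float))
    return w_pir * s1 + (1.0 - w_pir) * s2

# Toy usage: the fused decision is the identity with the highest fused score.
pir = [0.2, 0.7, 0.1]    # hypothetical scores from the PIR classifier
dnet = [0.3, 0.6, 0.1]   # hypothetical scores from the DenseNet classifier
predicted_identity = int(np.argmax(fuse_scores(pir, dnet, w_pir=0.5)))
```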