Authors: Serifi, Meryem; Serifi, Umit
Date accessioned: 2025-03-17
Date available: 2025-03-17
Date issued: 2025
ISSN: 1863-1703; 1863-1711
DOI: https://doi.org/10.1007/s11760-025-03873-7
Handle: https://hdl.handle.net/20.500.13099/2301

Abstract: Multimodal biometric systems integrate multiple biometric traits to enhance recognition accuracy and robustness. This study introduces a novel face-iris multimodal biometric framework that combines texture-based and deep learning methods. The system applies uniform local binary patterns to capture fine-grained texture features. In addition, a dual convolutional neural network (CNN) model, incorporating AlexNet and an attention mechanism, extracts high-level discriminative features from entire face and iris images. The attention mechanism prioritizes critical regions in the feature maps, improving focus on discriminative details while mitigating noise. The key innovation of the system lies in integrating texture-based and CNN-based features, which together enable robust feature extraction and classification. Furthermore, a decision-level fusion strategy based on majority voting combines the independent decisions of the individual methods into a resilient final classification. Experiments conducted on the CASIA-Iris-Distance database demonstrate a recognition performance of 99.53%, significantly outperforming unimodal and state-of-the-art multimodal systems.

Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Multimodal biometric system; Information fusion; Decision-level fusion; Convolutional neural networks; Dual CNN; Uniform local binary patterns
Title: Dual CNN and texture-based face-iris multimodal biometric system via decision-level fusion
Type: Article
Volume: 19
Issue: 4
WoS quartile: Q3
WoS ID: WOS:001420671400007
Scopus ID: 2-s2.0-85218346125
Scopus quartile: Q2
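The abstract's decision-level fusion step can be illustrated with a minimal sketch. This assumes each matcher (e.g. the texture-based and CNN-based face and iris classifiers) outputs a predicted identity label, and the final decision is whichever label most matchers agree on; the function name and example labels are illustrative, not from the paper.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse independent classifier decisions by majority voting.

    decisions: a list of predicted identity labels, one per matcher
    (e.g. texture-face, texture-iris, CNN-face, CNN-iris).
    Returns the label predicted by the most matchers; on a tie,
    the label encountered first wins (Counter preserves insertion order).
    """
    counts = Counter(decisions)
    label, _ = counts.most_common(1)[0]
    return label

# Example: three of four hypothetical matchers agree on subject "id_042"
print(majority_vote(["id_042", "id_042", "id_017", "id_042"]))  # id_042
```

In practice such systems weight or validate the individual decisions before voting; this sketch shows only the bare majority rule described in the abstract.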