A Novel Illumination-Invariant Face Recognition Approach via a Reflectance-Luminance Model and Local Matching with a Weighted Voting System
MD. Ashiquzzaman *
Department of Electrical and Electronic Engineering, American International University-Bangladesh (AIUB), Dhaka, Bangladesh.
Sadman Shahriar Alam
Department of Electrical and Electronic Engineering, American International University-Bangladesh (AIUB), Dhaka, Bangladesh.
Abu Shufian
Department of Electrical and Electronic Engineering, American International University-Bangladesh (AIUB), Dhaka, Bangladesh.
Protik Parvez Sheikh
Department of Electrical and Electronic Engineering, American International University-Bangladesh (AIUB), Dhaka, Bangladesh.
Ahmed Hossain Siddiqui
Department of Electrical and Electronic Engineering, Faculty of Engineering, Stamford University Bangladesh, Bangladesh.
*Author to whom correspondence should be addressed.
Abstract
In this study, a novel approach to face recognition that is unaffected by changes in illumination is introduced. The method is based on the reflectance-luminance model and incorporates local matching with a weighted voting technique to suppress artifacts in the retinex-processed images. A total of 37 linear and nonlinear filters, including high-pass and low-pass filters, were tested for extracting the reflectance component of the image, which remains invariant to changes in illumination. Among these, the maximum filter, a simple filter with low computational complexity, yielded the best results in extracting the illumination invariants. The illumination invariants obtained with this method achieved higher recognition accuracy than methods such as the quotient image (QI), the self-quotient image (SQI), and image enhancement techniques. Importantly, the proposed method requires no prior knowledge of facial shape or illumination conditions and can be applied to each image independently. Unlike many existing methods, it does not rely on multiple images during the training stage and requires no parameter selection to generate the illumination invariants. To further improve robustness to illumination variation, a weighted voting system was introduced. Image regions that may degrade the recognition outcome because of poor illumination, occlusion, noise, or a lack of distinctive information are identified using predefined factors such as the grayscale mean, image entropy, and mutual information. The proposed method was also compared with other face recognition methods in the presence of occlusions and demonstrated promising results, outperforming the existing methods. The Python implementation successfully detects occluded faces and estimates gender and age in video, with a face-matching accuracy between 80.9% and 96.9% depending on proximity.
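The two core steps summarized above, extracting an illumination-invariant reflectance map with a maximum filter and weighting local regions by simple statistics before voting, can be illustrated with a minimal Python sketch. This is only an assumed illustration: the luminance estimator is SciPy's maximum_filter, and the window size, epsilon, and weighting formula (grayscale mean times entropy) are illustrative choices, not the exact parameters or combination rule used in the paper.

    # Minimal sketch, assuming a retinex-style decomposition I = R * L:
    # the maximum filter estimates the slowly varying luminance L, and the
    # illumination-invariant reflectance is recovered as R = I / L.
    import numpy as np
    from scipy.ndimage import maximum_filter

    def reflectance_invariant(gray_image: np.ndarray, window: int = 15,
                              eps: float = 1e-6) -> np.ndarray:
        """Return an illumination-invariant reflectance estimate of a grayscale face image."""
        img = gray_image.astype(np.float64)
        luminance = maximum_filter(img, size=window)   # large-scale illumination estimate
        return img / (luminance + eps)                 # divide out the illumination

    def region_weight(block: np.ndarray, bins: int = 32) -> float:
        """Weight a local block by grayscale mean and entropy; poorly lit or
        low-information blocks receive small weights in the voting stage."""
        hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
        p = hist / max(hist.sum(), 1)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return float(block.mean() * entropy)

In the local matching stage, the reflectance map would be divided into blocks, each block matched independently, and each block's vote scaled by a weight of this kind so that occluded or uninformative regions contribute less to the final decision.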
Keywords: Face recognition, Haar features, Viola-Jones method, Raspberry Pi