Abstract:
A covariate in face recognition can be defined as an effect that independently
increases the intra-class variability, decreases the inter-class variability, or
both. Covariates such as pose, illumination, expression, aging, and disguise are
established and extensively studied in the literature and are categorized as
existing covariates of face recognition. However, ever-increasing applications of
face recognition have instigated many new and exciting scenarios such as matching
forensic sketches to mug-shot photos, recognizing faces altered due to plastic
surgery, matching low resolution surveillance images, and recognizing individuals
in videos. These covariates are categorized as emerging covariates of face
recognition, which are the primary emphasis of this dissertation.

One of the important cues in
solving crimes and apprehending criminals is matching forensic sketches with
digital face images. The first contribution of this dissertation is a memetically
optimized multi-scale circular Weber’s local descriptor (MCWLD) for matching
forensic sketches with digital face images. The proposed algorithm automatically
extracts discriminative information from local regions of both sketches and
digital images using MCWLD. An evolutionary memetic
optimization is proposed to assign optimal weights to every local facial region
to boost the identification performance. Since forensic sketches and digital
images can be of poor quality, a pre-processing technique is also used to enhance
image quality. Results on different sketch databases, including a forensic sketch
database, illustrate the efficacy of the proposed algorithm.
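
A minimal sketch of this weighted local matching idea is given below, assuming
that MCWLD histograms have already been extracted for a fixed set of local facial
regions; the chi-square distance, the fitness measure, and the genetic loop with a
hill-climbing refinement are illustrative stand-ins rather than the dissertation's
exact formulation.

    import numpy as np

    def chi_square(h1, h2, eps=1e-10):
        # Chi-square distance between two normalized MCWLD histograms.
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def weighted_distance(probe_regions, gallery_regions, weights):
        # Sum of per-region histogram distances, scaled by the learned region weights.
        return sum(w * chi_square(p, g)
                   for w, p, g in zip(weights, probe_regions, gallery_regions))

    def memetic_weight_search(train_pairs, n_regions, pop=30, gens=50, seed=0):
        # Toy memetic search: a genetic loop whose offspring are refined by hill climbing.
        # train_pairs holds (probe_regions, gallery_regions, is_genuine) tuples; fitness
        # rewards weights that push impostor distances above genuine distances.
        rng = np.random.default_rng(seed)

        def fitness(w):
            gen = [weighted_distance(p, g, w) for p, g, s in train_pairs if s]
            imp = [weighted_distance(p, g, w) for p, g, s in train_pairs if not s]
            return float(np.mean(imp) - np.mean(gen))

        def hill_climb(w, step=0.05):
            # Local search component of the memetic algorithm.
            best = fitness(w)
            for i in range(len(w)):
                for delta in (step, -step):
                    cand = w.copy()
                    cand[i] = np.clip(cand[i] + delta, 0.0, 1.0)
                    if fitness(cand) > best:
                        w, best = cand, fitness(cand)
            return w

        population = rng.random((pop, n_regions))
        for _ in range(gens):
            scores = np.array([fitness(w) for w in population])
            parents = population[np.argsort(scores)[-(pop // 2):]]
            children = np.clip(parents + rng.normal(0.0, 0.1, parents.shape), 0.0, 1.0)
            children = np.array([hill_climb(c) for c in children])
            population = np.vstack([parents, children])
        return max(population, key=fitness)

At test time the learned weights are frozen, and weighted_distance is used to
score a probe sketch against every digital gallery image.
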
Widespread acceptability and use of biometrics for person authentication have
instigated several techniques for evading identification, such as altering facial
appearance through surgical procedures. These procedures modify both the shape and
texture
of facial features to varying degrees and thus degrade the performance
of face recognition when matching pre- and post-surgery images. The second
contribution of this dissertation is a multi-objective evolutionary granular algorithm
for matching face images altered due to plastic surgery procedures.
The algorithm first generates non-disjoint face granules at multiple levels of
granularity. The granular information is assimilated using a multi-objective genetic
algorithm that simultaneously optimizes the selection of the feature extractor
for each face granule along with the weights of individual granules. On the IIIT-D
plastic surgery database, the proposed algorithm yields state-of-the-art
performance.
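
To make the search space concrete, the sketch below encodes one candidate solution
of such an optimization; the two stand-in descriptors, the patch-level features,
and the dominance test are assumptions chosen for illustration, not the actual
extractors or objectives used in the dissertation.

    import numpy as np
    from dataclasses import dataclass

    # Two stand-in descriptors; the dissertation's actual feature extractors differ.
    EXTRACTORS = [
        lambda patch: np.histogram(patch, bins=32, range=(0.0, 1.0))[0] / patch.size,
        lambda patch: np.sort(patch.ravel())[::-1][:32],
    ]

    @dataclass
    class Chromosome:
        # One candidate solution: an extractor choice and a weight for every face granule.
        extractor: np.ndarray  # integer per granule, indexing into EXTRACTORS
        weight: np.ndarray     # float in [0, 1] per granule

    def match_score(granules_a, granules_b, chrom):
        # Weighted similarity between two faces represented as lists of granule patches.
        score = 0.0
        for g_a, g_b, e, w in zip(granules_a, granules_b, chrom.extractor, chrom.weight):
            f_a, f_b = EXTRACTORS[int(e)](g_a), EXTRACTORS[int(e)](g_b)
            score -= w * np.linalg.norm(f_a - f_b)  # smaller feature distance, higher score
        return score

    def dominates(obj_a, obj_b):
        # Pareto dominance when every objective is to be maximized.
        return (all(a >= b for a, b in zip(obj_a, obj_b))
                and any(a > b for a, b in zip(obj_a, obj_b)))

A multi-objective genetic algorithm evolves a population of such chromosomes,
evaluating each on validation objectives (for example, recognition accuracy on
genuine pairs and separation from impostor pairs) and retaining the non-dominated
front.
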
Face recognition performance degrades when a low resolution face
image captured in unconstrained settings, such as surveillance, is matched with
high resolution gallery images. The primary challenge is to extract discriminative
features from the limited biometric content in low resolution images
and match them against information-rich high resolution face images. The problem
of cross-resolution face matching is further compounded when there is limited
labeled low resolution training data. The third contribution of this dissertation
is a co-transfer learning framework, a cross-pollination of the transfer learning
and co-training paradigms, for enhancing the performance of cross-resolution
face recognition. The transfer learning component transfers the knowledge
that is learned while matching high resolution face images during training
for matching low resolution probe images with a high resolution gallery during
testing. On the other hand, the co-training component facilitates this knowledge
transfer by assigning pseudo labels to unlabeled probe instances in the target
domain. Experiments on a synthetic database, three low resolution
surveillance-quality face databases, and real world examples show the efficacy of
the proposed co-transfer learning algorithm as compared to other approaches.
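
The co-training component can be sketched as follows, assuming two feature views
of every probe-gallery pair and logistic-regression base classifiers; the views,
confidence threshold, and classifier choice are illustrative assumptions rather
than the dissertation's exact design.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(view1_lab, view2_lab, y_lab, view1_unlab, view2_unlab,
                 rounds=5, per_round=10, threshold=0.9):
        # Minimal co-training loop for pseudo-labelling unlabeled cross-resolution pairs.
        # view1_*/view2_*: two feature representations of the same probe-gallery pairs;
        # y_lab marks genuine pairs as 1 and impostor pairs as 0 (both classes required).
        clf1, clf2 = LogisticRegression(max_iter=1000), LogisticRegression(max_iter=1000)
        X1, X2, y = view1_lab.copy(), view2_lab.copy(), y_lab.copy()
        U1, U2 = view1_unlab.copy(), view2_unlab.copy()

        for _ in range(rounds):
            if len(U1) == 0:
                break
            clf1.fit(X1, y)
            clf2.fit(X2, y)

            # Each classifier nominates the unlabeled pairs it is most confident about.
            picked = set()
            for clf, U in ((clf1, U1), (clf2, U2)):
                conf = clf.predict_proba(U).max(axis=1)
                top = np.argsort(conf)[::-1][:per_round]
                picked.update(int(i) for i in top if conf[i] >= threshold)
            if not picked:
                break

            idx = np.array(sorted(picked))
            # Pseudo labels come from the averaged posteriors of both classifiers.
            pseudo = ((clf1.predict_proba(U1[idx])[:, 1] +
                       clf2.predict_proba(U2[idx])[:, 1]) / 2.0 > 0.5).astype(int)

            X1, X2 = np.vstack([X1, U1[idx]]), np.vstack([X2, U2[idx]])
            y = np.concatenate([y, pseudo])
            U1, U2 = np.delete(U1, idx, axis=0), np.delete(U2, idx, axis=0)

        return clf1, clf2

The pairs on which both classifiers agree with high confidence are what carry the
knowledge learned on high resolution matching into the low resolution target
domain.
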
Owing to their widespread applications and the large intra-personal variations
they capture, videos have gained significant attention for face recognition.
Unlike still face images, videos provide abundant information that can be
leveraged to compensate for intra-personal variations and enhance face recognition
performance. The fourth contribution of this dissertation is a video-based face
recognition algorithm which computes a discriminative video signature as an
ordered (ranked) list of still face images from a large dictionary. A three-stage
approach is developed for optimizing ranked lists across multiple video frames
and fusing them into a single composite ordered list to compute the video signature.
The signature embeds diverse intra-personal variations and facilitates matching
two videos across large variations. Results obtained on the YouTube and MBGC v2
video databases show the effectiveness of the proposed algorithm.
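
As a simple illustration of fusing per-frame ranked lists into one composite
ordered list, the sketch below uses Borda-count-style voting; the dissertation's
three-stage optimization is more elaborate, and the identifiers here are
hypothetical.

    from collections import defaultdict

    def fuse_ranked_lists(per_frame_lists, top_k=None):
        # Fuse per-frame ranked lists of dictionary-face identifiers into one list.
        # A dictionary face scores more the nearer to the top it appears in each frame
        # (Borda-style voting); ties are broken by how many frames retrieved it at all.
        score = defaultdict(float)
        votes = defaultdict(int)
        for ranked in per_frame_lists:
            n = len(ranked)
            for rank, face_id in enumerate(ranked):
                score[face_id] += n - rank
                votes[face_id] += 1
        fused = sorted(score, key=lambda f: (score[f], votes[f]), reverse=True)
        return fused[:top_k] if top_k else fused

    # Example: three frames voting over a small dictionary of still faces.
    frames = [["d12", "d03", "d47"], ["d03", "d12", "d55"], ["d12", "d55", "d03"]]
    print(fuse_ranked_lists(frames))  # ['d12', 'd03', 'd55', 'd47']

Two videos can then be compared by measuring how similarly their composite lists
order the same dictionary of still faces, for instance with a rank correlation
measure.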