AI Case Study
Researchers from Warsaw University of Technology identify irises as live or dead with a mean accuracy of 98.94% using convolutional neural networks
Researchers from Warsaw University of Technology and other institutions assessed the ability of a deep convolutional neural network to determine whether images of irises belonged to live or deceased subjects. While they achieved a mean classification accuracy of 98.94%, they found that post-mortem irises could be used for a period after death to deceive biometric systems.
Public And Social Sector
Education And Academia
Researchers from Warsaw University of Technology, Research and Academic Computer Network, and the Medical University of Warsaw conduct the first research on a "method for iris liveness detection in respect to the post-mortem setting, based on a deep convolutional neural network VGG-16, adapted and fine-tuned to the task of discerning live and dead irises," according to their paper. This involved cropping the photos of both live and deceased subjects to the iris region, avoiding bias from the way eyelids are often held open on deceased subjects. Images of deceased eyes at various post-mortem timeframes were taken from a publicly available database, and live subjects were photographed using the same equipment.
From the arXiv research paper: "Law enforcement officers in the U.S. are reportedly already using the fingerprints of the deceased to unlock the suspects’ iPhones, which immediately brings up the topic of whether liveness detection should be one of the components of Presentation Attack Detection implemented in such devices. With a constantly growing market share of iris recognition, and recent research proving that iris biometrics in a post-mortem scenario can be viable, these concerns are also becoming true for iris." IEEE Spectrum reports that "The iris, the colored ring of muscle that controls the contraction and dilation of the pupil, is composed of tiny fibers that form an intricate and unique pattern in each individual’s eye. Iris scanners use both visible and near-infrared light to look at hundreds of points within these patterns, then try to match them with a registered profile."
Mean accuracy was 98.94%. From the arXiv paper: "We can expect a few false matches (post-mortem samples being classified as live iris samples) with images obtained 5 hours after death, regardless of the chosen threshold. This can be attributed to the fact that these images are very similar to those obtained from live individuals, as post-mortem changes to the eye are still not pronounced enough to allow for a perfect classification accuracy." However, when these recently deceased subjects were excluded, misclassification of post-mortem irises as live dropped to 0, and misclassification of live irises as post-mortem to about 1%.
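The threshold trade-off described above can be illustrated with a small sketch (the scores below are hypothetical, not the paper's data): a sample is classified as live when its softmax live-probability meets the decision threshold, and a recently deceased eye whose score overlaps the live range produces a false match at any usable threshold.

```python
# Hypothetical softmax "live" probabilities, NOT data from the paper.
# The last post-mortem score mimics an eye imaged ~5 hours after death,
# when post-mortem changes are not yet pronounced.
live_scores = [0.99, 0.97, 0.95, 0.92]
postmortem_scores = [0.05, 0.10, 0.20, 0.96]

def error_rates(live, dead, threshold):
    """Classify as live when score >= threshold; return the two error rates:
    (post-mortem classified as live, live classified as post-mortem)."""
    dead_as_live = sum(s >= threshold for s in dead) / len(dead)
    live_as_dead = sum(s < threshold for s in live) / len(live)
    return dead_as_live, live_as_dead

for t in (0.5, 0.9, 0.95):
    print(t, error_rates(live_scores, postmortem_scores, t))
```

Raising the threshold here never eliminates the false match from the 5-hour sample; it only begins rejecting genuine live irises as well, which mirrors the behaviour the authors report.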
From the arXiv paper: "For our solution, we employed the [deep convolutional neural network] VGG-16 model pretrained on natural images from the ImageNet database, which has been shown to repeatedly achieve excellent results in various classification tasks after minor adaptation and re-training. We thus performed a simple modification to the last three layers of the original graph to reflect the nature of our binary classification into live and post-mortem types of images, and performed transfer learning by fine-tuning the network weights to our dataset of iris images representing both classes.
For the network training and testing procedure, 20 subject-disjoint train/test data splits were created by randomly assigning the data from 3 subjects to the test subset, and the data from the remaining subjects to the train subset, both for the live and post-mortem parts of the database. These twenty splits were made with replacement, making them statistically independent. The network was then trained with each train subset independently for each split, and evaluated on the corresponding test subset. This procedure gives 20 statistically independent evaluations and allows to assess the variance of the estimated error rates. The training, encompassing 10 epochs in each of the train/test split run, was performed with stochastic gradient descent as the minimization method with momentum m = 0.9 and learning rate of 0.0001, with the data being passed through the network in mini batches of 16 images. During testing, a prediction of the live or post-mortem class-wise probability was obtained from the Softmax layer, together with a corresponding predicted categorical label."
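The subject-disjoint split procedure quoted above can be sketched in plain Python. The subject IDs below are illustrative placeholders; the fine-tuning step itself (VGG-16, SGD with momentum 0.9, learning rate 0.0001, mini-batches of 16, 10 epochs) would run in a deep learning framework and is only noted in a comment.

```python
import random

def make_splits(subject_ids, n_splits=20, n_test_subjects=3, seed=0):
    """Create subject-disjoint train/test splits: for each split, a few
    randomly chosen subjects form the test set and all remaining subjects
    form the train set. Splits are drawn independently ("with replacement"),
    so the same subject may appear in the test set of several splits."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        test = set(rng.sample(subject_ids, n_test_subjects))
        train = set(subject_ids) - test
        splits.append((train, test))
    return splits

# Illustrative subject pool (the post-mortem database covers 17 subjects).
subjects = [f"S{i:02d}" for i in range(1, 18)]

for train, test in make_splits(subjects):
    # For each split: fine-tune VGG-16 on images from `train` subjects
    # (SGD, momentum 0.9, lr 0.0001, batch size 16, 10 epochs), then
    # evaluate on images from `test` subjects only.
    pass
```

Because no subject's images ever appear in both the train and test sets of a split, the 20 evaluations measure generalisation to unseen eyes rather than memorisation of individual subjects.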
The dataset comprised 574 near-infrared iris images from the public Warsaw BioBase PostMortem Iris dataset, taken from 17 subjects at time points ranging from 5 hours to 34 days after death, as well as 256 iris photos of live subjects captured with the same iris camera.