Human Iris Recognition Based on Hybrid Technique

Abstract: Iris recognition is a biometric technique that uses iris pattern information to identify a person. Initially, the system finds the boundaries of the pupil and the iris. Then, the Circular Hough transform is used to find the centers of both the pupil and the iris in order to crop the iris part from the eye image. After that, Daugman's Rubber Sheet model is utilized to perform the normalization step. Then, features are extracted based on the Legendre moment and the Local Quadrant Pattern (LQP). Several order values over many iris regions have been tried to find the combination that gives the highest recognition rate. Matching was performed using the City Block Distance. The simulation was carried out using samples from the CASIA.v4-Interval database; the main programming tool is MATLAB.


Introduction
In this automated world, there is rapid development in modern science and technology, widespread use of computers and electronic devices and a growing world population. The main problem, however, is security in its different aspects, which necessitates a very precise and reliable authentication technology. Authentication plays a fundamental role, as it is the first line of defense against intruders. Traditional systems should, therefore, be replaced by accurate, convenient and effective alternatives. In addition, governments and private sectors are increasingly encouraging the use of biometric systems.
The three basic types of authentication are something you know, such as a password; something you have, such as a card or token; and something you are, such as biometric measures.
Any physiological or behavioral attribute is a biometric if it satisfies the following criteria:
 Universality: all humans have it
 Distinctiveness: it differs from one individual to another
 Invariance: it does not change over time
 Collectability: it is easily collectible in terms of acquisition, digitization and feature extraction from the population
 Performance: data collection is practical and high accuracy can be guaranteed
 Acceptability: the population is willing to present that attribute to the recognition system
Biometric identifiers are categorized as either physiological or behavioral. The physiological type is specifically related to the shape of the body (e.g., fingerprint, palm veins, face, DNA, palm print, hand geometry, iris, retina and odor/scent). The behavioral category is related to the behavioral nature of human beings (e.g., typing rhythm, gait and voice). These biometric features are unique, remain constant for each person and can be used to identify individuals owing to the difficulty of replicating and reusing them by anyone other than the biometric owner. Automated identification systems based on iris recognition are often considered the most reliable of all biometric methods: the probability of finding two persons with identical iris patterns is almost zero. The iris has several advantages. First, it is characterized by a unique texture pattern; it has a very rich, complex and random form that includes features unique to each individual and is not affected by genetic factors, only by the prenatal environment of the fetus. Remarkably, even twins have different iris textures and, in the same person, the left-eye pattern differs from the right-eye pattern. Second, the iris begins to form during the third month of pregnancy. The iris pattern is largely shaped by the age of three years and is almost constant throughout life in the absence of external damage.
Third, unlike other biometric properties, the iris is protected from the external environment by the cornea unless there is an eye disease.

The Human Iris
The iris is the colored circular region of the eye. Close to its center lies the pupil, a circular hole. The iris consists of the sphincter and dilator muscles, which adjust the size of the pupil and thereby control the amount of light entering the eye. The average diameter of the iris is 12 mm. Its differentiation is shaped by fibrous and cellular structures such as ligaments, grooves, crypts, rings, frills, a corona and sometimes moles and freckles. The components of the human eye are illustrated in Fig. 1.

Biometric History
The idea of using iris patterns for personal identification was proposed in 1936 by the ophthalmologist Frank Burch. By the 1980s, the idea had appeared in James Bond films, but it was still science fiction and guesswork. In 1987, two ophthalmologists, Aram Safir and Leonard Flom, took up this idea, building on the observation that the iris pattern differs for each person. In the same year, they asked John Daugman to try to create actual algorithms for iris identification. The algorithms Daugman produced in 1994 form the basis of all existing iris recognition systems and products (Daugman, 1993; Prasad et al., 2018).

The Application
Extensive applications of iris systems include access control to secure areas (buildings), control of distributed systems, secure financial transactions, credit card authentication, secure access to bank accounts, computer or database access and counterterrorism. Iris systems are deployed in many countries for airline crews, airport staff, national ID cards, identification of missing children, voting in parliamentary and assembly polls and many other purposes.

Iris Recognition System
Most biometric systems operate in two modes: an enrollment mode, in which templates are added to a database and an identification mode, in which a template is created for an individual and a match is then sought in the database of pre-registered templates.
The primary stages of an iris recognition system design include the following:
 Localization of the pupil and iris
 Segmentation of the iris and pupil boundaries
 Normalization of the iris part
 Feature extraction
 Matching
Authentication is achieved by comparing the template generated from the iris image with the templates stored in the database.
Matching is performed one-to-many for identification or one-to-one for verification.

Related Work
Jain et al. (2012) presented a biometric algorithm for iris recognition using the Fast Fourier Transform and calculating all possible sets of Normalized Moments, which are invariant to rotation and scale transformations. The Fast Fourier Transform converts the image from the spatial domain to the frequency domain; it also filters noise in the image and yields more precise information. The paper used the CASIA iris image database ver. 1.0 and ver. 2.0 and the algorithm achieved a high Correct Recognition Rate (Jain et al., 2012). Mabrukar et al. (2013) presented a feature extraction method based on extracting statistical features of the iris by binarizing the first- and second-order multi-scale Taylor coefficients, using the CASIA database in MATLAB. In their experiments, multi-scale Taylor-based features proved largely immune to illumination changes, partially due to neglecting the 0th Taylor coefficient. Feature extraction using the multi-scale Taylor expansion was also implemented and yielded good results (Mabrukar et al., 2013). Hosaini et al. (2013) compared the performance of Legendre, Zernike and Pseudo-Zernike moments for feature extraction in iris recognition. They increased the moment orders until the best recognition rate was achieved. The robustness of these moments at various orders was evaluated in the presence of white Gaussian noise. Numerical results indicate that the recognition rates of the Legendre, Zernike and Pseudo-Zernike moments at higher orders are approximately identical. However, the average computation times for feature extraction at order 14 are 4.5, 18 and 0.75 s for the Legendre, Zernike and Pseudo-Zernike moments, respectively. On the other hand, the results indicate that the Legendre moment is more robust than the others against white Gaussian noise (Hosaini et al., 2013). Sarmah and Kumar (2013) presented an algorithm based on the Legendre moment. This algorithm takes advantage of the translation-invariance property of the Legendre moments, so it can reduce the computational cost of iris matching on a larger iris image database. The system was tested on the UPOL image database (Sarmah and Kumar, 2013). Kaur et al. (2018) proposed a discrete orthogonal moment-based feature extraction that extracts both global and local features: Krawtchouk moments extract local features, while Tchebichef moments extract global characteristics of the entire image block.
Dual-Hahn moments extract both global and local features. The performance of the proposed method was evaluated on four publicly available databases, achieving an improved accuracy of 99.80% for CASIA-IrisV4-Interval, 99.90% for IITD.v1, 100% for UPOL and 97.50% for UBIRIS.v2 as compared to recently proposed methods. The technique was found to be robust for NIR as well as visible images under uncontrolled environmental conditions (Kaur et al., 2018).
Al-Juburi et al. (2017) presented a new iris recognition system using hybrid methods to extract features from the tested eye images; a Gabor wavelet and the Zernike moment were used to extract the iris features.
The proposed system was tested on the CASIA-v4.0 Interval database. The results show that the proposed method has a good accuracy of about 97%. PSNR was applied to the training and testing iris images to measure the similarity between them (Al-Juburi et al., 2017). Gnana et al. (2018) proposed an architecture for iris recognition and validated it on a dataset of visible images obtained from the University of Warsaw. They undertook a comparative analysis using LBPH features and Zernike features and inferred that the proposed approach performed better with the visible images (Gnana et al., 2018).

Methodology
One of the main approaches to iris recognition is to construct feature vectors corresponding to individual iris images and perform iris matching based on some distance measure. Feature extraction is a fundamental problem in iris recognition: performance is greatly influenced by many parameters of the feature extraction process (e.g., spatial location, direction, central frequency) and may vary depending on the environmental conditions under which the iris image is acquired. Many techniques are used for feature extraction and merging two or more of these methods may produce a better result.
In image recognition, the rotation, scaling and translation invariance properties of image moments are highly significant. Hu first proposed the use of moments for image analysis and pattern recognition (Hu, 1962). Legendre moments are classical orthogonal moments and among the most widely used moments in recognition and image analysis (Oujaoura et al., 2014).

Iris Localization and Segmentation
Iris boundary detection is an important stage in the iris recognition system. First, light reflections inside the pupil are removed by adjusting the image intensity values and filling the holes (Fig. 2a).
The next step is to find the pupil center and pupil radius using the Hough transform; in our case, the approximate lowest and highest radii of the pupil are given as input (Fig. 2b).
Then the iris radius is computed to crop the iris region from the eye image (Fig. 2d).
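As a rough illustration of the circle-detection step (a minimal sketch, not the authors' MATLAB code), a circular Hough transform votes for candidate centers and radii at every edge pixel; the synthetic edge ring and the radius bounds below are assumptions for demonstration only:

```python
import numpy as np

def hough_circle(edges, r_min, r_max, n_theta=100):
    """Vote in a (cy, cx, r) accumulator for every edge pixel and
    return the circle with the most votes."""
    h, w = edges.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        for ri, r in enumerate(radii):
            # candidate centers lie on a circle of radius r around (x, y)
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok], ri), 1)
    cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
    return cy, cx, radii[ri]

# Synthetic test: an edge ring of radius 20 centered at (40, 40)
edges = np.zeros((80, 80), dtype=bool)
t = np.linspace(0.0, 2 * np.pi, 360)
edges[np.round(40 + 20 * np.sin(t)).astype(int),
      np.round(40 + 20 * np.cos(t)).astype(int)] = True
print(hough_circle(edges, 15, 25))  # approximately (40, 40, 20)
```

In practice the same routine is run twice, once with the pupil radius range and once with the iris radius range, which is why the approximate bounds are supplied as input.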

Fig. 2: Steps of localization and segmentation

Iris Normalization
After computing the inner and outer circles of the iris, the iris region is segmented out and normalized by remapping it from Cartesian coordinates to polar coordinates for easier computation, as shown in Fig. 3 (Daugman, 1993). The polar coordinates are defined by r (the radial coordinate) and θ (the angular coordinate, often called the polar angle), while the Cartesian coordinates are defined by x and y (Equation 1); the result is the iris region as a matrix of data f(x, y). One of the problems in an iris recognition system is the occlusion caused by eyelashes and eyelids, as shown in Fig. 4. This occlusion increases complexity and affects the performance of the matching and feature extraction processes.
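A minimal sketch of the rubber-sheet remapping described above, assuming the pupil and iris circles are given as (center_x, center_y, radius) tuples; the function and parameter names are ours, not the paper's:

```python
import numpy as np

def rubber_sheet(image, pupil, iris, n_r=32, n_theta=128):
    """Daugman-style unwrapping: for each angle theta, sample n_r points
    along the segment from the pupil boundary to the iris boundary,
    producing an n_r x n_theta rectangular strip f(r, theta)."""
    (pcx, pcy, pr), (icx, icy, ir) = pupil, iris
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(0.0, 1.0, n_r)
    # boundary points of the pupil and iris circles at each angle
    xp, yp = pcx + pr * np.cos(thetas), pcy + pr * np.sin(thetas)
    xi, yi = icx + ir * np.cos(thetas), icy + ir * np.sin(thetas)
    # linear interpolation between the two boundaries (handles
    # non-concentric circles)
    x = np.outer(1 - rs, xp) + np.outer(rs, xi)
    y = np.outer(1 - rs, yp) + np.outer(rs, yi)
    h, w = image.shape
    xs = np.clip(np.round(x).astype(int), 0, w - 1)
    ys = np.clip(np.round(y).astype(int), 0, h - 1)
    return image[ys, xs]

# Toy example: unwrap a synthetic gradient image
img = np.arange(100 * 100, dtype=float).reshape(100, 100)
strip = rubber_sheet(img, pupil=(50, 50, 10), iris=(50, 50, 40))
print(strip.shape)  # (32, 128)
```

Nearest-neighbor sampling keeps the sketch short; a production system would use bilinear interpolation at the fractional (x, y) positions.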
Occlusion is handled by applying the proposed approach to several iris regions, selecting a Region Of Interest (ROI) from the iris area that avoids the regions where occlusion may occur.
In the following, five regions were imposed for experimentation: (a) the upper region, (b) the lower region, (c) the two side regions, (d) the circular region around the pupil and (e) the circular region around the pupil plus the two side regions, as shown in Fig. 5.

Features Extraction
Feature extraction remains a significant phase in an iris recognition system. A successful recognition rate and a reduction in the matching time of two iris templates depend largely on an efficient feature extraction technique. A great deal of higher-level information about the image can be captured by small patterns of qualitative differences in the local gray level, using local pattern features such as the Local Binary Pattern (LBP), Local Ternary Pattern (LTP) and Local Quadrant Pattern (LQP). Local patterns have proven very successful in visual recognition tasks ranging from texture classification to face analysis and object detection.

A. Legendre Moments
The two-dimensional Legendre moments of order (p, q) of an image with intensity function f(x, y) are defined as (Equation 2):

λ_pq = ((2p + 1)(2q + 1) / 4) ∫∫ P_p(x) P_q(y) f(x, y) dx dy,   x, y ∈ [-1, 1]

where the kernel functions P_p denote the Legendre polynomials of order p (Equation 3) and the recurrent formula of the Legendre polynomials is (Equation 4):

P_p(x) = ((2p - 1) x P_{p-1}(x) - (p - 1) P_{p-2}(x)) / p,   with P_0(x) = 1 and P_1(x) = x

To compute Legendre moments from a digital image, the double integral in Equation 2 is replaced by a summation over the pixel grid. Symmetry and recursion properties of the orthogonal basis functions can be exploited to speed up the computation (Oujaoura et al., 2014).
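The discrete approximation of the moment definition can be sketched as follows (a simplified illustration, not the authors' implementation; midpoint sampling of [-1, 1] is our assumption):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moments(f, order):
    """Approximate lambda_pq for all p, q <= order by replacing the
    double integral over [-1, 1] x [-1, 1] with a sum over pixels."""
    n, m = f.shape                       # n rows (y), m columns (x)
    x = (2 * np.arange(m) + 1) / m - 1   # midpoints of [-1, 1]
    y = (2 * np.arange(n) + 1) / n - 1
    # Px[k] / Py[k] hold the Legendre polynomial P_k sampled on the grid
    Px = np.array([legval(x, [0] * k + [1]) for k in range(order + 1)])
    Py = np.array([legval(y, [0] * k + [1]) for k in range(order + 1)])
    dx, dy = 2.0 / m, 2.0 / n
    lam = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            lam[p, q] = norm * np.sum(Px[p][None, :] * Py[q][:, None] * f) * dx * dy
    return lam

f = np.ones((16, 16))        # constant test image
lam = legendre_moments(f, 3)
print(round(lam[0, 0], 6))   # lambda_00 = 1 for f = 1
```

For a constant image only λ_00 is non-zero, which is a quick sanity check; higher-order moments respond to texture variation across the normalized iris strip.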

B. Local Quadrant Pattern (LQP)
Hussain and Triggs proposed the LQP operator as a development of the LBP for visual recognition. The LBP method extracts a binary descriptor for each pixel of an image by using the intensity of the central pixel as a threshold for its neighborhood (ul Hussain and Triggs, 2012). Figure 6 gives an illustration with an example of eight neighbors equally spaced around the central pixel.
Let Ic and Ip (p = 1, 2, ..., 8) denote the intensities of the central pixel and its neighbors, respectively.
The operator performs the binary test s(Ip - Ic) = 1 if Ip ≥ Ic and 0 otherwise (Equation 7), where R denotes the sampling radius and N represents the number of sample points equally spaced around the circle. A binary code with N bits is obtained for each pixel, resulting in 2^N different patterns. Finally, these patterns are converted to decimal values.
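For the common 3×3 case (R = 1, N = 8), the thresholding and bit-packing can be sketched as follows (illustrative only; the neighbor ordering is our own convention):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: threshold the 8 neighbors at the center value
    and pack the resulting bits into one decimal code per pixel."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offs):
        # shifted view: each pixel's neighbor in direction (dy, dx)
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    return code

img = np.array([[1, 9, 1],
                [9, 5, 9],
                [1, 9, 1]])
print(lbp_codes(img))  # [[170]]  (binary 10101010)
```

Each alternating bright/dark neighbor sets or clears one bit, so the single interior pixel of this toy image yields the alternating code 10101010 = 170.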
The Local Quadrant Pattern is a newly proposed method based on the idea of the LTP (Al-Jawahry and Mohammed, 2019).
First, the difference Di between the center pixel Ic and each neighbor pixel Ip is computed (Equation 8). After that, every two results Di for a specific direction are combined into one value according to (Equation 9) (Rao and Rao, 2015), as shown in Fig. 7b, where Fi (i = 1, 2, ..., 4) is the result for each line, t is a specific threshold and the Fi values are obtained from (Equation 10). The feature vector consists of two parts: first, V1 is obtained by computing the Legendre moments of the f(x, y) matrix resulting from the iris normalization stage; second, V2 is obtained by applying the LQP method to f(x, y) and then computing the Legendre moments; finally, V3 is generated by appending V2 to V1.
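The fusion step itself amounts to simple concatenation of the two Legendre-moment vectors (the names v1, v2, v3 mirror the text; the numeric values below are placeholders, not results from the paper):

```python
import numpy as np

def fuse_features(v1, v2):
    """V3 is V1 with V2 appended, as described in the text."""
    return np.concatenate([v1, v2])

v1 = np.array([0.5, 1.2])  # Legendre moments of f(x, y) (placeholder values)
v2 = np.array([3.0, 0.1])  # Legendre moments of the LQP map (placeholder values)
v3 = fuse_features(v1, v2)
print(v3.shape)  # (4,)
```

Concatenation keeps the two feature families separate in the final vector, so the matcher can benefit from both the global (V1) and texture-coded (V2) information.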

Matching
For matching, the City Block Distance is used because it gives a higher recognition accuracy than other methods (Sari et al., 2018). The City Block Distance is the sum of the absolute differences between the components of two vectors (Equation 12). In the proposed system, the feature vector is obtained from the previously mentioned techniques.
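A minimal sketch of City Block matching; the two-template gallery and probe values are invented for illustration:

```python
import numpy as np

def city_block(a, b):
    """City Block (L1) distance: sum of absolute component differences."""
    return float(np.abs(np.asarray(a) - np.asarray(b)).sum())

def identify(probe, gallery):
    """One-to-many matching: index of the closest enrolled template."""
    return min(range(len(gallery)), key=lambda i: city_block(probe, gallery[i]))

gallery = [np.array([1.0, 2.0, 3.0]),   # enrolled template 0
           np.array([4.0, 0.0, 1.0])]   # enrolled template 1
probe = np.array([3.9, 0.2, 1.1])
print(identify(probe, gallery))  # 1
```

Verification (one-to-one) is the same distance compared against a single stored template with an acceptance threshold.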

Results and Discussion
The proposed approach was implemented and tested on the CASIA-V4-Interval database. The system was developed in MATLAB (version R2017a) and runs under the Windows 10 operating system on a laptop. Computing time is reported as the Average Recognition Time (ART) in seconds. Table 13 shows the ART, computed as the average time taken to compare each testing image with all training images in the database.
Tables 1 to 5 (left-eye images) and Tables 6 to 10 (right-eye images) show that increasing the Legendre order leads to an increase in the recognition rate when Legendre moments alone are used for feature extraction; these steps are shown in Fig. 8, part C1. As shown in Figs. 9 and 10, when the system applies LQP followed by Legendre moments, the approach yields a high recognition rate, although the rate decreases at some Legendre orders; these steps are shown in Fig. 8, part C2. As shown in Tables 11 and 12, the feature extractor has three steps: first, the image is fed to the Legendre algorithm to produce a feature vector (named V1); second, the image is fed to the LQP algorithm to produce an array of coefficients, which is then fed to the Legendre algorithm to produce a feature vector (V2); finally, a fusion stage appends V2 to V1 to produce the final vector of all features (named V3). The best results, with the highest recognition rates, were achieved with this fusion.

Conclusion and Future Work
 The best order for the Legendre moments is the 10th, where it gives a high recognition rate as well as resistance to image variation
 The best result was obtained with the lower region of the left eye, while the highest recognition rate was achieved with the two side regions of the right eye, as shown in Fig. 11

Author's Contributions
Asaad Noori Hashim: Contributed to the analysis and simulation of the proposed algorithm; also contributed to the write-up and language revision.
Bushraa Mahdi Al-Hashimi: Reviewed the literature, identified the research gap and revised the results.

Ethics
This paper is genuine and contains unpublished material. The corresponding author confirms that the coauthor has read and approved the manuscript and that no ethical issues are involved.