3.2 PROPOSED ALGORITHM: FEATURE EXTRACTION USING CROSSING NUMBER (CN) AND RIDGE TRACKING TECHNIQUE

The various steps involved in feature extraction are given below.

3.2.1 ADAPTIVE BINARIZATION

The enhanced greyscale image is converted to a binary image using adaptive binarization [1]. Global thresholding is not used because illumination on the surface of the scanner may be non-uniform. The image is therefore binarized adaptively with a window size of 91 x 91 (this size was finalised after a number of trials). The algorithm can be outlined as follows:

—————————————————————————————–
Algorithm: Adaptive binarization
Input: Enhanced greyscale image e(x,y).
Output: Binarized image bin(x,y).
For each pixel (xi, yi) of e(x,y)
    Compute the local mean ml over the 91 x 91 neighbourhood of the pixel.
    If ml > e(xi, yi) then bin(xi, yi) = white, else bin(xi, yi) = black.
End For
—————————————————————————————–
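As a concrete illustration, the binarization step can be sketched in Python. This is a minimal sketch, not the authors' implementation: it assumes NumPy and SciPy are available, with scipy.ndimage.uniform_filter supplying the local mean ml for every pixel at once.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_binarize(e, window=91):
    """Binarize a greyscale image against its local mean.

    Following the algorithm above, a pixel becomes white (255) when
    the local mean ml exceeds the pixel value, and black (0) otherwise.
    The 91 x 91 window size is the one chosen in the text.
    """
    e = e.astype(np.float64)
    local_mean = uniform_filter(e, size=window)  # ml for each pixel
    return np.where(local_mean > e, 255, 0).astype(np.uint8)
```

Computing the local mean with a separable uniform filter keeps the cost independent of the window size, which matters for a window as large as 91 x 91.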
3.2.2 THINNING

The binarised image is skeletonised using the medial axis transformation (MAT) [1] to obtain a single-pixel-thin ridge structure. The thinning algorithm can be outlined as follows:

Assumptions: Region points are assumed to have value 1 (white) and background points value 0 (black).
Notations:
1. The 8-neighbour notation of a centre pixel p1 is as shown.
2. n(p1) = p2 + p3 + … + p9.
3. t(p1) is the number of 0-1 transitions in the ordered sequence p2, p3, …, p9, p2.
Algorithm: Thinning
Input: Binarized image bin(x,y).
Output: One-pixel-thin image th(x,y).
Steps:
1. With reference to the neighbourhood notation, a pixel p1 in bin(x,y) is flagged for deletion if all of the following conditions are satisfied:
   (a) 2 ≤ n(p1) ≤ 6
   (b) t(p1) = 1
   (c) p2 · p4 · p6 = 0
   (d) p4 · p6 · p8 = 0
2. Delete all the flagged pixels from bin(x,y).
3. With reference to the neighbourhood notation, a pixel p1 in bin(x,y) is flagged for deletion if all of the following conditions are satisfied:
   (a) 2 ≤ n(p1) ≤ 6
   (b) t(p1) = 1
   (c) p2 · p4 · p8 = 0
   (d) p2 · p6 · p8 = 0
4. Delete all the flagged pixels from bin(x,y).
5. Go to step 1 if bin(x,y) differs from the previous bin(x,y) (indicating that single-pixel thickness has not yet been obtained).
6. Assign the image bin(x,y) obtained from step 4 to th(x,y).
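The two-pass procedure above is the classical Zhang-Suen thinning scheme; a compact Python sketch under the stated conventions (ridge pixels 1, background 0) might look as follows. The function names and looping style are illustrative, not the authors' code.

```python
import numpy as np

def neighbours(img, y, x):
    # p2..p9, clockwise starting from the pixel directly above p1
    return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
            img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

def thin(img):
    """Zhang-Suen thinning of a 0/1 NumPy array (1 = region point)."""
    img = img.copy()
    changed = True
    while changed:                       # step 5: repeat until stable
        changed = False
        for step in (0, 1):              # step 1 pass, then step 3 pass
            flagged = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p = neighbours(img, y, x)
                    n = sum(p)                                   # n(p1)
                    t = sum((p[i] == 0 and p[(i+1) % 8] == 1)
                            for i in range(8))                   # t(p1)
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= n <= 6 and t == 1 and cond:
                        flagged.append((y, x))
            for y, x in flagged:         # steps 2 and 4: delete flags
                img[y, x] = 0
                changed = True
    return img
```

The Python loops are slow on full-size images; a production version would vectorize the neighbourhood tests, but the control flow mirrors steps 1-6 above directly.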
Thus one iteration of the thinning algorithm consists of applying step 1 to flag border points for deletion; deleting the flagged points; applying step 3 to flag the remaining border points for deletion; and again deleting the flagged points. This basic procedure is applied iteratively until no further points are deleted, at which time the algorithm terminates, yielding the skeleton of the region.

3.2.3 ESTIMATING SPATIAL CO-ORDINATES AND DIRECTION OF MINUTIAE POINTS

Minutiae representation is by far the most widely used method of fingerprint representation.
Minutiae, or small details, mark the regions of local discontinuity within a fingerprint image. These are locations where the ridge comes to an end (type: ridge ending) or branches into two (type: bifurcation). Other forms of minutiae include a very short ridge (type: ridge dot) and a closed loop (type: enclosure). The different types of minutiae are illustrated in Figure 1. There are more than 18 different types of minutiae [2], among which ridge bifurcations and endings are the most widely used. Other minutiae types may simply be expressed as multiple ridge endings or bifurcations. For instance, a ridge dot may be represented by two opposing ridge endings placed at either extremity. Even this simplification is redundant, since many matching algorithms do not distinguish between ridge endings and bifurcations at all, because their types can get flipped during acquisition. The template simply consists of a list of minutiae locations and their orientations. The feature extractor takes as input a grey scale image I(x,y) and produces an unordered set of tuples M = {m1, m2, m3, …, mN}. Each tuple mi corresponds to a single minutia and represents its properties. The properties extracted by most algorithms include its position and orientation.
Thus, each tuple mi is usually represented as a triplet (xi, yi, θi). The crossing number (CN) method is used to extract the spatial coordinates of the minutiae points. This method extracts the bifurcations from the skeleton image by examining the local neighbourhood of each ridge pixel using a 3 x 3 window. The CN of a ridge pixel p is given by

    CN = 0.5 Σ(i = 1 to 8) | p(i) − p(i+1) | ,  with p(9) = p(1).

For a pixel p, if CN = 3 it is a bifurcation point.
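A minimal Python sketch of this computation follows. For convenience the ridge pixels are taken to be 1 here (the opposite of the convention stated later in the text), and the function names are hypothetical, not part of the described method.

```python
import numpy as np

def crossing_number(img, y, x):
    # p2..p9 clockwise around the pixel, with p(9) wrapping back to p(1)
    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
    # CN = 0.5 * sum over i of |p(i) - p(i+1)|
    return 0.5 * sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8))

def extract_bifurcations(th):
    """Return the (x, y) coordinates of every ridge pixel with CN = 3."""
    points = []
    for y in range(1, th.shape[0] - 1):
        for x in range(1, th.shape[1] - 1):
            if th[y, x] == 1 and crossing_number(th, y, x) == 3:
                points.append((x, y))
    return points
```

The same routine with the test CN = 1 would yield ridge endings, though the text uses only bifurcations.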
For each extracted minutia, along with its x and y coordinates, the orientation of the associated ridge segment is also recorded. The minutia direction is determined using a ridge tracking technique. With reference to Figure 3.3, once the x and y coordinates of the bifurcation point are known, the three directions leading away from that point can be tracked. Each direction is tracked up to a length of 10 pixels. Once tracked, a triangle is constructed from these three end points. The midpoint of the smallest side of the triangle is then connected to the bifurcation point, and the angle of the resulting line segment is taken as the minutia direction.

Assumptions: Ridges are assumed to have value 0 (black) and background points value 1 (white).
Notations: The 8-neighbour notation of a centre pixel p1 is as previously shown.

The algorithm for extracting the minutiae using the crossing number technique can be outlined as follows:

Algorithm: Crossing number
Input: Thinned image th(x,y).
Output: The (x,y) coordinates and orientation θ of each minutia.
Steps:
1. For every pixel p in th(x,y) compute the crossing number CN = 0.5 Σ(i = 1 to 8) | p(i) − p(i+1) | , with p(9) = p(1).
2. If CN = 3, the pixel p is declared a bifurcation point and its x and y coordinates, i.e. p.x and p.y, are recorded.
3. The orientation p.θ at each bifurcation point is calculated using the ridge tracking algorithm.

FINGERPRINT MATCHING PROCESS

Each minutia may be described by a number of attributes, such as its position (x,y), its orientation θ, its quality, etc. However, most algorithms consider only the position and orientation information. Given a pair of fingerprints and their corresponding minutiae features to be matched, the features may be represented as unordered sets given by

    I1 = {m1, m2, …, mM}    where mi = (xi, yi, θi)
    I2 = {m'1, m'2, …, m'N} where m'i = (x'i, y'i, θ'i)

Here the objective is to find, for each point mi in I1, the point m'j in I2 that exclusively corresponds to it. Usually the points in I2 are related to the points in I1 through a geometric transformation T( ). Therefore, the technique used by most minutiae matching algorithms is to recover the transformation function T( ) that maps one point set onto the other. The resulting point set I''2 is given by

    I''2 = T(I2) = {m''1, m''2, m''3, …, m''N},  where m''j = T(m'j).

The minutiae pair (mi, m''j) is considered a match only if

    sqrt( (xi − x''j)^2 + (yi − y''j)^2 ) ≤ r0
    min( |θi − θ''j| , 360 − |θi − θ''j| ) < θ0

Here r0 and θ0 denote the tolerance window.
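The tolerance test above translates directly into code. A small Python sketch follows; the default values of r0 and theta0 are purely illustrative, not values prescribed by the text.

```python
import math

def minutiae_match(m, m2, r0=10.0, theta0=15.0):
    """Check whether minutia m = (x, y, theta) matches the transformed
    minutia m2 within the tolerance window: within r0 pixels in
    position and theta0 degrees in orientation (angles taken modulo
    360, using the smaller of the two angular differences)."""
    (x1, y1, t1), (x2, y2, t2) = m, m2
    dist = math.hypot(x1 - x2, y1 - y2)
    dtheta = abs(t1 - t2) % 360
    dtheta = min(dtheta, 360 - dtheta)
    return dist <= r0 and dtheta <= theta0
```

The min(d, 360 − d) step is what makes orientations of, say, 2 and 358 degrees count as close rather than far apart.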
The matcher can make one of the following assumptions about the nature of the transformation T:

Rigid transformation: one point set is assumed to be a rotated and shifted version of the other.
Affine transformation: affine transformations are a generalization of the Euclidean transform; shape and angle need not be preserved.
Non-linear transformation: the mapping may be due to an arbitrary and complex transformation function T(x,y).

The problem of matching minutiae can be treated as an instance of the generalized point pattern matching problem. In its most general form, point pattern matching consists of matching two unordered sets of points of possibly different cardinalities. It is assumed that the two point sets are related by some geometrical relationship. In most situations some of the point correspondences are already known (e.g. control points in an image registration problem [5,4,6,7]) and the problem reduces to finding the optimal geometrical transformation that relates the two sets. In fingerprints, however, the point correspondences themselves are unknown, and the points therefore have to be matched with no prior assumptions, making this a very challenging combinatorial problem. There have been several prior approaches in which general point pattern techniques have been applied.
Some of these are discussed here. Ranade and Rosenfeld [8] proposed an iterative approach for obtaining point correspondences. For each point pair (mi, m'j) they assign pij, the likelihood of the correspondence, and c(i, j, h, k), a cost function that captures the correspondence of other pairs (mh, m'k) as a result of matching mi with m'j. In each iteration pij is incremented if the pairing increases the compatibility of other points and is decremented if it does not. At convergence, each point mi is assigned to the point argmax_k(pik). While this is a fairly accurate approach and is robust to non-linearities, the iterative nature of the algorithm makes it unsuitable for most applications.
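A toy Python sketch of this relaxation idea follows. The compatibility used here, agreement of inter-point distances (a rigid-motion invariant), stands in for the cost function c(i, j, h, k); both this choice and the update rule are illustrative assumptions, not the published method of [8].

```python
import math

def relaxation_match(set1, set2, iters=10):
    """Toy relaxation labelling: p[i][j] holds the likelihood that point
    i of set1 corresponds to point j of set2.  A pairing (i, j) is
    supported by another pairing (h, k) when the distance between
    points i and h matches the distance between points j and k."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    n1, n2 = len(set1), len(set2)
    p = [[1.0 / n2] * n2 for _ in range(n1)]
    for _ in range(iters):
        q = [[0.0] * n2 for _ in range(n1)]
        for i in range(n1):
            for j in range(n2):
                s = 0.0  # support for (i, j) from all other pairings
                for h in range(n1):
                    if h == i:
                        continue
                    for k in range(n2):
                        if abs(dist(set1[i], set1[h]) -
                               dist(set2[j], set2[k])) < 1.0:
                            s += p[h][k]
                q[i][j] = p[i][j] * (1.0 + s)
        for i in range(n1):  # renormalize each row
            z = sum(q[i]) or 1.0
            p[i] = [v / z for v in q[i]]
    # at convergence, assign each point i to argmax_k p[i][k]
    return [max(range(n2), key=lambda k: p[i][k]) for i in range(n1)]
```

The quadruple loop makes the cost per iteration O(n1² n2²), which is exactly the expense that limits this family of methods in practice.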
The Hough transform [9] approach, also called the transformation clustering approach, reduces the problem of point pattern matching to detecting the most probable transformation in a transformation search space. Ratha et al. [10] proposed a fingerprint matching algorithm based on this approach. In this technique, the search space consists of all possible parameters under the assumed distortion model. For instance, if we assume a rigid transformation, then the search space consists of all possible combinations of translations (Δx, Δy), rotations θ and scales s.
However, to avoid computational complexity, the search space is usually discretized into small cells. The possible transformations therefore form a finite set with

    Δx ∈ {Δx^1, Δx^2, …, Δx^I}
    Δy ∈ {Δy^1, Δy^2, …, Δy^J}
    θ ∈ {θ1, θ2, …, θK}
    s ∈ {s1, s2, …, sL}

A four-dimensional accumulator of size (I × J × K × L) is maintained. Each cell A(i, j, k, l) indicates the likelihood of the transformation parameters (Δx^i, Δy^j, θk, sl).
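The voting over this discretized search space can be sketched in Python as follows. For brevity the scale is fixed at 1, every pairing casts a vote for the transformation that would align it, and the accumulator is a dictionary keyed by quantized cells; all names are hypothetical.

```python
import math
from collections import defaultdict

def hough_vote(set1, set2, thetas, cell=5.0):
    """Accumulate evidence for (dx, dy, theta) under a rigid model with
    unit scale; returns the accumulator cell with the most votes."""
    acc = defaultdict(int)
    for (xi, yi, _) in set1:
        for (xj, yj, _) in set2:
            for theta in thetas:
                c = math.cos(math.radians(theta))
                s = math.sin(math.radians(theta))
                # translation that maps the rotated (xj, yj) onto (xi, yi)
                dx = xi - (c * xj - s * yj)
                dy = yi - (s * xj + c * yj)
                # quantize the translation into accumulator cells
                key = (round(dx / cell), round(dy / cell), theta)
                acc[key] += 1
    return max(acc, key=acc.get)
```

Corresponding point pairs all vote for the same cell, so the true transformation accumulates a sharp peak while spurious pairings scatter their votes.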
To determine the optimal transformation, every possible transformation is tried on each pair of points. The algorithm used is summarized below.

1. For each point mi in fingerprint T:
     For each point m'j in fingerprint I:
       For each θk ∈ {θ1, θ2, …, θK}:
         For each sl ∈ {s1, s2, …, sL}:
           (a) Compute the translation

               (Δx, Δy)^T = (xi, yi)^T − sl · R(θk) · (x'j, y'j)^T,
               where R(θk) = [cos θk  −sin θk; sin θk  cos θk]        …(1)

           (b) Let (Δx^i, Δy^j) be the quantized versions of (Δx, Δy).
           (c) If T{mi} matches m'j, increase the evidence for the cell:
               A[Δx^i, Δy^j, θk, sl] = A[Δx^i, Δy^j, θk, sl] + 1
2. The optimal transformation parameters are obtained as
   (Δx*, Δy*, θ*, s*) = argmax(i,j,k,l) A[Δx^i, Δy^j, θk, sl]

(Figure: explicit alignment, an illustration of the relative alignment using the ridges associated with minutiae mi and m'j.)

References:
[1] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, Prentice Hall, 2004.
[2] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, Springer, 2003, ISBN 0-387-95431-7.
[3] R. Thai, Fingerprint Image Enhancement and Feature Extraction, The University of Western Australia, 2003.
[4] A. Jain, S. Prabhakar, L. Hong, and S. Pankanti, "Filterbank-based fingerprint matching," IEEE Transactions on Image Processing, 9(5):846-859, May 2000.
[5] A. Jain, A. Ross, and S. Prabhakar, "Fingerprint matching using minutiae and texture features," International Conference on Image Processing, pages 282-285, October 2001.
[6] L. Hong, Y. Wang, and A. K. Jain, "Fingerprint image enhancement: Algorithm and performance evaluation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):777-789, August 1998.
[7] L. Brown, "A survey of image registration techniques," ACM Computing Surveys, 24(4):325-376, 1992.
[8] A. Ranade and A. Rosenfeld, "Point pattern matching by relaxation," Pattern Recognition, 12(2):269-275, 1980.
[9] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Communications of the ACM, 15(1):11-15, 1972.
[10] N. K. Ratha, K. Karu, S. Chen, and A. K. Jain, "A real-time matching system for large fingerprint databases," IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8):799-813, 1996.