Essay: Palm print biometric recognition

Essay details:

  • Subject area(s): Information technology essays
  • Published on: February 5, 2016


Recognition of persons from palm prints has been applied in many devices and applications. Palm print recognition is a biometric technology that recognizes a person based on his or her palm print pattern. The palm print serves as a reliable human identifier because the print patterns are not duplicated in other people, even in monozygotic twins. More importantly, the details of these ridges are permanent: the ridge structures form at about the 13th week of human embryonic development and are completed by about the 18th week. The formation then remains unchanged throughout life except in size, and after death, decomposition of the skin occurs last in the area of the palm print.
Compared with other physical biometric characteristics, palm print authentication has several advantages: low-resolution imaging, low intrusiveness, stable line features, and low-cost capturing devices. The palm print covers a wider area than a fingerprint and contains an abundance of useful information for recognition. Apart from the friction ridges, the three principal lines (the dominant lines on the palm) and the wrinkles (the weaker and more irregular lines) can also be used for recognition. In addition, a palm print system does not require a very high-resolution capturing device, as the principal lines and wrinkles can be observed in low-resolution images.
Subspace learning methods are very sensitive to illumination, translation, and rotation variances in image recognition, and thus have not yet obtained promising performance for palm print recognition. In this paper, we propose a new palm print descriptor named the histogram of oriented lines (HOL), a variant of the histogram of oriented gradients (HOG). HOG features are descriptors used in computer vision and image processing for object detection; the technique counts occurrences of gradient orientations in localized portions of an image.
The method is similar to edge orientation histograms, scale-invariant feature transform (SIFT) descriptors, and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy. HOL is not very sensitive to changes of illumination, and it is robust to small transformations because slight translations and rotations produce only small changes in the histogram values. Based on HOL, even simple subspace learning methods can achieve high recognition rates.
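The idea of counting gradient orientations over cells can be sketched as follows. This is a minimal illustration of the HOG idea only, not the full pipeline (block-level contrast normalization is omitted), and the function and parameter names are our own:

```python
import numpy as np

def hog_cell_histograms(image, cell_size=8, n_bins=9):
    """Per-cell histograms of gradient orientation, each pixel voting
    into its orientation bin with its gradient magnitude."""
    image = image.astype(float)
    # np.gradient returns derivatives along axis 0 (rows) then axis 1 (cols).
    gy, gx = np.gradient(image)
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    rows = image.shape[0] // cell_size
    cols = image.shape[1] // cell_size
    hists = np.zeros((rows, cols, n_bins))
    bin_width = 180.0 / n_bins
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * cell_size, (r + 1) * cell_size),
                  slice(c * cell_size, (c + 1) * cell_size))
            bins = (orientation[sl] // bin_width).astype(int) % n_bins
            # Magnitude-weighted vote of every pixel in the cell.
            hists[r, c] = np.bincount(bins.ravel(),
                                      weights=magnitude[sl].ravel(),
                                      minlength=n_bins)
    return hists
```

A 16×16 image therefore yields a 2×2 grid of 9-bin histograms with the default cell size.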
Identifying persons from their palm prints has numerous applications. Identification in this way is straightforward and can be very helpful in finding suspicious persons, although the process still requires some tedious work to track and identify individuals effectively.
Palm print recognition inherently implements many of the same matching characteristics that have allowed fingerprint recognition to be one of the most well-known and best publicized biometrics. Both palm and finger biometrics are represented by the information presented in a friction ridge impression. This information combines ridge flow, ridge characteristics, and ridge structure of the raised portion of the epidermis. The data represented by these friction ridge impressions allows a determination that corresponding areas of friction ridge impressions either originated from the same source or could not have been made by the same source. Because fingerprints and palms have both uniqueness and permanence, they have been used for over a century as a trusted form of identification.
However, palm recognition has been slower in becoming automated due to some restraints in computing capabilities and live-scan technologies. This paper provides a brief overview of the historical progress of and future implications for palm print biometric recognition.
So far, many approaches have been proposed for palm print recognition, and surveys have divided them into several categories. Texture-based approaches use the wavelet transform, the discrete cosine transform, the local binary pattern (LBP), and other statistical methods for texture feature extraction. There are also line-based approaches, as lines, including the principal lines and wrinkles, are essential and basic features of the palmprint. It has been shown that coding-based approaches, including the ordinal code, the robust line orientation code (RLOC), the competitive code, and the binary orientation co-occurrence vector, achieve promising recognition performance. Recently, correlation methods such as the optimal tradeoff synthetic discriminant function filter and the band-limited phase-only correlation filter (BLPOC) have also been successfully adopted for palmprint recognition. In the past two decades, the study of subspace learning techniques has also been very active.
Many different representative subspace learning methods have been proposed. In the early stage of research in this field, almost all representative algorithms, including principal component analysis (PCA) and linear discriminant analysis (LDA), required that the 2-D image data be reshaped into a 1-D vector, which can be referred to as the “image-as-vector” strategy.
In recent years, some important progress has been made in the research of subspace learning. Among these advances, three strategies should be highlighted. The first strategy is the kernel method, which uses a linear classifier algorithm to solve nonlinear problems by mapping the original nonlinear observations into a higher-dimensional space. Kernel PCA (KPCA) and kernel LDA (KLDA) are two representative kernel-based methods. The second strategy is the manifold learning method, which is based on the idea that the data points are actually samples from a low-dimensional manifold embedded in a high-dimensional space. The last strategy is matrix and tensor embedding methods. Matrix embedding methods, such as 2-D PCA (2DPCA) and 2-D LDA (2DLDA), can extract a feature matrix using a straightforward image projection technique. In addition, tensor embedding methods, such as tensor subspace analysis (TSA), concurrent subspaces analysis (CSA), and multi-linear discriminant analysis (MDA), can represent the image ensembles by a higher-order tensor and extract low-dimensional features using multi-linear algebra methods.
The subspace learning methods have been widely applied to biometrics including palmprint recognition. Lu et al. and Wu et al. proposed two methods based on PCA and LDA, respectively. Connie et al. proposed several PCA/LDA/Independent component analysis-based approaches. Hu et al. employed 2-D locality preserving projections (2DLPP), Yang et al. proposed unsupervised discriminant projection, and Zuo et al. presented a post-processed LDA for palmprint recognition. In general, these aforementioned approaches were directly applied to original palmprint images.
In this paper, we refer to the original image as the original representation (OR) of the palmprint. However, subspace learning methods utilizing the OR have one obvious shortcoming: they are sensitive to illumination, translation, and rotation variances, even when these variances are small. Thus, the recognition rates of subspace learning methods are clearly worse than those of coding- and correlation-based methods. To increase the discriminating power, the Gabor wavelet representation (GWR) has often been exploited to improve the performance of subspace learning methods. Ekinci and Aykut proposed a palmprint recognition approach integrating GWR and KPCA.
Pan and Ruan proposed two approaches using Gabor features, in which (2D)2PCA and 2DLPP, respectively, were adopted for dimensionality reduction. However, the drawback of GWR is its high computational cost. Although some researchers exploited the Adaboost algorithm to select a subset of the GWR to improve computational efficiency and recognition performance, the effectiveness of this strategy has not been fully validated. We previously proposed the directional representation (DR) of the palmprint. Using the DR, subspace learning methods are robust to illumination variance; however, their sensitivity to translation and rotation variances remains unsolved. From the above analysis, it can be seen that designing a novel palmprint representation that is robust to slight illumination, translation, and rotation variances is a crucial issue for subspace learning methods. Unfortunately, this issue has not been well discussed or solved until now.
The histogram of oriented gradients (HOG) descriptor was initially proposed by Lowe in his scale-invariant feature transform (SIFT). Dalal and Triggs proposed using HOG features to solve the pedestrian detection problem. Since then, the HOG descriptor has been successfully applied to other object detection and recognition tasks.
However, for palmprint recognition, the gradient exploited in HOG is not a good tool for detecting the line responses and orientations of pixels, because different palm lines have different widths and there are complicated intersections between lines. In this paper, we propose the histogram of oriented lines (HOL) descriptor, a variant of HOG for palmprint recognition, which exploits line-shaped filters such as the real part of the Gabor filter and the modified finite Radon transform (MFRAT) to extract the line responses and orientations of pixels. Compared with the OR, DR, and GWR, HOL has two obvious advantages. First, using oriented lines and histogram normalization, HOL has better invariance to changes of illumination. Second, HOL is robust to transformations because slight translations and rotations produce only small changes in the histogram values. In addition, the line-shaped filters used in HOL can accurately calculate the line response and orientation of each pixel. Owing to these merits, the HOL descriptor helps subspace learning methods achieve promising recognition rates.
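As a rough illustration of the line-response idea, the sketch below replaces the Gabor/MFRAT filters with simple straight-line averaging kernels. This is not the method of the paper, just the underlying intuition (each pixel is assigned the orientation of the line filter that responds most strongly), and all names are our own:

```python
import numpy as np

def line_kernel(size, angle_deg):
    """A line-shaped averaging kernel: ones along a straight line through
    the centre at the given orientation (a crude stand-in for the real
    part of a Gabor filter or an MFRAT line mask)."""
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in range(-c, c + 1):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        k[y, x] = 1.0
    return k / k.sum()

def filter_response(image, kernel):
    """Naive 'same' correlation with zero padding (keeps the sketch
    dependency-free)."""
    size = kernel.shape[0]
    pad = size // 2
    padded = np.pad(image.astype(float), pad)
    out = np.zeros(image.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            w = kernel[dy, dx]
            if w:
                out += w * padded[dy:dy + image.shape[0],
                                  dx:dx + image.shape[1]]
    return out

def pixel_line_orientation(image, size=9,
                           angles=(0, 30, 60, 90, 120, 150)):
    """Assign each pixel the index of the orientation whose line kernel
    responds most strongly. Palm lines are darker than the surrounding
    skin, so the minimum (darkest) average wins."""
    responses = np.stack([filter_response(image, line_kernel(size, a))
                          for a in angles])
    return np.argmin(responses, axis=0)
```

On an image containing a dark horizontal line, pixels on the line are assigned the 0° orientation, since the horizontal kernel averages only dark pixels there while every other kernel mixes in bright skin.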
1.1 What is an image?
An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows.
Figure 1: An image — an array or a matrix of pixels arranged in columns and rows.
In an 8-bit greyscale image each picture element has an assigned intensity that ranges from 0 to 255. A greyscale image is what people normally call a black and white image, but the name emphasizes that such an image also includes many shades of grey.
Figure 2: Each pixel has a value from 0 (black) to 255 (white). The possible range of the pixel values depends on the colour depth of the image; here 8 bit = 256 tones or greyscales.
A normal grayscale image has 8-bit colour depth = 256 grayscales. A “true colour” image has 24-bit colour depth = 3 × 8 bits = 256 × 256 × 256 colours ≈ 16.8 million colours.
Figure 3: A true-colour image assembled from three grayscale images coloured red, green and blue. Such an image may contain up to 16 million different colours.
Some grayscale images have more grayscales, for instance 16 bit = 65,536 grayscales. In principle, three 16-bit grayscale images can be combined to form a colour image with 65,536 × 65,536 × 65,536 = 281,474,976,710,656 possible values.
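The tone counts quoted above follow directly from the bit depth; a quick sanity check in Python:

```python
# Tones per pixel double with every extra bit of colour depth.
for bits in (1, 8, 16, 24, 48):
    print(bits, "bits ->", 2 ** bits, "distinct values")

assert 2 ** 8 == 256                      # ordinary 8-bit grayscale
assert 256 ** 3 == 16_777_216             # 24-bit "true colour"
assert 65536 ** 3 == 281_474_976_710_656  # three 16-bit channels combined
```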
There are two general groups of images: vector graphics (or line art) and bitmaps (pixel-based images). Some of the most common file formats are:
GIF — an 8-bit (256 colour), non-destructively compressed bitmap format. Mostly used for web. Has several sub-standards one of which is the animated GIF.
JPEG — a very efficient (i.e. much information per byte) destructively compressed 24 bit (16 million colours) bitmap format. Widely used, especially for web and Internet (bandwidth-limited).
TIFF — the standard 24 bit publication bitmap format. Compresses non-destructively with, for instance, Lempel-Ziv-Welch (LZW) compression.
PS — Postscript, a standard vector format. Has numerous sub-standards and can be difficult to transport across platforms and operating systems.
PSD — a dedicated Photoshop format that keeps all the information in an image including all the layers.
Pictures are the most common and convenient means of conveying or transmitting information; a picture is worth a thousand words. Pictures concisely convey information about positions, sizes and interrelationships between objects, portraying spatial information that we can recognize as objects. Human beings are good at deriving information from such images because of our innate visual and mental abilities: about 75% of the information received by humans is in pictorial form. An image is digitized to convert it to a form which can be stored in a computer’s memory or on some form of storage media such as a hard disk or CD-ROM. This digitization can be done by a scanner, or by a video camera connected to a frame-grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations.
Image processing operations can be roughly divided into three major categories: Image Compression, Image Enhancement and Restoration, and Measurement Extraction. Image Compression involves reducing the amount of memory needed to store a digital image. Image defects, which could be caused by the digitization process or by faults in the imaging set-up (for example, bad lighting), can be corrected using Image Enhancement techniques. Once the image is in good condition, Measurement Extraction operations can be used to obtain useful information from the image. Some examples of Image Enhancement and Measurement Extraction are given below. The examples shown all operate on 256-level grayscale images: each pixel is stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a white pixel, and values in between represent shades of grey. These operations can be extended to operate on colour images. The examples below represent only a few of the many techniques available for operating on images. Details about the inner workings of the operations have not been given, but some references to books containing this information are given at the end for the interested reader.
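As a minimal example of an enhancement operation on such a 0-255 grayscale image, a linear contrast stretch might look like this (an illustrative sketch; the helper name is our own):

```python
import numpy as np

def stretch_contrast(image):
    """Linear contrast stretch: map the darkest pixel to 0 and the
    brightest to 255, spreading the image over the full 8-bit range."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return image.copy()
    out = (img - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)
```

A low-contrast image whose values span only 100-150 comes back spanning the full 0-255 range.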
Images and pictures
As we mentioned in the preface, human beings are predominantly visual creatures: we rely heavily on our vision to make sense of the world around us. We not only look at things to identify and classify them, but we can scan for differences, and obtain an overall rough feeling for a scene with a quick glance. Humans have evolved very precise visual skills: we can identify a face in an instant; we can differentiate colors; we can process a large amount of visual information very quickly.
However, the world is in constant motion: stare at something for long enough and it will change in some way. Even a large solid structure, like a building or a mountain, will change its appearance depending on the time of day (day or night), the amount of sunlight (clear or cloudy), or the various shadows falling upon it. In this text we are concerned with single images: snapshots, if you like, of a visual scene. Although image processing can deal with changing scenes, we shall not discuss it in any detail here. For our purposes, an image is a single picture which represents something. It may be a picture of a person, of people or animals, or of an outdoor scene, or a microphotograph of an electronic component, or the result of medical imaging. Even if the picture is not immediately recognizable, it will not be just a random blur.
Image processing involves changing the nature of an image in order to either
1. Improve its pictorial information for human interpretation,
2. Render it more suitable for autonomous machine perception.
We shall be concerned with digital image processing, which involves using a computer to change the nature of a digital image. It is necessary to realize that these represent two separate but equally important aspects of image processing, and a procedure which satisfies condition (1), making an image look better, may be the very worst procedure for satisfying condition (2). Humans like their images to be sharp, clear and detailed; machines prefer their images to be simple and uncluttered.
Images and digital images
Suppose we take an image, a photo, say. For the moment, let’s make things easy and suppose the photo is black and white (that is, lots of shades of grey), with no colour. We may consider this image as a two-dimensional function, where the function values give the brightness of the image at any given point. We may assume that in such an image the brightness values can be any real numbers in the range 0.0 (black) to 1.0 (white).
A digital image differs from a photo in that the values are all discrete. Usually they take on only integer values, with the brightness ranging from 0 (black) to 255 (white). A digital image can be considered as a large array of discrete dots, each of which has a brightness associated with it. These dots are called picture elements, or more simply pixels. The pixels surrounding a given pixel constitute its neighborhood. A neighborhood can be characterized by its shape in the same way as a matrix: we can speak, for example, of a 3×3 or a 5×7 neighborhood. Except in very special circumstances, neighborhoods have odd numbers of rows and columns; this ensures that the current pixel is in the centre of the neighborhood.
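A small sketch of a digital image as an array, with an odd-sized neighborhood around a pixel (illustrative code; the helper name is our own):

```python
import numpy as np

def neighborhood(image, row, col, rows=3, cols=3):
    """Return the (rows x cols) neighborhood centred on (row, col).
    Odd dimensions keep the current pixel in the centre; at the image
    borders the window simply shrinks rather than wrapping around."""
    assert rows % 2 == 1 and cols % 2 == 1, "use odd dimensions"
    r0 = max(row - rows // 2, 0)
    c0 = max(col - cols // 2, 0)
    return image[r0:row + rows // 2 + 1, c0:col + cols // 2 + 1]

img = np.arange(25).reshape(5, 5)   # a toy 5x5 "digital image"
print(neighborhood(img, 2, 2))      # the 3x3 window around pixel (2, 2)
```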
Image Processing Fundamentals:
In order for any digital computer processing to be carried out on an image, it must first be stored within the computer in a suitable form that can be manipulated by a computer program. The most practical way of doing this is to divide the image up into a collection of discrete (and usually small) cells, which are known as pixels. Most commonly, the image is divided up into a rectangular grid of pixels, so that each pixel is itself a small rectangle. Once this has been done, each pixel is given a pixel value that represents the color of that pixel. It is assumed that the whole pixel is the same color, and so any color variation that did exist within the area of the pixel before the image was discretized is lost. However, if the area of each pixel is very small, then the discrete nature of the image is often not visible to the human eye.
Other pixel shapes and formations can be used, most notably the hexagonal grid, in which each pixel is a small hexagon. This has some advantages in image processing, including the fact that pixel connectivity is less ambiguously defined than with a square grid, but hexagonal grids are not widely used. Part of the reason is that many image capture systems (e.g. most CCD cameras and scanners) intrinsically discretize the captured image into a rectangular grid in the first instance.
Pixel Connectivity
The notion of pixel connectivity describes a relation between two or more pixels. For two pixels to be connected, they have to fulfill certain conditions on pixel brightness and spatial adjacency.
First, in order for two pixels to be considered connected, their pixel values must both be from the same set of values V. For a grayscale image, V might be any range of graylevels, e.g. V = {22, 23, …, 40}; for a binary image we simply have V = {1}.
To formulate the adjacency criterion for connectivity, we first introduce the notion of a neighborhood. For a pixel p with coordinates (x, y), the set of pixels
N4(p) = {(x+1, y), (x−1, y), (x, y+1), (x, y−1)}
is called its 4-neighbors. Its 8-neighbors are defined as
N8(p) = N4(p) ∪ {(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)}
From this we can infer the definitions of 4- and 8-connectivity:
Two pixels p and q, both having values from a set V, are 4-connected if q is from the set N4(p) and 8-connected if q is from N8(p).
General connectivity can either be based on 4- or 8-connectivity; for the following discussion we use 4-connectivity.
A pixel p is connected to a pixel q if p is 4-connected to q, or if p is 4-connected to a third pixel which itself is connected to q. In other words, two pixels p and q are connected if there is a path from p to q on which each pixel is 4-connected to the next one.
A set of pixels in an image which are all connected to each other is called a connected component. Finding all connected components in an image and marking each of them with a distinctive label is called connected component labeling.
An example of a binary image with two connected components which are based on 4-connectivity can be seen in Figure 1. If the connectivity were based on 8-neighbors, the two connected components would merge into one.
Figure 1 Two connected components based on 4-connectivity.
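The labeling procedure described above can be sketched as a breadth-first flood fill over a binary image (an illustrative implementation; names are our own):

```python
from collections import deque

def label_components(binary, neighbors=4):
    """Connected component labeling on a binary image (list of lists of
    0/1) using 4- or 8-connectivity; returns the label image and the
    number of components found."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if neighbors == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sr in range(h):
        for sc in range(w):
            if binary[sr][sc] and not labels[sr][sc]:
                current += 1                 # start a new component
                labels[sr][sc] = current
                queue = deque([(sr, sc)])
                while queue:                 # flood-fill this component
                    r, c = queue.popleft()
                    for dr, dc in offsets:
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and binary[nr][nc] and not labels[nr][nc]):
                            labels[nr][nc] = current
                            queue.append((nr, nc))
    return labels, current
```

On two diagonally adjacent foreground pixels, 4-connectivity yields two components while 8-connectivity merges them into one, exactly as noted for the figure above.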
Pixel Values
Each of the pixels that represents an image stored inside a computer has a pixel value which describes how bright that pixel is, and/or what color it should be. In the simplest case of binary images, the pixel value is a 1-bit number indicating either foreground or background. For a grayscale image, the pixel value is a single number that represents the brightness of the pixel. The most common pixel format is the byte image, where this number is stored as an 8-bit integer giving a range of possible values from 0 to 255. Typically zero is taken to be black, and 255 is taken to be white. Values in between make up the different shades of gray.
To represent colour images, separate red, green and blue components must be specified for each pixel (assuming an RGB colour space), and so the pixel `value’ is actually a vector of three numbers. Often the three different components are stored as three separate `grayscale’ images known as color planes (one for each of red, green and blue), which have to be recombined when displaying or processing. Multispectral Images can contain even more than three components for each pixel, and by extension these are stored in the same kind of way, as a vector pixel value, or as separate color planes.
The actual grayscale or color component intensities for each pixel may not actually be stored explicitly. Often, all that is stored for each pixel is an index into a colour map in which the actual intensity or colors can be looked up.
Although simple 8-bit integers or vectors of 8-bit integers are the most common sorts of pixel values used, some image formats support different types of value, for instance 32-bit signed integers or floating point values. Such values are extremely useful in image processing as they allow processing to be carried out on the image where the resulting pixel values are not necessarily 8-bit integers. If this approach is used then it is usually necessary to set up a colormap which relates particular ranges of pixel values to particular displayed colors.
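The point about wider pixel types can be seen directly with NumPy: 8-bit arithmetic wraps around on overflow, so intermediate results are usually computed in a wider type and only mapped back to bytes for display (illustrative snippet):

```python
import numpy as np

a = np.array([200], dtype=np.uint8)
b = np.array([100], dtype=np.uint8)

wrapped = a + b                   # stays uint8: 300 wraps to 300 - 256 = 44
widened = a.astype(np.int32) + b  # widen first: the true sum, 300
clipped = np.clip(widened, 0, 255).astype(np.uint8)  # back to a displayable byte

print(wrapped, widened, clipped)
```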
Color scale
The two main color spaces are RGB and CMYK.
The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. RGB uses additive color mixing and is the basic color model used in television or any other medium that projects color with light. It is the basic color model used in computers and for web graphics, but it cannot be used for print production.
The secondary colors of RGB — cyan, magenta, and yellow — are formed by mixing two of the primary colors (red, green, or blue) and excluding the third. Red and green combine to make yellow, green and blue make cyan, and blue and red make magenta. The combination of red, green, and blue at full intensity makes white (Figure 4).
Figure [4]: The additive model of RGB. Red, green, and blue are the primary stimuli for human color perception and are the primary additive colours.
To see how different RGB components combine, here is a selected repertoire of colors and their respective relative intensities for each of the red, green, and blue components:
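Assuming the usual 0-255 channel intensities, the additive mixing described above can be illustrated in a few lines of Python (the helper is our own):

```python
# Additive mixing of the RGB primaries at full intensity.
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def mix(*colours):
    """Add the colours channel by channel, capping at full intensity."""
    return tuple(min(sum(channel), 255) for channel in zip(*colours))

print(mix(RED, GREEN))        # (255, 255, 0)   -> yellow
print(mix(GREEN, BLUE))       # (0, 255, 255)   -> cyan
print(mix(RED, BLUE))         # (255, 0, 255)   -> magenta
print(mix(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```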
Typical uses of MATLAB include:
– Math and computation
– Algorithm development
– Data acquisition
– Modeling, simulation, and prototyping
– Data analysis, exploration, and visualization
– Scientific and engineering graphics
– Application development, including graphical user interface building
Some applications:
Image processing has an enormous range of applications; almost every area of science and technology can make use of image processing methods. Here is a short list just to give some indication of the range of image processing applications.
1. Medicine
• Inspection and interpretation of images obtained from X-rays, MRI or CAT scans,
• Analysis of cell images and of chromosome karyotypes.
2. Agriculture
• Satellite/aerial views of land, for example to determine how much land is being used for different purposes, or to investigate the suitability of different regions for different crops,
• Inspection of fruit and vegetables, distinguishing good and fresh produce from old.
3. Industry
• Automatic inspection of items on a production line,
• Inspection of paper samples.
4. Law enforcement
• Fingerprint analysis,
• Sharpening or de-blurring of speed-camera images.
Aspects of image processing:
It is convenient to subdivide different image processing algorithms into broad subclasses. There are different algorithms for different tasks and problems, and often we would like to distinguish the nature of the task at hand.
• Image enhancement: This refers to processing an image so that the result is more suitable for a particular application.
Examples include:
– sharpening or de-blurring an out-of-focus image,
– highlighting edges,
– improving image contrast, or brightening an image,
– removing noise.
• Image restoration. This may be considered as reversing the damage done to an image by a known cause, for example:
– removing blur caused by linear motion,
– removing optical distortions,
– removing periodic interference.
• Image segmentation. This involves subdividing an image into constituent parts, or isolating certain aspects of an image, for example:
– finding circles, or other particular shapes, in an image,
– in an aerial photograph, identifying cars, trees, buildings, or roads.
These classes are not disjoint; a given algorithm may be used for both image enhancement and image restoration. However, we should be able to decide what it is that we are trying to do with our image: simply make it look better (enhancement), or remove damage (restoration).
An image processing task
We will look in some detail at a particular real-world task, and see how the above classes may be used to describe the various stages in performing this task. The job is to obtain, by an automatic process, the postcodes from envelopes. Here is how this may be accomplished:
• Acquiring the image: First we need to produce a digital image from a paper envelope. This can be done using either a CCD camera, or a scanner.
• Preprocessing: This is the step taken before the major image processing task. The problem here is to perform some basic tasks in order to render the resulting image more suitable for the job to follow. In this case it may involve enhancing the contrast, removing noise, or identifying regions likely to contain the postcode.
• Segmentation: Here is where we actually get the postcode; in other words we extract from the image that part of it which contains just the postcode.
• Representation and description: These terms refer to extracting the particular features which allow us to differentiate between objects. Here we will be looking for curves, holes and corners which allow us to distinguish the different digits which constitute a postcode.
• Recognition and interpretation: This means assigning labels to objects based on their descriptors (from the previous step), and assigning meanings to those labels. So we identify particular digits, and we interpret a string of four digits at the end of the address as the postcode.
Authors: S. Ba and J. M. Odobez
Year: 2007
In this work, biometrics-based personal identification is regarded as an effective method for automatically recognizing, with high confidence, a person’s identity. This paper presents a new biometric approach to online personal identification using palmprint technology. In contrast to existing methods, our online palmprint identification system employs low-resolution palmprint images to achieve effective personal identification. The system consists of two parts: a novel device for online palmprint image acquisition and an efficient algorithm for fast palmprint recognition. A robust image coordinate system is defined to facilitate image alignment for feature extraction. In addition, a 2-D Gabor phase encoding scheme is proposed for palmprint feature extraction and representation. The experimental results demonstrate the feasibility of the proposed system.
A palm print refers to an image acquired of the palm region of the hand. It can be either an online image (i.e. taken by a scanner or CCD) or an offline image, where the image is taken with ink and paper. The palm itself consists of principal lines, wrinkles (secondary lines) and epidermal ridges. It differs from a fingerprint in that it also contains other information, such as texture, indents and marks, which can be used when comparing one palm to another. Palm prints can be used for criminal, forensic or commercial applications. Palm prints, typically from the butt of the palm, are often found at crime scenes as the result of the offender’s gloves slipping during the commission of the crime, thus exposing part of the unprotected hand.
Most previous research in the area of personal authentication using the palmprint as a biometric trait has concentrated on enhancing accuracy, yet resistance to attacks is also a centrally important feature of any biometric security system. In this paper, we address three relevant security issues: template re-issuance (also called cancellable biometrics), replay attacks, and database attacks. We propose to use a random orientation filter bank (ROFB) as a feature extractor to generate noise-like feature codes, called Competitive Codes, for template re-issuance. Secret messages are hidden in templates to prevent replay and database attacks; this technique can be regarded as template watermarking. A series of analyses is provided to evaluate the security levels of these measures.
Researchers have proposed various preprocessing, feature extraction, matching, and classification algorithms for on-line palmprint verification and identification. Most of these have concentrated on improving accuracy by developing new feature extraction and matching algorithms and by combining palmprint and hand geometric features. Others have proposed hierarchical and classification algorithms to alleviate the computational cost of large-database identification. Although accuracy and matching speed are important, the security of palmprint systems must not be ignored. In this paper, we use the definition of cancelable biometrics proposed by Ratha and his coworkers, who, to the best of our knowledge, were the first to invent cancelable biometrics. Ratha et al. draw our attention to the basic vulnerabilities of biometric systems: the sensor collects biometric signals, such as fingerprint images, and transmits them to the feature extractor through a data link; the feature extractor extracts features, such as minutiae points in fingerprint images, from the biometric signals and transmits them to the matcher; the matcher compares the features against templates from the database to compute a matching score or reach a decision; finally, the matching score or decision is sent to the application.
This procedure can be improved by operating online. An online palmprint identification system employs low-resolution palmprint images; however, this increases the complexity of the process, making it unsuitable for deployment in many settings.
Authors: D. S. Huang, W. Jia, and D. Zhang
Year: 2008
A novel palmprint verification approach based on principal lines is presented. In the feature extraction stage, a modified finite Radon transform is proposed, which can extract principal lines effectively and efficiently even when the palmprint images contain many long and strong wrinkles. In the matching stage, a matching algorithm based on pixel-to-area comparison is devised to calculate the similarity between two palmprints; it has shown good robustness to slight rotations and translations of palmprints. Experimental verification results on the Hong Kong Polytechnic University Palmprint Database show that the discriminability of principal lines is strong. The work also presents a novel algorithm for the automatic classification of low-resolution palmprints. First, the principal lines of the palm are characterized by their position and thickness. The potential beginnings ("line initials") of the principal lines are extracted, and then, based on these line initials, a recursive process is applied to extract the principal lines in their entirety. Finally, palmprints are classified into six categories according to the number of principal lines and the number of their intersections. Computer-aided personal recognition is becoming increasingly important in our information society, and in this field biometrics is one of the most important and reliable methods. The most widely used biometric feature is the fingerprint, and the most reliable is the iris. However, it is very difficult to extract small unique features (known as minutiae) from unclear fingerprints, and iris input devices are very expensive. Palmprints, by contrast, are mainly composed of palm lines and ridges.
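The pixel-to-area comparison described above can be illustrated with a short sketch: a line pixel in one binary line image counts as matched if any line pixel falls within a small neighbourhood at the same position in the other image, which is what buys tolerance to slight translation and rotation. The function below is an assumed, simplified rendering of that idea, not the paper's exact algorithm.

```python
import numpy as np

def pixel_to_area_match(lines_a, lines_b, radius=2):
    """Pixel-to-area similarity between two binary principal-line images.

    A line pixel in lines_a counts as matched if any line pixel exists
    within a (2*radius+1)^2 neighbourhood of the same position in lines_b.
    Returns the fraction of matched line pixels in lines_a.
    """
    h, w = lines_a.shape
    # Dilate lines_b: take the maximum of each neighbourhood window.
    padded = np.pad(lines_b, radius)
    dilated = np.zeros_like(lines_b)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            dilated = np.maximum(dilated, padded[dy:dy + h, dx:dx + w])
    hits = np.logical_and(lines_a, dilated).sum()
    total = lines_a.sum()
    return hits / total if total else 0.0
```

In practice the score would be computed symmetrically (A against B and B against A) and the two fractions combined, so that a sparse line image cannot trivially match a dense one.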
A palmprint, as a relatively new biometric feature, has several advantages compared with other currently available features: palmprints contain more information than fingerprints, so they are more distinctive; palmprint capture devices are much cheaper than iris devices; palmprints contain additional distinctive features, such as principal lines and wrinkles, which can be extracted from low-resolution images; and, last, by combining all of the features of a palm, such as palm geometry, ridge and valley features, and principal lines and wrinkles, it is possible to build a highly accurate biometric system. Given these advantages, palmprints have been investigated extensively for automated personal authentication in recent years. Duta et al. extracted points on palm lines (called "feature points") from online palmprint images for verification. Zhang et al. used 2-D Gabor filters to extract texture features from low-resolution palmprint images and employed these features to implement a highly accurate online palmprint recognition system. Han et al. used Sobel and morphological operations to extract line-like features from palmprints. Kumar et al. integrated line-like features and hand-geometric features for personal verification. All of these palmprint authentication methods require that the input palmprint be matched against a large number of palmprints in a database, which is very time consuming. To reduce the search time and computational complexity, it is desirable to classify palmprints into several categories, so that the input palmprint need be matched only against the palmprints in its corresponding category, which is a subset of the database. Like fingerprint classification, palmprint classification is a coarse-level matching of palmprints. Shu et al. used the orientation property of the ridges on palms to classify online high-resolution palmprints into six categories.
Obviously, this classification method is unsuitable for low-resolution palmprints, because it is impossible to obtain the orientation of the ridges from low-resolution images. We instead classify palmprints by taking into account their most visible and stable features, i.e., the principal lines. Most palmprints show three principal lines: the heart line, the head line, and the life line. We describe how these principal lines may be extracted according to their characteristics, which then allows us to classify palmprints into six categories by the number of principal lines and the number of their intersections.
As the first attempt to classify low-resolution palmprints, this paper presents a novel algorithm for palmprint classification using principal lines. Principal lines are defined and characterized by their position and thickness. A set of directional line detectors is devised for principal-line extraction. Using these detectors, the potential line initials of the principal lines are extracted; then, based on the extracted line initials, the principal lines are extracted in their entirety using a recursive process.
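A directional line detector of the kind mentioned above can be sketched as a small mask with positive weights along a line of a given orientation and weak negative weights elsewhere, so that convolving it with the image responds strongly where a line of that orientation passes. The construction below is illustrative; the paper devises its own detector set tuned to principal-line position and thickness.

```python
import numpy as np

def directional_line_detectors(size=9, n_directions=4):
    """Build a small set of directional line-detector masks.

    Each mask has positive weights along a line through the centre at
    angle k*pi/n_directions and small negative weights elsewhere, so its
    convolution response peaks on lines of that orientation.
    """
    masks = []
    half = size // 2
    for k in range(n_directions):
        theta = k * np.pi / n_directions
        # Weak negative background so off-line regions suppress the response.
        mask = -np.ones((size, size)) / (size * size)
        for t in range(-half, half + 1):
            y = half + int(round(t * np.sin(theta)))
            x = half + int(round(t * np.cos(theta)))
            mask[y, x] = 1.0 / size
        masks.append(mask)
    return masks
```

Running all masks over the palm image and thresholding the per-direction maxima yields candidate line pixels, from which the recursive tracing step can grow the full principal lines.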
After extracting the principal lines, we present rules for palmprint classification. The palmprints are classified into six categories according to the number of principal lines and the number of their intersections. Designing such systems, however, requires highly capable capture equipment.
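The coarse classification step can be sketched as a simple rule on the two extracted counts. Note that the bucket boundaries below are placeholders for illustration; the concrete definitions of the six categories are those given in the cited paper.

```python
def classify_palmprint(n_lines, n_intersections):
    """Coarse palmprint classification from principal-line counts.

    Illustrative rule: bucket palmprints first by the number of extracted
    principal lines, then by how many times those lines intersect.
    Returns a category index in 1..6.
    """
    if n_lines <= 2:
        return 1                               # fewer than three principal lines
    if n_lines == 3:
        return 2 + min(n_intersections, 3)     # categories 2-5 by intersections
    return 6                                   # more than three line structures
```

At identification time the probe is matched only against templates in its own category, which is what shrinks the search space relative to exhaustive matching.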
Authors: D. Cai, X. He, and J. Han
Year: 2005
Linear dimensionality reduction techniques have been widely used in pattern recognition and computer vision, for tasks such as face recognition and image retrieval. Typical methods include Principal Component Analysis (PCA), which is unsupervised, and Linear Discriminant Analysis (LDA), which is supervised. Both treat an image as a long vector of pixels, and such a vector representation fails to take into account the spatial locality of pixels in the image; an image is intrinsically a matrix. In this paper, we consider an image as a second-order tensor and propose two novel tensor subspace learning algorithms, called TensorPCA and TensorLDA. Our algorithms explicitly take into account the relationship between the column vectors of the image matrix and the relationship between its row vectors. We compare our proposed approaches with PCA and LDA for face recognition on three standard face databases. Experimental results show that tensor analysis achieves better recognition accuracy while being much more efficient. Tensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Vectors and scalars themselves are also tensors. A tensor can be represented as a multi-dimensional array of numerical values. The order (also called the degree) of a tensor is the dimensionality of the array needed to represent it, or, equivalently, the number of indices needed to label a component of that array. For example, a linear map can be represented by a matrix (a two-dimensional array) and is therefore a 2nd-order tensor; a vector can be represented as a one-dimensional array and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Tensors are used to represent correspondences between sets of geometric vectors.
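The core idea — learn separate projections for the column space and the row space of the image matrix instead of flattening it — can be sketched as below. This is a generic second-order PCA sketch under stated assumptions (eigendecomposition of the column- and row-covariance matrices), not necessarily the authors' exact TensorPCA algorithm.

```python
import numpy as np

def tensor_pca(images, k_row=8, k_col=8):
    """Second-order (2-D) PCA sketch on a stack of image matrices.

    images: array of shape (n, H, W).
    Returns (U, V, mean) such that an image A reduces to
    U.T @ (A - mean) @ V, of shape (k_row, k_col), preserving the
    row/column structure that vectorised PCA discards.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # Covariance of column vectors (H x H) and of row vectors (W x W).
    cov_col = sum(a @ a.T for a in centered) / len(images)
    cov_row = sum(a.T @ a for a in centered) / len(images)
    # Leading eigenvectors give the two projection bases
    # (eigh returns eigenvalues in ascending order).
    _, u = np.linalg.eigh(cov_col)
    _, v = np.linalg.eigh(cov_row)
    return u[:, -k_row:], v[:, -k_col:], mean

def project(a, U, V, mean):
    """Reduce one image matrix with the learned bases."""
    return U.T @ (a - mean) @ V
```

The efficiency gain comes from the eigenproblem sizes: for H x W images, 2-D PCA diagonalises an H x H and a W x W matrix, whereas vectorised PCA works with an (H*W) x (H*W) covariance.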
For example, the Cauchy stress tensor T takes a direction v as input and produces the stress T(v) on the surface normal to this vector as output, thus expressing a relationship between these two vectors. Because they express a relationship between vectors, tensors themselves must be independent of any particular choice of coordinate system. Finding the representation of a tensor in terms of a coordinate basis results in an organized multidimensional array representing the tensor in that basis or frame of reference. The coordinate independence of a tensor then takes the form of a "covariant" transformation law that relates the array computed in one coordinate system to that computed in another. The precise form of the transformation law determines the type (or valence) of the tensor.
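The transformation law for a 2nd-order tensor under a rotation R of the basis is T' = R T R^T, and the relation the tensor encodes is the same in either frame. A small numerical check of this (with an arbitrary symmetric, stress-like matrix chosen for illustration):

```python
import numpy as np

# A 2nd-order tensor (a symmetric, stress-like matrix) in one frame,
# and a rotation by 30 degrees to a second frame.
T = np.array([[2.0, 0.5],
              [0.5, 1.0]])
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Covariant transformation law for a rank-2 tensor under the basis change.
T_new = R @ T @ R.T

# Frame independence: rotating the input vector and evaluating in the new
# frame agrees with evaluating in the old frame and rotating the output.
v = np.array([1.0, 2.0])
assert np.allclose(T_new @ (R @ v), R @ (T @ v))
```

Higher-order tensors pick up one factor of R (or its inverse, for contravariant indices) per index, which is exactly what the "type (or valence)" of the tensor records.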
