
Face Recognition Using PCA: A Literature Review

127, 2005) of the covariance matrix of a training set of facial images (Carts-Power, pg. 127, 2005). This method converts the facial data into eigenvectors projected into Eigenspace (a subspace) (Carts-Power, pg. 127, 2005), allowing considerable "data compression because surprisingly few Eigenvector terms are needed to give a fair likeness of most faces. The method catches the imagination because the vectors form images that look like strange, bland human faces. The projections into Eigenspace are compared and the nearest neighbors are assumed to be matches." (Carts-Power, pg. 127, 2005) The differences between the algorithms are reflected in the output of the resulting match or non-match of real facial features against the biometric database or against representations generated algorithmically. The variances produced by the Eigenspace projection or by PCA will differ according to how the approach is used. Eigenspace works on the premise of vectors, contours, and gradients, which are essentially geophysical descriptors used in earth science technology. Indeed, the human face resembles a geophysical landscape, much like an arid desert with hills, valleys, and peaks.
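
As a concrete illustration of the eigenface idea described above, the following minimal Python sketch projects aligned face vectors into an Eigenspace and matches a probe image by nearest neighbor. It is an assumed example for clarity only; the array shapes, the choice of k, and the use of NumPy's SVD are not taken from the reviewed sources.

```python
import numpy as np

def train_eigenfaces(faces, k=20):
    """faces: (n_images, n_pixels) array of flattened, aligned face images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Eigenvectors of the covariance matrix via SVD; rows of vt are the "eigenfaces".
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                   # keep only the top-k eigenvectors
    weights = centered @ eigenfaces.T     # project the training set into Eigenspace
    return mean_face, eigenfaces, weights

def match(probe, mean_face, eigenfaces, weights):
    """Return the index of the nearest training face in Eigenspace."""
    w = (probe - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```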

Many regard the principal component analysis (PCA) or eigenface approach (Liu, Chen, Lu, Chen, 2006) as highly beneficial. As such, the industry relied early on upon "PCA-based face recognition systems" (Liu, Chen, Lu, Chen, 2006). The PCA approach is able to locate variances in the details and intricacies of the "scaled and aligned human face, but it will degrade dramatically for not-aligned faces." (Liu, Chen, Lu, Chen, 2006) To prevail over this limitation, Liu et al. present what they regard as "a better method named independent component analysis (ICA)" (Liu, Chen, Lu, Chen, 2006), developed to find "basis functions which are local and give good representation of face images." (Liu, Chen, Lu, Chen, 2006)

Parametric modeling of facial sub-features obscured by shading poses a further problem. The use of PCA to address this issue (Zhao, Chellappa, Rosenfeld & Phillips) provides a mathematical framework for identifying the hidden parameters in regions where shading obscures the features.

Principal Component Analysis (PCA) (Zhao, Chellappa, Rosenfeld & Phillips) is recommended as an enabler for rendering a solution to the "parametric shape-from-shading (SFS) problem." (Zhao, Chellappa, Rosenfeld & Phillips) "An eigen-head approximation of a 3D head" "was received after training on about 300 laser-scanned range images of real human heads." (Zhao, Chellappa, Rosenfeld & Phillips) The SFS quandary described by Zhao et al. thus becomes a "parametric problem" (Zhao et al.); however, "a constant albedo is still assumed." (Zhao et al.) "This assumption does not hold for most real face images and it is one of the reasons why most SFS algorithms fail on real face images. To overcome the constant albedo issue, [Zhao et al.] suggest including the use of a varying albedo reflectance model." (Zhao, Chellappa, Rosenfeld & Phillips)
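
To make the albedo assumption concrete, the short sketch below evaluates a simple Lambertian reflectance model in Python, first with a constant albedo and then with a per-pixel (varying) albedo. The function name, array shapes, and numeric values are illustrative assumptions, not details of Zhao et al.'s method.

```python
import numpy as np

def lambertian_intensity(normals, light_dir, albedo):
    """normals: (H, W, 3) unit surface normals; light_dir: (3,); albedo: scalar or (H, W)."""
    shading = np.clip(normals @ light_dir, 0.0, None)   # n . s, clamped at zero
    return albedo * shading                              # constant or per-pixel albedo

H, W = 4, 4
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0                                    # flat patch facing the camera
light = np.array([0.3, 0.3, 0.9])
light /= np.linalg.norm(light)

# Constant-albedo assumption (what most SFS algorithms make):
constant = lambertian_intensity(normals, light, albedo=0.8)
# Varying-albedo reflectance model (albedo differs per pixel):
varying = lambertian_intensity(
    normals, light, albedo=np.random.default_rng(0).uniform(0.2, 1.0, (H, W)))
```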

Despite the strong results achieved with PCA, the approach is now understood to possess the "disadvantage of being computationally expensive and complex with the increase in database size" (Neerja, Walia, 2008), since every pixel of the entire image is required to generate the representation needed "to match the input image with all others in the database." (Neerja, Walia, 2008)

Neerja & Walia put forth a "new PCA-based face recognition approach" (Neerja, Walia, 2008) "using the geometry and symmetry of faces, which extract the features using fast fuzzy edge detection to locate the vital feature points on eyes, nose and mouth exactly and quickly." (Neerja, Walia, 2008) For each feature, a subgroup repository of database images is created. "During recognition only the images falling in same group as test image, will be loaded as image vectors in covariance matrix of PCA for comparison." (Neerja, Walia, 2008)
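
A rough sketch of this grouping idea follows, under stated assumptions: the landmark names, the eye-distance to eye-mouth-distance ratio used as the grouping key, and the number of bins are illustrative choices, not details given by Neerja & Walia.

```python
import numpy as np

def geometry_key(landmarks, n_bins=5):
    """landmarks: dict with 'left_eye', 'right_eye', 'mouth' as (x, y) points.
    Returns a coarse bin index based on the eye-distance / eye-mouth-distance ratio."""
    eye_dist = np.linalg.norm(np.subtract(landmarks["left_eye"], landmarks["right_eye"]))
    eye_mid = np.mean([landmarks["left_eye"], landmarks["right_eye"]], axis=0)
    mouth_dist = np.linalg.norm(np.subtract(landmarks["mouth"], eye_mid))
    ratio = eye_dist / (mouth_dist + 1e-9)
    return min(int(ratio * n_bins), n_bins - 1)

def build_groups(database):
    """database: list of (image_vector, landmarks). Buckets images by geometry key."""
    groups = {}
    for vec, lm in database:
        groups.setdefault(geometry_key(lm), []).append(vec)
    return groups

# At recognition time, only the group whose key matches the test image's key
# would be loaded into the PCA covariance matrix for comparison.
```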

The aforementioned approach is expensive; however, governmental agencies including the FBI and CIA, and departments such as the DoE, DoD, and the Secret Service, will use these approaches to ensure that the SFS problem is eliminated. Additional algorithms are described below.

"The Fisherfaces algorithm,...


127, 2005) This method is akin to the PCA application but incorporates additions that make the differences between faces more evident. (Carts-Power, pg. 127, 2005) "Instead of looking for the nearest neighbor in a subspace (like PCA and LDA), the Bayesian intrapersonal/extrapersonal classifier looks at the distance between two face images." (Carts-Power, pg. 127, 2005) Each image difference is assigned to one of two classes, according to whether it is derived from "two images of the same subject or derived from images of different subjects." (Carts-Power, pg. 127, 2005) Each of these classes is modeled as a Gaussian distribution (Carts-Power, pg. 127, 2005). The Gaussian distributions can yield layered results without obfuscation. (Carts-Power, pg. 127, 2005)
The LDA algorithm is similar to PCA but accentuates the dissimilarities between the compared faces. The key difference of LDA lies in its framework of searching for class differences within the subspace. The Gaussian approach mathematically examines the variance of the squared deviations to determine the spatial distance between landmarks. Such an approach is thought to be rather expensive as well. A further look at ICA reveals the following.
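
The contrast drawn above between PCA and LDA can be sketched with scikit-learn, which is an assumed tooling choice rather than anything named by the cited authors. PCA keeps the directions of largest variance, LDA then searches that reduced subspace for the directions that best separate subjects, and a nearest-neighbor step performs the final match.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_pca_lda_matcher(n_pca=100):
    """Returns a pipeline for face vectors X (n_images, n_pixels) with subject labels y."""
    return make_pipeline(
        PCA(n_components=n_pca),              # variance-preserving reduction
        LinearDiscriminantAnalysis(),         # discriminative (between-class) subspace
        KNeighborsClassifier(n_neighbors=1),  # nearest neighbor in that subspace
    )

# Usage (assumed data): matcher = build_pca_lda_matcher()
# matcher.fit(X_train, y_train); predicted_subject = matcher.predict(X_probe)
```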

"Recently, there has been an increasing interest in statistical models for learning data representations. A very popular method for this task is independent component analysis (ICA), the concept of which was initially proposed by Comon

. The ICA algorithm was initially proposed to solve the blind source separation (BSS) problem i.e. given only mixtures of a set of underlying sources, the task is to separate the mixed signals and retrieve the original sources. Neither the mixing process nor the distribution of sources is known in the process. A simple mathematical representation of the ICA model is as follows. Consider a simple linear model which consists of N. sources of T. samples i.e. s I =[s I (1)...s I (t)...s I (T)]. The -symbol t here represents time, but it may represent some other parameter like space. M weighted mixtures of the sources are observed as X, where XI =[Xi (1)... Xi (t)... Xi (T)]. This can be represented as -." (Acharya, Panda, 2008)
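
A minimal blind source separation sketch of the X = AS model follows, using synthetic sources and scikit-learn's FastICA; the number of sources, the signal shapes, and the random mixing matrix are made up for illustration and are not drawn from Acharya & Panda.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
T = 2000                                    # samples per source
t = np.linspace(0, 8, T)

# N = 3 underlying sources (unknown to the algorithm).
S = np.column_stack([np.sin(2 * t), np.sign(np.cos(3 * t)), rng.laplace(size=T)])

# M = 3 observed mixtures X = S A^T, with an unknown mixing matrix A.
A = rng.normal(size=(3, 3))
X = S @ A.T

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)                # estimated independent sources
A_est = ica.mixing_                         # estimated mixing matrix
```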

"ICA is a new signal processing technique for extracting independent variables from a mixture of signals and its basic idea is to represent a set of random variables using basic functions, where the components are statistically independent or as independent as possible. It has become one recent powerful technique in the field of image processing and pattern recognition. The concept of ICA can be seen as a generational of principal component analysis (PCA). PCA tries to obtain a representation of the input signals based on uncorrelated variables, where ICA provides a representation based on statistically independent variables." (Liu, Chen, Lu, Chen, 2006)

"Generally, ICA is performed on multidimensional data. This data may be corrupted by noise, and several original dimensions of data may contain only noise. So if ICA is performed on a high dimensional data, it may lead to poor results due to the fact that such data contain very few latent components. Hence, reduction of the dimensionality of the data is a preprocessing technique that is carried prior to ICA.

ICA is a rather comprehensive technique that incorporates a learning-based approach and also searches against a database. Through the use of statistically independent variables, a facial shape is created via ICA, whereas PCA relies on uncorrelated variables to distinguish facial dissimilarities.

"Thus, finding a principal subspace where the data exist reduces the noise. Besides, when the number of parameters is large compared to the number of data points, the estimation of those parameters becomes very difficult and often leads to over-learning. Over-learning in ICA typically produces estimates of the independent components that have a single spike or bump and are practically zero everywhere else. This is because in the space of source signals of unit variance, nongaussianity is more or less maximized by such spike/bump signals." (Acharya, Panda, 2008)
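
The preprocessing step Acharya & Panda describe, PCA before ICA, might be sketched as follows; the component count, the whitening choice, and the idea of mapping the mixing directions back to pixel space to visualize "ICA faces" are illustrative assumptions rather than their stated procedure.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def pca_then_ica(face_vectors, n_components=40):
    """face_vectors: (n_images, n_pixels). Returns ICA sources and pixel-space basis images."""
    pca = PCA(n_components=n_components, whiten=True)
    reduced = pca.fit_transform(face_vectors)      # principal subspace; noise reduced
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(reduced)           # statistically independent components
    # Map the ICA mixing directions back to pixel space for visualization.
    basis_images = pca.inverse_transform(ica.mixing_.T)
    return sources, basis_images
```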

The use of differing algorithms can provide
