
Essay: Detect and Recognize Occluded Faces For Real-Time Results w/ Image Processing

Essay details:

  • Published: 1 April 2019
  • Last Modified: 23 July 2024
  • Words: 2,439 (approx)



CHAPTER 1:

Introduction

1.1 Problem Summary

The main problem faced in face recognition today arises when 'occlusion' enters the picture. Cameras and CCTVs can detect a face (whether occluded or not), but the real difficulty comes during the recognition stage: when a face is occluded (pose variation, ageing, or a face with spectacles, scarves, etc.), it is hard, and sometimes impossible, to recognize that particular face.

1.2 Aim and Objective of the Project

AIM: Real-time occlusion detection using image processing.

OBJECTIVE: Our objective is to detect an occluded face and recognize it against a database with the highest possible accuracy.

Now the question arises: what is occlusion? Occlusion in an image refers to a hindrance in the view of an object. Our main aim is therefore to detect faces occluded by spectacles, dirt, scarves, and the like, and to recognize them.

The main objective of this project is to implement a face recognition system that verifies the occluded image against the database using LPOG (Local Patterns of Gradients), an important and useful feature extraction technique in face recognition systems.

1.3 Problem Specification

In the human body, the face plays an important role in conveying identity. A person can identify and recognize a large number of faces over a lifetime, and can recall another person's name simply on seeing his or her face.

Face recognition is a system built on computer programs that analyse images of human faces for the purpose of identifying them. Such a program takes a facial image, measures characteristics such as the length of the nose, the angle of the jaw, and the distance between the eyes, and stores them in a unique file.

Face recognition is a computer-based security system that can automatically detect and identify human faces.

Face recognition is an attractive and challenging research field in computer vision and software systems, and one of the most successful applications of image processing (IP), pattern recognition (PR), and machine learning (ML).

There are many different factors that affect face recognition accuracy, which are listed below:

• Facial expression

• Aging condition

• Low resolution

• Image orientation

• Pose Variations

• Occlusion

Our main focus in this project is the "occlusion" challenge.

Occlusion is defined as any unwanted object in an image that disturbs the matching (recognition) process.

Occlusion may be intentional or unintentional, and can be classified into the following two categories:

1. Natural occlusion: an unintentional obstruction of the view between two image objects.

2. Synthetic occlusion: an artificial blockade covering the image view, intended to hide identity.

In real-time face recognition applications, a face image may become occluded by accessories or obstacles such as:

• Face behind Fence

• Hand on face

• Sunglasses

• Scarf

• Beards

• Hat, etc.

1.4 Literature Review

While working on this project, "Real-time occlusion detection using image processing", we referred to many patents as well as papers. These sources introduced us to several new areas related to our project. The key points that helped us are as follows:

Capturing people in video sequences is one component of smart surveillance systems. Ideally, for every person entering or leaving the scene, a single face keyframe is generated and stored in the database. [1]

Skin tone ratio in the head region: the frontal view of a human head usually contains more skin-tone pixels than the back (rear-facing) view, so a higher skin-tone ratio in the head region may indicate a better snapshot. Target trajectory: from the footprint trajectory of the target, it may be determined whether the person is moving towards or away from the camera; moving towards the camera may provide a much better snapshot than moving away from it. [2]

Size of the head: the bigger the image of the human head, the more detail it may provide about the human target. For the algorithm to work in practice, a set of training illuminations must be found that is indeed sufficient to linearly interpolate a variety of practical indoor and outdoor illuminations. To this end, a system has been designed that can illuminate a subject from all directions above horizontal while acquiring frontal images of the subject. [3]

Face recognition usually employs various statistical techniques to derive appearance-based models for classification. Some of these techniques include, but are not limited to, Principal Component Analysis (hereinafter referred to as PCA); Fisher Linear Discriminant (hereinafter referred to as FLD), which is also known as Linear Discriminant Analysis (hereinafter referred to as LDA); Independent Component Analysis (hereinafter referred to as ICA); Local Feature Analysis (hereinafter referred to as LFA); and Gabor and bunch graphs. [4]

1.5 Work Plan

Table 1: Work plan

Software Development Phase    Days
Requirements                    20
Design                          30
Coding                          55
Test                            45
Maintenance                     30
TOTAL                          180

Figure 1: Work plan

1.6 System Requirement Specification

Hardware Requirements

• Processor: 2.5 GHz or above
• RAM: 4 GB
• Disk space: 20 GB

Software Requirements

• OS: Windows 7/8/10
• Tool: MATLAB 2016 or above

CHAPTER 2:

ANALYSIS METHODOLOGY

Every minor detail needs to be studied in order to complete any project, and design serves exactly this purpose. The figures below illustrate this.

2.1 Data Flow Diagram

Figure 2: DFD Level 0.0

2.2 Use Case Diagram

2.3 Sequence Diagram

2.4 Activity Diagram

2.5 AEIOU Summary Sheet

This is the first and most basic sheet of any project. It records small but important things that are the building pillars of any project, such as the users, activities, and interactions at the location.

2.6 Observation Matrix canvas

The sheet included here is the observation matrix canvas, better known as the 'empathy summary canvas'. It covers topics ranging from daily interactions and activities to the main objective of the project.

2.7 Ideation Canvas

This sheet, the 'ideation canvas', includes points such as the users of the system, the activities and locations involved, and the props or tools used during the project.

2.8 Product development canvas

This is arguably the most important sheet; it mainly covers the objective of the project, its features and functions, and its pros and cons.

2.9 Business Model Canvas

The business model canvas is used to validate the market significance of products and services, which in this case are of a technological nature. Technology projects are often solutions or processes that solve a technical problem. However, bringing such solutions to market also requires that they be designed to overcome not just technical barriers but also market- and business-related barriers: costs, customer reach, collaborations, and the practical limits of a team's initial capacity.

Thus the business model canvas can be used to visualize such market problems and customer expectations. This exercise increases the market potential and penetration of technology goods and services, making them more effective in the market.

Key Partners:

It is always recommended to map Key Partners to Key Activities. If an activity is key, it is still part of your business model; this mapping denotes which partners handle which Key Activities for you.

o Banks

o Airports

o Border Areas

o Railway Stations

o Admin

Key Activities:

The most important activities in executing a company's value proposition. An example for Bic, the pen manufacturer, would be creating an efficient supply chain to drive down costs.

o Face Recognition

o Occluded Face Detection

o Identifies The Suspect

Key Resources:

The resources that are necessary to create value for the customer. They are considered assets of a company, needed to sustain and support the business. These resources may be human, financial, physical, or intellectual.

o Face Database

o CCTV Cameras

o Admin

o Suspects

Value Propositions:

The collection of products and services a business offers to meet the needs of its customers. According to Osterwalder, (2004), a company's value proposition is what distinguishes itself from its competitors. The value proposition provides value through various elements such as newness, performance, customization, "getting the job done", design, brand/status, price, cost reduction, risk reduction, accessibility, and convenience/usability. The value propositions may be:

o Face Detection

o Image Preprocessing

o Feature Extraction

o Classification

o Face Identification

o Image Segmentation

o Security Issues Are Satisfied

Customer Segments:

To build an effective business model, a company must identify which customers it tries to serve. Various sets of customers can be segmented based on the different needs and attributes to ensure appropriate implementation of corporate strategy meets the characteristics of selected group of clients. The different types of customer segments include:

o Banks

o Authorized User

o Airports

o Railway Stations

o Border Areas

Channels:

A company can deliver its value proposition to its targeted customers through different channels. Effective channels will distribute a company’s value proposition in ways that are fast, efficient and cost effective. An organization can reach its clients either through its own channels (store front), partner channels (major distributors), or a combination of both.

o Connecting More Associations

o Social Media Sharing

o Advertisements

o Seminars

Customer Relationships:

To ensure the survival and success of any businesses, companies must identify the type of relationship they want to create with their customer segments. Various forms of customer relationships include:

o Automated Services

Cost Structure:

This describes the most important monetary consequences of operating under different business models.

o CCTV Cameras

o Feature Comparison

o Creation of Facial Database

o Face Recognition

Revenue Streams:

The ways a company generates income from each customer segment. Several possible revenue streams:

o Occluded Face Detection and Recognition

o Real-time Market

o Product Feature Dependent

o Usage Fee

CHAPTER 3:

Implementation

3.1 IMPLEMENTATION ENVIRONMENT

The tool used to develop our system provides an environment with the majority of the functionality of an actual implementation of an integrated design system.

3.2 PROPOSED WORKFLOW OF SYSTEM

The figure above shows the workflow of the system. The following are the sub-steps of the flow, which are necessary to obtain the result.

Step 1: Input to the system

The input can be a '.jpg' image or a frame captured from a webcam.

Step 2: Normalization of image

In terms of our system, normalization of the image leads to face detection: the environment is ignored, and only the faces in the given frame/image are processed in the further steps.

Step 3: Image Database

This database is the collection of processed frames, i.e. the images with which the input image will be compared.

Step 4: Feature Extraction

• The main facial features needed for classification are the nose, the eyes, and the mouth, so feature extraction is the step in which these features are extracted. The distances between these features are also calculated.

• Along with these features, statistical features such as homogeneity, skewness, mean, and the correlation coefficient are calculated here.

• In addition, important features such as the HOG features and the LBP of the image are calculated for further reference.

Step 5: Image Classification

• The classification stage starts here, which in short means the comparison of images: the processed image is compared with the images in the image database.

• The features of both images are compared, and the image with the maximum number of matched features is taken as the resultant output.

Step 6: Output

After comparison, the image (from the image database) with the maximum number of matched features is returned as the resultant output.
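The matching in Steps 4 to 6 amounts to nearest-neighbour search over feature vectors. A minimal Python sketch (the project itself uses MATLAB; the `best_match` helper and the database layout here are hypothetical illustrations, not the actual implementation):

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database):
    """Return the database label whose feature vector is closest to the probe.
    `database` maps label -> feature vector (hypothetical layout)."""
    return min(database, key=lambda label: euclidean(probe, database[label]))

# Toy example with made-up 3-element feature vectors.
db = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}
print(best_match([0.15, 0.85, 0.35], db))  # closest to alice's vector
```

In practice the "maximum matched features" rule described above may use other distances (e.g. chi-squared over LBP histograms) rather than plain Euclidean distance.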

3.3 Modules:

1. Face Detection:

This module uses the Viola-Jones algorithm, which is designed for face detection. [8] Low-dimensional images are taken as input, and processing on these images results in face detection.
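At the core of the Viola-Jones algorithm is the integral image, which lets any rectangular Haar-like feature be summed in constant time. A self-contained sketch of just that step (the full detector additionally uses Haar features, AdaBoost, and a classifier cascade, none of which are shown here):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x].
    Viola-Jones relies on this so that any rectangular sum
    costs only four table lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, via four lookups."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total
```

A Haar-like feature is then just the difference between two or more such rectangle sums.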

2. Calculation of statistical features:

In this module, the statistical features of the processed image are calculated. These features are:

• Mean
• Correlation coefficient
• Contrast
• Energy
• Homogeneity
• Standard deviation
• RMS
• Variance
• Smoothness
• Skewness
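Several of the listed descriptors can be computed directly from the pixel intensities. A Python sketch (the project computes these in MATLAB; the function name and dictionary layout are assumptions for illustration):

```python
import math

def statistical_features(pixels):
    """Compute some of the listed descriptors for a flat list of
    pixel intensities (hypothetical feature layout)."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(variance)
    rms = math.sqrt(sum(p * p for p in pixels) / n)
    # Population skewness; 0.0 for a symmetric distribution.
    skewness = (sum((p - mean) ** 3 for p in pixels) / n) / std ** 3 if std else 0.0
    smoothness = 1 - 1 / (1 + variance)  # a common texture measure
    return {"mean": mean, "variance": variance, "std": std,
            "rms": rms, "skewness": skewness, "smoothness": smoothness}
```

Contrast, energy, homogeneity, and correlation are usually computed from a grey-level co-occurrence matrix rather than from raw pixels, so they are omitted from this sketch.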

3. LBP (Local Binary Patterns):

LBP is a simple yet very efficient texture operator that labels the pixels of an image by thresholding the neighbourhood of each pixel and interpreting the result as a binary number.

The most important property of the LBP operator in real-world applications is its robustness to monotonic gray-scale changes caused, for example, by illumination variations. [7]
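As a sketch of the operator described above, the following Python function computes the 8-neighbour LBP code of a single pixel (the bit order is one common convention chosen here; implementations differ):

```python
def lbp_code(img, y, x):
    """8-neighbour LBP code of pixel (y, x): each neighbour that is
    >= the centre contributes a 1 bit, read clockwise from the
    top-left neighbour."""
    center = img[y][x]
    # Clockwise offsets starting at the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dy, dx in offsets:
        code = (code << 1) | (1 if img[y + dy][x + dx] >= center else 0)
    return code
```

Applying this at every interior pixel and histogramming the resulting 0-255 codes gives the LBP texture descriptor; its robustness to monotonic grey-scale changes follows because only the ordering of neighbour and centre values matters, not their absolute magnitudes.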

4. HOG features:

The histogram of oriented gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientation in localized portions of an image. [6]
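The counting of gradient orientations can be illustrated for a single cell as follows (a simplified sketch: hard binning with central differences, without the bilinear vote interpolation and block normalisation that full HOG uses):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Unsigned-orientation gradient histogram for one cell of a
    grey-scale image, given as a 2-D list of intensities."""
    hist = [0.0] * bins
    h, w = len(cell), len(cell[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical gradient
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180  # unsigned, 0-180
            hist[int(angle // (180 / bins)) % bins] += magnitude
    return hist
```

Each pixel votes into the bin of its gradient orientation, weighted by gradient magnitude; concatenating the histograms of many cells gives the HOG descriptor.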

5. PCA:

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. [5]
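For intuition, PCA on two-dimensional data can be computed in closed form from the 2x2 covariance matrix (a sketch only; face recognition applies PCA to much higher-dimensional image vectors, typically via eigen-decomposition or SVD of the covariance matrix):

```python
import math

def pca_2d(points):
    """First principal component of 2-D points, via the closed-form
    eigendecomposition of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = sum((x - mx) ** 2 for x, _ in points) / n        # var(x)
    c = sum((y - my) ** 2 for _, y in points) / n        # var(y)
    b = sum((x - mx) * (y - my) for x, y in points) / n  # cov(x, y)
    # Largest eigenvalue of [[a, b], [b, c]].
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    # Corresponding (unnormalised) eigenvector.
    vx, vy = (b, lam - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)
```

For points lying on the line y = x, the first principal component comes out along (1, 1)/sqrt(2), as expected: the direction of maximum variance.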

3.4 Screenshots:

Figure 16: Screenshot 1 – No human in picture

Figure 17: Screenshot 2 – Non occluded face matched

CHAPTER 4:

CONCLUSION & FUTURE ENHANCEMENT

4.1 Conclusion:  

The problem of machine face recognition has been an ongoing subject of research for more than 20 years. Although a large number of approaches have been proposed in the literature and implemented successfully in real-world applications, robust face recognition is still a challenging subject, mainly because of large facial variability, pose variations, and uncontrolled environmental conditions.

The aim of this work has been to provide a detailed description of various facial occlusion detection methods, with the occlusion detection procedures as the main part.

4.2 Future Enhancement:

As our project is based on image processing, the key role is played by the intensity of the image. Future research is therefore directed towards improving accuracy by mitigating intensity-related problems.

References

PATENTS

1. Rogerio Feris, Arun Hampapur, Ying Li Tian, "Real time occlusion detection using image processing".

https://patents.google.com/patent/US20080247609A1/en

2. Harry Wechsler, Hung Lai, Venkatesh Ramanathan, "Recognition by parts using adaptive and robust correlation filters".

https://www.google.tl/patents/US8073287

3. Yi Ma, Allen Yang Yang, John Norbert Wright, Andrew William Wagner, "Recognition via high-dimensional data classification".

https://patents.google.com/patent/US20110064302A1/en

4. Harry Wechsler, Fayin Li, "Face authentication using recognition by parts, boosting and transduction".

https://patents.google.com/patent/US20110135166A1/en

5. Principal component analysis

https://en.wikipedia.org/wiki/Principal_component_analysis

6. Histogram of oriented gradients

https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients

7. Local binary patterns

https://en.wikipedia.org/wiki/Local_binary_patterns

8. Viola–Jones object detection framework

https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework

WEBSITE

9. Google:

https://www.google.co.in/

10. YouTube:

https://www.youtube.com

11. MATLAB:

https://in.mathworks.com/help/matlab/

Appendix

• PPR

About this essay:

Essay Sauce, Detect and Recognize Occluded Faces For Real-Time Results w/ Image Processing. Available from: <https://www.essaysauce.com/sample-essays/2018-5-10-1525980930/> [Accessed 13-05-26].