In this section we discuss what a transform is and why we need it. Many transforms are available to researchers, and each has its own advantages and disadvantages. After discussing these, we explain why the Wavelet Transform (WT) is needed and what its advantages are. We also present the derivation of the wavelet transform, and finally we move on to the Bionic Wavelet Transform (BWT) and examine how BWT differs from WT.
A transform is applied mainly to extract details of a signal that cannot be obtained from its raw form. Signals are generally acquired in the time domain, i.e., when plotted they show amplitude versus time. To obtain the spectral content, several transforms are available, the most widely used being the Fourier Transform. It gives the frequency content of the signal but provides no time information. Joint time and frequency information may not be necessary for a stationary signal, but it is essential for a non-stationary signal. The key difference between the Fourier Transform and the Wavelet Transform is that the Wavelet Transform provides a time-frequency representation of the signal.
WAVELET TRANSFORM
The Wavelet Transform provides the frequency details of a signal together with their time localization; in other words, it tells us which frequency component occurs at which instant. The Short Time Fourier Transform can also provide this information, but it uses a fixed window and therefore suffers from a resolution (scaling) problem, which is what the Wavelet Transform was introduced to overcome.
HISTORY
The Wavelet Transform is relatively recent, having been formalized only a few decades ago; the name "wavelet" was coined by French researchers, although the existence of wavelets has been known since the beginning of the twentieth century. Ideas now unified under wavelets originated from work on 'sub band coding in engineering, coherent states and renormalization group theory in physics and the study of Calderon-Zygmund operators in mathematics' [4].
DEFINITION
‘A wavelet is a small wave and in brief, a wavelet is an oscillation that decays quickly’ [11]. Jean Morlet (1982) introduced the idea of the wavelet and formulated ‘seismic wave analysis’ mathematically [11, 17]. Wavelets are considered a family of functions derived by scaling and translating a ‘mother wavelet’ \psi(t) [11]. The family is given as
\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t-b}{a}\right), \quad a, b \in \mathbb{R},\ a \neq 0 \qquad (1)

(Jun Yao et al.)
Where,
a is the scale and it measures the degree of compression [11]
b is the translation parameter which determines the time localization [11].
If the scale is less than one, the wavelet corresponds mainly to higher frequencies; if the scale is greater than one, it corresponds mainly to lower frequencies. The time width of the analysis window therefore changes with frequency, which is one of the main reasons the wavelet transform performs better than other transforms.
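The scale-to-frequency relation above can be sketched numerically. This is a minimal Python illustration (the thesis implementation is in MATLAB), using the common rule of thumb f_a = f_c / (a \cdot \Delta t); the centre-frequency value 0.8 Hz and the 256 Hz sampling rate are illustrative assumptions.

```python
def pseudo_frequency(scale, wavelet_center_freq, sampling_period):
    """Approximate signal frequency analysed at a given wavelet scale.

    Implements the usual scale-to-frequency rule f_a = f_c / (a * dt):
    scales a < 1 probe frequencies above the wavelet's centre frequency,
    scales a > 1 probe frequencies below it.
    """
    return wavelet_center_freq / (scale * sampling_period)

# Illustrative centre frequency of 0.8 Hz, sampled at 256 Hz (dt = 1/256 s):
dt = 1.0 / 256.0
f_small_scale = pseudo_frequency(0.5, 0.8, dt)   # scale < 1 -> higher frequency
f_large_scale = pseudo_frequency(2.0, 0.8, dt)   # scale > 1 -> lower frequency
```

At scale 1 the analysed frequency is simply the centre frequency expressed in samples, so halving the scale doubles the frequency probed.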
WAVELET COEFFICIENTS
If a function f \in L^2(\mathbb{R}), the series
\sum_{j \in \mathbb{Z}} \sum_{k \in \mathbb{Z}} \langle f, \psi_{j,k} \rangle\, \psi_{j,k}(t) \qquad (2)

(Jun Yao et al.)

is called the wavelet series for the function f, and
\langle f, \psi_{j,k} \rangle = d_{j,k} = \int_{-\infty}^{\infty} f(t)\, \overline{\psi_{j,k}}(t)\, dt \qquad (3)

is called the wavelet coefficient of f. (Equations (1)-(3) after Jun Yao et al.)
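A wavelet coefficient of the form (3) can be evaluated numerically. The sketch below, a Python illustration rather than part of the thesis code, uses the real-valued Haar wavelet (so the conjugate in (3) has no effect) and a simple midpoint quadrature; the integration range and grid size are arbitrary choices.

```python
def haar(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def wavelet_coefficient(f, j, k, n=4096):
    """Approximate d_{j,k} = integral of f(t) * psi_{j,k}(t) dt,
    with psi_{j,k}(t) = 2^{j/2} psi(2^j t - k), by midpoint quadrature."""
    lo, hi = -2.0, 2.0
    dt = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t = lo + (i + 0.5) * dt
        total += f(t) * (2.0 ** (j / 2.0)) * haar((2.0 ** j) * t - k)
    return total * dt

# A constant signal has zero correlation with the zero-mean Haar wavelet:
coeff_const = wavelet_coefficient(lambda t: 1.0, 0, 0)
# A signal that flips sign at t = 0.5 correlates strongly with it:
coeff_step = wavelet_coefficient(lambda t: 1.0 if t < 0.5 else -1.0, 0, 0)
```

The first coefficient vanishes because the wavelet integrates to zero; the second is large because the signal matches the wavelet's shape on its support.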
The Wavelet Transform effectively analyses the signal at multiple resolutions in distinct frequency bands by decomposing it into approximation and detail information. The frequency-band separation scheme for emotion recognition is implemented in MATLAB 2013a. Emotion recognition using EEG requires feature extraction from the acquired signal in the specific frequency ranges of the delta, theta, alpha, beta and gamma bands. Several researchers have reported the use of wavelet decomposition to obtain these bands; the decomposed bands are given in the table below.
Figure 3 1: Sub bands
After a first-level decomposition, two sequences representing the high- and low-resolution components of the signal are obtained. The low-resolution component is further decomposed into low- and high-resolution parts. After the second level, three more decompositions are carried out as indicated in the figure below. ‘ca1, ca2, ca3, ca4 and ca5’ [2] are the approximation coefficients, and ‘cd1, cd2, cd3, cd4 and cd5’ [2] are the detail coefficients obtained after successive decomposition.
The multi-resolution analysis, using five levels of decomposition, yields five separate EEG sub-bands [2]. The primary goal of the Wavelet Transform in the proposed method is the division of the original ‘EEG signals’ [2] into different frequency bands.
Figure 3 2: Decomposition of signal
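The successive low/high splitting described above can be sketched as follows. This is a simplified Python illustration using plain pairwise averages and differences (a Haar-style analysis step without the usual sqrt(2) normalization), not the MATLAB wavelet decomposition used in the thesis.

```python
def haar_step(signal):
    """One analysis level: average (approximation) and difference
    (detail) of adjacent sample pairs."""
    approx = [(signal[2*i] + signal[2*i+1]) / 2.0 for i in range(len(signal)//2)]
    detail = [(signal[2*i] - signal[2*i+1]) / 2.0 for i in range(len(signal)//2)]
    return approx, detail

def decompose(signal, levels=5):
    """Successively split the low-frequency part, as in the 5-level
    EEG band separation: returns [cd1..cd5] plus the final ca5."""
    details, approx = [], list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return details, approx

# 64 samples of a constant signal: every detail band is zero and only
# the final approximation ca5 (two samples here) carries energy.
details, ca5 = decompose([1.0] * 64, levels=5)
```

Each level halves the sample count, so five levels of a 64-sample signal leave a 2-sample approximation, mirroring the ca5/cd1..cd5 structure in the figure.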
BIONIC WAVELET TRANSFORM (BWT)
BWT was introduced, based on an auditory model, by Jun Yao and Yuan-Ting Zhang in 2001 [6]. According to [6], BWT extends the one-dimensional adjustable resolution along the frequency axis of the conventional wavelet transform to a two-dimensional adjustable resolution along both the time and frequency axes. The most distinguishing characteristic of BWT is that its resolution in the ‘time-frequency domain’ [6] can be varied adaptively not only with the signal frequency but also with the signal's instantaneous amplitude and its first-order differential [6].
Research has shown that the choice of mother wavelet can change the time-frequency representation drastically. Attempts have been made to solve this problem by using different mother functions at different times and frequencies [6]. A few methods select the mother function using an entropy criterion [6], but this is not suitable for bio-signals, because an appropriate entropy criterion is not known for biomedical signals [6].
In [6] the authors mapped the active mechanism of the auditory system into the Wavelet Transform. Early experiments showed that frequency selection in the cochlea depends on the basilar membrane, and that cochlear models assess signals in the time-frequency domain [6].
To implement this in the Wavelet Transform, researchers needed a new model, which was provided by the otoacoustic emission model [6].
EAR MODEL
In [6] a point on the basilar membrane (BM) was modeled by the following equation:
\ddot{d}(x,t) + \frac{R_{eq}(x,d)}{L(x)}\,\dot{d}(x,t) + \omega_0^2(x)\, d(x,t) = P \qquad (4)

(Jun Yao et al.)
Where,
x- distance along the basal membrane from the basal end;
t- time;
d- displacement of the BM;
\dot{d} and \ddot{d} – first- and second-order differentials of d with respect to t;
P- pressure difference across the BM;
\omega_0(x) = 1/\sqrt{L(x)C(x)} – characteristic frequency of the point, where L(x) and C(x) represent the acoustic mass and compliance.
The equivalent resistance R_{eq} in (4) is given by

R_{eq} = R(x) - G_1(x)\,\frac{d_{1/2}}{d_{1/2} + |d(x,t)|}\, R(x) \qquad (5) [8]
Where
R(x)- passive resistance corresponding to the acoustic resistance [6];
d_{1/2} – saturation factor [6];
G_1(x) – active gain factor whose value is related to the activity of the corresponding OHC group [6].
The second term in (5) denotes the active resistance function of the OHC control [6].
‘In Giguere’s model, R(x) is determined by Q^{-1}\sqrt{L(x)/C(x)} where Q = 2’ [6]. All the components are constant for a given value of x.
From (4) it can be concluded that the ‘basilar membrane’ [6] is modeled as a band-pass filter with non-linear damping [6]. A further modification was made in [6] by introducing a non-linear capacitance:
C_{eq}(x) = \left(1 + G_2(x)\left|\frac{\partial d(x,t)}{\partial t}\right|\right)^{2} C(x) \qquad (6)

(Jun Yao et al.)
where G_2(x) is the active factor associated with the effect of the active mechanism on the compliance at a point of the basilar membrane [6]. Simulation results show that some improvements in the representation of OAE properties can be achieved by this modification. The purpose of using this model in the present study is not to model the active mechanism of the auditory system perfectly, but to show the possible impact of an active mechanism on signal processing.
To introduce the active mechanism of the auditory system into the WT, the constant quality factor Q_0 [6] of the WT is replaced by a variable Q_T [6]. In the auditory model, without the active mechanism, the passive model of a point on the BM has a constant quality factor Q = R^{-1}\sqrt{L/C} [6]. With the introduction of the active mechanism, the equivalent quality factor Q_{eq} = R_{eq}^{-1}\sqrt{L/C_{eq}} [6] varies with the motion of the BM. A "transfer" function (T-function) is computed that maps the constant quality factor Q [6] of the passive model to the variable quality factor Q_{eq} [6] of the active model. Using the same T-function, the constant quality factor Q_0 [6] of the WT is transferred to the variable quality factor Q_T [6] of the BWT. It is named the bionic WT because the T-function is derived from a model of a biological system.
DEFINITION
A mother function h(t) of the WT must satisfy the admissibility condition, which implies that h(t) has a few oscillations [6, 7]. If the window center of h(t) is at f_0 in the frequency domain, h(t) can be expressed as its envelope function \tilde{h}(t) modulated by a sinusoid at frequency f_0 [6, 7]:
h(t) = \tilde{h}(t)\, \exp(j\omega_0 t) \qquad (7)

(Jun Yao et al.)
where \omega_0 = 2\pi f_0. If the signal to be studied is f(t), the WT is defined as
(\mathrm{WT} f)(\tau, a) = \frac{1}{\sqrt{a}} \int f(t)\, \tilde{h}^{*}\!\left(\frac{t-\tau}{a}\right) \exp\!\left(-j\omega_0 \frac{t-\tau}{a}\right) dt \qquad (8)

(Jun Yao et al.)
where a is the scale and \tau the time shift.
To mimic the OHC-like control, a new parameter T, where T > 0, is introduced into the WT mother function, giving the BWT mother function
h_T(t) = \frac{1}{T}\, \tilde{h}\!\left(\frac{t}{T}\right) \exp(j\omega_0 t) \qquad (9)

(Jun Yao et al.)
With the new mother function, we define BWT as
(\mathrm{BWT}_T f)(\tau, a) = \frac{1}{\sqrt{a}} \int f(t)\, \tilde{h}^{*}\!\left(\frac{t-\tau}{aT}\right) \exp\!\left(-j\omega_0 \frac{t-\tau}{a}\right) dt \qquad (10)

(Jun Yao et al.)
It is clear from (10) that the centers of the analysing window in the time and frequency domains [6] are the same as those of the WT; the only difference is that the envelope of the BWT mother function can be adjusted by the parameter T. The factor 1/T in (9) is introduced for normalization.
Given that T is constant over a short enough period, the Fourier transforms of h(t) and h_T(t) are [6, 7]
H(\omega) = \tilde{H}(\omega - \omega_0) \qquad (11)

and

H_T(\omega) = \tilde{H}\big(T(\omega - \omega_0)\big) \qquad (12)

(Jun Yao et al.)
From (11) and (12) it is clear that the quality factor Q_T of h_T(t) for the BWT is related to Q_0 of h(t) for the WT by
Q_T = T\, Q_0 \qquad (13)

(Jun Yao et al.)
The T-function [6, 7] in (13) is derived from the active auditory model. In the model, substituting (5) and (6) into Q_{eq} = R_{eq}^{-1}\sqrt{L/C_{eq}} relates Q_{eq} to Q by
Q_{eq} = \left(1 - G_1 \frac{d_{1/2}}{d_{1/2} + |d|}\right)^{-1} \left(1 + G_2 \left|\frac{\partial d}{\partial t}\right|\right)^{-1} Q \qquad (14)

(Jun Yao et al.)
Comparing (14) with (13), we get
T(\tau + \Delta\tau) = \left(1 - G_1 \frac{\mathrm{BWT}_s}{\mathrm{BWT}_s + |\mathrm{BWT}_f(\tau, a)|}\right)^{-1} \left(1 + G_2 \left|\frac{\partial\, \mathrm{BWT}_f(\tau, a)}{\partial t}\right|\right)^{-1} \qquad (15)

(Jun Yao et al.)
Where
\mathrm{BWT}_f(\tau, a) – coefficient of the BWT at time \tau and scale a;
\mathrm{BWT}_s – saturation constant that maps d_{1/2} in the auditory model;
G_1 and G_2 – active factors;
\Delta\tau – calculation step.
By increasing G_1 and G_2, the resolutions in the frequency domain and the time domain can be increased, respectively.
To obtain (12), T must be constant over the interval \Delta\tau. This approximation is reasonable if the signal and its first differential are continuous and \Delta\tau is small enough. Clearly from the definition, BWT is a nonlinear transform [6].
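One step of the adaptive T-function update in (15) can be sketched as follows. This is a Python illustration only; the parameter values for G_1, G_2 and the saturation constant are made-up placeholders, not values used in this work.

```python
def t_update(bwt_coeff, bwt_deriv, bwt_s, g1, g2):
    """One step of the adaptive T-function of Eq. (15): T(tau + dtau)
    from the current BWT coefficient and its first-order differential."""
    amp_term = 1.0 - g1 * bwt_s / (bwt_s + abs(bwt_coeff))
    deriv_term = 1.0 + g2 * abs(bwt_deriv)
    return 1.0 / (amp_term * deriv_term)

# Illustrative parameters (placeholders, not from the thesis):
BWT_S, G1, G2 = 0.8, 0.87, 45.0

# With a zero coefficient and derivative, T takes its largest value:
t0 = t_update(0.0, 0.0, BWT_S, G1, G2)
# A large instantaneous amplitude drives the first factor toward 1,
# shrinking T and hence the time-frequency window:
t_big = t_update(100.0, 0.0, BWT_S, G1, G2)
```

This matches the qualitative behaviour stated above: larger G_1 and G_2 make T, and thus the resolution trade-off, respond more strongly to the signal's amplitude and derivative.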
PROPERTIES OF BWT
BWT is a nonlinear transform with high sensitivity and frequency selectivity;
BWT represents the signal with a concentrated energy distribution;
The inverse BWT can reconstruct the original signal from its time’frequency representation.
METHODOLOGY
To implement the two methods described above, a few steps must be followed. In order to evaluate the results, each method is compared against the other. Both methods were defined in the preceding chapter, and their application is described in the following sections.
DATA INFORMATION
The enterface06_emobrain database gathers emotionally induced physiological signals from both the peripheral (galvanic skin response, respiration and blood volume pressure) and central nervous system (EEG and frontal fNIRS). The objective of this database is to provide a common framework for emotion assessment from multimodal physiological signals.
Figure 4 1: Sad Neutral and Happy
All raw EEG signals [3, 6, 7] were gathered from the database of Enterface’06 [2] (referred to as the Enterface database), assembled by Savran et al. [2]. In the experimental protocol, every participant underwent three sessions. In each session the participant was shown 30 image blocks; each block corresponds to one class of emotion, labeled POS, NEG or CALM, and contains five images. These five images are displayed for 2.5 seconds each, giving a total exposure of 12.5 seconds. Between two image blocks a black screen is shown for 10 seconds, during which the participant gives a self-assessment of the preceding block. These self-assessments matter because emotions are highly subjective and depend on past experience: one can never be certain that a person feels the emotion intended by the images, and the self-assessment makes it possible to examine differences between persons. On the other hand, self-assessments are themselves known to be very subjective and hard to compare; for instance, some persons tend to give extreme scores while others usually pick the middle.
Several issues with this experimental design were reported by Savran et al. [2, 9]:
Participants reported a headache at the end of every session, because of the various caps (they also wore a functional magnetic resonance imaging (fMRI) cap) [2, 9].
Positive emotions were not always felt, whereas the negative images were somewhat too harsh [2, 9].
Several participants said that the impact of the images diminished after viewing many pictures.
To build the EEG database, emotion had to be evoked in the participants. The researchers chose to use images for emotional stimulation, specifically images from the International Affective Picture System (IAPS) [2, 9]. The subset of images used in this experiment can be divided into three distinct emotional classes: calm, exciting positive (POS) and exciting negative (NEG) [2, 9].
The data downloaded from the Enterface website [2, 9] contains two files: a BDF file and a mark (MRK) file. The MRK file contains the list of sample indices for each block of images. These files are named as follows:
PartA_iaps_sesB_eeg_fnirs_ddmmaaaa.bdf
PartA_iaps_sesB_eeg_fnirs_ddmmaaaa.mrk
Here A is the participant number (1-4), B is the session number (1-3), and DDMMAAAA represents the date of the recording.
EEG data from the various electrodes is available in several text formats. The challenge here is to relate the data to the time when an image is shown to the subject. The data is first down-sampled to ‘256 samples’ [2] per second to reduce its size. The mark file carries the information about the images shown. Since the data contains ‘12.5 seconds’ [2] of the first image-block viewing, the initial ‘256×12.5=3200 samples’ [2] correspond to the EEG signal while the images are being shown. The EEG data for the next 10 seconds covers the assessment period, when the black screen is shown, so the next ‘256×10=2560 samples’ [2] need not be examined. Similarly, the next ‘3200 samples’ [2] belong to one emotion class given in the mark file, while the following ‘2560 samples’ [2] are again to be ignored. One emotion block of ‘12.5 seconds, i.e. 3200 samples’ [2], is processed for feature extraction, and the extracted features are used for classification of emotion.
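The sample bookkeeping above can be sketched directly. This is a Python illustration of the slicing arithmetic (the thesis does this in MATLAB); the function name and the flat-list input format are assumptions, while the constants (256 Hz, 12.5 s viewing, 10 s gap) come from the text.

```python
FS = 256            # sampling rate after down-sampling (samples/s)
VIEW_S = 12.5       # image-block viewing time (s)
BLACK_S = 10.0      # black-screen / self-assessment time (s)

VIEW_N = int(FS * VIEW_S)     # 3200 samples to keep per block
SKIP_N = int(FS * BLACK_S)    # 2560 samples to ignore per block

def emotion_blocks(samples, n_blocks):
    """Slice a recording into the 3200-sample viewing segments,
    skipping the 2560-sample assessment gaps between them."""
    blocks, pos = [], 0
    for _ in range(n_blocks):
        blocks.append(samples[pos:pos + VIEW_N])
        pos += VIEW_N + SKIP_N
    return blocks

# Two blocks require 2 * 3200 samples plus one 2560-sample gap:
recording = list(range(2 * VIEW_N + SKIP_N))
blocks = emotion_blocks(recording, 2)
```

The second block starts at sample 3200 + 2560, exactly as the text describes.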
The downloaded data also contains a folder named common data, which holds the files listed below:
Iaps_images_eeg_fnirs.txt holds three columns, one per session, with the names of the IAPS images used in this study;
Iaps_eval_valence_eeg_fnirs.txt and Iaps_eval_arousal_eeg_fnirs.txt hold, in three columns, the valence or arousal value for each image;
Iaps_classes_eeg_fnirs.txt lists, in three columns, the associated classes accepted for each block of images. Labels can be "Calm", "Pos" or "Neg" [2, 24, 27, 32]. This is useful if one does not want to rely on the participants' self-evaluation.
PREPROCESSING
EEG signals recorded at various positions on the scalp are usually contaminated with noise (due to power-line and external interference) and artifacts [8] (ocular (electrooculogram), muscular [8] (electromyogram), vascular [8] (electrocardiogram) and glossokinetic artifacts). Complete removal of artifacts would also remove some of the useful information in the EEG signal. Several methods are available, such as using a Butterworth filter [8] or subtracting the average from the original signal [2]. These methods help remove noise so that the subsequent processing yields better results.
When applying signal processing, the main aim is to remove the part of the artifact [2, 8] present in the signal, or to eliminate sources of variation not related to the measured variable. It is also possible to try to increase the differences in the contribution of each component to the total signal and thereby make certain wavelengths [2, 17] more distinctive. The kind of preprocessing depends on the nature of the signal.
Widely used techniques are smoothing and differentiation. Smoothing tries to reduce the random noise in the instrumental signal; it is a moving-window averaging technique. The principle of the method is that, over small wavelength intervals, the data can be fitted by a polynomial of adequate degree, and the fitted values are a better estimate than those measured, because some noise has been removed.
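Moving-window smoothing can be sketched as below. This is a plain centred moving average in Python; it is the simplest (degree-zero) case of the polynomial fitting described above, not the exact preprocessing used in the thesis, and the window length is an arbitrary choice.

```python
def moving_average(signal, window=5):
    """Moving-window smoothing: each sample is replaced by the mean of
    the window centred on it (window assumed odd; the edges use only
    the samples actually available)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Alternating noise of amplitude 0.5 around a flat baseline of 1.0 is
# strongly attenuated by a 5-sample window:
noisy = [1.0 + (0.5 if i % 2 == 0 else -0.5) for i in range(100)]
smooth = moving_average(noisy, window=5)
```

With a 5-sample window the alternating component drops from amplitude 0.5 to 0.1, since three samples of one sign and two of the other fall in each window.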
WAVELET TRANSFORM
As described above, the wavelet transform gives the frequency content of a signal along with the corresponding time-domain [2, 4, 7] information.
Algorithm:
The flowchart given above shows the steps applied for each session of each subject. For example, if the first subject expressed three emotions, the code was executed for each session separately. The flowchart is explained below:
The signal is read and its length is computed. The length is used in various places in the program, for example to iterate over every sample of the signal in a loop. MATLAB provides a simple built-in function that returns the length of any given array [34, 36, 37, 41].
Next we find the bandwidth of the signal. For this we need the sampling frequency, which is given on the site from which the data was downloaded. The sampling frequency is expressed in samples per second: for example, at a sampling frequency of 10 Hz, data recorded for 60 seconds contains 600 samples. It is mainly used for digitization [42, 44, 45]. Depending on how many samples per second are wanted, the sampling frequency is set at recording time. By the Nyquist criterion, the message signal can only contain frequencies up to half the sampling frequency, i.e., the sampling frequency must be twice the highest message-signal frequency. For example, with a sampling frequency of 600 Hz, data can be recorded only up to 300 Hz. From this we obtain the maximum message frequency and then the bandwidth, which is the difference between the upper and lower limits of the signal frequency.
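The sampling arithmetic above can be made concrete. This is a small Python sketch of the same calculations; the function names and the example 0.5-50 Hz band are illustrative assumptions.

```python
def num_samples(fs_hz, duration_s):
    """Samples recorded at fs_hz over duration_s seconds."""
    return int(fs_hz * duration_s)

def max_message_freq(fs_hz):
    """Nyquist limit: the highest frequency representable at fs_hz."""
    return fs_hz / 2.0

def bandwidth(f_low_hz, f_high_hz):
    """Bandwidth as the difference between the upper and lower limits."""
    return f_high_hz - f_low_hz

n = num_samples(10, 60)        # 600 samples in one minute at 10 Hz
nyq = max_message_freq(600)    # 300 Hz recordable at fs = 600 Hz
bw = bandwidth(0.5, 50.0)      # e.g. an assumed 0.5-50 Hz EEG band
```

These mirror the worked examples in the text: 10 Hz for 60 s gives 600 samples, and a 600 Hz sampling rate can capture content only up to 300 Hz.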
The next step is to remove the power-line [20] interference. Medical monitoring devices are highly sensitive for biomedical [20] signal recording and require accurate results for every diagnosis. It is difficult to obtain a fault-free recording of any ‘biomedical signal’ [20] while the patient is being monitored by medical equipment such as ‘ECG, EEG and EMG’ [20]. The low-frequency signal is corrupted by power-line interference at ’50/60 Hz’ [20], which is a major source of disturbance in biomedical [20] recordings; the signal can additionally be corrupted by the ‘electromagnetic field (EMF)’ [20] of nearby apparatus. Many methods can remove power-line interference, and in this experiment a notch filter is used. A notch filter [20] has one or more deep notches, ideally perfect nulls, in its frequency [20] response, and is useful in many applications where specific frequency components must be eliminated.
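A notch filter of this kind can be sketched as a standard biquad. The implementation below is a generic Python illustration (RBJ-cookbook-style coefficients), not the MATLAB filter used in this work; the quality factor and the 256 Hz sample rate are assumptions.

```python
import math

def notch_coeffs(f0, fs, q=30.0):
    """Biquad notch centred at f0 Hz for sample rate fs Hz;
    returns normalized (b, a) coefficient lists."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def filt(b, a, x):
    """Direct-form I difference equation for the biquad."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n-1] if n >= 1 else 0.0)
                + (b[2] * x[n-2] if n >= 2 else 0.0)
                - (a[1] * y[n-1] if n >= 1 else 0.0)
                - (a[2] * y[n-2] if n >= 2 else 0.0))
    return y

# A pure 50 Hz tone sampled at 256 Hz is almost eliminated once the
# filter's transient has decayed:
fs = 256.0
x = [math.sin(2.0 * math.pi * 50.0 * n / fs) for n in range(2048)]
b, a = notch_coeffs(50.0, fs)
y = filt(b, a, x)
```

The zeros of the biquad sit exactly on the unit circle at the notch frequency, so the 50 Hz component is nulled while neighbouring frequencies pass with little attenuation.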
The next step is to remove artifacts. When a signal is recorded, a lot of noise is recorded with it. The highest frequency carrying useful information in an EEG signal is about 50 Hz, so a Butterworth [10, 20] low-pass filter is used to remove all higher frequencies. Removing the noise gives a more accurate message signal, and processing this cleaner signal yields better output, which matters because the classification accuracy should be as high as possible.
The next step is ‘wavelet decomposition’ [2, 5, 6], which performs the wavelet [2, 5, 7] analysis according to the specified wavelet name.
After completing the above steps we need to classify the data. Classifiers used for this work is Linear Discriminant Analysis. For classification few steps are to be followed which is given in the flowchart below:
Algorithm for classification
The above flowchart shows the classification of data. The steps are explained below:
First, all the data decomposed in the previous stage is separated by class: for example, all the sad data goes into one array, and likewise for happy and neutral. This data can then be used for classification.
Several classifiers are available in MATLAB, such as Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA) and SVM. Different classifiers show different classification accuracy; the accuracy rates are given and explained in the results.
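The idea behind LDA can be sketched in its simplest form. The Python toy below is a 1-D, two-class, equal-prior case (where the LDA boundary reduces to the midpoint of the class means under the shared-covariance assumption); it is an illustration only, not MATLAB's discriminant implementation, and the feature values are made up.

```python
def lda_train(class0, class1):
    """Two-class LDA on 1-D features: returns the decision threshold
    and whether class 1 lies above it."""
    m0 = sum(class0) / len(class0)
    m1 = sum(class1) / len(class1)
    # Pooled variance is the shared-covariance LDA assumption; for equal
    # priors it cancels from the boundary, leaving the midpoint rule.
    pooled_var = (sum((v - m0) ** 2 for v in class0)
                  + sum((v - m1) ** 2 for v in class1)) / (len(class0) + len(class1) - 2)
    threshold = (m0 + m1) / 2.0
    return threshold, m0 < m1

def lda_predict(threshold, class1_is_upper, value):
    """Return 1 for class 1, 0 for class 0."""
    above = value > threshold
    return 1 if above == class1_is_upper else 0

# Toy features: "sad" energies around 2, "happy" around 8 (made-up values).
sad = [1.8, 2.1, 2.3, 1.9]
happy = [7.9, 8.2, 8.1, 7.8]
thr, upper = lda_train(sad, happy)
pred = lda_predict(thr, upper, 7.5)
```

QDA differs by giving each class its own covariance, which makes the decision boundary quadratic rather than a single threshold.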
BIONIC WAVELET TRANSFORM
BWT is basically a WT whose window is adjustable. The definition and derivations were given in the previous chapter; this section shows how it is implemented in the program. The algorithm is given below:
Algorithm:
Figure 4 2: Algorithm for BWT
The algorithm above is explained below:
As given in the definition of BWT, a new parameter called the T-function is introduced. This function allows the window to adapt and change accordingly. In this implementation the mother function is taken to be the Morlet wavelet, in order to achieve the smallest window size in the time and frequency domains [6, 7]. To compute it we need the center frequency [6, 7, 46]; the equation for the center frequency is given in the algorithm. The initial frequency is chosen depending on the signal: for the auditory model [6] a high value was used, but since the frequency range of an EEG signal is much lower, 512 Hz was chosen here. This frequency can be tuned by trying different values and comparing their results. The base frequency is given by the basic relation 5/(2π). In the implementation, the BWT can be obtained by convolving the Morlet function with the original signal.
Then the calculation step is computed, determined from the time period. The T-function should be constant over a single calculation step, which holds provided the step value is very small.
Once the step is calculated, we first make sure the wavelet is zero-centered. Then a loop runs in which the step is multiplied by the window length; since the window value changes, the product changes accordingly.
Next the Morlet wavelet [6, 7] is computed and multiplied with the signal, as given by the equation and derivation in the previous chapter.
Finally, it is convolved with the original signal. Convolution involves several steps and produces the final product.
This is repeated for all subjects and their respective sessions. After finding the BWT of each signal, classification is carried out as for the WT: the same procedure is followed, results are obtained, and the accuracy is compared with that of the previous method.
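The Morlet-and-convolve core of the steps above can be sketched as follows. This Python fragment shows a real Morlet kernel at a single fixed scale convolved with a signal; it omits the adaptive T-function rescaling, so it is a simplified illustration of the convolution step, not the full BWT. The sampling grid and signal are arbitrary.

```python
import math

def morlet(t, w0=5.0):
    """Real part of the zero-centred Morlet wavelet: a Gaussian
    envelope modulating a cosine at w0 rad/s (w0 = 5 gives the
    base frequency 5/(2*pi) mentioned in the text)."""
    return math.exp(-t * t / 2.0) * math.cos(w0 * t)

def conv(x, h):
    """Plain 'full' convolution of two sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Sample the wavelet on [-4, 4] (81 points at dt = 0.1) and convolve
# with a short test signal:
dt = 0.1
kernel = [morlet(-4.0 + k * dt) for k in range(81)]
signal = [math.sin(0.5 * n) for n in range(64)]
coeffs = conv(signal, kernel)
```

In the full BWT, the kernel's envelope width would additionally be rescaled by the T-function at each step, per equations (9) and (15).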
POWER
The power of a signal is given by the sum of the squares of all the points in a data set, normalized by the number of samples. In the first part we simply compute the transforms and classify them; in the second part we compute the power of the signal after the transforms, compare them, and see whether a better result can be obtained.
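The power feature can be written down directly. This is a Python sketch of the computation described above; the distinction drawn between power (normalized) and energy (unnormalized) is a standard convention, added here for clarity.

```python
def signal_power(samples):
    """Mean of the squared samples (sum of squares / length)."""
    return sum(v * v for v in samples) / len(samples)

def signal_energy(samples):
    """Unnormalized sum of squares."""
    return sum(v * v for v in samples)

p = signal_power([1.0, -1.0, 1.0, -1.0])   # unit-amplitude square wave
e = signal_energy([3.0, 4.0])              # 9 + 16
```

Normalizing by length makes the feature comparable across the 3200-sample emotion blocks regardless of minor length differences.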
RESULTS AND DISCUSSION
The methods applied in the previous chapter yield results with varying accuracy. As mentioned there, two methods were used, and for each method different classifiers were applied. The results are given below:
RESULT OF WAVELET TRANSFORM
Samples No. Linear Discriminant Analysis Quadratic Discriminant Analysis
Training 36000 36000
Test 30000 30000
Classification 19023 18550
Table 5 1:Sample set details WT for sad versus happy
The wavelet-transform data was fed into two classifiers, which gave different outputs. Part of the data was used for training and the rest for testing; the table above shows how many test samples were classified correctly, from which the accuracy is calculated. The table shows the samples for classifying sad versus happy.
Samples No. Linear Discriminant Analysis Quadratic Discriminant Analysis
Training 36000 36000
Test 30000 30000
Classification 15639(30000) 16548(30000)
Table 5 2:Sample set details WT for sad versus neutral
The above table shows the samples for classifying sad versus neutral. Here QDA gives better output than LDA.
Samples No. Linear Discriminant Analysis Quadratic Discriminant Analysis
Training 36000 36000
Test 30000 30000
Classification 19026(30000) 18572(30000)
Table 5 3:Sample set details WT for neutral versus happy
The above table shows the samples for classifying neutral versus happy. Here LDA gives better output than QDA.
ACCURACY RATE OF THE CLASSIFICATION
Emotions Linear Discriminant Analysis Quadratic Discriminant Analysis
Sad 63.41% 61.83%
Neutral 52.13% 55.16%
Happy 63.42% 61.90%
Table 5 4: Accuracy rate of classification of different emotions
The accuracy percentage was calculated from the number of correctly classified samples. The table above shows the accuracy rates of the two classifiers used and the difference in accuracy between them.
Figure 5 1: Bar chart representation of Accuracy
RESULT OF BIONIC WAVELET TRANSFORM
Samples No. Linear Discriminant Analysis Quadratic Discriminant Analysis
Training 36000 36000
Test 30000 30000
Classification 26055(30000) 26055(30000)
Table 5 5: Sample set details BWT for sad versus happy
The same representation of the classification was produced for BWT, and the number of correctly classified samples exceeds that of the Wavelet Transform. The above table shows the classification of sad versus happy.
Samples No. Linear Discriminant Analysis Quadratic Discriminant Analysis
Training 36000 36000
Test 30000 30000
Classification 26895(30000) 26895(30000)
Table 5 6: Sample set details BWT for neutral versus happy
The above table shows classification for neutral and happy emotions.
Samples No. Linear Discriminant Analysis Quadratic Discriminant Analysis
Training 36000 36000
Test 30000 30000
Classification 20100 20100
Table 5 7: Sample set details BWT for sad versus neutral
The above table shows the classification between sad and neutral emotions. Although the classification is better than with the WT, it is still not very accurate.
ACCURACY RATE OF THE CLASSIFICATION
Emotions Linear Discriminant Analysis Quadratic Discriminant Analysis
Sad 86.85% 86.85%
Neutral 89.65% 89.65%
Happy 67% 67%
Table 5 8: Accuracy rate by BWT
The accuracy rate determined from the above samples is shown in tabular form. A bar-chart representation of the result has also been made.
Figure 5 2: Bar chart for BWT
CONCLUSION
From the above results we see that the Bionic Wavelet Transform gives better classification accuracy than the Wavelet Transform. When distinguishing sad and happy signals we obtained around 87%, and when distinguishing neutral and happy signals around 90% accuracy, but when distinguishing sad and neutral the classification rate drops to about 67%. Since two subjects reported emotions in contrast to the pictures shown, it is plausible that during the neutral blocks those subjects were not actually feeling neutral; some residual emotion may explain the lower accuracy.
Further, distinguishing neutral and happy showed almost the same accuracy as sad and happy. It may also be that a subject was in fact feeling sad, but these assumptions rest only on the results obtained; there is no quantification for them.
In terms of quantification, we can say that BWT is better than WT and, with close to 90% accuracy, can be used in various applications to detect emotions.