Emotion plays a prominent role in human-computer interaction (HCI) and is one of the important characteristics of human beings that distinguishes them from machines. When machines cannot measure or respond to emotions while communicating with humans, an essential part of the communication bandwidth is lost. Machines that identify and interact with emotions are more engaging to work with and lead to fewer mistakes. Different methods have been implemented and explored to recognize emotional state. These methods draw on various resources, i.e. physiological signals (body temperature, heart rate) [1, 2], visual cues (facial appearance, gesture), audio (speech and voice), input devices (keystroke dynamics, mouse movement, touch screen) [5, 6], etc.
Systems that can identify user emotion can do much better than existing systems in areas such as text processing, games, online tutorials, image and video processing, user authentication, and many others where the user's emotional state is crucial. For example, a gaming application that can detect and respond to the end user's emotion can assess the user's current emotional state and adapt accordingly (i.e. change the sound levels, graphics quality, control sensitivity, music selection, and so on). Likewise, an emotionally intelligent online tutorial system can change its teaching style or contents, or change the interface to give it a more attractive and easier-to-understand look, according to a particular student's current emotional state.
Keystroke dynamics is a behavioural biometric (as opposed to physiological biometrics such as fingerprint or iris patterns); the technique is relatively simple to apply and is one of the least expensive biometrics. It is the study of the distinctive timing patterns in an individual's typing, and it involves extracting keystroke timing features such as the duration of a key press and the time elapsed between key presses. No additional devices need to be purchased, installed, or integrated: the way a person types on a keyboard already contains timing patterns that can help to identify him or her. Research on keystroke dynamics has been carried out since the early 1980s, most of it focused on fixed text, wherein a pre-determined string is used to authenticate the user. Recently, however, there is rising interest in free-text keystroke dynamics: instead of being constrained to a pre-determined text, the user to be authenticated is free to type whatever he or she wants. This is especially necessary in continuous authentication systems (continuous dynamic analysis), in which the system constantly checks for the presence of the authorized user after login. In such a situation, it is not practical to require the user to repeatedly type a userid, a password, or any other pre-determined text; instead, the system analyzes the typing rhythms users exhibit during their normal interaction with the computer.
Keystroke dynamics is a biometric based on the assumption that different people type in uniquely characteristic manners. Studies of telegraph operators in the 19th century revealed personally distinctive patterns when keying messages over telegraph lines, and operators could identify one another based on their keying dynamics. Keystroke dynamics is also known by other names such as keyboard dynamics, keystroke analysis, typing biometrics, and typing rhythms. Nowadays users enter information into computer systems via mouse, keyboard, or touch screen. The major benefit of keystroke dynamics is that it can be used without any additional hardware, and it is therefore cheap. User acceptance of keystroke dynamics as a biometric is very high, because it is not intrusive and users do not necessarily even notice that such a system is in use. Keystroke dynamics features are generally extracted from the timing information of events such as key-down, key-hold, and key-up. The hold time (dwell time) of individual keys and the latency between two keys, i.e., the time interval between the release of a key and the pressing of the next key, are usually exploited. Digraphs, the time latencies between two consecutive keystrokes, are commonly used. For feature matching, the Manhattan distance is a widely used metric, and classifiers using the Manhattan distance are among the top performers. The Manhattan distance has the advantages of simple computation and easy decomposition into the contribution made by each variable. Most importantly, it is more robust to outliers than higher-order distance metrics such as the Euclidean distance.
The purpose of this dissertation is to develop an application that can recognize the emotional state of the user, through keystroke dynamics, while the user is working on his or her system.
Emotion is an important characteristic of human beings. For a computer system, detecting the emotion of a user is not easy: a lot of computation and analysis is needed, and many factors must be considered when estimating a particular person's emotional state. If computer systems were capable of recognizing users' emotions they would be able to make more intelligent decisions; however, today's computer systems typically do not incorporate the emotional context of a situation into the decision-making process.
In some situations, computer systems with emotional intelligence could attempt to infer the possible causes of these emotions from situational variables and then respond appropriately. For example, in tutoring programs the subject material could be altered depending on the student's emotional state. If the student is frustrated, the program could provide assistance in some form (e.g., an alternate explanation or example). Conversely, if the computer detects that a student is bored yet performing well, the system could provide more challenging activities or speed up the pace of the material presented. At present, methods of measuring affect can be very costly because they use dedicated equipment that is unusual in the home or office. This limits the real-world applicability of any affective solution, as such equipment is not as widely available as standard computer equipment. For example, studies based on determining user states through thermal imaging avoid the problem of invasive sensors because the user's image can be captured without the user realizing it. However, these techniques still require specialized equipment that is both expensive and non-standard in the home or office.
Many approaches for determining user emotion have been analyzed. They are wide-ranging and sometimes expensive. Techniques such as voice intonation analysis, physiological sensors attached to the skin, facial expression analysis, thermal imaging of the face, and gesture and pose tracking [7, 5] have already been explored. These approaches showed good results but have some problems as well: almost all of them require expensive and hard-to-use hardware that may not always be available to the user, and they can be intrusive.
In many situations, it may not matter what the particular cause of the emotion is, just that the computer must take some course of action. For example, detecting when users are in a stressed, distracted state would be helpful in high-stress occupations, such as the monitoring of mission-critical systems, since mistakes have the potential for calamitous outcomes. Perhaps the user is exhausted from a poor night's sleep due to a noisy neighbor, or is stressed about a recent performance review in which he did poorly. In that case, the cause of the user's state is not relevant to the immediate situation; even if it were, it is unlikely the computer could help with the root cause. The more significant concern is that the user may not have his attention fully on the task at hand, which may lead to mistakes. If the computer system can detect that the user is in one of these states, it can provide either the user or a supervisor with feedback identifying a potentially hazardous situation.
Our approach identifies particular emotional states by analyzing the differences in the user's typing rhythms, an area of research known as keystroke dynamics.
To solve this problem we need a cheaper and more reliable means of interaction between the computer and its human users. Since a particular user shows certain behavioral changes when moving from one emotional state to another, those behavioral changes can be used to detect the user's emotional state.
This research was encouraged by keystroke dynamics research in authentication systems, where users gain access to computer systems by providing the password together with the correct typing rhythm of the original user. Monrose and Rubin observed that a user's affective state actually interfered with recognizing participants, due to changes in their keystroke rhythms and typing behavior. Keystroke dynamics is a biometric technique that assumes that different people type in individually characteristic ways. The standard keyboard is very inexpensive, already-available hardware that can be used to identify user emotion; that is the main reason we have chosen keystroke features for this task.
The advantage of identifying affective states using keystroke dynamics is that its implementation does not require any extra hardware cost, while other biometric technologies do (e.g. face, fingerprint). Keystroke dynamics also has the advantage of using the general-purpose keyboard, which is inexpensive and present on most computer systems. Identifying affective states through keystroke dynamics could therefore allow us to implement affective computing solutions using standard equipment that is already available on a large scale.
The general outline of the thesis, and the main contributions presented in the different chapters, are summarized below:
Chapter 1: Introduction outlines the structure of the thesis. This chapter provides an overview of the problem domain and the proposed solution, along with the motivation behind designing the solution.
Chapter 2: Literature Survey describes a study of different books, research papers, manuals and journals.
Chapter 3: Methodology describes the process model adopted for the proposed solution in detail.
Chapter 4: Design includes the use case diagram, activity diagrams and sequence diagrams.
Chapter 5: Implementation and Testing describes the implementation details of the module and the details of its validation and testing.
Chapter 6: Concluding remarks and Future enhancements.
In the last section, references and glossary are provided as part of the appendix. This section lists the research papers, books, manuals and journals referred to in completing the thesis.
2. LITERATURE SURVEY
The following literature review attempts to demonstrate and support the hypothesis.
2.1 The First Studies (1980-1989)
At the time of the Rand report in 1980, researchers looked back to the era when the telegraph was the primary means of long-distance communication. Telegraph operators used to identify each other by the different ways they tapped out exactly the same code. Inspired by the idea that individuals have unique rhythms when sending telegraphs, the U.S. government funded research to study whether this same behavior was exhibited by people using a computer keyboard. In this preliminary study, the concept of a digraph was described: a digraph is a pair of two keystrokes and the time elapsed between the typing of the first and the second keystroke. The study recorded digraph measurements for the participants.
The statistical analysis indicated that keystroke analysis was definitely a feasible biometric. While the authors of the Rand report were able to achieve a 99% success rate in classification, many researchers argue that this result is insignificant because only 7 test subjects were involved in the study and a significant amount of fine-tuning of the metric had to be done (overfitting). After the Rand report, more experimental studies were conducted to confirm the relevance of digraphs in identifying user typing signatures.
For example, one of the methods was to determine the standard deviation of all the digraphs. When comparing a claimant sample against a reference sample, each claimant digraph was checked to see whether it was within 0.5 standard deviations of the reference digraph. When this condition held, the digraph was considered valid. If the claimant sample had more than 55% valid digraphs, the user would be authenticated.
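The validation rule just described can be sketched in Python as follows. This is a minimal illustration assuming a simple profile format of our own design; the 0.5-standard-deviation and 55% thresholds come from the study above, while the function and variable names are hypothetical.

```python
import statistics

def authenticate(reference, claimant, dev_limit=0.5, valid_ratio=0.55):
    """Check a claimant's digraph timings against a reference profile.

    reference: dict mapping digraph -> list of latencies (ms) from enrollment
    claimant:  dict mapping digraph -> one observed latency (ms)

    A claimant digraph is 'valid' if it lies within dev_limit standard
    deviations of the reference mean; the user is accepted when more than
    valid_ratio of the checked digraphs are valid.
    """
    valid = total = 0
    for digraph, latency in claimant.items():
        if digraph not in reference or len(reference[digraph]) < 2:
            continue  # no usable reference statistics for this digraph
        mean = statistics.mean(reference[digraph])
        sd = statistics.stdev(reference[digraph])
        total += 1
        if abs(latency - mean) <= dev_limit * sd:
            valid += 1
    return total > 0 and valid / total > valid_ratio
```

A claimant whose latencies hover near the enrolled means is accepted; one far outside the enrolled variability is rejected.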
The significant conclusions from these studies were that: digraphs were confirmed to be a good measure of keystrokes; mean digraph time (essentially typing speed) was found not to be useful in classification; removing digraphs longer than 500 milliseconds seemed to be a good method of removing outliers in typing samples; and using all the digraphs yielded better results than using only specific digraphs for classification.
2.2 Practical Keystroke Authentication and Neural Networks (1990-1999)
Monrose and Rubin recognized the shortcomings of both neural networks and statistical/mathematical strategies. They developed improved techniques for authorizing access to computer system resources and data via keystroke dynamics, which aims to identify users based on their habitual typing rhythm patterns. The authors collected, over a period of seven weeks, typing samples from 42 users performing structured and unstructured tasks in various computing environments.
They suggested that, to mitigate the cost of constant retraining of neural networks, users can be divided into smaller groups with one neural network per group. Mathematical methods that require storing numerous reference profiles may suffer from long search times. To reduce the computational cost of the recognition process, they first reduced the search time by clustering the collected data using maximum-distance algorithms, with the typing speed (number of words typed per minute) in a given profile as the clustering criterion. One limitation of this technique is the need to re-cluster the data as the system is used. Several distance measures were used to compute pattern similarities and dissimilarities, including the normalized Euclidean distance and weighted and non-weighted maximum probability measures. The reference profile was constructed by computing the means and standard deviations of the timing samples. Although Monrose and Rubin obtained 90% correct classification for fixed-text detection, they obtained (using the weighted probability measure) only 23% correct classification for free-text detection.
Obaidat and Sadoun conducted numerous experiments comparing the effectiveness of neural networks and mathematical methods. They used a linear perceptron as their classifier to verify computer users by keystroke dynamics. The experiments confirmed that keystroke durations were a useful measure and could potentially be better than keystroke latencies.
The authors used the user's login information as the characteristic input to pattern recognition and neural network techniques. Their paper deals with user authentication using both keystroke durations and keystroke latencies, which together characterize typing style effectively. This work was the first of its kind to show that hold-time-based classification is superior to interkey-time-based classification of computer users. Finally, it found that using the combined hold times and interkey times as features for identifying computer users with pattern recognition and neural network algorithms gives excellent verification accuracy.
2.3 New Ideas and Commercial Products (2000-2009)
The research article by Richard et al. addresses the correlation between students' keystroking speed and their performance while programming. Sometimes 25% or even 50% of enrolled students do not succeed; therefore, the authors presented work that identified students' potential problems by low-level, continuous keystroke monitoring. They presented results from two studies with computer science students conducted in different contexts and under different conditions (a controlled experiment and a field study). Keystroke timings were recorded while the students worked in the Java and Ada programming languages. They observed students in two countries: one group in a controlled experiment developing code for a few hours; the other using the Ada programming language in first-year laboratories over a few weeks. The quality of the students' programming work was measured mainly in terms of completeness.
The results were inconsistent because the two studies were conducted in different contexts. The correlation was weaker in the second study because the typing metrics were computed over Ada source code written during a six-week period, not over the typing used to complete the assessed tasks. As a result, the controlled experiment produced a stronger result than the extended field study.
The research article by E. Zavadskas et al. describes the analysis of emotional state and labour productivity using the Web-based Biometric Mouse Intelligent (WBMI) System, which works in the background. If the system knows how its users feel, it can react appropriately to their moods and present concrete recommendations. This research supported analysis of the user's emotions and labour productivity by detecting mouse micro-motions and physiological properties of the hand, as well as by analyzing self-reports from each individual user.
Next, the research paper by Gunetti and Picardi addressed the more challenging task of keystroke biometrics using free text, where the users can type arbitrary text as input. Keystroke dynamics features are usually extracted from the timing information of key-up, key-down, and key-hold events. Digraphs and trigraphs are also used in keystroke dynamics: the time latencies between two successive keystrokes are called digraphs, and the time latencies across three consecutive keys are called trigraphs. In a study of digraphs and trigraphs in keystroke analysis of free text, Sim and Janakiraman investigated the efficiency of digraphs and trigraphs for free-text keystroke biometrics, and showed that digraph and trigraph features depend on the word context in which they are computed.
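As an illustration of the digraph and trigraph features just described, the following sketch extracts n-graph latencies from a stream of timestamped key presses. The event format (a list of key name and press time pairs) is a hypothetical logger convention, not taken from the cited papers.

```python
def ngraph_latencies(events, n=2):
    """Collect n-graph latencies from a typing stream.

    events: list of (key, press_time_ms) tuples in typing order.
    Returns a dict mapping each n-graph (a tuple of n keys) to the list of
    latencies between the first and last key press of that n-graph.
    """
    result = {}
    for i in range(len(events) - n + 1):
        keys = tuple(k for k, _ in events[i:i + n])
        latency = events[i + n - 1][1] - events[i][1]
        result.setdefault(keys, []).append(latency)
    return result
```

With n=2 this yields digraph latencies and with n=3 trigraph latencies; because the same letter pair recurs in different words, each n-graph maps to a list of observations rather than a single value.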
A key limitation of Gunetti and Picardi's approach is its reliance on other users for the construction of a user profile in the identity verification step. In their method, verification requires checking the received sample against the profiles of all the legal users before a final decision can be made. This raises serious efficiency and scalability issues: ideally, authentication should involve checking a received sample against only the claimed identity. Involving the other legal users in the verification process can negatively impact performance and scalability in large networks, which may pose problems in continuous authentication applications where real-time computation is required. In the experiments conducted by Gunetti and Picardi, comparing a new sample against 40 profiles containing 14 samples each took about 140 s on a Pentium IV at 2.5 GHz.
2.4 Current Research (2009-Present):
In the research article by Sally et al., the authors investigated a technique to enhance the security of Personal Identification Numbers (PINs) and Automatic Teller Machines (ATMs). They attempt to secure the user's account even if the card and password or PIN fall into an impostor's hands; for this, they combined passwords or PINs with the keystroke dynamics of the user. The paper also presented experimental results using keystroke dynamics to find the optimum number of digits in a PIN or password when this biometric is used.
The experiment does not negatively affect the legitimate user, because the false reject rate (FRR) is almost zero, while the false acceptance rate (FAR) was below 15%.
According to Clayton et al. (research in 2011), different methods of evaluating emotional states have seen varying rates of success, yet they exhibit one or both of two main problems preventing broad use: they can be intrusive to the user, and they require costly equipment that is not easily available in a typical home. A system using keystrokes is less obtrusive and reaches a much wider range of users.
They conducted a field study that collected keystrokes as users performed their daily computer tasks. Via an experience-sampling design, users labeled the data with their level of agreement with 8 emotional states. The authors extracted various features, such as dwell time and flight time, from the keystroke data and used these parameters to create decision tree classifiers for the 8 emotional states.
The results indicate that classifiers built from the extracted keystroke features can accurately classify at least two levels of six emotional states: sadness, relaxation, nervousness, confidence, happiness, and tiredness. The best 2-level classifiers for these states achieved accuracies ranging from 77% to 85%.
The research article by Ned Bakelman et al. describes the evaluation of a keystroke biometric system for continual user authentication on short burst-input durations of one or a few minutes. This application can be used for intruder detection, by which we mean the discovery that someone other than the authenticated user is using the computer. Another application, via identity verification, is confirming the identity of students taking online tests and exams.
The main contribution of that study was the evaluation of the text-input performance as a function of two independent variables: the population size and the number of keystrokes per test sample. As the number of keystrokes per sample increased, the equal error rate decreased logarithmically.
3. METHODOLOGY
Keystroke dynamics can be classified into two types: fixed (static) text and free (dynamic) text. The development of an emotion detection system via keystroke dynamics requires a number of steps. First, we need to collect a large amount of emotionally labeled typing data. Then we need to extract the applicable keystroke features, and finally build and validate models of emotional state. In this research, we collected emotionally labeled data as users performed their daily routine activities. From that data set we extract keystroke features; we then create a training data set by having users enter their current emotional state, using both fixed-text and free-text analysis. Finally, we apply the Manhattan distance to the extracted parameters.
3.1 Data Collection
Data collection consists of gathering and labeling the user's keystroke data. In general, software is used to collect the data and to extract features from it. First, fixed-text data is collected, and then free-text data is collected using the application.
3.1.1- Fixed text analysis
Fixed-text analysis involves analyzing the keystroke behavior of a single user on a fixed set of text excerpts from the well-known novel "Alice's Adventures in Wonderland". According to , the reasons for selecting these text excerpts are that they have a relatively simple sentence structure, contain no long uncommon words, and are generally of the same length. There are several reasons for selecting fixed text over free text: during normal computer use, the user may well use the mouse more than the keyboard, and using fixed text ensures a minimum number of keystrokes per sample and produces good results. After typing a paragraph, the user had to select the one of six emotional states that best matched his or her current emotional state. The interface of the software is shown in Appendix A. The advantage of this method was that there was no need to induce any emotional state. Collecting fixed-text data required a number of users; we requested a good number of users to provide data, and some of them responded. These users were of varying ages, educational backgrounds and computer experience. We developed an application in Java to collect data for the keystroke dynamics analysis. This software captured every key pressed by the user, together with its press and release times, in a log file. We supplied this software to each user and requested that data be entered at least once a day; this ensured data collection under different emotional states of the same user. It took about 4 weeks to collect all the data. The user was not able to copy and paste the fixed text, and could close the data collection window when busy.
3.1.2- Free text analysis
Free-text analysis involves continuous or periodic checking of keystroke behavior. It begins when a user logs in to the computer system and continues as long as the user works on the system. We developed a Java application that runs in the background and continuously collects keystroke timings, i.e. key press time and key release time. From time to time a window pops up with the six emotional states; the user is briefly interrupted, the recently typed text is recorded, and the user is asked to select an emotional state.
3.2- Feature Extraction
After gathering the keystroke timing information, an algorithm is used to represent each sample in a recognizable form. The gathered information (the extracted features) is divided into two parts: frequency parameters and timing parameters [5, 16]. Timing features are the duration (hold or dwell) times of keys and the speed of typing. The main advantage of timing features is that isolation can be maintained: there is no need to process the characters that the user types; only timing values are examined, which helps protect the user's privacy. Frequency features show how many times a user presses selected keys, such as the backspace, delete, shift, and alt keys. Some mathematical functions are also included in this application, for example median, range, mean, and standard deviation. A single feature vector is generally created by averaging the parameter values collected from the user during a defined period of time.
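Frequency features of the kind just mentioned can be counted with a short sketch like the following. The key names such as "Backspace" and "Delete" follow a hypothetical logger convention, and the feature names mirror those used later in Table 3.1.

```python
from collections import Counter

def frequency_features(keys):
    """Count frequency features from one typing sample.

    keys: sequence of key names typed during the sample, e.g.
    ["h", "i", "Backspace", "1"]. Special keys are multi-character names.
    """
    counts = Counter(keys)
    return {
        # uses of the backspace and delete keys approximate mistakes
        "num_mistakes": counts["Backspace"] + counts["Delete"],
        "num_digits": sum(c for k, c in counts.items()
                          if len(k) == 1 and k.isdigit()),
        "num_letters": sum(c for k, c in counts.items()
                           if len(k) == 1 and k.isalpha()),
        "key_count": sum(counts.values()),
    }
```

Because only key names and counts are used, no typed text needs to be stored, which is consistent with the privacy argument above.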
Figure 3.1 below shows how we extract keystroke dynamics features and, with the help of those features, identify the emotional states of users.
Fig 3.1: Flow Chart for Emotional Classification
3.2.1- Hold time or dwell time - How long a key is pressed until it is released? In other words, the dwell time (in milliseconds) is the interval between the key press and the key release of the same key.
Duration of each keystroke (D_1) = Key release time (R_1) − Key press time (P_1)
3.2.2- Keystroke Latency or Flight Time -
a) Press time and Press time: Interval between two successive key presses (always positive).
D_2 = P_2 − P_1
Fig 3.2: Keystroke features: Dwell Time and Flight Time
b) Release and Press: Interval between a key release and the next key press (may be negative if the next key is pressed before the previous key is released).
D_3 = P_2 − R_1
c) Release time and Release time: Interval between two consecutive key releases.
D_4 = R_2-R_1
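Given a log of key events, the dwell time and the three flight-time variants above can be computed as in this sketch. The event format (key name, press time, release time) is a hypothetical logger convention.

```python
def timing_features(events):
    """Compute per-keystroke timing features.

    events: list of (key, press_time_ms, release_time_ms) in typing order.
    Returns four lists: dwell times, press-press, release-press, and
    release-release intervals between consecutive keystrokes.
    """
    dwell = [r - p for _, p, r in events]                               # D1 = R1 - P1
    pp = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]  # D2 = P2 - P1
    rp = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]  # release-press (may be negative)
    rr = [events[i + 1][2] - events[i][2] for i in range(len(events) - 1)]  # D4 = R2 - R1
    return dwell, pp, rp, rr
```

Note that the release-press interval goes negative whenever the typist overlaps keys, which is itself a useful behavioural signal.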
Table 3.1 Obtained keystroke features and their explanation

Keystroke Feature	Explanation
Num_characters	Number of characters used in one sample
Num_mistakes	Number of mistakes found (uses of the backspace and delete keys)
Num_digits	Number of digits found
Num_symbols	Number of symbols found
Num_letters	Number of letters found
Key count	Total number of keys pressed
ASCII code	Every key has a fixed ASCII value
Dwell time	Time difference between the key press time and key release time of one key
Flight time	Time between the release of a key and the press of the next key
From the extracted timing information (dwell time, hold time, press-press time, etc.) we compute the minimum, maximum, mean, mode, median, standard deviation, and variance. These summary features were used to build the training data set.
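These summary statistics can be computed with Python's standard statistics module, as in the following sketch (the dictionary keys are our own naming):

```python
import statistics

def summarize(values):
    """Collapse a list of timing values (e.g. dwell times, in ms)
    into the summary statistics used to build one feature vector."""
    return {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "mode": statistics.mode(values),
        "stdev": statistics.pstdev(values),      # population std. deviation
        "variance": statistics.pvariance(values),
    }
```

Applying summarize to each timing list (dwell times, press-press latencies, and so on) and concatenating the results yields one fixed-length feature vector per labeled sample.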
3.3 - Creating a training set from the labeled data
Our application uses two main methods of emotional state labeling: in one approach a human labels the emotion; the second approach is automatic labeling. There are various types of labels, i.e. excitement, lack of interest, joy, surprise, and so on, where people select from an already defined set of word labels in a questionnaire. The questionnaires may use scales offering a range of responses to each question , or the Self-Assessment Manikin (SAM) technique, which represents user emotion graphically via valence, arousal, and dominance .
In our application, we create a training data set whose labels are assigned according to questionnaires given to the users, designed for the best accuracy of the trained system.
3.4 Classifier Learning
After extracting the users' typing features and creating their profile templates, the classification process is performed to find the similarities and differences between the user's template stored at the enrollment phase and the sample provided during the session in which the system is used. Various approaches are used for recognition through keystroke dynamics and for classifier creation, ranging from simple statistical methods to more complex pattern recognition and neural network algorithms.
Simple statistical methods were used for classifying typing behaviour in several free-text keystroke studies. A variety of distance measures have been used: the Euclidean distance , weighted Euclidean distance , Manhattan distance , and Bhattacharyya distance  were all used to measure the similarity between samples.
3.4.1 Distance Based Classification
Feature vectors are extracted to represent the typing characteristics; they are then classified for verification and identification purposes. Some research mainly used the nearest neighbor classifier  with different distance functions over the keystroke features. These distance functions are described as follows.
The Euclidean distance is a simple and straightforward method, but it has two main limitations:
• It is highly sensitive to scale variations in the feature variables.
• It has no means to deal with correlation between feature variables.
The Manhattan distance metric between two feature vectors X = (x_1, …, x_n) and Y = (y_1, …, y_n) is defined as follows:
d(X, Y) = ∑ | x_i − y_i |
The Manhattan distance has the benefit of straightforward computation and easy decomposition into the contribution made by each variable. Most significantly, it is more robust to the influence of outliers when compared to higher-order distance metrics such as the Euclidean distance and the Mahalanobis distance.
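A minimal nearest-template classifier using the Manhattan distance might look like the following sketch. The labels and template vectors are illustrative; in our setting each template would be the mean feature vector of one emotional state.

```python
def manhattan(x, y):
    """Manhattan (L1) distance between two equal-length feature vectors."""
    return sum(abs(a - b) for a, b in zip(x, y))

def classify(sample, templates):
    """Nearest-template classification.

    templates: dict mapping a class label (e.g. an emotional state)
    to its mean feature vector. Returns the label whose template is
    closest to the sample under the Manhattan distance.
    """
    return min(templates, key=lambda label: manhattan(sample, templates[label]))
```

Because the L1 distance is a plain sum of per-feature deviations, the contribution of each feature to the final decision can be inspected directly, which is the decomposition property noted above.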
Our approach involves:
• Presenting users with a typing task (fixed text and free text),
• Recording keystroke timing information,
• Extracting features suitable for training and testing a classifier, and
• Training the classifier on one portion of the typing data and testing it on another.
4. DESIGN
4.1 Use Case Diagram
The use case diagram is one of the most important software engineering artifacts; it is used to identify the primary elements and processes that form the system. The primary elements are called actors and the processes are called use cases. A use case diagram represents the interaction of the actors with each use case and demonstrates the functional elements of a system; basically, it captures the business-specific processes and functionality of the system. The main goal of this approach is to discover the significant properties of the system being modeled in the use case model. The use case model is not a technical artifact: at this point no implementation or design activities are involved.
Basic concept of a Use Case Diagram:
A use case model is simple in nature and is basically divided into two types of elements: one shows the business roles (actors) and the other shows the business processes (use cases).
• Actors: An actor in a use case model interacts with a use case. For illustration, in designing a restaurant application, a customer entity that uses some service represents an actor in that application. Similarly, the person who provides service at the customer counter also represents an actor.
If an entity does not affect a certain piece of functionality being modeled, it is not represented as an actor. An actor is drawn as a stick figure in a use case model, depicted "outside" the system boundary.
• Use cases: A use case in a use case model is a visual representation of a distinct piece of business functionality in a system. Once the business functionality has been analyzed, the underlying use cases become simpler to identify. A use case is shown as an ellipse in a use case diagram.
Fig 4.1: Use case diagram for administrator
Fig 4.2: Use case diagram for user
4.2 Activity Diagram
Fig 4.3: Activity diagram for administrator
Fig 4.4: Activity diagram for user
4.3 Sequence Diagram
Fig 4.5: Login sequence diagram
Fig 4.6: Registration sequence diagram