
SAI Intelligent Systems Conference 2016
September 21-22, 2016 | London, UK

NRCS: Neutrosophic Rule-based Classification System

Sameh H. Basha
Faculty of Science, Cairo University, Egypt
Scientific Research Group in Egypt (SRGE), http://www.egyptscience.net
Email: [email protected]

Areeg S. Abdalla
Faculty of Science, Cairo University, Egypt
Email: [email protected]

Aboul Ella Hassanien
Faculty of Computers and Information, Cairo University, Egypt
Scientific Research Group in Egypt (SRGE), http://www.egyptscience.net
Email: [email protected]

Abstract—This article presents a Neutrosophic Rule-based Classification System in which neutrosophic logic (NL) is used to represent several forms of knowledge. The presented system generalizes fuzzy rule-based classification by describing every logical variable with its truth, indeterminacy, and falsity degrees, which are obtained from truth, indeterminacy, and falsity membership functions extracted from the fuzzy trapezoidal membership function. This is followed by the extraction of the "IF-THEN" rules used in the classification phase. The proposed neutrosophic rule-based classification system has been tested on different datasets: the Iris, the Wine, and the Wisconsin Diagnostic Breast Cancer (WDBC). Comparisons between the proposed system and the fuzzy rule-based classification system have been made. On average, the proposed system achieved an accuracy of 94.7% compared with 89.5% for the fuzzy rule-based classification system.

Keywords—Neutrosophic rule-based classification system, Neutrosophic set, Neutrosophic logic, Neutrosophic classification system.

I. INTRODUCTION

In any data-related application, there are huge amounts of data with many overlapping features among objects. The data may also contain redundant, unneeded, incomplete, imprecise, or inconsistent values. This leads to high complexity in analyzing the data and may decrease the accuracy of the analysis; some hidden relations between objects might also not appear in the knowledge base. As a result, traditional statistical techniques and data management tools are no longer adequate for analyzing these massive datasets [12]. Fuzzy set theory [20] is one of many theories proposed to deal with such imprecision in information. Each of these theories handles only one aspect of imprecision at a time; none captures all types of imprecision at once. For example, fuzzy set theory handles vague information but cannot handle inconsistency in information, whereas neutrosophic logic can handle the uncertain, incomplete, inconsistent, and imprecise information that exists in the real world.

Fuzzy rule-based systems have been widely used in applications that involve rule-based reasoning. Research on rule-based systems therefore focuses on improving them, for example through evolutionary techniques as in E. Zhou and A. Khotanzad [21]. They proposed a hybrid methodology combining a fuzzy rule-based classifier with genetic algorithms (GAs), used to find the optimum parameters of the fuzzy classifier. They introduced a new representation for the fuzzy membership functions according to the fuzzy rules, as well as a new representation of the size and structure of the fuzzy rules exploited by the GA. They also developed an efficient measure for adding and deleting fuzzy rules throughout the GA process. Their classifier was tested on only two real-world databases, the Iris and Wine datasets.

Kunjal Mankad et al. [13] designed and developed a system that evolves rules using a genetic-fuzzy approach. Their work shows the benefits of hybrid genetic-fuzzy systems. They also proposed a framework that automatically evolves the rules, greatly reducing development effort.

Yi-Chung Hu et al. [15] proposed a methodology that finds a better set of fuzzy rules for classification problems using data mining techniques. They used the Apriori algorithm, divided the quantitative attributes to find the frequent fuzzy grids with a pre-specified range of varied values, and generated the classification fuzzy rules from these frequent fuzzy grids.

Oscar Cordón and Francisco Herrera [6] proposed precise linguistic modeling based on two assumptions: first, a small change in the structure of the linguistic model to improve its accuracy; second, a different way of building the knowledge base to improve rule cooperation, based on generating an initial fuzzy rule set with a large number of single- and double-consequent rules and then selecting the subset of them that cooperates best. For these kinds of linguistic models they also introduced two variants of an automatic design methodology, based on a well-known inductive fuzzy rule generation process and a genetic process for selecting rules. They permit certain combinations of antecedents to have two consequents associated with them, but only in those cases in which it is truly necessary to improve the model accuracy


in this subspace and not in all the possible ones as in [8]. Therefore, the existence of a primary and a secondary fuzzy rule base is avoided, and the number of rules in the single knowledge base is reduced, which makes the model easier to interpret. The roles of the conjunction and implication operators are played by the minimum t-norm, and the center of gravity weighted by the matching degree [7] is used as the defuzzification method.

Another research direction is to hybridize fuzzy sets with rough sets in order to improve classification efficiency, as Manish Sarkar did in [17]. He used fuzzy-rough uncertainty to enhance the classification efficiency of the K-nearest-neighbor algorithm. His approach retains the simplicity and nonparametric character of the K-nearest-neighbor algorithm, does not require an optimal value of K, and the generated class confidence values do not necessarily add up to one.

Hassanien et al. [1] proposed a fuzzy rule generation approach based on granular computing using rough mereology (FRGAGCRM). Their system works in two stages. The first, pre-processing, phase uses a fuzzification method that maps the numeric dataset into a categorical dataset according to a membership function defined in their paper. The second stage consists of a rough mereology phase and a rule generation phase.

Fuzzy classification systems have attracted many researchers because of their ability to use simple rules and to overcome the limitations of symbolic and crisp rule-based classifiers. The neutrosophic classifier is an extension of the fuzzy classifier that uses neutrosophic logic, a more general logic capable of effectively handling indeterminacy and stochastic acquisition errors that fuzzy logic cannot deal with.

Neutrosophic logic, introduced by Smarandache [19], handles incomplete and inconsistent information without any trivialization [9]. This paper presents a neutrosophic rule-based classification system in which the antecedents and consequents of the rules are composed of neutrosophic logic statements instead of fuzzy ones.

Section II presents an introduction to neutrosophic logic. Section III introduces the proposed neutrosophic classification system, describing the neutrosophic membership functions used in this paper and the different phases of the proposed classifier. Section IV presents the experimental results and compares them with the corresponding fuzzy rule-based classification system. Finally, Section V presents the conclusion and discusses future work.

II. NEUTROSOPHIC LOGIC

Neutrosophy is a growing branch of philosophy, with old roots, dealing with the origin, nature, and scope of neutralities and their interactions with various ideational spectra [9]. This theory considers every idea <A>, its opposite <AntiA>, and the spectrum of neutralities <NeutA>


(the ideas located between the two extremes, neither <A> nor <AntiA>). The <NeutA> and <AntiA> ideas together are referred to as <NonA> [9]. The neutrosophic theory relaxes every idea <A> and neutralizes it by the <AntiA> and <NonA> ideas, as a state of equilibrium [9]. Neutrosophic logic was developed to rigorously represent mathematical models of many types of uncertainty, vagueness, ambiguity, imprecision, incompleteness, inconsistency, redundancy, and contradiction [19].

In neutrosophic logic, each proposition has a percentage of truth, a percentage of indeterminacy, and a percentage of falsity in the subsets T, I, and F respectively. T, I, and F are standard or non-standard real subsets of the non-standard unit interval ]⁻0, 1⁺[ [2] [14], where

sup T = t_sup, inf T = t_inf,
sup I = i_sup, inf I = i_inf,
sup F = f_sup, inf F = f_inf,

and

n_sup = t_sup + i_sup + f_sup,
n_inf = t_inf + i_inf + f_inf.

In neutrosophy, neutrosophic logic, neutrosophic sets, neutrosophic probability, and neutrosophic statistics, the neutrosophic components T, I, and F represent the truth, indeterminacy, and falsity values respectively [4]. For simplicity, in real-world applications the non-standard unit interval ]⁻0, 1⁺[ is replaced by the standard real interval [0, 1] [2].

Neutrosophic logic offers a better way to represent human thinking. Humans do not decide in definite environments; rather, imprecision evolves in their decisions, which is always needed in real life because of the several types of uncertainty and imperfect knowledge that humans have [2]. The basic concepts of neutrosophic sets were introduced by Smarandache in [19] [18] and by Salama et al. in [16] [10] [11] [3]; they provide the foundation for mathematically treating the neutrosophic ideas that exist in our world.

We now present the definition of a neutrosophic set [9]: "For a space of points X, with x in X, a neutrosophic set A in X is represented by its truth-membership function T_A, an indeterminacy-membership function I_A, and a falsity-membership function F_A, where T_A(x), I_A(x), and F_A(x) are real standard or non-standard subsets of ]⁻0, 1⁺[." That is:

T_A : X → ]⁻0, 1⁺[
I_A : X → ]⁻0, 1⁺[
F_A : X → ]⁻0, 1⁺[

Since each membership value lies in ]⁻0, 1⁺[ and there is no restriction on their sum,

⁻0 ≤ sup T_A(x) + sup I_A(x) + sup F_A(x) ≤ 3⁺.
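As a concrete illustration of these components, the following Python sketch (illustrative only; the names are not from the paper) stores a single-valued neutrosophic triple on the standard interval [0, 1] and checks the relaxed sum constraint:

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    """A single-valued neutrosophic triple <T, I, F> on [0, 1].

    The non-standard interval ]-0, 1+[ is replaced by [0, 1] here,
    as the paper does for real-world applications."""
    t: float  # truth degree
    i: float  # indeterminacy degree
    f: float  # falsity degree

    def is_valid(self) -> bool:
        # Each component lies in [0, 1]; the components are independent,
        # so their sum may reach 3 (no sum-to-one restriction).
        in_range = all(0.0 <= v <= 1.0 for v in (self.t, self.i, self.f))
        return in_range and (self.t + self.i + self.f) <= 3.0

# Example: a proposition that is fairly true, somewhat indeterminate, slightly false.
v = NeutrosophicValue(t=0.7, i=0.4, f=0.2)
print(v, v.is_valid())
```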


III. THE PROPOSED NEUTROSOPHIC RULE-BASED CLASSIFICATION SYSTEM

Knowledge-based systems using IF-THEN rules have long been used successfully to represent many human problems; they are among the best ways to model brain activity and adaptive behavior. These traditional approaches to knowledge representation were all based on two-valued Boolean logic. However, Boolean logic is not able to deal with several kinds of uncertainty and imprecision [5].

Fig. 1: Neutrosophic Rule-based System structure

The generic structure of an NRCS is shown in Figure 1. It consists of: (1) Neutrosophication, which converts the crisp input into a linguistic variable using the three membership functions stored in the neutrosophic knowledge base: a truth membership function, an indeterminacy membership function, and a falsity membership function; (2) the Inference Engine, which uses IF-THEN type neutrosophic rules to convert the neutrosophic input into a neutrosophic output; and (3) Deneutrosophication, which converts the neutrosophic output of the inference engine back to a crisp value using three functions analogous to those used by the neutrosophication. The knowledge base (KB) stores the available knowledge about the problem in the form of neutrosophic "IF-THEN" rules, and it is divided into two different kinds of information: (a) the neutrosophic rule semantics, in the form of neutrosophic sets, and (b) the linguistic rules that represent the expert knowledge.
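To make this structure concrete, here is a minimal Python sketch of the three stages. The class and method names are illustrative only, and the inference and deneutrosophication steps are left as placeholders, since the text above describes them only at this structural level:

```python
# Minimal sketch of the three-stage NRCS structure (names are illustrative).

class NRCS:
    def __init__(self, truth_mf, indet_mf, false_mf, knowledge_base):
        # The knowledge base holds the neutrosophic IF-THEN rules together
        # with the neutrosophic set semantics (here: the three membership functions).
        self.mfs = (truth_mf, indet_mf, false_mf)
        self.kb = knowledge_base

    def neutrosophicate(self, x):
        """Crisp input -> neutrosophic triple <T, I, F>."""
        t_mf, i_mf, f_mf = self.mfs
        return (t_mf(x), i_mf(x), f_mf(x))

    def infer(self, triple):
        """Apply the IF-THEN neutrosophic rules (placeholder)."""
        raise NotImplementedError

    def deneutrosophicate(self, out_triple):
        """Neutrosophic output -> crisp value (placeholder)."""
        raise NotImplementedError
```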

A. Neutrosophic membership function

We begin with a trapezoidal membership function, as used in fuzzy systems (Fig. 2), and extract three membership functions from it: the truth, indeterminacy, and falsity membership functions, as in Fig. 3. Fig. 4 shows both the truth-membership and the indeterminacy-membership functions. The extracted indeterminacy-membership, truth-membership, and falsity-membership functions are shown in Fig. 5, Fig. 6, and Fig. 7, respectively.
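Since the figures are not reproduced here, the following sketch shows one plausible way such functions could be derived from a trapezoidal membership function. The particular formulas chosen for the indeterminacy and falsity degrees are assumptions for illustration, not the construction used in the paper:

```python
def trapezoid(x, a, b, c, d):
    """Standard trapezoidal membership function with plateau between b and c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

def truth(x, a, b, c, d):
    return trapezoid(x, a, b, c, d)

def falsity(x, a, b, c, d):
    # Illustrative choice: falsity as the complement of the truth degree.
    return 1.0 - truth(x, a, b, c, d)

def indeterminacy(x, a, b, c, d):
    # Illustrative choice: indeterminacy is highest on the sloping
    # (transition) regions of the trapezoid and zero on the plateau.
    t = truth(x, a, b, c, d)
    return 0.0 if t in (0.0, 1.0) else 2.0 * min(t, 1.0 - t)

# Example: an attribute scaled to [2, 8] with plateau between 4 and 6.
print(truth(3.0, 2, 4, 6, 8), indeterminacy(3.0, 2, 4, 6, 8), falsity(3.0, 2, 4, 6, 8))
```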

The proposed approach consists of five phases: Extracting Information, Neutrosophication, Generating a Set of Rules, Classification, and Performance Measure. They are described in more detail in the following subsections.

Fig. 2: Trapezoidal membership function.

Fig. 3: Determining Truth, Indeterminacy, and Falsity membership functions.

B. Extracting Information phase

Here, the important features of the dataset are extracted; for example, the minimum and maximum of each attribute, which will be needed in phase C.

C. Neutrosophication phase

The three extracted neutrosophic membership functions are applied to the dataset, so that each value is represented by the three neutrosophic components <T, I, F>. These components are restricted to be subsets of the standard real interval [0, 1] instead of ]⁻0, 1⁺[.
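A hedged sketch of this phase is given below. It reuses the attribute minimum and maximum extracted in phase B to place the trapezoid corners; the exact Low/Medium/High partitioning of each attribute used in the paper is not reproduced, so the corner placement here is assumed for illustration:

```python
def neutrosophicate_column(values, truth_mf, indet_mf, false_mf):
    """Represent each crisp attribute value by a <T, I, F> triple.

    The trapezoid corners are derived from the attribute's observed
    min/max (the quantities extracted in phase B); the 25%/75% split
    is an illustrative assumption, not the paper's partitioning."""
    lo, hi = min(values), max(values)
    a, b = lo, lo + 0.25 * (hi - lo)
    c, d = lo + 0.75 * (hi - lo), hi
    return [(truth_mf(x, a, b, c, d),
             indet_mf(x, a, b, c, d),
             false_mf(x, a, b, c, d)) for x in values]

# Usage: pass the truth/indeterminacy/falsity functions from the previous sketch.
```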

D. Generating a set of rules phase

In this phase we generate two types of rules, training rules and testing rules; for testing, the exact rules are also generated.


Fig. 4: Determining indeterminacy membership function.

Fig. 5: The indeterminacy-membership function.

In a neutrosophic rule-based system, each rule contains some attributes, and each attribute in the rule has three values in the form <T, I, F> representing the degree of truth, the degree of indeterminacy, and the degree of falsity, as shown in Fig. 8. Rules are added to the training rule set after checking for redundancy; therefore, the number of generated training rules is in most cases less than the number of training instances, as shown in Table I. For the testing rules, on the other hand, no redundancy check is needed, but the class labels are removed.
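The redundancy check can be illustrated as follows. The rule representation (a dictionary pairing an antecedent with a class label) is a simplification assumed for this sketch:

```python
def generate_training_rules(instances, labels):
    """Build the training rule set, skipping redundant (duplicate) rules.

    Each instance is a sequence of per-attribute entries (here the
    <T, I, F> triples); a rule pairs that antecedent with its class label.
    Testing rules are built the same way but without the redundancy check
    and with the class labels removed."""
    rules, seen = [], set()
    for triples, label in zip(instances, labels):
        key = (tuple(tuple(t) for t in triples), label)
        if key in seen:
            continue          # redundant rule: already in the training set
        seen.add(key)
        rules.append({"antecedent": [tuple(t) for t in triples], "class": label})
    return rules
```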

TABLE I: The number of training rules generated for Iris, Wine, and Wdbc data sets.


Fig. 6: The truth-membership function.

Fig. 7: The falsity-membership function.

if sepal length is [ Medium , IndeterminacyLowMedium , FalseMedium ] and if sepal width is [ Low , 0, 0 ] and if petal length is [ High , IndeterminacyMediumHigh , FalseHigh ] and if petal width is [ High , 0, 0 ] Then Class is: [Iris-virginica].

if sepal length is [ Medium , 0, 0 ] and if sepal width is [ Low , 0, 0 ] and if petal length is [ High , IndeterminacyMediumHigh , FalseHigh ] and if petal width is [ Medium , 0, 0 ] Then Class is: [Iris-virginica].

if Alcohol is [ High , 0, 0 ] and if Malic acid is [ Low , 0, 0 ] and if Ash is [ Medium , 0, 0 ] and if Alcalinity of ash is [ Low , 0, 0 ] and if Magnesium is [ Medium , IndeterminacyMediumHigh , FalseMedium ] and if Total phenols is [ Medium , IndeterminacyMediumHigh , FalseMedium ] and if Flavanoids is [ Medium , 0, 0 ] and if Nonflavanoid phenols is [ Low , 0, 0 ] and if Proanthocyanins is [ Medium , 0, 0 ] and if Color intensity is [ Medium , IndeterminacyLowMedium , FalseMedium ] and if Hue is [ Medium , 0, 0 ] and if OD280/OD315 of diluted wines is [ High , 0, 0 ] and if Proline is [ Medium , 0, 0 ] Then Class is: [1].

if x2 is [ Low , 0, 0 ] and if x3 is [ Low , 0, 0 ] and if x4 is [ Low , 0, 0 ] and if x5 is [ Low , 0, 0 ] and if x6 is [ Medium , IndeterminacyLowMedium , FalseMedium ] and if x7 is [ Low , 0, 0 ] and if x8 is [ Low , 0, 0 ] and if x9 is [ Low , 0, 0 ] and if x10 is [ Low , IndeterminacyLowMedium , FalseLow ] and if x11 is [ Low , 0, 0 ] and if x12 is [ Low , 0, 0 ] and if x13 is [ Low , 0, 0 ] and if x14 is [ Low , 0, 0 ] and if x15 is [ Low , 0, 0 ] and if x16 is [ Low , 0, 0 ] and if x17 is [ Low , 0, 0 ] and if x18 is [ Low , 0, 0 ] and if x19 is [ Low , 0, 0 ] and if x20 is [ Low , 0, 0 ] and if x21 is [ Low , 0, 0 ] and if x22 is [ Low , 0, 0 ] and if x23 is [ Low , 0, 0 ] and if x24 is [ Low , 0, 0 ] and if x25 is [ Low , 0, 0 ] and if x26 is [ Low , IndeterminacyLowMedium , FalseLow ] and if x27 is [ Low , 0, 0 ] and if x28 is [ Low , 0, 0 ] and if x29 is [ Low , 0, 0 ] and if x30 is [ Low , 0, 0 ] and if x31 is [ Low , 0, 0 ] Then Class is: B.

Fig. 8: Sample training rules in neutrosophic form for the three datasets: Iris, Wine, and Wdbc.

Data Set        Number of training samples    Number of generated training rules
Iris DataSet            75                                 51
Wine DataSet            89                                 89
Wdbc DataSet           284                                269

E. Classification Phase

After the rule generation phase we have the training rules and the testing rules (with their class labels removed). In the classification phase, the testing rule matrix is constructed.


Then, in order to predict the class label, each rule in the testing set is compared with all rules in the training rule set. If none of the training rules matches the current testing rule in at least half of the attributes, the corresponding rule is added from the exact rule set to the training rule set.
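A sketch of this matching step is given below. The per-attribute matching test (equality of the entries) and the best-match selection are assumptions for illustration, while the half-of-the-attributes fallback follows the description above:

```python
def classify(test_rule, training_rules):
    """Predict a class by matching a testing rule against the training rules.

    A training rule 'matches' on an attribute when its entry equals the
    testing rule's entry; the class of the best-matching rule is returned.
    Returning None signals the fallback to the exact-rules set described above."""
    n_attrs = len(test_rule)
    best_rule, best_score = None, -1
    for rule in training_rules:
        score = sum(1 for a, b in zip(test_rule, rule["antecedent"]) if a == b)
        if score > best_score:
            best_rule, best_score = rule, score
    if best_score < n_attrs / 2:
        return None   # no rule matches at least half of the attributes
    return best_rule["class"]
```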

F. Performance measure phase

We have the testing matrix from phase D and the exact matrix from phase C. To compute the confusion matrix, which is used to measure the classifier performance and to derive measures such as precision, sensitivity, and specificity for each class, we compare the testing matrix with the exact matrix.
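The per-class measures derived from the confusion matrix can be computed as in the following sketch (these are the standard definitions, not something specific to this paper):

```python
def per_class_metrics(y_true, y_pred, classes):
    """Confusion-matrix based precision, sensitivity, and specificity per class."""
    metrics = {}
    pairs = list(zip(y_true, y_pred))
    for c in classes:
        tp = sum(1 for t, p in pairs if t == c and p == c)
        fp = sum(1 for t, p in pairs if t != c and p == c)
        fn = sum(1 for t, p in pairs if t == c and p != c)
        tn = sum(1 for t, p in pairs if t != c and p != c)
        metrics[c] = {
            "precision":   tp / (tp + fp) if tp + fp else 0.0,
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
            "specificity": tn / (tn + fp) if tn + fp else 0.0,
        }
    return metrics

# Example usage with three classes:
print(per_class_metrics(["A", "B", "A", "C"], ["A", "B", "C", "C"], ["A", "B", "C"]))
```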

Finally, we calculate the total accuracy, precision, sensitivity, and specificity for each class. In Fig. 9, Fig. 10, Fig. 11, and Fig. 12, we compare the results of our neutrosophic rule-based classification system with those obtained from the corresponding fuzzy rule-based classification system using the membership function in Fig. 2.

Fig. 9: Accuracy of classification for Iris, Wine, and Wdbc data sets in Neutrosophic and Fuzzy

IV. RESULTS AND DISCUSSION

A. Dataset

The neutrosophic rule-based classification system is applied and tested on three widely used public real-world datasets, because of the limited accessibility of private ones. The three public real-world datasets are Iris with 4 attributes, Wine with 13 attributes, and Wisconsin Diagnostic Breast Cancer (WDBC) with 32 attributes, all available on the UCI Machine Learning Repository web site.

B. Evaluation Results

The proposed neutrosophic rule-based classification system generalizes the fuzzy rule-based classification system. A comparison between the two systems shows that the proposed system provides better and more accurate results, as shown in Fig. 9. NL introduces the notion of indeterminacy; therefore there are no overlaps between sets.

Fig. 10: Precision, sensitivity, and specificity for the Iris dataset in the neutrosophic rule-based classification system and the corresponding fuzzy rule-based classification system.

The truth-membership functions (Fig. 6) do not have any overlaps either; all of this means that the proposed system decreases complexity and computational cost. Table II displays the number of generated training rules for each class of the Iris data set, along with the number of training samples used, the number of testing samples used, and the corresponding correct classifications.

A comparison of the results of this table with the corresponding fuzzy system is shown in Fig. 10.

Table III displays the number of generated training rules for each class of the Wine data set, along with the number of training samples used, the number of testing samples used, and the corresponding correct classifications.

Fig. 11. shows a comparison of the results of this table with the corresponding fuzzy system.

Table IV displays the number of generated training rules for each class of the Wdbc data set, along with the number of training samples used, the number of testing samples used, and the corresponding correct classifications.

A comparison of the results of this table with the corresponding fuzzy system is shown in Fig. 12.


Fig. 11: Precision, sensitivity, and specificity for the Wine dataset in the neutrosophic rule-based classification system and the corresponding fuzzy rule-based classification system.

TABLE II: Details of training and testing samples for Iris Data Set in neutrosophic.

Iris classes       Training samples used    Training rules generated    Testing samples used    Correct class    Wrong class
Iris-setosa               25                                                   25                     25               0
Iris-versicolor           25                      51 (total)                   25                     23               2
Iris-virginica            25                                                   25                     24               1


Fig. 12: Precision, sensitivity, and specificity for the Wdbc dataset in the neutrosophic rule-based classification system and the corresponding fuzzy rule-based classification system.

TABLE IV: Details of training and testing samples for Wdbc Data Set in neutrosophic.

Wdbc classes       Training samples used    Training rules generated    Testing samples used    Correct class    Wrong class
M                        106                     266 (total)                  106                     95              11
B                        178                                                  179                    176               3

TABLE III: Details of training and testing samples for Wine Data Set in neutrosophic.

Wine classes       Training samples used    Training rules generated    Testing samples used    Correct class    Wrong class
1                         29                                                    30                     26               4
2                         36                      89 (total)                    35                     30               5
3                         24                                                    24                     24               0

V. CONCLUSIONS AND FUTURE ENHANCEMENT

The proposed Neutrosophic Rule-based Classification System (NRCS) generalizes the fuzzy rule-based classification system and gives better, more accurate results, since the decision class can be determined correctly. The NRCS also reduces the complexity and computational cost of the classifier. In addition, the results show that the NRCS is more robust in classification.

One of the major shortfalls of the NRCS is its inability to learn: it requires the knowledge base to be extracted from expert knowledge. One idea is to hybridize evolutionary techniques with the NRCS to automate the NRCS design. In future work, we aim to build a hybrid


system combining one of the evolutionary techniques with the neutrosophic rule-based classification system, which will use an evolutionary learning process to automate the NRCS and to design and optimize the knowledge base.

References

[1] A. Hassanien, M. Mahmood, N. El-bendary, and H. Hefny, Fuzzy rule generation approach to granular computing using rough mereology, The 5th International Conference on Computer Research and Development(ICCRD 2013), 6 pages, Feb 23–24, 2013.

[2] A. Ansari, R. Biswas, and S. Aggarwal, Neutrosophic classifier: An extension of fuzzy classifier, Applied Soft Computing, vol. 13, no. 1, pp. 563–573, 2013.

[3] A. Salama and S. Alblowi, Generalized neutrosophic set and generalized neutrosophic spaces, Journal Computer Sci. Engineering, vol. 2, no. 7, pp. 129–132, 2012.

[4] C. Ashbacher, Introduction to neutrosophic logic, American Research Press, Sep 1, 2002.

[5] O. Cordón, Genetic fuzzy systems: evolutionary tuning and learning of fuzzy knowledge bases, Advances in Fuzzy Systems - Applications and Theory series, WSPC, vol. 19, 2002.

[6] O. Cordón and F. Herrera, A proposal for improving the accuracy of linguistic modeling, IEEE Transactions on Fuzzy Systems, vol. 8, pp. 335–344, June 2000.

[7] F. Herrera, O. Cordón, and A. Peregrín, Applicability of the fuzzy operators in the design of fuzzy logic controllers, Fuzzy Sets and Systems, vol. 86, pp. 15–41, 1997.

[8] H. Ishibuchi, K. Nozaki, and H. Tanaka, A simple but powerful heuristic method for generating fuzzy rules from numerical data, Fuzzy Sets and Systems, vol. 86, 1997.

[9] H. Wang, F. Smarandache, R. Sunderraman, and Y. Zhang, Interval neutrosophic sets and logic: theory and applications in computing, Hexis, Neutrosophic Book Series, 1st edition, 2005.

[10] I. Hanafy, A. Salama, and K. Mahfouz, Correlation of neutrosophic data, International Refereed Journal of Engineering and Science (IRJES), vol. 1, no. 2, pp. 33–39, 2012.

[11] I. Hanafy, A. Salama, and K. Mahfouz, Neutrosophic classical events and its probability, International Journal of Mathematics and Computer Applications Research(IJMCAR), vol. 3, no. 1, pp. 171–178, 2013.

[12] S. Jian, and Y. Lin, Hybrid rough sets and applications in uncertain decision–making, Auerbach Publications, 1st edition, 2010.

[13] K. Mankad, P. Sajja, and R. Akerkar, Evolving rules using genetic fuzzy approach - an educational case study, International Journal on Soft Computing (IJSC), vol. 2, no. 1, 2011.

[14] A. Robinson, Non-standard analysis, Princeton Landmarks in Mathematics and Physics series, 1996.

[15] R. Chen, Y.C. Hu, and G.H. Tzeng, Finding fuzzy classification rules using data mining techniques, Pattern Recognition Letters, vol. 24, pp. 509-519, 2003.


[16] S. Alblowi, A. Salama, and M. Eisa, New concepts of neutrosophic sets, International Journal of Mathematics and Computer Applications Research (IJMCAR), vol. 4, no. 1, pp. 59-66, 2014.

[17] M. Sarkar, Fuzzy-rough nearest neighbor algorithms in classification, Fuzzy Sets and Systems, vol. 158, pp. 2134–2152, 2007.

[18] F. Smarandache, Neutrosophic set, a generalization of the intuitionistic fuzzy sets, International Journal of Pure and Applied Mathematics, vol. 24, pp. 287–297, 2005.

[19] F. Smarandache, A unifying field in logics: neutrosophic logic. Neutrosophy, neutrosophic set, neutrosophic probability, American Research Press, 3rd edition, Dec 31, 2003.

[20] L. Zadeh, Fuzzy sets, Information and Control, vol. 8, pp. 338–353, 1965.

[21] E. Zhou, and A. Khotanzad, Fuzzy classifier design using genetic algorithms, Pattern Recognition, vol. 40, pp. 3401–3414, 2007.

