Chapter 1
Introduction
The purpose of this chapter is to provide a basic understanding of the terms used in the Semantic Web, ontologies and rule acquisition.
Semantic Web
Since its creation as a network of hyperlinks and multimedia, the Web has been continuously evolving through Web 2.0 toward Web 3.0 [29]. The Semantic Web, a key component of Web 2.0 and Web 3.0, is an evolving development of the World Wide Web in which the semantics of the information and services on the Web are being defined. This enables the Web to understand and satisfy the requests of people and machines to use Web content [2].
Definition:
“The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.” (Berners-Lee et al. 2001).
“The Semantic Web is a vision: the idea of having data on the Web defined and linked in a way that it can be used by machines not just for display purposes, but for automation, integration and reuse of data across various applications.” (www.w3c.org)
These definitions emphasize:
• well-defined meaning of the information
• machines use the information (automation)
• collaboration through data integration and reuse of data
The Semantic Web project aims to enable machines to access the available information more efficiently and effectively. Researchers and developers are working on two options towards realising this vision. In the first, the machine reads and processes a machine-sensible specification of the semantics of the information. In the second, Web application developers embed the domain knowledge into the software, enabling machines to perform their assigned tasks correctly.
Semantic Web Architecture
The preceding ideas and standards for completing the Web are being put into practice under the supervision of the World Wide Web Consortium. To reduce the amount of standardisation needed and to increase reuse, the Semantic Web technologies are organised into the layers shown in the Figure. The two base layers are inherited from the existing Web. The remaining layers build the Semantic Web proper, and the top one adds trust to complete a Semantic Web of trust.
The Semantic Web layers are organised in increasing order of difficulty from bottom to top, and the functionality of higher layers relies on that of the lower ones. This design approach facilitates scalability and encourages using the simplest tools adequate for the task at hand. All the layers are detailed in the subsequent subsections.
Semantic Web Stack, from Tim Berners-Lee presentation for Japan Prize, 2002
URI and UNICODE
The two technologies that constitute this layer are taken directly from the World Wide Web. URIs offer global identifiers, and Unicode is a character-encoding standard that supports international characters.
In short, this layer provides the global addressing scheme, already present in the WWW, to the Semantic Web.
XML and Namespaces
The Semantic Web should integrate smoothly with the existing Web; consequently, it must coexist with Web documents. HTML is not expressive enough to capture all that the Semantic Web must express. XML, a more general markup language, may be used as the serialisation syntax for the Semantic Web. XML was the first syntax tried, but other possibilities have since been developed; they are presented and compared in the next section.
XML assists in integrating Semantic Web documents into the current HTML/XML Web. The other possibilities are N-Triples and the Notation 3 syntax, http://www.w3.org/DesignIssues/Notation3.html.
Namespaces were introduced to XML, together with XML Schemas, to increase its modularisation and the reuse of XML vocabularies. They are used within the Semantic Web for the same purpose.
RDF Model and Syntax
The RDF Model and Syntax specification [Becket04] defines the building blocks for realising the Semantic Web. It is the first layer developed specifically for the Semantic Web. The specification defines the RDF abstract syntax and the RDF graph model.
The RDF graph model defines a structure consisting of nodes and directed edges between them. Together they form directed graphs that model the network of terms, and the associations between terms, of the Semantic Web. Nodes and relations are called resources and are identified by URIs. Each node has its own URI, and the different types of relations also have URIs; they are called properties. The Figure shows an example of the RDF graph model.
RDF Graph Model example
Each edge is identified by the triad composed of the source node, the property and the target node. These triads are called triples, or RDF statements, and they form the RDF abstract syntax. A graph can be arranged as a set of triples, one for each edge in the graph. Both representations are equivalent, so the graph model can be reconstructed from the set of triples. A triple can also be assigned an explicit identifier, that is, a URI; this procedure is called reification.
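The triple representation just described can be sketched in a few lines of Python; the graph and the "ex:" names below are purely illustrative.

```python
# An RDF graph as a set of (subject, property, object) triples.
# The "ex:" names are illustrative, not from a real vocabulary.
graph = {
    ("ex:Bob", "ex:worksFor", "ex:ACME"),
    ("ex:ACME", "ex:locatedIn", "ex:London"),
}

def edges_from(graph, node):
    """All (property, target) edges leaving a node: the graph model can be
    reconstructed from the set of triples, node by node."""
    return {(p, o) for (s, p, o) in graph if s == node}

print(edges_from(graph, "ex:Bob"))  # {('ex:worksFor', 'ex:ACME')}
```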
RDF Schema
RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes.
Plain RDF provides the tools to build semantic networks, a knowledge representation technology. Nevertheless, several semantic network facilities are still missing from RDF.
In particular, RDF defines no taxonomical relations. These are provided by the RDF Schema specification [Brickley04]. Taxonomical relations raise RDF to a knowledge representation language with capabilities similar to semantic networks, enabling taxonomical reasoning about resources and the properties that relate them.
The RDF Schema specification provides some primitives from semantic networks for specifying metadata vocabularies. RDF Schemas define metadata vocabularies in a modular way, like XML Schemas. An example RDF Schema is shown in the Figure:
• type: it is a property that links a resource to a Class to which it belongs. The resource is labelled as a member of this Class and thus inherits its features.
• Class: it is a set of things that share some features; they have a common conceptual abstraction. A class models the concepts present at the referential semantic level.
• subClassOf: this property holds the taxonomical relations between classes. If class B is a subclass of class A, then class B has all the typical characteristics of class A plus some specific ones that can distinguish it from A.
For instance, if an RDF graph states that a resource R is a "Mammal", i.e. R has type the class "Mammal", and that "Mammal" is a subclass of "Animal", then it can be deduced by taxonomical reasoning that R is also an "Animal".
• subPropertyOf: this property creates the taxonomy of properties. If property B is a subproperty of property A, then whenever it is stated that the property B holds between two resources it can be deduced that A also holds.
For instance, if an RDF graph says that a resource R is related to another resource S through a relation called "motherOf", and "motherOf" is a subproperty of "parentOf", then it can be deduced that the property "parentOf" also holds between R and S.
• domain, range: both are properties that associate other properties with classes. They constrain the classes to which the associated properties can apply. The domain defines the classes to which the subject resource of a triple containing the property must belong; the range applies the same constraint to the object resource.
Example of RDF Schema
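A minimal sketch of the taxonomical reasoning above, using the Mammal/Animal and motherOf/parentOf examples; the triple encoding is illustrative, not a real RDFS implementation.

```python
# RDFS-style taxonomical reasoning over a set of triples (illustrative data).
triples = {
    ("R", "rdf:type", "Mammal"),
    ("Mammal", "rdfs:subClassOf", "Animal"),
    ("R", "motherOf", "S"),
    ("motherOf", "rdfs:subPropertyOf", "parentOf"),
}

def infer(triples):
    """Apply the rdf:type/subClassOf and subPropertyOf rules to a fixpoint."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in derived:
            if p == "rdf:type":
                # If s has type o and o is a subclass of sup, s has type sup.
                for (c, q, sup) in derived:
                    if c == o and q == "rdfs:subClassOf":
                        new.add((s, "rdf:type", sup))
            # If p is a subproperty of sup, then (s, sup, o) also holds.
            for (sub, q, sup) in derived:
                if sub == p and q == "rdfs:subPropertyOf":
                    new.add((s, sup, o))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

closure = infer(triples)
print(("R", "rdf:type", "Animal") in closure)  # True
print(("R", "parentOf", "S") in closure)       # True
```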
1.1.1 Ontology
An ontology is an enriched, structured form for representing domain knowledge.
An ontology is a hierarchical description of the significant classes in a specific domain, together with a description of the properties of each class. The need for such descriptions has led to an extensive effort to develop a suitable ontology language, culminating in the design of the OWL Web Ontology Language [3].
1.1.2 The Web Ontology Language (OWL)
The Semantic Web relies heavily on ontologies. Concretely, ontologies based on the Description Logics paradigm include definitions of concepts (OWL classes), roles (OWL properties) and individuals. The most common language for formalising Semantic Web ontologies is OWL (Web Ontology Language [8]), a proposal of the W3C. The goal of this standard is to formalise the semantics that was created ad hoc in earlier frame systems and semantic networks.
The Web Ontology Language (OWL) is a family of knowledge representation languages for representing ontologies. OWL takes its formal semantics from Description Logics (DL), a family of knowledge representation languages that can represent the terminological knowledge of an application domain in a structured and formally well-understood way.
The OWL language consists of three sub-languages of increasing expressive power: OWL Lite, OWL DL and OWL Full.
• Lite – partially restricted to aid the learning curve.
• DL – Description Logic; Description Logics are a decidable fragment of First Order Logic (FOL), and decidability allows the use of DL reasoners.
• Full – unrestricted use of OWL constructs, but DL reasoning cannot be performed.
OWL Full – It is entirely compatible with RDF and includes the full set of OWL language primitives.
OWL DL – It is the subset of OWL Full that offers efficient reasoning support; however, it is less compatible with RDF.
OWL Lite – It is a subset of OWL DL with the advantages of easier implementation and understandability; however, it has limited expressivity.
Use of Ontology
An ontology is used as a form of knowledge representation about the world or some part of it. As an ontology defines the concepts within a domain and the relationships between them, it provides a standardized vocabulary for that domain. Ontologies are now central to many applications such as scientific knowledge portals, information management and integration systems, electronic commerce, and Semantic Web services. An ontology provides a shared common understanding of a domain and the means to facilitate knowledge reuse by different applications, software systems and human users (Gómez-Pérez 1996, 1998).
Ontology
Ontologies are necessary when the expressiveness achieved with semantic network-like tools is not enough. Metadata vocabularies defined by RDF Schemas can be considered simplified ontologies. The tools included in this layer raise the developed vocabularies to the category of ontologies. For a comparison with XML Schemas, see the Table.
Ontologies, which were defined in the Knowledge Representation Ontology section, are specially suited to formalise domain specific knowledge. Once it is formalised, it can be easily interconnected with other formalisations. This facilitates the interoperability among independent communities and thus ontologies are one of the fundamental building blocks of the Semantic Web.
Description Logics are particularly suited for ontology creation. They were introduced in the corresponding Knowledge Representation subsection. The World Wide Web Consortium is currently developing a language for web ontologies, OWL [Dean04]. It is based on Description Logics and expressible in RDF so it integrates smoothly in the current Semantic Web initiative.
Description Logics make it possible to develop ontologies that are more expressive than RDF Schemas. Moreover, the computational properties of Description Logic reasoners make efficient classification and subsumption inferences possible.
XML Schemas vs. Ontologies
When comparing ontologies and XML schemas directly, we run the risk of comparing two incomparable things: ontologies are domain models, whereas XML schemas define document structures. Still, when ontologies are applied to on-line information sources their relationship becomes closer. In that setting, ontologies provide a structure and vocabulary for describing the semantics of the information contained in documents. The purpose of XML schemas is to prescribe the structure and valid content of documents but, as a side effect, they also provide a shared vocabulary for the users of a specific XML application.
Differences between ontologies and schema definitions:
• A language for defining ontologies is syntactically and semantically richer than common approaches for databases.
• The information that is described by an ontology consists of semi-structured natural language texts and not tabular information.
• An ontology must be a shared and consensual terminology because it is used for information sharing and exchange.
• An ontology provides a domain theory and not the structure of a data container
SPARQL is a protocol and query language for Semantic Web data sources.
Rules
The rules layer permits inference without the full machinery of logic. The Semantic Web technology for this layer is the Semantic Web Rule Language (SWRL) [Horrocks04]. It is based on an earlier initiative, the Rule Markup Language (RuleML) [Boley01]. Like RuleML, SWRL covers the entire rule spectrum, from derivation and transformation rules to reaction rules. It can thus specify queries and inferences in Web ontologies, mappings between Web ontologies, and dynamic Web behaviours of workflows, services, and agents.
RIF is the W3C Rule Interchange Format, an XML language for expressing Web rules that computers can execute. RIF provides multiple versions, called dialects, including the RIF Basic Logic Dialect (RIF-BLD) and the RIF Production Rule Dialect (RIF-PRD).
Logic
The function of this layer is to provide the features of First Order Logic (FOL). First Order Logic was identified as the most important type of logic in the Logic types section. With FOL support, the Semantic Web has the full capabilities of logic available at a reasonable computational cost, as shown in the Deduction section.
There are some initiatives in this layer. One of the first alternatives was RDFLogic [Berners-Lee03]. It provides some extensions to basic RDF to represent important FOL constructs, for instance the universal (∀) and existential (∃) quantifiers. These extensions are supported by the CWM [Berners-Lee05] inference engine. Another more recent initiative is SWRL FOL [Patel-Schneider04], an extension of the rule language SWRL in order to cope with FOL features.
Proof
The use of inference engines makes the Semantic Web open, in contrast to computer programs that apply the black-box principle. An inference engine can be asked why it has arrived at a conclusion, i.e. it gives proofs of its conclusions.
There is also another important motivation for proofs. The problems posed to inference engines are open questions that may require long or even infinite answer times. This worsens as the reasoning medium moves from simple taxonomical knowledge to full FOL. When possible, this problem can be reduced by providing reasoning engines with pre-built demonstrations, or proofs, that can be checked easily.
Therefore, the idea is to write down proofs when the problem is first faced, when it is easier to solve because the reasoning context is more constrained. Those proofs can then be reused whenever the problem is faced again, as clues that facilitate reasoning in a wider context.
Many inference engines specialised in particular subsets of logic have been presented so far. For instance:
• The Jess production system.
• Prolog for logic programming.
• The CWM inference engine.
• The FaCT Description Logic reasoner.
Trust
This is the topmost layer of the Semantic Web architecture. The trust layer uses all the Semantic Web layers below it. However, those layers do not offer the functionality required to reliably bind statements to the parties responsible for them. This is achieved with additional technologies, shown on the right-hand side of the Semantic Web stack Figure.
The tools used are digital signatures and encryption. Thus, the trust web will make intensive use of Public Key Infrastructures. These are already present in the Web, for instance as digital certificates identifying the parties that sign digital contracts. Nevertheless, their use is not yet widespread.
The premise is that Public Key Infrastructure is not in widespread use because it is not a decentralised web structure; it is hierarchical and therefore rigid. What the Semantic Web might contribute here is a less constraining substrate. The web of trust is based on the graph structure of the Web and supports the dynamic construction of this graph. These features might enable the common use of Public Key Infrastructure in the future Web.
Lastly, the complete Semantic Web architecture consists of reasoning engines complemented with digital signatures, forming trust engines. A Trust Web can then be developed with rules about which signed assertions are trusted, depending on the signer.
Rule Acquisition:
Rule acquisition is the process of obtaining knowledge or rules from websites.
Rule Acquisition General Approaches:
ID3
The best-known divide-and-conquer algorithm is ID3, developed by Quinlan [2]. The key task of ID3 is building a decision tree. Nodes of the tree are labelled with attributes, while arcs are labelled with attribute values. In ID3, the nodes from which to grow the tree are chosen according to the information content of the associated object attributes. The use of the entropy measure gives a rational and effective means of building decision trees.
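The entropy-based attribute selection at the heart of ID3 can be sketched as follows; the toy weather data is illustrative, not Quinlan's original training set.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute, labels):
    """Entropy reduction obtained by splitting `examples` on `attribute`.
    `examples` is a list of dicts; `labels` is the matching class labels."""
    base = entropy(labels)
    remainder = 0.0
    for v in {e[attribute] for e in examples}:
        subset = [l for e, l in zip(examples, labels) if e[attribute] == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# Toy data (illustrative): the outlook attribute perfectly separates classes,
# so splitting on it removes all uncertainty (gain = 1 bit).
examples = [{"outlook": "sunny"}, {"outlook": "sunny"},
            {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["no", "no", "yes", "yes"]
print(information_gain(examples, "outlook", labels))  # 1.0
```

ID3 grows the tree by repeatedly choosing the attribute with the highest information gain at each node.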
RULES-3
RULES-3 extracts classification rules from a set of objects, each belonging to one of a number of given classes. The objects together with their associated classes form the set of training examples from which the algorithm induces general rules. An object must be specified in terms of a fixed set of attributes, each with its own set of possible values. In RULES-3, an attribute-value pair forms a condition. If the number of attributes is na, a rule may contain between one and na conditions, each of which must be a different attribute-value pair. Only conjunctions of conditions are allowed in a rule, so the attributes must all be different if the rule contains more than one condition.
RULES-3 Plus
RULES-3 Plus (RULe Extraction System – 3 Plus version) is a covering algorithm belonging to the RULES family of simple inductive learning algorithms, introduced by Pham and Aksoy [11,12]. Compared to its immediate predecessor RULES-3 [2], RULES-3 Plus has two strong innovative features. First, it employs an efficient rule-searching procedure rather than the exhaustive search conducted in RULES-3. Second, it integrates a simple metric for selecting and sorting candidate rules according to their generality and accuracy; RULES-3 does not use any measure for assessing the information content of rules.
DIRT
DIRT (Discovery of Inference Rules from Text) [24] is a method for discovering inference rules from text that are potentially useful for information retrieval. It uses a dependency tree, a set of dependency relationships [17], to represent the structure of a sentence. By calculating similarities between paths of dependency trees, DIRT finds the paths most similar to a given path and builds inference rules from the results.
TEASE
TEASE [41], a method similar to DIRT, is an unsupervised learning algorithm for Web-based extraction of entailment rules. Because it acquires entailment rules from the Web, it reduces the complexity of applying the approach at larger scales.
Rule Acquisition Using Graph Search algorithm:
Constraint-directed search
Constraint-directed search is an algorithm for solving Constraint Satisfaction Problems (CSPs). It searches the problem space under the guidance of the relationships, limits, and dependencies among problem objects [1]. Conventionally, in a CSP, a heuristic commitment is the assignment of some values to some variables [1]. Heuristics focus on variable ordering and value ordering: which variable should be assigned next, and which value should it be assigned? One popular variable ordering heuristic is to choose the variable with the fewest remaining possible values, i.e. the smallest domain.
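The smallest-domain (minimum-remaining-values) heuristic just described can be sketched in a few lines; the variable names and domains are illustrative.

```python
# Minimum-remaining-values (MRV) variable ordering for a CSP.
def next_variable(domains, assigned):
    """Pick the unassigned variable with the fewest remaining values."""
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: len(domains[v]))

# Toy domains: "y" has only one possible value left, so it is chosen first.
domains = {"x": {1, 2, 3}, "y": {1}, "z": {1, 2}}
print(next_variable(domains, assigned=set()))  # y
```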
Best-first search
Best-first search is one of the most widely used problem-solving techniques in artificial intelligence [23]. It is a graph-based search algorithm [10], meaning that the search space can be represented as a set of nodes connected by paths. It is appropriate for discrete optimization problems in which the state space can be assumed to have a tree structure. Best-first search estimates the promise of a node n with a heuristic evaluation function f(n), which may depend on the information collected by the search up to that point and on any extra knowledge about the problem domain [12].
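A minimal best-first search sketch, keeping a frontier ordered by the evaluation function f(n); the toy graph and heuristic values are illustrative.

```python
import heapq

def best_first_search(start, goal, neighbours, f):
    """Repeatedly expand the most promising node according to f(n).
    `neighbours` maps a node to its successor nodes."""
    frontier = [(f(start), start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # node with smallest f
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for n in neighbours.get(node, []):
            heapq.heappush(frontier, (f(n), n, path + [n]))
    return None

# Toy graph and heuristic values (illustrative).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
h = {"A": 3, "B": 1, "C": 2, "D": 0}
print(best_first_search("A", "D", graph, h.get))  # ['A', 'B', 'D']
```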
A* search algorithm
The A* search algorithm is a variant of best-first search. It is guaranteed to find the least-cost path from a given initial node to one goal node out of one or more possible goals [24]. It uses a distance-plus-cost heuristic function f(n) to determine the order in which the search visits nodes. This heuristic is the sum of two functions: the path-cost function g(n), the actual cost travelled from the source node to the current node, and a heuristic estimate h(n) of the distance from the current node to the goal. The function h(n) must not overestimate the distance to the goal.
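A minimal A* sketch ordering the frontier by f(n) = g(n) + h(n); the weighted graph and heuristic values below are illustrative, with h chosen so that it never overestimates.

```python
import heapq

def a_star(start, goal, edges, h):
    """A* search: order the frontier by f(n) = g(n) + h(n).
    `edges` maps node -> [(neighbour, step_cost)]; h must not overestimate."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for n, cost in edges.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(n, float("inf")):  # better path to n found
                best_g[n] = g2
                heapq.heappush(frontier, (g2 + h(n), g2, n, path + [n]))
    return None, float("inf")

# Toy graph (illustrative): the route A-C-D (cost 3) beats A-B-D (cost 5).
edges = {"A": [("B", 4), ("C", 1)], "B": [("D", 1)], "C": [("D", 2)]}
h = {"A": 2, "B": 1, "C": 2, "D": 0}.get
print(a_star("A", "D", edges, h))  # (['A', 'C', 'D'], 3)
```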
Knuth-Morris-Pratt Algorithm:
The Knuth–Morris–Pratt algorithm (KMP algorithm) is used to select the exact parts of Web pages that contain rules; it determines whether a given pattern string occurs within a text. The algorithm searches for occurrences of a word within a text string by exploiting the observation that, when a mismatch occurs, the word itself embodies sufficient information to determine where the next match could begin, thus bypassing re-examination of previously matched characters.
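A compact sketch of the KMP algorithm described above; the failure function encodes the "sufficient information" the word carries about where the next match can begin.

```python
def kmp_search(text, word):
    """Return the index of the first occurrence of `word` in `text`, or -1."""
    if not word:
        return 0
    # Failure function: length of the longest proper prefix of word[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(word)
    k = 0
    for i in range(1, len(word)):
        while k and word[i] != word[k]:
            k = fail[k - 1]
        if word[i] == word[k]:
            k += 1
        fail[i] = k
    # Scan the text, never re-examining already-matched characters.
    k = 0
    for i, ch in enumerate(text):
        while k and ch != word[k]:
            k = fail[k - 1]
        if ch == word[k]:
            k += 1
        if k == len(word):
            return i - len(word) + 1
    return -1

print(kmp_search("ababcabcab", "abcab"))  # 2
```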
Rule Acquisition in Semantic Web:
Inferential rules are as necessary to Semantic Web applications as ontologies. For that reason, rule acquisition is also an important issue, and the Web, which implicitly contains inferential rules, can be a key source for rule acquisition.
As the Semantic Web becomes more popular, combining rule and ontology reasoning is becoming an important research issue, in addition to ontology inference based on OWL [14].
XRML
The eXtensible Rule Markup Language (XRML) approach is a framework for extracting rules from the texts and tables of Web pages [21]. The core of the XRML framework is rule identification, in which a knowledge engineer identifies rule components such as variables and values on the Web pages with a rule editor [21]. The effectiveness of the rule acquisition procedure of the XRML approach therefore depends on the rule identification step, which in turn depends on a large amount of manual work by the knowledge engineer.
Rule Acquisition with ontology:
Using Rule Ontology
With a rule ontology, several rules can be generalized into one. For example, the 102 rules in the rule base of Tajhotels.com were generalized into 21 rules of the rule ontology in our experiment. Fig. 4 shows an example of a rule in the ontology.
Fig. 4. Structure of Rule Ontology
Major steps in Rule Acquisition in Semantic Web using Ontology
It consists of two main steps,
(i) rule component identification and
(ii) rule composition with the identified rule components.
Rule Component Identification
The basic algorithm of rule component identification is based on text matching between the ontology and the text on a Web page [33]. Moreover, we can use information about omitted variables and about the relations between variables and values described in the ontology [33]. For example, we can infer that item is omitted from the Web page shown in Fig. 2, because books, CDs, and VHS tapes are values of item in the ontology shown in Fig. 2. It is also possible to assign variables to corresponding values, because every value has a matching variable in the ontology.
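A hedged sketch of this text-matching step; the toy ontology and page text below are illustrative, not the actual data of [33].

```python
# Identify rule components by matching ontology values against page text.
# Variable names and values are invented for illustration.
ontology = {"item": {"books", "CDs", "VHS tapes"}, "discount": {"10%", "20%"}}
page_text = "Buy two books and get a 20% discount."

def identify_components(ontology, text):
    """Map each ontology variable to the values of it found in the text.
    A value found on the page reveals its (possibly omitted) variable."""
    found = {}
    for variable, values in ontology.items():
        hits = {v for v in values if v in text}
        if hits:
            found[variable] = hits
    return found

print(identify_components(ontology, page_text))
```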
Rule Composition:
The main goal of rule composition is to combine identified variable instances into rules. The basic idea of rule composition is using patterns of rules in similar systems.
There can be many combinations when assigning variable instances to the rule candidates extracted from the ontology. We therefore need to reduce the complexity of choosing the next variable instance to assign, and to that end we adopted the concept of variable ordering from Constrained Heuristic Search [2].
Variable ordering
One popular variable ordering heuristic is to choose the variable with the fewest remaining possible values [2]. When assigning variable instances to each rule, start from the variable with the minimum number of matching instances, because this lessens the number of options at the beginning. This procedure is called heuristic variable ordering.
Rule ordering
In addition to setting the variable order within each rule, one should decide the order of the candidate rules. Start from the rule with the minimum number of combinations for assigning variable instances to it. This procedure is called rule ordering.
Rule Refinement
1.3 Handling Uncertainty in Rule Acquisition
1.3.1 Crisp Ontology
A crisp ontology is an exact (i.e., binary) specification of a conceptualization. In other words, it is an enumeration of the precise concepts and relationships that exist for a given body of information. In a crisp ontology, the domain knowledge [6] is structured in terms of
• concepts (C)
• properties (P)
• relations (R)
• axioms (A)
It is formally defined as a 4-tuple, O = (C, P, R, A), where:
• C is a set of concepts specified for the domain; a concept is similar to a class.
• P is a set of concept properties.
• R is a set of semantic relations specified between the concepts in C.
• A is a set of axioms; an axiom is a reasoning rule or a factual assertion.
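The 4-tuple O = (C, P, R, A) can be sketched as a simple container; the hotel-domain contents below are purely illustrative.

```python
from dataclasses import dataclass

# A crisp ontology as the 4-tuple O = (C, P, R, A); example data is invented.
@dataclass
class CrispOntology:
    concepts: set     # C: the classes of the domain
    properties: set   # P: properties of the concepts
    relations: set    # R: (concept, relation_name, concept) links
    axioms: set       # A: reasoning rules or factual assertions

o = CrispOntology(
    concepts={"Hotel", "Room"},
    properties={"hasPrice"},
    relations={("Hotel", "offers", "Room")},
    axioms={"every Room belongs to exactly one Hotel"},
)
print(len(o.concepts))  # 2
```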
1.3.2 Fuzzy Ontology
Surveying the literature, we find that there is no unique definition of fuzzy ontology. In the simplest case [2], a fuzzy ontology is a pair (C, R), where C is a set of (fuzzy) concepts and R is a set of (fuzzy) binary or n-ary relations. In various approaches, this pair is extended in several ways:
• individuals (I), fuzzy axioms (A) [16],
• concept hierarchy (H) and axioms [49],
• attributes of a concept, concept hierarchy, fuzzy events of a concept [57].
Fuzzy ontology can be seen as extended domain ontology [28], which makes use of the exact domain and fuzzy information processing as follows:
(i) the input is unstructured data;
(ii) the definition of related concepts in the particular domain, e.g. instances, objects, and their relationships;
(iii) the generation of domain ontology;
(iv) the domain ontology extended as fuzzy ontology; and
(v) applying the fuzzy ontology to the specific domain.
Fuzzy ontologies are an extension of classical ontologies of a particular domain for handling inaccuracy and uncertainty. Inaccuracy and imprecision are often encountered in present systems [6]. A fuzzy ontology aims to handle vagueness, accommodate uncertainty, and produce a view that is machine readable, processable and interpretable.
It is formally defined as a 9-tuple, OF = (C, P, CF, PF, R, RF, As, AsF, A), where:
• C is a set of crisp concepts specified for the domain.
• P is a set of crisp concept properties.
• CF is a set of fuzzy concepts
• PF is a set of fuzzy concept properties
• R is a set of crisp binary semantic relations specified between concepts in C or fuzzy concepts in CF.
• RF is a set of fuzzy binary semantic relations specified between crisp concepts in C or fuzzy concepts in CF
• As is a set of crisp binary associations specified between concepts in C or fuzzy concepts in CF.
• AsF is a set of fuzzy binary associations specified between crisp concepts in C or fuzzy concepts in CF.
• A is a set of axioms.
Linguistic Variables and Hedges
In daily life, natural human language contains terms such as "old", "fast" and "ugly". Such terms are called linguistic variables in fuzzy set theory. The values of linguistic variables are words, not numerals. The aim of using linguistic variables is to provide a means of approximately describing phenomena that cannot be specified accurately and precisely [19]. Such basic terms are often modified using adjectives and adverbs such as slightly, slowly, moderately, very and fairly. These words are called linguistic hedges. Linguistic hedges modify the membership function of the linguistic variable they qualify.
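A common modelling choice (an assumption here, not mandated by the text above) treats "very" as concentration (squaring the membership value) and "fairly" as dilation (taking its square root):

```python
import math

def old(age):
    """A toy membership function for the linguistic variable "old"
    (illustrative: ramps from 0 at age 40 to 1 at age 70)."""
    return min(1.0, max(0.0, (age - 40) / 30))

def very(mu):
    """Concentration hedge: sharpens the membership function."""
    return mu ** 2

def fairly(mu):
    """Dilation hedge: softens the membership function."""
    return math.sqrt(mu)

age = 55
print(round(old(age), 2))          # 0.5
print(round(very(old(age)), 2))    # 0.25
print(round(fairly(old(age)), 2))  # 0.71
```

At age 55 the hedge "very old" holds less strongly (0.25) and "fairly old" more strongly (0.71) than plain "old" (0.5), matching the intuitive effect of the hedges.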
1.3.3 Type-2 Fuzzy Ontology
Type-2 fuzzy concepts have been used to handle the uncertainties in group decision-making processes. A type-2 fuzzy set is characterized by a fuzzy membership function; that is, the membership grade (or membership value) of each element of a type-2 fuzzy set is itself a fuzzy set in the unit interval [0,1]. The membership functions of type-2 fuzzy sets are three-dimensional and contain a footprint of uncertainty (FOU). Hence, type-2 fuzzy sets provide additional degrees of freedom that help model inter-user (group) uncertainties, such as the varying opinions and preferences of experts. A type-2 fuzzy set Ã is characterized by a type-2 membership function μÃ(x,u) [23], where x∈X and u∈Jx⊆[0,1], and Ã is denoted as follows:
à = {((x,u), μÃ(x,u))|∀xєX ∀uє, Jx ⊆ [0,1]}
in which 0 ≤ μÃ(x,u) ≤ 1. The FOU of a type-2 fuzzy set Ã, FOU(Ã), is bounded by two type-1 membership functions referred to as the upper and lower membership functions [24]. The upper membership function is associated with the upper bound of FOU(Ã) and is denoted μ̄Ã(x), ∀x∈X [23]. The lower membership function is associated with the lower bound of FOU(Ã) and is denoted μ_Ã(x), ∀x∈X [24].
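An interval type-2 sketch: at each x, the FOU is the band between a lower and an upper type-1 membership function (both triangular here, chosen purely for illustration).

```python
def triangular(x, a, b, c):
    """Type-1 triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fou(x):
    """Return the (lower, upper) membership bounds at x: the footprint of
    uncertainty is the band between these two type-1 functions."""
    lower = triangular(x, 3, 5, 7)  # narrower triangle: lower bound
    upper = triangular(x, 2, 5, 8)  # wider triangle: upper bound
    return lower, upper

lo, up = fou(4.0)
print(lo <= up)  # True: the lower bound never exceeds the upper bound
```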
1.4 Problem Definition & Objective
Existing rule acquisition systems based on classical ontologies deal only with crisp data and cannot retrieve the needed results from imprecise sources such as uncertain and noisy Internet data. Therefore, fuzzy logic should be integrated with classical ontologies. Type-1 and type-2 fuzzy ontology-based systems have limited capabilities for handling this problem. Meanwhile, type-2 fuzzy rough logic systems have become an apt and effective tool for this problem because of their qualitative characteristics. The main idea of the proposed system is to reformulate such uncertain and noisy Internet data using type-2 fuzzy rough description logic ontologies to support effective cognition and decision making.
Hence, this research work extends the current standard language OWL with additional features to handle uncertainty, and proposes an appropriate methodology for representing type-2 fuzzy rough ontologies with the aid of OWL 2 annotation properties.
1.7 Structure of the Thesis
This dissertation is organised into eight chapters as follows.
Chapter 1: Brief description of the research work.
The foremost objective of this chapter is to provide a brief outline of this dissertation.
Chapter 2: Basics of AMI middleware architecture
The objective of this chapter is to introduce the fields of Ambient Intelligence, middleware and multi-agents. This chapter also defines some related concepts.
Chapter 3: Literature survey & related work
This chapter focuses on the limitations of existing approaches and work related to this dissertation is highlighted.
Chapter 4: Elements of Research Approach
The main objective of this chapter is to specify the elements needed for the implementation of the research work. Elements include multi-agents, middleware, Context-awareness and representation of context.
Chapter 5: Methodology
This chapter describes the framework’s architecture, elements and its topologies. This chapter also explains how this proposed methodology overcomes the existing challenge in Ambient Intelligent environment architecture.
Chapter 6: Case Study
The practical outcome of this research and the implementation of the concepts are also presented in this chapter. This chapter focuses on the implementation of the proposed framework with a case study on higher education.
Chapter 7: Results & evaluation
This chapter evaluates the proposed methodology with the case study.
Chapter 8: Conclusions.
This chapter summarizes the research on multi-agent system towards the design of middleware for Ambient Intelligence. This chapter also provides the details on the contributions of this dissertation. This chapter also outlines the possible future work from the proposed approach.