2. Literature Review
Various approaches based on Evolutionary Algorithm (EA) techniques for solving Single-Objective Optimization (SOO) problems have been proposed in recent years.
In this literature review, we focus on the following topics: single-objective optimization using Genetic Algorithms (GAs); the role of schemata in GAs; the effect of the initial population; and the convergence of GAs.
Optimization is essential for finding suitable answers to real-life problems. In particular, genetic (or, more generally, evolutionary) algorithms can provide satisfactory approximate solutions to many problems for which exact analytical results are not accessible.
Global optimization algorithms can be divided into two groups: deterministic algorithms and metaheuristic algorithms; see [44]. Metaheuristic methods are helpful for a wide class of optimization problems where deterministic algorithms are not suitable (for example, functions with a large number of local extrema). Metaheuristic algorithms include, in particular, Ant Colony Optimization (ACO), Genetic Algorithms (GAs), Bees Algorithms (BAs) and other bio-inspired techniques.
Evolutionary Algorithms (EAs) constitute a large class of optimization procedures, including classical GAs, that are inspired by the process of natural evolution. As Eiben and Smith [45] observe, different implementations of EAs (e.g., genetic algorithm, genetic programming, evolutionary strategy) can essentially be summarized
by the following steps:
1. Initialize a population randomly and evaluate each candidate;
2. Select parents;
3. Recombine pairs of parents;
4. Mutate the resulting offspring;
5. Evaluate each new candidate;
6. Select individuals for the next generation;
7. Repeat from Step 2 until a stopping criterion is satisfied.
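For concreteness, the steps above can be illustrated by a minimal binary GA sketch. The fitness function (one-max), tournament selection and elitist truncation used below are illustrative choices of ours and are not taken from [45].

```python
import random

def one_max(bits):
    """Illustrative fitness: number of ones in the string (maximized)."""
    return sum(bits)

def tournament(pop, fitness, k=2):
    """Select one parent as the best of k randomly drawn candidates."""
    return max(random.sample(pop, k), key=fitness)

def evolve(fitness, n_bits=20, pop_size=30, p_mut=0.05, generations=50):
    # Step 1: initialize a population randomly (candidates are evaluated
    # on demand through the fitness function).
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):                       # Step 7: repeat
        offspring = []
        while len(offspring) < pop_size:
            # Steps 2-3: select two parents and recombine them
            # (one-point crossover).
            p1, p2 = tournament(pop, fitness), tournament(pop, fitness)
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Step 4: mutate the resulting offspring (bit-flip mutation).
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            offspring.append(child)
        # Steps 5-6: evaluate the new candidates and select the next
        # generation (here: elitist truncation over parents + offspring).
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = evolve(one_max)
```

With elitist truncation the best individual is never lost, so the best fitness found is non-decreasing over generations.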
2.1 Some new evolutionary algorithms for SOO
Chang et al. [46] propose two new operators which are added to the classical GA: duplication and fabrication. Duplication is a procedure producing multiple copies of the best-fit chromosome from some elite base. It is similar to what is done in the DSC algorithm (see Chapter 3). The difference is that in [46] the duplicated chromosomes replace the worst chromosomes in the population, while in the DSC algorithm the copies of the best chromosome replace randomly chosen chromosomes. Fabrication is a procedure producing new chromosomes (called artificial chromosomes) from a given elite chromosome base, by using some chromosome matrix. There is some analogy with the similarity operator; however, fabrication can use more than two chromosomes from the elite base and is based on random assignment. Another difference is that we use only binary strings as chromosomes, while in [46] chromosomes are strings of symbols from a given finite set.
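The duplication idea, in both of the variants just described, can be sketched as follows; the function name and the fixed number of copies are illustrative assumptions, not details from [46].

```python
import random

def duplicate(population, fitness, n_copies, replace="worst"):
    """Insert n_copies of the best chromosome into the population.

    replace="worst": the copies overwrite the worst chromosomes (as in [46]);
    replace="random": the copies overwrite randomly chosen chromosomes
    (as in the DSC algorithm), the current best being kept intact.
    """
    pop = sorted(population, key=fitness, reverse=True)
    best = pop[0]
    if replace == "worst":
        targets = range(len(pop) - n_copies, len(pop))
    else:
        targets = random.sample(range(1, len(pop)), n_copies)
    for i in targets:
        pop[i] = best[:]          # independent copy of the best chromosome
    return pop

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
new_pop = duplicate(pop, fitness=sum, n_copies=3)
```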
In [47] it is noted that Extreme Optimization, a modern evolutionary optimization method, has been successfully applied to a number of combinatorial optimization problems, but has rarely been applied to continuous optimization problems. Therefore, Zeng et al. [47] recommend an Improved Real-Coded Population-Based Extreme Optimization (IRPEO) method for solving unconstrained optimization problems. The basic IRPEO operations consist of real-coded random generation of the initial population, individual fitness evaluation, population fitness evaluation, selection of bad elements based on a power-law probability distribution, generation of a new population by uniform random mutation, and update of the population through unconditional acceptance of the new population. The authors applied IRPEO to 10 test functions with 30 dimensions; the experimental results show that IRPEO is competitive with, and even better than, selected versions of the Genetic Algorithm with different mutation operators.
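The IRPEO procedure itself is specified in [47]; the following is only a rough sketch of the extremal-optimization idea it builds on: rank the individuals from worst to best, pick a "bad" one with power-law probability, and replace it by a uniform random mutation. All names and parameter values (e.g. `tau`) are illustrative.

```python
import random

def powerlaw_rank_index(n, tau=1.5):
    """Pick an index in 0..n-1 with probability proportional to (k+1)**(-tau),
    so low ranks (the worst individuals) are chosen most often."""
    weights = [(k + 1) ** (-tau) for k in range(n)]
    r = random.random() * sum(weights)
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if r <= acc:
            return k
    return n - 1

def eo_step(pop, fitness, bounds, tau=1.5):
    """One EO-style step for a minimization problem: rank by fitness with the
    worst (largest value) first, then replace a power-law-selected bad
    individual by a fresh uniform random point (a crude stand-in for the
    uniform random mutation in [47])."""
    lo, hi = bounds
    ranked = sorted(pop, key=fitness, reverse=True)   # worst first
    k = powerlaw_rank_index(len(ranked), tau)
    ranked[k] = [random.uniform(lo, hi) for _ in ranked[k]]
    return ranked

def sphere(x):
    return sum(xi * xi for xi in x)

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(500):
    pop = eo_step(pop, sphere, (-5, 5))
best = min(pop, key=sphere)
```

Because the power law concentrates the replacement pressure on the worst ranks, good individuals survive for many steps even though every new population is accepted unconditionally.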
2.2 Convergence of genetic algorithms
In the article [48], the authors note that genetic algorithms are widely used for solving real-world optimization problems, but few rigorous results on their convergence can be found in the literature. They show that, with a proper rigorous multistage Markov chain model and simple probabilistic arguments, some convergence results for GAs can be derived. In particular, for a GA with a superindividual (elitist model), the probability that the current population contains an optimal solution converges to one as the number of iterations tends to infinity. In [48], a new crossover operator is defined. It is further extended in another paper [49], where some modifications of the algorithms from [48] are introduced and their theoretical convergence is established. All these algorithms use a superindividual. Numerical comparisons among the algorithms are also included.
In [50], the authors consider a non-homogeneous genetic algorithm (NHGA) in which two parameters (the probability of mutation and the probability of crossover) can change during the execution of the algorithm. For an elitist version of this algorithm, they prove its almost sure convergence to a population containing an optimal point. Using the theory of Markov chains with finite state space and the Chapman–Kolmogorov equation, they study the probabilities of crossover and mutation for the NHGA. The authors then compare the NHGA with the homogeneous genetic algorithm (HGA). They show by examples that there exists a non-empty subset E* of the state space that is visited more frequently when the NHGA is used. They also observe that, in the NHGA, the mutation probability should initially be larger than in the canonical genetic algorithm, to allow the algorithm to expand its search space. Finally, they conclude that the larger the population size, the closer the results of the two algorithms become, but it should be noted that the computational effort increases with the population size.
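The parameter schedules used in [50] are not reproduced here; the following is a hypothetical linear decay illustrating the non-homogeneous idea that the mutation probability should start high (wide exploration) and shrink as the search narrows.

```python
def mutation_schedule(t, p_start=0.2, p_end=0.01, horizon=1000):
    """Hypothetical time-varying mutation probability for a non-homogeneous
    GA: decays linearly from p_start at generation 0 to p_end at the
    horizon, and stays at p_end afterwards. All values are illustrative."""
    if t >= horizon:
        return p_end
    return p_start + (p_end - p_start) * t / horizon
```

In a homogeneous GA the same probability would be used at every generation; here each generation `t` forms a different transition matrix, which is why the Markov chain describing the process is non-homogeneous.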
In [51] the authors develop sufficient conditions for the finiteness of the mean convergence time of a genetic algorithm with elitism. They also establish a lower bound for the probability of finding an optimal solution within the first m iterations. The results presented in [51] can also be extended to other optimization schemes.
In [52] the authors consider several versions of genetic algorithms and obtain theoretical estimates of their convergence. They prove the convergence of the mean fitness of a population to the optimal value of a given function. This result is obtained for two types of GA: with crossover and mutation, and with crossover, inversion and mutation. It can also be extended to other variants of genetic operators. However, from the authors' point of view, real-coded genetic algorithms are of special interest, but their result cannot be applied to such algorithms.
In [53] the author obtains some stopping criteria in genetic algorithm theory, for a general model of the algorithm that is a special case of Random Heuristic Search (RHS). The approach adopted for this problem is to obtain upper bounds for the number of iterations necessary to ensure finding an optimal solution with a prescribed probability. Here “finding an optimal solution” means that the current population contains at least one copy of an individual belonging to a given set of optimal solutions.
In [54] the author studies stopping criteria for a genetic algorithm designed for solving multi-objective optimization problems. This algorithm is described in terms of a general Markov chain model. He establishes an upper bound for the number of iterations which must be executed in order to produce, with a prescribed probability, a population consisting entirely of optimal solutions. Since populations may contain multiple copies of the same element, this stopping criterion can only guarantee that at least one minimal solution is found.
2.3 Genetic algorithms based on schema theory
The aim of the paper [55] was to improve the search performance of the Stochastic Schemata Exploiter (SSE, already known in the literature) without sacrificing its convergence speed. For this purpose, the authors introduce the Extended Stochastic Schemata Exploiter (ESSE) and the cross-generational elitist selection SSE (cSSE). In the ESSE, once the common schemata list has been built from the common schemata extracted from the individuals in the sub-populations, the list is modified by deleting individual schemata, updating similar schemata, and so on. In the cSSE, a cross-generational elitist selection is introduced into the original SSE. In the numerical examples, SSE, ESSE and cSSE are compared with a genetic algorithm (GA) with minimum generation gap (MGG) and with the Bayesian Optimization Algorithm (BOA). Several numerical results show that the GA with MGG can find better global solutions, although its convergence speed is sacrificed. Comparing the convergence speed of the different algorithms, the authors note that cSSE and BOA are the fastest among them.
In [56], a new type of multi-population GA called forking Genetic Algorithm (fGA) was suggested by Tsutsui and Fujimoto.
The fGA was designed to solve multi-modal problems, which are hard for traditional GAs since they have many local optima. The fGA searches for a single global optimum while keeping track of potential local optima. The population structure consists of a parent population and a variable number of child populations. When a certain level of similarity is detected in the parent population, the algorithm creates a child population, using as the similarity measure either the similarity calculated from the binary string encoding (genotypic forking) or the Euclidean distance between individuals (phenotypic forking). The division of the search space in genotypic forking is based on the so-called temporal and salient schemata, which detect the convergence of bit positions in the binary encoding. The child and the parent populations are not allowed to overlap.
The temporal schema reflects the population state in the current iteration, while the salient schema is calculated from the last K_h iterations. The schemata are strings over the letters '0', '1' and '*': the temporal schema contains a 0 or a 1 at a position if more than a predetermined percentage K_TS of the individuals have the same value in that gene, and '*' otherwise.
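The temporal schema computation can be sketched as follows; the threshold handling is a plausible reading of [56], with `k_ts` standing for the agreement percentage K_TS.

```python
def temporal_schema(population, k_ts=0.8):
    """Extract the temporal schema of a binary population: position i is
    '0' or '1' if more than a fraction k_ts of the individuals agree at
    that gene, and '*' otherwise."""
    n = len(population)
    schema = []
    for genes in zip(*population):            # iterate over gene positions
        ones = sum(genes)
        if ones > k_ts * n:
            schema.append("1")
        elif (n - ones) > k_ts * n:
            schema.append("0")
        else:
            schema.append("*")
    return "".join(schema)

pop = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1], [1, 0, 0, 0], [1, 1, 0, 1]]
schema = temporal_schema(pop, k_ts=0.8)   # "1***": only gene 0 has converged
```

Lowering the threshold makes more positions count as converged; with `k_ts=0.7` the same population yields "1*01".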
The fGA is tested on two problems as test functions. One is an FM sound parameter identification problem and the other is Oliver's 30-city Traveling Salesperson Problem. The experimental results show that the fGA outperforms the standard GA.
2.4 Initial population effects
Population initialization is important in evolutionary algorithms, since it affects both the speed of convergence and the quality of the final solution [57]. When no information about the solution is available, random initialization is commonly used to produce the candidate solutions of the initial population. In [57], a novel population initialization method is proposed.
The conducted experiments demonstrate that replacing random initialization with an opposition-based population accelerates convergence. The authors therefore propose using the opposition-based approach for population initialization. Multimodal and unimodal test functions are used to verify the experiments, and the results show an average convergence speed about 10% faster [57].
Thus, it is recommended to start with such an appropriately constructed population when no information about the solution is available. The idea is also applicable to other population-based optimization algorithms, for instance genetic algorithms, which the authors indicate as a direction for future work [57].
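A common form of opposition-based initialization (assumed here; the exact scheme in [57] may differ) pairs each random point x in [a, b] with its opposite a + b − x and keeps the fitter half of the combined set:

```python
import random

def opposition_based_init(pop_size, dim, lo, hi, fitness):
    """Generate pop_size random points together with their opposites
    (lo + hi - x, coordinate-wise), then keep the pop_size fittest of the
    combined 2*pop_size candidates (minimization)."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    opposites = [[lo + hi - x for x in p] for p in pop]
    combined = pop + opposites
    combined.sort(key=fitness)
    return combined[:pop_size]

def sphere(x):
    return sum(xi * xi for xi in x)

init = opposition_based_init(20, 5, -10.0, 10.0, sphere)
```

Evaluating each point and its mirror image doubles the chance that at least one of the pair lies near the optimum, which is the intuition behind the reported speed-up.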
Genetic algorithms are among the most widely used metaheuristics for solving global optimization problems [58]. The performance of genetic algorithms is also discussed in [58]; in particular, the initial population affects the objective function values found in successive generations. The authors study the properties of different point generators using four main criteria: the genetic diversity of the points, the uniformity of coverage, and the speed and usability of the generator.
The authors of [58] conducted tests on different initial populations for real-coded genetic algorithms. The motivation was to initiate discussion and to examine whether the common use of pseudo-random numbers for generating the initial population is justified. The paper concentrates on various realistic ways of generating the initial population when no a priori information about the location of the global minimum is available. It summarizes the fundamental properties of quasi-random and pseudo-random generator sequences, as well as the application of the simple sequential inhibition (SSI) process and non-aligned systematic sampling. The effects of the different initial populations were studied by recording the best objective function values after the tenth and twentieth generations of the full algorithm run.
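The contrast between plain pseudo-random initialization and a coverage-enforcing generator can be sketched as follows; the grid-cell construction below is only an illustrative reading of non-aligned systematic sampling in the unit square, and the exact procedure in [58] may differ.

```python
import random

def pseudo_random_points(n):
    """Plain pseudo-random initial points in the unit square."""
    return [(random.random(), random.random()) for _ in range(n)]

def nonaligned_systematic_points(k):
    """Sketch of non-aligned systematic sampling: partition the unit square
    into a k-by-k grid and draw one uniform point inside every cell. This
    enforces uniform coverage while keeping the point positions random."""
    pts = []
    for i in range(k):
        for j in range(k):
            pts.append(((i + random.random()) / k,
                        (j + random.random()) / k))
    return pts

grid_pts = nonaligned_systematic_points(5)   # 25 points, one per grid cell
```

A pseudo-random set of the same size may leave some cells empty and place several points in others, which is exactly the coverage difference the criteria in [58] measure.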
Differences were found in the genetic diversity and coverage of the tested point sets. The SSI process showed good results in uniform coverage but only average genetic diversity, while the pseudo-random points behaved the other way around.
Considering the usability and speed of the point generators, the pseudo-random and quasi-random sequence generators are easy and fast to use. Thus the traditional way of generating initial populations, based on a pseudo-random generator, worked better than the alternatives in this respect.