Chapter 1 Introduction
Two types of empirical observation, the discreteness of energy (which appears in light-matter interaction) and the dual nature of light as a particle and of particles (electrons etc.) as waves (also known as wave-particle duality), motivated the development of quantum mechanics. In 1900 Max Planck postulated that the exchange of energy between atoms and electromagnetic fields occurs in bundles or quanta, each the product of the wave frequency and a constant known as Planck's constant, an idea which resolved the problem of the black-body radiation spectrum. Planck introduced his constant h only to bring the theoretical model into agreement with experiment. To explain the photoelectric effect, Albert Einstein took up this idea of quanta, and in 1905 he proposed that electromagnetic waves themselves consist of quanta of energy. At this time light was thought to have only wave properties (diffraction, interference etc.), so Einstein presented another bold view of the nature of light.
Niels Bohr presented a simple model of the hydrogen atom in 1913 by assuming that the electron revolves around the nucleus the way a planet revolves around the sun, with the additional assumption that the angular momentum of the electron must be an integer multiple of the reduced Planck constant ℏ = h/2π, and obtained the discrete spectrum of hydrogen, which corresponds to discrete energies of the electron. Further, transitions between these energy levels are accompanied by the absorption or emission of a photon whose frequency is E/h, where E is the energy difference of the two levels. Apparently, the quantization of light is strongly tied to the quantization within matter. Inspired by his predecessors, Louis de Broglie suggested that not only does light have particle characteristics, but classical particles, such as electrons, have wave characteristics. He associated the wavelength λ of these waves with the particle momentum p through the relation p = h/λ. Interestingly, Bohr's condition for the orbital momentum of the electron is equivalent to the demand that the length of an orbital path of the electron be an integer multiple of its de Broglie wavelength.
Erwin Schrödinger was very intrigued by de Broglie's ideas and set his mind on finding a wave equation for the electron. Closely following the electromagnetic prototype of a wave equation, and attempting to describe the electron relativistically, he first arrived at what we today know as the Klein-Gordon equation. To his annoyance, however, this equation, when applied to the hydrogen atom, did not result in energy levels consistent with Arnold Sommerfeld's fine structure formula, a refinement of the energy levels according to Bohr. Schrödinger therefore retreated to the non-relativistic case, and obtained as the non-relativistic limit of his original equation the famous equation that now bears his name. He published his results in a series of four papers in 1926 [5-8]. Therein, he emphasizes the analogy between electrodynamics as a wave theory of light, which in the limit of small electromagnetic wavelengths approaches ray optics, and his wave theory of matter, which approaches classical mechanics in the limit of small de Broglie wavelengths. His theory was consequently called wave mechanics. In a wave-mechanical treatment of the hydrogen atom and other bound particle systems, the quantization of energy levels followed naturally from the boundary conditions. A year earlier, Werner Heisenberg had developed his matrix mechanics, which yielded the values of all measurable physical quantities as eigenvalues of a matrix. Schrödinger succeeded in showing the mathematical equivalence of matrix and wave mechanics; they are just two different descriptions of quantum mechanics. A relativistic equation for the electron was found by Paul Dirac in 1928. It included the electron spin of 1/2, a purely quantum mechanical feature without classical analog. Schrödinger's original equation was taken up by Klein and Gordon, and eventually turned out to be a relativistic equation for bosons, i.e. particles with integer spin.
In spite of its limitation to non-relativistic particles, and its initial rejection by Heisenberg and colleagues, the Schrödinger equation eventually became very popular. Today, it provides the material for a large fraction of most introductory quantum mechanics courses.
1.1 Time-dependent Schrodinger Equation
The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:
iℏ ∂Ψ(r,t)/∂t = ĤΨ(r,t)    1.1
where i is the imaginary unit, ℏ is the Planck constant divided by 2π, the symbol ∂/∂t indicates a partial derivative with respect to time t, Ψ (the Greek letter psi) is the wave function of the quantum system, r and t are the position vector and time respectively, and Ĥ is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation).
Substituting the expression for the Hamiltonian gives
iℏ ∂Ψ(r,t)/∂t = (-ℏ²/2m ∇² + V(r,t))Ψ(r,t)    1.2
1.2 Time-independent Schrodinger Equation
The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states (also called "orbitals", as in atomic orbitals or molecular orbitals). These states are important in their own right, and if the stationary states are classified and understood, then it becomes easier to solve the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation. (This is only used when the Hamiltonian itself does not depend on time explicitly. However, even in this case the total wave function still has a time dependency.)
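The link between the two forms can be made explicit by the standard separation-of-variables argument, sketched below for a Hamiltonian with no explicit time dependence:

```latex
% Separation ansatz: \Psi(\mathbf{r},t) = \psi(\mathbf{r})\,e^{-iEt/\hbar}
% Substituting into  i\hbar\,\partial_t\Psi = \hat{H}\Psi :
i\hbar\,\frac{\partial}{\partial t}\left[\psi(\mathbf{r})\,e^{-iEt/\hbar}\right]
  = E\,\psi(\mathbf{r})\,e^{-iEt/\hbar}
  = \hat{H}\,\psi(\mathbf{r})\,e^{-iEt/\hbar}
% Cancelling the common exponential factor gives the time-independent equation:
\hat{H}\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r})
```

The factor e^(-iEt/ℏ) is the time dependency that the total wave function retains even for a stationary state.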
Substituting the Hamiltonian gives
EΨ(r) = (-ℏ²/2m ∇² + V(r))Ψ(r)    1.4
In equation 1.4 the potential function V(r) is solely a function of the space variable r, and hence so is the resulting wave function Ψ(r).
Chapter 02 Literature Review
2.1 Genetic Algorithm
Genetic Algorithms are directed search algorithms based on the mechanics of biological evolution. They were developed by John Holland at the University of Michigan in the 1970s to understand the adaptive processes of natural systems and to design artificial systems software that retains the robustness of natural systems. Genetic Algorithms provide efficient and effective techniques for optimization and machine learning applications, and are widely used today in the scientific, business and engineering communities.
Different classes of search techniques are:
1- Calculus-based techniques
i) Direct methods
ii) Indirect methods
2- Enumerative techniques
i) Dynamic programming
3- Guided random search techniques
i) Evolutionary algorithms
a) Evolutionary strategies
b) Genetic algorithms
2.2 Terminology of Genetic Algorithms
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.
The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires:
a genetic representation of the solution domain,
a fitness function to evaluate the solution domain.
A standard representation of each candidate solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.
During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid or 0 otherwise.
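As an illustration, the knapsack fitness function described above can be sketched in a few lines; the item values, weights, and capacity below are invented for the example:

```python
def knapsack_fitness(bits, values, weights, capacity):
    """Fitness of a bit-string solution: total value of the selected
    items if they fit in the knapsack, 0 for an invalid selection."""
    total_weight = sum(w for b, w in zip(bits, weights) if b)
    if total_weight > capacity:
        return 0  # invalid: selected items exceed the capacity
    return sum(v for b, v in zip(bits, values) if b)

# Hypothetical instance: 4 items, knapsack capacity 10
values  = [6, 10, 12, 7]
weights = [1, 2, 3, 9]
print(knapsack_fitness([1, 1, 1, 0], values, weights, 10))  # fits: value 28
print(knapsack_fitness([1, 1, 1, 1], values, weights, 10))  # too heavy: 0
```

A GA evolving such bit strings would simply maximize this function over the population.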
2.2.3 Genetic operators
The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination) and mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes.
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.
It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift. A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed.
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
A solution is found that satisfies minimum criteria
Fixed number of generations reached
Allocated budget (computation time/money) reached
The highest-ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
Combinations of the above
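The generational loop and termination conditions above can be sketched as follows; this is a toy example maximizing the number of 1-bits in a string (all parameter values are illustrative, not prescriptions):

```python
import random

def onemax(bits):                     # toy fitness: count of 1-bits
    return sum(bits)

def evolve(n_bits=20, pop_size=30, p_mut=0.02, max_gen=200, target=None):
    """Minimal generational GA sketch: tournament selection,
    one-point crossover, bit-flip mutation, elitism."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for gen in range(max_gen):
        best = max(pop, key=onemax)
        if target is not None and onemax(best) >= target:
            return best, gen          # satisfactory fitness reached
        def tournament():
            return max(random.sample(pop, 3), key=onemax)
        nxt = [best[:]]               # elitist selection: keep the best
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=onemax), max_gen               # generation budget spent

best, gens = evolve(target=20)
```

Termination here combines two of the listed conditions: a fitness criterion (`target`) and a fixed number of generations (`max_gen`).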
2.3 FINITE DIFFERENCE METHOD
In the finite difference method we approximate derivatives by using their definitions as given in calculus. These approximations are actually obtained from Taylor series; here we will not go into that detail, but only exploit the result.
The first order derivative is approximated as
dy/dx = (y_(i+1) - y_(i-1))/2Δx    2.1
The second order derivative is approximated as
d²y/dx² = (y_(i-1) - 2y_i + y_(i+1))/Δx²    2.3
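These approximations can be checked numerically; a small sketch, with the test function and step size chosen purely for illustration:

```python
import math

def first_derivative(y, i, dx):
    """Central difference for dy/dx at node i (equation 2.1)."""
    return (y[i + 1] - y[i - 1]) / (2 * dx)

def second_derivative(y, i, dx):
    """Central difference for d2y/dx2 at node i (equation 2.3)."""
    return (y[i - 1] - 2 * y[i] + y[i + 1]) / dx**2

# Check against y = sin(x), whose exact derivatives are cos(x) and -sin(x)
dx = 0.01
x = [k * dx for k in range(200)]
y = [math.sin(v) for v in x]
i = 100  # node at x = 1.0
print(abs(first_derivative(y, i, dx) - math.cos(1.0)))   # small, O(dx^2)
print(abs(second_derivative(y, i, dx) + math.sin(1.0)))  # small, O(dx^2)
```

Both formulas are accurate to second order in Δx, which is why halving the step size reduces the error by roughly a factor of four.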
As the Schrödinger equation is an eigenvalue equation (Hψ = Eψ), we first try to solve it in that manner using the finite difference method.
-ℏ²/2m ∂²ψ/∂x² + V(x)ψ = Eψ    2.4
We will try to solve the Schrödinger equation for a particle in a box. For the first case V(x) is zero inside the box and infinite outside the box, as shown in Figure 1.
Figure 1: Particle in a Box Potential
So equation 2.4 for this case becomes
-ℏ²/2m ∂²ψ/∂x² = Eψ    2.5
After rearranging equation 2.5,
∂²ψ/∂x² + (2m/ℏ²)Eψ = 0    2.6
In equation 2.6 the parameter E must be known in order to solve it numerically, along with the boundary conditions. We take E from the analytical solution, which is
E_n = n²ℏ²π²/2mL²;  n = 1, 2, 3, …
The boundary conditions are ψ(0) = ψ(L) = 0, for a reason that will become clear in chapter 03. In equation 2.6, m is the mass of the electron and ℏ is the reduced Planck constant, both of which are extremely small. To avoid such small values we will here use atomic units, in which m_e and ℏ are taken to be equal to 1. Consequently, length must be measured in units of a_0, the Bohr radius (equal to 5.29×10⁻¹¹ m), and energy in hartree (1 hartree is equal to 4.36×10⁻¹⁸ joules).
Equation 2.6 in atomic units, after putting in the value of E_n, becomes
∂²ψ/∂x² + n²π²ψ = 0    2.7
Applying the finite difference approximation to equation 2.7,
(ψ_(i-1) - 2ψ_i + ψ_(i+1))/Δx² + n²π²ψ_i = 0    2.8
In equation 2.8, i represents the node at which we have approximated the derivative. If we are considering nine points, then i goes from 1 to 9. So for node 1, equation 2.8 after slight rearrangement becomes
-ψ_2 + 2ψ_1 - ψ_0 = n²π²ψ_1 Δx²
But ψ(0) = ψ(L) = 0 reduces the above equation to
2ψ_1 - ψ_2 = n²π²ψ_1 Δx²    (i)
For nodes 2 up to 8, equation 2.8 gives us the following set of equations:
-ψ_1 + 2ψ_2 - ψ_3 = n²π²ψ_2 Δx²    (ii)
-ψ_2 + 2ψ_3 - ψ_4 = n²π²ψ_3 Δx²    (iii)
-ψ_3 + 2ψ_4 - ψ_5 = n²π²ψ_4 Δx²    (iv)
-ψ_4 + 2ψ_5 - ψ_6 = n²π²ψ_5 Δx²    (v)
-ψ_5 + 2ψ_6 - ψ_7 = n²π²ψ_6 Δx²    (vi)
-ψ_6 + 2ψ_7 - ψ_8 = n²π²ψ_7 Δx²    (vii)
-ψ_7 + 2ψ_8 - ψ_9 = n²π²ψ_8 Δx²    (viii)
At node 9 we again come across a boundary condition:
-ψ_8 + 2ψ_9 - ψ_10 = n²π²ψ_9 Δx²
After putting in the boundary condition ψ_10 = 0 at node 10,
-ψ_8 + 2ψ_9 = n²π²ψ_9 Δx²    (ix)
The above set of nine equations can be written in matrix form as follows.
( 2 -1  0  0  0  0  0  0  0) (ψ_1)              (ψ_1)
(-1  2 -1  0  0  0  0  0  0) (ψ_2)              (ψ_2)
( 0 -1  2 -1  0  0  0  0  0) (ψ_3)              (ψ_3)
( 0  0 -1  2 -1  0  0  0  0) (ψ_4)              (ψ_4)
( 0  0  0 -1  2 -1  0  0  0) (ψ_5) = n²π²Δx²    (ψ_5)
( 0  0  0  0 -1  2 -1  0  0) (ψ_6)              (ψ_6)
( 0  0  0  0  0 -1  2 -1  0) (ψ_7)              (ψ_7)
( 0  0  0  0  0  0 -1  2 -1) (ψ_8)              (ψ_8)
( 0  0  0  0  0  0  0 -1  2) (ψ_9)              (ψ_9)
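This matrix eigenvalue problem can be solved directly as a cross-check; a brief NumPy sketch (not part of the original treatment) comparing the lowest eigenvalues of the tridiagonal matrix with the analytic values n²π²Δx²:

```python
import numpy as np

n_nodes, dx = 9, 0.1
# Tridiagonal matrix of the nine-node system: 2 on the diagonal,
# -1 on the sub- and super-diagonals
A = 2 * np.eye(n_nodes) - np.eye(n_nodes, k=1) - np.eye(n_nodes, k=-1)
eigvals, eigvecs = np.linalg.eigh(A)   # eigh: A is real symmetric
# Compare the lowest eigenvalues with the analytic n^2 pi^2 dx^2
for n in (1, 2, 3):
    print(eigvals[n - 1], (n * np.pi * dx) ** 2)
```

The small discrepancy between each pair of numbers is the finite-difference discretization error, which shrinks as more nodes are used.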
The following graphs are obtained for the first three states, along with graphs of the probability density.
Chapter 03 Methods and Calculations
3.1 Particle in a box
The Schrödinger equation involves the potential energy, represented by the symbol V. The particle in a box is a problem in which the potential energy is infinitely large at the boundaries and zero inside some selected region, within which the particle moves with constant speed, as shown in figure 3.1. It is a prototype example used to describe quantum effects, like the quantization of energy and the probability density, which have no classical counterparts.
The time independent Schrödinger equation for a particle in a box is
-ℏ²/2m ∂²ψ/∂x² + Vψ = Eψ    3.1
Since V is zero in the region of interest,
-ℏ²/2m ∂²ψ/∂x² = Eψ    3.2
In order to avoid extremely small or extremely large values we will make use of atomic units, in which the values of the reduced Planck constant, the electron mass and the electric permittivity are taken to be equal to 1 by definition.
-1/(2a_0²) ∂²ψ/∂x² = Eψ    3.3
where a_0 is the Bohr radius and is equal to one unit of length in atomic units. Since there is only one variable involved, we switch from the partial derivative to the total derivative. Equation 3.3 becomes
-1/(2a_0²) d²ψ/dx² = Eψ    3.4
We borrow the analytical solution from the literature. The boundary conditions follow from the fact that outside the box, beyond the region L, V is infinite, so ψ must vanish at both boundaries, allowing us to write ψ(0) = ψ(L) = 0.
ψ = √(2/L) sin(nπx/L)    3.5
Here we assume a unit length in atomic units, so L = 1, meaning we have confined the particle (an electron) within one Bohr radius. n is called the quantum number and represents the quantum state of the particle. The allowed values of n are 1, 2, 3, … Here 0 is not allowed as a quantum number, since otherwise ψ would vanish inside the box. So equation 3.5 becomes
ψ = √2 sin(nπx)    3.6
In equation 3.6 distance must be measured in multiples of the Bohr radius.
Probability density is defined as the probability of finding the particle per unit distance, such that (ψ(x))² dx represents the probability of finding the particle in a region dx. Since the particle cannot go beyond the region L, it must exist within that region. Hence the normalization condition is ∫_0^L (ψ(x))² dx = 1. Equation 3.5 is the normalized solution of the particle in a box problem, and the factor √(2/L) is the normalization constant.
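The normalization condition can be verified numerically; a quick sketch integrating |ψ|² with a simple midpoint rule (grid size chosen for illustration):

```python
import math

L, n, N = 1.0, 1, 1000   # box length (one bohr), quantum number, grid points
dx = L / N
# psi = sqrt(2/L) sin(n pi x / L); integrate |psi|^2 over [0, L]
total = sum(
    (math.sqrt(2 / L) * math.sin(n * math.pi * (i + 0.5) * dx / L)) ** 2 * dx
    for i in range(N)
)
print(total)  # ~1.0, confirming the normalization constant sqrt(2/L)
```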
In the analytical method the expression for the energy is
E = n²π²ℏ²/2mL²  joules in SI units
Here the quantum number n appears as well, meaning the energy E is quantized. This result is in complete contrast to classical mechanics, which allows the particle to have any energy. Putting in our values for L, ℏ and m, we have the following equation:
E = n²π²/2  hartree in atomic units
Where 1 Hartree is equal to 4.36x〖10〗^(-18) Joules.
The first few wave functions and probability density plots are shown in the following figures.
Figure 3: First three wave function plot of particle in a box
Figure 4: First three density plots of particle in a box
3.2 By Genetic Algorithms
In the Genetic Algorithms based methodology we formulate our fitness function by exploiting the finite difference technique. Making the finite difference approximation of the derivative, we rewrite equation 3.4 as follows:
-(ψ_(i-1) - 2ψ_i + ψ_(i+1))/(2a_0²Δx²) = Eψ_i    3.7
Rearranging equation 3.7, we obtain
ψ_(i-1) - 2ψ_i + ψ_(i+1) + 2a_0²EΔx²ψ_i = 0    3.8
In the above equation E is the total energy. In numerical techniques, as is the case here (since the Genetic Algorithm is a numerical optimization technique), we must provide all the parameters of the problem at hand. So we borrow the energy from the analytical solution, along with the boundary conditions.
The final expression after putting in the energy is
ψ_(i-1) - 2ψ_i + ψ_(i+1) + n²π²ψ_i Δx² = 0    3.9
In equation 3.9 the parameter n dictates which state we are searching for: n = 1 is the ground state, n = 2 is the first excited state, and so on.
Figure 5: Discretization of length for particle in a box
In Figure 5 we discretize the length axis so that we can apply the finite difference technique in the fitness function. Considering the first nine points, we apply the previously mentioned technique to each point.
So for point 1, equation 3.9 becomes
ψ_0 - 2ψ_1 + ψ_2 + 1²π²ψ_1(0.1)² = 0    3.10
If we are using nine points, that implies Δx is 0.1. Here we are interested in the ground state, hence n = 1. ψ_0 lies at the boundary, so we substitute its boundary value, zero. Equation 3.10 then becomes
-2ψ_1 + ψ_2 + 1²π²ψ_1(0.1)² = 0    3.11
Applying equation 3.9 to point 2,
ψ_1 - 2ψ_2 + ψ_3 + 1²π²ψ_2(0.1)² = 0    3.12
Proceeding in the same manner up to point 8,
ψ_2 - 2ψ_3 + ψ_4 + 1²π²ψ_3(0.1)² = 0    3.13
ψ_3 - 2ψ_4 + ψ_5 + 1²π²ψ_4(0.1)² = 0    3.14
ψ_4 - 2ψ_5 + ψ_6 + 1²π²ψ_5(0.1)² = 0    3.15
ψ_5 - 2ψ_6 + ψ_7 + 1²π²ψ_6(0.1)² = 0    3.16
ψ_6 - 2ψ_7 + ψ_8 + 1²π²ψ_7(0.1)² = 0    3.17
ψ_7 - 2ψ_8 + ψ_9 + 1²π²ψ_8(0.1)² = 0    3.18
ψ_8 - 2ψ_9 + ψ_10 + 1²π²ψ_9(0.1)² = 0    3.19
In equation 3.19, ψ_10 appears, which again lies at the boundary, so we put in its value, which is zero. Equation 3.19 then becomes
ψ_8 - 2ψ_9 + 1²π²ψ_9(0.1)² = 0    3.20
Now, for the reason mentioned earlier in chapter 02, we take the absolute value of each of the discretized equations above, add them all, and name the sum C, which will serve as our fitness function.
C = abs(-2ψ_1 + ψ_2 + 1²π²ψ_1(0.1)²)
+ abs(ψ_1 - 2ψ_2 + ψ_3 + 1²π²ψ_2(0.1)²)
+ abs(ψ_2 - 2ψ_3 + ψ_4 + 1²π²ψ_3(0.1)²)
+ abs(ψ_3 - 2ψ_4 + ψ_5 + 1²π²ψ_4(0.1)²)
+ abs(ψ_4 - 2ψ_5 + ψ_6 + 1²π²ψ_5(0.1)²)
+ abs(ψ_5 - 2ψ_6 + ψ_7 + 1²π²ψ_6(0.1)²)
+ abs(ψ_6 - 2ψ_7 + ψ_8 + 1²π²ψ_7(0.1)²)
+ abs(ψ_7 - 2ψ_8 + ψ_9 + 1²π²ψ_8(0.1)²)
+ abs(ψ_8 - 2ψ_9 + 1²π²ψ_9(0.1)²) = 0    3.21
Hence we have obtained our desired fitness function, which we supply to the MATLAB genetic algorithm toolbox along with modified options. The MATLAB GA toolbox accepts the fitness function as an M-file, so the MATLAB M-file for the above fitness function will be
Matlab M-file 01
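The original M-file is not reproduced here; as a rough equivalent, a Python sketch of the fitness function C for the ground state (n = 1, nine interior points) might read as follows. The function and variable names are our own, not those of the original M-file:

```python
import math

N, dx, n = 9, 0.1, 1   # interior points, step size, quantum number

def fitness(psi):
    """Sum of absolute residuals of the discretized equations 3.11-3.20.
    psi is the list of nine interior values; boundaries are fixed at zero."""
    p = [0.0] + list(psi) + [0.0]   # enforce psi(0) = psi(L) = 0
    return sum(
        abs(p[i - 1] - 2 * p[i] + p[i + 1] + n**2 * math.pi**2 * p[i] * dx**2)
        for i in range(1, N + 1)
    )

# The analytic ground state should make the fitness nearly zero
analytic = [math.sqrt(2) * math.sin(math.pi * i * dx) for i in range(1, N + 1)]
print(fitness(analytic))   # small: only finite-difference error remains
```

A GA minimizing this function drives each residual toward zero; note that the all-zero string is also a minimum, which is why the additional condition mentioned in chapter 02 matters in practice.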
3.3 Harmonic Oscillator
In the harmonic oscillator the potential energy V varies as the square of the distance between the particle and its equilibrium position, that is V = (1/2)kx². It is the most important potential in physics, in the sense that any restoring force giving rise to a potential energy V can be expanded as a Maclaurin series. If the vibrations are small, the potential energy can be very well approximated by that of a harmonic oscillator. This important point is proved in the following.
Let any restoring force F, which is a function of x, be expressed as a Maclaurin series in x:
F(x) = F(0) + dF/dx(0) x + 1/2 d²F/dx²(0) x² + 1/6 d³F/dx³(0) x³ + …    3.22
where x = 0 is the equilibrium position, so that F(0) = 0. For small vibrations the x² and higher-order terms become smaller and smaller and are therefore negligible. Equation 3.22 then becomes
F(x) = dF/dx(0) x    3.23
As V = -∫_0^x F(x) dx, equation 3.23 gives
V = -∫_0^x (dF/dx(0) x) dx    3.24
V = -(dF/dx(0)) x²/2    3.25
In equation 3.25, dF/dx(0) must by definition be negative (F is a restoring force), so the whole expression becomes positive. Thus -dF/dx(0) is a positive constant, equal to k, the spring constant of the force. Equation 3.25 then becomes
V = kx²/2    3.26
The figure below shows the graph of the potential energy and a classical physical realization of the harmonic oscillator. A quantum realization is the HCl molecule, in which the hydrogen atom oscillates harmonically under the influence of the Coulombic force.