Essay details:

  • Subject area(s): Engineering
  • Published on: 7th September 2019

Then, the training (Tt), validation (Tv), and generalization (Tg) sets are randomly selected from the generated output data set. Typically, a very small percentage (around 0.5%-2%) of the data set is selected for training; an even smaller subset (around one third to one half of the training set's size) is selected for validation, and the same for generalization. For the proposed analysis, 1% (3,906 examples), 0.5% (1,953 examples), and 0.5% (1,953 examples) of the generated output data set are selected for the training, validation, and generalization sets, respectively. The variables' distributions for the training and generalization sets are shown in Figs. 37-52. The figures show that these very small percentages preserve the box-car distribution of the original, very large data set; consequently, the network is not biased toward some output data values over others.
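
The random selection of the three disjoint sets can be sketched as follows (a minimal Python sketch; the thesis work used MATLAB, and the total set size of roughly 390,600 examples is inferred here from the stated percentages):

```python
import random

# Split sizes taken from the text: 1% training (3,906), 0.5% validation
# (1,953) and 0.5% generalization (1,953) of a generated data set of
# roughly 390,600 examples (the exact total is an assumption here).
N_TOTAL = 390_600
N_TRAIN, N_VAL, N_GEN = 3906, 1953, 1953

def split_indices(n_total, n_train, n_val, n_gen, seed=0):
    """Randomly pick three disjoint index sets from the full data set."""
    rng = random.Random(seed)
    picked = rng.sample(range(n_total), n_train + n_val + n_gen)
    return (picked[:n_train],
            picked[n_train:n_train + n_val],
            picked[n_train + n_val:])

train_idx, val_idx, gen_idx = split_indices(N_TOTAL, N_TRAIN, N_VAL, N_GEN)
```

Sampling without replacement guarantees the three sets are disjoint, which keeps the generalization set a genuine test of interpolation.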

Due to the huge number of training, validation, and generalization examples, an efficient solver of Maxwell's equations is required to generate the input data set (the scattered electromagnetic fields). We used a solver developed specifically for this kind of application: it considers a radially stratified infinite cylinder scatterer along the z-direction, embedded in free space (a known background medium), with an un-phased infinite electric line source along the z-direction placed in the background medium as the incident-field excitation. The solver obtains the scattered field reflected from the radially stratified cylinder back into the background medium; see Fig. 10 in Chapter 2 for an illustration. For each example, scattered field values for ten harmonics (M=0:9) are computed; their real and imaginary components are then separated and concatenated as twenty (2M) real numbers.
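
The packing of each example's ten complex harmonics into twenty real inputs can be illustrated like this (a Python sketch; the thesis used MATLAB, and the placeholder field values are arbitrary):

```python
# Each example's scattered field is computed for ten harmonics (M = 0..9).
# Real and imaginary parts are separated and concatenated, giving 2M = 20
# real numbers that a real-valued network can consume.
def harmonics_to_features(harmonics):
    assert len(harmonics) == 10
    return [h.real for h in harmonics] + [h.imag for h in harmonics]

example = [complex(m, -m) for m in range(10)]   # arbitrary placeholder fields
features = harmonics_to_features(example)
```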

Figure 37: Training values' histogram distribution of the inner layer's relative permittivity (ε_(r,1))

Figure 38: Generalization values' histogram distribution of the inner layer's relative permittivity (ε_(r,1))

Figure 39: Training values' histogram distribution of the inner layer's conductivity (σ_1)

Figure 40: Generalization values' histogram distribution of the inner layer's conductivity (σ_1)

Figure 41: Training values' histogram distribution of the second layer's relative permittivity (ε_(r,2))

Figure 42: Generalization values' histogram distribution of the second layer's relative permittivity (ε_(r,2))

Figure 43: Training values' histogram distribution of the second layer's conductivity (σ_2)

Figure 44: Generalization values' histogram distribution of the second layer's conductivity (σ_2)

Figure 45: Training values' histogram distribution of the third layer's relative permittivity (ε_(r,3))

Figure 46: Generalization values' histogram distribution of the third layer's relative permittivity (ε_(r,3))

Figure 47: Training values' histogram distribution of the third layer's conductivity (σ_3)

Figure 48: Generalization values' histogram distribution of the third layer's conductivity (σ_3)

Figure 49: Training values' histogram distribution of the fourth layer's relative permittivity (ε_(r,4))

Figure 50: Generalization values' histogram distribution of the fourth layer's relative permittivity (ε_(r,4))

Figure 51: Training values' histogram distribution of the fourth layer's conductivity (σ_4)

Figure 52: Generalization values' histogram distribution of the fourth layer's conductivity (σ_4)

4.2 Training and Results

All the network training parameters are kept the same across the 36 runs (12 different training methods with 3 different numbers of neurons per hidden layer) for a fair comparison between the methods and neuron counts. The maximum training time and the performance goal of the network are set to one hour and zero, respectively, in order to reveal each method's capabilities, i.e., to allow every method to reach the minimum error reachable in the limited given time.
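
The 36-run sweep can be pictured as a simple grid; only trainlm (LMb) and trainbr (Brb) appear in the text, so the other method names and the candidate neuron counts below are placeholders, not the thesis's actual list:

```python
import itertools

# 12 training methods x 3 hidden-layer sizes = 36 runs, each capped at one
# hour of training with a performance goal of zero.
methods = ['trainlm', 'trainbr'] + [f'method_{i}' for i in range(10)]
neuron_counts = [10, 14, 18]   # assumed candidates; the text only fixes 14

runs = list(itertools.product(methods, neuron_counts))
```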

Best Training Methods/Number of Neurons per Hidden Layer

The comparison yielded two best training methods, Levenberg-Marquardt backpropagation (LMb) and Bayesian regularization backpropagation (Brb), both with 14 neurons per hidden layer. Figs. 53-84 show their histogram distributions of the relative percentage error of the variables for the training and generalization sets.

Figure 53: Histogram distribution of the training set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the LMb method

Figure 54: Histogram distribution of the generalization set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the LMb method

Figure 55: Histogram distribution of the training set's relative percentage error of the inner layer's conductivity σ_1 for the LMb method

Figure 56: Histogram distribution of the generalization set's relative percentage error of the inner layer's conductivity σ_1 for the LMb method

Figure 57: Histogram distribution of the training set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the LMb method

Figure 58: Histogram distribution of the generalization set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the LMb method

Figure 59: Histogram distribution of the training set's relative percentage error of the second layer's conductivity σ_2 for the LMb method

Figure 60: Histogram distribution of the generalization set's relative percentage error of the second layer's conductivity σ_2 for the LMb method

Figure 61: Histogram distribution of the training set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the LMb method

Figure 62: Histogram distribution of the generalization set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the LMb method

Figure 63: Histogram distribution of the training set's relative percentage error of the third layer's conductivity σ_3 for the LMb method

Figure 64: Histogram distribution of the generalization set's relative percentage error of the third layer's conductivity σ_3 for the LMb method

Figure 65: Histogram distribution of the training set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the LMb method

Figure 66: Histogram distribution of the generalization set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the LMb method

Figure 67: Histogram distribution of the training set's relative percentage error of the fourth layer's conductivity σ_4 for the LMb method

Figure 68: Histogram distribution of the generalization set's relative percentage error of the fourth layer's conductivity σ_4 for the LMb method

Figure 69: Histogram distribution of the training set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the Brb method

Figure 70: Histogram distribution of the generalization set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the Brb method

Figure 71: Histogram distribution of the training set's relative percentage error of the inner layer's conductivity σ_1 for the Brb method

Figure 72: Histogram distribution of the generalization set's relative percentage error of the inner layer's conductivity σ_1 for the Brb method

Figure 73: Histogram distribution of the training set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the Brb method

Figure 74: Histogram distribution of the generalization set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the Brb method

Figure 75: Histogram distribution of the training set's relative percentage error of the second layer's conductivity σ_2 for the Brb method

Figure 76: Histogram distribution of the generalization set's relative percentage error of the second layer's conductivity σ_2 for the Brb method

Figure 77: Histogram distribution of the training set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the Brb method

Figure 78: Histogram distribution of the generalization set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the Brb method

Figure 79: Histogram distribution of the training set's relative percentage error of the third layer's conductivity σ_3 for the Brb method

Figure 80: Histogram distribution of the generalization set's relative percentage error of the third layer's conductivity σ_3 for the Brb method

Figure 81: Histogram distribution of the training set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the Brb method

Figure 82: Histogram distribution of the generalization set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the Brb method

Figure 83: Histogram distribution of the training set's relative percentage error of the fourth layer's conductivity σ_4 for the Brb method

Figure 84: Histogram distribution of the generalization set's relative percentage error of the fourth layer's conductivity σ_4 for the Brb method

Furthermore, the minimum performance goal (PG) reached in the limited given time, together with the root mean square errors of the inner layer's relative permittivity ε_(r,1) and conductivity σ_1 for the training and generalization sets (RMSt, RMSg), are shown for the two methods in Tables 1-2. The figures and tables show that the network trained with either method yields very small errors, and that the results of the two methods are very close to each other: the maximum relative errors (MREt, MREg) of the underestimates and overestimates, the root mean square errors (RMSt, RMSg), and the performance goal (PG) of the two methods are all very small and nearly equal. The network trained with either method also proved to be efficient: it obtains accurate results with an acceptable number of training examples, requiring computing resources usually available to the practitioner, and it shows good interpolation ability, as demonstrated by the generalization results. Therefore, this analysis recommends the current network design with either of these two training methods and this number of neurons per hidden layer.
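
The error measures quoted above (relative percentage error, its extreme under-/over-estimates, and RMS error) can be written down directly; a Python sketch under the usual definitions:

```python
import math

def relative_pct_error(desired, simulated):
    """Per-example relative percentage error of a reconstructed variable."""
    return [100.0 * (s - d) / d for d, s in zip(desired, simulated)]

def max_relative_errors(errors):
    """Largest overestimate (positive) and underestimate (negative)."""
    return max(errors), min(errors)

def rms_error(desired, simulated):
    return math.sqrt(sum((s - d) ** 2 for d, s in zip(desired, simulated))
                     / len(desired))

# One 10% overestimate and one 10% underestimate:
errs = relative_pct_error([2.0, 4.0], [2.2, 3.6])
```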

Noise Addition

The network's robustness against noise is tested by adding different noise levels, using the box-car random number generator rand(1,1), with various percentages (2%, 5%, 7%, 10%, and 20%) to the training and generalization sets. The two methods again showed close results after applying noise. The performance goal (PG) and the root mean square errors (RMSt, RMSg) of two typical variables (ε_(r,1) and σ_1) after adding these percentages of noise are shown for the two methods in Tables 1-2. In addition, the histogram distributions of the relative percentage error of the variables for the training and generalization sets after adding 20% noise are shown for the two methods in Figs. 85-116. The network trained with either method proved to be robust against moderate levels of noise. Moreover, it is worth mentioning that, even after adding noise, the network preserved the original scattering medium profile (monotonically increasing).
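
The box-car noise injection can be sketched as multiplicative uniform noise; this mirrors the rand(1,1)-style generator named in the text, though the exact way the thesis scales the draws is an assumption:

```python
import random

def add_boxcar_noise(values, level, seed=0):
    """Perturb each sample by a factor drawn uniformly from [1-level, 1+level]."""
    rng = random.Random(seed)
    return [v * (1.0 + level * (2.0 * rng.random() - 1.0)) for v in values]

# +/-20% noise, the largest level tested:
noisy = add_boxcar_noise([1.0, 2.0, 3.0], level=0.20)
```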

Noise   PG        RMSt(ε_(r,1))  RMSg(ε_(r,1))  RMSt(σ_1)  RMSg(σ_1)
0       5.40E-07  0.0008         0.0009         0.0003     0.0003
±2%     0.00162   0.0326         0.0338         0.0012     0.0011
±5%     0.00681   0.0448         0.0443         0.0024     0.0024
±7%     0.00763   0.0602         0.0624         0.0043     0.0043
±10%    0.00889   0.1046         0.1064         0.004      0.004
±20%    0.01      0.0918         0.0933         0.0056     0.0057

Table 1: Performance of the LMb method

Noise   PG        RMSt(ε_(r,1))  RMSg(ε_(r,1))  RMSt(σ_1)  RMSg(σ_1)
0       6.10E-07  0.0008         0.0008         0.0003     0.0003
±2%     0.00171   0.0348         0.0349         0.0012     0.0012
±5%     0.00728   0.0564         0.0564         0.0046     0.0047
±7%     0.00744   0.0583         0.0592         0.0042     0.0043
±10%    0.00924   0.0941         0.0951         0.004      0.0039
±20%    0.0103    0.0923         0.094          0.0053     0.0053

Table 2: Performance of the Brb method

Figure 85: Histogram distribution of the training set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the LMb method with ±20% noise

Figure 86: Histogram distribution of the generalization set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the LMb method with ±20% noise

Figure 87: Histogram distribution of the training set's relative percentage error of the inner layer's conductivity σ_1 for the LMb method with ±20% noise

Figure 88: Histogram distribution of the generalization set's relative percentage error of the inner layer's conductivity σ_1 for the LMb method with ±20% noise

Figure 89: Histogram distribution of the training set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the LMb method with ±20% noise

Figure 90: Histogram distribution of the generalization set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the LMb method with ±20% noise

Figure 91: Histogram distribution of the training set's relative percentage error of the second layer's conductivity σ_2 for the LMb method with ±20% noise

Figure 92: Histogram distribution of the generalization set's relative percentage error of the second layer's conductivity σ_2 for the LMb method with ±20% noise

Figure 93: Histogram distribution of the training set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the LMb method with ±20% noise

Figure 94: Histogram distribution of the generalization set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the LMb method with ±20% noise

Figure 95: Histogram distribution of the training set's relative percentage error of the third layer's conductivity σ_3 for the LMb method with ±20% noise

Figure 96: Histogram distribution of the generalization set's relative percentage error of the third layer's conductivity σ_3 for the LMb method with ±20% noise

Figure 97: Histogram distribution of the training set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the LMb method with ±20% noise

Figure 98: Histogram distribution of the generalization set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the LMb method with ±20% noise

Figure 99: Histogram distribution of the training set's relative percentage error of the fourth layer's conductivity σ_4 for the LMb method with ±20% noise

Figure 100: Histogram distribution of the generalization set's relative percentage error of the fourth layer's conductivity σ_4 for the LMb method with ±20% noise

Figure 101: Histogram distribution of the training set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the Brb method with ±20% noise

Figure 102: Histogram distribution of the generalization set's relative percentage error of the inner layer's relative permittivity ε_(r,1) for the Brb method with ±20% noise

Figure 103: Histogram distribution of the training set's relative percentage error of the inner layer's conductivity σ_1 for the Brb method with ±20% noise

Figure 104: Histogram distribution of the generalization set's relative percentage error of the inner layer's conductivity σ_1 for the Brb method with ±20% noise

Figure 105: Histogram distribution of the training set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the Brb method with ±20% noise

Figure 106: Histogram distribution of the generalization set's relative percentage error of the second layer's relative permittivity ε_(r,2) for the Brb method with ±20% noise

Figure 107: Histogram distribution of the training set's relative percentage error of the second layer's conductivity σ_2 for the Brb method with ±20% noise

Figure 108: Histogram distribution of the generalization set's relative percentage error of the second layer's conductivity σ_2 for the Brb method with ±20% noise

Figure 109: Histogram distribution of the training set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the Brb method with ±20% noise

Figure 110: Histogram distribution of the generalization set's relative percentage error of the third layer's relative permittivity ε_(r,3) for the Brb method with ±20% noise

Figure 111: Histogram distribution of the training set's relative percentage error of the third layer's conductivity σ_3 for the Brb method with ±20% noise

Figure 112: Histogram distribution of the generalization set's relative percentage error of the third layer's conductivity σ_3 for the Brb method with ±20% noise

Figure 113: Histogram distribution of the training set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the Brb method with ±20% noise

Figure 114: Histogram distribution of the generalization set's relative percentage error of the fourth layer's relative permittivity ε_(r,4) for the Brb method with ±20% noise

Figure 115: Histogram distribution of the training set's relative percentage error of the fourth layer's conductivity σ_4 for the Brb method with ±20% noise

Figure 116: Histogram distribution of the generalization set's relative percentage error of the fourth layer's conductivity σ_4 for the Brb method with ±20% noise

4.3 Summary

A technique has been proposed for designing multilayer perceptron artificial neural networks for solving inverse scattering problems. The technique is versatile due to the relatively large number of parameters that can be tuned to achieve the best performance. The analysis of the performance results shows that multilayer perceptron neural networks are effective in solving nontrivial inverse scattering problems.

CHAPTER 5: MLP NEURAL NETWORK FOR SOLVING FORWARD PROBLEM

Supervised artificial neural networks offer another way to solve the presented forward problem (shown in Fig. 10, Chapter 2). They require a number of solved examples and some training time, but they give an instant (online) solution once the training process ends. In the current analysis, an application of multilayer perceptron neural networks is presented in which the network is designed to receive an input pattern in the form of the permittivities and conductivities (the material properties) of a radially stratified cylinder embedded in a homogeneous medium, and to produce the corresponding scattered electromagnetic fields. The same training, validation, and generalization sets used for the inverse-problem neural network (presented in Chapter 4) are used here, with the inputs and outputs swapped. The cylinder is thus divided into four radial layers of fixed thickness, giving an eight-dimensional input vector space. The scattered fields are gathered for ten harmonics (M=0:9). In the frequency domain, fields are represented by complex numbers, which the neural network does not handle directly; therefore, the real and imaginary components are separated, forming a twenty-dimensional output vector space.
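
The shape of this forward network (8 inputs, one hidden layer of 14 neurons as chosen in Section 5.1, 20 outputs) can be sketched as a plain feedforward pass; the tanh hidden activation and the random placeholder weights are assumptions, since the thesis trains the real weights with MATLAB's toolbox:

```python
import math
import random

def mlp_forward(x, w1, b1, w2, b2):
    """One pass through a 1-hidden-layer perceptron with tanh activation."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

rng = random.Random(0)
w1 = [[rng.uniform(-1, 1) for _ in range(8)] for _ in range(14)]   # 8 -> 14
b1 = [rng.uniform(-1, 1) for _ in range(14)]
w2 = [[rng.uniform(-1, 1) for _ in range(14)] for _ in range(20)]  # 14 -> 20
b2 = [rng.uniform(-1, 1) for _ in range(20)]

# Four permittivities followed by four conductivities (placeholder values):
out = mlp_forward([2.0, 3.0, 4.0, 5.0, 0.1, 0.2, 0.3, 0.4], w1, b1, w2, b2)
```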

5.1 Network’s Design and Training

A multilayer perceptron network is designed, executed, and tested for performance using the Matlab neural network toolbox. We began with the same approach used for the inverse-problem neural network. Design factors included the number of hidden layers, the number of neurons per hidden layer, and the training methods. As many studies in the literature have shown for this kind of application, see [22] and [23], one hidden layer is sufficient for best performance. The Levenberg-Marquardt backpropagation training method with 14 neurons per hidden layer is used to train the presented network. The maximum training time and the performance goal are set to one hour and zero, respectively, to reveal the method's capabilities, i.e., to allow it to reach the minimum error reachable in the limited given time. The training process initially terminated after approximately 2 minutes due to the minimum-gradient limit, with an error on the order of 30%. We then changed the method's minimum gradient from the default 10^(-7) to 10^(-30) to let the method use the full hour of training and improve the error. The resulting error improved greatly, to approximately 0%. There was therefore no need to train the network with other methods or neuron counts, since any further improvement would be negligible.
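
The effect of the minimum-gradient setting can be seen in a toy gradient descent: training halts once the gradient magnitude falls below min_grad, so tightening it (here from 1e-3 to 1e-12, standing in for the change described above) lets the optimizer run longer and reach a smaller error. This is a generic illustration, not MATLAB's trainlm internals:

```python
def descend(grad, x0, lr=0.1, min_grad=1e-7, max_steps=100_000):
    """Toy 1-D gradient descent with a minimum-gradient stopping rule."""
    x = x0
    for step in range(max_steps):
        g = grad(x)
        if abs(g) < min_grad:      # the termination criterion the text tightens
            return x, step
        x -= lr * g
    return x, max_steps

# f(x) = x^2, so grad(x) = 2x; the tighter limit runs more steps.
x_loose, n_loose = descend(lambda x: 2.0 * x, x0=1.0, min_grad=1e-3)
x_tight, n_tight = descend(lambda x: 2.0 * x, x0=1.0, min_grad=1e-12)
```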

5.2 Results

The minimum performance goal (PG) reached in the limited given time is on the order of 10^(-16). The root mean square errors of the real and imaginary parts of the scattered electric fields of the ten harmonics for the training and generalization sets are shown in Table 3. Moreover, the histogram distributions of the relative error ((Desired Output)/(Simulated Output)) of the scattered electric fields' absolute values of the ten harmonics for the training and generalization sets are shown in Figs. 117-136.
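
The relative-error measure used here is the ratio of desired to simulated output, taken on the absolute values of the complex fields, so a perfect prediction gives exactly 1. A small sketch with placeholder field values:

```python
def ratio_error(desired, simulated):
    """Desired/simulated ratio on the magnitudes of complex field values."""
    return [abs(d) / abs(s) for d, s in zip(desired, simulated)]

desired   = [complex(1.0, 2.0), complex(0.5, -0.5)]
simulated = [complex(1.0, 2.0), complex(0.4, -0.4)]  # second one 20% low
ratios = ratio_error(desired, simulated)
```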

Harmonic (M)  RmsTR     RmsTI     RmsGR     RmsGI
0             1.74E-08  1.80E-08  1.77E-08  1.87E-08
1             1.05E-08  8.86E-09  1.03E-08  8.78E-09
2             1.64E-08  1.81E-08  1.66E-08  1.86E-08
3             1.13E-08  8.75E-09  1.10E-08  8.73E-09
4             2.06E-08  2.11E-08  2.07E-08  2.10E-08
5             1.43E-08  1.35E-08  1.38E-08  1.34E-08
6             2.31E-08  2.43E-08  2.28E-08  2.36E-08
7             1.70E-08  1.76E-08  1.72E-08  1.74E-08
8             2.06E-08  2.79E-08  2.07E-08  2.79E-08
9             2.45E-08  2.24E-08  2.53E-08  2.24E-08

Table 3: Root mean square errors for the training and generalization sets

Figure 117: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=0 for the training set

Figure 118: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=0 for the generalization set

Figure 119: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=1 for the training set

Figure 120: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=1 for the generalization set

Figure 121: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=2 for the training set

Figure 122: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=2 for the generalization set

Figure 123: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=3 for the training set

Figure 124: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=3 for the generalization set

Figure 125: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=4 for the training set

Figure 126: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=4 for the generalization set

Figure 127: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=5 for the training set

Figure 128: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=5 for the generalization set

Figure 129: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=6 for the training set

Figure 130: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=6 for the generalization set

Figure 131: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=7 for the training set

Figure 132: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=7 for the generalization set

Figure 133: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=8 for the training set

Figure 134: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=8 for the generalization set

Figure 135: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=9 for the training set

Figure 136: Histogram distribution of the relative error of the scattered electric field's absolute value of harmonic M=9 for the generalization set

The table and figures show that the network's errors are very small (approximately zero): the maximum relative errors of the underestimates and overestimates of the scattered electric fields' absolute values, the root mean square errors of the real and imaginary parts of the fields for the ten harmonics, and the performance goal reached are all approximately zero for both the training and generalization sets. The network also proved to be efficient: it obtains accurate results with an acceptable number of training examples requiring computing resources usually available to the practitioner, and it shows good interpolation ability, as demonstrated by the generalization results. Therefore, this analysis recommends the current network design.

5.3 Summary

Finally, a technique has been proposed for designing multilayer perceptron artificial neural networks for solving the presented forward problem. The technique provides an instant (online) solution with very small errors, a characteristic that is crucial in many applications (e.g., surveillance, mine detection, and medical diagnosis). The analysis of the performance results shows that multilayer perceptron neural networks are effective in solving nontrivial forward problems in electromagnetic wave scattering.

CHAPTER 6: CONCLUSION AND FUTURE WORK

Firstly, a new Maxwell's equations solver is presented. It proved to be precise, fast, and efficient over a large variety of problem parameters. It belongs to the category of schemes that divide into two parts, here called Compute Once Use Many / Use Many (COUM/UM): the first part, which computes the complete orthonormal set of the polarization currents, is independent of the characteristics of the scattering medium (COUM), while the second part is an extremely rapid computation that produces the fields for a specific choice of medium parameters (UM). Current approaches in the literature are Compute Many Use Many (CMUM) approaches, in which a change to any problem parameter requires recomputing the entire scheme from the first step. The COUM/UM character of the proposed approach therefore makes it far more efficient than the available alternatives, especially for problems in which the scattered fields must be computed for numerous values of the medium parameters, as in the inverse scattering problem considered in this work. Furthermore, the proposed technique is embarrassingly parallel; it exploits all the cores available on an Intel multicore computing device via the Matlab parallel toolbox. Hence, the proposed approach proved to be one of the most efficient formulations for solving Maxwell's equations.

Secondly, a technique has been proposed for designing multilayer perceptron artificial neural networks for solving such inverse scattering problems. The technique is versatile due to the relatively large number of parameters that can be tuned to achieve the best performance. The network proved to be effective in solving nontrivial inverse scattering problems, even in the presence of noise.

Finally, a technique has been proposed for designing multilayer perceptron artificial neural networks for solving the presented forward problem. The technique provides an instant (online) solution with very small errors, a characteristic that is crucial in many applications (e.g., surveillance, mine detection, and medical diagnosis). The analysis of the performance results shows that multilayer perceptron neural networks are effective in solving nontrivial forward problems in electromagnetic wave scattering.

6.1 Potential Vision for Future Work

While we have treated the case of radial stratification, extensions to cover azimuthal variations would be a natural next step.

Including magnetic materials would allow handling more general situations.

Moreover, solving the 3D cases would be a step forward.

References

[1] Genyuan Wang and Moeness G. Amin, “Imaging Through Unknown Walls Using Different Standoff Distances”, IEEE Transactions on Signal Processing, vol. 54, no.10, October 2006.

[2] Victor M. Lubecke, Olga B. Lubecke, Anders H. Madsen and Aly E. Fathy,”Through-the-Wall Radar Life Detection and Monitoring”, IEEE Transactions on Signal Processing, 2007.

[3] Isaac Cushman, Danda B. Rawat, Abhishek Bhimraj and Malik Fraser, “Experimental Approach for Seeing Through Walls Using Wi-Fi Enabled Software Defined Radio Technology”, Digital Communications and Networks, vol.2, issue 4, November 2016, pp.245-255.

[4] A. Lakhal,” A Decoupling-Based Imaging Method for Inverse Medium Scattering for Maxwell\'s Equations”, Inverse Problems, vol.26, number 1, 2010.

[5] David Colton and Rainer Kress, “Inverse Acoustic and Electromagnetic Scattering Theory”, Applied Mathematical Sciences, vol.93, 1998, pp.195-210.

[6] D. Colton and P. Monk,” A Modified Dual Space Method for Solving the Electromagnetic Inverse Scattering Problem for an Infinite Cylinder”, Inverse Problems, vol.10, number 1, 1994.

[7] Tim Op’t Root, Christiaan C. Stolk and Maarten V. de Hoop,”Linearized Inverse Scattering Based on Seismic Reverse Time Migration”, Journal de Mathematiques Pures et Appliquees, vol. 98, issue 2, August 2012, Pages 211-238.

[8] Chris Stolk, Tim Op’t Root, and Maarten de Hoop,” Seismic Inverse Scattering by Reverse Time Migration”, RICAM Workshop, November 2011.

[9] E.M.T. Hendrix and B. G.-Tóth, “Introduction to Nonlinear and Global Optimization”, Springer, 2010.

[10] T. Weise, “Global Optimization Algorithms”, Theory and Application, 2009.

[11] M. Sen and P. Stoffa, “Global Optimization Methods in Geophysical Inversion”, Elsevier, 1995.

[12] L. Liberti and N. Maculan, “Global Optimization From Theory to Implementation”, Springer, 2006.

[13] C. M. Tan, “Simulated Annealing”, In-Teh, 2008.

[14] R. Chibante, “Simulated Annealing: Theory with Applications”, Sciyo, 2010.

[15] Tomasz D. Gwiazda, Genetic Algorithms Reference: Crossover for Single-Objective Numerical Optimization Problems, vol. I, 2006 [E-book].

[16] Marco Dorigo and Thomas Stutzle, “Ant Colony Optimization”, The MIT Press, 2004.

[17] Ehab A. El-Fayome, Solutions of Microwave Problems Using VIE, M.Sc. Thesis, Electronics and Communications Department, Faculty of Engineering, Ain Shams University, Egypt, 2017.

[18] Federico París and José Cañas, Boundary Element Method: Fundamentals and Applications, Oxford: Clarendon Press, 1997.

[19] Allen Taflove and Susan C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3 rd ed., Boston|London: Artech House, 2005.

[20] Jianming Jin, The Finite Element Method in Electromagnetics, 2 nd ed., New Jersey: John Wiley & Sons, 2002.

[21] John L. Volakis and Kubilay Sertel, Integral Equation Methods for Electromagnetics, Raleigh: SciTech Publishing, Inc., 2012.

[22] Gunter Röth and Albert Tarantola, "Neural networks and inversion of seismic data", Journal of Geophysical Research, vol. 99, 1994, pp. 6753-6768.

[23] Aya E. El-Shorfa, Nour H. El-Kattan and Mariam A. Hamza, Inverse scattering and imaging for breast cancer detection, Senior Graduation Project, 2013, Electronics and Communications Department, Faculty of Engineering, MSA University, Egypt. Project advisor: Aladin H. Kamel.

[24] Leopold Felsen and Nathan Marcuvitz, Radiation and Scattering of Waves, New Jersey: IEEE Press, 1994, pp. 274-278.

[25] A. J. Devaney and E. Wolf, "Radiating and nonradiating classical current distributions and the fields they generate", Phys. Rev. D, vol. 8, pp. 1044-1047, 1973.

[26] T. M. Habashy, M. L. Oristaglio and A. T. de Hoop, "Simultaneous nonlinear reconstruction of two-dimensional permittivity and conductivity", Radio Science, vol. 29, no. 4, pp. 1101-1118, July-August 1994.

[27] G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge Mathematical Library, Cambridge University Press, Cambridge, UK, 1958.

[28] E. Wallacher and A. K. Louis, "Complete sets of radiating and nonradiating parts of a source and their fields with applications in inverse scattering limited-angle problems", International Journal of Biomedical Imaging, vol. 2006, article ID 93074, pp. 1-13.

[29] D. Marcuse, Light Transmission Optics, Van Nostrand Reinhold, NY, 1982.

[30] L. M. Brekhovskikh, Waves in Layered Media, Academic Press, NY, 1980.

[31] K. H. B. Wilhelmsson, "Interaction between an Obliquely Incident Plane Electromagnetic Wave and an Electron Beam in the Presence of a Static Magnetic Field of Arbitrary Strength", Journal of Research of the National Bureau of Standards, 1962.

[32] S. N. Samaddar, "Orthogonality Properties of Modes in a Compressible Partially Ionized Plasma", Applied Scientific Research, Section B, 1964.

[33] C. T. Tai, "Maxwell Fish-Eye Treated by Maxwell Equations", Applied Scientific Research, 1958.

[34] N. F. Mott and H. S. W. Massey: The Theory of Atomic Collisions (Oxford, 1965).

[35] S. N. Samaddar, "Scattering of Plane Electromagnetic Waves by Radially Inhomogeneous Infinite Cylinders", Il Nuovo Cimento, Vol. LXVI B, No. 1, 1970, pp. 33-50.

[36] H. Buchholz, \"The Confluent Hypergeometric Function: with Special Emphasis on its Applications\", Springer Tracts in Natural Philosophy, Vol.15, NY, 1969.

[37] H. Kirchhoff, \"Wave Propagation along Radially Inhomogeneous Glass Fibres\", A.E.U., Band 27, Heft 1, 1973, pp. 13‐18.

[38] H. G. Unger, Planar Optical Waveguides and Fibres, Oxford University Press, New York, 1978, pp. 497‐500.

[39] Takanori Okoshi, Optical Fibers, Academic, New York, 1982, pp. 121−124.

[40] A. W. Snyder, Optical Waveguide Theory, Chapman and Hall, London, 1983, p. 327.

[41] I. V. Neves and A. S. C. Fernandes, "Wave Propagation in a Radially Inhomogeneous Cylindrical Dielectric Structure: A General Analytical Solution", Microwave and Optical Technology Letters, vol. 5, no. 13, December 1992, pp. 675-679.

[42] James A. Anderson, An Introduction to Neural Networks. Boston: MIT Press, 1997.

[43] Hagan, M.T., H.B. Demuth, and M.H. Beale, Neural Network Design, Boston, MA: PWS Publishing, 1996.

[44] Marquardt, D., "An Algorithm for Least-Squares Estimation of Nonlinear Parameters", SIAM Journal on Applied Mathematics, Vol. 11, No. 2, June 1963, pp. 431-441.

[45] MacKay, D. J. C., "Bayesian interpolation", Neural Computation, Vol. 4, No. 3, 1992, pp. 415-447.

APPENDIX A

Consider the Green's function of the Bessel operator,

\left[\frac{d}{d\rho}\,\rho\,\frac{d}{d\rho}+\tau\rho-\frac{m^2}{\rho}\right]g(\rho,\rho';\tau)=-\delta(\rho-\rho'), (A.1)

with the impedance boundary condition given by

\left.\left\{J_m(k\rho)\,\frac{d}{d\rho}g(\rho,\rho';\tau)-g(\rho,\rho';\tau)\,\frac{d}{d\rho}J_m(k\rho)\right\}\right|_{\rho=R}=0. (A.2)

As per [24],

\overleftarrow{f}=J_m(\sqrt{\tau}\,\rho_<), (A.3)

\overrightarrow{f}=J_m(\sqrt{\tau}\,\rho_>)-A(\tau)\,H_m^{(1)}(\sqrt{\tau}\,\rho_>), (A.4)

where

A(\tau)=\frac{J_m(kR)\,\frac{d}{d\rho}J_m(\sqrt{\tau}\,R)-J_m(\sqrt{\tau}\,R)\,\frac{d}{d\rho}J_m(kR)}{J_m(kR)\,\frac{d}{d\rho}H_m^{(1)}(\sqrt{\tau}\,R)-H_m^{(1)}(\sqrt{\tau}\,R)\,\frac{d}{d\rho}J_m(kR)}=\frac{A_1(\tau)}{A_2(\tau)},

and hence

g(\rho,\rho';\tau)=-\frac{\pi i}{2}\,\frac{J_m(\sqrt{\tau}\,\rho_<)\left[J_m(\sqrt{\tau}\,\rho_>)-A(\tau)\,H_m^{(1)}(\sqrt{\tau}\,\rho_>)\right]}{A(\tau)}. (A.5)

From (A.5) we find the singularities (poles at \tau_{m,n}):

A_1(\tau_{m,n})=0=J_m(kR)\,\frac{d}{d\rho}J_m(\sqrt{\tau_{m,n}}\,R)-J_m(\sqrt{\tau_{m,n}}\,R)\,\frac{d}{d\rho}J_m(kR). (A.6)

Substituting \sqrt{\tau}=\lambda/R, we obtain

-\frac{kR\,J_m'(kR)}{J_m(kR)}\,J_m(\lambda_{m,n})+\lambda_{m,n}\,J_m'(\lambda_{m,n})=0. (A.7)
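As a numerical cross-check (a sketch of our own; the function names and parameter choices below are assumptions, not part of the thesis), the real eigenvalues λ_{m,n} of (A.7) can be located with SciPy by scanning the left-hand side for sign changes and refining each bracket with Brent's method:

```python
# Sketch (hypothetical helper names): find the real roots lambda_{m,n}
# of Eq. (A.7) by a sign-change scan followed by Brent refinement.
import numpy as np
from scipy.special import jv, jvp   # J_m and its derivative J'_m
from scipy.optimize import brentq

def eigenvalue_equation(lam, m, kR):
    """Left-hand side of Eq. (A.7):
    (-kR J'_m(kR) / J_m(kR)) J_m(lam) + lam J'_m(lam)."""
    h = -kR * jvp(m, kR) / jv(m, kR)
    return h * jv(m, lam) + lam * jvp(m, lam)

def find_eigenvalues(m, kR, lam_max=30.0, n_scan=3000):
    """Return the real roots lambda_{m,n} in (0, lam_max], in increasing order."""
    lam = np.linspace(1e-6, lam_max, n_scan)
    f = eigenvalue_equation(lam, m, kR)
    roots = []
    for i in range(len(lam) - 1):
        if f[i] * f[i + 1] < 0:   # a sign change brackets a simple root
            roots.append(brentq(eigenvalue_equation,
                                lam[i], lam[i + 1], args=(m, kR)))
    return roots
```

Note that λ = kR satisfies (A.7) identically (the two terms cancel by construction of the coefficient), consistent with the radiating mode condition √τ = k₀ discussed at the end of this appendix.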

The complete orthonormal set of eigenfunctions is found from

\frac{\delta(\rho-\rho')}{\rho}=\frac{-1}{2\pi i}\oint_c g(\rho,\rho';\tau)\,d\tau, (A.8)

where the contour of integration, c, encloses all the real poles and one of the two conjugate imaginary poles, if they exist, in the counter-clockwise direction. Evaluating the residues gives

\frac{\delta(\rho-\rho')}{\rho}=\sum_n \frac{1}{4}\,\frac{A_2(\tau_{m,n})}{\left.dA_2(\tau)/d\tau\right|_{\tau_{m,n}}}\,J_m(\sqrt{\tau_{m,n}}\,\rho)\,J_m(\sqrt{\tau_{m,n}}\,\rho'). (A.9)

Hence, the 1-D complete orthonormal set we derived is given by

\varpi_{m,n}(\rho)=\frac{J_m(\sqrt{\tau_{m,n}}\,\rho)}{N(m,n)}, (A.10)

N(m,n)=\frac{1}{2}\left[\frac{A_2(\tau_{m,n})}{\left.dA_2(\tau)/d\tau\right|_{\tau_{m,n}}}\right]^{-1/2}, (A.11)

where N(m,n) is the normalization constant. The same set was constructed in [28] through a less general approach.

Consequently, the radiating and non-radiating polarization current sets are \varpi_{m,n=N_{rad_m}}(\rho) and \varpi_{m,\forall n\neq N_{rad_m}}(\rho), respectively, with N_{rad_m} denoting the radiating mode number for which \sqrt{\tau_{m,N_{rad_m}}}=k_0.

Faculty of Engineering - Ain Shams University

Department of Electronics and Communications Engineering

Researcher's name: Aya Emad Rouhi El-Shorfa

Thesis title: Seeing Behind Partially Reflecting Materials

Degree: Master of Science in Electrical Engineering (Electronics and Communications Engineering)

Thesis Summary

This thesis is divided into six chapters, as outlined below:

Chapter One

Chapter One is an introduction to the presented method; it includes the definition of the problem to be solved, a literature survey of the existing methods for solving it, and the thesis outline.

Chapter Two

Chapter Two presents a new semi-analytical formulation for solving Maxwell's equations by combining the volume integral equation method with radiating/non-radiating polarization currents.

Chapter Three

Chapter Three extends the new formulation for solving Maxwell's equations to radially inhomogeneous media.

Chapter Four

Chapter Four presents a methodology and a proposed solution for the inverse problem under consideration using a multilayer perceptron neural network.

Chapter Five

Chapter Five presents an instantaneous solution of Maxwell's equations using a multilayer perceptron neural network.

Chapter Six

Chapter Six presents the conclusions of the presented work, including a potential vision for future work.


Thesis Abstract

This thesis deals with an inverse scattering problem, in which the electrical properties (permittivity and conductivity) of a scattering object are determined from knowledge of the source and the scattered data. A supervised artificial neural network approach was chosen to solve the problem under consideration. The neural network requires a large number of solved examples for training. Therefore, a fast and efficient solver of the forward problem (in which the scattered data are determined from knowledge of the source and the electrical properties of the scattering object) is required.

Accordingly, a new semi-analytical formulation is presented for computing the electromagnetic field scattered from an infinite cylindrical scatterer. It comprises the construction of a volume integral equation over an electric polarization current inside the scatterer, together with a complete orthonormal set of radiating/non-radiating polarization currents obtained via the Green's function method, for solving Maxwell's equations. The results of the presented method were compared against a number of cases having known analytical solutions, over a large and varied set of scatterer radii, permittivities, conductivities, and source operating frequencies. The comparison showed that the presented method reproduces the analytical results very accurately; hence, it proved to be one of the most effective methods for solving Maxwell's equations.

Then, a multilayer perceptron neural network was designed, implemented, and tested to solve the inverse problem under consideration. The design parameters included the number of hidden layers, different numbers of neurons per hidden layer, and different training methods. Analysis of the performance results proved that multilayer perceptron neural networks are effective in solving non-trivial inverse scattering problems, even in the presence of noise.

Finally, an instantaneous solution of the presented forward problem was introduced using a multilayer perceptron neural network. The solution proved effective in solving non-trivial forward problems in the field of electromagnetic wave scattering.
