
1.1. General considerations

Camera lens distortion is an important issue in medium to wide-angle lenses. At present, many distortion models and evaluation trials are available for assessing camera lens distortion, and choosing the right model and trials can provide accurate results. Simpler methods used by the computer vision community have been developed in response to the complexity of the evaluation trials and to advances in the use of computers for analytical work [1, 2, 3].

The distortion parameters (k) are calibrated together with the camera model parameters (intrinsic and extrinsic parameters) [4]. Some methods require metric information about the imaged scene. In addition, some coupling occurs between the camera's internal and external parameters, which yields high errors in the internal parameters [5, 9].

Non-metric methods which do not depend on identified scene points have also been offered [5, 6, 7, 8, 9]. Many of these methods rely on the fact that straight lines in the scene must always project to straight lines in the image.

Numerous computer vision algorithms critically depend on the assumption of a linear pinhole camera model, especially structure-from-motion algorithms.

The distortion model problem has been addressed by a few approaches that can work on a wide range of cameras, and some of them consider an automatic distortion-model selection approach [5, 10, 11].

Hartley and Kang proposed a parameter-free method to model distortion which does not depend on any specific distortion model and can be applied to fish-eye, wide-angle, and normal lenses. There are several types of distortion, but the most significant is radial distortion, especially in high-quality cameras, where it introduces errors into three-dimensional reconstruction processes by turning straight lines into circular arcs [12, 13]. In the pinhole camera model, straight lines in the world map to straight lines in the image plane; radial distortion violates this main invariance [14, 15, 16]. At short focal lengths radial distortion appears as barrel distortion, while at longer focal lengths it appears as pincushion distortion.

Since there is not much experience with genuine cameras that present significant tangential distortion, we ignore tangential distortion, as some prior works did [17, 18, 19, 20].

Lastly, the thesis addresses the problem of how to select a proper model for lens distortion, and presents an exact formula to compute the inverse of the selected radial distortion model.

1.2. Literature survey

In 2005 Christopher Paul Broaddus suggested a new generalized camera calibration approach which utilizes statistical information criteria for automatic camera model selection [21].

In 2006 Kai San Choi demonstrated a method to attain a high level of accuracy in identification by estimating, for each camera, the intrinsic lens radial distortion [22].

In 2007 Besma Roui-Abidi presented work on the development and testing of a complete camera calibration approach applicable to a wide range of cameras equipped with normal, wide-angle, fish-eye, or telephoto lenses [23].

In 2008 Jianhua Wang presented a new model of camera lens distortion and showed that, in comparison with the conventional model, the new model has a more explicit physical meaning and fewer parameters to calibrate [24].

In 2009 Aiqi Wang presented a new simple method to determine the distortion function of camera systems suffering from radial lens distortion [13].

In 2010 Carlos Ricolfe-Viala recommended a metric calibration technique which computes the camera lens distortion separately from the camera calibration process under stable conditions, independently of the chosen lens distortion model or the number of parameters, achieving the best performance of the camera lens distortion calibration process [25].

In 2011 Carlos Ricolfe-Viala proposed a metric calibration technique to compute lens distortion in isolation from the pinhole model. The procedure works under stable conditions, independently of the chosen lens distortion model [26].

In 2012 Z. Tang used the calibration-harp method, evaluated on images corrected by state-of-the-art distortion correction algorithms and by common software [27].

In 2013 Dapeng Gao proposed a method for computing the complete camera lens distortion model. The proposed method works well according to the experimental results [28].

In 2014 Miguel Alemán-Flores presented a method to automatically correct the radial distortion caused by wide-angle lenses, using the distorted lines generated by the projection of three-dimensional straight lines onto the image [29].

In 2015 Jakub Kolecki discussed the automatic removal of distortion from images acquired with a non-metric SLR camera equipped with prime lenses [30].

In 2016 Pierre Drap presented another way to compute the inverse of radial distortion [31].

1.3. Scope of the thesis

In this thesis, we propose an algorithm which will select the best radial distortion model between two models for several lenses of different focal lengths.

The aim is to choose the best model automatically and compute its inverse using an exact formula without losing accuracy. The algorithm will be tested on a simple chessboard pattern observed from different positions to find the model that best achieves further error reduction without accuracy loss.

1.4. Thesis overview

This thesis consists of five chapters divided as follows:

Chapter Two: An introduction about camera models and the nonlinear distortion models.

Chapter Three: In this chapter, details of the proposed algorithm will be presented. The mathematical modeling of the distortion model selection for several lenses, including the statistical information criteria, will be explained in detail, as well as the exact formula for computing the inverse model.

Chapter Four: This chapter displays the results obtained by the algorithm and discusses the performance of the chosen distortion model for different lenses. The scenarios for the experimental work will be defined.

Chapter Five: A conclusion of the overall work will be introduced and some proposed ideas for future work will be stated.

2.1. Pinhole Camera

In order to extract and analyze data from images of the real world, we need to build mathematical models of cameras. Building these models is a difficult task, which has brought the need for many methods to address the problem, starting from the traditional pinhole camera model and extending to more complex ones [21, 23].

We will focus in this section on demonstrating the pinhole camera model which describes mathematically the ideal perspective projection as shown in fig (2.1).

Let us assume a point in space M = (X, Y, Z)^T is projected onto the image plane at the image point m = (u, v)^T, so that the ray from m passes through the camera center (the center of projection). The focal length is the distance from the camera center to the image plane, while the point at which the principal axis meets the image plane is the principal point p = (u_0, v_0). Using the similarity of triangles, the coordinates of M and the focal length describe the point on the image plane as follows [15]:

(X ,Y, Z )T → ( f X/ Z , f Y/ Z )T = (u, v)T (2.1)

Utilizing homogeneous coordinates, we can write this in matrix notation by augmenting M and m with a one, so that M ~ (X, Y, Z, 1)^T and m ~ (u, v, 1)^T, where ~ indicates equality up to a scale factor.

In the computer vision field, projective spaces and homogeneous coordinates are widely utilized, and a plentiful amount of information exists on the subject [16, 21].

Figure 2.1: (a) The Pinhole camera geometry and (b) a profile description to show the similarity of triangles.

Using homogenous coordinates to express pinhole camera:

\[ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \]    (2.2)

The internal parameters of the camera will be used to form the camera calibration matrix. This matrix expressed by the following form:

\[ K = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \]    (2.3)

The pinhole camera has the following features, which are essential to notice:

It does not include parameters for rectangular pixels, non-orthogonal image axes or principal point offset.

It assumes the point M is in camera coordinates.
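As a concrete illustration of Eqs. (2.1) and (2.2), the projection can be sketched in a few lines of NumPy. The focal length and the point coordinates below are hypothetical values chosen only for this example:

```python
import numpy as np

# Hypothetical focal length (in pixel units), for illustration only.
f = 800.0

# Pinhole projection matrix from Eq. (2.2).
P = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# A point M in camera coordinates, in homogeneous form (X, Y, Z, 1).
M = np.array([0.5, -0.25, 2.0, 1.0])

m = P @ M               # homogeneous image point, scaled by Z
u, v = m[:2] / m[2]     # divide out the scale factor

# The result agrees with Eq. (2.1): u = f*X/Z, v = f*Y/Z.
print(u, v)  # 200.0 -100.0
```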

2.2 Projective camera

The pinhole camera provides the foundation for the projective camera, without its restrictions. The parameters that model the internal camera aspects are called the internal parameters; they include the aspect ratio (rectangular pixels) α/β, the focal length f, the skew s (non-orthogonal image axes), and the principal point (u_0, v_0) (the location of the image center). Compared with the pinhole model, the projective camera models a real camera more flexibly and more accurately.

As shown in fig. (2.2), the translation vector t = (t_x, t_y, t_z) and the rotation matrix R comprise the extrinsic parameters that transform M to camera coordinates. We assume that the rotation matrix is parameterized by Euler angles (χ, φ, γ), although it could be parameterized in various other ways, for example as angle/axis, quaternions, or Cayley-Klein parameters.

Aspect ratio, skew and principal point: The pinhole camera assumes that the two image axes have equal scale in both directions, whereas CCD cameras may have non-square pixels. Here m_x and m_y denote the pixels per unit distance in the x and y directions, respectively [3].

The inhomogeneous mapping representation then becomes:

(X, Y, Z)^T → (f m_x X/Z, f m_y Y/Z)^T = (u, v)^T    (2.4)

In lower-quality cameras, when the image axes are not orthogonal, the non-orthogonality can be included through the skew parameter s:

(X, Y, Z)^T → (f m_x X/Z + sY/Z, f m_y Y/Z)^T = (u, v)^T    (2.5)

The principal point p = (u_0, v_0) can be included as an offset in the image, written in the inhomogeneous representation as:

(X, Y, Z)^T → (f m_x X/Z + sY/Z + u_0, f m_y Y/Z + v_0)^T = (u, v)^T    (2.6)

Keep in mind that the origin of the image is usually in the upper left corner. The homogeneous representation combines all of these as:

\[ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & s & u_0 & 0 \\ 0 & \beta & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \]    (2.7)

Putting α = f m_x and β = f m_y, the projective camera calibration matrix is

\[ K = \begin{bmatrix} \alpha & s & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \]    (2.8)

Rotation and translation:

The projective camera does not constrain the point M in space to be given in camera coordinates. Instead, the world coordinate system is related to the camera coordinates through a rotation R and a translation t. The projection matrix P representing the projective camera can then be written briefly as follows:

P = K [R | t] (2.9)

with a 3×3 rotation matrix

\[ R = \begin{pmatrix} r_1 & r_2 & r_3 \end{pmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \]    (2.10)

and a 3×1 translation vector

t = (t_x, t_y, t_z)^T    (2.11)
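To make Eqs. (2.8) to (2.11) concrete, the following NumPy sketch assembles P = K[R | t] and projects one world point into pixels; all numeric values (intrinsics, rotation angle, translation) are hypothetical and chosen only for illustration:

```python
import numpy as np

# Hypothetical intrinsic parameters for the calibration matrix K, Eq. (2.8).
alpha, beta, s, u0, v0 = 800.0, 820.0, 0.0, 320.0, 240.0
K = np.array([[alpha,    s,  u0],
              [  0.0, beta,  v0],
              [  0.0,  0.0, 1.0]])

# Extrinsics: a rotation of 10 degrees about the y axis and a translation.
phi = np.deg2rad(10.0)
R = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
              [         0.0, 1.0,         0.0],
              [-np.sin(phi), 0.0, np.cos(phi)]])
t = np.array([0.1, -0.05, 2.0])

# Projection matrix P = K [R | t], Eq. (2.9): a 3x4 matrix.
P = K @ np.hstack([R, t[:, None]])

# Project a homogeneous world point to pixel coordinates.
M = np.array([0.2, 0.1, 1.0, 1.0])
m = P @ M
u, v = m[:2] / m[2]
print(u, v)
```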

2.3 Camera parameters

The world and pixel coordinate systems are generally linked by a group of physical parameters such as:

1. The focal length of the lens

2. The pixel size

3. The principal point location

4. The camera location and orientation

In order to rebuild the 3-D structure of a scene from the pixel coordinates of its image points, two types of parameters need to be recovered:

The extrinsic parameters of the camera, which represent the orientation and location of the camera reference frame with respect to the known world reference frame.

The intrinsic camera parameters, which are essential to associate the pixel coordinates of an image point with the corresponding coordinates in the camera reference frame [5].

2.4 The nonlinear distortion model

The mapping between the 3-D points and the 2-D image points consists of a perspective projection together with a function that models the deviations from the ideal pinhole camera. A perspective projection with focal length f maps a 3-D point M, whose coordinates are (X, Y, Z) in the camera-centered coordinate frame, to an "undistorted" image point m_u = (x_u, y_u) on the image plane:

xu = f X/Z (2.12)

yu = f Y/Z (2.13)

Then, m_u is transformed to the distorted image point m_d by the image distortion. The image distortion model [15, 33] is a mapping between the distorted image coordinates, which are observable in the acquired images, and the undistorted image coordinates, which are needed for further computations. There are two components of the image distortion function:

Radial and tangential distortion. The radial distortion displaces the image point from the distortion center along the radial direction. The radial distortion function R is invertible over the image:

R: r_u → r_d = R(r_u), with dR/dr_u (0) = 1    (2.14)

The distortion model can be described as:

x_u = x_d R^(-1)(r_d) / r_d ,   y_u = y_d R^(-1)(r_d) / r_d    (2.15)

where r_d = √(x_d^2 + y_d^2). Likewise, the inverse distortion model is:

x_d = x_u R(r_u) / r_u ,   y_d = y_u R(r_u) / r_u    (2.16)

where r_u = √(x_u^2 + y_u^2).

Finally, the coordinates on the distorted image plane are converted to frame buffer coordinates; these can be stated either in normalized coordinates (i.e. pixels divided by the image dimensions) or in pixels, depending on the unit of f:

x_i = S_x x_d + C_x

(2.17)

y_i = y_d + C_y

where (C_x, C_y) are the image coordinates of the principal point and S_x is the image aspect ratio.

2.4.1 Polynomial distortion models

Equations (2.15) depict the lens distortion model, and they can be written as an infinite series:

x_u = x_d (1 + k_1 r_d^2 + k_2 r_d^4 + …)

(2.18)

y_u = y_d (1 + k_1 r_d^2 + k_2 r_d^4 + …)

A precision of around 0.1 pixels in the image space can be accomplished, even with lenses that display large distortion, by calibrating only the first-order radially symmetric distortion parameter k_1 together with the other parameters of the perspective camera, as past studies have shown [3, 33].

The undistorted coordinates are given by the condition:

x_u = x_d (1 + k_1 r_d^2)

(2.19)

y_u = y_d (1 + k_1 r_d^2)

The inverse distortion model is found by solving the following equation for r_d, given r_u:

r_u = r_d (1 + k_1 r_d^2)    (2.20)

where r_u = √(x_u^2 + y_u^2) is the undistorted radius and r_d is the distorted radius.

In lenses with higher distortion, it may be essential to utilize higher-order terms in the distortion model [15, 33]. In this case the transformation from undistorted to distorted coordinates has no closed-form solution, but the simple Newton technique is sufficient. Two terms of Eq. (2.18) were therefore used in our work to compensate for the nonlinear distortion of the lenses.

We therefore focused on the distortion models that are most appropriate for this sort of lens.
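A minimal sketch of this Newton inversion for the two-term model of Eq. (2.18) is shown below; the coefficient values k1 and k2 are hypothetical and chosen only for illustration:

```python
def undistorted_radius(r_d, k1, k2=0.0):
    """Forward model, Eqs. (2.18)/(2.20): distorted -> undistorted radius."""
    return r_d * (1.0 + k1 * r_d**2 + k2 * r_d**4)

def distorted_radius(r_u, k1, k2=0.0, iters=20):
    """Solve r_u = r_d (1 + k1 r_d^2 + k2 r_d^4) for r_d by Newton's method."""
    r_d = r_u  # the undistorted radius is a reasonable starting guess
    for _ in range(iters):
        g = r_d * (1.0 + k1 * r_d**2 + k2 * r_d**4) - r_u
        dg = 1.0 + 3.0 * k1 * r_d**2 + 5.0 * k2 * r_d**4
        r_d -= g / dg
    return r_d

# Hypothetical coefficients; round-trip a radius through both directions.
k1, k2 = -0.2, 0.05
r_u = undistorted_radius(0.8, k1, k2)
r_d = distorted_radius(r_u, k1, k2)
assert abs(r_d - 0.8) < 1e-10  # Newton recovers the distorted radius
```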

2.4.2 Fish-eye models

Fish-eye lenses are designed to include certain types of nonlinear distortion. Consequently, it is better to use a distortion model which attempts to reproduce this effect than to use a high number of terms in the series of Equation (2.18).

In [37], Basu and Licardie modeled fish-eye lenses using the Fish-Eye Transform (FET) or a polynomial distortion model (PFET), demonstrating that the PFET model works better than the FET. The FET is based on the observation that fish-eye images have high resolution in the center.

2.4.3 Inverse Models

Much work after Conrady and Brown has been devoted to eliminating distortion from images. Atkinson wrote a complete review of this problem for both the photogrammetry and computer vision communities, and one can also refer to several books, including the famous Manual of Photogrammetry [33, 38].

Nevertheless, the inverse distortion problem is evident in numerous applications, although it is something of a poor relation among the distortion problems. Several solutions to this back-projection issue can be found in the literature, as shown by Silvén and Heikkilä [1].

To our knowledge, no analytic solution to the inverse mapping has been found.

Furthermore, an additional invertible model based on the field of view was exhibited by Devernay and Faugeras [15]. A complete review of these models was written by Hughes [39] and also appears in [40].

3.1. Camera calibration algorithm (Zhang’s method)

Zhang's technique was utilized for its straightforwardness and known accuracy. In this thesis, we use it for some evaluations of the lens distortion models. It requires several images of a chessboard pattern of known dimensions, captured from different positions. The algorithm calculates the camera calibration parameters using the relation between the checkerboard corners in a camera coordinate system and a world coordinate system attached to the checkerboard plane [41].

Zhang's calibration method is used to obtain the internal and external camera parameters. It uses a calibration template shot from different angles in more than two images.

This method is simpler and easier than other methods, because the planar template is easy to make. In addition, there is no need to know the displacement information or the specific position of the planar template's movement [42].

Zhang [3, 43] followed an algorithm that requires at least two dissimilar perspectives of a planar pattern. In order to get a more precise calibration, a larger number of perspectives is used.

Examining the quality of the results is a vital step in the camera calibration process, and it clearly depends on the accuracy of the data and on the computer hardware.

To enhance the calculated camera model, a lens distortion model is included in the calibration process. The distortion model can be evaluated alone or together with the pinhole model.

Geometric invariants such as straight lines, vanishing points, images of a sphere, or correspondences between points in various pictures from different perspectives are utilized when calibrating the distortion model alone [42].

The accuracy of the calibration is assessed by comparing the variance between the measurement results and their position information. The exactness of the camera calibration determines the measurement accuracy of the system, and the quality of the calibration result directly affects the final measurement accuracy.

The precision of the eventual calibration is directly affected by the feature point extraction and by the selection of the image mark-point pattern during the calibration process. In precise 3-D reconstruction of the scene structure, the issue of estimating the original 2-D signal becomes clear.

For a good calibration process, the noise component of the system needs to be compensated. Uncertainty clearly appears in the image feature coordinates due to the random part of the measurement noise, whose primary element is signal quantization. Fig. (3.1) shows the camera calibration algorithm (Zhang's method):

Figure (3.1): Flowchart of camera calibration algorithm

3.2. Lens Distortion Correction

Lens distortion correction is a vital stage. The ideal camera model is the pinhole camera, and lens distortion is the main distinction between a genuine camera and the ideal pinhole camera from the geometric perspective.

Lens distortion occurs due to the numerous kinds of defects in the design and assembly of the lenses constituting the camera's optical system. It produces a spatially varying displacement of each point in the image field from its ideal pinhole position [44].

Up to this point, camera lens distortion has not been considered. Lens distortions are well-known artifacts that prevent the use of simple pinhole camera models in most applications. Although they are the most troublesome sort of lens aberration, they do not affect image quality; nevertheless, they have a notable effect on image geometry [45].

3.2.1. Lens distortion effects

Distortions are aberrations that occur due to the existence of a lens. Several kinds of distortion occur in imaging systems, but the main type is radial distortion, as the other types, e.g. thin prism distortion, normally have less significant impact [44].

Radial distortion is mainly one aberration to be considered. In cylindrically symmetric lens systems there are mainly five aberrations, of which distortion is one; the other four are spherical aberration, coma, astigmatism, and field curvature.

As mentioned in chapter one, we ignore tangential distortion, since there is not much experience with genuine cameras that present noteworthy tangential distortion.

Distortion is the aberration suffered by the ray passing through the lens center, and it originates from the nonlinearity of Snell's law. Distortion depends on the distance of the object point from the lens axis (or on the angle that the ray from the object point to the lens center makes with the axis); thus a centered square object appears deformed with either pincushion or barrel distortion, as shown in Fig. (3.2), since its corner points are farther from or closer to the axis than the points in the middle [46].

3.2.2. Types of lens distortion

Theoretically, it is conceivable to define a lens that presents no distortion. In practice, no lens is perfect, mainly due to manufacturing: it is much simpler to make a spherical lens than a mathematically ideal parabolic lens.

Additionally, it is hard to align the lens and the imager mechanically with precision.

The main kind of lens distortion, radial distortion, is caused by flaws in the lens shape and manifests itself only as radial positional error. The other types of distortion are usually caused by improper lens and camera assembly, and they produce both radial and tangential errors in point positions.

Radial distortion is a displacement of the image point radially toward or away from the image center; it arises because objects at different angular distances from the lens axis undergo different magnifications, so the lens lacks straight-line transmission. Figs. (3.2) and (3.3) show the radial distortion model and its types. It is important to note that radial distortion is highly correlated with focal length, even though it is not modeled within the intrinsic parameters of the camera [31].

There are three sorts of radial distortion, as follows:

Barrel distortion:

Also called negative displacement, this kind of distortion is widespread in wide-angle lenses. It occurs when points are moved from their positions toward the image center.

Pincushion distortion:

Also called positive displacement, this kind of distortion occurs with narrow-angle lenses. It occurs when points are moved away from the image axis.

Mustache distortion:

Also called wavy or complex distortion, it is the worst type, representing a mix of both pincushion and barrel radial distortion.

This kind of distortion starts near the image center as barrel distortion and then progressively changes toward pincushion distortion at the image periphery, making the horizontal lines in the top half of the frame look like a handlebar mustache.

Mathematically, pincushion and barrel distortion increase with the square of the distance from the center, i.e., they are second-degree effects. Mustache distortion also has an important fourth-degree component: near the center the second-degree barrel distortion governs, while at the edge the fourth-degree distortion in the pincushion direction dominates.

Physically, pincushion distortion can exist in systems with large focal lengths, while barrel distortion can exist at smaller focal lengths [31, 47, 48]. The effects of these radial distortions can be significant, especially in low-cost wide-angle lenses, which are still frequently used today.

Practically, distortion is small and can be described by a few terms of a Taylor series expansion. The radial position of the image points is rescaled according to the following equations [49]:

rd= r +δr (3.1)

Where δr is the radial distortion and rd is the radial distorted distance.

The following polynomial equation symbolizes the radial distortion:

r_d = r f(r) = r (1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + …)    (3.2)

where k_1, k_2, k_3, … are the distortion coefficients. This leads to:

x_d = x f(r) ,   y_d = y f(r)    (3.3)

The corresponding relation between the undistorted and distorted image points is:

u_d − u_0 = (u − u_0) f(r)

(3.4)

v_d − v_0 = (v − v_0) f(r)

As eq. (3.2) and its variants show, the distortion is mainly governed by the first term; it has also been found that numerical instability can occur due to high-order terms.

To our knowledge, the inverse mapping has no analytic solution. The inverse of the polynomial function in eq. (3.2) will therefore be solved using a numerical method which applies an exact formula in an iterative mapping, since it is quite hard to use analytical methods here [31].

Figure (3.3): Radial Distortion Model

Radial distortion can commonly be reduced or eliminated by first adopting a parametric radial distortion model, estimating the distortion coefficients, and then correcting the distortion. Much of the current work can be traced back to early studies in photogrammetry [33].

In this thesis, two models of radial distortion have been adopted and applied to four lenses with different focal lengths (70 mm, 100 mm, 150 mm, and 190 mm). The f(r) in equation (3.2) becomes:

4th-order distortion model, using two coefficients: f(r) = 1 + k_1 r^2 + k_2 r^4

6th-order distortion model, using three coefficients: f(r) = 1 + k_1 r^2 + k_2 r^4 + k_3 r^6

These two polynomial radial distortion models act as the candidates among which the best-performing model is selected automatically, as will be presented in Ch. 4.
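A least-squares sketch of fitting these two models is given below, using synthetic noise-free radii generated from hypothetical 6th-order coefficients. Since eq. (3.2) gives r_d − r = k_1 r^3 + k_2 r^5 + k_3 r^7, the coefficients follow from a linear solve:

```python
import numpy as np

def fit_radial_model(r, r_d, order):
    """Least-squares fit of r_d = r (1 + k1 r^2 + k2 r^4 [+ k3 r^6]).

    order=4 estimates (k1, k2); order=6 estimates (k1, k2, k3).
    """
    powers = {4: (3, 5), 6: (3, 5, 7)}[order]
    A = np.stack([r**p for p in powers], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, r_d - r, rcond=None)
    return coeffs

def model_sse(r, r_d, coeffs):
    """Sum of squared residuals of a fitted model."""
    powers = (3, 5, 7)[:len(coeffs)]
    pred = r + sum(k * r**p for k, p in zip(coeffs, powers))
    return float(np.sum((pred - r_d)**2))

# Synthetic radii from a hypothetical 6th-order model.
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 200)
k_true = np.array([-0.30, 0.10, -0.02])
r_d = r * (1 + k_true[0]*r**2 + k_true[1]*r**4 + k_true[2]*r**6)

k4 = fit_radial_model(r, r_d, 4)
k6 = fit_radial_model(r, r_d, 6)
print("4th-order SSE:", model_sse(r, r_d, k4))
print("6th-order SSE:", model_sse(r, r_d, k6))  # matches the generating model
```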

3.3. The proposed method

We propose an algorithm which uses model selection criteria to automatically select the best lens distortion model out of two distortion models for four different lenses.

The inverse formula for the chosen radial distortion model of the lens with minimum distortion is then computed, to reduce the distortion coefficients and the accuracy loss.

A demonstration of the whole work is given in the following sections.

Figure (3.4) shows an overview of the algorithm.

Figure (3.4): An overview of the algorithm

3.3.1. Distortion model selection

When several competing models can represent the distortion of a given system, the task of distortion model selection is to choose the finest one. Using the most fitting and most parsimonious model gives both better precision and reduced computational complexity. As mentioned earlier, numerical instability can occur due to the higher-order terms in the polynomial radial distortion model, yet the model with more degrees of freedom will in most cases fit the data more closely than less complex models [21]. By keeping the number of distortion coefficients low, one can avoid this numerical instability when modeling standard cameras, since higher-order terms are relatively insignificant in such systems [2, 16, 21]; when modeling wide-angle lenses, however, higher-order terms may be necessary.

The aim is to choose the best radial distortion model of minimum error automatically without losing accuracy.

For statistical model selection, the information-theoretic Akaike criterion (AIC) is used [51]; it has the following form:

AIC = -2 logL(θ;mi) + 2 k (3.5)

The AIC measure chooses the model that minimizes the error on new observations. Here k is the number of parameters in the model, and log L(θ; m_i) is the likelihood of the model parameters θ = (α, γ, u_0, β, v_0) given the observations m_i.

In equation (3.5), the first term measures how well the model fits, while the second term penalizes more complex models.

According to the model parameters θ, the estimated projection of point M_i will be denoted m̂_i.

The sum of squared errors (SSE) is computed as SSE = ∑_i r_i^2, where r_i = ‖m_i − m̂_i‖ is the distance between the measured and estimated image points.

The log -likelihood of the model parameters θ given the observations mi is then [21]:

log L(θ; m_i) = −1/(2σ^2) ∑_i r_i^2 + constant    (3.6)

Where σ^2 is the variance of noise.

The maximum likelihood estimate (MLE) is the set of parameters that maximizes log L(θ; m_i). Maximizing the log-likelihood is equivalent to maximizing the likelihood of the model parameters θ, which is in turn equivalent to minimizing the SSE; thus, substituting equation (3.6) into equation (3.5) leads to the following form:

AIC = (1/σ^2) ∑_i r_i^2 + 2k    (3.7)

The variance is calculated using the formula from [21, 52]:

σ^2 = ∑_i r_i^2 / (N − K̂)    (3.8)

where K̂ represents the number of coefficients of the most complex model in the library and N refers to the number of samples. The same was done with all the criteria in Table 3.1 [51, 53, 54, 55].

Table 3.1: Model selection criterions

One of the criteria listed in Table 3.1 is used to find the best model.
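A small numeric sketch of Eqs. (3.5) to (3.8) is given below. The sum-squared reprojection errors and the number of corner observations are assumed, illustrative values, not measured results:

```python
def aic(sse, sigma2, k):
    """Eq. (3.7): AIC = SSE / sigma^2 + 2k."""
    return sse / sigma2 + 2 * k

# Assumed SSE values for the 4th-order (k=2) and 6th-order (k=3) models.
sse = {"4th": 12.5, "6th": 12.1}
params = {"4th": 2, "6th": 3}
N = 440          # e.g. 88 corners observed in 5 images
K_hat = 3        # coefficient count of the most complex model, Eq. (3.8)

sigma2 = sse["6th"] / (N - K_hat)   # variance from the most complex model
scores = {name: aic(sse[name], sigma2, params[name]) for name in sse}

best = min(scores, key=scores.get)
print(scores)
print("selected model:", best)  # the 6th-order model wins with these numbers
```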

3.3.2. Inverse radial distortion exact formula

Concerning the polynomial model, numerous approaches to computing the inverse radial distortion have been analyzed, and they can be grouped into three essential classes [56]:

Approximation: an inverse approximation via Taylor expansion was introduced by Mallon [57] and Heikkilä [1, 58], and after them by Wei and Ma [59] using first-order derivations. Sometimes this approximation is presumed to be the real model, according to Mallon and Whelan, and it certainly serves for minor distortion levels [57]. "An international method, inverse distortion plus image interpolation, is presented in a patent held by Adobe Systems Incorporated" [60].

Iterative: starting from an initial prediction, the distorted location is iteratively refined until suitable convergence is reached [61, 17].

Look-up table: the mapping is computed in advance for all pixels and stored in a look-up table [31].

Using the general radial distortion model for distortion correction, written in the following equation, we can find the inverse transformation of the selected radial distortion model and state it in a form analogous to the direct transformation, with parameters (k1′, k2′, k3′, …):

x′ = x + x (k_1 r^2 + k_2 r^4 + k_3 r^6 + …)

(3.9)

y′ = y + y (k_1 r^2 + k_2 r^4 + k_3 r^6 + …)

Therefore, following [31], the first three coefficients of the inverse of the 6th-order radial distortion model are:

b_1 = −k_1

b_2 = 3k_1^2 − k_2

b_3 = −12k_1^3 + 8k_1k_2 − k_3

Applying these formulas iteratively inside a loop yields the reduction of the distortion coefficients.
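The formulas above can be verified numerically with a short round-trip sketch; the distortion coefficients below are hypothetical and chosen only for illustration:

```python
def inverse_coeffs(k1, k2=0.0, k3=0.0):
    """First three inverse-model coefficients, following [31]."""
    b1 = -k1
    b2 = 3.0 * k1**2 - k2
    b3 = -12.0 * k1**3 + 8.0 * k1 * k2 - k3
    return b1, b2, b3

def apply_radial(r, c1, c2, c3):
    """Evaluate r * (1 + c1 r^2 + c2 r^4 + c3 r^6)."""
    return r * (1.0 + c1 * r**2 + c2 * r**4 + c3 * r**6)

# Hypothetical distortion coefficients.
k1, k2, k3 = -0.10, 0.01, -0.001
b1, b2, b3 = inverse_coeffs(k1, k2, k3)

r_u = 0.5
r_d = apply_radial(r_u, k1, k2, k3)      # distort
r_back = apply_radial(r_d, b1, b2, b3)   # approximately undistort
print(abs(r_back - r_u))  # small residual, shrinking as more terms are used
```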

Chapter Four

Experimental and Simulation Results

4.1. Introduction

A great many lens distortion models exist, with numerous differences, and every distortion model is evaluated with various methods.

Selecting the right model can be a genuinely complicated job when correcting lens distortion. Several model selection methods have been presented, and some of them give inaccurate results.

The results presented in this chapter were obtained with the algorithm described in chapter 3. The performance of the 6th-order radial distortion model was studied, as well as its inverse formula, with the aim of reducing the distortion coefficients and the accuracy loss.

4.2. Experimental sets

The experimental work was done in the laboratories of the laser and optoelectronics department, using a digital camera to capture five images for each of four lenses with different focal lengths, using the following components:

1- Sony G camera with a 14.1 MP CCD sensor.

2- A ruler with measurements.

3- Four lenses of focal lengths 190mm, 150mm, 100mm, and 70mm.

4- A chessboard pattern with resolution 1830×1330 and 8×11 = 88 corner points.

Fig (4.1) and Fig (4.2) show the experimental setup and components.

Figure (4.1): Experimental setup

Figure (4.2): Experimental components

4.2.2. Implementation

For the experimental work, each lens was placed at a certain distance between the Sony G camera and the pattern, chosen according to the image clearness, as shown in fig (4.1).

For the 70 mm lens: the distance between the camera and the lens was 1.5cm, and between the lens and the pattern 31cm.

For the 100 mm lens: the distance between the camera and the lens was 3cm, and between the lens and the pattern 34cm.

For the 150 mm lens: the distance between the camera and the lens was 4cm, and between the lens and the pattern 38cm.

For the 190 mm lens: the distance between the camera and the lens was 6cm, and between the lens and the pattern 40cm.

The first component of our algorithm's software, shown in Fig (3.3), takes as input five images of a chessboard pattern captured with each lens from different positions and orientations, as shown in figures (4.3) and (4.4). The second component applies Harris corner detection, using 4×4 chessboards from the pattern, as shown in fig (4.5).
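To illustrate the corner-detection step, the Harris response can be sketched from first principles; this is a simplified NumPy version (a box window instead of the usual Gaussian, and a small synthetic checkerboard instead of the captured pattern), not the exact routine used in the experiments:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the structure tensor of the image gradients, summed over a 3x3
    box window (a Gaussian window is more common in practice)."""
    Iy, Ix = np.gradient(img.astype(np.float64))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # 3x3 neighbourhood sum via zero padding and shifted slices
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A tiny synthetic checkerboard: the interior corner at (8, 8) should
# produce a strong positive response, while flat regions give R = 0.
img = np.kron([[0, 1], [1, 0]], np.ones((8, 8)))
R = harris_response(img)
```

Corners are then taken as local maxima of R above a threshold; on the real pattern this yields the 88 corner points used for calibration.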

The third, fourth, and fifth components estimate the internal and external parameters of the camera for the selected model, which include the focal length, principal point, and skew, together with the two important quantities, distortion and pixel error, after initialization and optimization, as shown in tables (4.1) and (4.2).

The sixth and seventh components choose the right model automatically from two models of escalating complexity, using the statistical information criteria listed in table (3.1).
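As an illustration of criterion-based selection, the standard textbook forms of AIC and BIC can be sketched as follows; the exact definitions used in table (3.1), and the residual numbers below, are assumptions for illustration only:

```python
import math

def aic(sse, n, p):
    """Akaike information criterion for a least-squares fit with
    n residuals, p parameters, and sum of squared errors sse."""
    return n * math.log(sse / n) + 2 * p

def bic(sse, n, p):
    """Bayesian information criterion; the p*log(n) term penalises
    extra parameters more heavily than AIC for large n."""
    return n * math.log(sse / n) + p * math.log(n)

def select_model(candidates, n):
    """candidates: list of (name, sse, num_params); lower BIC wins."""
    return min(candidates, key=lambda c: bic(c[1], n, c[2]))

# Hypothetical residuals over n = 88 corner points: the 6th order model
# fits slightly better, but with these numbers the penalty for its two
# extra coefficients outweighs the small SSE gain, so the simpler wins.
best = select_model([("2nd order", 4.1, 5), ("6th order", 3.9, 7)], n=88)
```

The same scheme applies to MDL and CAIC, each with its own penalty term; the model with the lowest criterion value is chosen automatically.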

The last components invert the chosen model using an exact formula, apply the formula twice, and compare the result with the original. A secondary step repeats the process 10,000 times and compares the final result with the original.

The functions used were adapted from several sources.

Figure (4.3): Image captured with lens of focal length 70mm & 100mm

Figure (4.4): Image captured with lens of focal length 150mm & 190mm

Figure (4.5): Extracted Corners

Tables (4.1) and (4.2) show the evaluation results for both the 2nd and 6th order radial distortion models after optimization for the used lenses.

Table (4.1): Evaluation results after optimization using the 2nd order distortion model for several lenses, where * marks the proper lens.

| Lens (mm) | K1 | K2 | K3 | Pixel error u | Pixel error v |
|---|---|---|---|---|---|
| 1) 190* | 0.25901 | – | – | 2.04594 | 1.91832 |
| 2) 150 | 0.94332 | – | – | 3.46324 | 4.13295 |
| 3) 100 | 0.52793 | – | – | 1.94436 | 2.96566 |
| 4) 70 | 0.36694 | – | – | 4.43100 | 4.46265 |

Table (4.2): Evaluation results after optimization using the 6th order distortion model for several lenses, where * marks the proper lens.

| Lens (mm) | K1 | K2 | K3 | Pixel error u | Pixel error v |
|---|---|---|---|---|---|
| 1) 190* | 0.26567 | -0.12787 | 0.04428 | 1.93297 | 1.90527 |
| 2) 150 | 0.94949 | -1.86983 | -0.09026 | 3.18991 | 3.97695 |
| 3) 100 | 0.64628 | 19.27099 | -0.00648 | 1.95038 | 2.42106 |
| 4) 70 | 0.37225 | 0.31389 | -0.01690 | 4.24701 | 4.44458 |

The complexities of both the 2nd order and 6th order models were evaluated analytically using the model selection criteria shown in table (3.1), in order to choose the best distortion model automatically.

Table (4.4): Complexity of the 2nd order distortion model for the four lenses using the model selection criteria.

| Lens (mm) | AIC | MDL | BIC | SSD | CAIC |
|---|---|---|---|---|---|
| 1) 190 | 4 | 2.349 | 3.39 | 2.06 | 2.69 |
| 2) 150 | 6 | 4.422 | 4.42 | 4.17 | 5.84 |
| 3) 100 | 8 | 6.477 | 7.90 | 6.26 | 7.95 |
| 4) 70 | 8 | 7.50 | 7.90 | 6.26 | 7.95 |

Table (4.5): Complexity of the 6th order distortion model for the four lenses using the model selection criteria.

| Lens (mm) | AIC | MDL | BIC | SSD | CAIC |
|---|---|---|---|---|---|
| 1) 190 | 8 | 3.048 | 6.193 | 1.594 | 7.096 |
| 2) 150 | 8 | 3.048 | 6.193 | 1.594 | 7.096 |
| 3) 100 | 8 | 3.048 | 6.193 | 1.594 | 7.096 |
| 4) 70 | 8 | 3.048 | 6.193 | 1.594 | 7.096 |

The complexity of the 6th order distortion model is calculated according to the MDL criterion in table (4.5). Similarly, the complexity of the 2nd order distortion model in table (4.4) is calculated according to the MDL criterion, as shown in fig (4.6).

Figure (4.6): Graph of the complexity of the 2nd and 6th Distortion model for four different lenses (mm) (1) 190mm, (2)150mm, (3) 100mm and (4) 70mm.

The MDL criterion was chosen over the others because it selects a complexity equal to or lower than that of the other criteria without incurring a significantly higher error.

The complexity of the 2nd order model increased as the focal length decreased, while the complexity of the 6th order model stayed level for all lenses.

Fig (4.7) and Fig (4.8) show that, for both models, the 190mm lens generates low distortion and is therefore the proper lens for testing the accuracy of the inverse formula of the chosen model, while the 70mm lens exhibits high distortion.

Figure (4.7): Sum of squared errors using the 2nd order distortion model for each lens: (1) 190mm, (2) 150mm, (3) 100mm, and (4) 70mm.

Figure (4.8): Sum of squared errors using the 6th order distortion model for each lens: (1) 190mm, (2) 150mm, (3) 100mm, and (4) 70mm.

Table (4.6) shows the original values of the 6th order distortion coefficients computed using the 190mm lens, together with the values given by the inverse formula.

Table (4.6): The inverse of the 6th order radial distortion model with coefficients K1, K2, and K3.

| Radial distortion coefficient | Original value | Inverse value |
|---|---|---|
| K1 | 0.26567 | -0.2656700000000 |
| K2 | -0.12787 | 0.33961164670000 |
| K3 | 0.04428 | -0.541063396315156 |

Table (4.7) shows the inverse loop of the 6th order radial distortion model for the 190mm lens after several inversions. Delta loop 1 represents the second inverse compared with the original distortion coefficients; Delta loop 10000 is the 10001st inverse compared with the original distortion coefficients.

K1 and K2 in both columns, 'Delta loop 1' and 'Delta loop 10000', did not change, while the changes in K3 are small with respect to the corresponding coefficient.
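This stability is consistent with the closed-form inversion being algebraically self-inverse for the first three coefficients: substituting b1, b2, b3 back into the same formulas recovers k1, k2, k3 exactly, so only floating-point round-off can accumulate over the loop. A minimal sketch of the repeated-inversion experiment (coefficients from table (4.2)):

```python
def invert_coeffs(k1, k2, k3):
    """Closed-form inverse coefficients of the 6th order model [31]."""
    return (-k1,
            3.0 * k1 ** 2 - k2,
            -12.0 * k1 ** 3 + 8.0 * k1 * k2 - k3)

# Coefficients of the 190 mm lens (table (4.2))
k_orig = (0.26567, -0.12787, 0.04428)

# Apply the inverse formula twice per pass, 10,000 passes, and compare
# the final coefficients with the originals.
c = k_orig
for _ in range(10000):
    c = invert_coeffs(*invert_coeffs(*c))
deltas = [abs(a - b) for a, b in zip(c, k_orig)]
```

The deltas stay at the level of floating-point round-off even after 10,000 passes, mirroring the behaviour reported in table (4.7).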

Table (4.7): 6th order radial distortion model for lens of 190mm inverse loop after several inversions

## Chapter five

Conclusions and Future Works

5.1. Conclusions

In this work, statistical information criteria were used to find the best lens distortion model; the 6th order and 2nd order radial distortion models were taken into account. In addition, an exact formula was used to invert the chosen radial distortion model.

The algorithm successfully chose the best radial distortion model for four different lenses, the complexity of both distortion models was plotted, and the inverse of the selected model was computed to reduce the distortion coefficients and the accuracy loss.

One criterion was chosen over the others because it selects a complexity equal to or lower than that of the other criteria without incurring a significantly higher error. Likewise, one lens was relied on over the other three, as it exhibits the lowest distortion.