
Overview of Strategy for Media-Adventitia Border Extraction: 4 Steps for Image Preprocessing and Classification with Sparse Representation

  • Published: 1 April 2019
  • Last Modified: 23 July 2024



Our strategy for media-adventitia border extraction is summarized in four main steps:

I: Preprocessing

Label the IVUS image via sparse-representation classification on the local appearance model.

Construct the main sparse edge pattern and define the external force based on the dynamic directional convolution vector field on the filtered image.

Convert the initial binary image, obtained by multiplying the two images produced in steps 1 and 2, to polar coordinates.

II: Automatic initial contour detection

Extract the initial contour from the initial binary image produced in the preprocessing step.

Smooth the initial contour with a robust smoothing algorithm.

Refine and re-smooth the initial contour in calcification regions that lack shadows.

III: Active contour segmentation technique

Convert the initial contour to Cartesian coordinates.

Segment the original image with the dynamic directional convolution vector field active contour model, using the obtained initial contour.

IV: Contour refinement

Detect the major calcification and shadow regions.

Convert the extracted contour to polar coordinates.

Refine the contour in the detected calcification and shadow regions.

Convert the final contour to Cartesian coordinates.

Preprocessing

An overview of the sparse representation framework

The sparse representation framework has recently received a great deal of attention for solving various problems in signal processing, image processing and computer vision. An intensive investigation of the motivations, challenges and applications of dictionary learning algorithms in the sparse representation framework is given in [19]. In this section, the basic concepts of this framework are discussed.

Sparse Coding

A dictionary is formed of k atoms collected as n-dimensional column vectors in a matrix D = [d_1, d_2, …, d_k] ∈ R^(n×k). When k > n, the atoms {d_i}_(i=1)^k form an over-complete basis for signal reconstruction. Accordingly, a signal x ∈ R^n can be approximated by a linear combination of the dictionary atoms, x = ∑_(i=1)^k α_i d_i = D·α, where α ∈ R^k is called the sparse code of the signal x. Since an over-complete dictionary is used for signal reconstruction, the resulting equation system is underdetermined and has many solutions. Consequently, a constraint, namely the sparse code with the smallest number of non-zero coefficients, must be imposed on this system to obtain a unique solution. The signal approximation is thus formulated as:

α = argmin_α ‖α‖_0  subject to  x = D·α.   (1)

where ‖α‖_0 is the l_0 norm of the coefficient vector [23].

This problem is NP-hard, and approximate solutions can be obtained by several pursuit algorithms, including matching pursuit (MP) [24] and orthogonal matching pursuit (OMP) [25].
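The greedy pursuit idea can be sketched with a minimal OMP (a simplified illustration of [25], not the exact reference implementation; the toy dictionary below, an identity stacked with a normalized Hadamard matrix, is an assumption chosen so recovery is exact):

```python
import numpy as np

def omp(D, x, L):
    """Greedy orthogonal matching pursuit: approximate x with at most L atoms of D."""
    n, k = D.shape
    residual = x.astype(float).copy()
    support = []                      # indices of selected atoms
    alpha = np.zeros(k)
    for _ in range(L):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the selected support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coef
        residual = x - D @ alpha
        if np.linalg.norm(residual) < 1e-10:
            break
    return alpha

# toy check: an over-complete dictionary of unit-norm atoms
# (identity plus a normalized 8x8 Hadamard matrix), x built from 2 atoms
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H8 = np.kron(np.kron(H2, H2), H2)
D = np.hstack([np.eye(8), H8 / np.sqrt(8)])
x = 2.0 * D[:, 3] - 1.5 * D[:, 5]
alpha = omp(D, x, L=2)
```

Here the sparse code is recovered exactly because the two active atoms dominate the correlations; with a highly coherent dictionary the greedy choice can fail, which is why the sparsity level L matters.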

Reconstructive Dictionary learning

The aim of dictionary learning algorithms is to obtain the optimal sparse approximation of a set of m training signals {x_i}_(i=1)^m by computing the best reconstructive dictionary D ∈ R^(n×k). During the dictionary learning process, the reconstructive dictionary and the sparse codes are computed simultaneously according to the optimization problem:

{D, A} = argmin_(D,A) ‖X - D·A‖_F^2  subject to  ‖α_i‖_0 ≤ L  ∀i.   (2)

Training samples {x_i}_(i=1)^m and sparse codes {α_i}_(i=1)^m ⊂ R^k are arranged as column vectors in the matrices X = [x_1, x_2, …, x_m] ∈ R^(n×m) and A = [α_1, α_2, …, α_m] ∈ R^(k×m), respectively. L is the sparsity constraint: it bounds the maximum number of atoms used in the sparse decomposition of each training sample [23].

To solve this optimization problem, several dictionary learning algorithms have been introduced, including the Method of Optimal Directions (MOD) [26] and K-SVD [27]. Both MOD and K-SVD work efficiently with any of the pursuit techniques mentioned above, but the matrix inversion step in MOD can produce degenerate results for untrained dictionaries with a large number of atoms [28].
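The MOD dictionary step has a closed form, D = X·Aᵀ·(A·Aᵀ)⁻¹, which can be sketched as follows (a minimal illustration of the dictionary update only; a full learner alternates it with a pursuit-based sparse coding step, and K-SVD instead updates one atom at a time via an SVD):

```python
import numpy as np

def mod_dictionary_update(X, A):
    """MOD dictionary step for Eq. (2): the closed-form minimizer of
    ||X - D.A||_F^2 for fixed sparse codes A is D = X A^T (A A^T)^(-1)."""
    # pinv guards against a singular A A^T (the degenerate case noted in the text)
    return X @ A.T @ np.linalg.pinv(A @ A.T)

# toy check: given the true sparse codes, the MOD step reproduces a
# dictionary that reconstructs the training matrix exactly
rng = np.random.default_rng(1)
D_true = rng.standard_normal((6, 10))
A = np.zeros((10, 40))
for i in range(40):                   # each training signal mixes 2 atoms
    idx = rng.choice(10, size=2, replace=False)
    A[idx, i] = rng.standard_normal(2)
X = D_true @ A
D = mod_dictionary_update(X, A)
```

Practical implementations also re-normalize the atoms to unit norm after each update; that step is omitted here for brevity.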

Classification based on Sparse Representation

In the sparse representation framework, signal reconstruction by selecting among a set of learned dictionaries has a discriminative nature. It is a fundamental concept in many computer vision problems and has been successful in applications such as texture segmentation and classification [28-29].

To build a classification framework based on sparse representation, we assume a set of l learned dictionaries {D_i}_(i=1)^l ⊂ R^(n×k), where dictionary D_i is trained on a set of signals {x_j}_(j=1)^(n_i) ⊂ R^n from class i. A test signal x ∈ R^n is then classified using:

r_i(x, D_i) = min_α [ ‖x - D_i·α‖_2^2 + λ·‖α‖_0 ];  i = 1, …, l.   (3)

where r_i(x, D_i) is the residual of approximating the test signal over the i-th dictionary. This equation defines a transformation T: R^n → R^l that maps the test signal x ∈ R^n to the residual vector r = [r_1, r_2, …, r_l] ∈ R^l by applying a matching pursuit algorithm. Finally, a minimization rule assigns a label to the test signal [27]:

label = argmin_(i=1,…,l) r_i(x, D_i).   (4)
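Eqs. (3)-(4) can be sketched as follows (a simplified stand-in: the λ-penalized pursuit is replaced by a greedy fixed-sparsity fit per dictionary, and the toy one-atom dictionaries are illustrative assumptions):

```python
import numpy as np

def class_residual(x, D, L=1):
    """Residual of approximating x over dictionary D with an L-atom greedy fit."""
    r = x.astype(float).copy()
    support = []
    for _ in range(L):
        support.append(int(np.argmax(np.abs(D.T @ r))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ coef
    return np.linalg.norm(r) ** 2

def src_label(x, dictionaries, L=1):
    """Eq. (4): assign the class whose dictionary yields the smallest residual."""
    residuals = [class_residual(x, D, L) for D in dictionaries]
    return int(np.argmin(residuals))

# toy check: two one-atom "class dictionaries" along orthogonal directions
D0 = np.array([[1.0], [0.0]])     # class 0: the x axis
D1 = np.array([[0.0], [1.0]])     # class 1: the y axis
label = src_label(np.array([0.1, 2.0]), [D0, D1])    # mostly along y
label0 = src_label(np.array([3.0, -0.2]), [D0, D1])  # mostly along x
```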

Local Image Appearance Model

As mentioned before, the intensity of IVUS images suffers from inherent artifacts such as speckle, attenuation, shadows, calcification and signal dropout caused by the acquisition-orientation dependency. These artifacts can degrade any segmentation algorithm based on gray-level values alone. A local image patch centered at a spatial point, however, provides more reliable and significant local appearance information. Consequently, a classical feature vector based on local gray-value information is a powerful tool for classification in the sparse representation framework: the extracted features carry enough information about the interclass differences and intraclass coherence of the regions of interest. Accordingly, the classification task is enhanced by using the extracted feature vector instead of only the gray values of the patch pixels. Since the media, the adventitia and other tissues have distinctive local appearances in IVUS images (the media is seen as a thin black line, while the adventitia has high echogenicity and a bright appearance [1]), a feature vector based on pixel intensity is created. This feature vector f_i ∈ R^N, which describes the characteristics of a pixel at location s_i, consists of N entries, including:

the gray-level values of an n×n square neighborhood centered at location s_i: a column vector G(s_i);

the mean value of the neighborhood pixels: Ḡ(s_i);

the gray-level values of a second n×n square neighborhood centered at location s_i, extracted from a contour-enhanced version of the original image: a column vector CG(s_i).

Hence the feature vector for each pixel consists of:

f_i = [G(s_i)^T, Ḡ(s_i), CG(s_i)^T]^T.   (5)
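Eq. (5) for one pixel can be sketched as follows (the contour-enhanced image is passed in precomputed, since the essay does not specify the enhancement operator; the toy "enhanced" image below is just a scaled copy for checking shapes):

```python
import numpy as np

def feature_vector(image, enhanced, si, n=3):
    """Eq. (5): f_i = [G(s_i)^T, mean(G(s_i)), CG(s_i)^T]^T for pixel location s_i."""
    r, c = si
    h = n // 2
    # G(s_i): raw gray values of the n x n neighborhood, flattened column-wise
    G = image[r - h:r + h + 1, c - h:c + h + 1].astype(float).ravel()
    # CG(s_i): same neighborhood taken from the contour-enhanced image
    CG = enhanced[r - h:r + h + 1, c - h:c + h + 1].astype(float).ravel()
    return np.concatenate([G, [G.mean()], CG])

# toy check on a tiny 5x5 image; feature length is 9 + 1 + 9 = 19 for n = 3
img = np.arange(25).reshape(5, 5)
enh = 2 * img
f = feature_vector(img, enh, si=(2, 2), n=3)
```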

Labeled IVUS image based on sparse representation classification on the local appearance model

We treat the media-adventitia border detection problem as a classification problem by dividing an image into overlapping patches and classifying each patch into one of two classes: high gray-level pixels (calcification and normal adventitia textures) or low gray-level ones (blood, lumen and media textures). Accordingly, the initial overlapping patches are first chosen randomly from the mentioned regions; the size of each patch is 3×3 pixels. The feature vectors described above are then extracted from each patch, and two initial dictionaries are trained on the image patches by the K-SVD method [27] via Eq. (2). Finally, IVUS image segmentation is performed as a patch-by-patch classification based on the two sets of trained dictionaries: each IVUS image is segmented by (1) extracting the feature vector x of the patch around each pixel by Eq. (5), (2) transforming the extracted feature vector to the residual vector r by Eq. (3), and (3) assigning a label to the considered patch, label ∈ {1,2}, by Eq. (4). Figure 2.b shows the results of this first segmentation step for four IVUS images.

Main sparse edge pattern construction and external force definition based on dynamic directional convolution vector field

The directional gradient concept is important in image segmentation for finding negative or positive boundaries. A boundary with a positive step edge along its outward normal is considered a positive boundary, and one with a negative step edge a negative boundary, in both the horizontal and vertical directions [20]. Accordingly, to build the main edge pattern and the external force for the active contour segmentation model, four steps are executed: (1) directional edge map construction, (2) DDCVF field computation, (3) external force definition, and (4) main sparse edge pattern construction.

Directional edge map construction

To construct a directional edge map, a gradient operator is applied to the image filtered by a 2D Gaussian:

g(x,y)=∇(G_σ (x,y)*I(x,y) )=(g_x (x,y),g_y (x,y) ). (6)

where I(x,y) is the original image and G_σ(x,y) is a 2D Gaussian filter with standard deviation σ; g_x(x,y) and g_y(x,y) are the horizontal and vertical gradients of the filtered image [20].

All directions of the image gradient must be considered when building the directional edge map, because the location of the snake is unknown before the initialization process. Accordingly, for a positive boundary we have:

f_x^+(x,y) = max(g_x(x,y), 0);  f_x^-(x,y) = -min(g_x(x,y), 0).   (7)

f_y^+(x,y) = max(g_y(x,y), 0);  f_y^-(x,y) = -min(g_y(x,y), 0).   (8)

and for a negative boundary:

f_x^+(x,y) = -min(g_x(x,y), 0);  f_x^-(x,y) = max(g_x(x,y), 0).   (9)

f_y^+(x,y) = -min(g_y(x,y), 0);  f_y^-(x,y) = max(g_y(x,y), 0).   (10)

where f_x^+(x,y), f_x^-(x,y), f_y^+(x,y) and f_y^-(x,y) are the edge gradients in the +x, -x, +y and -y directions, respectively. Consequently, the directional edge map is defined as [20]:

f(x,y) = [f_x^+(x,y), f_x^-(x,y), f_y^+(x,y), f_y^-(x,y)].   (11)
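Eqs. (6)-(8) and (11) can be sketched as follows (a minimal version using a Gaussian-smoothed centered-difference gradient, with the positive-boundary convention; the test image and σ are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_edge_map(I, sigma=1.0):
    """Eq. (11), positive-boundary case: returns [f_x+, f_x-, f_y+, f_y-]."""
    smoothed = gaussian_filter(I.astype(float), sigma)   # G_sigma * I, Eq. (6)
    gy, gx = np.gradient(smoothed)                       # axis 0 -> y, axis 1 -> x
    fx_pos = np.maximum(gx, 0.0)                         # Eq. (7)
    fx_neg = -np.minimum(gx, 0.0)
    fy_pos = np.maximum(gy, 0.0)                         # Eq. (8)
    fy_neg = -np.minimum(gy, 0.0)
    return fx_pos, fx_neg, fy_pos, fy_neg

# toy check: a bright square on a dark background; the left edge produces
# a positive x gradient and the right edge a negative one
I = np.zeros((32, 32))
I[8:24, 8:24] = 1.0
fx_pos, fx_neg, fy_pos, fy_neg = directional_edge_map(I, sigma=1.0)
```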

DDCVF field computation

The dynamic directional gradient vector flow (DDGVF) [30] and the dynamic directional convolution vector field (DDCVF) are two commonly used external fields for active contour segmentation models. Because the DDCVF is more robust to initialization and disturbances and has a larger capture range, it is used here to build the sparse edge pattern and the external field for the IVUS image segmentation process [20].

Accordingly, we define a vector field kernel K(x,y) = (u_k(x,y), v_k(x,y)), where u_k(x,y) and v_k(x,y) are its horizontal and vertical components, respectively:

K(x,y) = m(x,y)·n(x,y)  with  m(x,y) = (r + ε)^(-γ)  and  n(x,y) = [-x/r, -y/r].   (12)

where m(x,y) is the magnitude of the vector field kernel at position (x,y) and n(x,y) is the unit vector pointing toward the kernel origin. The parameter ε is a small positive value that avoids division by zero at the origin (0,0), the positive parameter γ controls the decreasing slope of the kernel, and r = √(x^2 + y^2) is the distance of each point from the origin [20].

Now, the DDCVF field u(x,y) = [u^+(x,y), u^-(x,y), v^+(x,y), v^-(x,y)] is calculated by convolving the vector field kernel with the directional edge map [20]:

u^+(x,y) = f_x^+(x,y) * u_k(x,y)  and  u^-(x,y) = f_x^-(x,y) * u_k(x,y).   (13)

v^+(x,y) = f_y^+(x,y) * v_k(x,y)  and  v^-(x,y) = f_y^-(x,y) * v_k(x,y).   (14)
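Eqs. (12)-(14) can be sketched as follows (a minimal version; the kernel size, ε and γ values are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import convolve

def vector_field_kernel(size=15, gamma=1.5, eps=1e-8):
    """Eq. (12): magnitude (r+eps)^(-gamma) times unit vectors toward the origin."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r = np.sqrt(x**2 + y**2)
    r[half, half] = 1.0                      # avoid 0/0 at the origin
    m = (r + eps) ** (-gamma)
    uk, vk = m * (-x / r), m * (-y / r)      # n(x,y) = [-x/r, -y/r]
    uk[half, half] = vk[half, half] = 0.0    # the origin exerts no pull on itself
    return uk, vk

def ddcvf(fx_pos, fx_neg, fy_pos, fy_neg, uk, vk):
    """Eqs. (13)-(14): convolve each directional edge map with the kernel."""
    return (convolve(fx_pos, uk), convolve(fx_neg, uk),
            convolve(fy_pos, vk), convolve(fy_neg, vk))

# toy check: a single positive x-edge pixel pulls the field toward itself,
# so the field points right (+x) on its left and left (-x) on its right
fx_pos = np.zeros((21, 21)); fx_pos[10, 10] = 1.0
zeros = np.zeros_like(fx_pos)
uk, vk = vector_field_kernel(size=15)
u_pos, u_neg, v_pos, v_neg = ddcvf(fx_pos, zeros, zeros, zeros, uk, vk)
```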

External force definition

To construct the external force, we take θ to be the normal direction of the contour at a given pixel. Since cos θ is the x component of the normal vector and sin θ is the y component, the horizontal and vertical external force components, F_ext = [F_x, F_y], are defined as [20]:

F_x = u^+(x,y)·max(cos θ, 0) - u^-(x,y)·min(cos θ, 0).   (15)

F_y = v^+(x,y)·max(sin θ, 0) - v^-(x,y)·min(sin θ, 0).   (16)
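Eqs. (15)-(16) can be sketched for a single contour point (the field components u⁺, u⁻, v⁺, v⁻ are assumed to be already sampled at that point's pixel):

```python
import numpy as np

def external_force(u_pos, u_neg, v_pos, v_neg, theta):
    """Eqs. (15)-(16): select the field components that push the contour
    along its normal direction theta at one pixel."""
    Fx = u_pos * max(np.cos(theta), 0.0) - u_neg * min(np.cos(theta), 0.0)
    Fy = v_pos * max(np.sin(theta), 0.0) - v_neg * min(np.sin(theta), 0.0)
    return Fx, Fy

# toy check: a normal pointing in +x (theta = 0) uses only the u+ component
Fx, Fy = external_force(u_pos=2.0, u_neg=5.0, v_pos=1.0, v_neg=1.0, theta=0.0)
```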

Main sparse edge pattern construction

To build the sparse initial binary image used for initial contour extraction in the proposed automatic segmentation process, we use the negative-boundary edge map. The sparse edge map pattern, S, is then defined as:

S = F_x·N_x + F_y·N_y,  where  N_x = g_x(x,y)/Norm_g,  N_y = g_y(x,y)/Norm_g,   (17)

and Norm_g = √(g_x^2 + g_y^2). Figure 3.a shows the main sparse edge map obtained from the DDCVF field for the corresponding images in Fig 2.

Convert the initial binary image to polar coordinates

Now, to build the initial binary image, we multiply the segmented image obtained by the sparse-representation classification framework with the sparse edge pattern obtained from the DDCVF. This strategy results in more accurate initial contour detection, especially in inhomogeneous lumen and media regions and in calcification regions without shadows. Finally, the initial binary image is converted to polar coordinates for use in the automatic initial contour extraction of the active-contour-based segmentation. Figure 3.b shows the initial binary image and its thresholded version in polar coordinates (with a threshold of 150) for the corresponding images in Fig 2.
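The Cartesian-to-polar conversion can be sketched as follows (a minimal nearest-neighbor resampling; the center and the sampling resolution are illustrative assumptions, and a production version would interpolate):

```python
import numpy as np

def to_polar(image, center, n_radii, n_angles):
    """Resample image on a (radius, angle) grid around center, nearest-neighbor.
    Rows index radius, columns index angle, matching the column scan used later."""
    h, w = image.shape
    radii = np.linspace(0, min(h, w) / 2 - 1, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    rows = np.clip(np.round(center[0] + rr * np.sin(aa)).astype(int), 0, h - 1)
    cols = np.clip(np.round(center[1] + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[rows, cols]          # shape: (n_radii, n_angles)

# toy check: a filled disk becomes a horizontal band in polar coordinates
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2] = 1.0
polar = to_polar(img, center=(32, 32), n_radii=31, n_angles=90)
```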

Automatic initial contour detection

Extract the initial contour

To extract the initial contour in polar coordinates, pixels are scanned from the top of the initial binary image. The first non-zero pixel of each column is chosen as a candidate initial contour pixel. If no pixel is found for a column, the pixel chosen for a neighboring column is repeated. Then, the obtained initial contour is smoothed by the robust smoothing algorithm [21].
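The column scan above can be sketched as (the fill rule for empty columns, nearest filled neighbor to the left with wrap-around, is one plausible reading of "the chosen pixel of its neighborhood is repeated"):

```python
import numpy as np

def initial_contour(binary_polar):
    """For each column (angle), take the row of the first non-zero pixel from
    the top; empty columns repeat the value of a neighboring filled column."""
    n_rows, n_cols = binary_polar.shape
    contour = np.full(n_cols, -1, dtype=int)
    for c in range(n_cols):
        hits = np.nonzero(binary_polar[:, c])[0]
        if hits.size:
            contour[c] = hits[0]
    # fill empty columns from the nearest filled column to the left,
    # wrapping around so the closed contour stays consistent
    filled = np.nonzero(contour >= 0)[0]
    for c in np.nonzero(contour < 0)[0]:
        left = filled[filled < c]
        contour[c] = contour[left[-1]] if left.size else contour[filled[-1]]
    return contour

# toy check on a 6x4 binary image; column 2 has no non-zero pixel
B = np.zeros((6, 4), dtype=int)
B[2, 0] = B[3, 1] = B[1, 3] = 1
contour = initial_contour(B)
```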

The robust smoothing algorithm

This fast algorithm is based on a robust version of a discretized smoothing-spline method that deals with missing and outlying data. Consider the simplest one-dimensional model for a noisy signal y:

y = ŷ + ε.   (18)

where ŷ is the estimated smoothed version of y and ε is zero-mean Gaussian noise with unknown variance. The classical smoothing approach, based on penalized least-squares regression, is applied to solve the smoothing problem of Eq. (18). The final minimization problem is formulated as [21]:

F(ŷ) = ‖ŷ - y‖^2 + s·P(ŷ).   (19)

where ‖·‖ is the Euclidean norm. The penalty term P(·) reflects the roughness of the smoothed data, and the positive scalar parameter s controls the degree of smoothing.

By expressing the penalty term as a second-order divided difference of the desired signal ŷ, the minimization problem of Eq. (19) becomes a linear system, which can be solved by a Cholesky-decomposition-based algorithm [21]. For automatic estimation of the smoothing parameter s, the generalized cross-validation (GCV) method [31] is used, so that over-smoothing or under-smoothing is avoided as much as possible [21].
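With a second-order difference penalty, minimizing Eq. (19) reduces to solving (I + s·D₂ᵀD₂)·ŷ = y. This can be sketched as follows (a plain Whittaker-style smoother, without the robustness weighting, DCT acceleration, or GCV of [21]; the dense solve stands in for the Cholesky step):

```python
import numpy as np

def smooth(y, s):
    """Minimize ||y_hat - y||^2 + s * ||D2 y_hat||^2, with D2 the second-order
    difference operator, via the normal equations (I + s * D2^T D2) y_hat = y."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)       # (n-2) x n second-difference matrix
    A = np.eye(n) + s * D2.T @ D2
    return np.linalg.solve(A, y)

# toy check: smoothing a noisy ramp reduces roughness while tracking the trend
# (a linear trend lies in the null space of D2, so it passes through unchanged)
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
y = t + 0.05 * rng.standard_normal(200)
y_hat = smooth(y, s=100.0)
```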

Now, to smooth the obtained initial contour with the described robust smoothing algorithm, we follow Algorithm 1.

Algorithm 1:

Automatic estimation of the smoothing parameter s.

Refine the estimated parameter: if s is less than 0.1, add a bias of 0.1 to it. Then multiply the obtained smoothing parameter by 10^4 to provide enough smoothing for our application.

Finally, the robust smoothing algorithm [21] is applied to obtain the smoothed initial contour.

Figure 4.a shows the smoothed initial contour of the corresponding images in Fig 2.

Refine the initial contour

Initial contour propagation based on the active contour segmentation method can fail in images with strong inhomogeneities in the media-intima region and in non-shadow calcification regions. Hence, refinement of the initial contour is essential in these regions. Since such regions lie before the real media-adventitia border, the maxima of the initial contour indicate their locations. Accordingly, Algorithm 2 is proposed for initial contour refinement.

Algorithm 2:

Detect the extremum segments of the initial contour: the segments where the gradient of the initial contour in the x direction is zero.

Select the maxima among the detected segments: before the start of a maximum segment the x-direction gradient is negative (before the first zero value), named h1, and after its end (after the last zero value) the x-direction gradient is positive, named h2.

Find the start and end of the candidate regions needing correction around the maxima: the first pixels whose x-direction gradient is opposite to that at the start and end of the detected maximum segment (h1 and h2, respectively) are chosen as the pixels of interest, named j1 and j2. To avoid removing too much of the media-adventitia border detected as maxima, the search for these pixels is limited to the start_region and end_region pixels:

start_region = j1 - confidence_value,  end_region = j2 + confidence_value.   (20)

confidence_value = 0.2 × (perimeter of the initial contour).   (21)

Hence, if no pixels of interest with an opposite x-direction gradient are found, the start_region and end_region pixels are taken as the start and end pixels j1 and j2.

Find the candidate interval in the y direction: the maximum interval below the selected maxima is extracted from the initial binary image obtained in section 2.3.1. For the implementation, two main searches run from the mean pixel between h1 and h2, named h, toward the j1 and j2 pixels. In each main search, we count the number of zero-gray-value pixels of each row below pixel h and above pixel j1 or j2; if a non-zero value is reached in the initial binary image before the j1 or j2 pixel, counting stops for that row. Finally, the rows with the maximum number of zero-value pixels are chosen and named i1 and i2.

Eliminate the incorrect part of the initial contour: the pixels of the initial contour in the interval [j1, j2] are removed.

Refine the initial contour in the deleted regions: for each main search in step 4, the first non-zero pixel in the initial binary image, from row i1 or i2 up toward pixel j1 or j2, is taken as a new pixel of the initial contour.

Smooth the corrected initial contour: the same smoothing method as in Algorithm 1 is used.

Because some IVUS images contain multiple calcifications, we apply Algorithm 2 twice to obtain the final version of the initial contour. This tactic gives appropriate results both in inhomogeneous media-intima regions and in non-shadow calcification regions. The final initial contour is shown in Figure 4.b.

Active contour segmentation technique

For the active contour segmentation, we first convert the final version of the initial contour to Cartesian coordinates, as shown in Figure 5.a. Next, contour propagation is performed according to the active contour model introduced in [30, 32, 33]. The DDCVF field is used as the external force of the model instead of the DDGVF. Using a dynamic directional field is indispensable because not all parts of the initial contour necessarily fall inside the media-intima region; some may lie in the adventitia region. Figure 5.b shows the active contour segmentation results for the corresponding images in Fig 2.

Contour refinement

The active contour segmentation results based on the DDCVF can show leakage in shadow regions and false borders in heavily calcified regions in some IVUS images. Hence, we refine the segmentation results in these regions according to Algorithm 3.

Algorithm 3:

Cut the original image: since calcification detection based on thresholding algorithms applied to the full original image makes many mistakes in our image database, we use a cropped version of the original image. The image is cropped at the maximum values of the extracted border coordinates in the ±x and ±y directions, as shown in Figure 6.a.

Calculate the pixels characterized as calcifications: since calcification regions appear as bright patterns in IVUS images, they can be extracted by thresholding. The pixels with the maximum and minimum gray values of the cropped image are found, named maximum_gray and minimum_gray, respectively. An automated threshold is then defined as:

ratio_value = maximum_gray / minimum_gray;  Thresh = (1 - ratio_value) - 0.2.   (20)

The obtained threshold recognizes the calcification regions well, but other high-gray-value pixels that are not calcified are detected too. The candidate pixels are shown in Figure 6.b for an IVUS image.

Calculate the circular regions used in the pruning process: to extract only the calcified regions from the candidate pixels obtained in step 2, we define three regions. First, we assume a circular shape model for the vessel. Then the center, minimum radius, maximum radius and mean radius of the contour extracted in section 2.4 are calculated as:

Center = mean(s_i), i = 1, …, m;  radius_i = ‖s_i - Center‖_2.   (21)

max_radius = max(radius),  min_radius = min(radius).   (22)

mean_radius = (max_radius + min_radius)/2.   (23)

where s_i is the location of the i-th pixel, m is the number of pixels forming the extracted contour, and ‖·‖_2 is the Euclidean norm.

Finally, the three circular regions are defined, all centered at the Center calculated in Eq. (21). The first circle has radius mean_radius and is named circle_mean; the second has radius max_radius and is named circle_max; the third, with radius 1.5 × max_radius, is named circle_great. The defined regions are shown in Figure 7.a.
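Eqs. (21)-(23) and the three circles can be sketched as follows (the ellipse test data is an illustrative assumption):

```python
import numpy as np

def circle_regions(contour_pixels):
    """Eqs. (21)-(23): center and the circle_mean / circle_max / circle_great
    radii from the extracted contour pixels (each row is a (y, x) location)."""
    pts = np.asarray(contour_pixels, dtype=float)
    center = pts.mean(axis=0)                          # Eq. (21)
    radius = np.linalg.norm(pts - center, axis=1)
    max_radius = radius.max()                          # Eq. (22)
    min_radius = radius.min()
    mean_radius = (max_radius + min_radius) / 2.0      # Eq. (23)
    return center, mean_radius, max_radius, 1.5 * max_radius

# toy check: points on an ellipse with semi-axes 2 and 4 around (10, 10)
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.stack([10 + 2 * np.sin(theta), 10 + 4 * np.cos(theta)], axis=1)
center, circle_mean, circle_max, circle_great = circle_regions(pts)
```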

Convert to polar coordinates: since the pruning process is easier to implement in polar coordinates, the extracted contour and all circular regions are converted to this domain, as shown in Figure 7.b.

Pruning based on location: as mentioned before, major calcification textures in the media region can cause border-detection mistakes. Accordingly, we developed a twofold location-based pruning process. In the first step, all candidate calcified pixels in the adventitia region, i.e. below circle_max, are deleted, as shown in Figure 8.a. For the second step, we define the maximum width of a calcification, based on the fact that calcification regions appear in the media-intima region, not the lumen:

max_width = max_radius / 2.   (24)

Then, we extract the first non-zero pixel of each column of the initial binary image, scanning from the bottom to the top, named end_pixel; this concept is shown in Figure 9.a. Each candidate pixel whose distance in the y direction from end_pixel exceeds max_width is also deleted, because its width exceeds the reasonable max_width defined in Eq. (24). Figure 8.b shows the remaining pixels.

Pruning based on shadow existence: major calcification regions are followed by a shadow below them. This shadow is characterized by zero-value pixels in the extracted initial binary image. Therefore, we calculate the median value below each remaining pixel within the three defined circular regions. If any of the three median values is non-zero, the candidate calcified pixel is also deleted. Figure 8.c shows the remaining candidate pixels.

Detect the maximum interval around the detected calcified pixels: a strategy similar to step 4 of Algorithm 2 is used. We find the maximum interval of zero-gray-value pixels below the candidate pixels. For the implementation, two searches are performed for each row, from the candidate calcified pixel toward its corresponding circle_max pixel, counting zero-value pixels in the ±x directions; counting stops on reaching a non-zero value in the initial binary image or a pixel belonging to the initial contour. This yields columns d1 and d2 as the start and end of the intended interval. The maximum intended interval is then defined as the region between s1 and s2, where:

extend_size = (d2 - d1)/10,  s1 = d1 - extend_size,  s2 = d2 + extend_size.   (24)

The interval is extended because the calcification region is usually longer than its corresponding shadow; the extend_size was obtained by trial and error.

Eliminate the contour extracted by the active contour segmentation process in the calcification regions: we eliminate the contour within each maximum interval calculated around each candidate calcified pixel.

Refine the contour in the deleted regions: in the deleted regions, we use the pixels of circle_max for contour refinement. Using circle_max instead of interpolation [5] prevents failures in contour refinement: if the complicated patterns of IVUS images cause mistakes in the maximum-interval extraction, interpolation produces degenerate results. The circle_max substitution can also err in major calcification regions, but we accept a small refinement error below calcifications in exchange for avoiding the degenerate results of interpolation.

If the first or the last column of the contour image is refined, the refinement process is also applied to the uncorrected last or first part of the contour, respectively. This is done because some calcification regions are split into two parts in polar coordinates, one part at the beginning and the rest at the end.

Refine the contour in the shadow regions: if any column of the initial binary image contains only zero elements between the circle_mean and circle_great locations, we replace the extracted contour pixels in that column with the pixels of circle_max. This process is shown in Figure 11.j.

Convert to Cartesian coordinates: the corrected contour in polar coordinates is converted to Cartesian coordinates.

Light smoothing: the robust smoothing algorithm [21] is applied to the corrected contour, with the estimated smoothing parameter multiplied by 10^2 for light smoothing, to obtain the final contour.

Figure 12 shows the extracted media-adventitia border by the proposed segmentation system for the corresponding images in Fig 2.
