Development, Factor Structure and Reliability of the Shared Instructional Leadership Scale (SILS)

Abstract

Shared instructional leadership may support informed decision making on matters of curriculum, instruction, and assessment. Given the various organizational processes and outcomes associated with this construct, it is important to be able to measure its existence in schools. In this paper, we propose the shared instructional leadership scale (SILS) and report its reliability data. We conducted four exploratory factor analyses, respectively based on four sub-samples generated from a sample of teachers (n = 422). The sub-samples included one individual-level sample and three school-level samples. The factor structure remained stable across the four sub-samples. We conducted a confirmatory factor analysis on a second school-level sample (n = 103). Its results confirmed the unidimensional structure. The paper concludes with a discussion of the measure’s applicability and potential directions for further validation.

Keywords: Instructional leadership, shared instructional leadership, factor analysis.

1    Introduction

Instructional leadership entails the principal's monitoring and direction of teachers' instruction, development of a school vision, and fostering of a school climate that supports teaching and learning [1], [2] and improvement [3]. Extending instructional leadership to recognize teachers' contributions to school improvement, shared instructional leadership emphasizes the collaborative undertaking of principals and teachers to improve curriculum, instruction, and assessment [4]. Involving school staff in school leadership responsibilities may facilitate informed decision making on instructional matters [5]. It may also increase teachers' efficacy, which is linked to greater teaching effectiveness, high-quality instruction, and student achievement [6].

Shared instructional leadership requires a school climate that encourages collaboration and sufficient support for leadership participants to lend their respective contributions. Such support includes professional development for staff’s capacity to exercise leadership and full access to information about the school’s current condition and future direction. Insufficient support or an autocratic culture could suppress the staff’s effort to undertake and coordinate leadership responsibilities [7], [8].

Although a rich collection of empirical work about the forms of instructional leadership and shared instructional leadership (SIL) exists, there is a lack of measures specifically assessing SIL. A measure of SIL can facilitate future research on its relationship to a host of school processes and outcomes. We developed the shared instructional leadership scale (SILS) to measure this concept, with a focus on collaboration between principals, teachers, and school staff members. The scale highlights key instructional leadership practices, including developing and communicating an instructional vision, securing resources to ensure high-quality instruction, decision-making processes, examining data, developing community partnerships, and continuous instructional improvement. By presenting the SILS, we aim to provide researchers and educators with an instrument to examine schools' shared instructional leadership.

2    Theoretical Framework

This section discusses research and theory on instructional leadership and its applicability to the construct of shared instructional leadership.

2.1 Instructional Leadership

Studies of instructional leadership originated in the early 1980s with research on schools identified as effective at fostering instructional quality and student achievement. The conventional conceptualization of instructional leadership has emphasized the role of the principal as an inspector, director, and supervisor of educational issues [1], [2], [3]. While critics contend that instructional leadership is archaic and neglects teachers' commitment to and capacity for development, abundant studies suggest the concept remains useful for examining its relation to other conceptions of leadership and is flexible enough to accommodate teacher and staff collaboration. For example, when principals involve teachers and staff in instructional decision-making processes and encourage discussions about instructional issues, instructional leadership has been shown to increase teacher morale and promote teachers' capacity to improve instruction, thus leading to better educational outcomes [9], [10]. Existing measures of school instructional leadership include the Principal Instructional Management Rating Scale [1], state-level measures of principal leadership [11], [12], and the Vanderbilt Assessment of Leadership in Education [13].

2.2 Shared Instructional Leadership

In response to criticism that an exclusive focus on principal authority ignores shared contributions of others, scholars proposed shared instructional leadership as a revised conception of instructional leadership [4], [14]. This revision promotes staff professional growth, the contribution of staff expertise to leadership, and efficacy through empowering staff to participate in decision-making processes [9], [15], [16], [17], [14]. This conception of leadership also is not confined to a single role or position; rather it flows through networks in the school community and interactions among members who deploy personal resources [18], [19].

Despite the importance of shared instructional leadership to practice and research, there is a lack of validated measures assessing it. Several researchers, however, have constructed indices of shared instructional leadership to examine its relationship to other variables such as transformational leadership, teacher collaboration, and student achievement [4], [20], [21]. Drawing on conceptual descriptions of shared instructional leadership offered in the literature, as well as considering limitations of items used across existing indices, our shared instructional leadership scale (SILS) aims to capture key aspects of shared instructional leadership such as collaboration, development and communication of an instructional vision, instructional improvement, and community engagement. Consequently, the purpose of our study was two-fold: (1) to develop the SILS, and (2) to evaluate the validity of the factor structure and the reliability of the SILS.

3   Method

3.1    Data Collection and Sample

For this study, we collected data from two samples: the first was used for exploratory factor analysis (EFA), and the second for confirmatory factor analysis (CFA). A link to the online survey was distributed by email to 6,200 principals, teachers, and other school staff members, such as intervention specialists and program directors, at 150 public secondary schools in a Midwestern state. We emailed two reminders asking recruited individuals to respond to the survey. We received 422 valid responses from teachers, two responses from principals, and none from other school staff members. Since two responses from principals and other staff members are insufficient for any inferential statistical analysis, only the teachers' responses were used in this study. To conduct the EFA, we generated four sub-samples: (a) an individual-level sample including all teacher responses (n = 422); (b) a school-level sample including the responses from schools with a response rate above 10% of their total teaching staff (n = 60); (c) a school-level sample including the responses from schools with a response rate below 10% of their total teaching staff (n = 57); and (d) a school-level sample including all schools where at least one teacher responded (n = 117). The school-level responses were generated by aggregating the teachers' responses. We then conducted an EFA on each sub-sample.
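The sub-sampling by school response rate can be sketched as follows. This is a hypothetical illustration: the data frame and column names are invented stand-ins, not the study's actual data.

```python
import pandas as pd

# Hypothetical teacher-level responses; each row is one respondent.
responses = pd.DataFrame({
    "school_id": ["A", "A", "B", "C"],
    "staff_size": [10, 50, 50, 20],  # total teaching staff at the school
    "item1": [5, 4, 6, 3],           # a SILS item score
})
# staff_size is constant within a school; fix row 2 for school A.
responses.loc[1, "staff_size"] = 10

# Response rate per school = number of respondents / total teaching staff.
per_school = responses.groupby("school_id").agg(
    n_responses=("item1", "size"),
    staff_size=("staff_size", "first"),
)
per_school["response_rate"] = per_school["n_responses"] / per_school["staff_size"]

above_10 = per_school[per_school["response_rate"] > 0.10]   # analogue of sub-sample (b)
below_10 = per_school[per_school["response_rate"] <= 0.10]  # analogue of sub-sample (c)
```

Splitting on the 10% threshold this way reproduces the logic of sub-samples (b) and (c); sub-sample (d) would simply be `per_school` itself.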

The second version of the survey was distributed by email to 5,269 teachers at 141 public secondary schools in a Midwestern state. We received 658 valid teacher responses. For the confirmatory factor analysis, we used a school-level sample including the responses from schools with a response rate above 10% of their total teaching staff (n = 103). The CFA sample comprises 587 teachers from 103 public secondary schools. Once again, the school-level responses were generated by aggregating the teachers' responses by school.

3.2   Instrument Development

We began instrument development by interrogating the instructional and shared instructional leadership literature and seeking feedback from educational researchers with K-12 administration professional experience [22]. The initial version of the survey included seven questions about shared leadership and used a 6-point Likert scale (1-strongly disagree, 2-disagree, 3-somewhat disagree, 4-somewhat agree, 5-agree, 6-strongly agree). The instrument is displayed in Table 1.

Table 1. Retained Items of the Shared Instructional Leadership Scale

Shared Instructional Leadership Scale

Instructions

Please respond to each of the items on this questionnaire by selecting the response that best reflects the practices at your school. Read each statement carefully. Then select the response that indicates your level of agreement about whether each specific behavior was demonstrated at your school during the past school year. For each behavior, a 6 represents “Strongly Agree” and a 1 represents “Strongly Disagree.”

The principal, teachers, and other staff work collaboratively to …

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Somewhat Disagree, 4 = Somewhat Agree, 5 = Agree, 6 = Strongly Agree

1. develop an instructional vision.

2. communicate an instructional vision.

3. identify potential community partnerships that align with the school’s goals.

4. ensure resources for high-quality instruction.

5. make instructional decisions.

6. examine student achievement data.

7. improve the school’s instructional program.

4    Results

This section discusses the exploratory factor analyses (EFA), the confirmatory factor analysis (CFA), and the reliability tests on this survey.

4.1    Exploratory Factor Analysis

This sub-section discusses the EFA results and reliability tests for each sub-sample based on the first sample.

4.1.1    Sample I. Individual Level Sample

We used SAS 9.4 to examine all the responses, revealing a small percentage of missing item data; only item 3 had missing values. Among the total 422 respondents, 99.76% had no missing data. We used the expectation maximization algorithm and the Markov chain Monte Carlo method to impute missing values for the remaining respondents. For all the items, the means ranged from 4.09 to 4.62, standard deviations from 1.25 to 1.38, and skewness from -1.06 to -.51.

We conducted an EFA based on a polychoric correlation matrix. We used EFA rather than principal component analysis (PCA) because PCA is primarily a data-reduction technique, whereas EFA models observed responses as manifestations of an underlying latent construct. The practices measured by the questionnaire are not discrete events; they are best characterized as manifestations of a latent structure in the respondents’ perceptions. For example, the item “the principal, teachers, and other staff work collaboratively to develop an instructional vision” does not indicate a specific event. Based on these considerations, an EFA using polychoric correlations is appropriate for the data analysis.
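For illustration, a one-factor extraction can be sketched in Python. This is a simplified sketch, not the authors' SAS procedure: it uses ordinary product-moment correlations rather than polychoric ones, and iterative principal-axis factoring is one common EFA extraction method, not necessarily the one the authors used.

```python
import numpy as np

def one_factor_loadings(R, n_iter=50):
    # Iterative principal-axis factoring for a single factor:
    # replace the diagonal of R with communality estimates, take the
    # leading eigenvector, and repeat until the loadings stabilize.
    R = np.asarray(R, dtype=float)
    # Initial communalities: squared multiple correlations.
    comm = 1 - 1 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, comm)
        vals, vecs = np.linalg.eigh(Rr)
        loadings = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
        comm = loadings ** 2
    return np.abs(loadings)

# A uniform inter-item correlation of .80 (comparable in size to the
# correlations reported in this study) implies loadings of sqrt(.80).
R = np.full((7, 7), 0.8)
np.fill_diagonal(R, 1.0)
loadings = one_factor_loadings(R)
```

With a single common factor driving all items, each item's loading is the square root of the shared inter-item correlation, so the sketch recovers loadings near .89.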

We calculated polychoric correlations instead of Spearman rank or Pearson correlations. Given that an ordinal scale was used to measure the seven items, it was not appropriate to use either the raw data or a Pearson correlation matrix to perform the EFA. The choice between Spearman rank and polychoric correlations hinges on whether a multivariate normal distribution underlies the data. Since a multivariate normal distribution is likely to underlie the Likert agreement scale (from “strongly disagree” to “strongly agree”), we chose the polychoric correlation matrix [23]. The correlations between items were all significant (p < .001), ranging from .57 to .94.

We used SAS 9.4 to conduct the parallel analysis (PA) and the EFA. Before conducting the EFA, we conducted a PA to determine the number of factors to retain. Commonly used methods of determining the number of factors, such as the Kaiser criterion and the scree test, are accurate only 22% and 57% of the time, respectively, according to simulation studies [24]. PA retains the factors whose eigenvalues are greater than those derived from randomly generated data with the same number of variables and the same sample size. The nature of the data (i.e., continuous versus discrete) and distributional characteristics such as the mean and variance of each variable are also taken into account [25]. To avoid the known bias of PA, only factors with eigenvalues greater than the 95th percentile of those from the simulated data were retained.
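Horn's parallel analysis with the 95th-percentile criterion can be sketched as follows. This is a minimal illustration, not the authors' SAS code, and the seven-item data set at the bottom is simulated for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(data, n_sims=100, percentile=95):
    # Compare each observed eigenvalue with the same-ranked eigenvalue
    # from random data of identical shape; retain factors that exceed
    # the chosen percentile of the simulated distribution.
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(sim_eigs, percentile, axis=0)
    return int(np.sum(obs_eigs > threshold))

# Simulated data: one strong common factor drives all seven items.
factor = rng.standard_normal((400, 1))
data = factor + 0.5 * rng.standard_normal((400, 7))
n_factors = parallel_analysis(data)
```

Because the simulated items share a single dominant factor, the first observed eigenvalue far exceeds the random-data threshold while the remaining eigenvalues fall below it, so the analysis recommends retaining one factor, mirroring the result reported below.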

The results of the PA recommended retaining one factor, so we conducted an EFA with a one-factor solution. We specified Varimax as the rotation method, but no rotation was applied because a one-factor solution requires none. All items loaded above .3 on the factor and were retained [26]. Table 2 reports the factor structure and the eigenvalue for each item. The factor accounted for 78.80% of the variance. Cronbach’s alpha was .95 and the Guttman split-half coefficient was .91, indicating good internal consistency of the items in the scale.
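The two reliability statistics reported here can be computed as sketched below. This is not the authors' code, and the odd/even item split used for the Guttman coefficient is an assumption, since the paper does not state how the items were split.

```python
import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def guttman_split_half(items):
    # Guttman's split-half formula on an (assumed) odd/even item split:
    # 2 * (1 - (var(half1) + var(half2)) / var(total)).
    items = np.asarray(items, dtype=float)
    h1 = items[:, ::2].sum(axis=1)
    h2 = items[:, 1::2].sum(axis=1)
    total = items.sum(axis=1)
    return 2 * (1 - (h1.var(ddof=1) + h2.var(ddof=1)) / total.var(ddof=1))
```

As a sanity check, a matrix of perfectly correlated items yields an alpha of exactly 1.0; real item sets, such as the seven SILS items, fall below that ceiling.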

Table 2. Factor Pattern and Eigenvalues of Sample I

Item    Factor 1    Eigenvalue
SIL7    0.94        -.10
SIL1    0.94        5.51
SIL2    0.94        .16
SIL5    0.91        -.04
SIL4    0.90        .01
SIL3    0.79        .03
SIL6    0.79        -.06

4.1.2    Sample II. Schools with a Response Rate above 10% of Total Staff

There were no missing values in this sample. We aggregated the responses from school members for each school with a response rate above 10% of its total teaching staff, generating a sample of schools (n = 60). Aggregation rendered the ordinal data continuous, so we calculated Pearson correlations as the basis of the EFA. For all the items, the means ranged from 4.15 to 4.73, standard deviations from .81 to .93, and skewness from -1.06 to -.51. The correlations between items were all significant in a two-tailed test (p < .01), ranging from .62 to .96.
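The aggregation step can be illustrated as follows. This is a hypothetical sketch with made-up data; averaging the Likert responses within each school is what renders the school-level data continuous.

```python
import pandas as pd

# Hypothetical individual Likert responses on one SILS item.
df = pd.DataFrame({
    "school_id": ["A", "A", "B", "B", "B"],
    "SIL1": [4, 5, 3, 6, 5],
})

# School-level score = mean of the teachers' responses in that school.
school_means = df.groupby("school_id")["SIL1"].mean()
```

The integer item scores become fractional school means (e.g. 4.5), which is why Pearson correlations become defensible at the school level.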

A one-factor solution was retained according to the results of the PA. We conducted an EFA using the same process as for Sample I. All items loaded above .3 on the factor and were therefore retained. Table 3 reports the factor structure and the eigenvalue for each item. The factor accounted for 85.51% of the variance. Cronbach’s alpha was .97 and the Guttman split-half coefficient was .94, indicating good internal consistency of the items in the scale.

Table 3. Factor Pattern and Eigenvalues of Sample II

Item    Factor 1    Eigenvalue
SIL2    .97         .13
SIL7    .96         -.09
SIL4    .96         -.01
SIL1    .96         5.99
SIL5    .96         -.02
SIL3    .86         .02
SIL6    .79         -.03

4.1.3    Sample III. Schools with a Response Rate Below 10% of Total Staff

We prepared the data using the same process as for Sample II. Only question 3 had missing values. Among the total 101 respondents, 99.01% had no missing data. We used multiple imputation to process the missing values. Individual responses were aggregated to school-level responses, generating a sample of schools (n = 57). For all the items, the means ranged from 4.11 to 4.55, standard deviations from .94 to 1.15, and skewness from -1.28 to -.81. In this sample, many schools had only one individual response, which means aggregation did not change the response for these schools. Nevertheless, the data lost its ordinal nature, so we calculated Pearson correlations as the basis of the EFA. The correlations between items were all significant in a two-tailed test (p < .01), ranging from .50 to .90.

The results of the PA recommended a one-factor solution, so we conducted an EFA retaining one factor. We retained all items because they all loaded above .3. Table 4 reports the factor structure and the eigenvalue for each item. The factor accounted for 71.54% of the common variance. Cronbach’s alpha was .94 and the Guttman split-half coefficient was .87, indicating good internal consistency of the items in the scale.

Table 4. Factor Pattern and Eigenvalues of Sample III

Item    Factor 1    Eigenvalue
SIL2    .93         .19
SIL1    .92         5.01
SIL7    .92         -.13
SIL3    .83         .05
SIL4    .81         .01
SIL5    .77         -.03
SIL6    .71         -.09

4.1.4    Sample IV. All the Schools

As in Sample I, only question 3 had missing values. We deleted three cases without school information because they could not be clustered in a school. Among the remaining 419 cases, 99.76% had no missing data. To estimate missing values, we used the same method as described for Sample III. After aggregating the responses by school, every school with at least one response was represented, generating a sample of schools (n = 117). For all the items, the means ranged from 4.12 to 4.63, standard deviations from .92 to 1.06, and skewness from -1.07 to -.67. We calculated Pearson correlations as the basis of the EFA because the data had a continuous nature after aggregation. The correlations between items were all significant in a two-tailed test (p < .01), ranging from .57 to .93.

An EFA retaining one factor was conducted according to the recommendation of the PA. The results suggested retaining all items, as they all loaded above .3 on the factor. Table 5 reports the factor structure and the eigenvalue for each item. The factor accounted for 78.81% of the common variance. Cronbach’s alpha was .96 and the Guttman split-half coefficient was .91, indicating good internal consistency of the items in the scale.

Table 5. Factor Pattern and Eigenvalues of Sample IV

Item    Factor 1    Eigenvalue
SIL2    .95         .15
SIL7    .94         -.11
SIL1    .93         5.52
SIL4    .89         .01
SIL5    .86         -.02
SIL3    .85         .03
SIL6    .77         -.06

The results of the EFAs and reliability tests for the four sub-samples all point to a single factor retaining all seven items. Since the items focus on the overall practices of a school rather than individual staff behaviours, we retained the results of the school-level samples. Specifically, to represent more teachers and minimize the bias caused by a low number of respondents per school, the following CFA used the school-level group with response rates over 10%, based on the second sample.

4.2    Confirmatory Factor Analysis

We used the same method to impute missing values as in the EFA. We detected a small percentage of missing item data, ranging from .2% to .3% per item. Of the total 587 respondents, 98.98% had no missing data. Although the results of the four EFAs were similar, for the CFA we used the schools with a response rate above 10% of total staff to better represent each school. We aggregated the responses from staff for each school, generating a continuous data set at the school level (n = 103). Most of the schools included in this sample had a response rate between 10% and 20%. Table 5 reports the Pearson correlations between the items. All values of skewness and kurtosis were between -2 and 2, indicating that the data did not violate the assumption of normality. Thus, we used maximum likelihood as the estimation method for the CFA.

LISREL 9.2 was used to conduct the CFA. Figure 1 presents the final model. To improve model fit, we added error covariances between items 1 and 2, 4 and 5, and 5 and 6; each of these pairs of items is closely related in terms of the school practices it represents. The error covariances were added one at a time, and the whole model was re-analysed each time an error covariance was freed. The final model was over-identified (df = 11), and all fit indices indicated a good fit: the chi-square value was non-significant (p = .33), RMSEA was .04 (below the .05 threshold), SRMR was .03 (also below .05), and GFI was .97. All structure coefficients and factor loadings were significant (p < .001).
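For reference, RMSEA is a function of the chi-square statistic, the degrees of freedom, and the sample size. The sketch below uses df = 11 and n = 103 from the model above, while the chi-square value itself is illustrative, since the paper does not report it.

```python
import math

def rmsea(chi2, df, n):
    # Root mean square error of approximation:
    # sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# An illustrative chi-square value consistent with RMSEA ≈ .04
# at df = 11 and n = 103.
value = rmsea(chi2=12.8, df=11, n=103)
```

Note that any chi-square at or below the degrees of freedom gives an RMSEA of exactly zero, which is why a non-significant chi-square and a small RMSEA typically co-occur.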

Figure 1. Final Model of Shared Instructional Leadership

5   Discussion

These results are based on teachers’ perceptions of shared instructional leadership. In schools that value collective contributions from school members in leadership practices, the principal, teachers, and other staff tend to collaborate on the following activities: developing and communicating an instructional vision, improving instructional programs, securing resources for instruction, examining student achievement data, and identifying community partnerships. Our findings suggest that principals, teachers, and other staff have less collaborative input in identifying potential community partnerships, and more in the practices involving instruction, resources, and examining student achievement data.

The modifications of the model in CFA indicated a correlation between three pairs of items: developing and communicating an instructional vision; ensuring resources for high-quality instruction and making instructional decisions; and making instructional decisions and examining student achievement data. The principal, teachers, and other school staff tend to contribute more collaboratively to these paired practices. If they collaboratively develop an instructional vision (item 1), they also tend to communicate the vision in a shared way (item 2). Also, if the principal, teachers, and other staff work collaboratively to ensure resources for high-quality instruction (item 4), they also tend to make shared contributions on other instructional decisions (item 5). In schools with shared instructional leadership, it may be that principals empower school staff to use their knowledge in data-driven decision-making processes because this approach improves data relevance (item 5 and item 6) [20], [27], [28]. Overall, the results support that the responses of teachers to the items were driven by the underlying unidimensional construct of shared instructional leadership theorized based on our literature review.

6    Limitations and Future Research

Although the conception of shared instructional leadership involves the principal, teachers, and other staff, the sample used in this study included only secondary teachers, which means the factor structure is based on secondary teachers’ perspectives. Future studies should explore the perspectives of principals and other staff, to permit generalization of the measure to other populations or to explore differences in factor structure across these populations. Furthermore, the sample schools were all public secondary schools. The factor structure and internal consistency should be re-examined in samples with different characteristics, such as elementary schools and private schools.

Most of the schools in this study had response rates ranging from 10% to 20%, which may have limited the representation of teachers’ opinions. We distributed the survey through email and received a relatively low percentage of responses from most of the schools contacted. Even so, we found that the factor structure was quite stable across four sub-samples of schools with different response rates. Nevertheless, we suggest that future research examine the psychometric properties of the SILS in a sample of schools with higher mean response rates. Finally, future researchers should also conduct concurrent and predictive validity tests to further understand the usefulness of the survey and the processes and outcomes with which it may be associated.
