The need for subject matter expertise is inherent in questionnaire design. There may be a gap between the researcher's survey methodological expertise and his or her familiarity with the survey's subject. This need is most acute in surveys requesting complex or technical information, such as establishment surveys, as opposed to opinion surveys of the general population. Survey researchers attempt to address this gap in part by involving the targeted respondents, presumably knowledgeable about the subject and population of the survey, and other non-survey experts. A subject matter expert review is distinct from the process of reviewing the questionnaire by design experts, embodied by the questionnaire appraisal approach (Lessler & Forsyth, 1996; Willis & Lessler, 1999). It is also distinct from the host of methods used to obtain feedback from members of the respondent population. A number of survey methodologists have proposed approaches for incorporating subject matter expertise. For example, Dillman (2000) recommends review by knowledgeable colleagues and analysts as the first stage of pre-testing, and he recommends tapping the experience of:
a) People who have analyzed data;
b) Policy makers who can opine on how meaningful and relevant questionnaire data may be;
c) Individuals with survey experience.
The researcher conducted a qualitative review with 12 expert reviewers on the contents of the questionnaire. The expert reviewers were selected from different industries and companies, with many years' experience in manufacturing, security, logistics, freight, warehousing management and auditing. A summary of the subject matter expert reviewers is listed in Figure 3.3. The names and companies of the reviewers are excluded from this summary to ensure the confidentiality of the reviewers' personal information. While other assessments of questionnaire testing methods sometimes use measures such as the number of questionnaire problems identified as a way to compare the utility of testing methods (Presser & Blair, 1994), in this research the process of review by subject matter experts suggested the following general observations on the use and value of this technique.
The design and format comments from the subject matter experts were insightful and useful in enhancing the questionnaire used for this research. The following is the feedback received:
i. A scale of five or more points should be used instead of a dichotomous yes/no format.
ii. A recommendation to re-arrange the answer categories in order of likely frequency of response, regardless of other effects this re-ordering might have. This information might also be useful for highlighting very frequent answers in the stem, or for eliminating rare answers and consolidating them in “other” categories.
iii. A preference for an open-ended format over pre-coded answer categories, perhaps suggesting concerns about the comprehensiveness of the original answer categories.
iv. There are too many budget classifications.
v. Items not applicable to certain industries are included in the questionnaire.
vi. The number of security incidents related to cargo crime (e.g. theft, robbery or hijacking cases) should be included in the questionnaire.
vii. Good introductory material and instructions were used in the questionnaire cover pages.
viii. The instructions and attractive cover page are likely to encourage participation by respondents.
ix. The cover page is also very attractive and nicely formatted.
3.4.4 Content Validity Test
Content validity is a non-statistical type of validity that involves the systematic examination of the test content to determine whether it covers a representative sample of the behaviour domain to be measured (Anastasi & Urbina, 1997). Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits. Anastasi and Urbina (1997) explained that a test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves subject matter experts (SMEs) evaluating test items against the test specifications. Anastasi and Urbina (1997) further explained that a test has content validity built into it by careful selection of which items need to be included in the survey. Foxcroft, Paterson, le Roux and Herbst (2004) noted that the content validity of a test can be improved by using a panel of experts to review the test specifications and the selection of items. Subject matter experts are thus able to review the items and comment on whether they cover a representative sample of the behaviour domain. The use of reliability and validity is common in quantitative research and is now being reconsidered in the qualitative research paradigm (Golafshani, 2003). Wainer and Braun (1998) described validity in quantitative research as construct validity, and further explained that the construct is the initial concept, notion, question or hypothesis that determines which data are to be gathered and how the information can be collected. Quantitative researchers usually apply a test or other process that will actively cause or affect the interplay between construct and data in order to validate their investigation.
In this sense, the involvement of the researchers in the research process would greatly affect the validity of a test.
3.5 Sampling Designs
The sampling design used in this research is non-probability sampling. There are two major types of sampling designs: probability and non-probability sampling. Ideally, probability samples are preferred when the aim is to create a representative sample. In probability sampling, the elements in the population have some known chance or probability of being selected as sample subjects (Saunders, Lewis & Thornhill, 2012), whereas in non-probability sampling, the elements do not have a known or predetermined chance of being selected as subjects. Probability sampling designs are used where the representativeness of the sample is important in the interest of wider generalizability. When time or other factors become critical, non-probability sampling is generally used (Saunders, Lewis & Thornhill, 2012), and issues related to the determination of sampling will be encountered when conducting research. Any sampling procedure that violates the equal probability of selection method is viewed as non-probability sampling. Such sampling may be necessary in order to answer certain kinds of questions with large populations. However, in addition to the use of populations rather than samples, some research does not need to generalize to a population, or cannot adequately identify the members of a population in order to draw a sample (Blaikie, 2010). Sometimes insisting on the use of random sampling would make the research very difficult or impossible. In such cases, it is therefore necessary to compromise with the ideal and use a non-probability sampling method.
This research design decision can be justified on the grounds that it is better to have some knowledge that is restricted because of the type of sample than to have no knowledge of the topic at all. Decisions about whether or not to use probability sampling, and how it should be done, are not confined to quantitative research; they are also necessary in research that intends to gather qualitative data (Burgess, 1982). Invariably, because qualitative methods are resource-intensive, smaller samples must be used. Here the compromise is between having data on, perhaps, an unrepresentative sample or just a single case. Blaikie (2010) argued that there is no necessary connection between the type of research method used (quantitative or qualitative) and the type of sample that is appropriate. Qualitative research can also work with populations, although they are likely to be much smaller. For example, it would be possible to study neighbouring behaviour through participant observation in a housing area in a middle-class suburb. Defining a population in this way may restrict the statistical generalizability of the results, but the richness of the data may allow generalizations based on a judgment about how typical the chosen research site is, or whether other suburbs or housing areas in other cities are similar. If the research also included a variety of housing areas in the same city, then generalizability may be enhanced. Here the sampling issue becomes one of which city, and which town or housing area, is to be selected. This method of selection may be based on judgment rather than probability, although probability sampling should not be overlooked.
In non-probability sampling designs, the elements in the population do not have any known probability of being chosen as respondents. This means that the findings from the study of the sample cannot be confidently generalized to the population. Researchers may at times be less concerned about generalizability than about obtaining some preliminary information in a quick and inexpensive manner. This is why non-probability sampling is used in this research. Sometimes non-probability sampling may be the only feasible way to obtain the necessary data and information. In this research, selected transportation companies in Malaysia have been chosen as the sample of respondents. Probability sampling is not used in this research because trucking companies registered with the authority in Malaysia may not represent the organizations actually managing trucking or transportation businesses in Malaysia. Some companies use other individuals' trucking permits, or their trucks are registered under different companies or ownership. Furthermore, there is no legislative requirement for a company actually operating a transportation business to be registered as a transportation company. It is therefore more suitable to use non-probability sampling, as the researcher is knowledgeable and experienced in selecting the appropriate sample for this research. Some non-probability sampling plans are more dependable than others and can offer important leads to potentially useful information about the population. The non-probability sampling designs, which fit into the broad category of purposive sampling, are discussed below.
It is sometimes necessary to obtain information from specific target groups rather than from those who are most readily or conveniently available. Here the sample is confined to specific types of people who can provide the desired information, either because they are the only ones who have it or because they conform to some criteria set by the researcher; this type of sampling design is called purposive sampling (Sekaran, 2003). Purposive (judgmental) samples represent the selection of an appropriate sample based on the researcher's skill, judgment, and needs (Hagan, 2003). Hagan (2003) explained that this type of sampling is well used on election nights, when the major networks, based on sample precincts, are able to predict the likely outcome quite accurately, often with a small margin of error, with only 2 percent of the votes cast. Marketing studies often use test areas that possess characteristics quite similar to those of the nation. Both political campaign planners and market analysts have made use of focus groups. The organizers of these focus groups bring together purposively selected volunteers to measure reactions or attitudes about products, candidate speeches and other elements (Krueger, 1994, 1997; Morgan, 1993; Stewart & Shamdasani, 1990).
3.5.1 Determination on Sample Size in Quantitative Survey
Sample size plays a vital role in determining the representativeness of the data set that a researcher considers for sampling. It depends on the maximum allowable or acceptable error that a sample can have, or on the accuracy that a researcher desires (i.e., the required accuracy). When the sample size is too small to be representative, the chance of error increases. The confidence level of 95% or 99% (Lynn & Elliot, 2000) that a researcher requires also determines the sample size. Other things being equal, larger samples result in survey estimates having smaller variance (smaller standard errors). Variance is inversely proportional to sample size, and hence standard errors and confidence intervals are inversely proportional to the square root of the sample size. For example, doubling a sample size will tend to reduce standard errors by around 29% (the multiplying factor being 1/√2 ≈ 0.71), as discussed by Lynn and Elliot (2000).
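The inverse-square-root relationship above can be checked with a short calculation. The following is a minimal sketch (the function name and sample sizes are illustrative, not from the source):

```python
import math

def se_scaling(n_old: int, n_new: int) -> float:
    """Multiplying factor applied to the standard error when the
    sample size changes from n_old to n_new (SE is proportional to 1/sqrt(n))."""
    return math.sqrt(n_old / n_new)

# Doubling the sample size multiplies the standard error by 1/sqrt(2):
factor = se_scaling(1000, 2000)
reduction_pct = (1 - factor) * 100
print(round(factor, 3))         # 0.707
print(round(reduction_pct, 1))  # 29.3, i.e. the "around 29%" reduction
```

Note that halving the standard error requires quadrupling, not doubling, the sample size, which is why precision gains become expensive as samples grow.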
Sample size determination is often a very important step in planning statistical research, and it is usually a difficult one. Lenth (2001) reported that one must obtain an estimate of one or more error variances and specify the effect size of importance. There is a temptation to take shortcuts, but statistical research (surveys, experiments, observational studies, etc.) is always better when it is carefully planned. It should be noted that not all sample size problems are the same, nor is sample size equally important in every study. For example, the ethical issues in an opinion poll are very different from those in a medical experiment, and the consequences of an over- or undersized study also differ; sample size problems are therefore context-dependent, as discussed by Lenth (2001).
The determination of sample size in quantitative research is a common task for many researchers using this methodology. Response rates are usually below 100% in research studies that use surveys and other voluntary participation methods. Salkind (1997) recommended oversampling, stating that when mailing out surveys or questionnaires, the sample size needs to be increased by 40% to 50% to account for lost mail and uncooperative respondents. Fink (1995) stated that oversampling adds costs to the survey but is often necessary. A second consequence of non-response is that the variances of estimates are increased because the sample actually obtained is smaller than the target sample (Cochran, 1977). These factors can be allowed for in selecting the size of the sample (Cochran, 1977). Not all researchers agree with the use of oversampling, but the researcher needs to ensure that the minimum sample size is achieved.
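The oversampling adjustment described above can be sketched as a one-line calculation: divide the target number of completed responses by the expected response rate. The function name and the example figures below are illustrative assumptions, not values from the source:

```python
import math

def oversampled_n(target_n: int, expected_response_rate: float) -> int:
    """Number of questionnaires to send out so that, at the expected
    response rate, roughly target_n completed responses are returned."""
    return math.ceil(target_n / expected_response_rate)

# A target of 200 completed responses with a 70% expected response rate
# requires mailing 286 questionnaires, i.e. a 43% increase, which falls
# within the 40-50% oversampling range Salkind (1997) suggests:
print(oversampled_n(200, 0.70))  # 286
```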
The sample size formulas and procedures used for categorical data are very similar, though some variations exist. Assume a researcher has set the alpha level a priori at .05, plans to use a proportional variable, has set the level of acceptable error at 5%, and has estimated the standard deviation of the scale as 0.5. Cochran's (1977) sample size formula for categorical data, and an example of its use, are presented below along with explanations of how these decisions were made.
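Since the worked example itself is not included in this excerpt, the following sketch applies the standard form of Cochran's (1977) formula for categorical data, n₀ = t²·p(1−p)/d², with the parameter values stated above (t = 1.96 for alpha = .05, p = .5 so the estimated standard deviation is 0.5, and acceptable error d = .05). The function names and the finite-population figure are illustrative assumptions:

```python
def cochran_n0(t: float, p: float, d: float) -> float:
    """Cochran's sample size formula for categorical data:
    n0 = (t^2 * p * (1 - p)) / d^2."""
    return (t ** 2) * p * (1 - p) / d ** 2

# t = 1.96 (alpha = .05, two-tailed), p = .5 (maximum variability,
# giving an estimated standard deviation of 0.5), d = .05 (5% error):
n0 = cochran_n0(t=1.96, p=0.5, d=0.05)
print(round(n0))  # 384

# If the population is small, Cochran's finite-population correction
# n = n0 / (1 + n0 / N) reduces the required sample; N = 1500 is a
# purely illustrative population size:
n = n0 / (1 + n0 / 1500)
print(round(n))  # 306
```

Choosing p = .5 maximizes p(1 − p) and therefore yields the most conservative (largest) sample size when the true proportion is unknown.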