Researchers need to know whether an assessment adequately represents the performance domains and/or constructs they are interested in, and whether the assessment items address these domains and/or constructs in the correct proportions. This is known as validity. Validation is undertaken by a test developer to collect evidence to support the types of inferences that are to be drawn from the results of an assessment.

Researchers should understand several types of validity in order to review test manuals and identify whether psychometric studies have shown that an assessment has appropriate levels of validity. Three types of validation, known as content, construct and criterion-related validation, are traditionally performed. There is no single recognized measure of validity, so it is usual for researchers to conduct a range of studies to examine its different aspects (Turner, Foster, & Johnson, 2002).

Concept of Validity

In the social sciences there appear to be two approaches to establishing the validity of a research instrument: logic and statistical evidence.

Establishing validity through logic implies justification of each question in relation to the objectives of the study, whereas statistical procedures provide hard evidence by calculating coefficients of correlation between the questions and the outcome variables (Reynolds & Kamphaus, 2003). Establishing a logical connection between questions and purpose is both simple and hard: simple in the sense that you may find it effortless to see the link yourself, and hard because your justification may lack the backing of experts and the statistical evidence needed to convince others. The connection is easier to establish when the questions relate to tangible matters. For example, if you want to find out about age, income, height or weight, it is relatively easy to establish whether a set of questions measures them; establishing whether a set of questions measures, say, the effectiveness of a program, the attitudes of a group of people towards an issue, or the extent of satisfaction of a group of consumers with an organization's service is harder.
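As a minimal sketch of the statistical route described above, the correlation between responses to a questionnaire item and an outcome variable can be computed with the Pearson coefficient. The item and outcome values below are hypothetical, invented purely for illustration.

```python
# Hedged sketch: Pearson correlation between responses to one
# questionnaire item and an outcome variable (hypothetical data).
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: item scores (1-5 Likert) and an outcome score
item_scores = [2, 4, 3, 5, 1, 4, 2, 5]
outcome = [30, 55, 45, 70, 25, 60, 35, 75]
print(round(pearson_r(item_scores, outcome), 3))
```

A coefficient near 1 would be taken as statistical evidence that the item and the outcome vary together; in practice a researcher would also report a significance test alongside the coefficient.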

When an intangible concept is involved, such as effectiveness, attitude or satisfaction, you need to ask several questions in order to cover different aspects of the concept and demonstrate that the questions asked are actually measuring it. Validity in such situations becomes more difficult to establish. It is important to remember that the concept of validity pertains only to a particular instrument, and that it is an ideal state that you as a researcher aim to achieve (Kumar, 2005).

Three Types of Validity

In the past, validity has been classified into broad types with overlapping, interrelated categories.

Over time, however, “measurement specialists have come to view these as aspects of a unitary concept of validity that subsumes all of them” (Sawyer, 2004). The validity of a measure is the degree to which it measures the theoretical construct under investigation. This construct is, in the nature of things, unobservable; all we can do is obtain imperfect measures of that entity (Mitchell & Carson, 1989). Methodological decisions must be taken in light of the validity of the research. The term ‘validity’ in social research, vis-à-vis research findings, refers to the degree to which they approximate the truth. Various types of validity can be distinguished.

For this research three types of validity are relevant: face validity, construct validity and external validity. Face validity is the most basic kind of validity: it concerns whether, on the face of it, the definitions and methods used are appropriate. It is sometimes argued that transparency is the sole validity criterion for qualitative research.

Though transparency is very important in qualitative research, construct validity can also be a goal, especially when the guiding conceptual framework is not too loose. Construct validity refers to the approximate validity with which generalizations can be made about higher-order constructs from research operations; it is based on the fit between research operations and conceptual definitions. External validity relates to the validity with which conclusions can be generalized beyond the study to other populations of persons, settings, times, etc. (Rombouts, 2004). Of these types, construct validity is recognized as a force unifying all types of validity evidence.

A construct has been defined as “some postulated attribute of people, assumed to be reflected in test performance” (Sawyer, 2004). Content validity (or face validity) concerns whether the measure adequately covers the construct’s domain. It differs from the other validity types in that it can only be assessed by a subjective judgment based on an examination of the instrument (usually the wording of a question). Thus, the content validity of a fifty-item scale designed to measure people’s historical knowledge might be evaluated by a panel of historians who would be asked how well the items cover the domain. To the extent that any of the items were inaccurate or that important historical periods were not adequately represented among the items, the panel presumably would question the scale’s content validity (Johnsen, 2004).
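Panel judgments of the kind described above are sometimes quantified; one common approach (not mentioned in the sources cited here, so treat it as an illustrative assumption) is Lawshe's content validity ratio, which summarizes how many panelists rate an item as essential. The panel size and vote counts below are hypothetical.

```python
# Hedged sketch: Lawshe's content validity ratio (CVR), one common
# way to quantify expert-panel judgments about item relevance.
def cvr(essential_votes, panel_size):
    """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = panel_size / 2
    return (essential_votes - half) / half

# A hypothetical 10-member panel rates two items
print(cvr(9, 10))   # rated essential by 9 of 10 panelists -> 0.8
print(cvr(4, 10))   # rated essential by only 4 of 10 -> -0.2
```

Items with a CVR near +1 are retained as clearly covering the domain, while items near or below zero would be the ones the panel “presumably would question.”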

Criterion validity is concerned with whether the measure of the construct is related to other measures that may be regarded as criteria. For example, it is far cheaper to use surveys to estimate crime victimization rates than to use police records for this purpose; but are self-reports an accurate measure? Self-reports have been validated by checking them against official records. The results have been generally favorable, although certain kinds of events, such as assaults, are more likely to be underreported than less serious episodes, such as larcenies (Mitchell & Carson, 1989).
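The validation check described above can be sketched as a simple comparison of self-reported incident counts against official counts for the same respondents. All counts below are invented for illustration; real studies would use matched survey and police-record data.

```python
# Hedged sketch of a criterion-validity check: compare hypothetical
# self-reported incident counts against official (police-record)
# counts for the same four respondents, by offence type.
self_report = {"larceny": [3, 1, 2, 4], "assault": [1, 0, 1, 2]}
official = {"larceny": [3, 1, 2, 5], "assault": [2, 1, 2, 3]}

def underreport_rate(reported, recorded):
    """Fraction of officially recorded incidents not self-reported."""
    total_recorded = sum(recorded)
    total_matched = sum(min(r, o) for r, o in zip(reported, recorded))
    return 1 - total_matched / total_recorded

for offence in self_report:
    rate = underreport_rate(self_report[offence], official[offence])
    print(offence, round(rate, 2))
```

With these made-up numbers the assault underreporting rate comes out higher than the larceny rate, mirroring the pattern reported by Mitchell and Carson: more serious episodes are more likely to go unreported in surveys.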