Reliability can be defined as the ability of a measurement method to perform in a constant, stable manner over consecutive uses. In other words, reliability is the extent to which a test is repeatable and yields consistent scores.
Reliability is a precondition for validity: an unreliable indicator cannot produce trustworthy results. Validity, on the other hand, can be defined as the extent to which a method or measurement is likely to be accurate and free from bias. In other words, it is the degree to which a test or any other measure does the job for which it was intended (Patton 2000).
There are various types of reliability. These include the following:

Test-Retest Reliability
In this type of reliability, estimates are obtained by repeating the measurement with the same instrument under conditions as nearly alike as possible. The results of the two administrations are then compared and the degree of correspondence determined; the greater the differences, the lower the reliability. In other words, test-retest reliability means that each time a test is administered, the results will be identical. For instance, if a study is designed to determine both brain-impairment behaviors and disability, and the scale employed to measure each concept is administered to a group of people today, their answers should look the same twelve weeks from now, given that all other variables remain the same.
Pearson's r correlation coefficient is used to establish the relationship between the two sets of measurements (changingmind.org 2010).
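As a minimal sketch of how this test-retest correlation might be computed, assuming Python with NumPy and SciPy and hypothetical score data (neither the data nor the tooling comes from the sources cited here):

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical scores from two administrations of the same test
    # to the same eight respondents, twelve weeks apart.
    time1 = np.array([12, 15, 9, 20, 18, 11, 14, 16])
    time2 = np.array([13, 14, 10, 19, 17, 12, 15, 15])

    # Pearson's r between the two administrations; values near 1
    # indicate high test-retest reliability.
    r, p_value = pearsonr(time1, time2)
    print(f"Test-retest reliability (Pearson's r): {r:.2f}")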
Internal Consistency Reliability
Also known as internal comparison reliability, this is measured by the intercorrelation among the scores of the items on a multiple-item index. Every item on the index must be designed to measure precisely the same thing. Internal consistency reliability evaluates individual questions against one another for their ability to give consistently appropriate results (changingmind.org 2010).
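One way to quantify this intercorrelation is the average inter-item correlation; Cronbach's alpha, a common summary statistic not named in the text above, can then be derived from it. The sketch below uses hypothetical rating data and is an illustration rather than a prescribed procedure:

    import numpy as np

    # Hypothetical responses: rows are five respondents, columns are
    # three items all intended to measure the same construct.
    scores = np.array([[4, 5, 4],
                       [2, 3, 2],
                       [5, 5, 4],
                       [3, 3, 3],
                       [4, 4, 5]], dtype=float)

    # Average inter-item correlation: mean of the off-diagonal entries
    # of the item-by-item correlation matrix.
    corr = np.corrcoef(scores, rowvar=False)
    k = corr.shape[0]
    avg_r = (corr.sum() - k) / (k * (k - 1))

    # Standardized Cronbach's alpha, derived from the average
    # inter-item correlation.
    alpha = (k * avg_r) / (1 + (k - 1) * avg_r)
    print(f"Average inter-item correlation: {avg_r:.2f}, alpha: {alpha:.2f}")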
Split-Half Reliability
This type of reliability compares one half of a test with the other half, on the assumption that all items measure one construct comparably and should therefore yield similar results. For instance, in a test involving 50 items, the first 25 items would be compared with the second 25, and the degree of similarity between the two halves determined. The Spearman-Brown formula is used to determine split-half reliability.
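The Spearman-Brown formula estimates the reliability of the full-length test from the correlation between its two halves. A minimal sketch, again assuming Python with SciPy and hypothetical half-test totals:

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical total scores of six respondents on each half
    # of a 50-item test.
    first_half = np.array([20, 14, 22, 11, 18, 16])
    second_half = np.array([19, 15, 21, 12, 17, 17])

    # Correlation between the two halves.
    r_half, _ = pearsonr(first_half, second_half)

    # Spearman-Brown correction: reliability of the full-length test
    # estimated from the half-test correlation.
    r_full = 2 * r_half / (1 + r_half)
    print(f"Split-half reliability (Spearman-Brown): {r_full:.2f}")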
Inter-Rater Reliability
Also known as inter-observer reliability, this is used when multiple people are giving assessments of some kind or are the subjects of some test. Equivalent raters assessing the same subject should arrive at the same scores. It can be used to calibrate people, for example those serving as observers in an experiment, and it estimates reliability across different raters and how similarly they score items. This is the best way of assessing reliability when observation is employed as a method of data collection, as observer bias very easily creeps in.
For instance, observers can be used to assess a certain behavior in a group of people who are briefed to respond in a programmed and consistent way; the variation of the observers' results from this standard gives a measure of their reliability (changingmind.org 2010).
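A simple index of inter-rater reliability is percent agreement, sketched below with hypothetical codes from two observers; chance-corrected statistics such as Cohen's kappa are a common refinement, though neither is named in the sources cited here:

    import numpy as np

    # Hypothetical behavior codes assigned by two observers watching
    # the same ten scripted (standardized) episodes.
    rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
    rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

    # Percent agreement: fraction of episodes on which the two
    # observers assign the same code.
    agreement = np.mean(rater_a == rater_b)
    print(f"Inter-rater agreement: {agreement:.0%}")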
There are various types of validity. The most common types include the following:

Criterion-Related Validity
This examines the ability of the measure to predict a variable that is designated as a criterion; achieving this level of validity thus makes results more credible. This type of validity can be employed to establish whether a test reveals certain abilities. For instance, if you wish to determine the risk of pregnancy among the youth using a 25-item questionnaire, criterion validity is established if your proposed measure correlates with a previously validated measure of the same risk among the youth (Changingmind.org 2009).

Construct Validity
This refers to the theoretical soundness of the instrument. In other words, construct validity defines how well a test or experiment measures up to its claims.
Construct validity occurs when the hypothetical constructs of cause and effect correctly characterize the real-world situations they are intended to model. This is related to how well the experiment is operationalized: a good experiment turns the hypothesis (its constructs) into real things you can evaluate. Construct validity is thus a measure of the quality of an instrument or experimental design.
For instance, an instrument should measure the construct it is supposed to measure accurately and precisely. If you do not have construct validity, you are likely to draw incorrect conclusions from the experiment (Changingmind.org 2009).

Face Validity
Face validity occurs when something appears to be valid; this depends very much on the judgment of the observer. In other words, face validity is a measure of how representative a research project is 'at face value'.
Measures often start out with face validity, as the researcher selects those which seem likely to prove the point. For instance, when determining a certain behavior in a group of people, face validity of the test is established if everyone shown the proposed test agrees that it is valid (Changingmind.org 2009).

Content Validity
Content validity occurs when the experiment provides adequate coverage of the subject being studied. This includes measuring the right things as well as having an adequate sample; samples should be both large enough and taken from the proper target groups.
Content validity is connected very closely to good experimental design. A question with high content validity covers more of what is required, and the trick with all questions is to ensure that the whole of the target content is covered uniformly. For instance, if one wishes to determine sleepiness in a group of individuals, a valid measure of sleepiness will require that both external and internal factors be considered, for instance the amount of sleep the previous night and the amount of activity during the day (Changingmind.org 2009).
Data Collection Methods and Instruments
Various data collection methods can be employed to collect data in both human service research and managerial research. These methods include interviews, group discussions, questionnaires, observation, use of available information, projection, mapping and scaling. Using already available information entails retrieving data from sources that contain information on similar research carried out before by other people, for instance newspapers, research papers and textbooks. Observation involves systematically selecting, watching and recording the behavior and characteristics of living beings, objects or phenomena. Interviewing is a data-collection technique that involves oral questioning of respondents, either individually or as a group.
Questionnaires are data collection tools in which written questions are presented to be answered by the respondents in written form. When a researcher uses projective techniques, he or she asks an informant to react to some kind of visual or verbal stimulus. Mapping is a valuable technique for visually displaying relationships and resources, and mapping a community is also very useful, and often indispensable, as a pre-stage to sampling. Scaling is a technique that allows researchers, through their respondents, to classify certain variables that the respondents would not be able to rank themselves (Moser & Kalton 1999).
Various instruments can be used when carrying out human service or managerial research. The type of instrument used depends on the type of study being carried out: for instance, if a study involves measurement of a certain parameter at some point, then appropriate measuring instruments are used.
The type of instruments employed in a study also depends greatly on the experimental design and on the methods employed in collecting data. When collecting data from available sources, instruments such as checklists and data-compiling forms can be employed to ensure that data is obtained easily and accurately. When collecting data through observation, various instruments are used; if observations are made using a defined scale, the appropriate measurement tools must be available. These instruments can include microscopes, watches, scales and so on.
When collecting data through interviews, you need instruments that will facilitate the exercise and make data collection easy and accurate, for instance an interview guide, a checklist, a questionnaire and a tape recorder (Moser & Kalton 1999). Reliability and validity in data collection methods and instruments are of great significance in both human service and managerial research, as they ensure that the data obtained is a true reflection of the construct under investigation.
They also ensure that the conclusions and recommendations drawn from a study are viable and that the data obtained can be used for future related studies. Reliable and valid methods and instruments are indispensable, especially when human beings are the study subjects or when the data obtained is used to make major decisions affecting an organization; in such cases, invalid or unreliable data can lead to wrong decisions or judgments that negatively affect individuals or organizations (Moser & Kalton 1999). Reliability and validity therefore form fundamental aspects of any method or research design: a study ceases to be valid if the data obtained is inaccurate or unreliable. It is therefore required that, prior to carrying out any research, especially research that requires accurate data to draw conclusions, one prudently employ procedures and instruments that are reliable and valid.