I. Validity of Data
Validity means that a test measures what it is intended to measure and nothing else. The goals of the project are normally related to learning outcomes, which is what the assessment determines. Tests, examinations, and continuous assessment can provide valuable data for action research. Validating a test is by no means an easy task. In this case, the steps involved in determining concurrent validity are as follows:
1 Administer the new test to a defined group of individuals.
2 Administer a previously established, valid test to the same group at the same time, or shortly thereafter (or use such scores if they are already available).
3 Correlate the two sets of scores.
4 Evaluate the result.36
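The steps above can be sketched in code: given two sets of scores for the same group of students, step 3 amounts to computing a correlation coefficient, for which Pearson's r is a common choice. The score lists below are purely illustrative, not data from this study.

```python
# Hypothetical scores for the same group on the new test and on an
# established, previously validated test (illustrative numbers only).
new_test = [65, 70, 80, 55, 90, 75, 60, 85]
established = [60, 72, 78, 50, 92, 70, 65, 88]

def pearson_r(x, y):
    """Correlate the two sets of scores (step 3): Pearson's r."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Step 4 (evaluate the result): a coefficient near 1.0 suggests the new
# test ranks students much like the established one.
print(round(pearson_r(new_test, established), 3))
```

A high positive coefficient would support the concurrent validity of the new test; a coefficient near zero would not.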
Regarding validity in action research, the writer adopts Anderson, Herr, and Nihlen's criteria, which state that the validity of action research includes democratic validity, outcome validity, process validity, catalytic validity, and dialogic validity.37 In this case, the writer and the teacher discuss and assess the students' test results of cycle one and cycle two together. This is done in order to avoid invalid data.
J. Trustworthiness of the Study
1 Discriminating Power
Discriminating power provides a more detailed analysis of the test items than item difficulty does, because it shows how the top scorers and the lower scorers performed on each item.38
36 L. R. Gay, Educational Research: Competencies for Analysis and Application, Ohio: Merrill Publishing Company, 1987, p. 132.
37 Geoffrey E. Mills, Action Research: A Guide for the Teacher Researcher, Ohio: Merrill Prentice Hall, 2003, p. 84.
38 Kathleen M. Bailey, Learning about Language Assessment: Dilemmas, Decisions, and Directions, London: Heinle Publisher, 1998, p. 135.
D = (U – L) / N

In which:
D : the index of discriminating power
U : the number of pupils in the upper group who answered the item correctly
L : the number of pupils in the lower group who answered the item correctly
N : the number of pupils in each group
Next, the discriminating power scale is used:39

DP          REMARK
0.6 – 1.0   Very good
0.4 – 0.6   Good
0.1 – 0.3   OK
-1.0 – 0.0  Bad
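As a sketch, the formula and scale above can be applied to a single item like this. The item counts are hypothetical, and since the scale's bands share endpoints, boundary values such as 0.6 are assigned to the higher band here as an assumption.

```python
def discriminating_power(upper_correct, lower_correct, group_size):
    """D = (U - L) / N for a single test item."""
    return (upper_correct - lower_correct) / group_size

def remark(d):
    # Thresholds follow the scale above; boundary values go to the
    # higher band (an assumption, since the listed bands overlap).
    if d >= 0.6:
        return "Very good"
    if d >= 0.4:
        return "Good"
    if d >= 0.1:
        return "OK"
    return "Bad"

# Hypothetical item: 8 of 10 upper-group pupils correct, 2 of 10 lower-group.
d = discriminating_power(8, 2, 10)
print(d, remark(d))  # 0.6 Very good
```

An item answered equally often by both groups (D near 0) fails to discriminate and would be revised or discarded.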
2 Item Difficulty
The item difficulty analysis concerns the proportion of students who answer an item correctly compared with all of the students who take the test. Item difficulty is how easy or difficult an item is from the viewpoint of the group of students or examinees taking the test of which that item is a part.40 The formula is as follows:41
39 J. B. Heaton, Classroom Testing, New York: Longman Inc., 1990, p. 174.
40 John W. Oller, Language Tests at School, London: Longman Group Limited, 1979, p. 246.
41 Norman E. Gronlund, Constructing Achievement Tests, New York: Prentice Hall, 1982, p. 102.
P = R / T

In which:
P : the index of difficulty
R : the total number of students who selected the correct answer
T : the total number of students, including the upper and lower groups
The classification of difficulty levels:42

ID           REMARK
0.80 – 1.00  Low
0.30 – 0.79  Medium
0.00 – 0.29  High
42 John W. Oller, Language Tests at School, …, p. 247.
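The difficulty index and its classification can be sketched in the same way; the counts below are hypothetical, not results from this study. Note that a high P means many students answered correctly, i.e. the item's difficulty is low.

```python
def item_difficulty(num_correct, total_students):
    """P = R / T: proportion of examinees answering the item correctly."""
    return num_correct / total_students

def difficulty_level(p):
    # Thresholds follow the classification table above; a high index P
    # corresponds to an easy item, hence the "Low" difficulty remark.
    if p >= 0.80:
        return "Low"
    if p >= 0.30:
        return "Medium"
    return "High"

# Hypothetical item: 24 of 29 students selected the correct answer.
p = item_difficulty(24, 29)
print(round(p, 2), difficulty_level(p))  # 0.83 Low
```

Items classified as Low or High difficulty give little information about most students; Medium items (P between 0.30 and 0.79) are generally preferred.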
CHAPTER IV RESULT AND DISCUSSION
A. Before Implementing the Action
Before implementing the action, the writer divided the data description into three parts in order to identify the obstacles in the teaching-learning activities: pre-observation, pre-interview, and pre-test. Here are the explanations:
1. Result of Pre Observation
The objective of the pre-observation is to know the teaching-learning process directly before implementing the Classroom Action Research (CAR). It was conducted on Monday, 28 February 2011 in grade VII of SMP Prakarya, from 07:30 a.m. to 08:45 a.m. There were 29 students in the class. Based on the observation, it was found that in teaching vocabulary in grade VII of SMP Prakarya Anjatan-Indramayu, academic year 2010/2011, the teacher tended to be teacher-centered during the teaching-learning process. This made the students passive. When teaching vocabulary, the teacher only gave the translation and asked the students