In this study, the students were asked to write a recount text twice, in the pre-test and the post-test, with the theme given in advance: "Experience" in the pre-test and "The Most Memorable Event" in the post-test. The experimental class received the peer-assessment treatment. In the peer-assessment activity, the teacher acted as a facilitator and instructor, giving instructions on how to do peer-assessment and monitoring the students' work. During peer-assessment, the students of the experimental class were guided by the peer-assessment rubric below, which helped them mark their friends' work.
The data of this research were collected from 38 students from two classes.
To analyze the data, the researcher used a quantitative technique, in which the data are expressed as numbers or scores. Since the data were in written form, they were scored first. The scoring followed the rubric of scale scoring categories of the oral proficiency test developed by Jacobs, known as analytic scoring (see Appendix 5). In analytic scoring, scripts are rated on several aspects of writing against given criteria rather than given a single score. It assigns a separate score to each element of the writing test, including content, organization, mechanics, and so on. Moreover, such schemes provide more detailed information about a test taker's performance in different aspects of writing.
4
Although it was designed for an oral proficiency test, it can also be used for a writing test.
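As an illustration of how an analytic scheme combines separate aspect scores into one mark, the sketch below sums hypothetical per-aspect scores. The aspect names and maximum points are assumptions made for illustration only; they are not the exact rubric in Appendix 5.

```python
# Illustrative analytic scoring: each aspect of writing gets its own score,
# and the total mark is the sum of the aspect scores.
# The aspects and maximum points below are hypothetical, not the Appendix 5 rubric.
MAX_SCORES = {
    "content": 30,
    "organization": 20,
    "vocabulary": 20,
    "language_use": 25,
    "mechanics": 5,
}

def total_score(aspect_scores: dict) -> int:
    """Sum the per-aspect scores after checking each stays within its maximum."""
    for aspect, score in aspect_scores.items():
        if aspect not in MAX_SCORES:
            raise ValueError(f"unknown aspect: {aspect}")
        if not 0 <= score <= MAX_SCORES[aspect]:
            raise ValueError(f"{aspect} score {score} is out of range")
    return sum(aspect_scores.values())

# One student's script, rated aspect by aspect:
script = {"content": 24, "organization": 15, "vocabulary": 16,
          "language_use": 20, "mechanics": 4}
print(total_score(script))  # 79
```

Because each aspect is reported separately before being summed, the rater can see, for example, that a script lost most of its points on organization rather than on content.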
3. Data Validity
Every test should represent what it intends to measure; this quality is called validity, and it applies to this research as well. Validity can be defined as the degree to which a test measures what it is supposed to measure. Gronlund, as cited by Brown, explains that the validity of a test is the extent to which inferences made from assessment results are appropriate, meaningful, and useful in terms of the purpose
4
Sara Cushing Weigle, Assessing Writing, Cambridge: Cambridge University Press, 2002, pp. 114-115.
of the assessment.
5
A test aims to provide a true measurement of the particular skill it is intended to measure.
Validity is a complex concept; it reminds the teacher of what makes a good test. The researcher used face validity to establish the validity of the instrument. Mousavi suggests that "Face validity refers to the degree to which a test looks right and appears to measure the knowledge or ability it claims to measure."
6
This validity cannot be empirically tested by the teacher or even by a testing expert. It is judged purely by the "eye of the beholder", in other words by the test-takers and the test-giver, who can intuitively understand the instrument. This means that the validity depends on how both test-takers and test-giver perceive the surface and appeal of the test, for example its instructions.
4. Technique of Data Analysis
Before the test was given to the students in the pre-test and post-test, it was tried out: after the instrument was made, it was administered in a tryout. The results of the tryout test were first analyzed with a normality test and a homogeneity test. After the pre-test, the students did peer-assessment, correcting their peers' writing. This process was expected to produce a different result in the post-test. The students then did the post-test, and the researcher analyzed the results. The data were analyzed by assumption testing and hypothesis testing. The assumption tests were a normality test and a homogeneity test, which were calculated to determine the next step of the analysis, namely whether to use a parametric or a non-parametric test.
1. Normality Test
A normality test is used to determine whether or not the data come from a normal distribution. In this study, the researcher used SPSS version 22 to test the normality of the data by following these steps:
5
Douglas Brown, Language Assessment Principles and Classroom Practice, New York: Pearson Education Inc., 2004, p. 22.
6
Ibid., pp. 26-27.
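The SPSS menu steps themselves are not reproduced here. As an illustration only, the same kind of normality check can be done in Python with scipy: SPSS's Explore procedure reports Kolmogorov-Smirnov and Shapiro-Wilk statistics, and the sketch below computes rough equivalents on made-up example scores.

```python
# Normality check sketch (illustrative data, not the study's scores).
from scipy import stats

scores = [68, 72, 75, 70, 74, 69, 73, 71, 76, 67, 72, 70]

mean = sum(scores) / len(scores)
sd = (sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)) ** 0.5

# Kolmogorov-Smirnov test against a normal distribution with the sample's
# mean and standard deviation (SPSS applies a Lilliefors correction here,
# which this simple sketch omits).
ks = stats.kstest(scores, "norm", args=(mean, sd))

# Shapiro-Wilk test, usually preferred for small samples.
sw = stats.shapiro(scores)

# A p-value above 0.05 means the null hypothesis of normality is not rejected.
print(ks.pvalue > 0.05, sw.pvalue > 0.05)
```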