
personal opinions from the collaborators and the students themselves, which were used as indicators of whether the actions were successful or not.

3. Tests

The quantitative data were obtained by conducting tests. The pretest and the posttest were administered to obtain information about the students’ reading comprehension level. From the pretest scores, it could be seen whether the students had a low, average, or high level of reading comprehension. The pretest was held one week before the implementation of the actions, while the posttest was held at the end of Cycle II. The mean scores of the pretest and the posttest were compared in order to find out whether there was a significant improvement after the implementation of the actions.
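To illustrate how the two mean scores can be compared, the following is a minimal sketch in Python; it is not taken from the thesis. The score lists are invented placeholders, and the paired t-test (via SciPy) is only an assumption about how a "significant improvement" could be checked, since the thesis may simply compare the raw means.

# Minimal sketch (illustrative only): comparing pretest and posttest means.
# The score lists below are invented placeholders, one entry per student.
from statistics import mean
from scipy import stats  # assumed add-on for the paired t-test; not mentioned in the thesis

pretest = [55, 60, 48, 70, 65, 58]
posttest = [68, 72, 60, 78, 80, 66]

print("Pretest mean: ", round(mean(pretest), 2))
print("Posttest mean:", round(mean(posttest), 2))

# Paired t-test on the same students' scores: a p-value below 0.05 would
# suggest that the gain in the mean score is unlikely to be due to chance.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")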

F. Techniques of Data Analysis

The data of the research are qualitative and quantitative. Both types of data were analyzed in order to find out whether the study had been carried out successfully. According to Burns (1999: 157), there are several steps for a researcher to follow in analyzing qualitative data:

1. Assembling the data
The researcher assembled the data, such as field notes, interview transcripts, and observation sheets, by scanning them in a general way to show up broad patterns so that they could be compared and contrasted. In this way, the researcher could see what really occurred in the field during the teaching and learning process.

2. Coding the data
Coding was done after scanning the data, and it serves to identify the data more specifically. Coding the data was a process of reducing the large amount of data that had been collected into more manageable categories of concepts, themes, or types.

3. Comparing the data
After coding the data, the researcher compared the data by identifying the relationships and connections between the different sources of data, in order to find out whether the actions were repeated or developed across the different data collection techniques.

4. Building interpretations
At this stage, the researcher was required to come back to the data several times to pose questions, rethink the connections, and develop explanations of the bigger picture underpinning the research.

5. Reporting the outcomes
At this stage, the researcher presented the results of the research. This included presenting the issue underlying the study, describing the research context, outlining the findings supported by the data, relating the context and the findings, and finally suggesting how the process had been improved so that it could lead to other areas for research.
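As a purely illustrative aid to the coding and comparing steps (steps 2 and 3 above), the short sketch below groups excerpts from different data sources under a small set of codes and then checks which codes recur across data collection techniques. The excerpts, codes, and sources are invented and are not taken from the thesis data.

# Illustrative sketch only: grouping invented excerpts under codes (step 2)
# and checking which codes are supported by more than one data source (step 3).
from collections import defaultdict

excerpts = [
    ("field note", "Students asked each other about the text in groups.", "participation"),
    ("interview",  "I understand the text better after discussing it.",   "comprehension"),
    ("field note", "Two students stayed silent during the discussion.",   "participation"),
]

# Coding: reduce the raw excerpts to more manageable categories (codes).
coded = defaultdict(list)
for source, text, code in excerpts:
    coded[code].append((source, text))

# Comparing: see whether each code recurs across different data collection techniques.
for code, items in coded.items():
    sources = {source for source, _ in items}
    print(f"{code}: {len(items)} excerpt(s) from {sorted(sources)}")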

G. Validity and Reliability

According to Anderson et al. (in Burns, 1999: 161-162), there are some criteria of validity needed in an action research study in order to obtain valid data: democratic validity, outcome validity, process validity, catalytic validity, and dialogic validity.

1. Democratic validity
Democratic validity means giving the stakeholders chances to voice their opinions, ideas, and comments on the application and implication of the research. The researcher interviewed the English teachers as well as the students of grade X at SMAN 1 Kasihan in order to find out how they felt about the running of the process. The interviews were conducted at several points during the process.

2. Outcome validity
Outcome validity concerns the results of the actions that were implemented. The outcome validity of this research was the improvement of the students’ active participation and involvement in class through the use of group discussions.

3. Process validity
To fulfill process validity, the researcher, her colleague, and the English teachers observed the process of implementing group discussions to improve the students’ participation and involvement in class. After that, they held a discussion to determine whether the process was successful or not, based on the observation sheets and field notes collected during the process. The result of the discussion then determined the treatment for the next cycle.

4. Catalytic validity
Catalytic validity deals with the teacher’s comprehension of the factors that may obstruct or facilitate the teaching and learning process. The researcher applied catalytic validity through the cycle of the action research: planning, implementation and observation, and reflection. She observed the students’ change of behavior before and after the actions were given.

5. Dialogic validity
Dialogic validity, which according to Burns (1999) means that the stakeholders can participate in the process of the research, was achieved by holding dialogues among the researcher, the English teachers of SMAN 1 Kasihan, and the students of grade X IIS 1 at SMAN 1 Kasihan.

In order to ensure the validity of the quantitative data, the researcher applied the validity criteria proposed by Cohen et al., namely content validity and face validity. Content validity is related to the range of subject matter that the test is intended to cover; it means that the items of the test should cover the materials that had been taught before. Content validity was fulfilled by designing a table of content specifications. Face validity concerns whether the items of the test actually test what should be tested; it means that since the study wanted to find out the students’ reading comprehension level, the type of test should be able to assess reading skills. The type of test administered in this study was an achievement test, since it was related to the classroom lessons.

In order to make the pretest and the posttest reliable as sources of quantitative data, several aspects were considered in designing the test items. Brown (2004) proposes three aspects to consider: item facility, item discrimination, and distractor efficiency. Item facility is related to the item difficulty level, which should be adjusted to the test-takers’ or students’ proficiency level. Item discrimination is related to the ability of the items to distinguish between low-scoring and high-scoring test-takers. Distractor efficiency is related to the appropriateness of the distractors used to trick both high- and low-scoring test-takers.
The distractors should not be too easy for high-scoring test-takers, nor too difficult for low-scoring test-takers. The test items were then tried out on students at the same level, and the results of the try-out were analyzed. To analyze these three aspects, the researcher used ITEMAN, in this case ITEMAN 3.00. The test items were then revised based on the results of the analysis. The values obtained from ITEMAN were compared to the ranges of item difficulty and item discrimination presented by Hingorjo (2012). The acceptable ranges of item difficulty and item discrimination are presented below.

Table 2: The Acceptable Item Difficulty and Item Discrimination

Item Difficulty Range   Interpretation   Acceptable Item Discrimination
Below 0.30              Difficult        0.24 and above
0.30 to 0.70            Good
Above 0.70              Easy

Based on the analysis of the try-out results, there were 10 invalid items. Items number 1, 2, 10, 23, and 41 in the pretest and posttest prototypes were invalid because their item facility values were below 0.30 or above 0.70. Meanwhile, items number 3, 28, 30, 36, and 39 were unacceptable because their item discrimination indices were below 0.30. The researcher removed those invalid items and revised some others, so that there were 35 items in each test. The further analysis of the test items can be found in Appendix E.

To establish the trustworthiness of the qualitative data and to reduce subjectivity in analyzing them, the researcher used the following triangulation techniques:

1. Time triangulation: the data were collected at different points in time in order to see how the changes developed over the process.
2. Investigator triangulation: in order to avoid the bias that might occur in the process of the changes, the researcher worked together with the English teacher and a colleague from PBI as the collaborators.
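As an illustration of the item analysis described above, the following minimal sketch computes item facility (the proportion of test-takers answering an item correctly) and item discrimination (the difference between the high- and low-scoring groups’ proportions) from a small invented response matrix, and flags items against the 0.30 to 0.70 facility range and the 0.24 discrimination threshold in Table 2. It is not ITEMAN’s implementation, and the median split into high and low groups is a simplification; classical item analysis often uses the upper and lower 27% of scorers, and ITEMAN reports its own discrimination statistics.

# Illustrative sketch only: classical item facility and item discrimination
# computed from an invented 0/1 response matrix (rows = students, columns = items).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]

n_items = len(responses[0])
totals = [sum(row) for row in responses]

# Split students into low- and high-scoring halves by total score (median split).
order = sorted(range(len(responses)), key=lambda i: totals[i])
half = len(responses) // 2
low_group = [responses[i] for i in order[:half]]
high_group = [responses[i] for i in order[-half:]]

for item in range(n_items):
    facility = sum(row[item] for row in responses) / len(responses)
    p_high = sum(row[item] for row in high_group) / len(high_group)
    p_low = sum(row[item] for row in low_group) / len(low_group)
    discrimination = p_high - p_low
    # Thresholds taken from Table 2 above.
    acceptable = 0.30 <= facility <= 0.70 and discrimination >= 0.24
    print(f"Item {item + 1}: facility={facility:.2f}, "
          f"discrimination={discrimination:.2f}, acceptable={acceptable}")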

H. Procedures of the Study