CHAPTER IV RESEARCH FINDINGS AND DISCUSSION

This chapter presents the results of the data analysis and the interpretation of each analysis to answer the two research questions. It covers two main discussions: the vocabulary coverage of When English Rings the Bell, which answers the first research question, and how the words in the textbook are recycled, which answers the second research question.

A. VOCABULARY COVERAGE OF WHEN ENGLISH RINGS THE BELL

Knowledge of the vocabulary coverage of a textbook is important for a teacher, as it gives information about the suitability of the vocabulary level for a specific group of learners and about the vocabulary load. To identify the vocabulary coverage of a textbook, it is necessary to find out the number of its tokens, types, and word families. In this research, a program called RANGE (Heatley, Nation, & Coxhead, n.d.) is used as the data analysis instrument to count the number of tokens, types, and word families in a Junior High School (JHS) textbook entitled When English Rings the Bell.

1. Vocabulary Coverage of the Textbook

The data shown below are the results of running RANGE over all parts of the textbook corpus, namely Chapters I-VIII, Classroom Language for Students, and the Glosarium (glossary). Based on the data analysis, the vocabulary coverage of the textbook is as follows.

Table 4.1 Vocabulary Coverage of When English Rings the Bell (All Parts)

Word Lists        Token            Type            Word Family
GSL_1             5,324 (85.69%)     642 (61.20%)  443
GSL_2               405  (6.52%)     171 (16.30%)  147
AWL                 111  (1.79%)      48  (4.58%)   47
Not in the list     373  (6.00%)     188 (17.92%)  NA
Total             6,213            1,049           637

The summary data show the vocabulary coverage of the corpus by the words in three word lists, namely the first 1,000 most frequent words of English from A General Service List of English Words (GSL_1), the second 1,000 most frequent words of English from A General Service List of English Words (GSL_2), and the Academic Word List (AWL), as well as by the words not in any of those three lists. For example, the corpus contains 6,213 tokens, consisting of 1,049 types. The second row shows that 5,324 tokens are found in the list of the first 1,000 words from A General Service List of English Words, making up 85.69% of all tokens in the corpus. The 642 types found in that list make up 61.20% of all types in the corpus, and 443 word families are represented.

According to Alberding (2006: 714), the distribution of types and tokens across the word lists and the words not in the lists can determine the suitability of a textbook for a particular group of learners. In the corpus, 92.21% of the tokens are covered by the first 2,000 word types of English, which is most frequently referred to as the basic initial vocabulary goal for second language learners (Schmitt, 2000b). This percentage confirms that the textbook provides good opportunities for students to learn and deepen their knowledge so that they can achieve that goal later. At the starting level of learning English, 1,049 types and 637 word families are not too demanding, especially since most of them (642 types and 443 word families) are found in GSL_1. Besides, Sánchez and Criado (2009) state that only the first 1,000 most frequent types need to be recognized by beginning-level students. This means that the textbook meets the criterion for the amount of vocabulary expected of students at the elementary or beginning level, as it is supposed to. Thus, it can be inferred that the textbook is appropriately aimed at students at the beginning level of learning English. This finding is in line with what the government has set in Curriculum 2013, namely that English is not a compulsory subject in elementary school; as a result, English is first introduced formally in junior high school as a compulsory subject. This means that the contents are accessible enough to students whose vocabulary knowledge is within that range. Moreover, on average, the number of tokens (6,213) is roughly ten times the number of word families (637); a sketch of how these figures are computed is given below.
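To make the figures in Table 4.1 concrete, the following is a minimal Python sketch of the kind of token and type counting and coverage computation that a tool like RANGE performs. It is not the RANGE program itself; the tiny sample corpus and word lists are invented purely for illustration, and counting word families would additionally require headword (base-word) lists, which are omitted here.

import re
from collections import Counter

def tokenize(text):
    # Split running text into lower-case word tokens.
    return re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())

def coverage(corpus_text, word_lists):
    # Report token and type counts and percentage coverage for each word list.
    tokens = tokenize(corpus_text)
    type_freq = Counter(tokens)          # type -> how many tokens it accounts for
    total_tokens, total_types = len(tokens), len(type_freq)
    print(f"Total: {total_tokens} tokens, {total_types} types")
    for name, listed in word_lists.items():
        hit_types = [t for t in type_freq if t in listed]
        hit_tokens = sum(type_freq[t] for t in hit_types)
        print(f"{name}: {hit_tokens} tokens ({100 * hit_tokens / total_tokens:.2f}%), "
              f"{len(hit_types)} types ({100 * len(hit_types) / total_types:.2f}%)")

# Invented miniature data; in the study the corpus is the whole textbook
# and the word lists are GSL_1, GSL_2, and the AWL.
sample_corpus = "Good morning. How are you? I am fine, thank you very much."
sample_lists = {
    "GSL_1": {"good", "morning", "how", "are", "you", "i", "am", "thank", "very", "much"},
    "GSL_2": {"fine"},
}
coverage(sample_corpus, sample_lists)

Applied to the actual figures in Table 4.1, the same arithmetic yields the coverage values discussed in this section: GSL_1 and GSL_2 together cover 85.69% + 6.52% = 92.21% of the tokens, adding the AWL raises this to 94.00%, and the token-to-word-family ratio is 6,213 / 637, which is approximately ten occurrences per word family.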
This finding is in line with what Coady and Nation (1988) say, that ten occurrences of a type are the ideal number of repetitions to make an effect on learners. This makes the textbook suitable for low-level students who have not had any experience of learning English.

The result also shows that the textbook provides only a small learning opportunity for students to learn new vocabulary outside the first and second 1,000 most frequent words. The opportunity to learn new vocabulary comes from the Academic Word List and the words not in any of the lists, which together cover only 7.79% of the tokens, 22.50% of the types, and 47 word families in the textbook. Having vocabulary knowledge of the 2,000 most frequent types will help learners recognize 84% of the tokens in various authentic texts, after which they can move on to special-purposes vocabulary (Hwang and Nation, 1995). So, after completing the textbook, students are expected to understand and use general English.

On the other hand, for unassisted reading to be successful, Laufer (1992) suggests that 95% of the word tokens in a text should be recognized. To help students understand the textbook better through unassisted reading, which requires the recognition of 95% of its tokens, teachers need to pre-teach the tokens listed in the Academic Word List, assuming that students already recognize the tokens in GSL_1 and GSL_2. This amounts to 111 tokens in total, which is not very demanding considering that these tokens comprise 48 types and 47 word families spread throughout the parts of the textbook. Recognizing all the tokens in GSL_1, GSL_2, and the AWL brings students' text coverage up to 94%, which is close to the 95% threshold. To fill the gap, a small number of the tokens outside GSL_1, GSL_2, and the AWL (those in the 'not in the list' category) should also be pre-taught. However, this is not a big problem considering that the textbook is targeted at JHS students, who are English learners at the beginning level. To reach the appropriate level, the textbook would need to be supplemented by other books that help students acquire a larger vocabulary.

In terms of incidental vocabulary learning, this textbook does not meet the criteria proposed by Nation and Meara (2002) for incidental learning to occur. First, incidental learning requires students to know 98% of the tokens in the textbook (Hu and Nation, 2000); assuming that students recognize only the tokens in GSL_1 and GSL_2, which cover just 92.21% of the tokens, the textbook does not support incidental learning. Second, students need to receive a large amount of input, at least one million tokens or more per year; unfortunately, the textbook offers only 6,213 tokens for a year-long course. Third, students need to learn the unknown words in the textbook deliberately to reinforce the learning; however, the textbook does not provide any vocabulary exercises for deliberate learning. Although the textbook itself does not provide opportunities for incidental learning, students can still learn its vocabulary incidentally if they receive more input from other sources and have opportunities to learn the vocabulary deliberately. By doing so, their token coverage is expected to increase to 98%, enabling incidental learning.

2. Vocabulary Coverage of Chapter I

The data shown below are the results of running RANGE over Chapter I of the textbook corpus. Based on the data analysis, the vocabulary coverage of Chapter I of the textbook is as follows.

Table 4.2 Vocabulary Coverage of Chapter 1

Chapter I Word Lists