Prosiding INAFOR III 2015

Bogor, 21-22 October 2015

… covers (Francois, M.J., & Ramires, I., 1996). A classification technique that combines spectral and thematic information, such as slope, elevation, and vegetation indices, is expected to improve the accuracy of classification results (Danoedoro, 2003). The objective of the study is to classify the land cover in the study area based on three classification techniques (unsupervised, supervised maximum likelihood, and rule-based) that utilize multi-source data, both spectral and spatial/thematic, using Landsat 8 OLI satellite imagery.

2. METHOD

2.1 Location

The research was conducted in the Citanduy Watershed, one of the critical watersheds in Indonesia with respect to hydrologic condition and erosion rates (Dwiprabowo, Basuki, Purnomo, & Haryono, 2001; Dwiprabowo & Wulan, 2003). The Citanduy Watershed is geographically located between 108°04′–109°30′ East longitude and 7°03′–7°52′ South latitude (Adly, 2009). Administratively, the watershed lies in two provinces, namely West Java and Central Java; most of the area is located in the districts of Tasikmalaya, Ciamis, Banjar, and Cilacap, while a small part of the watershed lies in Majalengka District and Kuningan District (Prasetyo, 2004).

2.2 Materials and Tools

The materials used in this study are digital satellite images and thematic maps. The satellite images include Landsat 8 OLI (Operational Land Imager) imagery recorded in May 2013 and the ASTER-DEM (Advanced Spaceborne Thermal Emission and Reflection Radiometer Digital Elevation Model), both with a spatial resolution of 30 m. The digital thematic maps used are forest area function, watershed boundary, and previous land use/cover. The derived spectral data are the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and slope. NDVI and NDWI are derived from the Landsat 8 OLI image, while slope is generated from the ASTER-DEM. The tools used in this study are (1) hardware (laptop) and software (Erdas Imagine 9.1 and ArcGIS 9.3) for digital image processing and GIS analysis, and (2) field survey equipment consisting of a GPS receiver, notebook, and ballpoint.

2.3 Data Analysis

Reference points comprising 384 training and 244 testing samples, which were used for image classification and accuracy assessment, were obtained from field survey and high-resolution satellite imagery. The performance of the three classification techniques was compared based on accuracy assessment, i.e. the error matrix (confusion matrix) and the Kappa coefficient of agreement (Congalton, 1991; Foody, 2002).
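The derivation of the spectral indices described in the Materials and Tools section can be sketched as follows. This is a minimal NumPy illustration, not part of the original study; the toy reflectance values are hypothetical, and the NDWI form shown is the McFeeters (green/NIR) variant, which is assumed here since the paper does not specify which formulation was used.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red).
    For Landsat 8 OLI, NIR is band 5 and Red is band 4."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """NDWI (McFeeters variant, assumed) = (Green - NIR) / (Green + NIR).
    For Landsat 8 OLI, Green is band 3."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir)

# Hypothetical 2x2 surface-reflectance arrays standing in for OLI bands 3, 4, 5
green = np.array([[0.10, 0.08], [0.12, 0.09]])
red   = np.array([[0.05, 0.30], [0.06, 0.25]])
nir   = np.array([[0.45, 0.35], [0.50, 0.30]])

print(ndvi(nir, red))   # high values indicate dense vegetation
print(ndwi(green, nir)) # negative over vegetation, positive over open water
```

Both indices range from −1 to +1, which is why they can be combined with slope and elevation as thematic layers in the rule-based classification.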
The error matrix is a contingency table in which the diagonal entries represent correct classification (agreement between the map and reference data) and the off-diagonal entries represent misclassification (Stehman & Czaplewski, 1998). Moreover, it can be used to calculate producer's accuracy, user's accuracy, and overall accuracy (Congalton, 1991; Foody, 2002; Wynne, Joseph, Browder, & Summers, 2007). The producer's accuracy indicates the probability that reference data are correctly classified or mapped, and measures the omission error (1 − producer's accuracy) (Wynne et al., 2007). On the other hand, the user's accuracy indicates the probability that sample data from the classified image match those on the ground, and measures the commission error (1 − user's accuracy). The overall accuracy is calculated by dividing the total correct, i.e. the sum of the major diagonal, by the total number of pixels in the error matrix (Congalton, 1991). The Kappa coefficient of agreement is an index which indicates to what extent classification accuracy is due to true agreement between the classified image and reference data, and to what extent it could have been achieved by chance (Lillesand & Kiefer, 2000; Salovaara, Thessler, Malik, & Tuomisto, 2005). In addition, to estimate Kappa from sample data, the Kappa statistic (K̂) is used, and its value can be interpreted based on Table 1 (Landis & Koch, 1977). It is calculated using Equation 1 (Salovaara et al., 2005):

\hat{K} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} x_{i+} x_{+i}}{N^2 - \sum_{i=1}^{r} x_{i+} x_{+i}}    (1)

where r is the number of rows/columns in the error matrix, x_ii is the number of observations in cell (i, i) (row i and column i), x_i+ is the marginal total of row i, x_+i is the marginal total of column i, and N is the total number of observations.
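The accuracy measures above can be computed directly from an error matrix. The following sketch (not from the paper) implements producer's accuracy, user's accuracy, overall accuracy, and the K̂ of Equation 1 with NumPy; the 3-class matrix used at the end is hypothetical, and the convention assumed is rows = classified map, columns = reference data.

```python
import numpy as np

def accuracy_metrics(cm):
    """Producer's, user's, overall accuracy and Kappa statistic (K-hat)
    from a square error matrix; rows = classified, columns = reference."""
    cm = np.asarray(cm, dtype=np.float64)
    N = cm.sum()                     # total number of observations
    diag = np.diag(cm)               # x_ii: correctly classified counts
    row_tot = cm.sum(axis=1)         # x_i+: marginal totals of rows
    col_tot = cm.sum(axis=0)         # x_+i: marginal totals of columns

    producers = diag / col_tot       # omission error = 1 - producer's accuracy
    users = diag / row_tot           # commission error = 1 - user's accuracy
    overall = diag.sum() / N         # sum of major diagonal / total pixels

    # Equation 1: K-hat = (N * sum(x_ii) - sum(x_i+ * x_+i)) /
    #                     (N^2 - sum(x_i+ * x_+i))
    chance = (row_tot * col_tot).sum()
    kappa = (N * diag.sum() - chance) / (N**2 - chance)
    return producers, users, overall, kappa

# Hypothetical 3-class error matrix for illustration
cm = [[50,  3,  2],
      [ 4, 45,  6],
      [ 1,  2, 40]]
prod, user, oa, k = accuracy_metrics(cm)
print(f"overall = {oa:.3f}, kappa = {k:.3f}")
```

Note that K̂ is always lower than the overall accuracy for the same matrix, because the chance-agreement term discounts correct classifications that could have occurred randomly.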
Table 1: Interpretation of Kappa statistics (Landis & Koch, 1977)

  Kappa         Agreement
  < 0.00        Less than chance agreement (poor agreement)
  0.00 – 0.20   Slight agreement
  0.21 – 0.40   Fair agreement
  0.41 – 0.60   Moderate agreement
  0.61 – 0.80   Substantial agreement
  0.81 – 1.00   Almost perfect agreement

3. RESULT AND DISCUSSION

3.1 Unsupervised classification-ISODATA