
are held constant at the base values, and eqn 18 is reduced to

$$C_p = C_p(F, t, T) \qquad (19a)$$

where

$$F = \{\alpha, K_s, q\} \qquad (19b)$$

$$T = \{q, \varepsilon, \lambda_1, k_1, n\} \qquad (19c)$$

where F = subset of flow parameters and T = subset of transport parameters. It should be noted that when eqn 18 is simplified to eqn 19, a number of flow and transport parameters, L, b, θ_s, θ_r, ρ_b, λ_2, C, and t, are kept constant. This simplification focuses the study of the applicability of ANN on the most significant flow and transport parameters, such as α and K_s, and it allows conclusive remarks on the overall applicability to be made from the effects of the few most significant parameters instead of all parameters. In the next section, the applicability of ANN in simulating eqn 19 is assessed.

5.3 Training and testing subset

The flow and transport parameters are allowed to vary over the domain given by α ∈ [0.025, 0.100]; K_s ∈ [15, 60]; q ∈ [5, 20]; ε ∈ [0.8, 3.2]; λ_1 ∈ [0.0005, 0.0020]; k_1 ∈ [0.15, 0.50]; and n ∈ [0.5, 1.0], and the training and testing subset, S, is sampled from this domain. One hundred realizations of x are sampled at random; C for each realization is determined using HYDRUS, and the resulting 100 patterns are placed in the training and testing subset, S.
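A minimal sketch of this sampling step is given below. The parameter ranges are those listed above; the uniform sampling distribution, the seed value, and the names `sample_realization` and `run_hydrus` are assumptions, since the text does not specify them, and `run_hydrus` is only a stub standing in for the external simulator.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seed value is an assumption

# Parameter domain from Section 5.3: (low, high) for each component of x
DOMAIN = {
    "alpha":    (0.025, 0.100),
    "K_s":      (15.0, 60.0),
    "q":        (5.0, 20.0),
    "epsilon":  (0.8, 3.2),
    "lambda_1": (0.0005, 0.0020),
    "k_1":      (0.15, 0.50),
    "n":        (0.5, 1.0),
}

def sample_realization():
    """Draw one realization of x from the parameter domain (uniform assumed)."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in DOMAIN.items()}

def run_hydrus(x):
    """Stub for the HYDRUS simulation; the paper runs the real simulator here."""
    return 0.0  # replace with the simulated concentration C for realization x

# Build the training and testing subset S of 100 (x, C) patterns
S = []
for _ in range(100):
    x = sample_realization()
    S.append((x, run_hydrus(x)))
```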

5.4 Allocation method

In allocating S to a training subset, S_1, and a testing subset, S_2, a simple allocation method is applied pattern-by-pattern to S. The method is based on a user-defined expected fraction, f̃, where f̃ ∈ (0, 1). In allocating a pattern to S_1 or S_2, a random number r ∈ [0, 1] is generated. If r ≤ f̃, the pattern is allocated to S_1; otherwise, it is allocated to S_2. The sequence of r is simulated using a random number generator initiated by a seed, ρ, so the allocation method becomes a function of f̃ and ρ. For a given S, the allocation method is expected to allocate fractions f̃ and 1 − f̃ of S to S_1 and S_2, respectively, and it generates S_1 and S_2 of different or similar sizes for different values of f̃ and ρ.
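A minimal sketch of this allocation method follows. The function name `allocate` is mine, and the ≤ comparison reflects the reading of the garbled source above; the paper may use a strict inequality.

```python
import random

def allocate(S, f_expected, seed):
    """Split S into a training subset S1 and a testing subset S2.

    Each pattern is assigned independently: draw r in [0, 1) and send the
    pattern to S1 when r <= f_expected, to S2 otherwise, so on average
    fractions f_expected and 1 - f_expected of S land in S1 and S2.
    """
    rng = random.Random(seed)  # seed rho makes the split reproducible
    S1, S2 = [], []
    for pattern in S:
        if rng.random() <= f_expected:
            S1.append(pattern)
        else:
            S2.append(pattern)
    return S1, S2

# Example: expect roughly 70 of 100 patterns in S1
S = list(range(100))          # stand-in for the 100 (x, C) patterns
S1, S2 = allocate(S, f_expected=0.7, seed=1)
print(len(S1), len(S2))
```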

5.5 Artificial neural network development

In this manuscript, ANNs are developed using NeuralWorks Professional II/Plus, a commercially available software package [12]. The internal parameters of the ANN include the initial weight distribution, the transfer function, the input–output scaling, the training rate μ, the momentum factor ψ, the training rule, and the number of weight updates M̄. ANN training is performed using the default values in the package, except for the seed, ρ; a common ρ is used for the allocation method and for training, for consistency. As such, the initial weights are randomly distributed over [−0.1, +0.1]. The transfer function used is sgm(·) or tanh(·). The input–output scaling is performed using the minimum and maximum values of each input and output component contained in S_1; the input components are linearly scaled over [−1, +1], and the output components are linearly scaled over [+0.2, +0.8] or [−0.8, +0.8] for sgm(·) or tanh(·), respectively [12]. The default values of μ and ψ are shown in Table 1. The generalized delta training rule for BPA and M̄ = 50 000 are used. As such, the performance of BPA may be expressed as

$$\eta = \eta(\rho, \tilde{f}, J) \qquad (20)$$

where η = performance of BPA.

Table 1. Default values for training rate μ and momentum factor ψ

Layer    Parameter             Weight updates
                               0–10 000    10 001–30 000    30 001–50 000
Hidden   Training rate, μ      0.30        0.150            0.03750
         Momentum factor, ψ    0.40        0.200            0.05000
Output   Training rate, μ      0.15        0.075            0.01875
         Momentum factor, ψ    0.40        0.200            0.05000
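A minimal sketch of the linear input–output scaling described above, assuming NumPy; the helper names `fit_minmax` and `scale` are mine, not the package's, and the toy arrays are illustrative only. The bounds must come from S_1 alone, as the text states.

```python
import numpy as np

def fit_minmax(X):
    """Column-wise minima and maxima, taken from the training subset S1 only."""
    return X.min(axis=0), X.max(axis=0)

def scale(X, lo_hi, target):
    """Linearly map each column from [min, max] onto the target interval."""
    lo, hi = lo_hi
    a, b = target
    return a + (X - lo) * (b - a) / (hi - lo)

# Inputs are scaled over [-1, +1]; outputs over [+0.2, +0.8] for sgm(.)
# or [-0.8, +0.8] for tanh(.), per Section 5.5.
X1 = np.array([[0.03, 20.0], [0.09, 55.0], [0.05, 30.0]])  # toy S1 inputs
X1_scaled = scale(X1, fit_minmax(X1), target=(-1.0, +1.0))

y1 = np.array([[0.12], [0.48], [0.30]])                    # toy S1 outputs
y1_scaled = scale(y1, fit_minmax(y1), target=(0.2, 0.8))   # sgm(.) case
```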

5.6 Performance criteria

ANN is trained to approximate a vector function, G(·), and the performance of ANN in approximating each ith component, G_i(·), during the training and testing phases needs to be assessed. In the ANN training phase, the objective is to match the desired response, d_i, with the ANN response, y_i, at each ith output neuron for all the patterns in S_1. As such, the performance of training in approximating G_i(·) is assessed using the correlation coefficient defined as

$$R_i = \frac{\sigma_{d_i y_i}}{\sigma_{d_i}\,\sigma_{y_i}}, \quad \sigma_{d_i} > 0,\; \sigma_{y_i} > 0 \qquad (21a)$$

$$\sigma_{d_i + y_i}^2 = \sigma_{d_i}^2 + 2\sigma_{d_i y_i} + \sigma_{y_i}^2 \qquad (21b)$$

where R_i = correlation coefficient between d_i and y_i; σ_{d_i} = standard deviation of d_i; σ_{y_i} = standard deviation of y_i; and σ_{d_i+y_i} = standard deviation of d_i + y_i. The covariance σ_{d_i y_i} in eqn 21a is obtained by rearranging eqn 21b. Finally, an average performance of the training phase in approximating G(·) is assessed using the average correlation coefficient, R, defined as

$$R = \frac{1}{K}\sum_{i=1}^{K} R_i \qquad (22)$$

In addition, the performance of training is assessed using scatter plots of y_i versus d_i, and the scatter of (d_i, y_i) about the 45° line is assessed using two error bounds defined as

$$y_a = (1 - e)\,d \qquad (23a)$$

$$y_b = (1 + e)\,d \qquad (23b)$$

where e = specified error expressed as a decimal fraction; y_a = lower error bound corresponding to e; and y_b = upper error bound corresponding to e. A scatter plot helps to assess ANN performance more effectively than R alone. In the ANN testing phase, the objective is to match d_i with y_i at each ith output neuron for all the patterns in S_2. As such, the performance of testing in predicting G(·) is assessed by repeating the above procedure for S_2.

6 ARTIFICIAL NEURAL NETWORK ASSESSMENT

Several example scenarios related to the GFCT model are solved to assess the ANN applicability. In general, the accuracy of ANN training is observed to be approximately 100%, and the generalization of ANN testing is observed to limit the applicability of ANN. In the next section, the applicability of ANN with respect to ANN testing only is discussed.
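Before turning to the examples, here is a minimal sketch of the Section 5.6 performance criteria, assuming NumPy; the function names and toy data are mine. It recovers the covariance from the variance-of-sum identity of eqn 21b, as the paper's definition list implies, rather than calling a library covariance routine.

```python
import numpy as np

def correlation(d, y):
    """R_i per eqns 21a-21b: covariance recovered from the variance-of-sum
    identity, then normalized by the two standard deviations."""
    sd, sy = np.std(d), np.std(y)
    cov = (np.var(d + y) - sd**2 - sy**2) / 2.0   # eqn 21b, rearranged
    return cov / (sd * sy)                        # eqn 21a

def average_correlation(D, Y):
    """R per eqn 22: mean of R_i over the K output components (columns)."""
    K = D.shape[1]
    return np.mean([correlation(D[:, i], Y[:, i]) for i in range(K)])

def error_bounds(d, e):
    """Bounds y_a, y_b about the 45-degree line, eqns 23a-23b."""
    return (1.0 - e) * d, (1.0 + e) * d

# Toy check: K = 2 output components, 5 patterns
D = np.array([[0.1, 1.0], [0.2, 0.9], [0.3, 0.8], [0.4, 0.7], [0.5, 0.6]])
Y = D + 0.01          # pretend ANN responses, shifted slightly
print(average_correlation(D, Y))   # 1.0 for a perfectly correlated fit
```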

6.1 Example 1: simulation of the BTC with linear adsorption