are held constant at the base values, and eqn 18 is reduced to

C_p = C_p(F, t, T)    (19a)

where

F = {α, K_s, q}    (19b)

T = {q, ε, λ_1, k_1, n}    (19c)

where F = subset of flow parameters and T = subset of
transport parameters. It should be noted that when eqn 18 is simplified to eqn 19, a number of flow and transport parameters, L, b, θ_s, θ_r, ρ_b, λ_2, C, and t, are kept constant. This simplification makes it easier to study the applicability of ANN while focusing on the most significant flow and transport parameters, such as α or K_s. It also helps in drawing conclusive remarks on the overall applicability by considering the effects of the few most significant parameters instead of all parameters. In the next section, the applicability of ANN in simulating eqn 19 is assessed.
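The reduction of eqn 18 to eqn 19 amounts to fixing the remaining parameters at their base values and letting only F, t, and T vary. A minimal sketch of this idea in Python, with hypothetical function and parameter names (the base values themselves are not given in this excerpt):

```python
def make_reduced_model(full_model, fixed):
    """Reduce a full model C_p(all parameters) (eqn 18) to the reduced
    form C_p(F, t, T) (eqn 19) by holding the remaining parameters
    fixed at their base values."""
    def reduced_model(varying, t):
        params = dict(fixed)    # parameters held constant at base values
        params.update(varying)  # the F and T subsets, allowed to vary
        return full_model(t=t, **params)
    return reduced_model
```

For example, with a toy full_model(t, a, b) = a + b·t and fixed = {"b": 2.0}, the reduced model depends on a and t alone.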
5.3 Training and testing subset
The flow and transport parameters are allowed to vary over the domain given by α ∈ [0.025, 0.100]; K_s ∈ [15, 60]; q ∈ [5, 20]; ε ∈ [0.8, 3.2]; λ_1 ∈ [0.0005, 0.0020]; k_1 ∈ [0.15, 0.50]; and n ∈ [0.5, 1.0], and the training and testing subset, S, is sampled from this domain. A subset of 100 realizations of x is sampled at random; C_p for each realization is determined using HYDRUS, and the 100 patterns are placed in the training and testing subset, S.
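The sampling step above can be sketched as follows, assuming uniform sampling over the stated ranges (the excerpt does not state the sampling distribution). The parameter names are transliterations of the paper's symbols, and the HYDRUS run is replaced by a placeholder simulate function:

```python
import random

# Parameter domain from Section 5.3: parameter name -> (lower, upper) bound.
DOMAIN = {
    "alpha":    (0.025, 0.100),
    "K_s":      (15.0, 60.0),
    "q":        (5.0, 20.0),
    "epsilon":  (0.8, 3.2),
    "lambda_1": (0.0005, 0.0020),
    "k_1":      (0.15, 0.50),
    "n":        (0.5, 1.0),
}

def sample_realization(rng):
    """Draw one realization x uniformly at random from the domain."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in DOMAIN.items()}

def build_subset(n_patterns, seed, simulate):
    """Build the training and testing subset S of n_patterns patterns.

    `simulate` stands in for a HYDRUS run mapping a realization x
    to the peak concentration C_p.
    """
    rng = random.Random(seed)
    S = []
    for _ in range(n_patterns):
        x = sample_realization(rng)
        S.append((x, simulate(x)))
    return S
```

With n_patterns = 100, this yields the 100 (x, C_p) patterns placed in S.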
5.4 Allocation method
In allocating S to a training subset, S_1, and a testing subset, S_2, a simple allocation method is applied pattern-by-pattern on S. This method is based on a user-defined expected fraction, f̃, where f̃ ∈ (0, 1). In allocating a pattern to S_1 or S_2, a random number r ∈ [0, 1] is generated. If r ≤ f̃, the pattern is allocated to S_1; otherwise, it is allocated to S_2. The sequence of r is simulated using a random number generator initiated by a seed, ρ, and the allocation method becomes a function of f̃ and ρ. For a given S, the allocation method is expected to allocate f̃ and 1 − f̃ fractions of S to S_1 and S_2, respectively, and this approach can generate S_1 and S_2 of different or similar sizes for different values of f̃ and ρ.
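The allocation method above can be sketched as follows; allocate is a hypothetical name, and the seed argument plays the role of the seed used by the random number generator:

```python
import random

def allocate(S, f_tilde, seed):
    """Allocate S pattern-by-pattern into a training subset S1 and a
    testing subset S2: for each pattern, draw r in [0, 1); if r <= f_tilde,
    the pattern goes to S1, otherwise to S2."""
    rng = random.Random(seed)  # seed corresponds to the paper's generator seed
    S1, S2 = [], []
    for pattern in S:
        if rng.random() <= f_tilde:
            S1.append(pattern)
        else:
            S2.append(pattern)
    return S1, S2
```

Because the split is probabilistic, the realized sizes of S1 and S2 only approximate the f̃ and 1 − f̃ fractions, and they change with both the fraction and the seed, as the text notes.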
5.5 Artificial neural network development
In this manuscript, ANNs are developed using NeuralWorks Professional II/Plus, a commercially available software package.¹² The internal parameters of the ANN include the initial weight distribution, the transfer function, input–output scaling, the training rate μ, the momentum factor ψ, the training rule, and the number of weight updates M̄.
ANN training is performed using the default values in the package, except the seed, ρ. A common ρ is used for the allocation method and training for consistency. As such, the initial weights are randomly distributed over [−0.1, +0.1]. The transfer function used is sgm(·) or tanh(·). The input–output scaling is performed using the minimum and maximum values of each input and output component contained in S_1; the input components are linearly scaled over [−1, +1], and the output components are linearly scaled over [+0.2, +0.8] or [−0.8, +0.8], depending on the use of sgm(·) or tanh(·), respectively.¹²
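The scaling step can be sketched with the usual min–max linear map (linear_scale is a hypothetical helper, not part of the NeuralWorks package):

```python
def linear_scale(values, lo_new, hi_new):
    """Linearly scale a sequence of values onto [lo_new, hi_new], using the
    minimum and maximum of the data as the old bounds, mirroring the use of
    the min/max of each component contained in S_1."""
    lo_old, hi_old = min(values), max(values)
    span = hi_old - lo_old
    return [lo_new + (v - lo_old) * (hi_new - lo_new) / span for v in values]

# Input components: scaled over [-1, +1].
# Output components: [+0.2, +0.8] for sgm(.), or [-0.8, +0.8] for tanh(.).
```

Note that new inputs falling outside the min/max of S_1 would map outside the target interval; the sketch assumes scaling bounds derived from the training subset only, as the text describes.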
The default values of μ and ψ are shown in Table 1. The generalized delta training rule for BPA and M̄ = 50 000 are used. As such, the performance of BPA may be expressed as

η = η(ρ, f̃, J)    (20)

where η = performance of BPA.
5.6 Performance criteria