
search is necessary to give direction to the learning process. In an individual-based model, as opposed to one that uses populations, variations will have increasingly negative effects as performance increases, especially so for a randomly constructed task such as the one used here. Some memory mechanism is necessary that allows the system to return to a previous state if the variation has destructive effects. The fitness function would not otherwise provide direction by selecting better variations; it would only control the extent to which the microtubules and MAP distribution are shaken.

4. Learning results

The signal processing capabilities of the model were tested using three randomly generated training sets. Each training set consisted of binary input patterns of fixed size: 4-bit, 5-bit, and 6-bit. All possible combinations of input patterns are represented in each training set; thus the 6-bit set contained all 64 (2^6) patterns. A single output bit was randomly assigned to each input pattern, resulting in an approximately 50:50 split of the training set between the two subsets. The purpose was to test the model's capacity to learn associations under the most difficult conditions, namely the absence of any structure in the training set. The issue of generalization has not at this point been addressed; training sets without any structure (i.e. without any similarity among input patterns that should be classified the same way) actually afford no possibility for sensible generalization (Ugur and Conrad, 1997).

Each of the three training sets was tested by varying the hydrolysis coefficient to give different microtubule densities and run with 20 different random seed values, producing 60 program runs per training set. The program runs were broken into groups of 20, with the hydrolysis coefficient held constant within each group. The network was required to respond within 7 time steps on the signal processing time scale, which was sufficiently long for signals to propagate through the network.

Fig. 6 shows the average performance for a single group of runs for each training set over 1000 growth cycles, using the group whose hydrolysis coefficients yield the best average performance. The 4-bit group reached the 76% learning level; the 5- and 6-bit groups hit averages of 70% and 63%, respectively. Within each group there are individuals that achieved a reasonably good performance (see Fig. 7). The best individual in the 4-bit training group reached 94% at the time of termination.
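The construction of such an unstructured training set can be sketched as follows (a minimal illustration only; the function and parameter names are assumptions, not the authors' code):

```python
import itertools
import random

def make_training_set(n_bits, seed=0):
    """Enumerate all 2^n binary input patterns and assign each a random
    target output bit, yielding an association task with no structure
    (no similarity among patterns that share the same output)."""
    rng = random.Random(seed)
    patterns = list(itertools.product([0, 1], repeat=n_bits))
    # Random targets split the set approximately 50:50 between the
    # two output classes, leaving nothing to generalize from.
    return {p: rng.randint(0, 1) for p in patterns}

training_set = make_training_set(6)
# The 6-bit set contains all 64 input patterns.
```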
The 5-bit training group included 11 individuals out of 20 that exceeded 70%, with the best reaching 78%. The 6-bit group included only one individual that reached the 70% level. As expected, the performance decreased as the size of the training patterns increased, corresponding to the combinatorial increase in the difficulty of pattern recognition problems. This is reflected in the fact that the 4- and 5-bit cases were frozen at the 65% level of learning, whereas the 6-bit case was frozen at the 60% level in the data reported. The chance of randomly reaching a pre-specified freezing level decreases as the problem size increases, due to the fact that the number of patterns that must be recognized correctly increases. The important point is that most of the curves show a general increasing trend, indicating that learning is occurring.

Fine grain variations can, however, produce erratic effects. The two 6-bit examples illustrated in Fig. 8 are indicative. Both come from separate runs, not included in the series of 20 runs described previously. Freezing was set at the 65% rather than the 60% level; neither example reached the freezing level. The binding affinity curve used to generate Fig. 8a had an increasing slope, as in the runs previously described, whereas the binding affinity curve used to generate Fig. 8b had a decreasing slope. At a number of points the learning level jumps up, lasts for a while, is lost, and then jumps back to near its former level. Presumably this is due to movement of a critical MAP, such as a linker, that was quickly replaced. This feature was much more prominent in Fig. 8a than in Fig. 8b, corresponding to the fact that the jumpiness is damped out by the decreasing slope of the binding affinity. But this is at the expense of restricting the possibility of exploring the solution space.
If the binding affinity has an increasing slope, as in the runs reported, the jumpiness increases the chance of finding better solutions, but at the expense of a greater chance of losing these solutions during the periods of exploration. However, once freezing occurs and gradient search is initiated, only improvements are retained. This also, of course, restricts the exploratory search.

The results reported in this section give a general impression of the behaviors observed. The system as currently constituted exhibits learning despite the crudeness of the learning algorithm. Using different binding affinity curves for the different MAP types, depending on how critical they are for the integration of signals in space and time, should be an important refinement. The microtubule structure provides the main data pathways; if learning nears stagnation at an inadequate level, these should be allowed to depolymerize and regrow into a new structure. This was not allowed in the present implementation.
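The post-freezing phase, in which a variation is retained only if it improves performance and is otherwise reverted, amounts to a hill-climbing acceptance rule. A minimal sketch, under the assumption of generic `score` and `perturb` functions (not the authors' implementation):

```python
import random

def search_after_freezing(score, perturb, state, steps=1000, seed=0):
    """Retain a variation only if it improves the score; otherwise
    revert to the previous state. This models the gradient-search
    phase that begins once the freezing level is reached."""
    rng = random.Random(seed)
    best, best_score = state, score(state)
    for _ in range(steps):
        candidate = perturb(best, rng)
        s = score(candidate)
        if s > best_score:  # retain only improvements
            best, best_score = candidate, s
    return best, best_score

# Toy illustration: climb toward the value 10 on the integer line.
best, best_score = search_after_freezing(
    score=lambda x: -abs(x - 10),
    perturb=lambda x, rng: x + rng.choice([-1, 1]),
    state=0,
)
```

Because every non-improving variation is discarded, the search converges monotonically; this is exactly why, as noted above, the exploratory search is restricted once freezing occurs.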

5. Conclusion