We are concerned with developmental progressions in learning: in early training, simplified data sets are presented, and the data items become more complex as training progresses. We argue that the important question is not always whether to include a variable in a model; rather, researchers should also consider whether a variable should be included at some stages of training but not at others. Our results demonstrate the usefulness of this perspective.
This talk considers the hypothesis that systems learning aspects of visual perception may benefit from suitably designed developmental progressions during training. We compare the results of simulations in which several artificial neural network models were trained to detect binocular disparities in pairs of visual images. All of the models ultimately had access to the same feature set, but in the experimental models some features were withheld during early training; the control model had the entire feature set available at all stages of training. Two of the experimental models were developmental in the sense that the nature of their training input changed in an orderly fashion over the course of training (either a coarse-scale-to-multiscale progression or a fine-scale-to-multiscale progression). In the other two experimental models, the feature set expanded at the same rate as in the developmental models, but the features available at any given point in time were chosen randomly. The simulation results show that the two developmental models consistently outperformed the non-developmental models. We conclude that choosing when to include a feature during training can be as important as choosing whether to include it at all.
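The contrast between an orderly developmental schedule and a randomly expanding feature set can be sketched as follows. This is a minimal illustration, not the actual simulation code: the scale names, the number of stages, and the linear unlocking rate are all assumptions introduced for the example.

```python
import random

def developmental_schedule(scales, stage, n_stages):
    """Coarse-to-multiscale progression: unlock scales in order,
    from coarsest to finest, as training stages advance.

    `scales` is assumed to be ordered from coarse to fine; by the
    final stage the full (multiscale) feature set is available.
    """
    n_visible = max(1, round(len(scales) * (stage + 1) / n_stages))
    return scales[:n_visible]

def random_schedule(scales, stage, n_stages, rng):
    """Control condition: the feature set expands at the same rate,
    but the features available at each stage are chosen at random."""
    n_visible = max(1, round(len(scales) * (stage + 1) / n_stages))
    return rng.sample(scales, n_visible)

scales = ["coarse", "medium-coarse", "medium-fine", "fine"]
n_stages = 4
rng = random.Random(0)
for stage in range(n_stages):
    print(stage,
          developmental_schedule(scales, stage, n_stages),
          random_schedule(scales, stage, n_stages, rng))
```

Both schedules expose the same number of features at every stage; they differ only in whether that expansion is orderly or arbitrary, which isolates the contribution of the developmental ordering itself.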