Abstract:
After a neural network has been trained on a training set, an important problem is generalising what has been learned. If the system merely memorises the training data, the network may produce erroneous results for other, similar data. This paper presents a study of several techniques for eliminating neurons from the layers of a multi-layer neural network. The procedure is applied after the training stage. The study leads to a network structure with fewer neurons and layers that approximates the same nonlinear function. A further experimental observation is that some neurons have outputs that do not change when vectors from the training sequence are presented at the network input; such constant-output neurons can be eliminated.
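The criterion described last, removing neurons whose output stays constant over the whole training sequence, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names (`constant_output_neurons`, `prune_layer`), the tolerance `tol`, and the bias-folding step are assumptions. The idea is that a neuron with a fixed output carries no information, so it can be removed once its constant contribution is folded into the next layer's bias.

```python
import numpy as np

def constant_output_neurons(activations, tol=1e-6):
    """Indices of hidden neurons whose output is (near) constant
    across every training vector.

    activations: shape (n_samples, n_neurons) -- hidden-layer outputs
    collected while presenting the training set to the network.
    """
    # A neuron counts as constant if its output varies less than `tol`
    # over the whole training sequence.
    spread = activations.max(axis=0) - activations.min(axis=0)
    return np.where(spread < tol)[0]

def prune_layer(W_out, b_out, activations, tol=1e-6):
    """Remove constant neurons and compensate in the next layer's bias,
    so the network's function is unchanged on the training set."""
    const_idx = constant_output_neurons(activations, tol)
    keep = np.setdiff1d(np.arange(activations.shape[1]), const_idx)
    # Fold each constant neuron's fixed output into the next layer's bias.
    const_vals = activations[0, const_idx]  # the constant value per neuron
    b_new = b_out + W_out[:, const_idx] @ const_vals
    W_new = W_out[:, keep]
    return W_new, b_new, keep

# Hypothetical usage: three hidden neurons, the middle one stuck at 1.0.
acts = np.array([[0.2, 1.0, 0.5],
                 [0.8, 1.0, 0.1],
                 [0.4, 1.0, 0.9]])
W = np.array([[1.0, 2.0, 3.0]])   # output-layer weights
b = np.array([0.5])               # output-layer bias
W2, b2, keep = prune_layer(W, b, acts)
# The pruned network reproduces the original outputs on the training set:
for x in acts:
    assert np.allclose(W @ x + b, W2 @ x[keep] + b2)
```

The bias-folding step is what makes the elimination exact on the training data: the pruned and original networks agree on every training vector, and differ elsewhere only to the extent that the "constant" neuron would have varied.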