A novel approach to training neurons with dynamic relational learning
Thesis posted on 2017-02-08, 05:16. Authored by Bernadette Garner.
Data mining techniques have become extremely important with the proliferation of data. One technique that has attracted much attention is the feedforward neural network, because such networks are excellent at finding relationships between the inputs and outputs of data sets that are not well understood. As a result, they are commonly used for function approximation and classification, owing to their ability to generalize. However, traditional training methods for feedforward neural networks make it difficult to determine what the network has learnt, and can lead to exponential training times, if the data can be learnt at all. Long training times result from the network being of fixed size, which means the network may be too small to learn the data or too large to learn it well. Moreover, the dominant approach to training artificial neurons is to search iteratively for single numeric weight values that approximately satisfy the training conditions, with the search guided by attempts to reduce the error in the network. These iterative approximations are unlikely to produce exact single weight values that satisfy the learning conditions, and the rules the network learns remain opaquely encoded in the weights.

In this thesis, a novel method of training neurons is presented, leading to a dynamic training algorithm for feedforward neural networks that aims to overcome the problems of fixed-size networks. The method allows a neuron to be interrogated to determine whether it can learn a particular input vector, which forms a natural criterion for dynamically allocating neurons into the network.
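One way to picture this interrogation step is sketched below: each training pair contributes a linear inequality over the neuron's weights, and a neuron can learn a new pair exactly when the enlarged constraint set is still satisfiable, here checked with a linear-programming feasibility test. The fixed threshold T, the margin EPS standing in for a strict inequality, and names such as can_learn are illustrative assumptions, not the thesis's own formulation.

```python
# A minimal sketch of interrogating a neuron, assuming constraints of the
# form x.w >= T (activate) or x.w < T (stay inactive) for a fixed threshold.
import numpy as np
from scipy.optimize import linprog

T = 1.0      # fixed activation threshold (assumption)
EPS = 1e-6   # small margin standing in for the strict inequality x.w < T

def constraint_row(x, target):
    """Turn one training pair into a row of the system A_ub @ w <= b_ub."""
    x = np.asarray(x, dtype=float)
    if target == 1:               # x.w >= T   ->   -x.w <= -T
        return -x, -T
    return x, T - EPS             # x.w <  T   ->    x.w <= T - EPS

def can_learn(constraints, x, target):
    """True if the neuron's constraint set remains satisfiable after adding
    the constraint implied by the new pair (x, target)."""
    rows = constraints + [constraint_row(x, target)]
    A = np.array([row for row, _ in rows])
    b = np.array([bound for _, bound in rows])
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return res.success            # feasible region non-empty?
```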
As a result, each input vector can be learnt as it is presented to the network, so the algorithm trains in a single pass and eliminates the local minima problem. The novel approach to training neurons is based on learning the relationships between the input vector into the neuron and the associated output. These relationships are a transform of the relationships between the neuron's weights and threshold, and they define regions in the neuron's weight-space rather than a single numeric weight vector for each neuron.
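A minimal sketch of the resulting single-pass loop with dynamic neuron allocation, reusing constraint_row and can_learn from the previous fragment, might look as follows; the flat pool of neurons and the first-fit allocation policy are simplifying assumptions rather than the thesis's exact scheme.

```python
def train_single_pass(data):
    """data: iterable of (input_vector, target) pairs, each presented once."""
    neurons = []                       # each neuron = a list of constraint rows
    for x, target in data:
        for constraints in neurons:
            if can_learn(constraints, x, target):
                constraints.append(constraint_row(x, target))
                break                  # an existing neuron absorbs the pair
        else:
            # no neuron can learn this pair consistently: allocate a new one
            neurons.append([constraint_row(x, target)])
    return neurons
```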
This means that rules indicating what the network has learnt can be easily extracted from it. We call this method Dynamic Relational learning. In the past, a statistical sensitivity analysis was often performed on a trained neural network to estimate the range of values the weights could take that would cause a neuron to activate. We call the region in the weight-space that causes a neuron to activate the Activation Volume. The Dynamic Relational algorithm works by examining the surfaces of this volume; these surfaces express relationships between the weights in each neuron, and analyzing them determines precisely what the neuron has learnt. Using the principles of Dynamic Relational learning, we can formulate the maximum number of neurons required to implement any data set. The algorithm is tested on a number of popular data sets to evaluate the effectiveness of the technique, and the results are presented in this thesis. Although the algorithm operates on binary data, methods of converting floating-point and other non-binary data sets to binary are given and used.
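The thesis gives its own conversion methods; as a generic stand-in, the sketch below binarizes a floating-point feature with a thermometer encoding over quantile boundaries. The function name and default bit count are illustrative assumptions.

```python
import numpy as np

def thermometer_encode(column, n_bits=3):
    """Map a float column to n_bits binary digits: bit i is 1 iff the value
    exceeds the i-th interior quantile boundary of the column."""
    qs = np.linspace(0, 1, n_bits + 2)[1:-1]   # n_bits interior quantiles
    edges = np.quantile(column, qs)
    return np.array([[int(v > e) for e in edges] for v in column])

# e.g. thermometer_encode([0.1, 0.4, 0.9]) -> one 3-bit row per value
```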
We find that the networks do learn the data sets in a single pass, that the networks produced are small, and that the Activation Volume is found. The maximum number of neurons required to learn each data set confirms the formula. We also see that the training set does not necessarily need more input vectors than there are weights in the network, because each input vector is used to train every weight. We can identify which input vectors require neurons to be allocated into the network, and we can interrogate a neuron to ask whether it knows how to classify an input vector or whether the input is unknown. Finally, we can determine precisely what each neuron has learnt and what logical relationships connect the neurons within a layer. A number of new theorems for analyzing constraints have been developed to facilitate Dynamic Relational learning. The current implementation relies on constraint satisfaction programs, whose performance therefore bounds the performance of the implementation.
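Under the same assumptions as the earlier sketches, this known/unknown interrogation can be expressed as two feasibility checks against a neuron's learnt region: if only the activating constraint is consistent with the region, the answer is a known 1; if only the non-activating one, a known 0; and if both are consistent, the input is unknown. The name classify is illustrative.

```python
def classify(constraints, x):
    """Ask a trained neuron about an input vector x."""
    could_be_0 = can_learn(constraints, x, 0)   # region meets x.w <  T?
    could_be_1 = can_learn(constraints, x, 1)   # region meets x.w >= T?
    if could_be_1 and not could_be_0:
        return 1              # every weight vector in the region activates
    if could_be_0 and not could_be_1:
        return 0              # no weight vector in the region activates
    return "unknown"          # region is split: the neuron has not learnt x
```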