Saturday, October 26, 2019

Playing With Neural Networks

I got interested in neural networks about 20 years ago. I was convinced they would become useful very quickly, but it took all this time for them to come into the real world. The problem was, I imagine, the lack of computing power at that time. Nowadays even an interpreted language like Python can be used to create neural networks.

I was working at CERN in Switzerland when I read an article about a neural network which had invented a new opening move for backgammon. (And hell, I can't remember the move!) Anyway, that article came back to my mind when I heard about AlphaGo. And I thought it would be interesting to see what was happening in neural networks now, in a practical way, and as a hobby. So I bought...

... "Make Your Own Neural Network" by Tariq Rashid. It is brilliant! This is one of those rare books where the author really really wants to help you understand, and takes the time to make sure you do. 

(I read it in August 2019 instead of going on "holiday". Holiday to my family means the joys of churches/museums/beaches; to me it means boredom/back pain/sand in your pants. So I stayed at home.)

I got to the part of the book with a demonstration of image recognition, specifically handwritten character recognition. But I had in mind a simpler test. I always think that if a simple test of a system won't work, then a complex one is bound to fail.

So my idea was that the neural network should be trained to tell me the position of a spike in a list. This can be done in other ways, but, as I say, I wanted a simple test before I went ahead. So...

And here is the code which does that. Amazingly, it is very short for what it can do.


# this is for matrix multiplication necessary for training
# and querying the neural network
import numpy

# this is for the sigmoid function used inside each neuron
import scipy.special

# this is for my test training set
import random

# Neural network class definition
class neuralNetwork:
    # initialise the neural network
    def __init__ (self, num_input_nodes, num_hidden_nodes, num_output_nodes, learningrate) :
        self.inodes = num_input_nodes # how many input nodes
        self.hnodes = num_hidden_nodes # how many hidden nodes
        self.onodes = num_output_nodes # how many output nodes
        self.lrate = learningrate # how fast to learn
      
        # create the link weight matrices
        # input to hidden.
        # num columns = number of inputs = 2nd parameter
        # num rows = number of outputs = 1st parameter
        # that -0.5 moves the weights from 0...1 to -0.5...+0.5
        self.wih = (numpy.random.rand(self.hnodes, self.inodes) - 0.5)
      
        # create the link weight matrices
        # hidden to output
        self.who = (numpy.random.rand(self.onodes, self.hnodes) - 0.5)
      
        # For the moment use the sigmoid (expit) function
        self.activation_function = lambda x: scipy.special.expit(x)
  
    # Train the neural network. Each call to this function will adjust the
    # connection weights slightly in a direction which results in the output vector
    # being closer to the target vector
    # So we are given the inputs and the targets and we twiddle the innards
    def train(self, inputs_list, target_list):
        # convert inputs to 2d array vertical vector
        inputscol = numpy.array(inputs_list,ndmin=2).T
      
        # convert targets into a vertical vector
        targetscol = numpy.array(target_list,ndmin=2).T
             
        # calculate signals into the hidden layer
        hidden_inputs = numpy.dot(self.wih,inputscol)
        hidden_outputs = self.activation_function (hidden_inputs)
      
        # calculate the signals into the final output layer...
        final_inputs = numpy.dot (self.who, hidden_outputs)
      
        # calculate the output vector
        final_outputs = self.activation_function (final_inputs)
      
        # error is target - current_output
        output_errors = targetscol - final_outputs
      
        # hidden layer error is the output errors split by weights recombined at hidden nodes
        hidden_errors = numpy.dot(self.who.T, output_errors)
      
        # update the weights for the links between the hidden and output layers
        self.who += self.lrate * numpy.dot((output_errors*final_outputs*(1-final_outputs)),numpy.transpose(hidden_outputs))
      
        # update the weights for the links between the input and hidden layers
        self.wih += self.lrate * numpy.dot((hidden_errors*hidden_outputs*(1-hidden_outputs)),numpy.transpose(inputscol))
      
  
    # Query the neural network
    # Returns the network's estimate of the right answer, given the question.
    # The question is in the inputs_list
    def query(self, inputs_list):
      
        # convert inputs list to 2d array
        # ndmin says min of two dimensions, .T changes it from a row to a column vector.
        # (.T has no effect on a 1d array, which is why ndmin=2 is needed here)
        inputsarray = numpy.array(inputs_list, ndmin=2).T
      
        # this will do the sum part of the calculation as well
        # So hidden_inputs is a vector
        hidden_inputs = numpy.dot(self.wih,inputsarray)
      
        # calculate the signals emerging from each output of the hidden layer
        hidden_outputs = self.activation_function (hidden_inputs)
  
        # now we have the outputs of the hidden layer.
        # time to calculate the outputs of the output layer, so to speak.
        final_inputs = numpy.dot(self.who, hidden_outputs)
        final_outputs = self.activation_function (final_inputs)
      
        return final_outputs
 
  
# Set up the NN parameters
num_input_nodes = 11
num_output_nodes = 1
num_hidden_nodes = 5
learning_rate = 0.1
num_trainings = 25000

# create the neural network with these parameters
nn = neuralNetwork (num_input_nodes,num_hidden_nodes,num_output_nodes,learning_rate)

# Create many training sets and train the NN
for i in range (num_trainings) :

    # create an input array which will contain a peak
    input_array = numpy.zeros([1,num_input_nodes])
  
    # choose a place for a peak
    i_peak_pos = random.randint(0,num_input_nodes-1)
  
    # put the peak in the array
    input_array[0,i_peak_pos] = 1.0

    # So now we have a vector with a single 1 at a given position
  
    # For example, with 4 input nodes (here we actually have 11):
    # If the question is [1,0,0,0] the answer should be 0.0
    # If the question is [0,1,0,0] the answer should be 0.333
    # If the question is [0,0,1,0] the answer should be 0.667
    # If the question is [0,0,0,1] the answer should be 1.0
 
    # create the output array with the correct answer
    output_array = numpy.zeros ([1,num_output_nodes])
  
    output_array[0,0] = i_peak_pos/(num_input_nodes-1) # scale the position to 0.0 ... 1.0
  
    nn.train (input_array,output_array)
  
print ("querying " + str(num_input_nodes) + " input nodes" )
query_output = nn.query([0,0,0,0,0,1,0,0,0,0,0])

print (query_output)


Those last two lines are where the test occurs. I create a test spike, then with nn.query ask the neural network to tell me where that spike is. Here the spike is at index 5 of positions 0 to 10, so the network should answer something close to 0.5.
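If you want a fuller test than a single query, a minimal sketch like the loop below (my own addition, not from the book, reusing the nn and num_input_nodes defined above) asks about every possible spike position and prints the expected answer next to the network's answer:

# Sketch: query every possible spike position and compare the
# network's answer with the expected scaled position
for pos in range(num_input_nodes):
    test_input = numpy.zeros(num_input_nodes)
    test_input[pos] = 1.0
    expected = pos / (num_input_nodes - 1)
    answer = nn.query(test_input)[0, 0]
    print("peak at", pos, "expected", round(expected, 3), "got", round(float(answer), 3))

With 25000 trainings the answers should land close to the expected values.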

As suggested in Tariq's book, I use Python, running in Jupyter in the browser. If you are interested in the nitty-gritty of neural networks I suggest you read his book and try the above code.

The code above is also great for testing how the performance of the neural network changes if you do fewer trainings, or change the number of hidden nodes.
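For instance, here is a minimal sketch of such an experiment (the train_and_test helper is my own, hypothetical name, not from the book). It trains a fresh network for each combination and reports the mean absolute error over all the spike positions:

# Sketch: see how the average error changes with the number of
# hidden nodes and the number of training iterations
def train_and_test(n_hidden, n_trainings):
    net = neuralNetwork(num_input_nodes, n_hidden, num_output_nodes, learning_rate)
    for _ in range(n_trainings):
        peak = random.randint(0, num_input_nodes - 1)
        x = numpy.zeros([1, num_input_nodes])
        x[0, peak] = 1.0
        t = numpy.zeros([1, num_output_nodes])
        t[0, 0] = peak / (num_input_nodes - 1)
        net.train(x, t)
    # mean absolute error over every possible spike position
    total = 0.0
    for peak in range(num_input_nodes):
        x = numpy.zeros(num_input_nodes)
        x[peak] = 1.0
        total += abs(net.query(x)[0, 0] - peak / (num_input_nodes - 1))
    return total / num_input_nodes

for n_hidden in (2, 5, 10):
    for n_trainings in (1000, 5000, 25000):
        print(n_hidden, "hidden,", n_trainings, "trainings -> error",
              round(float(train_and_test(n_hidden, n_trainings)), 3))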

PS: Initially I'd forgotten to prepare the data properly. In this code almost everything is scaled to the range 0.0 to 1.0, but I started out my tests with spikes of 100 on a range from 0 to 500, and of course got bad answers. Note the comments above about "If the question is...".
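For the record, the fix is just min-max scaling. A minimal sketch (the values here are illustrative, not my original data):

# Sketch: scale a raw signal (e.g. a spike of height 100) down to
# the 0.0 ... 1.0 range the sigmoid-based network expects
raw = numpy.zeros(11)
raw[5] = 100.0
scaled = (raw - raw.min()) / (raw.max() - raw.min())
print(scaled)   # the spike is now 1.0, everything else 0.0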

PPS: I've now written an Arduino C version of a neural network.
