Saturday, August 21, 2010

Backpropagation Algorithm

The backpropagation algorithm was first formulated by Werbos and later popularized by Rumelhart and McClelland for use with artificial neural networks (ANNs); it is commonly abbreviated as BP. It is a supervised learning method designed to operate on feedforward multi-layer networks.
The backpropagation method is widely used; by one estimate it accounts for about 90% of neural network applications. It appears in many fields, among others the financial sector, handwriting pattern recognition, and voice and color recognition.
The algorithm is popular in applied settings because its learning process is based on a simple relationship: if the output is wrong, the weights are corrected so that the error is reduced and the network's response moves closer to the target value. Backpropagation is also able to adjust the weights in the hidden layers.
Broadly, the algorithm works as follows. When the network is given an input pattern as a training pattern, the pattern is propagated to the nodes in the hidden layer and then forwarded to the nodes in the output layer. The output of the output-layer nodes is the network's response. When this response is not equal to the expected output, the error is propagated backwards from the output layer through the hidden layer to the input layer. This mechanism is why the training method is called backpropagation.
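To make the feedforward part of this mechanism concrete before the formal steps below, here is a minimal sketch in Python. The use of NumPy, the function names, and the convention of storing the biases in row 0 of each weight matrix are assumptions made for illustration, not part of the original post:

import numpy as np

def sigmoid(x):
    # Logistic activation: f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, V, W):
    # x: input vector of length n.
    # V: (n+1, p) input-to-hidden weights; row 0 holds the hidden biases Voj.
    # W: (p+1, m) hidden-to-output weights; row 0 holds the output biases Wok.
    x_b = np.concatenate(([1.0], x))   # prepend the constant bias input
    z = sigmoid(x_b @ V)               # hidden activations Zj = f(Z_inj)
    z_b = np.concatenate(([1.0], z))   # prepend the constant bias unit
    y = sigmoid(z_b @ W)               # network response Yk = f(Y_ink)
    return z, y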

Training Phase
The training phase is the stage in which the neural network is trained, that is, in which the connection weights are adjusted. The problem-solving phase takes place once the learning process is complete; this phase is the testing (recall) process.
The backpropagation algorithm consists of two processes: the feedforward pass and the backpropagation of error. In more detail, the steps are as follows (a runnable sketch of the full training loop is given after the list):

1. Initialize the weights with small random values.

2. Repeat steps 3 through 9 until the stopping condition is met.

3. For each training pair, perform steps 4 through 9.

4. Each input unit (Xi, i = 1, 2, ..., n) receives an input signal Xi and distributes it to all units in the layer above it (the hidden units).

5. Each hidden unit (Zj, j = 1, 2, ..., p) sums its weighted input signals:

Z_inj = Voj + Σ(i=1..n) Xi Vij

and computes its activation according to the activation function:

Zj = f(Z_inj)

Because the sigmoid function is used:

f(x) = 1 / (1 + e^(-x))

the unit then sends this signal to all units in the layer above it (the output units).

6. Each output unit (Yk, k = 1, 2, ..., m) sums its weighted input signals:

Y_ink = Wok + Σ(j=1..p) Zj Wjk

and computes its activation according to the activation function:

Yk = f(Y_ink)

7. Each output unit (Yk, k = 1, 2, ..., m) receives the target pattern tk corresponding to the training input pattern and calculates its error term:

δk = (tk − Yk) f'(Y_ink)

Because the sigmoid function is used, f'(Y_ink) = Yk (1 − Yk), so:

δk = (tk − Yk) Yk (1 − Yk)

It then calculates the weight correction term (used later to correct Wjk), where α is the learning rate:

ΔWjk = α δk Zj

calculates the bias correction term (used later to correct Wok):

ΔWok = α δk

and sends δk to the units in the layer below.

8. Each hidden unit (Zj, j = 1, 2, ..., p) sums its delta inputs (from the units in the layer above):

δ_inj = Σ(k=1..m) δk Wjk

then multiplies this by the derivative of the activation function to compute its error term:

δj = δ_inj f'(Z_inj) = δ_inj Zj (1 − Zj)

It then calculates the weight correction term (used later to correct Vij):

ΔVij = α δj Xi

and the bias correction term (used later to correct Voj):

ΔVoj = α δj

9. Each output unit (Yk, k = 1, 2, ..., m) updates its bias and weights (j = 0, 1, 2, ..., p):

Wjk(new) = Wjk(old) + ΔWjk

Each hidden unit (Zj, j = 1, 2, ..., p) updates its bias and weights (i = 0, 1, 2, ..., n):

Vij(new) = Vij(old) + ΔVij
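Putting steps 1 through 9 together, the following sketch trains a small network on the XOR problem in Python. It is a minimal illustration under stated assumptions (NumPy, a learning rate α = 0.5, 4 hidden units, biases stored in row 0 of each weight matrix, and a fixed epoch count as the stopping condition), not the post's original code:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n, p, m = 2, 4, 1     # input, hidden, and output layer sizes (illustrative)
alpha = 0.5           # learning rate

# Step 1: initialize weights with small random values (row 0 holds the biases).
V = rng.uniform(-0.5, 0.5, size=(n + 1, p))   # Vij, with Voj in row 0
W = rng.uniform(-0.5, 0.5, size=(p + 1, m))   # Wjk, with Wok in row 0

# Toy training set: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(10000):          # Step 2: repeat until the stopping condition
    for x, t in zip(X, T):          # Step 3: for each training pair
        # Steps 4-6: feedforward.
        x_b = np.concatenate(([1.0], x))
        z = sigmoid(x_b @ V)               # Zj = f(Z_inj)
        z_b = np.concatenate(([1.0], z))
        y = sigmoid(z_b @ W)               # Yk = f(Y_ink)

        # Step 7: output error terms, δk = (tk - Yk) Yk (1 - Yk).
        delta_k = (t - y) * y * (1.0 - y)

        # Step 8: hidden error terms, δj = δ_inj Zj (1 - Zj).
        delta_in = W[1:, :] @ delta_k      # δ_inj = Σk δk Wjk (skip the bias row)
        delta_j = delta_in * z * (1.0 - z)

        # Step 9: update weights and biases (row 0 receives the bias corrections).
        W += alpha * np.outer(z_b, delta_k)   # ΔWjk = α δk Zj, ΔWok = α δk
        V += alpha * np.outer(x_b, delta_j)   # ΔVij = α δj Xi, ΔVoj = α δj

# Testing phase: run the trained net forward on the training patterns.
for x, t in zip(X, T):
    z_b = np.concatenate(([1.0], sigmoid(np.concatenate(([1.0], x)) @ V)))
    print(x, "->", np.round(sigmoid(z_b @ W), 2), "target", t)

With settings like these the outputs typically approach the 0/1 targets after a few thousand epochs; in practice one would stop when the total error falls below a threshold rather than after a fixed number of passes.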
