BACKPROPAGATION IN NEURAL NETWORKS BY Packiashri.A

Backpropagation is a central part of training a neural network: it is the process of fine-tuning the network's weights, repeated over many iterations. Proper tuning of these weights reduces the error rate and makes the model generalize better.

It is also called backward propagation of errors. Backpropagation refers to an algorithm used for computing gradients.

   Backpropagation works by computing the gradient of the loss function with respect to each weight using the chain rule.
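
As a concrete illustration, here is a minimal sketch of the chain rule applied to a single sigmoid neuron with a squared-error loss; the input, target, weight, and bias values are assumed for illustration:

    import numpy as np

    # One sigmoid neuron with a squared-error loss, differentiated by
    # the chain rule:
    #   z = w*x + b          (pre-activation)
    #   a = sigmoid(z)       (output)
    #   L = 0.5*(a - y)^2    (loss)
    # Chain rule: dL/dw = dL/da * da/dz * dz/dw

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x, y = 0.5, 1.0        # one training example (assumed values)
    w, b = 0.8, 0.1        # current weight and bias (assumed values)

    z = w * x + b
    a = sigmoid(z)

    dL_da = a - y          # derivative of 0.5*(a - y)^2 w.r.t. a
    da_dz = a * (1.0 - a)  # derivative of the sigmoid
    dz_dw = x              # derivative of w*x + b w.r.t. w

    dL_dw = dL_da * da_dz * dz_dw   # chain rule product
    print(f"gradient of the loss w.r.t. w: {dL_dw:.4f}")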

This is how the process works. An input X arrives and is multiplied by a weight W; the weights are usually initialized randomly. The output of every neuron is then computed layer by layer, from the input layer through the middle (hidden) layer to the output layer. Finally, calculate the error in the outputs:

              Error = Actual Output – Desired Output

Then travel back from the output layer to the hidden layer, adjusting the weights so that the error decreases.
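
Putting the forward pass, the error, and the backward pass together, here is a minimal sketch of one training step on a tiny 2-3-1 network with sigmoid activations; the network size, random initialization, and learning rate are illustrative assumptions, not a fixed recipe:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0.05, 0.10]])       # one input example (1 x 2, assumed)
    y = np.array([[1.0]])              # desired output (1 x 1, assumed)

    W1 = rng.standard_normal((2, 3))   # input -> hidden weights (random init)
    W2 = rng.standard_normal((3, 1))   # hidden -> output weights (random init)
    lr = 0.5                           # learning rate (assumed)

    # Forward pass: compute every neuron's output layer by layer.
    h = sigmoid(X @ W1)                # hidden layer activations (1 x 3)
    out = sigmoid(h @ W2)              # network output (1 x 1)

    # Error at the output, matching the convention above.
    error = out - y                    # actual output - desired output

    # Backward pass: propagate the error and adjust the weights.
    delta_out = error * out * (1 - out)         # output-layer delta
    delta_h = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer delta

    W2 -= lr * h.T @ delta_out         # update hidden -> output weights
    W1 -= lr * X.T @ delta_h           # update input -> hidden weights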

Why Do We Need Backpropagation?

The most prominent advantages of backpropagation are listed below:

  • Backpropagation is fast, simple, and easy to program.
  • It has no parameters to tune apart from the number of inputs.
  • It is a flexible method, as it does not require prior knowledge about the network.
  • It is a standard method that generally works well.
  • It does not need any special mention of the features of the function to be learned.

Types of Backpropagation Networks

 The two types of backpropagation are:

  • Static Back-propagation
  • Recurrent Backpropagation

Static Backpropagation:

This kind of backpropagation network produces a mapping from a static input to a static output. It is useful for solving static classification problems such as optical character recognition.

Recurrent Backpropagation:

In recurrent backpropagation, activations are fed forward until a fixed value is reached; after that, the error is computed and propagated backward. The main difference between the two methods is that the mapping is immediate in static backpropagation, while it is not static in recurrent backpropagation.
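
A rough sketch of that forward phase, assuming a small tanh state update that is iterated until the state stops changing, might look like this; the sizes, tolerance, and update rule are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    W = 0.1 * rng.standard_normal((4, 4))  # small recurrent weights help convergence
    x = rng.standard_normal(4)             # constant external input (assumed)
    h = np.zeros(4)                        # initial state

    for step in range(1000):
        h_new = np.tanh(W @ h + x)         # one forward update of the state
        if np.max(np.abs(h_new - h)) < 1e-6:  # stop once a fixed value is reached
            h = h_new
            break
        h = h_new

    # Only after this fixed point is reached is the error computed
    # and propagated backward.
    print(f"state settled after {step + 1} forward updates")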

Disadvantages of using Backpropagation

  • The actual performance of backpropagation on a specific problem depends on the input data it is given.
  • Backpropagation can be quite sensitive to noisy data.
  • A matrix-based approach needs to be used for backpropagation rather than a mini-batch approach.

Backpropagation can be used for both classification and regression problems.

For classification, the class labels can be represented with a one-hot encoding, for example y = 0 or y = 1 for two classes. The best results are achieved when the network has one neuron in the output layer for each class value.
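
A minimal sketch of such an encoding, with assumed class labels:

    import numpy as np

    # One output neuron per class value: each label becomes a row with
    # a single 1 in that class's column. The labels are assumed examples.
    labels = ["cat", "dog", "bird", "dog"]
    classes = sorted(set(labels))                 # ['bird', 'cat', 'dog']

    one_hot = np.zeros((len(labels), len(classes)))
    for row, label in enumerate(labels):
        one_hot[row, classes.index(label)] = 1.0  # 1 in the class's column

    print(one_hot)  # each row is the target vector for the output layer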

For regression problems, consider scaling the outputs so that they lie in the [0, 1] range.
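
One simple way to do this is min-max scaling; the target values below are assumed:

    import numpy as np

    # Min-max scaling maps regression targets into [0, 1], which matches
    # the range of a sigmoid output neuron. The raw targets are assumed.
    y = np.array([12.0, 45.0, 7.0, 30.0])

    y_min, y_max = y.min(), y.max()
    y_scaled = (y - y_min) / (y_max - y_min)   # now in [0, 1]
    print(y_scaled)

    # Predictions can be mapped back to the original scale afterwards:
    # y_pred = y_scaled_pred * (y_max - y_min) + y_min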

Conclusion:

In short, backpropagation allows artificial neural networks to roughly mimic certain narrow mechanisms of intelligence found in the one example of intelligence we have: the human brain.
