# A Note on Backpropagation

Throughout my learning process in machine learning, backpropagation was always a confusing part. Many tutorials on the Internet focus on numeric calculation, cover the long history of the algorithm, or use examples and diagrams that stretch over several pages. I think that is too complex for such a simple thing. If we cannot explain the algorithm simply, we may not explain it well, and we may even confuse readers. So I decided to write it as simply as I can, and later I will write about its relation to history.

## 1. Prerequisites

Before we proceed, let’s recall some basic concepts quickly.

### a neuron

A neural network is formed by many neurons. For clarity, let's omit the bias of a neuron and write it as follows:

$$ y = \sigma(z), \quad z = \sum_i w_i x_i $$

where $z$ means the *net summation* over these $x_i$-s, $\sigma$ is the activation function, and we take the sigmoid here: $\sigma(z) = \frac{1}{1 + e^{-z}}$.
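As a quick sketch (using NumPy; the names `sigmoid` and `neuron` are my own), such a bias-free neuron looks like:

```python
import numpy as np

def sigmoid(z):
    """The sigmoid activation: 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(w, x):
    """A single neuron (bias omitted): net summation z = w . x, then sigmoid."""
    z = np.dot(w, x)
    return sigmoid(z)

w = np.array([0.5, -0.5])
x = np.array([1.0, 1.0])
print(neuron(w, x))  # z = 0, so sigmoid(0) = 0.5
```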

We can use many neurons to form a layer, and use many layers to build the whole network.

### loss function and optimization

And the loss function is the target we want to minimize. Usually we may take the squared error:

$$ L(w) = \frac{1}{2} \, \| y - \hat{y} \|^2 $$

where $y$ is the output of the last layer, $\hat{y}$ is the known label for a single sample, and $\| \cdot \|$ is the L2 norm of a vector.

$w$, the collection of all weights, is the variable. Our goal is to find the very $w$ that gives the smallest value.

And we will use the gradient descent algorithm, which needs to compute the gradient, or partial derivatives, for each parameter in $w$.
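A minimal sketch of gradient descent on this squared error, assuming NumPy (the learning rate and step count here are arbitrary choices of mine, and I take the trivial case where the output is the parameter itself, so the gradient is simply $w - \hat{y}$):

```python
import numpy as np

# Minimize L(w) = 1/2 * ||w - y_hat||^2 by gradient descent.
# Here grad L = w - y_hat, so w should converge to y_hat.
y_hat = np.array([1.0, 2.0])
w = np.zeros(2)
lr = 0.1          # learning rate (arbitrary)
for _ in range(200):
    grad = w - y_hat
    w -= lr * grad
print(w)          # very close to [1. 2.]
```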

### partial derivative for a compound function

Suppose we have many functions as follows:

$$ z = f(y_1, y_2, \dots, y_n), \quad y_i = g_i(x) $$

Then if we want to get the partial derivative of $z$ with respect to $x$, we may write:

$$ \frac{\partial z}{\partial x} = \sum_{i=1}^{n} \frac{\partial z}{\partial y_i} \frac{\partial y_i}{\partial x} $$
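To make this rule concrete, here is a small numeric check with example functions $g_1$, $g_2$, $f$ of my own choosing; the analytic derivative from the sum above should match a finite-difference estimate:

```python
import numpy as np

# z = f(y1, y2), y1 = g1(x), y2 = g2(x)
# dz/dx = dz/dy1 * dy1/dx + dz/dy2 * dy2/dx
def g1(x): return x ** 2        # dy1/dx = 2x
def g2(x): return np.sin(x)     # dy2/dx = cos(x)
def f(y1, y2): return y1 * y2   # dz/dy1 = y2, dz/dy2 = y1

x = 0.7
analytic = g2(x) * 2 * x + g1(x) * np.cos(x)

# central finite difference on the composed function
eps = 1e-6
numeric = (f(g1(x + eps), g2(x + eps)) - f(g1(x - eps), g2(x - eps))) / (2 * eps)
print(analytic, numeric)  # the two should agree closely
```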

## 2. How To: Back Propagation

### notations

Suppose we are building a feedforward neural network that is very deep (up to 100 layers), where each layer consists of several independent neurons eating the output from the previous layer. Let neurons in two adjacent layers be fully connected (like a bipartite graph).

Denote the neurons in the first layer as $x_1, x_2, \dots, x_n$, and those in the second layer as $y_1, y_2, \dots, y_m$, and let the weight from $x_j$ to $y_i$ be denoted as $w_{ij}$ (**NOT** $w_{ji}$, the reason is below), that is,

$$ y_i = \sigma\left( \sum_{j=1}^{n} w_{ij} x_j \right) $$

We can use the matrix $W = (w_{ij})$ to rewrite the equation above for simplicity (that's why we use the notation $w_{ij}$, for omitting the transpose notation):

$$ y = \sigma(W x) $$
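A minimal sketch of one fully connected layer in this notation, assuming NumPy (the weight values are arbitrary); note that with the $w_{ij}$ convention, `W @ x` needs no transpose:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer(W, x):
    """One fully connected layer: y = sigma(W x).
    W[i, j] is the weight from input x_j to output y_i."""
    return sigmoid(W @ x)

W = np.array([[0.1, 0.2],
              [0.3, 0.4],
              [0.5, 0.6]])   # 3 outputs, 2 inputs
x = np.array([1.0, -1.0])
y = layer(W, x)
print(y)                     # a vector of shape (3,), entries in (0, 1)
```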

### computation

Here’s where the BP algorithm actually comes in. Let’s take only one example to train, so the loss function is just $L = \frac{1}{2} \| y^{(K)} - \hat{y} \|^2$, where $y^{(K)}$ is the output of the last ($K$-th) layer. Every component of $y^{(K)}$ is a constituent of the output of the last layer and thus an (unknown) function of all the parameters $W^{(1)}, \dots, W^{(K)}$.

The parameters are unknown, and we use gradient descent to find a local optimum for them, which means we have to compute the gradient, or partial derivative, for all of them.

For the last layer, we could derive

$$ \frac{\partial L}{\partial y^{(K)}} = y^{(K)} - \hat{y} $$

Recall that $y^{(k)} = \sigma(W^{(k)} y^{(k-1)})$; we could get all partial derivatives w.r.t. all parameters as follows (by intuition; the formulas may not be fully formal):

$$ \delta^{(k)} = \frac{\partial L}{\partial y^{(k)}} \odot \sigma'\!\left(W^{(k)} y^{(k-1)}\right), \quad \frac{\partial L}{\partial W^{(k)}} = \delta^{(k)} \left(y^{(k-1)}\right)^{\mathsf T}, \quad \frac{\partial L}{\partial y^{(k-1)}} = \left(W^{(k)}\right)^{\mathsf T} \delta^{(k)} $$

where $\odot$ is the element-wise product, and for the sigmoid $\sigma'(z) = \sigma(z)\,(1 - \sigma(z))$.

So far, we’ve got the way to compute the gradient for every $W^{(k)}$ and also every $y^{(k)}$. The first is what we really want, and the second helps us compute backward iteratively, i.e., use $\partial L / \partial y^{(k+1)}$ to get the gradients for $W^{(k+1)}$ and then $y^{(k)}$, and so on and so forth.

You may note that these $y$-s and $W$-s are all vectors and matrices, so the derivative notation above is actually for vectors and matrices. If you try to compute each entry, some terms can be ignored. For example, suppose $w_{ij}^{(k)}$ is the weight from the $j$-th node of the $(k-1)$-th layer to the $i$-th node of the $k$-th layer; when computing its gradient, all gradients of nodes in the $k$-th layer except the $i$-th node are useless. They are not functions of $w_{ij}^{(k)}$ and thus their derivatives w.r.t. it are all ZERO.

Using words rather than formulas: $L$ is a function of $y^{(K)}$, and each $y^{(k)}$ is a function of $W^{(k)}$ and $y^{(k-1)}$.

Just follow the rule for computing partial derivatives of compound functions above, and we can get everything.

And note that this is somewhat similar to dynamic programming in computer science. We break the whole problem into smaller ones, i.e., the partial derivatives between two adjacent layers. And we use the already-computed gradients of the $(k+1)$-th layer directly to compute the gradients of the $k$-th layer, which saves a lot of computation.
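The whole procedure above can be sketched in a few dozen lines, assuming NumPy; the function names `forward` and `backward` are my own, and the gradients can be checked against finite differences:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(Ws, x):
    """Forward pass: keep every layer's output, since the backward pass needs them."""
    ys = [x]
    for W in Ws:
        ys.append(sigmoid(W @ ys[-1]))
    return ys

def backward(Ws, ys, y_hat):
    """Backpropagation for L = 1/2 * ||y - y_hat||^2.
    Walks from the last layer to the first, reusing the gradient
    of the (k+1)-th layer to compute that of the k-th (the DP idea)."""
    grads = []
    dL_dy = ys[-1] - y_hat                     # gradient at the output layer
    for W, y_in, y_out in zip(reversed(Ws), reversed(ys[:-1]), reversed(ys[1:])):
        delta = dL_dy * y_out * (1.0 - y_out)  # through sigmoid: sigma' = y(1-y)
        grads.append(np.outer(delta, y_in))    # dL/dW for this layer
        dL_dy = W.T @ delta                    # gradient passed to previous layer
    return grads[::-1]

# a tiny 2-layer network with random weights
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
x = np.array([0.5, -0.5])
y_hat = np.array([0.7])

ys = forward(Ws, x)
grads = backward(Ws, ys, y_hat)

# check one weight's gradient against a finite difference
def loss(Ws):
    return 0.5 * np.sum((forward(Ws, x)[-1] - y_hat) ** 2)

eps = 1e-6
Ws_perturbed = [W.copy() for W in Ws]
Ws_perturbed[0][0, 0] += eps
numeric = (loss(Ws_perturbed) - loss(Ws)) / eps
print(abs(numeric - grads[0][0, 0]))  # should be tiny
```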

## 3. relation to history

People in various fields have re-invented the algorithm many times. But it is really just dynamic programming plus partial derivatives, and it can be derived directly without much knowledge of calculus.

Again, people keep talking about a **delta rule**. Actually the **delta** is just the gradient (or error) of the $(k+1)$-th layer. They say we can use the delta to multiply the error of this layer. That’s pretty obvious, right? The delta is the partial derivative of the outer functions, and the rule for compound functions leads us to use multiplication.

## 4. More References

I found this blog post pretty useful: http://sebastianruder.com/optimizing-gradient-descent/

If you want to know more about gradient descent and optimization, just follow it.

And Stanford’s open course CS231n also provides some useful tips and notes on implementing optimization: http://cs231n.github.io/neural-networks-3/ You may like it.

This work is licensed under a Creative Commons Attribution 4.0 International License.