The purpose of this exercise is to use graphical illustration to demonstrate neural network training algorithms. It is suggested that you use a ruler (or graph paper with grids) to draw the graphs.

Let E(*w*) represent the training error as a function of the neural network's internal weights, where *w* is the vector of those weights. For convenience of graphical explanation, we assume that only two variables are considered in training, and that E(*w*) is approximated by a second-order Taylor series.

More specifically, let E(*w*) for batch-mode training be described as

where *w* is a vector of two variables

and superscript T denotes the transpose of the vector.

Suppose the initial value of *w* for neural network training is:

*w* = [0.5 1.0]^T
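Since the specific quadratic form of E(*w*) in equation (1) is not reproduced here, the setup can be sketched numerically with a hypothetical positive-definite Hessian `A` and linear term `b` (both assumptions, not the exercise's actual values); only the initial point [0.5, 1.0]^T comes from the text above.

```python
import numpy as np

# Hypothetical second-order Taylor approximation of the training error:
#   E(w) = 0.5 * w^T A w + b^T w
# A and b are assumed for illustration; the real ones come from equation (1).
A = np.array([[2.0, 0.0],
              [0.0, 10.0]])   # assumed positive-definite Hessian
b = np.array([0.0, 0.0])      # assumed linear term

def E(w):
    """Quadratic training error under the assumed Taylor expansion."""
    return 0.5 * w @ A @ w + b @ w

def grad_E(w):
    """Gradient of the quadratic: A w + b."""
    return A @ w + b

w0 = np.array([0.5, 1.0])     # initial weights given in the exercise
print("E(w0) =", E(w0))
print("grad E(w0) =", grad_E(w0))
```

Drawing the contour plot of this E(*w*) amounts to plotting the level sets of the quadratic, which are ellipses centred on the minimum.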

Use graphical illustration to carry out two epochs of neural network training with batch-mode backpropagation (part 2a) and conjugate gradient (part 2b) methods.

**Part 2(a):** You are required to:

1. Draw the contour plot of E(*w*) in the 2-dimensional *w* space.

2. On the contour plot, indicate the initial point of *w*.

3. On the contour plot, show the gradient direction.

4. On the contour plot, show the direction *h* for the batch-mode backpropagation method, assuming the momentum factor is zero.

5. On the contour plot, show the new location of *w* after one epoch of training is finished, assuming we have used line minimization to determine the optimal step size η.

6. Repeat steps 3-5 above for one more epoch. Indicate the new location of *w* at the end of the 2nd epoch.
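The steps above can be sketched numerically. With zero momentum, the search direction is *h* = -∇E(*w*), and for a quadratic E(*w*) the line-minimizing step size has the closed form η = (g·g)/(hᵀAh). The matrix `A` below is an assumed Hessian, not the exercise's actual equation (1).

```python
import numpy as np

# Assumed quadratic E(w) = 0.5 w^T A w (stand-in for equation (1))
A = np.array([[2.0, 0.0],
              [0.0, 10.0]])

def E(w):
    return 0.5 * w @ A @ w

def grad_E(w):
    return A @ w

w = np.array([0.5, 1.0])          # initial point from equation (3)
for epoch in range(2):
    g = grad_E(w)
    h = -g                         # batch-mode direction, momentum = 0
    # Exact line minimization along h for a quadratic surface:
    eta = (g @ g) / (h @ A @ h)
    w = w + eta * h
    print(f"epoch {epoch + 1}: w = {w}")
```

Note that with exact line search, consecutive steepest-descent directions are orthogonal, which is why the trajectory zig-zags across the elliptical contours rather than heading straight to the minimum.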

**Part 2(b):** Suppose that we use the Conjugate Gradient method to do the training, where E(*w*), *w*, and the initial value of *w* are defined as in (1)-(3).
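For comparison with part 2(a), a conjugate gradient sketch is given below, using the Fletcher-Reeves β update and the same assumed quadratic (again, `A` is a stand-in for the unspecified equation (1)). For a 2-dimensional quadratic with exact line minimization, conjugate gradient reaches the minimum in exactly two epochs.

```python
import numpy as np

# Same assumed quadratic E(w) = 0.5 w^T A w as in the earlier sketches
A = np.array([[2.0, 0.0],
              [0.0, 10.0]])

def grad_E(w):
    return A @ w

w = np.array([0.5, 1.0])              # initial point from equation (3)
g = grad_E(w)
h = -g                                 # first direction: steepest descent
for epoch in range(2):
    eta = -(g @ h) / (h @ A @ h)       # exact line minimization along h
    w = w + eta * h
    g_new = grad_E(w)
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    h = -g_new + beta * h              # new A-conjugate direction
    g = g_new
    print(f"epoch {epoch + 1}: w = {w}")
```

The second direction *h* is A-conjugate to the first rather than orthogonal to it, which is what lets the method finish in two epochs where steepest descent keeps zig-zagging.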
