Gradient Descent

So we have our hypothesis function and we have a way of measuring how accurate it is. Now what we need is a way to automatically improve our hypothesis function. That's where gradient descent comes in.

The gradient descent equation is:

$$
\begin{aligned}
& \text{repeat until convergence: } \lbrace \\
& \qquad \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1) \\
& \rbrace
\end{aligned}
$$

for $j = 0$ and $j = 1$, where $\alpha$ is the learning rate. Both parameters must be updated simultaneously: compute both new values from the old $\theta_0$ and $\theta_1$ before assigning either.
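As a rough sketch of this update rule in Python (the names `gradient_descent_step` and `grad_J` are hypothetical, not from the course material):

```python
def gradient_descent_step(theta, grad_J, alpha):
    """One simultaneous gradient descent update of all parameters.

    theta:  current parameter values [theta_0, theta_1]
    grad_J: function returning the partial derivatives of the cost J
            with respect to each parameter, evaluated at theta
    alpha:  learning rate
    """
    gradients = grad_J(theta)
    # Every new value is computed from the OLD theta before any
    # assignment happens, so the update is simultaneous.
    return [t - alpha * g for t, g in zip(theta, gradients)]
```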

The gradient descent update for linear regression in one variable is obtained by substituting the hypothesis function $h_\theta(x) = \theta_0 + \theta_1 x$ and the cost function $J(\theta_0, \theta_1)$ into the formula above and working out the partial derivatives. We get:

$$
\begin{aligned}
& \text{repeat until convergence: } \lbrace \\
& \qquad \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) \\
& \qquad \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)} \\
& \rbrace
\end{aligned}
$$

where $m$ is the number of training examples.
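A minimal Python sketch of these two updates (the function name `linear_regression_gd` and the fixed iteration count standing in for a true convergence test are assumptions, not part of the course material):

```python
def linear_regression_gd(x, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for h(x) = theta_0 + theta_1 * x.

    x, y: lists of training inputs and targets, each of length m
    """
    m = len(x)
    theta_0, theta_1 = 0.0, 0.0
    for _ in range(num_iters):
        # Errors h(x^(i)) - y^(i) over ALL m training examples.
        errors = [theta_0 + theta_1 * x[i] - y[i] for i in range(m)]
        # Both gradients are computed from the old parameters,
        # so theta_0 and theta_1 are updated simultaneously.
        grad_0 = sum(errors) / m
        grad_1 = sum(e * xi for e, xi in zip(errors, x)) / m
        theta_0 -= alpha * grad_0
        theta_1 -= alpha * grad_1
    return theta_0, theta_1
```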

The gradient descent used here is also called batch gradient descent, because every step of the update uses all $m$ training examples.
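For instance, running the sketch above on a toy dataset drawn exactly from $y = 1 + 2x$ should drive the parameters toward $\theta_0 = 1$ and $\theta_1 = 2$:

```python
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x

theta_0, theta_1 = linear_regression_gd(x, y, alpha=0.05, num_iters=5000)
print(theta_0, theta_1)  # expected to approach 1.0 and 2.0
```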
