Machine Learning - Stanford - Coursera
Gradient Descent

The gradient descent update rule is given by:

repeat until convergence {

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$$

} (simultaneously update $\theta_j$ for every $j = 0, 1, \dots, n$)

Computing the partial derivative in the above equation, we get:

repeat until convergence {

$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

} (simultaneously update $\theta_j$ for every $j = 0, 1, \dots, n$)
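
As a concrete illustration, here is a minimal NumPy sketch of this update rule. The function name `gradient_descent`, its default values for `alpha` and `num_iters`, and the synthetic data below are our own illustration, not from the course:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for multivariate linear regression.

    Assumes X is an (m, n+1) design matrix whose first column is all
    ones (the intercept term x_0 = 1), and y is an (m,) target vector.
    """
    m, n = X.shape
    theta = np.zeros(n)                   # initialize every theta_j to 0
    for _ in range(num_iters):
        predictions = X @ theta           # h_theta(x^(i)) for every example i
        errors = predictions - y          # h_theta(x^(i)) - y^(i)
        gradient = (X.T @ errors) / m     # (1/m) * sum_i (error_i * x_j^(i)) for each j
        theta = theta - alpha * gradient  # simultaneous update of every theta_j
    return theta

# Usage: recover y = 1 + 2*x1 + 3*x2 from noiseless synthetic data
rng = np.random.default_rng(0)
X = np.hstack([np.ones((100, 1)), rng.random((100, 2))])
y = X @ np.array([1.0, 2.0, 3.0])
print(gradient_descent(X, y, alpha=0.5, num_iters=5000))  # approx. [1. 2. 3.]
```

Note that `X.T @ errors` computes the sum over all training examples for every $j$ at once, which is what makes the update simultaneous across the parameters.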
