Machine Learning - Stanford - Coursera

Cost Function

We can measure the accuracy of our hypothesis function by using a cost function. This takes an average (actually a slightly fancier version of an average) of the differences between the hypothesis's outputs for the inputs x and the actual outputs y.

If m is the number of training examples, the cost function for Linear Regression in One Variable is given by:

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

Lower values of $J(\theta_0, \theta_1)$ indicate a more accurate hypothesis.
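
As a concrete illustration, here is a minimal NumPy sketch of this cost function, assuming the one-variable hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$; the function name compute_cost and the toy data are illustrative choices, not part of the course materials:

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """J(theta0, theta1) for one-variable linear regression."""
    m = len(y)                             # number of training examples
    predictions = theta0 + theta1 * x      # hypothesis h_theta(x) = theta0 + theta1 * x
    squared_errors = (predictions - y) ** 2
    return squared_errors.sum() / (2 * m)  # (1/2m) * sum of squared errors

# Toy example: points lying exactly on y = 2x give zero cost at (theta0, theta1) = (0, 2).
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(compute_cost(0.0, 2.0, x, y))  # 0.0   -> perfect fit
print(compute_cost(0.0, 1.5, x, y))  # ~0.58 -> worse fit, higher cost
```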

This function is otherwise called the "squared error function" or "mean squared error". The mean is halved (the $\frac{1}{2}$ factor) as a convenience for gradient descent, because differentiating the squared term produces a factor of 2 that cancels it.
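
Concretely, differentiating $J$ with respect to $\theta_1$ (the $\theta_0$ case is analogous, just without the trailing $x^{(i)}$) shows the cancellation:

$$\frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} 2 \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$$

These partial derivatives are exactly what gradient descent uses in the next section.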

We can plot it on a graph, taking $\theta_0$ and $\theta_1$ on the x and z axes respectively, and $J(\theta_0, \theta_1)$ on the y axis:
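
A minimal matplotlib sketch that produces such a surface plot (the toy data, variable names, and grid ranges are illustrative choices, not from the course):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy data lying roughly on y = 2x (illustrative only).
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])
m = len(y)

# Grid of (theta0, theta1) values and the cost J at each grid point.
theta0_vals = np.linspace(-2.0, 2.0, 100)
theta1_vals = np.linspace(0.0, 4.0, 100)
T0, T1 = np.meshgrid(theta0_vals, theta1_vals)
J = np.zeros_like(T0)
for i in range(T0.shape[0]):
    for j in range(T0.shape[1]):
        errors = T0[i, j] + T1[i, j] * x - y
        J[i, j] = (errors ** 2).sum() / (2 * m)

# Surface: theta0 and theta1 on the horizontal axes, J on the vertical axis.
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(T0, T1, J, cmap="viridis")
ax.set_xlabel(r"$\theta_0$")
ax.set_ylabel(r"$\theta_1$")
ax.set_zlabel(r"$J(\theta_0, \theta_1)$")
plt.show()
```

For linear regression this surface is bowl-shaped (convex), with its minimum at the parameter values that best fit the data.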