Linear Regression
As discussed earlier, Regression is a Supervised Learning technique that is used to predict a real value.
Given a dataset, $X = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, our aim is to find the line that minimizes the Mean Squared Error.
The function that we want to learn can be denoted as:

$$f(x) = w_1 x + w_0$$

We need to find the best values for $w_0$ and $w_1$ (denoted together using the term $\theta$, meaning parameters) using the available training data.
The Mean Squared Error of $f$ on the data set $X$ is given by:

$$\mathrm{MSE}(f) = \frac{1}{n} \sum_{i=1}^{n} \big( y_i - f(x_i) \big)^2$$
Note that the absolute error would be:

$$\frac{1}{n} \sum_{i=1}^{n} \big| y_i - f(x_i) \big|$$
One advantage of using MSE instead of absolute error is that it penalizes a line more heavily the further it is from the data points. However, this also makes it sensitive to outliers, i.e. a line is penalized heavily for being far away from outliers, although it arguably shouldn't be. Another reason for using MSE over absolute error is that MSE is differentiable everywhere while absolute error is not (the absolute value has a corner at zero), and it is easier to minimize a differentiable function by taking the derivative.
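To make the comparison concrete, here is a small sketch (the data points, the outlier, and the candidate line $f(x) = 2x + 1$ are all invented for illustration) showing how a single outlier inflates the MSE far more than the absolute error:

```python
# Invented example: compare MSE and mean absolute error for one candidate line
# on a small dataset whose last point is an outlier.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 9.0, 25.0])   # last point is an outlier

def f(x, w1=2.0, w0=1.0):
    return w1 * x + w0

residuals = y - f(x)
mse = np.mean(residuals ** 2)      # squaring makes the outlier dominate
mae = np.mean(np.abs(residuals))   # grows only linearly with the outlier's distance

print(f"MSE = {mse:.2f}, absolute error = {mae:.2f}")
```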
The Role of Noise
When we obtain a data set, we assume that the values were measured by an instrument that was susceptible to noise. It is generally assumed that the noise follows a Gaussian distribution.
Therefore, $y_i$ will not be exactly equal to $f(x_i)$. Instead:

$$y_i = f(x_i) + \epsilon_i$$

where the noise $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$.

We assume that each $\epsilon_i$ is independent.
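As a quick illustration of this noise model, the following sketch (with an invented true line $f(x) = 2x + 1$ and an invented noise level $\sigma = 0.5$) generates observations $y_i = f(x_i) + \epsilon_i$ with independent Gaussian noise:

```python
# Invented example: simulate y_i = f(x_i) + eps_i with eps_i ~ N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
x = np.linspace(0.0, 5.0, 20)
noise = rng.normal(loc=0.0, scale=sigma, size=x.shape)  # independent Gaussian noise
y = 2.0 * x + 1.0 + noise                               # observed values
```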
Noise also affects the probability:

$$P(y_i \mid x_i, f) = \mathcal{N}\big(y_i;\, f(x_i), \sigma^2\big)$$
Now, we want to find the ML (Maximum Likelihood) value for $f$, i.e. the ML 'estimator' $f_{ML}$.

$$f_{ML} = \underset{f}{\operatorname{arg\,max}}\; P(y_1, \ldots, y_n \mid x_1, \ldots, x_n, f) = \underset{f}{\operatorname{arg\,max}} \prod_{i=1}^{n} P(y_i \mid x_i, f) \quad \text{--- (1)}$$
Recall that if $X \sim \mathcal{N}(\mu, \sigma^2)$, then the pdf for the distribution on $X$ is given by:

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$$
If we substitute this formula directly into (1), things get complicated. Instead, we use the log trick. We can do this because applying a log does not change the argmax: log is strictly increasing on positive values, and the likelihood is non-negative. Taking the log turns products into sums and gets rid of the exponents, making things much less complicated.
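Here is a small sketch of the log trick on invented data (the line, noise level, and candidate parameters are all made up): the log-likelihood is computed as a sum of per-point Gaussian log-densities, and comparing candidate lines by this sum picks out the same maximizer as comparing products of densities would:

```python
# Invented example: compute the Gaussian log-likelihood of a candidate line
# as a sum of per-point log-densities and pick the best candidate.
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
x = np.linspace(0.0, 10.0, 30)
y = 3.0 * x - 2.0 + rng.normal(scale=sigma, size=x.shape)

def log_gaussian_pdf(y, mu, sigma):
    # log of the N(mu, sigma^2) density evaluated at y
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)

def log_likelihood(w1, w0):
    # sum over data points of log P(y_i | x_i, f), where f(x) = w1*x + w0
    return np.sum(log_gaussian_pdf(y, w1 * x + w0, sigma))

candidates = [(3.0, -2.0), (2.5, 0.0), (3.5, -4.0)]
best = max(candidates, key=lambda w: log_likelihood(*w))
print("candidate (w1, w0) with the highest log-likelihood:", best)
```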
Now, (1) becomes:

$$
\begin{aligned}
f_{ML} &= \underset{f}{\operatorname{arg\,max}} \sum_{i=1}^{n} \log P(y_i \mid x_i, f) \\
&= \underset{f}{\operatorname{arg\,max}} \sum_{i=1}^{n} \left( -\frac{\big(y_i - f(x_i)\big)^2}{2\sigma^2} - \log \sqrt{2\pi\sigma^2} \right) \\
&= \underset{f}{\operatorname{arg\,min}} \sum_{i=1}^{n} \big(y_i - f(x_i)\big)^2
\end{aligned}
$$

(ignoring the denominator $2\sigma^2$ and the constant term $\log \sqrt{2\pi\sigma^2}$, since neither depends on $f$).
This proves that, under the assumption of independent additive Gaussian noise, the line given by the ML estimator (i.e. the ML hypothesis) is the same line that minimizes the MSE.
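As a sanity check of this equivalence, here is a sketch on synthetic data (the true line $y = 1.5x + 0.5$ and $\sigma = 0.3$ are invented) that fits the line in two ways, by ordinary least squares and by numerically maximizing the Gaussian log-likelihood, and recovers the same parameters:

```python
# Invented example: the least-squares fit and the maximum-likelihood fit
# under Gaussian noise give (numerically) the same line.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
sigma = 0.3
x = np.linspace(0.0, 5.0, 50)
y = 1.5 * x + 0.5 + rng.normal(scale=sigma, size=x.shape)

# Line that minimizes the MSE (ordinary least squares, closed form).
w1_ls, w0_ls = np.polyfit(x, y, deg=1)

# Line that maximizes the log-likelihood under Gaussian noise,
# found by minimizing the negative log-likelihood.
def neg_log_likelihood(w):
    w1, w0 = w
    r = y - (w1 * x + w0)
    return np.sum(0.5 * np.log(2 * np.pi * sigma ** 2) + r ** 2 / (2 * sigma ** 2))

w1_ml, w0_ml = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0])).x

print(f"least squares:      w1={w1_ls:.4f}, w0={w0_ls:.4f}")
print(f"maximum likelihood: w1={w1_ml:.4f}, w0={w0_ml:.4f}")
```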