# Logistic Regression

This is a **classification** model. Like all other linear discriminant methods, it works with real-valued attributes only.

Consider a problem with 2 classes.

*We assume that the log likelihood ratio (log odds) is linear.*

$$log \frac{p(x|C\_1)}{p(x|C\_2)} = w^Tx+w\_0^0$$

This assumption holds, for example, if the class-conditional distributions for $$C\_1$$ and $$C\_2$$ are Gaussian with a shared covariance matrix.

We want to classify $$x$$ into $$C\_1$$ (+) or $$C\_2$$ (-).

Put in class $$C\_1$$ if $$\frac{p(x|C\_1)}{p(x|C\_2)} > 1$$ i.e. if $$log \frac{p(x|C\_1)}{p(x|C\_2)} > 0$$.\
Using the property that the inverse of logit is sigmoid, we can show that $$P(C\_1|x)=\frac{1}{1+e^{-(w^Tx+w\_0)}}$$.

This is the posterior probability that x belongs to $$C\_1$$.

**Proof:**

$$logit(P(C\_1|x)) = log\frac{P(C\_1|x)}{1-P(C\_1|x)} = log \frac{P(C\_1|x)}{P(C\_2|x)} = log\frac{P(C\_1)p(x|C\_1)}{P(C\_2)p(x|C\_2)} = log \frac{p(x|C\_1)}{p(x|C\_2)} + log \frac{P(C\_1)}{P(C\_2)} = w^Tx+w\_0^0 + log\frac{P(C\_1)}{P(C\_2)}$$

$$= w^Tx + w\_0$$

where $$w\_0 = w\_0^0 + log \frac{P(C\_1)}{P(C\_2)}$$

Since sigmoid is the inverse of logit, we get:

$$P(C\_1|x) = \frac{1}{1+e^{-(w^Tx+w\_0)}}$$

We predict + if $$P(C\_1|x) \gt 0.5$$ and predict - otherwise.

Note that when $$(w^Tx+w\_0) \rightarrow +\infty$$, $$P(C\_1|x) \rightarrow 1$$, and when $$(w^Tx+w\_0) \rightarrow -\infty$$, $$P(C\_1|x) \rightarrow 0$$.

So we can predict + if $$w^Tx+w\_0 \gt 0$$ and - otherwise.
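This decision rule can be sketched in Python; the weight values in the usage example below are arbitrary placeholders, not learned values:

```python
import math

def sigmoid(a):
    """Logistic sigmoid: maps a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a))

def predict(w, w0, x):
    """Return (posterior P(C1|x), predicted label) for one example.

    w and x are equal-length lists of weights and attribute values; w0 is the bias."""
    score = sum(wj * xj for wj, xj in zip(w, x)) + w0
    p = sigmoid(score)
    label = "+" if score > 0 else "-"  # equivalent to thresholding p at 0.5
    return p, label
```

Thresholding the score $$w^Tx+w\_0$$ at 0 and thresholding the probability at 0.5 give the same prediction, since sigmoid(0) = 0.5.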

## Learning the Weights

We need to learn weights to be able to compute $$P(C\_1|x) = \frac{1}{1+e^{-(w^Tx+w\_0)}}$$

Consider the data below:

$$\begin{array}{c|c|c} x\_1 & x\_2 & r \\ 3 & 21 & 1 \text{ (i.e. +)} \\ 6 & 5 & 1 \text{ (i.e. +)} \\ 2 & 9 & 0 \text{ (i.e. -)} \end{array}$$

We need to compute $$w\_2, w\_1, w\_0$$.

Suppose $$y^t$$ is the true probability that example $$t$$ is in class $$C\_1$$; then the label of example $$t$$ can be viewed as a Bernoulli random variable with parameter $$y^t$$.

Assuming that the labels of the examples are independent, the probability that example 1 is labeled +, example 2 is labeled +, and example 3 is labeled - is given by $$P(C\_1|x^1) \* P(C\_1|x^2) \* (1-P(C\_1|x^3)) = y^1 \* y^2 \* (1-y^3)$$

We want to choose $$w, w\_0$$ (where $$w$$ is the vector containing $$w\_2, w\_1$$) such that the estimates $$y^t$$ "match" the labels in the dataset as well as possible, i.e. they must maximize:

$$\Pi\_{t=1}^N (\hat{P}(C\_1|x^t))^{r^t} \* (1-\hat{P}(C\_1|x^t))^{(1-r^t)}$$ where $$\hat{P}(C\_1|x^t)$$ is the **estimate** of the probability that $$x^t$$ is labeled 1.
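For intuition, this product can be computed directly; a minimal sketch, assuming the estimates $$y^t$$ are already given:

```python
def likelihood(r, y):
    """Product over examples t of (y^t)^(r^t) * (1 - y^t)^(1 - r^t).

    r: 0/1 labels, y: estimated probabilities P(C1|x^t)."""
    result = 1.0
    for rt, yt in zip(r, y):
        # each factor is y^t if the label is 1, and (1 - y^t) if the label is 0
        result *= yt if rt == 1 else (1.0 - yt)
    return result
```

For the three-example case above, `likelihood([1, 1, 0], [y1, y2, y3])` computes exactly $$y^1 \* y^2 \* (1-y^3)$$.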

We can simplify the above by using logs. Let $$y^t = \frac{1}{1+e^{-(w^Tx^t+w\_0)}}$$

We need to find $$argmax\_{w,w\_0} \Pi\_t (y^t)^{r^t}(1-y^t)^{(1-r^t)}$$

$$=argmax\_{w,w\_0} \sum\_t \[r^t \, log \, y^t + (1-r^t) \, log \, (1-y^t)]$$

$$= argmin\_{w,w\_0} -\sum\_t \[r^t \, log \, y^t + (1-r^t) \, log \, (1-y^t)]$$

Let us define $$Err(w,w\_0|X) = -\sum\_t \[r^t \, log \, y^t + (1-r^t) \, log \, (1-y^t)]$$, so in other words, we need to find the weights $$w, w\_0$$ that minimize this error.
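This error (the cross-entropy, i.e. the negative log of the likelihood product above) can be computed as follows, assuming the estimates $$y^t$$ are given:

```python
import math

def cross_entropy_error(r, y):
    """Err(w, w0 | X) = -sum_t [ r^t log y^t + (1 - r^t) log(1 - y^t) ].

    r: 0/1 labels, y: estimated probabilities P(C1|x^t)."""
    return -sum(rt * math.log(yt) + (1 - rt) * math.log(1 - yt)
                for rt, yt in zip(r, y))
```

The error is 0 only when every $$y^t$$ exactly matches its label, and grows without bound as an estimate approaches the wrong extreme.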

To do so, we use an iterative technique known as **Gradient Descent**.

### Gradient Descent

![](https://335259370-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M5-0RGi-3PgC2puufMH%2F-M5-0Ri_FPdoeNpowHRz%2F-M5-0Sn1H77TDCRR5SFp%2Fgradient%20descent.png?generation=1586990801116897\&alt=media)

We start with **random** weights. Then, we compute the derivative of the error function w.r.t. the weights and take a step in the direction of steepest descent, i.e. in the negative direction of the gradient (since the gradient points in the direction of steepest ascent). This is repeated iteratively until we can no longer move in a direction that further reduces the error.

Note that the error function mentioned above has a single minimum (i.e. no local minima, just one global minimum).

For $$j \neq 0,$$

$$\frac{\partial E}{\partial w\_j} = - \sum\_t \, (\frac{r^t}{y^t}-\frac{(1-r^t)}{1-y^t})y^t(1-y^t)x\_j^t$$, where $$y^t=\frac{1}{1+e^{-(w\_2x\_2^t+w\_1x\_1^t+w\_0)}}$$ and using $$\frac{d}{da}y = y(1-y)$$ for $$y=\frac{1}{1+e^{-a}}$$

Simplifying, $$(\frac{r^t}{y^t}-\frac{1-r^t}{1-y^t})y^t(1-y^t) = r^t(1-y^t) - (1-r^t)y^t = r^t - y^t$$, so we get:

$$\frac{\partial E}{\partial w\_j} = - \sum\_t (r^t-y^t) x\_j^t$$

The iterative **update rule** for $$w\_j$$ $$(j \neq 0)$$ is given by:

$$w\_j \leftarrow w\_j + \eta(-\frac{\partial E}{\partial w\_j})$$ where $$\eta$$ is the **learning rate**. This can be re-written as:\
$$w\_j \leftarrow w\_j + \eta \sum\_t (r^t-y^t)x\_j^t$$\
Similarly, $$w\_0 \leftarrow w\_0 + \eta \sum\_t (r^t-y^t)$$

Put generally, $$w\_j \leftarrow w\_j + \eta \, \Delta w\_j$$ where $$\Delta w\_j = \sum\_t (r^t-y^t)x\_j^t$$ (with $$x\_0^t = 1$$, so the bias $$w\_0$$ follows the same rule).

#### Pseudocode for Gradient Descent for Logistic Regression

```
# assume x_0^t = 1 for all t, so that w_0 follows the same update rule

for j = 0...d:
    w_j = rand(-0.01, 0.01)
repeat:
    for j = 0...d:
        delta_w_j = 0
    for t = 1...N:
        theta = 0
        for j = 0...d:
            theta = theta + w_j * x_j^t   # theta = w^T x^t + w_0, since x_0^t = 1
        y = sigmoid(theta)
        for j = 0...d:
            delta_w_j = delta_w_j + (r^t - y) * x_j^t
    for j = 0...d:
        w_j = w_j + eta * delta_w_j
until convergence
```
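Putting the pieces together, here is a runnable Python sketch of batch gradient descent, trained on the three-example dataset from the table above. The learning rate and iteration count are arbitrary choices for this tiny dataset, and the weights start at 0 rather than small random values:

```python
import math

def sigmoid(a):
    # numerically stable logistic sigmoid
    if a >= 0:
        return 1.0 / (1.0 + math.exp(-a))
    ea = math.exp(a)
    return ea / (1.0 + ea)

def train_logistic(X, r, eta=0.005, n_iters=30000):
    """Batch gradient descent for logistic regression.

    X: list of feature vectors, r: 0/1 labels. Returns (w, w0)."""
    d = len(X[0])
    w, w0 = [0.0] * d, 0.0
    for _ in range(n_iters):
        dw, dw0 = [0.0] * d, 0.0
        for xt, rt in zip(X, r):
            y = sigmoid(sum(wj * xj for wj, xj in zip(w, xt)) + w0)
            for j in range(d):
                dw[j] += (rt - y) * xt[j]  # delta_w_j accumulation
            dw0 += rt - y                  # bias term uses x_0 = 1
        for j in range(d):
            w[j] += eta * dw[j]
        w0 += eta * dw0
    return w, w0

# The dataset from the table above
X = [[3, 21], [6, 5], [2, 9]]
r = [1, 1, 0]
w, w0 = train_logistic(X, r)
```

After training, $$P(C\_1|x) = sigmoid(w^Tx + w\_0)$$ should be above 0.5 for the two + examples and below 0.5 for the - example.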
