Equation and Assumption
The simple linear regression model is typically written as
$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i, \quad i = 1, \dots, n,$$
where $\beta_0$ is the intercept and $\beta_1$ is the slope of the regression line.
The $\epsilon_i$ are random variables capturing the irreducible error, the scatter around the regression line. The basic assumptions on the errors are:
- $E[\epsilon_i] = 0$ for all $i$
- $\mathrm{Var}(\epsilon_i) = \sigma^2$ for all $i$ (homoscedasticity)
- The $\epsilon_i$ are uncorrelated, i.e., $\mathrm{Cov}(\epsilon_i, \epsilon_j) = 0$ for $i \neq j$
Estimators
Betas
Minimizing the residual sum of squares (RSS)
$$\mathrm{RSS} = \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2$$
yields the least-squares estimates
$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$$
and
$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}.$$
Remarks:
- $\hat{\beta}_1$ is an unbiased estimator of $\beta_1$
- The variance of $\hat{\beta}_1$ is $\mathrm{Var}(\hat{\beta}_1) = \dfrac{\sigma^2}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$
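As a quick sanity check, the closed-form estimates above can be computed directly and compared against NumPy's degree-1 polynomial fit. The data below are simulated (the true intercept 2 and slope 3 are made-up illustrative values, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from y = 2 + 3x + noise (illustrative values only)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1.0, n)

# Closed-form least-squares estimates
x_bar, y_bar = x.mean(), y.mean()
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
beta0_hat = y_bar - beta1_hat * x_bar

# Cross-check against NumPy's built-in degree-1 fit
slope, intercept = np.polyfit(x, y, 1)
assert np.isclose(beta1_hat, slope)
assert np.isclose(beta0_hat, intercept)
```

With this much data and little noise, the estimates land close to the true intercept and slope used in the simulation.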
Sigma
An unbiased estimator for $\sigma^2$ is
$$\hat{\sigma}^2 = \frac{\mathrm{RSS}}{n-2}.$$
This is the estimator reported by Python, R, and other software packages.
Note that if the irreducible errors are assumed to be i.i.d. normal, the MLE for $\sigma^2$ can be shown to be
$$\hat{\sigma}^2_{\mathrm{MLE}} = \frac{\mathrm{RSS}}{n},$$
so the MLE yields a biased estimator.
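A small simulation makes the bias visible. Averaged over many simulated data sets (the true $\sigma = 2$, sample size, and coefficients here are made up for illustration), the divide-by-$(n-2)$ estimator centers on $\sigma^2$ while the divide-by-$n$ MLE sits below it:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 30, 2.0

def fit_rss(x, y):
    """Fit simple linear regression by least squares; return the RSS."""
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    return np.sum((y - (b0 + b1 * x)) ** 2)

# Average both variance estimators over many simulated data sets
x = np.linspace(0, 10, n)
unbiased, mle = [], []
for _ in range(2000):
    y = 1.0 + 0.5 * x + rng.normal(0, sigma, n)
    rss = fit_rss(x, y)
    unbiased.append(rss / (n - 2))  # divides by n - 2: unbiased
    mle.append(rss / n)             # divides by n: biased low

print(np.mean(unbiased), np.mean(mle), sigma ** 2)
```

The first average comes out near $\sigma^2 = 4$; the MLE average is systematically smaller by the factor $(n-2)/n$.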
R Squared
$R^2$ is also called the coefficient of determination. Intuition for $R^2$: it equals the proportion of the total variability in the response which is accounted for by the model that has been fit, relative to the model with no predictors (but which does include the intercept):
$$R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}, \qquad \mathrm{TSS} = \sum_{i=1}^{n} (y_i - \bar{y})^2.$$
Note that $0 \le R^2 \le 1$ if the intercept term is included in the model.
In the case of simple linear regression (a single predictor), $R^2$ equals the sample correlation ($r$) squared.
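The identity $R^2 = r^2$ for a single predictor is easy to verify numerically (simulated data; the coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 100)
y = 1.5 * x + rng.normal(0, 1.0, 100)  # arbitrary simulated data

# Fit simple linear regression and compute R^2 = 1 - RSS/TSS
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
rss = np.sum((y - (b0 + b1 * x)) ** 2)
tss = np.sum((y - y.mean()) ** 2)
r_squared = 1 - rss / tss

r = np.corrcoef(x, y)[0, 1]           # sample correlation
assert np.isclose(r_squared, r ** 2)  # R^2 = r^2 with one predictor
```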
Case Analysis
Suppose that, by mistake, each observed pair (response and predictor) is included twice in the analysis. What is the effect on $\hat{\beta}_0$ and $\hat{\beta}_1$, the $t$-statistics, and $R^2$?
- $\hat{\beta}_0$ and $\hat{\beta}_1$ are unchanged, and still unbiased
- The standard errors become smaller, so the $t$-statistics become bigger in absolute value
- $R^2$ is unchanged
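All three claims can be checked by fitting the same model on a data set and on its doubled copy (the data below are simulated for illustration):

```python
import numpy as np

def slr_summary(x, y):
    """Return (slope, t-statistic of slope, R^2) for simple linear regression."""
    n = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    rss = np.sum((y - (b0 + b1 * x)) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    se_b1 = np.sqrt((rss / (n - 2)) / sxx)  # standard error of the slope
    return b1, b1 / se_b1, 1 - rss / tss

rng = np.random.default_rng(3)
x = rng.uniform(0, 5, 40)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, 40)

b1, t1, r2 = slr_summary(x, y)
b1_dup, t1_dup, r2_dup = slr_summary(np.tile(x, 2), np.tile(y, 2))

assert np.isclose(b1, b1_dup)  # coefficient unchanged
assert abs(t1_dup) > abs(t1)   # t-statistic inflated
assert np.isclose(r2, r2_dup)  # R^2 unchanged
```

The mechanics: doubling the data doubles both RSS and $\sum (x_i - \bar{x})^2$ while $n$ becomes $2n$, which shrinks the standard error and inflates the $t$-statistic, yet leaves the point estimates and the RSS/TSS ratio untouched.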
Adding a Predictor
Practical regression models typically contain many predictors:
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_p x_{ip} + \epsilon_i.$$
As more predictors are added to the model, there can be surprising effects on the previously-included predictors:
- Predictors can switch from being “significant” to “not significant”
Reason: if the new predictor $x_{\text{new}}$ is strongly correlated with an existing predictor $x_{\text{old}}$, then some of the variability in $y$ previously attributed to $x_{\text{old}}$ could now be attributed to $x_{\text{new}}$, and $x_{\text{old}}$ no longer seems as important.
- Predictors can switch from being “not significant” to “significant”
Reason: suppose $x_{\text{old}}$ and $x_{\text{new}}$ are uncorrelated. If $x_{\text{new}}$ explains enough variability in $y$, then $\hat{\beta}_{\text{old}}$ may not change much when $x_{\text{new}}$ is included, but the RSS will decrease. Hence $\hat{\sigma}^2$ (and with it the standard error of $\hat{\beta}_{\text{old}}$) decreases, and its $t$-statistic increases.
- The signs on the parameter estimates can flip
Reason: in two dimensions ($p = 2$), the shape of the relationship can be such that when the data are projected onto the $x_1$ axis, it looks like a negative relationship between $y$ and $x_1$, yet cross-sections for fixed $x_2$ show a positive relationship between $y$ and $x_1$.
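The sign flip is easy to reproduce with constructed data (the group structure and coefficients below are made up to force the effect): within each level of $x_2$, $y$ rises with $x_1$, but the high-$x_1$ group has a much lower baseline, so the marginal fit on $x_1$ alone comes out negative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two groups indexed by x2; within each group y rises with x1 (slope +1),
# but the x2 = 1 group has larger x1 values and a much lower intercept.
x2 = np.repeat([0.0, 1.0], 100)
x1 = rng.uniform(0, 2, 200) + 3 * x2
y = 1.0 * x1 - 6.0 * x2 + rng.normal(0, 0.3, 200)

# Marginal fit of y on x1 alone: the slope comes out negative
slope_marginal = np.polyfit(x1, y, 1)[0]

# Joint least-squares fit on (1, x1, x2): the slope on x1 is positive
X = np.column_stack([np.ones_like(x1), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_joint = coef[1]

assert slope_marginal < 0 < slope_joint
```

This is the same phenomenon as Simpson's paradox: the between-group trend dominates the marginal fit and reverses the within-group sign.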