CFA should give bonus points to those who can spell the word Heteroskedasticity!
The LOS reads:
"Discuss the types of heteroskedasticity and the effects of heteroskedasticity and serial correlation on statistical inference"
There are 4 areas that appear to be important:
1. What is heteroskedasticity
2. What are the effects of heteroskedasticity
3. Detecting heteroskedasticity
4. How do you correct for heteroskedasticity
1. What is heteroskedasticity?
In Reading 11, and earlier in Reading 12, the underlying assumptions of the linear regression model were discussed. Assumption 4 relates to homoskedasticity, the opposite of heteroskedasticity.
The summary of the definition: the variance of the error term is constant across all observations.
(NB - Remember that "error term" also refers to the residual.)
There are 2 types of heteroskedasticity:
Unconditional - The variance of the error term is not constant, but it is NOT related to the level of the independent variables! So if X increases, the variance does not necessarily increase. It is therefore NOT an issue!
Conditional - The variance of the error term IS related to the level of the independent variables. IT IS AN ISSUE!
2. What are the effects of heteroskedasticity
Conditional heteroskedasticity makes the estimated standard errors unreliable - typically too small.
Since the standard error is the denominator of the t-statistic, the t-statistics come out too large.
Thus, heteroskedasticity leads us to reject null hypotheses that should not be rejected (Type I errors).
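To see this effect in action, here is a small Monte Carlo sketch (my own illustration, not from the reading): fit a regression where the true slope is zero but the error variance grows with x, and count how often a naive 5% t-test rejects that true null. With homoskedastic errors we would expect roughly a 5% rejection rate; with conditional heteroskedasticity the rate comes out noticeably higher.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, trials, rejections = 50, 2000, 0
t_crit = stats.t.ppf(0.975, n - 2)  # two-sided 5% critical value

for _ in range(trials):
    x = rng.uniform(1, 10, n)
    # True slope is zero, but the error sd grows with x
    # (conditional heteroskedasticity).
    y = 1.0 + rng.normal(0, x ** 2)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                 # conventional error variance
    cov = s2 * np.linalg.inv(X.T @ X)            # naive OLS covariance
    t = beta[1] / np.sqrt(cov[1, 1])             # naive t-statistic on the slope
    if abs(t) > t_crit:
        rejections += 1

rate = rejections / trials
print(f"Empirical rejection rate of a true null: {rate:.3f} (nominal 0.05)")
```

The inflated rejection rate is exactly the "rejecting nulls that should not be rejected" problem described above.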
3. How do you detect it?
3.1 You view a diagram: plot the residuals against the independent variable(s). The textbook has a pretty nice graph that explains it
3.2 You do a Breusch-Pagan test
NB: The Ho = No heteroskedasticity
Step 1: Take the residuals from the initial regression
Step 2: Square the residuals (also called errors!)
Step 3: Regress the squared residuals on the independent variables
Step 4: Compute the test statistic:
n X R squared, chi-square distributed with k degrees of freedom
where n = number of observations
R squared = the R squared from the regression in step 3
degrees of freedom (k) = the number of independent (slope) variables
Step 5: Compare the statistic with the chi-square table's critical value (NB - It is a one-sided test!)
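The five steps above can be sketched in Python (my own illustration with simulated data, not from the curriculum); here the error standard deviation is made to grow with x, so the test should detect conditional heteroskedasticity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data with conditional heteroskedasticity:
# the error sd grows with x.
n = 200
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, x)

# Step 1: fit the initial regression and take the residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: square the residuals.
resid_sq = resid ** 2

# Step 3: regress the squared residuals on the independent variable(s).
gamma, *_ = np.linalg.lstsq(X, resid_sq, rcond=None)
fitted = X @ gamma
r_squared = 1 - np.sum((resid_sq - fitted) ** 2) / np.sum(
    (resid_sq - resid_sq.mean()) ** 2
)

# Step 4: BP statistic = n * R squared, chi-square with k df
# (k = number of independent variables, here 1).
bp_stat = n * r_squared
k = 1

# Step 5: one-sided comparison against the chi-square critical value.
critical = stats.chi2.ppf(0.95, df=k)
print(f"BP = {bp_stat:.2f}, 5% critical value = {critical:.3f}")
```

Because the statistic exceeds the critical value, we reject H0 and conclude there is conditional heteroskedasticity in this simulated data.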
An example
Info for the question:
- R squared = 1.5
- Degrees of freedom = 2
- Significance level = 5%
Steps 1-3: provided
Step 4:
1.5 X 2 = 3
Step 5:
At a 5% significance level with 2 degrees of freedom, the chi-square table gives a critical value of 5.991
Conclusion
3 < 5.991, so we fail to reject the null hypothesis = no heteroskedasticity
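The table lookup and conclusion in the example can be double-checked in a couple of lines (the figures are taken as given in the question above):

```python
from scipy import stats

# Figures from the example, taken as given in the question.
bp_stat = 1.5 * 2                       # = 3
critical = stats.chi2.ppf(0.95, df=2)   # 5% one-sided, 2 df -> 5.991

print(f"BP = {bp_stat}, critical value = {critical:.3f}")
if bp_stat < critical:
    print("Fail to reject H0: no evidence of conditional heteroskedasticity")
```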
4. How do you correct for heteroskedasticity
Use White-corrected standard errors. (No more, no less)
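As a rough sketch of what "White-corrected" means in practice - this uses the simplest (HC0) variant, and the formula is my own addition rather than something spelled out in the reading - the correction replaces the constant-variance assumption in the coefficient covariance with the squared residuals themselves:

```python
import numpy as np

def white_se(X, resid):
    """HC0 (White) standard errors: inv(X'X) X' diag(e^2) X inv(X'X)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * resid[:, None] ** 2)   # X' diag(e^2) X
    cov = XtX_inv @ meat @ XtX_inv
    return np.sqrt(np.diag(cov))

# Simulated data where the error sd grows with x.
rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, x)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Conventional OLS standard errors (assume constant error variance).
s2 = resid @ resid / (n - X.shape[1])
ols_se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

print("OLS SE:  ", ols_se)
print("White SE:", white_se(X, resid))
```

Under conditional heteroskedasticity the White-corrected standard errors will generally differ from the conventional ones, which is why t-tests should be based on them.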