Instrumental-variables estimation is cumbersome with the statsmodels package. The linearmodels package provides a convenient interface for IV estimation, but it is not consistent with statsmodels. In particular, a linearmodels formula must spell out the constant as 1 whenever the model includes an intercept. Considering that most models include an intercept and that most users take no interest in it, this is a rather inefficient convention that invites coding errors. Note also that linearmodels' default standard errors are heteroskedasticity-robust.
Inconsistent interfaces are very inconvenient and confusing. Unless the goal is to study the tools themselves, I do not recommend using Python for IV estimation (or for the other parts of 『계량경제학강의』). (Modifying the package code is not the purpose of these exercises.)
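As a minimal sketch of the two conventions on simulated data (the names df, x, z below are hypothetical and not part of the examples that follow):

import numpy as np, pandas as pd
import statsmodels.formula.api as smf
from linearmodels import IV2SLS

rng = np.random.default_rng(0)
z = rng.normal(size=200); u = rng.normal(size=200)
x = z + u + rng.normal(size=200)                      # x is endogenous: correlated with u
df = pd.DataFrame({'y': 1 + x + u, 'x': x, 'z': z})
print(smf.ols('y ~ x', data=df).fit().params)         # statsmodels: intercept is implicit
fm = 'y ~ 1 + [x ~ z]'                                # linearmodels: the 1 must be written
print(IV2SLS.from_formula(fm, data=df).fit().params)  # default cov_type is robust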
import pandas as pd
from statsmodels.formula.api import ols as OLS
Ivdata = pd.read_csv('csv/loedata/Ivdata.csv')
ols = OLS('y~x1+x2', data=Ivdata).fit()  # OLS ignoring the endogeneity of x2
print(ols.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.708
Model:                            OLS   Adj. R-squared:                  0.702
Method:                 Least Squares   F-statistic:                     117.4
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           1.24e-26
Time:                        10:25:28   Log-Likelihood:                -208.90
No. Observations:                 100   AIC:                             423.8
Df Residuals:                      97   BIC:                             431.6
Df Model:                           2                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept     -0.3650      0.820     -0.445      0.657      -1.992       1.262
x1             0.4831      0.057      8.458      0.000       0.370       0.597
x2             0.9448      0.064     14.677      0.000       0.817       1.073
==============================================================================
Omnibus:                        0.581   Durbin-Watson:                   1.882
Prob(Omnibus):                  0.748   Jarque-Bera (JB):                0.722
Skew:                          -0.112   Prob(JB):                        0.697
Kurtosis:                       2.649   Cond. No.                         51.4
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
stage1 = OLS('x2~x1+z2a', data=Ivdata).fit()  # first stage: x2 on exogenous x1 and instrument z2a
print(stage1.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                     x2   R-squared:                       0.295
Model:                            OLS   Adj. R-squared:                  0.281
Method:                 Least Squares   F-statistic:                     20.33
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           4.22e-08
Time:                        10:25:28   Log-Likelihood:                -241.32
No. Observations:                 100   AIC:                             488.6
Df Residuals:                      97   BIC:                             496.5
Df Model:                           2                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept      6.3374      0.936      6.769      0.000       4.479       8.196
x1            -0.1319      0.079     -1.669      0.098      -0.289       0.025
z2a           21.8401      4.044      5.401      0.000      13.814      29.866
==============================================================================
Omnibus:                        3.112   Durbin-Watson:                   2.285
Prob(Omnibus):                  0.211   Jarque-Bera (JB):                2.268
Skew:                          -0.204   Prob(JB):                        0.322
Kurtosis:                       2.385   Cond. No.                         163.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Ivdata['x2hat'] = stage1.fittedvalues  # first-stage fitted values
stage2 = OLS('y~x1+x2hat', data=Ivdata).fit()  # second stage: replace x2 with x2hat
print(stage2.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.138
Model:                            OLS   Adj. R-squared:                  0.120
Method:                 Least Squares   F-statistic:                     7.748
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           0.000756
Time:                        10:25:29   Log-Likelihood:                -262.98
No. Observations:                 100   AIC:                             532.0
Df Residuals:                      97   BIC:                             539.8
Df Model:                           2                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept      1.8827      2.250      0.837      0.405      -2.583       6.348
x1             0.4169      0.111      3.760      0.000       0.197       0.637
x2hat          0.6867      0.230      2.986      0.004       0.230       1.143
==============================================================================
Omnibus:                        0.127   Durbin-Watson:                   2.167
Prob(Omnibus):                  0.938   Jarque-Bera (JB):                0.296
Skew:                           0.045   Prob(JB):                        0.862
Kurtosis:                       2.750   Cond. No.                         81.8
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
The standard errors reported in the second-stage regression above are wrong and must be corrected. We skip the fully manual and the semi-automatic variance estimation and move straight to fully automatic instrumental-variables estimation.
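For reference, a minimal sketch of the correction being skipped (the names u2sls, sig2, se_fixed are mine; mse_resid, bse, and df_resid are standard statsmodels attributes): valid 2SLS residuals must be computed with the original x2, not x2hat, and the reported standard errors rescaled by the ratio of the two error-variance estimates.

import numpy as np
b = stage2.params
u2sls = Ivdata['y'] - b['Intercept'] - b['x1']*Ivdata['x1'] - b['x2hat']*Ivdata['x2']
sig2 = (u2sls**2).sum()/stage2.df_resid               # SSR using the original x2, over n-k-1
se_fixed = stage2.bse*np.sqrt(sig2/stage2.mse_resid)  # rescale the second-stage SEs
print(se_fixed)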
# pip install linearmodels
from linearmodels import IV2SLS
ivfm = 'y~1+x1+[x2~z2a]' # need constant (1) explicitly
ivmodel = IV2SLS.from_formula(ivfm, data=Ivdata)
tsls = ivmodel.fit(cov_type='unadjusted')
print(tsls)
                          IV-2SLS Estimation Summary                          
==============================================================================
Dep. Variable:                      y   R-squared:                      0.6592
Estimator:                    IV-2SLS   Adj. R-squared:                 0.6522
No. Observations:                 100   F-statistic:                    40.418
Date:                Sat, Dec 17 2022   P-value (F-stat)                0.0000
Time:                        10:25:29   Distribution:                  chi2(2)
Cov. Estimator:            unadjusted                                         
                                                                              
                             Parameter Estimates                              
==============================================================================
            Parameter  Std. Err.     T-stat    P-value    Lower CI    Upper CI
------------------------------------------------------------------------------
Intercept      1.8827     1.3932     1.3513     0.1766     -0.8479      4.6132
x1             0.4169     0.0687     6.0725     0.0000      0.2824      0.5515
x2             0.6867     0.1424     4.8229     0.0000      0.4076      0.9657
==============================================================================

Endogenous: x2
Instruments: z2a
Unadjusted Covariance (Homoskedastic)
Debiased: False
The standard errors are slightly different from those in the book because the error variance was estimated with a divisor of $n$ rather than $n-k-1$. help(IV2SLS.fit) shows a debiased option, which appears to apply the corresponding degrees-of-freedom adjustment (note the "Debiased: False" line in the output above).
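A minimal sketch, assuming fit accepts the debiased flag and the result exposes std_errors as documented by linearmodels:

tsls_db = ivmodel.fit(cov_type='unadjusted', debiased=True)  # rescales the covariance by n/(n-k)
print(tsls_db.std_errors)

HC0-type standard errors are obtained as follows.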
tslsh = ivmodel.fit(cov_type='robust')
print(tslsh)
                          IV-2SLS Estimation Summary                          
==============================================================================
Dep. Variable:                      y   R-squared:                      0.6592
Estimator:                    IV-2SLS   Adj. R-squared:                 0.6522
No. Observations:                 100   F-statistic:                    31.850
Date:                Sat, Dec 17 2022   P-value (F-stat)                0.0000
Time:                        10:25:29   Distribution:                  chi2(2)
Cov. Estimator:                robust                                         
                                                                              
                             Parameter Estimates                              
==============================================================================
            Parameter  Std. Err.     T-stat    P-value    Lower CI    Upper CI
------------------------------------------------------------------------------
Intercept      1.8827     1.5194     1.2391     0.2153     -1.0954      4.8607
x1             0.4169     0.0815     5.1142     0.0000      0.2571      0.5767
x2             0.6867     0.1299     5.2860     0.0000      0.4321      0.9413
==============================================================================

Endogenous: x2
Instruments: z2a
Robust Covariance (Heteroskedastic)
Debiased: False
print(OLS('x2~x1+z2a', data=Ivdata).fit().summary())  # first stage again: check instrument relevance
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                     x2   R-squared:                       0.295
Model:                            OLS   Adj. R-squared:                  0.281
Method:                 Least Squares   F-statistic:                     20.33
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           4.22e-08
Time:                        10:25:29   Log-Likelihood:                -241.32
No. Observations:                 100   AIC:                             488.6
Df Residuals:                      97   BIC:                             496.5
Df Model:                           2                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept      6.3374      0.936      6.769      0.000       4.479       8.196
x1            -0.1319      0.079     -1.669      0.098      -0.289       0.025
z2a           21.8401      4.044      5.401      0.000      13.814      29.866
==============================================================================
Omnibus:                        3.112   Durbin-Watson:                   2.285
Prob(Omnibus):                  0.211   Jarque-Bera (JB):                2.268
Skew:                          -0.204   Prob(JB):                        0.322
Kurtosis:                       2.385   Cond. No.                         163.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
print(OLS('x2~x1+z2a', data=Ivdata).fit(cov_type="HC3").summary())  # same, with HC3-robust standard errors
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                     x2   R-squared:                       0.295
Model:                            OLS   Adj. R-squared:                  0.281
Method:                 Least Squares   F-statistic:                     17.75
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           2.69e-07
Time:                        10:25:29   Log-Likelihood:                -241.32
No. Observations:                 100   AIC:                             488.6
Df Residuals:                      97   BIC:                             496.5
Df Model:                           2                                         
Covariance Type:                  HC3                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept      6.3374      1.061      5.972      0.000       4.258       8.417
x1            -0.1319      0.088     -1.491      0.136      -0.305       0.042
z2a           21.8401      4.389      4.976      0.000      13.238      30.442
==============================================================================
Omnibus:                        3.112   Durbin-Watson:                   2.285
Prob(Omnibus):                  0.211   Jarque-Bera (JB):                2.268
Skew:                          -0.204   Prob(JB):                        0.322
Kurtosis:                       2.385   Cond. No.                         163.
==============================================================================

Notes:
[1] Standard Errors are heteroscedasticity robust (HC3)
stage1a = OLS('x2~x1+z2a+z2b', data=Ivdata).fit()  # first stage with both instruments
print(stage1a.f_test('z2a=0,z2b=0'))  # joint significance of the instruments
<F test: F=14.539152594855231, p=3.049887117946441e-06, df_denom=96, df_num=2>
Ivdata['v2hat'] = stage1.resid  # residuals from the z2a-only first stage above
print(OLS('y~x1+x2+v2hat', data=Ivdata).fit().summary())  # control function: the t-test on v2hat is the exogeneity (Hausman) test
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.722
Model:                            OLS   Adj. R-squared:                  0.714
Method:                 Least Squares   F-statistic:                     83.21
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           1.33e-26
Time:                        10:25:29   Log-Likelihood:                -206.34
No. Observations:                 100   AIC:                             420.7
Df Residuals:                      96   BIC:                             431.1
Df Model:                           3                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept      1.8827      1.284      1.467      0.146      -0.665       4.431
x1             0.4169      0.063      6.591      0.000       0.291       0.543
x2             0.6867      0.131      5.234      0.000       0.426       0.947
v2hat          0.3358      0.150      2.245      0.027       0.039       0.633
==============================================================================
Omnibus:                        0.788   Durbin-Watson:                   1.872
Prob(Omnibus):                  0.674   Jarque-Bera (JB):                0.897
Skew:                          -0.190   Prob(JB):                        0.638
Kurtosis:                       2.733   Cond. No.                         82.6
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
tsls = IV2SLS.from_formula('y~1+x1+[x2~z2a+z2b]', data=Ivdata).fit(cov_type='unadjusted')  # constant (1) again written explicitly; cf. ivmodel above
print(tsls.sargan)
Sargan's test of overidentification
H0: The model is not overidentified.
Statistic: 0.2278
P-value: 0.6331
Distributed: chi2(1)
The result is slightly different from the one in the book.
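The results object appears to expose related variants as well (attribute names taken from linearmodels' documentation; verify with dir(tsls) in your version):

print(tsls.basmann)            # Basmann's variant of the overidentification test
print(tsls.wooldridge_overid)  # heteroskedasticity-robust score variant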
import pandas as pd
from statsmodels.formula.api import ols as OLS
Schooling = pd.read_csv('csv/Ecdat/Schooling.csv')
fm = 'lwage76~ed76+exp76+smsa76'
ols = OLS(fm, data=Schooling).fit()
print(ols.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                lwage76   R-squared:                       0.215
Model:                            OLS   Adj. R-squared:                  0.214
Method:                 Least Squares   F-statistic:                     274.7
Date:                Sat, 17 Dec 2022   Prob (F-statistic):          1.43e-157
Time:                        10:25:29   Log-Likelihood:                -1460.6
No. Observations:                3010   AIC:                             2929.
Df Residuals:                    3006   BIC:                             2953.
Df Model:                           3                                         
Covariance Type:            nonrobust                                         
=================================================================================
                    coef    std err          t      P>|t|      [0.025      0.975]
---------------------------------------------------------------------------------
Intercept         4.6020      0.063     73.373      0.000       4.479       4.725
smsa76[T.yes]     0.1837      0.016     11.386      0.000       0.152       0.215
ed76              0.0878      0.004     24.610      0.000       0.081       0.095
exp76             0.0411      0.002     17.985      0.000       0.037       0.046
==============================================================================
Omnibus:                       47.980   Durbin-Watson:                   1.794
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               54.640
Skew:                          -0.259   Prob(JB):                     1.37e-12
Kurtosis:                       3.410   Cond. No.                         141.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
fm_iq = fm + '+iqscore'
print(OLS(fm_iq, data=Schooling).fit().summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                lwage76   R-squared:                       0.192
Model:                            OLS   Adj. R-squared:                  0.190
Method:                 Least Squares   F-statistic:                     122.0
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           1.70e-93
Time:                        10:25:29   Log-Likelihood:                -905.86
No. Observations:                2061   AIC:                             1822.
Df Residuals:                    2056   BIC:                             1850.
Df Model:                           4                                         
Covariance Type:            nonrobust                                         
=================================================================================
                    coef    std err          t      P>|t|      [0.025      0.975]
---------------------------------------------------------------------------------
Intercept         4.4761      0.089     50.331      0.000       4.302       4.650
smsa76[T.yes]     0.1548      0.019      8.145      0.000       0.118       0.192
ed76              0.0657      0.005     13.286      0.000       0.056       0.075
exp76             0.0451      0.003     16.288      0.000       0.040       0.051
iqscore           0.0044      0.001      6.972      0.000       0.003       0.006
==============================================================================
Omnibus:                       48.239   Durbin-Watson:                   1.887
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               61.311
Skew:                          -0.289   Prob(JB):                     4.86e-14
Kurtosis:                       3.616   Cond. No.                     1.13e+03
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.13e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
fm_stage1 = 'ed76~exp76+smsa76+momed'
print(fm_stage1)
ed76~exp76+smsa76+momed
stage1 = OLS(fm_stage1, data=Schooling).fit()
print(stage1.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                   ed76   R-squared:                       0.488
Model:                            OLS   Adj. R-squared:                  0.487
Method:                 Least Squares   F-statistic:                     953.2
Date:                Sat, 17 Dec 2022   Prob (F-statistic):               0.00
Time:                        10:25:29   Log-Likelihood:                -6228.3
No. Observations:                3010   AIC:                         1.246e+04
Df Residuals:                    3006   BIC:                         1.249e+04
Df Model:                           3                                         
Covariance Type:            nonrobust                                         
=================================================================================
                    coef    std err          t      P>|t|      [0.025      0.975]
---------------------------------------------------------------------------------
Intercept        13.9863      0.179     78.008      0.000      13.635      14.338
smsa76[T.yes]     0.4783      0.078      6.110      0.000       0.325       0.632
exp76            -0.3690      0.009    -41.498      0.000      -0.386      -0.352
momed             0.2132      0.012     17.327      0.000       0.189       0.237
==============================================================================
Omnibus:                       19.013   Durbin-Watson:                   1.750
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               19.290
Skew:                           0.196   Prob(JB):                     6.47e-05
Kurtosis:                       2.987   Cond. No.                         72.6
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
from linearmodels import IV2SLS
iv_model = IV2SLS.from_formula('lwage76~1+[ed76~momed]+exp76+smsa76', data=Schooling)
tsls = iv_model.fit(cov_type='unadjusted')
print(tsls)
                          IV-2SLS Estimation Summary                          
==============================================================================
Dep. Variable:                lwage76   R-squared:                      0.1499
Estimator:                    IV-2SLS   Adj. R-squared:                 0.1490
No. Observations:                3010   F-statistic:                    339.12
Date:                Sat, Dec 17 2022   P-value (F-stat)                0.0000
Time:                        10:25:29   Distribution:                  chi2(3)
Cov. Estimator:            unadjusted                                         
                                                                              
                             Parameter Estimates                              
=================================================================================
               Parameter  Std. Err.     T-stat    P-value    Lower CI    Upper CI
---------------------------------------------------------------------------------
Intercept         3.6712     0.2044     17.960     0.0000      3.2705      4.0718
exp76             0.0644     0.0054     11.925     0.0000      0.0538      0.0750
smsa76[T.yes]     0.1501     0.0182     8.2525     0.0000      0.1144      0.1857
ed76              0.1442     0.0123     11.712     0.0000      0.1201      0.1684
=================================================================================

Endogenous: ed76
Instruments: momed
Unadjusted Covariance (Homoskedastic)
Debiased: False
tsls_h = iv_model.fit(cov_type='robust')
print(tsls_h)
                          IV-2SLS Estimation Summary                          
==============================================================================
Dep. Variable:                lwage76   R-squared:                      0.1499
Estimator:                    IV-2SLS   Adj. R-squared:                 0.1490
No. Observations:                3010   F-statistic:                    357.82
Date:                Sat, Dec 17 2022   P-value (F-stat)                0.0000
Time:                        10:25:29   Distribution:                  chi2(3)
Cov. Estimator:                robust                                         
                                                                              
                             Parameter Estimates                              
=================================================================================
               Parameter  Std. Err.     T-stat    P-value    Lower CI    Upper CI
---------------------------------------------------------------------------------
Intercept         3.6712     0.2023     18.144     0.0000      3.2746      4.0677
exp76             0.0644     0.0053     12.056     0.0000      0.0540      0.0749
smsa76[T.yes]     0.1501     0.0180     8.3391     0.0000      0.1148      0.1854
ed76              0.1442     0.0122     11.778     0.0000      0.1202      0.1682
=================================================================================

Endogenous: ed76
Instruments: momed
Robust Covariance (Heteroskedastic)
Debiased: False
stage1 = OLS(fm_stage1, data=Schooling).fit()
Schooling['vhat'] = stage1.resid  # first-stage residuals
aux = OLS(fm + '+vhat', data=Schooling).fit()  # control function: the t-test on vhat is the exogeneity test
print(aux.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                lwage76   R-squared:                       0.222
Model:                            OLS   Adj. R-squared:                  0.221
Method:                 Least Squares   F-statistic:                     214.0
Date:                Sat, 17 Dec 2022   Prob (F-statistic):          9.56e-162
Time:                        10:25:29   Log-Likelihood:                -1448.0
No. Observations:                3010   AIC:                             2906.
Df Residuals:                    3005   BIC:                             2936.
Df Model:                           4                                         
Covariance Type:            nonrobust                                         
=================================================================================
                    coef    std err          t      P>|t|      [0.025      0.975]
---------------------------------------------------------------------------------
Intercept         3.6712      0.196     18.754      0.000       3.287       4.055
smsa76[T.yes]     0.1501      0.017      8.618      0.000       0.116       0.184
ed76              0.1442      0.012     12.230      0.000       0.121       0.167
exp76             0.0644      0.005     12.452      0.000       0.054       0.075
vhat             -0.0621      0.012     -5.017      0.000      -0.086      -0.038
==============================================================================
Omnibus:                       57.035   Durbin-Watson:                   1.802
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               67.028
Skew:                          -0.278   Prob(JB):                     2.79e-15
Kurtosis:                       3.474   Cond. No.                         443.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
print(OLS(fm + '+vhat', data=Schooling).fit(cov_type='HC3').summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                lwage76   R-squared:                       0.222
Model:                            OLS   Adj. R-squared:                  0.221
Method:                 Least Squares   F-statistic:                     217.3
Date:                Sat, 17 Dec 2022   Prob (F-statistic):          5.44e-164
Time:                        10:25:29   Log-Likelihood:                -1448.0
No. Observations:                3010   AIC:                             2906.
Df Residuals:                    3005   BIC:                             2936.
Df Model:                           4                                         
Covariance Type:                  HC3                                         
=================================================================================
                    coef    std err          z      P>|z|      [0.025      0.975]
---------------------------------------------------------------------------------
Intercept         3.6712      0.196     18.732      0.000       3.287       4.055
smsa76[T.yes]     0.1501      0.017      8.717      0.000       0.116       0.184
ed76              0.1442      0.012     12.161      0.000       0.121       0.167
exp76             0.0644      0.005     12.399      0.000       0.054       0.075
vhat             -0.0621      0.013     -4.888      0.000      -0.087      -0.037
==============================================================================
Omnibus:                       57.035   Durbin-Watson:                   1.802
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               67.028
Skew:                          -0.278   Prob(JB):                     2.79e-15
Kurtosis:                       3.474   Cond. No.                         443.
==============================================================================

Notes:
[1] Standard Errors are heteroscedasticity robust (HC3)
# don't omit the 1 below
tsls2 = IV2SLS.from_formula('lwage76~1+[ed76~momed+daded]+exp76+smsa76', data=Schooling).fit()
print(tsls2.sargan)
Sargan's test of overidentification
H0: The model is not overidentified.
Statistic: 3.2653
P-value: 0.0708
Distributed: chi2(1)
# first-stage regression with both instruments
stage1 = OLS('ed76~momed+daded+exp76+smsa76', data=Schooling).fit()
Schooling['ed76hat'] = stage1.fittedvalues
# orthogonalize the single overidentifying instrument (constant written explicitly)
w1 = IV2SLS.from_formula('daded~1+[ed76hat~ed76]+exp76+smsa76', data=Schooling).fit().resids
# multiply w1 by the 2SLS residuals (same model as tsls2 above)
u = IV2SLS.from_formula('lwage76~1+[ed76~momed+daded]+exp76+smsa76', data=Schooling).fit().resids
Schooling['w1u'] = w1*u
# compute the n*R^2 statistic: regress 1 on w1u without an intercept
Schooling['one'] = 1
aux = OLS('one~w1u-1', data=Schooling).fit()
stat = aux.nobs*aux.rsquared
stat
3.185349075970766
# p-value
from scipy.stats import chi2
1-chi2.cdf(stat,1)
0.07430112606543426
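For comparison, a minimal sketch of the plain (non-robust) nR^2 version, which regresses the 2SLS residuals u from above on all exogenous variables and instruments and should come out close to tsls2.sargan:

Schooling['uhat'] = u
aux2 = OLS('uhat~momed+daded+exp76+smsa76', data=Schooling).fit()
stat2 = aux2.nobs*aux2.rsquared  # nR^2 ~ chi2(#overidentifying restrictions)
print(stat2, 1 - chi2.cdf(stat2, 1))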
import pandas as pd
Ivdata = pd.read_csv('csv/loedata/Ivdata.csv')
# Require 1!
ivreg = IV2SLS.from_formula('y~1+x1+[x2+I(x2**2)~z2a+I(z2a**2)]', data=Ivdata).fit(cov_type='unadjusted')
print(ivreg)
                          IV-2SLS Estimation Summary                          
==============================================================================
Dep. Variable:                      y   R-squared:                      0.6619
Estimator:                    IV-2SLS   Adj. R-squared:                 0.6514
No. Observations:                 100   F-statistic:                    40.760
Date:                Sat, Dec 17 2022   P-value (F-stat)                0.0000
Time:                        10:25:29   Distribution:                  chi2(3)
Cov. Estimator:            unadjusted                                         
                                                                              
                             Parameter Estimates                              
==============================================================================
            Parameter  Std. Err.     T-stat    P-value    Lower CI    Upper CI
------------------------------------------------------------------------------
Intercept      1.9795     1.5627     1.2668     0.2052     -1.0833      5.0423
x1             0.4188     0.0705     5.9396     0.0000      0.2806      0.5571
I(x2**2)       0.0066     0.0536     0.1236     0.9017     -0.0985      0.1118
x2             0.6159     0.5829     1.0567     0.2907     -0.5265      1.7583
==============================================================================

Endogenous: I(x2**2), x2
Instruments: I(z2a**2), z2a
Unadjusted Covariance (Homoskedastic)
Debiased: False
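The first stages for the two endogenous regressors can be inspected from the results object; first_stage is the attribute documented by linearmodels (a minimal check):

print(ivreg.first_stage)  # first-stage diagnostics for x2 and I(x2**2)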
Ivdata['v2hat'] = OLS('x2~x1+z2a+z2b', data=Ivdata).fit().resid  # first-stage residuals (both instruments)
print(OLS('y~x1+x2+I(x2**2)+v2hat', data=Ivdata).fit().summary())  # control function with the quadratic term
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.723
Model:                            OLS   Adj. R-squared:                  0.712
Method:                 Least Squares   F-statistic:                     62.06
Date:                Sat, 17 Dec 2022   Prob (F-statistic):           1.12e-25
Time:                        10:25:29   Log-Likelihood:                -206.17
No. Observations:                 100   AIC:                             422.3
Df Residuals:                      95   BIC:                             435.4
Df Model:                           4                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept      1.9932      1.305      1.528      0.130      -0.597       4.583
x1             0.4217      0.064      6.632      0.000       0.295       0.548
x2             0.5732      0.210      2.734      0.007       0.157       0.989
I(x2 ** 2)     0.0112      0.015      0.731      0.467      -0.019       0.042
v2hat          0.3219      0.150      2.145      0.035       0.024       0.620
==============================================================================
Omnibus:                        1.003   Durbin-Watson:                   1.851
Prob(Omnibus):                  0.606   Jarque-Bera (JB):                1.106
Skew:                          -0.203   Prob(JB):                        0.575
Kurtosis:                       2.683   Cond. No.                         415.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
The standard errors above are incorrect (except in the case where the coefficient on v2hat is zero) and would have to be corrected. We do not go any further. It is hard to find a reason to use Python here at the risk of coding errors.