Commit f59201f

Added week 7 solution
wmutschl committed Dec 2, 2023
1 parent 094b94b commit f59201f
Showing 10 changed files with 586 additions and 2 deletions.
10 changes: 9 additions & 1 deletion .github/workflows/matlab.yml
@@ -63,4 +63,12 @@ jobs:
matrixAlgebraKroneckerFormula
matrixAlgebraRotation
matrixAlgebraCholesky
matrixAlgebraLyapunovComparison
- name: Run week 7 scripts
uses: matlab-actions/run-command@v1
with:
command: |
cd("progs/matlab")
VARpDimensionsIllustration
threeVariableVAROLS
threeVariableVARML
75 changes: 75 additions & 0 deletions exercises/identification_problem_svar_solution.tex
@@ -0,0 +1,75 @@
\begin{enumerate}

\item Rewrite the equations:
\begin{align*}
i_t - \beta \pi_t &= \gamma_1 i_{t-1} + \gamma_2 \pi_{t-1} + \varepsilon_t^{MP}\\
\pi_t - \delta i_t &= \gamma_3 i_{t-1} + \gamma_4 \pi_{t-1} + \varepsilon_t^{\pi}
\end{align*}
or in matrix notation:
\begin{align*}
\underbrace{\begin{pmatrix} 1 & -\beta \\ -\delta & 1 \end{pmatrix}}_{B_0} \underbrace{\begin{pmatrix} i_t \\ \pi_t \end{pmatrix}}_{y_t} = \underbrace{\begin{pmatrix} \gamma_1 & \gamma_2 \\ \gamma_3 & \gamma_4 \end{pmatrix}}_{B_1} \underbrace{\begin{pmatrix} i_{t-1} \\ \pi_{t-1} \end{pmatrix}}_{y_{t-1}} + \underbrace{\begin{pmatrix} \varepsilon_t^{MP} \\ \varepsilon_t^{\pi} \end{pmatrix}}_{\varepsilon_t}
\end{align*}

\item Pre-multiply both sides by \(B_0^{-1}\):
\begin{align*}
y_t = \underbrace{B_0^{-1} B_1}_{A_1} y_{t-1} + \underbrace{B_0^{-1} \varepsilon_t}_{u_t}
\end{align*}
Note that the reduced-form innovations \(u_t\) are a composite of the underlying structural shocks \(\varepsilon_t\):
\begin{align*}
u_t = B_0^{-1} \varepsilon_t
\end{align*}
The covariance matrices are related by:
\begin{align*}
E[u_t u_t'] = \Sigma_u = B_0^{-1} \Sigma_\varepsilon B_0^{-1'} = B_0^{-1} B_0^{-1'}
\end{align*}
Above, we make use of the common normalization \(\Sigma_\varepsilon=I\), i.e.\ the structural shocks have unit variance.
For the example above:
\begin{align*}
B_0 = \begin{pmatrix} 1 & -\beta \\ -\delta & 1 \end{pmatrix}
\\
B_0^{-1} = \frac{1}{\det(B_0)} \begin{pmatrix} 1 & \beta \\ \delta & 1 \end{pmatrix} \equiv \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\end{align*}
So the system of equations that relates reduced-form innovations to structural shocks is given by:
\begin{align*}
u_t^{i} = a \varepsilon_t^{MP} + b \varepsilon_t^{\pi}
\\
u_t^{\pi} = c \varepsilon_t^{MP} + d \varepsilon_t^{\pi}
\end{align*}
Each reduced-form innovation is a \textbf{weighted average} of the structural shocks,
where \(a,b,c,d\) represent the amounts by which a particular structural shock contributes to the variation in each residual.
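
A minimal numerical sketch of this mapping (with purely illustrative values \(\beta=0.5\) and \(\delta=0.2\), not estimates):
\begin{lstlisting}[style=Matlab-editor,basicstyle=\mlttfamily]
beta  = 0.5;  delta = 0.2;  % illustrative parameter values, not estimated
B0    = [1 -beta; -delta 1];
B0inv = inv(B0);       % maps structural shocks into reduced-form innovations
SigU  = B0inv*B0inv';  % implied Sigma_u under the normalization Sigma_eps = I
eps_t = randn(2,1);    % one draw of the structural shocks
u_t   = B0inv*eps_t;   % composite reduced-form innovations
\end{lstlisting}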

\item There is not enough information to solve this system of equations:
\(B_0\) contains 4 unknowns, but due to symmetry of \(\Sigma_u = B_0^{-1} B_0^{-1'}\) we only have 3 distinct elements in \(\Sigma_u\): two variances and one covariance.
More generally, the covariance structure leaves \(K(K-1)/2\) degrees of freedom unrestricted in \(B_0^{-1}\)
(e.g.\ for \(K=3\): 9 unknowns in \(B_0^{-1}\), but only 6 distinct elements in \(\Sigma_u\)),
and hence further restrictions are needed to achieve identification.

Some popular strategies:
\begin{enumerate}
\item Recursive ordering of variables (aka orthogonalization):
In the above example, we would set \(b=0\) to get a lower triangular \(B_0^{-1}\).
The \emph{economics} behind this choice is based on \emph{delay} assumptions,
i.e.\ how long it takes for a variable to react to a certain shock.
We can think of the structural shock in terms of the effect it exerts \textbf{contemporaneously} on the variable of interest:
\(\partial y_t = u_t = B_0^{-1} \varepsilon_t\), so we could write:
\begin{align*}
\begin{pmatrix} i_t \\ \pi_t \end{pmatrix} = \begin{pmatrix} a & 0 \\ c & d \end{pmatrix} \begin{pmatrix} \varepsilon_t^{MP} \\ \varepsilon_t^{\pi} \end{pmatrix}
\end{align*}
This lower triangular structure can be obtained by e.g.\ a Cholesky decomposition of \(\Sigma_u\) and yields \textbf{exact identification}.
The order of variables, however, matters!
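
A one-line sketch of this step, assuming the illustrative covariance matrix \texttt{SigU} from the example above (or any estimated \(\Sigma_u\)):
\begin{lstlisting}[style=Matlab-editor,basicstyle=\mlttfamily]
B0inv_rec = chol(SigU,'lower'); % lower triangular impact matrix, i.e. b = 0
% B0inv_rec*B0inv_rec' reproduces SigU up to numerical precision
\end{lstlisting}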
\item Short-run restrictions: Exclusion restrictions on the impact matrix \(B_0^{-1}\), more flexible than orthogonalization.
\item Separating transitory from permanent components by assuming long-run structural relationships,
i.e.\ restrictions on the long-run multiplier matrix \({(I-A(L))}^{-1} B_0^{-1}\).
\item Combination of short-run and long-run relationships.
\item Sign restrictions: Take the Cholesky decomposition which yields exact identification \(\Sigma_u = B_0^{-1}B_0^{-1'} = P P'\).
In this special case: \({B_0}^{-1}=P\), but this is just \textbf{ONE} possible solution.
It is also possible to decompose \(\Sigma_u = \tilde{P}\tilde{P}'\), where \(\tilde{P} = PQ'\) and \(Q\) is an orthogonal rotation matrix: \(Q'Q = QQ'=I\);
that is, \(\tilde{P}\) and \(P\) are \textbf{observationally equivalent}, because they both reproduce \(\Sigma_u\).
\(Q\) is called a rotation matrix because it allows us to \emph{rotate} the initial Cholesky (recursive) matrix while maintaining the property that shocks are uncorrelated.
Put differently, it helps us generate new weights!
This is the basic idea of sign restrictions:
Examine a large number of candidate impact matrices by repeatedly drawing at random from the set of orthogonal matrices \(Q\).
For each candidate \(B_0^{-1} = PQ'\), check whether it is compatible with the sign restrictions that characterize a certain structural shock.
The set of admissible models is then constructed from the accepted draws.
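
A sketch of a single draw of this algorithm, again assuming the illustrative \texttt{SigU} from above; the sign pattern in the acceptance rule is purely illustrative:
\begin{lstlisting}[style=Matlab-editor,basicstyle=\mlttfamily]
P = chol(SigU,'lower');    % one observationally equivalent candidate
[Q,R] = qr(randn(2,2));    % random orthogonal matrix via QR decomposition
Q = Q*diag(sign(diag(R))); % sign normalization (uniform draw from O(2))
Ptilde = P*Q';             % rotated candidate impact matrix
% Ptilde*Ptilde' still equals SigU; accept the draw if, e.g., a monetary
% policy shock raises i_t and lowers pi_t on impact (illustrative pattern)
accept = (Ptilde(1,1) > 0) && (Ptilde(2,1) < 0);
\end{lstlisting}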
\end{enumerate}

\end{enumerate}
19 changes: 19 additions & 0 deletions exercises/ml_var_p_solution.tex
@@ -0,0 +1,19 @@
\begin{enumerate}
\item From the univariate case (and undergraduate econometrics),
we know that the ML and OLS estimators of the coefficients are identical;
hence, the asymptotic normal distribution holds for the ML estimator as well.

\item Taking the derivative of the conditional log-likelihood function with respect to \(\Sigma_u\) and setting it to zero yields:
\begin{align*}
\widetilde{\Sigma}_u = \frac{\hat{U}\hat{U}'}{T}
\end{align*}
where \(\hat{U}\) contains both the ML and OLS residuals (as \(\widetilde{A}=\widehat{A}\)).
Note that, as we have already seen in the univariate case in previous exercises,
the only difference to the OLS estimator of \(\Sigma_u\)
is that ML does not correct for degrees of freedom,
but simply divides by the \textbf{effective} sample size \(T\) used in the estimation.
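
As a minimal sketch (assuming a \(K \times T\) residual matrix \texttt{U} from a VAR with \(p\) lags and a constant; \texttt{U} and \texttt{p} are placeholders, not variables defined in this exercise):
\begin{lstlisting}[style=Matlab-editor,basicstyle=\mlttfamily]
[K,T]    = size(U);          % residuals of the estimated VAR(p)
SigmaML  = (U*U')/T;         % ML: divide by effective sample size
SigmaOLS = (U*U')/(T-p*K-1); % OLS: degrees-of-freedom correction
\end{lstlisting}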

\item See the previous exercise, as the \texttt{VARReducedForm} function also outputs the ML estimate of \(\Sigma_u\):
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/threeVariableVARML.m}

\end{enumerate}
16 changes: 16 additions & 0 deletions exercises/ols_var_p_solution.tex
@@ -0,0 +1,16 @@
\begin{enumerate}
\item The dimensions are: \(y_t\) is \(K \times 1\), \(Y\) is \(K \times T\), \(u_t\) is \(K \times 1\),
\(U\) is \(K \times T\), \(c\) is \(K \times 1\), \(A_1\) is \(K \times K\), \ldots, \(A_p\) is \(K \times K\),
\(A\) is \(K \times (1+pK)\), \(\alpha\) is \((pK^2+K) \times 1\),
\(Z_{t-1}\) is \((1+pK) \times 1\), \(Z\) is \((1+pK) \times T\),
\(\Sigma_u\) is \(K \times K\) and \(\Sigma_{\hat{\alpha}}\) is \((pK^2+K) \times (pK^2+K)\).
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/VARpDimensionsIllustration.m}

\item A modified version looks like this.
Note that it also includes OLS estimation of each equation in turn.
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/VARReducedForm.m}

\item The OLS estimation of the three-variable VAR model might look like this:
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/threeVariableVAROLS.m}
The data for the Federal Funds Rate as well as GNP deflator inflation do seem to have some trend in them, but nothing serious.
\end{enumerate}
249 changes: 249 additions & 0 deletions progs/matlab/VARReducedForm.m
@@ -0,0 +1,249 @@
function VAR = VARReducedForm(ENDO,nlag,opt)
% VAR = VARReducedForm(ENDO,nlag,opt)
% -------------------------------------------------------------------------
% Estimation of a VAR(p) model with OLS and Gaussian ML:
% y_t = [c d A_1 ... A_p] [1 t y_{t-1}' ... y_{t-p}']' + u_t = A Z_{t-1} + u_t
% -----------------------------------------------------------------------
% INPUT
% - ENDO: [nobs x nvar] matrix of endogenous variables, nobs is number of observations and nvar is number of variables
% - nlag: [integer] lag length
% - opt: [structure] optional, with possible fields
% * const [flag] 0 no constant; 1 constant; 2 constant and linear trend
% * dispestim [boolean] 1: display estimation results, 0: do not
% * eqOLS [boolean] 1: perform additional estimation for each equation in turn, 0: do not
% -----------------------------------------------------------------------
% OUTPUT
% VAR: structure including VAR estimation results with the following fields:
% * ENDO: [nobs x nvar] matrix of endogenous variables
% * nlag: [integer] lag length
% * opt: [structure] options used in estimation
% * Z: [(opt.const+nvar*nlag)x(nobs-nlag)] matrix of regressors
% * Y: [nvar x (nobs-nlag)] matrix of endogenous variables actually used in estimation
% * A: [nvar x (opt.const+nvar*nlag)] matrix of estimated coefficients
% * residuals: [nvar x (nobs-nlag)] matrix of residuals
% * SigmaOLS: [nvar x nvar] OLS estimate of covariance matrix of innovations u
% * SigmaML: [nvar x nvar] ML estimate of covariance matrix of innovations u
% * Acomp [nvar*nlag x nvar*nlag] matrix of companion VAR(1) form
% * maxEig [double] maximum absolute Eigenvalue of Acomp
%
% Moreover, equation by equation OLS estimation results can be accessed
% by the substructures VAR.eqj where j=1,...,nvar, i.e. VAR.eq1, VAR.eq2,...
% with the following fields for equation j
% * beta: [opt.const+nvar*nlag x 1] double vector of regression coefficients
% * yhat: [(nobs-nlag) x 1] predicted values of endogenous variable
% * resid: [(nobs-nlag) x 1] residuals
% * sige: [double] estimated standard error of error term
% * bstd: [opt.const+nvar*nlag x 1] estimated standard error of regression coefficients
% * bint: [opt.const+nvar*nlag x 2] confidence interval for regression coefficients
% * tstat: [opt.const+nvar*nlag x 1] t-statistic of regression coefficients
% * rsqr: [double] coefficient of determination (R^2)
% * rbar: [double] adjusted coefficient of determination (adjusted R^2)
% * dw: [double] Durbin-Watson statistic
% * y: [(nobs-nlag) x 1] endogenous variable used in estimation
% * x: [(nobs-nlag) x (opt.const+nvar*nlag)] exogenous variables used in estimation
% * nobs: [double] effective sample size used in estimation
% * nvar: [double] number of regressors
% -------------------------------------------------------------------------
% CALLS
% - OLSmodel: nested function (defined below) to robustly estimate regression models with OLS
% -------------------------------------------------------------------------
% Willi Mutschler, November 29, 2022, [email protected]
% Codes are based on
% - vare.m function of James P. LeSage
% - VARmodel.m function of Ambrogio Cesa-Bianchi
% -------------------------------------------------------------------------
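% -------------------------------------------------------------------------
% EXAMPLE (illustrative usage on simulated data, not part of the course material):
%   y   = randn(200,3);        % 200 observations of 3 variables
%   VAR = VARReducedForm(y,2); % VAR(2) with constant, displays estimates
% -------------------------------------------------------------------------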


%% Get some parameters and set defaults
if nargin < 2
error('You need to specify the number of lags ''nlag''.');
end
if nlag < 1
error('nlag needs to be positive');
end

% set default options
if nargin < 3
opt.const = 1;
opt.dispestim = true;
opt.eqOLS = true;
end

if ~isfield(opt,'const')
opt.const = 1;
else
if ~ismember(opt.const,[0,1,2])
error('''opt.const'' can only take values 0, 1, or 2');
end
end

if ~isfield(opt,'dispestim')
opt.dispestim = 1;
end

if ~isfield(opt,'eqOLS')
opt.eqOLS = 1;
end

[nobs, nvar] = size(ENDO);
% feasibility check
if nobs < nvar
error('The number of observations is smaller than the number of variables, you probably need to transpose the ''ENDO'' input.')
end
nobs_eff = nobs - nlag; % effective sample size used in estimation

%% create independent vector and lagged dependent matrix
% Y = [y_{nlag+1},..., y_{nobs}] is [nvar x (nobs-nlag)] matrix of endogenous variables; note that we need to start in t=nlag+1, not in t=1
Y = transpose(ENDO((nlag+1):nobs,:));

% Z = [Z_{nlag} Z_{nlag+1} ... Z_{nobs-1}] is [(opt.const+nvar*nlag)x(nobs-nlag)] matrix of regressors
Z = transpose(lagmatrix(ENDO,1:nlag));
Z = Z(:,nlag+1:nobs); % remove initial observations
% add deterministic terms if any
if opt.const == 1
Z = [ones(1,nobs_eff); Z];
elseif opt.const == 2
Z=[ones(1,nobs_eff); (nlag+1):nobs; Z];
end


%% compute the matrix of coefficients and covariance matrix
A = (Y*Z')/(Z*Z'); % OLS and Gaussian ML estimate
U = Y-A*Z; % OLS and Gaussian ML residuals
UUt = U*U'; % sum of squared residuals
SIGOLSu = (1/(nobs_eff-nvar*nlag-opt.const))*UUt; % OLS: adjusted for number of estimated coefficients
SIGMLu = (1/nobs_eff)*UUt; % Gaussian ML: not adjusted for number of estimated coefficients

% compute maximum absolute Eigenvalue of companion VAR(1) matrix to check for stability
Acomp = [A(:,1+opt.const:nvar*nlag+opt.const);
eye(nvar*(nlag-1)) zeros(nvar*(nlag-1),nvar)];
maxEig = max(abs(eig(Acomp)));


%% OLS estimation equation by equation
if opt.eqOLS == 1
for j=1:nvar
y = Y(j,:)';
x = Z';
% put into structure
aux = sprintf('eq%d',j); % this creates strings 'eq1' 'eq2' 'eq3' which you can use below, i.e. VAR.(aux) is then VAR.eq1, VAR.eq2, etc.
VAR.(aux) = OLSmodel(y,x); %uses built-in function (see below)
end
end

%% display estimation results
if opt.dispestim
if opt.const == 0
estimtable = table([]);
elseif opt.const == 1
nuhat = A(:,1);
estimtable = table(nuhat);
elseif opt.const == 2
nuhat = A(:,1);
timehat = A(:,2);
estimtable = table(nuhat,timehat);
end
ntrend = size(estimtable,2);
Ai = reshape(A(:,(1+ntrend):end),[nvar,nvar,nlag]);
for ii = 1:nlag
estimtable = [estimtable table(Ai(:,:,ii),'VariableNames', {sprintf('Ahat%d',ii)})];
end
disp(estimtable);
disp([table(SIGOLSu) table(SIGMLu)]);
end

%% save into structure
VAR.ENDO = ENDO;
VAR.nlag = nlag;
VAR.opt = opt;
VAR.Z = Z;
VAR.Y = Y;
VAR.A = A;
VAR.residuals = U;
VAR.SigmaOLS = SIGOLSu;
VAR.SigmaML = SIGMLu; % Maximum Likelihood COV Matrix is not adjusted for # of estimated coefficients
VAR.Acomp = Acomp;
VAR.maxEig = maxEig;

%% OLSmodel.m
function OLS = OLSmodel(y,x,meth)
% OLS = OLSmodel(y,x,meth)
% -----------------------------------------------------------------------
% INPUT
% - y: dependent variable vector (nobs x 1)
% - x: independent variables matrix (nobs x nvar)
% - meth: [integer] optional, 0: compute (x'x)^(-1) via the SVD (default); otherwise via QR or backslash
% -----------------------------------------------------------------------
% OUTPUT
% - OLS: structure including OLS estimation results
% -----------------------------------------------------------------------
% Based on OLSmodel.m from Ambrogio Cesa-Bianchi, olse.m from James P.
% LeSage, and fn_ols.m from Tao Zha (Dynare implementation).
if nargin < 3
meth = 0; % use the SVD; not the fastest, but the most robust way to compute the inverse
end
signifVal = 0.05;
[T, K] = size(x);
%% compute inv(X'X)
if meth == 0 % use the SVD x = u*d*v' (economy size) for numerical robustness
[u,d,v] = svd(x,0);
vd = v.*(ones(size(v,2),1)*diag(d)'); % vd = v*d
dinv = 1./diag(d);
vdinv = v.*(ones(size(v,2),1)*dinv'); % vdinv = v*inv(d)
xtxinv = vdinv*vdinv'; % (x'x)^(-1) = v*inv(d)^2*v'
uy = u'*y;
xty = vd*uy; % x'y = v*d*u'*y
beta = xtxinv*xty; % OLS coefficient estimates
yhat = u*uy; % fitted values: u*u'*y = x*beta
else
if T < 10000 % use QR decomposition
[~, r] = qr(x,0);
xtxinv = (r'*r)\eye(K);
else % use built-in functions
xtxinv = (x'*x)\eye(K);
end
beta = xtxinv*(x'*y);
yhat = x*beta;
end
resid = y - yhat;
sigu = resid'*resid;
sige = sigu/(T-K);
tmp = (sige)*(diag(xtxinv));
sigb = sqrt(tmp);
tcrit = -tinv(signifVal/2,T-K); % critical value with T-K degrees of freedom
bint = [beta-tcrit.*sigb, beta+tcrit.*sigb];
tsta = beta./(sigb);

ym = y - mean(y);
rsqr1 = sigu;
rsqr2 = ym'*ym;
rsqr = 1.0 - rsqr1/rsqr2;
rsqr1 = rsqr1/(T-K);
rsqr2 = rsqr2/(T-1.0);
if rsqr2 ~= 0
rbar = 1 - (rsqr1/rsqr2);
else
rbar = rsqr;
end
ediff = resid(2:T) - resid(1:T-1);
dw = (ediff'*ediff)/sigu; % durbin-watson

% put into output structure
OLS.beta = beta;
OLS.yhat = yhat;
OLS.resid = resid;
OLS.sige = sige;
OLS.bstd = sigb;
OLS.bint=bint;
OLS.tstat = tsta;
OLS.rsqr = rsqr;
OLS.rbar = rbar;
OLS.dw = dw;
OLS.y = y;
OLS.x = x;
OLS.nobs = T;
OLS.nvar = K;

end % OLSmodel end


end % main Function end