Manual to accompany MATLAB package for
Bayesian VAR models
Gary Koop

Dimitris Korobilis

University of Strathclyde

University of Strathclyde

Glasgow, September 2009

Contents

1 Introduction
2 VAR models
   2.1 Analytical results for VAR models
       2.1.1 The Diffuse Prior
       2.1.2 The Natural Conjugate Prior
       2.1.3 The Minnesota Prior
   2.2 Estimation of VARs using the Gibbs sampler
       2.2.1 The Independent Normal-Wishart Prior-Posterior algorithm
       2.2.2 Stochastic Search Variable Selection in VAR models
       2.2.3 Flexible Variable Selection in VAR models
3 Time-Varying parameters VAR models
   3.1 Homoskedastic TVP-VAR
       3.1.1 Variable Selection in the Homoskedastic TVP-VAR
   3.2 Hierarchical TVP-VAR
   3.3 Heteroskedastic TVP-VAR
4 Factor models
   4.1 Static factor model
   4.2 Dynamic factor model (DFM)
   4.3 Factor-augmented VAR (FAVAR)
   4.4 Time-varying parameters Factor-augmented VAR (FAVAR)
   4.5 Data used for factor model applications

1 Introduction

This manual accompanies the monograph on empirical VAR models and the associated MATLAB code. Its ultimate purpose is to introduce academics, students and applied economists to the world of Bayesian time series modelling, combining theory with easily digestible computer code. For that reason, we present code in a format that follows the theoretical equations as closely as possible, so that users can make the connection easily and understand the models they are estimating. This means that in some cases the code might not be as computationally efficient as it could be in practice, whenever efficiency would come at the cost of clarity. We also try to avoid structure arrays, which can be confusing; that is, we represent our variables only as vectors or matrices (the SSVS model is the only exception, where we use MATLAB's cell array capabilities).
The directories in the file BAYES_VARS.zip are:

BVAR_Analytical       VAR models using analytical results
BVAR_GIBBS            VAR models using the Gibbs sampler
BVAR_FULL             Programs to replicate Empirical Illustration 1. This code uses
                      simulation to get the posterior parameters, with a flexible
                      choice of 6 different priors
SSVS                  VAR with SSVS mixture prior, as in George, Sun and Ni (2008)
VAR_Selection         Variable selection in VARs, as in Korobilis (2009b)
TVP_VAR_CK            TVP-VAR model using the Carter and Kohn (1994) smoother, as in
                      Primiceri (2005)
TVP_VAR_DK            TVP-VAR model using the Durbin and Koopman (2002) smoother
TVP_VAR_GCK           Mixture innovations TVP-VAR, as in Koop, Leon-Gonzalez and
                      Strachan (2009)
HierarchicalTVP_VAR   Hierarchical TVP-VAR, as in Chib and Greenberg (1995)
Factor_Models         Estimation of static and dynamic factor models
FAVAR                 FAVAR, as in Bernanke, Boivin and Eliasz (2005)
TVP_FAVAR             TVP-FAVAR, as in Korobilis (2009a)
GAUSS2MATLAB          Some useful GAUSS routines, transcribed for MATLAB

2 VAR models

2.1 Analytical results for VAR models

The simple, reduced-form VAR model can be written as

    Y_t = X_t A + ε_t,  with ε_t ~ N(0, Σ)    (1)

As discussed in the monograph, this model can be written in the form

    y_t = (I_M ⊗ X_t) α + ε_t    (2)

or compactly

    y_t = Z_t α + ε_t    (3)

where α = vec(A) and Z_t = I_M ⊗ X_t.

In the computations presented henceforth, we will need the OLS estimates of α, A and Σ. Using the notation X = (X_1', ..., X_T')', we define

the OLS estimate of α,

    α̂ = ( Σ_{t=1}^T Z_t' Z_t )^{-1} ( Σ_{t=1}^T Z_t' y_t )    (4)

the OLS estimate of A,

    Â = (X'X)^{-1} (X'Y)    (5)

the sum of squared errors of the VAR,

    Ŝ = (Y − XÂ)' (Y − XÂ)    (6)

and the OLS estimate of Σ,

    Σ̂ = Ŝ / (T − K)    (7)

Code BVAR_ANALYT.m (found in the folder BVAR_Analytical) gives posterior means and variances of the parameters and predictives, using the analytical formulas. Code BVAR_FULL.m (found in the folder BVAR_FULL) estimates the BVAR model combining all the priors discussed below, and provides predictions and impulse responses (check Empirical Application 1 in the monograph).
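The package's code is in MATLAB, but the OLS quantities in (4)-(7) are easy to verify in any matrix language. The following is a minimal, self-contained NumPy sketch on simulated data (the simulated VAR and all variable names are ours, not part of the package):

```python
import numpy as np

rng = np.random.default_rng(0)
T, M, p = 200, 3, 1                 # sample size, number of variables, lags
K = 1 + M * p                       # intercept plus p lags per equation

# Simulate a stable VAR(1), then build Y (T x M) and X (T x K)
A_true = np.vstack([np.zeros((1, M)), 0.5 * np.eye(M)])
raw = np.zeros((T + 1, M))
for t in range(T):
    raw[t + 1] = np.hstack([1.0, raw[t]]) @ A_true + rng.standard_normal(M)
Y = raw[1:]
X = np.column_stack([np.ones(T), raw[:-1]])

A_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # OLS estimate of A, eq. (5)
a_hat = A_hat.flatten(order="F")            # vec(A_hat), the OLS estimate of alpha
resid = Y - X @ A_hat
S_hat = resid.T @ resid                     # sum of squared errors, eq. (6)
Sigma_hat = S_hat / (T - K)                 # OLS estimate of Sigma, eq. (7)
```

Note that the OLS normal equations imply X'(Y − XÂ) = 0, which is a useful numerical check on any implementation.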

2.1.1 The Diffuse Prior

The diffuse (or Jeffreys') prior for α and Σ takes the form

    p(α, Σ) ∝ |Σ|^{−(M+1)/2}

The conditional posteriors are easily derived, and it can be proven that they are of the form

    α | Σ, y ~ N( α̂, Σ ⊗ (X'X)^{-1} ),    Σ | y ~ iW( Ŝ, T − K )
2.1.2 The Natural Conjugate Prior

The natural conjugate prior has the form

    α | Σ ~ N( a̲, Σ ⊗ V̲ )

and

    Σ^{-1} ~ W( v̲, S̲^{-1} )

The posterior for α is

    α | Σ, y ~ N( ā, Σ ⊗ V̄ )

where V̄ = ( V̲^{-1} + X'X )^{-1} and ā = vec(Ā), with Ā = V̄ ( V̲^{-1} A̲ + X'X Â ).

The posterior for Σ is

    Σ^{-1} | y ~ W( v̄, S̄^{-1} )

where v̄ = T + v̲ and S̄ = S̲ + Ŝ + Â'X'XÂ + A̲'V̲^{-1}A̲ − Ā'( V̲^{-1} + X'X )Ā.
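As a quick sanity check on the posterior formulas above, the following NumPy sketch (our own illustrative code, not part of the package; hyperparameter values are arbitrary) computes the posterior moments for the coefficient matrix and verifies that, with a very loose prior, the posterior mean collapses to the OLS estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
T, M, K = 100, 2, 3
X = rng.standard_normal((T, K))
Y = rng.standard_normal((T, M))

# Very loose prior, so the posterior mean should be close to OLS
A_prior = np.zeros((K, M))
V_prior = 1e6 * np.eye(K)

A_ols = np.linalg.solve(X.T @ X, X.T @ Y)

# Posterior moments under the natural conjugate prior
V_post = np.linalg.inv(np.linalg.inv(V_prior) + X.T @ X)
A_post = V_post @ (np.linalg.inv(V_prior) @ A_prior + X.T @ X @ A_ols)
```

Shrinking V_prior towards zero would instead pull A_post towards A_prior, which is the mechanism the Minnesota prior below exploits.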

2.1.3 The Minnesota Prior

The Minnesota prior refers mainly to restricting the prior hyperparameters of α. The data-based restrictions are the ones presented in the monograph. The prior for α is still Normal and the posteriors are similar to the natural conjugate case. Σ is assumed known in this case (for example, equal to Σ̂).

2.2 Estimation of VARs using the Gibbs sampler

2.2.1 The Independent Normal-Wishart Prior-Posterior algorithm

We write the VAR as

    y_t = Z_t α + ε_t

where Z_t = I_M ⊗ X_t and ε_t is N(0, Σ).

It can be seen that the restricted VAR can be written as a Normal linear regression model with an error covariance matrix of a particular form. A very general prior for this model (which does not involve the restrictions inherent in the natural conjugate prior) is the independent Normal-Wishart prior:

    p(α, Σ^{-1}) = p(α) p(Σ^{-1})    (8)

where

    α ~ N( a̲, V̲ )  and  Σ^{-1} ~ W( v̲, S̲^{-1} ).    (9)

Note that this prior allows the prior covariance matrix V̲ to be anything the researcher chooses, rather than the restrictive Σ ⊗ V̲ form of the natural conjugate prior. For instance, the researcher could choose a prior similar in spirit to the Minnesota prior, but allow for different forms of shrinkage in different equations. A noninformative prior can be obtained by setting v̲ = S̲ = V̲^{-1} = 0.
The conditional posteriors are:

Posterior of α

    α | y, Σ ~ N( ā, V̄ ),    (10)

where V̄ = ( V̲^{-1} + Σ_{t=1}^T Z_t' Σ^{-1} Z_t )^{-1} and ā = V̄ ( V̲^{-1} a̲ + Σ_{t=1}^T Z_t' Σ^{-1} y_t ).

Posterior of Σ

    Σ^{-1} | y, α ~ W( v̄, S̄^{-1} ),    (11)

where v̄ = T + v̲ and S̄ = S̲ + Σ_{t=1}^T (y_t − Z_t α)(y_t − Z_t α)'.

The one-step ahead predictive density, conditional on the parameters of the model, is

    y_t | Z_t, α, Σ ~ N( Z_t α, Σ )

As we note in the monograph, in order to calculate reasonable predictions, Z_t should contain lags of the dependent variables and exogenous variables which are observed at time t − h, where h is the desired forecast horizon. This result, along with a Gibbs sampler producing draws α^(r), Σ^(r) for r = 1, ..., R, allows for predictive inference.¹ For instance, the predictive mean (a popular point forecast) could be obtained as

    E( y_t | Z_t ) = ( Σ_{r=1}^R Z_t α^(r) ) / R

and other predictive moments can be calculated in a similar fashion. Alternatively, predictive simulation can be done at each Gibbs sampler draw, but this can be computationally demanding. For forecast horizons greater than one, the direct method can be used. This strategy for doing predictive analysis can be used with any of the models discussed below.

Code BVAR_GIBBS.m (found in the folder BVAR_Gibbs) estimates this model, but also allows the prior mean and covariance of α (i.e. the hyperparameters a̲, V̲) to be set as in the Minnesota case.
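A minimal Gibbs sampler for the independent Normal-Wishart prior can be sketched as follows (illustrative NumPy code on simulated data; the package's MATLAB implementation in BVAR_GIBBS.m is the reference, and all hyperparameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: bivariate VAR(1) with intercept, stacked so that y_t = Z_t alpha + e_t
T, M = 150, 2
K = 1 + M                                    # intercept + one lag per equation
n = M * K                                    # length of alpha = vec(A)
raw = rng.standard_normal((T + 1, M))
Y = raw[1:]
X = np.column_stack([np.ones(T), raw[:-1]])  # T x K
Z = np.stack([np.kron(np.eye(M), X[t][None, :]) for t in range(T)])  # T x M x n

# Independent Normal-Wishart prior (arbitrary illustrative hyperparameters)
a0, V0inv = np.zeros(n), np.eye(n) / 10.0
v0, S0 = M + 1, np.eye(M)

alpha, Sigma = np.zeros(n), np.eye(M)
draws = []
for _ in range(200):
    # Step 1: alpha | Sigma, y ~ N(a_bar, V_bar), as in (10)
    Sinv = np.linalg.inv(Sigma)
    Vbar = np.linalg.inv(V0inv + np.einsum("tmi,mn,tnj->ij", Z, Sinv, Z))
    abar = Vbar @ (V0inv @ a0 + np.einsum("tmi,mn,tn->i", Z, Sinv, Y))
    alpha = rng.multivariate_normal(abar, Vbar)
    # Step 2: Sigma^{-1} | alpha, y ~ Wishart(T + v0, S_bar^{-1}), as in (11)
    resid = Y - np.einsum("tmi,i->tm", Z, alpha)
    Sbar = S0 + resid.T @ resid
    L = np.linalg.cholesky(np.linalg.inv(Sbar))
    G = rng.standard_normal((T + v0, M)) @ L.T   # rows ~ N(0, Sbar^{-1})
    Sigma = np.linalg.inv(G.T @ G)               # inverse of a Wishart draw
    draws.append(alpha)
```

The Wishart draw is formed as a sum of outer products of Normal vectors with covariance equal to the scale matrix, which is valid because the degrees of freedom exceed the dimension here.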
2.2.2 Stochastic Search Variable Selection in VAR models

In the VAR model

    Y_t = X_t A + ε_t    (12)

we can introduce the SSVS prior (George and McCulloch, 1993), which is a hierarchical prior of the form

    α | γ ~ N( 0, DD )    (13)

where α = vec(A) = (α_1, ..., α_KM)' and D is a diagonal matrix. If we write its j-th diagonal element as D_jj, this prior implies that there is dependence on a hyperparameter γ = (γ_1, ..., γ_KM)' of the following form

    D_jj = τ_0j  if γ_j = 0,
    D_jj = τ_1j  if γ_j = 1    (14)

where we a priori set the hyperparameters τ²_0j → 0 and τ²_1j → ∞. This prior implies that when γ_j = 0, the prior variance of the j-th element of α, call it α_j, will be equal to τ²_0j, which is very low since τ²_0j → 0. Subsequently, the posterior of the j-th parameter will in this case be restricted to shrink towards the prior mean, which is 0. In the alternative case, γ_j = 1, the parameter remains unrestricted and the posterior is determined mainly by the likelihood. The SSVS prior in (13) can be written in a mixture of Normals form, which is more illuminating about the effect of each γ_j on the prior of α_j:

    α_j | γ_j ~ (1 − γ_j) N( 0, τ²_0j ) + γ_j N( 0, τ²_1j )

Whether γ_j is 0 or 1 (and hence whether α_j is restricted or not) is not chosen by the researcher, as in the case of the Minnesota prior, which favors only own lags and the constant parameters (and restricts the other R.H.S. variables in a semi-data-based way). The value of γ_j is determined fully in a data-based fashion, and hence a prior is assigned to γ. In a Bayesian context, a prior on a binomial variable which results in easy computations is the Bernoulli density. Note also that it helps calculations if we assume that the elements of γ are independent of each other and sample each γ_j individually. Subsequently, the prior for γ is of the form

    γ_j | γ_{−j} ~ Bernoulli( 1, q_j )

This prior can also be written in the form Pr(γ_j = 1) = q_j and Pr(γ_j = 0) = 1 − q_j. A typical "noninformative" value of the hyperparameter q_j is 0.5, although the reader might want to consult Chipman et al. (2001) and George and McCulloch (1997) on this issue.

Finally, for Σ we assume the standard Wishart prior

    Σ^{-1} ~ W( v̲, S̲^{-1} )

George, Sun and Ni (2008) provide details on how to implement the restriction search (SSVS prior) on the elements of Σ. The MATLAB code implements this approach, but it is not discussed here; the reader is referred to the article by George, Sun and Ni.

¹ Typically, some initial draws are discarded as the "burn in". Accordingly, r = 1, ..., R should be the post-burn-in draws.
The conditional posteriors are

1. Sample α from the density

    α | y, γ, Σ ~ N( ā, V̄ )

where V̄ = [ Σ^{-1} ⊗ (X'X) + (DD)^{-1} ]^{-1} and ā = V̄ [ ( Σ^{-1} ⊗ (X'X) ) α̂ ], where α̂ is the OLS estimate of α.

2. Sample each γ_j from the density

    γ_j | γ_{−j}, α, y ~ Bernoulli( 1, q̄_j )    (15)

where

    q̄_j = [ (1/τ_1j) exp( −α_j² / (2 τ²_1j) ) q_j ] / [ (1/τ_1j) exp( −α_j² / (2 τ²_1j) ) q_j + (1/τ_0j) exp( −α_j² / (2 τ²_0j) ) (1 − q_j) ]

3. Sample Σ^{-1} from the density

    Σ^{-1} | y, α, γ ~ Wishart( v̄, S̄^{-1} )

where v̄ = T + v̲ and S̄ = S̲ + Σ_{t=1}^T (y_t − Z_t α)(y_t − Z_t α)'.

Codes SSVS_VAR.m and SSVS_VAR_CONST.m (found in the folder SSVS_VAR) estimate this model. The first code assumes that all parameters are subject to the restriction search. The second code allows the intercepts to be unrestricted, as in the example of George, Sun and Ni (2008).
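The spike-and-slab computation in step 2 is worth seeing in isolation. Below is a hypothetical sketch of the q̄_j calculation in (15); the values of τ_0j, τ_1j and q_j are our own toy choices, not the package's defaults:

```python
import numpy as np

rng = np.random.default_rng(3)

tau0, tau1, qj = 0.01, 10.0, 0.5   # tau0 -> 0 ("spike"), tau1 large ("slab")

def draw_gamma_j(alpha_j, rng):
    """Sample gamma_j ~ Bernoulli(1, q_bar_j) given the current draw of alpha_j."""
    num = (1.0 / tau1) * np.exp(-alpha_j**2 / (2 * tau1**2)) * qj
    den = num + (1.0 / tau0) * np.exp(-alpha_j**2 / (2 * tau0**2)) * (1 - qj)
    q_bar = num / den
    return rng.binomial(1, q_bar), q_bar

# A coefficient far from zero is almost surely left unrestricted (q_bar near 1)...
g_big, q_big = draw_gamma_j(2.0, rng)
# ...while a coefficient at zero is almost surely shrunk (q_bar near 0)
g_zero, q_zero = draw_gamma_j(0.0, rng)
```

The calculation makes the intuition of the mixture prior concrete: the spike density dominates near zero, the slab density dominates away from it.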
2.2.3 Flexible Variable Selection in VAR models

Another way to incorporate variable selection in the VAR model is to explicitly restrict a parameter to be zero when its indicator variable is zero. As we explain in the monograph, the VAR model

    y_t = Z_t β + ε_t

can now be written as

    y_t = Z_t θ + ε_t

where θ = Γβ and Γ = diag(γ) = diag(γ_1, ..., γ_KM). If we denote by γ_j the j-th element of the vector γ (which is also the j-th diagonal element of the matrix Γ), and by γ_{−j} the vector γ with the j-th element removed, a Gibbs sampler for this model takes the following form:

Priors:

    β ~ N_MK( b̲, V̲ )    (16)
    γ_j | γ_{−j} ~ Bernoulli( 1, π̲_0j )    (17)
    Σ^{-1} ~ Wishart( v̲, S̲^{-1} )    (18)

Conditional posteriors:

1. Sample β from the density

    β | γ, Σ, y ~ N_MK( b̄, V̄ )    (19)

where V̄ = ( V̲^{-1} + Σ_{t=1}^T Z̃_t' Σ^{-1} Z̃_t )^{-1}, b̄ = V̄ ( V̲^{-1} b̲ + Σ_{t=1}^T Z̃_t' Σ^{-1} y_t ), and Z̃_t = Z_t Γ.

2. Sample each γ_j from the density

    γ_j | γ_{−j}, β, Σ, y ~ Bernoulli( 1, π̄_j ),    (20)

preferably in random order j, where π̄_j = l_0j / ( l_0j + l_1j ), and

    l_0j = π̲_0j exp( −(1/2) Σ_{t=1}^T tr[ Σ^{-1} (y_t − Z_t θ*_j)(y_t − Z_t θ*_j)' ] )

    l_1j = (1 − π̲_0j) exp( −(1/2) Σ_{t=1}^T tr[ Σ^{-1} (y_t − Z_t θ**_j)(y_t − Z_t θ**_j)' ] )

Here we define θ*_j to be equal to θ but with the j-th element θ_j = β_j (i.e. as when γ_j = 1). Similarly, we define θ**_j to be equal to θ but with the j-th element θ_j = 0 (i.e. as when γ_j = 0).

3. Sample Σ^{-1} from the density

    Σ^{-1} | β, γ, y ~ Wishart( v̄, S̄^{-1} )

where v̄ = T + v̲ and S̄ = S̲ + Σ_{t=1}^T (y_t − Z_t θ)(y_t − Z_t θ)'.

Code VAR_SELECTION.m (found in the folder VAR_Selection) estimates this model.
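The mechanics of the restriction are simple: multiplying Z_t by Γ zeroes out the columns of excluded coefficients, so θ_j = γ_j β_j drops from the likelihood entirely. A tiny NumPy illustration with hypothetical numbers:

```python
import numpy as np

Z_t = np.arange(1.0, 7.0).reshape(2, 3)   # one observation, M = 2, three coefficients
gamma = np.array([1.0, 0.0, 1.0])         # exclude the second coefficient
beta = np.array([0.5, -2.0, 1.5])

Z_tilde = Z_t * gamma                     # same as Z_t @ diag(gamma)
theta = gamma * beta                      # theta = Gamma beta
fitted = Z_tilde @ beta                   # identical to Z_t @ theta
```

Because Z̃_t β and Z_t θ coincide, step 1 of the sampler can simply run the usual regression update on the "masked" regressors Z̃_t.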

3 Time-Varying parameters VAR models

3.1 Homoskedastic TVP-VAR

The basic TVP-VAR can be written as

    y_t = Z_t β_t + ε_t,    (21)

and

    β_{t+1} = β_t + u_t,    (22)

where ε_t is i.i.d. N(0, Σ) and u_t is i.i.d. N(0, Q), and ε_t and u_s are independent of one another for all s and t.

In this model, using priors of the form

    β_0 ~ N( b̲, V̲ )
    Σ^{-1} ~ W( v̲, S̲^{-1} )
    Q^{-1} ~ W( v̲_Q, S̲_Q^{-1} )

we sample β_t (conditional on the values of Σ and Q) using the Kalman filter and a smoother (see the monograph for more information), and Σ^{-1} from the usual Wishart density as

    Σ^{-1} | β_t, Q, y ~ W( v̄, S̄^{-1} )    (23)

where v̄ = T + v̲ and S̄ = S̲ + Σ_{t=1}^T (y_t − Z_t β_t)(y_t − Z_t β_t)'.

Finally, we sample Q^{-1} from the Wishart density

    Q^{-1} | β_t, Σ, y ~ W( v̄_Q, S̄_Q^{-1} )    (24)

where v̄_Q = T + v̲_Q and S̄_Q = S̲_Q + Σ_{t=1}^T (β_t − β_{t−1})(β_t − β_{t−1})'.

There are two different versions of this model. The first is Homo_TVP_VAR.m (found in the folder TVP_VAR_CK), which estimates this model plus impulse responses using the Carter and Kohn (1994) algorithm. The second code is Homo_TVP_VAR_DK.m (found in the folder TVP_VAR_DK), which estimates this model plus impulse responses using the Durbin and Koopman (2002) algorithm.
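The update of Q in (24) conditions on a drawn state path. The following sketch (illustrative NumPy code; a real run would obtain β_0, ..., β_T from the Carter-Kohn smoother rather than simulate them, and the prior values here are our own) shows the step in isolation:

```python
import numpy as np

rng = np.random.default_rng(4)

T, n = 120, 4
# Stand-in for a drawn random-walk state path beta_0..beta_T
beta = np.cumsum(0.1 * rng.standard_normal((T + 1, n)), axis=0)

vQ0 = n + 1                       # prior degrees of freedom (arbitrary)
SQ0 = 0.01 * np.eye(n)            # prior scale matrix (arbitrary)
diffs = np.diff(beta, axis=0)     # beta_t - beta_{t-1}, t = 1..T
SQ_bar = SQ0 + diffs.T @ diffs    # posterior scale, as in (24)
vQ_bar = T + vQ0                  # posterior degrees of freedom

# One draw of Q^{-1} ~ Wishart(vQ_bar, SQ_bar^{-1}) via outer products
L = np.linalg.cholesky(np.linalg.inv(SQ_bar))
G = rng.standard_normal((vQ_bar, n)) @ L.T   # rows ~ N(0, SQ_bar^{-1})
Q = np.linalg.inv(G.T @ G)
```

The update for Σ in (23) has exactly the same structure, with measurement residuals replacing the state increments.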

3.1.1 Variable Selection in the Homoskedastic TVP-VAR

Variable selection is defined by rewriting the TVP-VAR model as

    y_t = Z_t θ_t + ε_t,
    β_{t+1} = β_t + u_t,

where now θ_t = Γ β_t and Γ is a diagonal matrix (see also the variable selection in the simple VAR case, where β_t is constant).

The time-varying parameters β_t and the covariance Σ are generated exactly as in the homoskedastic TVP-VAR described in the previous subsection, but conditional on the RHS variables being Z̃_t = Z_t Γ. The extra step added to the standard Gibbs sampler for TVP-VAR models is the sampling of the indicators γ_j. These are generated (preferably in random order j) as in (20). The only modification required in this case is that the densities p( y | γ_{−j}, γ_j = 1 ) and p( y | γ_{−j}, γ_j = 0 ) are now derived from the full likelihood of the TVP-VAR model, and hence l_0j and l_1j are written as

    l_0j = π̲_0j exp( −(1/2) Σ_{t=1}^T (y_t − Z_t θ*_{j,t})' Σ^{-1} (y_t − Z_t θ*_{j,t}) )

    l_1j = (1 − π̲_0j) exp( −(1/2) Σ_{t=1}^T (y_t − Z_t θ**_{j,t})' Σ^{-1} (y_t − Z_t θ**_{j,t}) )

where θ*_{j,t} and θ**_{j,t} are defined, for each t, as in section 2.2.3.

Code TVP_VAR_SELECTION.m (found in the folder VAR_Selection) estimates this model.

3.2 Hierarchical TVP-VAR

The hierarchical TVP-VAR, based on the model of Chib and Greenberg (1995), is

    y_t = Z_t β_t + ε_t
    β_{t+1} = A_0 θ_{t+1} + u_t    (25)
    θ_{t+1} = θ_t + η_t

where

    ( ε_t', u_t', η_t' )' ~ iid N( 0, blockdiag( Σ, Q, R ) ).

The priors for this model are:

Prior on A_0:    A_0 ~ N( A̲, V̲_A )
Prior on θ_0:    θ_0 ~ N( θ̲_0, V̲_θ )
Prior on Σ:      Σ^{-1} ~ W( v̲_Σ, S̲_Σ^{-1} )
Prior on Q:      Q^{-1} ~ W( v̲_Q, S̲_Q^{-1} )
Prior on R:      R^{-1} ~ W( v̲_R, S̲_R^{-1} )

By defining priors on these parameters, we also implicitly specify a prior for β_t of the form

    β_t | A_0, θ_t, Q ~ N( A_0 θ_t, Q ),  for t = 0, ..., T.

The conditional posteriors are

1. Sample β_t from

    β_t | θ_t, A_0, Σ, Q, y_t ~ N( β̄_t, V̄_β )

where β̄_t = V̄_β ( Q^{-1} A_0 θ_t + Z_t' Σ^{-1} y_t ) and V̄_β = ( Q^{-1} + Z_t' Σ^{-1} Z_t )^{-1}.

2. Sample A_0 from

    A_0 | β_t, θ_t, Q ~ N( Ā, V̄_A )

where Ā = V̄_A ( V̲_A^{-1} A̲ + θ' Q^{-1} β ) and V̄_A = ( V̲_A^{-1} + θ' Q^{-1} θ )^{-1}.

3. Sample Σ^{-1} from

    Σ^{-1} | A_0, β_t, θ_t, Q, y_t ~ W( v̄_Σ, S̄_Σ^{-1} )

where v̄_Σ = T + v̲_Σ and S̄_Σ = S̲_Σ + Σ_{t=1}^T (y_t − Z_t β_t)(y_t − Z_t β_t)'.

4. Sample Q^{-1} from

    Q^{-1} | A_0, β_t, θ_t ~ W( v̄_Q, S̄_Q^{-1} )

where v̄_Q = T + v̲_Q and S̄_Q = S̲_Q + Σ_{t=1}^T (β_t − A_0 θ_t)(β_t − A_0 θ_t)'.

5. Sample R^{-1} from

    R^{-1} | θ_t ~ W( v̄_R, S̄_R^{-1} )

where v̄_R = T + v̲_R and S̄_R = S̲_R + Σ_{t=1}^T (θ_t − θ_{t−1})(θ_t − θ_{t−1})'.

6. Sample θ_t using Carter and Kohn (1994).

Code HierarchicalTVP_VAR.m (found in the folder HierarchicalTVP_VAR) estimates the parameters of this model.
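Step 1 of this sampler is a standard Bayesian regression update applied period by period. A toy NumPy sketch for a single t (dimensions and all numerical values are ours, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# M series, n state elements, q hierarchical factors (hypothetical sizes)
M, n, q = 2, 4, 2
Z_t = rng.standard_normal((M, n))
y_t = rng.standard_normal(M)
A0 = rng.standard_normal((n, q))
theta_t = rng.standard_normal(q)
Q = np.eye(n)
Sigma = np.eye(M)

Qinv, Sinv = np.linalg.inv(Q), np.linalg.inv(Sigma)
V_bar = np.linalg.inv(Qinv + Z_t.T @ Sinv @ Z_t)                 # posterior variance
beta_bar = V_bar @ (Qinv @ (A0 @ theta_t) + Z_t.T @ Sinv @ y_t)  # posterior mean
beta_t = rng.multivariate_normal(beta_bar, V_bar)                # one draw of beta_t
```

The prior mean A_0 θ_t plays the role that a fixed prior mean plays in the constant-parameter VAR, which is what makes the model "hierarchical".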

3.3 Heteroskedastic TVP-VAR

The Heteroskedastic TVP-VAR takes the form

    y_t = Z_t β_t + ε_t

where ε_t ~ N(0, Σ_t), and Σ_t = L_t^{-1} D_t D_t (L_t^{-1})', where D_t is a diagonal matrix with diagonal elements d_it = exp( h_it / 2 ) being the time-varying error standard deviations, and L_t is a lower triangular matrix of time-varying covariances, with ones on the diagonal. For instance, in the M = 3 case we have

    L_t = [ 1        0        0
            L_21,t   1        0
            L_31,t   L_32,t   1 ]

If we first stack the unrestricted elements of L_t by rows into an M(M−1)/2 × 1 vector l_t = ( L_21,t, L_31,t, L_32,t, ..., L_M(M−1),t )', and define h_t = ( h_1t, ..., h_Mt )', then β_t, l_t and h_t follow independent random walks

    β_{t+1} = β_t + u_t
    l_{t+1} = l_t + ζ_t
    h_{t+1} = h_t + η_t

The errors in the three state equations are

    ( u_t', ζ_t', η_t' )' ~ iid N( 0, blockdiag( Q, S, W ) )

The state-space methods used to estimate β_t can also be used to estimate l_t and h_t. We remind the reader that Z_t is of dimensions M × KM, so that the number of elements of each column vector β_t is n_β = KM (for each t). Similarly, the number of elements in each column vector l_t is n_l = M(M−1)/2, and the number of elements in each column vector h_t is n_h = M. The priors (initial conditions at time t = 0) on the time-varying parameters are

    β_0 ~ N( 0, 4 I_{n_β} )
    l_0 ~ N( 0, 4 I_{n_l} )    (26)
    h_0 ~ N( 0, 4 I_{n_h} )

and the priors on their error covariances are

    Q^{-1} ~ W( 1 + n_β, [ (k_Q)² (1 + n_β) I_{n_β} ]^{-1} )
    S^{-1} ~ W( 1 + n_l, [ (k_S)² (1 + n_l) I_{n_l} ]^{-1} )    (27)
    W^{-1} ~ W( 1 + n_h, [ (k_W)² (1 + n_h) I_{n_h} ]^{-1} )

where the hyperparameters are set to k_Q = 0.01, k_S = 0.1 and k_W = 0.01, and I_m is the identity matrix of dimensions m × m. The user can also specify a prior based on the OLS estimates of a constant-parameter VAR on a training sample (see Primiceri (2005) and the code for this approach).

One can inform the priors using a training sample. In particular, assume that ξ_OLS and V(ξ_OLS) are the mean and variance, respectively, of the OLS estimate (or a Bayesian estimate using noninformative priors) of ξ ∈ {β, l, h}, based on a VAR with constant parameters estimated on an initial, training sample. Then the priors can be rewritten as

    β_0 ~ N( β_OLS, 4 V(β_OLS) )
    l_0 ~ N( l_OLS, 4 V(l_OLS) )    (28)
    h_0 ~ N( h_OLS, 4 V(h_OLS) )

and

    Q^{-1} ~ W( 1 + n_β, [ (k_Q)² (1 + n_β) V(β_OLS) ]^{-1} )
    S^{-1} ~ W( 1 + n_l, [ (k_S)² (1 + n_l) V(l_OLS) ]^{-1} )    (29)
    W^{-1} ~ W( 1 + n_h, [ (k_W)² (1 + n_h) V(h_OLS) ]^{-1} )

The posterior of β_t is easily obtained, as in the case of the Homoskedastic TVP-VAR. The only difference is that now we draw β_t conditional on the VAR covariance matrix being Σ_t. Draws of l_t and h_t will provide us with draws of L_t and D_t, respectively, and then we can recover Σ_t using Σ_t = L_t^{-1} D_t D_t (L_t^{-1})'. For detailed information, see the monograph and the appendix of Primiceri (2005).

Code Hetero_TVP_VAR.m (found in the folder TVP_VAR_CK) estimates the parameters and impulse responses from this model.
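Recovering Σ_t from a draw of l_t and h_t is a purely mechanical step. An illustrative NumPy sketch for M = 3 (the numerical values are hypothetical):

```python
import numpy as np

# One period's draws of the log-volatilities and stacked covariance elements
h_t = np.array([-0.5, 0.2, 0.1])          # h_t = (h_1t, h_2t, h_3t)'
l_t = np.array([0.3, -0.1, 0.4])          # l_t = (L_21, L_31, L_32)'

D_t = np.diag(np.exp(0.5 * h_t))          # d_it = exp(h_it / 2)
L_t = np.eye(3)
L_t[1, 0], L_t[2, 0], L_t[2, 1] = l_t     # fill the lower triangle row by row

Linv = np.linalg.inv(L_t)
Sigma_t = Linv @ D_t @ D_t @ Linv.T       # Sigma_t = L^{-1} D D (L^{-1})'
```

By construction Σ_t is symmetric and positive definite for any finite h_t and l_t, which is the point of the triangular factorization.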

4 Factor models

4.1 Static factor model

The static factor model is (ignoring the intercept)

    y_t = Λ f_t + ε_t

where y_t is an M × 1 vector of observed time series variables, f_t is a q × 1 vector of unobserved factors with f_t ~ N(0, I_q), Λ is an M × q matrix of coefficients (factor loadings), and ε_t ~ N(0, Σ) with Σ = diag( σ²_1, ..., σ²_M ). A popular way to identify this model (see Lopes and West, 2004, and Geweke and Zhou, 1996) is to impose Λ to be block lower triangular with strictly positive diagonal elements, i.e. λ_jj > 0 and λ_jk = 0 for k > j, j = 1, ..., q. Since the covariance matrix Σ is diagonal, we can treat this model as M independent regressions (conditional on knowing f_t). Subsequently, we set proper priors of the Normal - inverse-Gamma form, and Λ is sampled with the restriction that its diagonal elements come from a truncated Normal density, while the elements above the diagonal are zero.

Code BFM.m (found in the folder Factor_Models) replicates this model, following Lopes and West (2004).

4.2 Dynamic factor model (DFM)

The dynamic factor model assumes that the factors follow a VAR. A simple form of this model is

    y_it = λ_0i + λ_i f_t + ε_it
    f_t = Φ_1 f_{t−1} + ... + Φ_p f_{t−p} + ε^f_t

This model needs only a small modification in order to write it as a linear state-space model, with f_t the state variable. This modification is to write the p-lag state equation as a first-order Markov system, i.e. to transform the VAR(p) equation f_t = Φ_1 f_{t−1} + ... + Φ_p f_{t−p} + ε^f_t into a VAR(1) model; we have seen how to do this in the simple VAR models when we wanted to compute the impulse responses. Conditional on this transformation, a Gibbs sampler is used to draw f_t using the Carter and Kohn (1994) algorithm, while conditional on the draw of f_t the parameters can be estimated using any of the VAR priors of section 2 (see also the monograph). In the first (the measurement) equation, conditional on f_t, we sample each λ_i using the arguments for simple regression models, i.e. a prior of the Normal-Gamma form (see Koop, 2003).

Code BAYES_DFM.m (found in the folder Factor_Models) estimates the above model.
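The VAR(p)-to-VAR(1) companion transformation mentioned above can be sketched as follows (illustrative NumPy code with hypothetical coefficient matrices):

```python
import numpy as np

# Companion form of f_t = Phi1 f_{t-1} + Phi2 f_{t-2} + e_t, with q = 2, p = 2
q, p = 2, 2
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])

F = np.zeros((q * p, q * p))
F[:q, :q] = Phi1
F[:q, q:] = Phi2
F[q:, :q] = np.eye(q)          # identity block shifts f_{t-1} down one slot

# The stacked state s_t = (f_t', f_{t-1}')' then follows s_t = F s_{t-1} + (e_t', 0)'
f_lag1 = np.array([1.0, -1.0])
f_lag2 = np.array([0.5, 0.0])
s_prev = np.concatenate([f_lag1, f_lag2])
f_next = F[:q] @ s_prev        # conditional mean of f_t given the two lags
```

With the state now first-order Markov, the Kalman filter and the Carter-Kohn smoother apply directly.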

4.3 Factor-augmented VAR (FAVAR)

The factor-augmented VAR builds on the dynamic factor model structure and allows the identification of monetary policy shocks. We use the simple formulation of Bernanke, Boivin and Eliasz (2005), which is

    y_it = λ^f_i f_t + λ^r_i r_t + ε_it,    (30)

    ( f_t', r_t' )' = Φ̃_1 ( f_{t−1}', r_{t−1}' )' + ... + Φ̃_p ( f_{t−p}', r_{t−p}' )' + ε̃^f_t    (31)

where ε̃^f_t is i.i.d. N( 0, Σ̃^f ) and r_t is a k_r × 1 vector of observed variables. For instance, Bernanke, Boivin and Eliasz (2005) set r_t to be the Fed Funds rate (a monetary policy instrument) and, thus, k_r = 1. All other assumptions about the measurement equation are the same as for the DFM.

Note that conditional on the parameters, the factors can be sampled using state-space methods (see the previous section), where f_t is the unobserved state variable. This is easily implemented if we convert equation (31) from a VAR(p) model into a VAR(1) model (so that f_t is Markov, which is a necessary assumption in order to use the Kalman filter). Subsequently, the parameter matrices Φ̃ and Σ̃^f have to be augmented with zeros in order to conform with the VAR(1) transformation, but we sample the non-zero elements in the usual way.

A different option is to use Principal Components to approximate the factors f_t. Bayesian estimation provides us with dynamic factors, with covariance Σ̃^f. Principal Components provide us only with static factors with normalized covariance I (since Principal Components provide a solution only to the factor equation (30), without taking into account the dynamics of the factors in equation (31)). PC estimation is computationally tractable regardless of the dimension of the data or the number of factors we want to extract, but it is subject to sampling error. MCMC estimation can be cumbersome in very large problems, but having the full posterior of the factors eliminates any sampling error uncertainty. MCMC estimation (and in general likelihood-based estimation) of dynamic factors using the Kalman filter requires strong identification restrictions, which may lead to factors with poor economic content. Subsequently, the practitioner should be very careful when choosing a specific method to sample the latent factors. Our empirical application proceeds with Principal Components, due to their computational simplicity. No matter the chosen method, the parameters are sampled conditional on the current draw (for MCMC) or final estimate (if using PC) of the factors, i.e. just as if the factors were observed data.

In (30) we have M independent equations, so we can sample the parameters λ_i and σ²_i equation-by-equation. Subsequently, we have M univariate regression models, and a standard conjugate prior that can be used for the parameters is the Normal-Gamma (see Koop, 2003). Equation (31) is a VAR model on ( f_t', r_t' )', and the reader is free to use any of the priors discussed in the respective section about VAR models. For the purpose of the empirical illustration, we use the Noninformative prior. To summarize, we use priors of the form:

    λ_i ~ N( 0, cI )
    σ^{-2}_i ~ G( a, b )
    p( Φ̃, Σ̃^f ) ∝ |Σ̃^f|^{−(M + k_r + 1)/2}

where, in the absence of prior information, c, a and b can be used in a data-based fashion, or set to uninformative values, like 100, 0.01 and 0.01, respectively.

Code FAVAR.m (found in the folder FAVAR) estimates this model using Principal Components and gives impulse responses for 115 + 3 variables in total. There is the option to use MCMC estimation of the factors, but this is not done automatically; there are directions to the user on which parts of the code to comment and uncomment in order to do that.
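A principal-components approximation of the factors, as used in the empirical application, can be sketched in a few lines (our own simulated panel; the package's MATLAB routines are the reference implementation):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a panel with q common factors plus idiosyncratic noise
T, M, q = 200, 20, 2
F_true = rng.standard_normal((T, q))
Lam = rng.standard_normal((M, q))
Xp = F_true @ Lam.T + 0.1 * rng.standard_normal((T, M))

Xs = (Xp - Xp.mean(0)) / Xp.std(0)               # standardize each series
eigval, eigvec = np.linalg.eigh(Xs.T @ Xs / T)   # eigendecomposition (ascending)
F_pc = Xs @ eigvec[:, ::-1][:, :q]               # first q principal components
```

The extracted F_pc is only identified up to a rotation of the true factor space, which is why FAVAR applications interpret the factor space rather than individual factors.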

4.4 Time-varying parameters Factor-augmented VAR (FAVAR)

Extending the FAVAR model into a model with time-varying parameters is as "easy" as extending the VAR model into the TVP-VAR model we examined in the previous section. Given that one can still use a Principal Components approximation of the factors in the TVP-FAVAR model, a question of interest is which parameters should be allowed to vary over time.² For the purpose of our empirical illustration, we extend the FAVAR model of the previous subsection, equations (30) - (31), by allowing Φ̃ = [ Φ̃_1, ..., Φ̃_p ] and Σ̃^f to be time-varying, in a form which is exactly the same as the Heteroskedastic TVP-VAR model already discussed (i.e. random walk evolution of each parameter). Exact details are given in the monograph and in section 3.3. We only have to note here that, in contrast with the empirical illustrations of the Homoskedastic and Heteroskedastic TVP-VARs, we do not use a training sample for the TVP-FAVAR application (though the reader can define a training sample in the same fashion as in section 3.3). Subsequently, the priors on the time-varying parameters Φ̃_t and Σ̃^f_t are given by equations (26) and (27). The priors for the constant parameters λ_i and σ²_i are the ones used in the FAVAR model above.

One can allow the loadings matrix to be time-varying as well, along with the log-variances of the errors in equation (30). However, the loadings matrix contains many parameters, so the reader should be careful to avoid overparameterization when relaxing the assumption of constant loadings.

Code TVP_FAVAR_FULL.m (found in the folder TVP_FAVAR) estimates the TVP-FAVAR model and gives impulse responses for 115 + 3 variables in total.

² This issue is addressed in Korobilis (2009a), and the reader is referred to this paper for more information.

4.5 Data used for factor model applications

All series were downloaded from the St. Louis Fed's FRED database and cover the quarters 1959Q1 to 2006Q3. The series HHSNTN, PMNO, PMDEL, PMNV, MOCMQ and MSONDQ (series numbered 152 - 157 in the following table) were kindly provided by Mark Watson and come from the Global Insights Basic Economics Database. All series were seasonally adjusted: either taken adjusted from FRED, or by applying to the non-seasonally adjusted series a quarterly X11 filter based on an AR(4) model (after testing for seasonality). Some series in the database were observed only on a monthly basis, and quarterly values were computed by averaging the monthly values over the quarter. Following [?], the fast moving variables are interest rates, stock returns, exchange rates and commodity prices. The rest of the variables in the dataset are the slow moving variables (output, employment/unemployment etc.). All variables are transformed to be approximately stationary. In particular, if z_it is the original untransformed series, the transformation codes are (column Tcode below): 1 - no transformation (levels), x_it = z_it; 2 - first difference, x_it = z_it − z_{i,t−1}; 4 - logarithm, x_it = log z_it; 5 - first difference of logarithm, x_it = log z_it − log z_{i,t−1}.
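The transformation codes can be implemented with a small helper like the following (a hypothetical Python sketch for illustration, not part of the MATLAB package):

```python
import numpy as np

def transform(z, tcode):
    """Apply one of the table's transformation codes to a series z."""
    z = np.asarray(z, dtype=float)
    if tcode == 1:                       # levels
        return z
    if tcode == 2:                       # first difference
        return z[1:] - z[:-1]
    if tcode == 4:                       # logarithm
        return np.log(z)
    if tcode == 5:                       # first difference of logarithm
        return np.log(z[1:]) - np.log(z[:-1])
    raise ValueError(f"unknown transformation code {tcode}")

# Code 5 turns a level series into approximate period-on-period growth rates
growth = transform([100.0, 102.0, 104.04], 5)
```

Codes 2 and 5 shorten the series by one observation, which should be accounted for when aligning the panel.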
#

Mnemonic

Tcode

1
2
3
4
5
6
7
8
9

CBI
GDPC96
FINSLC96
CIVA
CP
CNCF
GDPCTPI
FPI
GSAVE

1
5
5
1
5
5
5
5
5

Description
Change in Private Inventories
Real Gross Domestic Product, 3 Decimal
Real Final Sales of Domestic Product, 3 Decimal
Corporate Inventory Valuation Adjustment
Corporate Profits After Tax
Corporate Net Cash Flow
Gross Domestic Product: Chain-type Price Index
Fixed Private Investment
Gross Saving

4.5

Data used for factor model applications

10
11

PRFI
CMDEBT

5
5

12
13
14
15
16
17
18

INDPRO
NAPM
HCOMPBS
HOABS
RCPHBS
ULCBS
COMPNFB

5
1
5
5
5
5
5

19
20

HOANBS
COMPRNFB

5
5

21
22
23
24
25
26
27

ULCNFB
UEMPLT5
UEMP5TO14
UEMP15OV
UEMP15T26
UEMP27OV
NDMANEMP

5
5
5
5
5
5
5

28
29
30
31
32
33
34
35

MANEMP
SRVPRD
USTPU
USWTRADE
USTRADE
USFIRE
USEHS
USPBS

5
5
5
5
5
5
5
5

Private Residential Fixed Investment
Household Sector: Liabilites: Household Credit
Market Debt Outstanding
Industrial Production Index
ISM Manufacturing: PMI Composite Index
Business Sector: Compensation Per Hour
Business Sector: Hours of All Persons
Business Sector: Real Compensation Per Hour
Business Sector: Unit Labor Cost
Nonfarm Business Sector: Compensation Per
Hour
Nonfarm Business Sector: Hours of All Persons
Nonfarm Business Sector: Real Compensation
Per Hour
Nonfarm Business Sector: Unit Labor Cost
Civilians Unemployed - Less Than 5 Weeks
Civilian Unemployed for 5-14 Weeks
Civilians Unemployed - 15 Weeks & Over
Civilians Unemployed for 15-26 Weeks
Civilians Unemployed for 27 Weeks & Over
All Employees: Nondurable Goods Manufacturing
Employees on Nonfarm Payrolls: Manufacturing
All Employees: Service-Providing Industries
All Employees: Trade, Transportation & Utilities
All Employees: Wholesale Trade
All Employees: Retail Trade
All Employees: Financial Activities
All Employees: Education & Health Services
All Employees: Professional & Business Services

4.5 Data used for factor model applications

No.  Mnemonic   Tcode  Description
36   USINFO     5      All Employees: Information Services
37   USSERV     5      All Employees: Other Services
38   USPRIV     5      All Employees: Total Private Industries
39   USGOVT     5      All Employees: Government
40   USLAH      5      All Employees: Leisure & Hospitality
41   AHECONS    5      Average Hourly Earnings: Construction
42   AHEMAN     5      Average Hourly Earnings: Manufacturing
43   AHETPI     5      Average Hourly Earnings: Total Private Industries
44   AWOTMAN    1      Average Weekly Hours: Overtime: Manufacturing
45   AWHMAN     1      Average Weekly Hours: Manufacturing
46   HOUST      4      Housing Starts: Total: New Privately Owned Housing Units Started
47   HOUSTNE    4      Housing Starts in Northeast Census Region
48   HOUSTMW    4      Housing Starts in Midwest Census Region
49   HOUSTS     4      Housing Starts in South Census Region
50   HOUSTW     4      Housing Starts in West Census Region
51   HOUST1F    4      Privately Owned Housing Starts: 1-Unit Structures
52   PERMIT     4      New Private Housing Units Authorized by Building Permit
53   NONREVSL   5      Total Nonrevolving Credit Outstanding, SA, Billions of Dollars
54   USGSEC     5      U.S. Government Securities at All Commercial Banks
55   OTHSEC     5      Other Securities at All Commercial Banks
56   TOTALSL    5      Total Consumer Credit Outstanding
57   BUSLOANS   5      Commercial and Industrial Loans at All Commercial Banks
58   CONSUMER   5      Consumer (Individual) Loans at All Commercial Banks
59   LOANS      5      Total Loans and Leases at Commercial Banks
60   LOANINV    5      Total Loans and Investments at All Commercial Banks

61   INVEST     5      Total Investments at All Commercial Banks
62   REALLN     5      Real Estate Loans at All Commercial Banks
63   BOGAMBSL   5      Board of Governors Monetary Base, Adjusted for Changes in Reserve Requirements
64   TRARR      5      Board of Governors Total Reserves, Adjusted for Changes in Reserve Requirements
65   BOGNONBR   5      Non-Borrowed Reserves of Depository Institutions
66   NFORBRES   1      Net Free or Borrowed Reserves of Depository Institutions
67   M1SL       5      M1 Money Stock
68   CURRSL     5      Currency Component of M1
69   CURRDD     5      Currency Component of M1 Plus Demand Deposits
70   DEMDEPSL   5      Demand Deposits at Commercial Banks
71   TCDSL      5      Total Checkable Deposits
72   TB3MS      1      3-Month Treasury Bill: Secondary Market Rate
73   TB6MS      1      6-Month Treasury Bill: Secondary Market Rate
74   GS1        1      1-Year Treasury Constant Maturity Rate
75   GS3        1      3-Year Treasury Constant Maturity Rate
76   GS5        1      5-Year Treasury Constant Maturity Rate
77   GS10       1      10-Year Treasury Constant Maturity Rate
78   MPRIME     1      Bank Prime Loan Rate
79   AAA        1      Moody’s Seasoned Aaa Corporate Bond Yield
80   BAA        1      Moody’s Seasoned Baa Corporate Bond Yield
81   sTB3MS     1      TB3MS - FEDFUNDS
82   sTB6MS     1      TB6MS - FEDFUNDS
83   sGS1       1      GS1 - FEDFUNDS
84   sGS3       1      GS3 - FEDFUNDS
85   sGS5       1      GS5 - FEDFUNDS

86   sGS10      1      GS10 - FEDFUNDS
87   sMPRIME    1      MPRIME - FEDFUNDS
88   sAAA       1      AAA - FEDFUNDS
89   sBAA       1      BAA - FEDFUNDS
90   EXSZUS     5      Switzerland / U.S. Foreign Exchange Rate
91   EXJPUS     5      Japan / U.S. Foreign Exchange Rate
92   PPIACO     5      Producer Price Index: All Commodities
93   PPICRM     5      Producer Price Index: Crude Materials for Further Processing
94   PPIFCF     5      Producer Price Index: Finished Consumer Foods
95   PPIFCG     5      Producer Price Index: Finished Consumer Goods
96   PFCGEF     5      Producer Price Index: Finished Consumer Goods Excluding Foods
97   PPIFGS     5      Producer Price Index: Finished Goods
98   PPICPE     5      Producer Price Index: Finished Goods: Capital Equipment
99   PPIENG     5      Producer Price Index: Fuels & Related Products & Power
100  PPIIDC     5      Producer Price Index: Industrial Commodities
101  PPIITM     5      Producer Price Index: Intermediate Materials: Supplies & Components
102  CPIAUCSL   5      Consumer Price Index for All Urban Consumers: All Items
103  CPIUFDSL   5      Consumer Price Index for All Urban Consumers: Food
104  CPIENGSL   5      Consumer Price Index for All Urban Consumers: Energy
105  CPILEGSL   5      Consumer Price Index for All Urban Consumers: All Items Less Energy
106  CPIULFSL   5      Consumer Price Index for All Urban Consumers: All Items Less Food
107  CPILFESL   5      Consumer Price Index for All Urban Consumers: All Items Less Food & Energy

108  OILPRICE   5      Spot Oil Price: West Texas Intermediate
109  HHSNTN     1      Uni. of Mich. Index of Consumer Expectations (BCD-83)
110  PMI        1      Purchasing Managers’ Index
111  PMNO       1      NAPM New Orders Index
112  PMDEL      1      NAPM Vendor Deliveries Index
113  PMNV       1      NAPM Inventories Index
114  MOCMQ      5      New Orders (NET) - Consumer Goods & Materials, 1996 Dollars (BCI)
115  MSONDQ     5      New Orders - Non-defence Capital Goods, 1996 Dollars (BCI)
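The transformation codes (Tcode) in the table appear to follow the convention common in this literature: 1 = use the series in levels, 4 = take logarithms, 5 = take first differences of logarithms. As an illustrative sketch only (the package itself is written in MATLAB; `apply_tcode` is a hypothetical helper name, and the code meanings above are an assumption, not taken from this table):

```python
import numpy as np

def apply_tcode(x, tcode):
    """Transform a raw series before it enters the factor model.

    Hypothetical helper for illustration; only codes 1, 4 and 5
    occur in this table.
    """
    x = np.asarray(x, dtype=float)
    if tcode == 1:
        return x                   # level, no transformation
    if tcode == 4:
        return np.log(x)           # log level
    if tcode == 5:
        return np.diff(np.log(x))  # log first difference (growth rate)
    raise ValueError(f"unsupported transformation code: {tcode}")
```

Under this reading, HOUST (code 4) would enter the model as log housing starts, while CPIAUCSL (code 5) would enter as monthly CPI inflation; note that code 5 shortens the series by one observation.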
