Mean-variance portfolio optimization when means and covariances are estimated (PowerPoint PPT presentation)

SLIDE 1

Introduction and review A high dimensional plug-in covariance matrix estimator A modified Markowitz framework Conclusion

Mean-variance portfolio optimization when means and covariances are estimated

Zehao Chen June 1, 2007

Joint work with Tze Leung Lai (Stanford Univ.) and Haipeng Xing (Columbia Univ.)

Zehao Chen M.V. optimization when means and covariances are estimated

SLIDE 2

Outline

1. Introduction and review
   - The Markowitz framework
   - Different efficient frontier definitions under stochastic setting
2. A high dimensional plug-in covariance matrix estimator
   - The L2 boosting estimator
   - Simulation and empirical study
3. A modified Markowitz framework
   - The framework
   - An example and simulation
4. Conclusion

SLIDE 3

The basic formulation

We denote the returns of p risky assets (e.g. stock returns) by a p × 1 vector R, and an unobserved future return by r, with E(R) = µ and Cov(R) = Σ. The mean-variance optimization solves for an asset allocation w that minimizes the portfolio risk σ²_w while achieving a certain target return µ*, i.e.,

min_w w^T Σ w = min_w σ²_w
subject to w^T µ ≥ µ*, w_1 + w_2 + ... + w_p = 1
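As a concrete illustration (the three-asset data below is made up, not from the slides), with the target-return constraint held at equality and shorting allowed, the minimizer solves a linear KKT system:

```python
import numpy as np

def min_variance_weights(mu, Sigma, mu_star):
    """Markowitz weights for min w'Σw subject to w'µ = µ* and
    sum(w) = 1 (equality constraints, shorting allowed).
    Solves the KKT linear system in (w, λ1, λ2)."""
    p = len(mu)
    ones = np.ones(p)
    # KKT system: [2Σ µ 1; µ' 0 0; 1' 0 0] [w; λ1; λ2] = [0; µ*; 1]
    K = np.zeros((p + 2, p + 2))
    K[:p, :p] = 2 * Sigma
    K[:p, p] = mu
    K[p, :p] = mu
    K[:p, p + 1] = ones
    K[p + 1, :p] = ones
    rhs = np.concatenate([np.zeros(p), [mu_star, 1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:p]

mu = np.array([0.05, 0.10, 0.08])          # hypothetical expected returns
Sigma = np.diag([0.04, 0.09, 0.02])        # hypothetical covariance matrix
w = min_variance_weights(mu, Sigma, 0.07)
print(w.round(4), round(float(w @ mu), 4))
```

The solved weights satisfy both constraints by construction: they sum to one and achieve the target return exactly.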

SLIDE 4

The basic formulation

This formulation has a closed-form solution for w: w* = f(µ, µ*, Σ⁻¹). The weights sometimes carry additional constraints, e.g. l_i ≤ w_i ≤ u_i, i = 1, ..., p. If l_i ≥ 0, this is also called a no-short-selling constraint. The solution under this additional constraint requires quadratic programming.
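With box constraints the problem becomes a quadratic program; a sketch using scipy's general-purpose SLSQP solver on hypothetical data (a dedicated QP solver would also work):

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.05, 0.10, 0.08])          # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.05]])
mu_star = 0.07

res = minimize(
    lambda w: w @ Sigma @ w,               # portfolio variance
    x0=np.full(3, 1 / 3),
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1},
                 {"type": "ineq", "fun": lambda w: w @ mu - mu_star}],
    bounds=[(0.0, 1.0)] * 3,               # no short selling: 0 <= w_i <= 1
    method="SLSQP",
)
w = res.x
print(w.round(4))
```

The bounds argument enforces the l_i ≤ w_i ≤ u_i constraints; dropping it recovers the unconstrained-weights case.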

SLIDE 5

Efficient frontier

SLIDE 6

"Plug-in" efficient frontier

For w = w(X, µ*) and reasonable estimates µ̂(X) and Σ̂(X) of µ and Σ, solve

min_w w^T Σ̂ w subject to w^T µ̂ ≥ µ*

Define the "plug-in" efficient frontier, parametrized by µ*, as C(µ*) = (w^T Σ w, w^T µ)

SLIDE 7

"Plug-in" covariance estimates

Factor model (with domain knowledge): R = α + BF + ε, with Σ = W1 + W2 = B Ω B^T + Cov(ε)

SLIDE 8

"Plug-in" covariance estimates

Factor model (with domain knowledge): R = α + BF + ε, with Σ = W1 + W2 = B Ω B^T + Cov(ε)
Shrinkage estimator: α F + (1 − α) Σ̂
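A sketch of such a shrinkage estimator, with the target F taken as the diagonal of the sample covariance (both this choice of F and the fixed α below are illustrative, not the slides' prescription):

```python
import numpy as np

def shrink_cov(X, alpha=0.3):
    """Shrinkage estimator alpha*F + (1 - alpha)*Sigma_hat, with the
    structured target F taken here as the diagonal of the sample
    covariance. alpha would normally be chosen data-dependently."""
    S = np.cov(X, rowvar=False)
    F = np.diag(np.diag(S))
    return alpha * F + (1 - alpha) * S

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))       # n = 40 observations, p = 10 assets
S = np.cov(X, rowvar=False)
S_shrunk = shrink_cov(X)
# shrinking toward a diagonal target leaves the variances unchanged
print(np.allclose(np.diag(S_shrunk), np.diag(S)))  # True
```

Shrinking toward a structured target pulls the noisy off-diagonal entries of the sample covariance toward zero, which typically improves conditioning when p is large relative to n.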

SLIDE 9

Random efficient frontier?

[Four panels: estimated efficient frontiers computed from independent samples]

Figure: n = 50, p = 5, i.i.d. multivariate normal

SLIDE 10

Random efficient frontier?

The data-dependent efficient frontier is not well defined: it is a random curve.

SLIDE 11

Random efficient frontier?

The data-dependent efficient frontier is not well defined: it is a random curve. Plugging in the mean estimate µ̂ effectively imposes the constraint w^T µ̂ ≥ µ*.

SLIDE 12

Random efficient frontier?

The data-dependent efficient frontier is not well defined: it is a random curve. Plugging in the mean estimate µ̂ effectively imposes the constraint w^T µ̂ ≥ µ*. Conceptually, one should instead constrain E(w^T r) ≥ µ*.

SLIDE 13

Resampled efficient frontier

For bootstrapped samples X*_1, ..., X*_B of X,

w_M(X, µ*) = (1/B) Σ_{i=1}^{B} w(X*_i, µ*)

Define the resampled efficient frontier as C_M(µ*) = (w_M^T Σ w_M, w_M^T µ)
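A sketch of the resampling idea (illustrative only: the global-minimum-variance rule below stands in for the generic plug-in rule w(X, µ*), since the target-return version would also need µ̂):

```python
import numpy as np

def gmv_weights(S):
    """Global-minimum-variance weights for covariance S (a stand-in
    for the generic plug-in rule w(X, mu*))."""
    w = np.linalg.solve(S, np.ones(S.shape[0]))
    return w / w.sum()

def resampled_weights(X, B=200, seed=0):
    """Average the plug-in weights over B bootstrap resamples of the
    rows of X, as in w_M(X, mu*) = (1/B) * sum_i w(X*_i, mu*)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    ws = []
    for _ in range(B):
        Xb = X[rng.integers(0, n, size=n)]      # bootstrap sample X*_i
        ws.append(gmv_weights(np.cov(Xb, rowvar=False)))
    return np.mean(ws, axis=0)

rng = np.random.default_rng(1)
X = rng.normal(0.01, 0.05, size=(50, 5))        # hypothetical return history
w_M = resampled_weights(X)
print(w_M.round(4))
```

Since each resampled weight vector sums to one, so does their average; the averaging damps the sampling variability of the individual plug-in solutions.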

SLIDE 14

Previous definitions of efficient frontiers

"Plug-in" efficient frontier: min_w w^T Σ̂ w subject to w^T µ̂ ≥ µ*; C(µ*) = (w^T Σ w, w^T µ)

Resampled efficient frontier: w_M(X, µ*) = (1/B) Σ_{i=1}^{B} w(X*_i, µ*); C_M(µ*) = (w_M^T Σ w_M, w_M^T µ)

SLIDE 15

If one really wants to use the plug-in

Estimate Σ or Σ−1?

SLIDE 16

If one really wants to use the plug-in

Estimate Σ or Σ⁻¹? Employ a sparsity assumption (in practice, on residuals from some factor model) to reduce the number of parameters to estimate.

SLIDE 17

If one really wants to use the plug-in

Estimate Σ or Σ⁻¹? Employ a sparsity assumption (in practice, on residuals from some factor model) to reduce the number of parameters to estimate. Impose proper weight constraints.

SLIDE 18

A high dimensional covariance estimator

Use the Modified Cholesky decomposition Σ⁻¹ = T′D⁻¹T to reparametrize the covariance matrix, and enforce sparsity on T.

SLIDE 19

A high dimensional covariance estimator

Use the Modified Cholesky decomposition Σ⁻¹ = T′D⁻¹T to reparametrize the covariance matrix, and enforce sparsity on T. Use a coordinate-wise greedy algorithm (component-wise L2 boosting) with a modified BIC stopping criterion; this can be shown to give a consistent estimator.

SLIDE 20

A high dimensional covariance estimator

Use the Modified Cholesky decomposition Σ⁻¹ = T′D⁻¹T to reparametrize the covariance matrix, and enforce sparsity on T. Use a coordinate-wise greedy algorithm (component-wise L2 boosting) with a modified BIC stopping criterion; this can be shown to give a consistent estimator. Spectral norm convergence holds for both Σ and Σ⁻¹.

SLIDE 21

Modified Cholesky decomposition

For a random vector Y = (y_1, y_2, ..., y_p), one can write the components in "auto-regressive" form: for k ≥ 2,

y_k = µ_k + φ_{k,1} y_1 + φ_{k,2} y_2 + ... + φ_{k,k−1} y_{k−1} + ε_k

Let T be the p × p unit lower triangular matrix with −φ_{i,j} (j < i) in the lower triangle, and let D be the p × p diagonal matrix D = diag(var(y_1), var(ε_2), var(ε_3), ..., var(ε_p)). The Modified Cholesky decomposition is Σ⁻¹ = T′D⁻¹T.
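The decomposition can be computed directly from Σ by the successive regressions above; a sketch (the test matrix is hypothetical):

```python
import numpy as np

def modified_cholesky(Sigma):
    """Modified Cholesky: Sigma^{-1} = T' D^{-1} T, with T unit lower
    triangular (-phi_{k,j} below the diagonal) and D diagonal.
    phi_{k,.} are the coefficients of regressing y_k on y_1..y_{k-1},
    and d_k is the corresponding residual variance."""
    p = Sigma.shape[0]
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Sigma[0, 0]                            # var(y_1)
    for k in range(1, p):
        S11 = Sigma[:k, :k]
        s12 = Sigma[:k, k]
        phi = np.linalg.solve(S11, s12)           # regression coefficients
        T[k, :k] = -phi
        d[k] = Sigma[k, k] - s12 @ phi            # var(eps_k)
    return T, np.diag(d)

# sanity check on a random SPD matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4 * np.eye(4)
T, D = modified_cholesky(Sigma)
recon = T.T @ np.linalg.inv(D) @ T
print(np.allclose(recon, np.linalg.inv(Sigma)))  # True
```

Because ε = T·Y has diagonal covariance D, sparsity in T corresponds to each y_k depending on only a few earlier components, which is what the boosting step exploits.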

SLIDE 22

The coordinate-wise greedy algorithm

The main idea is to iteratively find the predictor most correlated with the current model residual, then include this predictor and recompute the residual.

SLIDE 23

The coordinate-wise greedy algorithm

The main idea is to iteratively find the predictor most correlated with the current model residual, then include this predictor and recompute the residual. This process generates a sequence of nested models M_1 ⊆ M_2 ⊆ ... ⊆ M_{m_n}.

SLIDE 24

The coordinate-wise greedy algorithm

The main idea is to iteratively find the predictor most correlated with the current model residual, then include this predictor and recompute the residual. This process generates a sequence of nested models M_1 ⊆ M_2 ⊆ ... ⊆ M_{m_n}.

BIC: log σ̂² + (log n / n) · #{M_k}

SLIDE 25

The coordinate-wise greedy algorithm

The main idea is to iteratively find the predictor most correlated with the current model residual, then include this predictor and recompute the residual. This process generates a sequence of nested models M_1 ⊆ M_2 ⊆ ... ⊆ M_{m_n}.

BIC: log σ̂² + (log n / n) · #{M_k}
Proposed modified BIC: n σ̂² + σ̂² (log p)(log n) · #{M_k}
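The selection loop can be sketched as follows; this is a simplified illustration on synthetic data (plain forward selection with OLS refits and the modified-BIC-style stop), not the authors' exact algorithm:

```python
import numpy as np

def greedy_select(X, y, max_steps=10):
    """Coordinate-wise greedy selection sketch: repeatedly add the
    predictor most correlated with the current residual, refit by OLS
    on the selected set, and stop when a modified-BIC-style criterion
    n*sigma2 + sigma2*log(p)*log(n)*|M| stops improving."""
    n, p = X.shape
    selected, resid = [], y - y.mean()
    best_crit = np.inf
    for _ in range(max_steps):
        corr = np.abs(X.T @ resid)
        corr[selected] = -np.inf                 # don't re-pick
        j = int(np.argmax(corr))
        trial = selected + [j]
        beta, *_ = np.linalg.lstsq(X[:, trial], y, rcond=None)
        r = y - X[:, trial] @ beta
        sigma2 = (r @ r) / n
        crit = n * sigma2 + sigma2 * np.log(p) * np.log(n) * len(trial)
        if crit >= best_crit:
            break                                # stopped improving
        best_crit, selected, resid = crit, trial, r
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = 3 * X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=200)
sel = greedy_select(X, y)
print(sorted(sel))
```

On this synthetic regression the procedure picks out the two truly active predictors and then stops, because adding a noise column no longer pays for the penalty.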

SLIDE 26

Convergence results

Definition of the spectral norm: ||A||_2 = √(λ_max(A′A))

Convergence: ||Σ̂ − Σ||_2 → 0 and ||Σ̂⁻¹ − Σ⁻¹||_2 → 0
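A quick numerical check of the definition, comparing against numpy's built-in spectral norm:

```python
import numpy as np

# The spectral norm ||A||_2 is the square root of the largest
# eigenvalue of A'A (equivalently, the largest singular value of A).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
by_def = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())
print(np.isclose(by_def, np.linalg.norm(A, 2)))  # True
```

eigvalsh is used because A′A is symmetric; for convergence statements the norm is applied to the error matrix Σ̂ − Σ.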

SLIDE 27

An example

Setting: n = 300, p = 600; the number of nonzero parameters in T is 900 (i.e. 3 per row), with the nonzero elements of T uniformly distributed in [0.5, 1].

SLIDE 28

MSE to true inverse covariance

[Two panels: QE1 vs. n for the Boosting, PLM, and Shrinkage estimators (p = 2n)]

SLIDE 29

MSE to true covariance

[Two panels: QE2 / 1000 vs. n for the Boosting, Shrinkage, and PLM estimators (p = 2n); the PLM curve stays around 8]

SLIDE 30

An empirical example: NASDAQ-100, 1990-2006

[Plot: empirical performance curves for Boosting, Shrinkage, and Sample]

Figure: Empirical performance curve with S&P500 index as market factor

SLIDE 31

Efficient frontier when means and covariances are unknown

For w(X, µ*), solve

min_w var(w^T r) subject to E(w^T r) ≥ µ*

Define the expected efficient frontier as C(µ*) = E_X{(w^T Σ w, w^T µ)}

SLIDE 32

A comparison of efficient frontiers

"Plug-in" efficient frontier: min_w w^T Σ̂ w subject to w^T µ̂ ≥ µ*; C(µ*) = (w^T Σ w, w^T µ)

Resampled efficient frontier: w_M(X, µ*) = (1/B) Σ_{i=1}^{B} w(X*_i, µ*); C_M(µ*) = (w_M^T Σ w_M, w_M^T µ)

Expected efficient frontier: min_w var(w^T r) subject to E(w^T r) ≥ µ*; C(µ*) = E_X{(w^T Σ w, w^T µ)}

SLIDE 33

Solving for the expected efficient frontier

Introduce a risk-aversion parameter λ and reformulate the problem as

min_w −E(w^T r) + λ var(w^T r)

SLIDE 34

A not-so-Bayesian stochastic control approach

Solve for a portfolio weight w(X, λ):

−E(w^T r) + λ var(w^T r) = −E(w^T r) + λ E[(w^T r)²] − λ [E(w^T r)]²

SLIDE 35

A not-so-Bayesian stochastic control approach

Solve for a portfolio weight w(X, λ):

−E(w^T r) + λ var(w^T r) = −E(w^T r) + λ E[(w^T r)²] − λ [E(w^T r)]²

However, one needs the following conditional form:

E_X[−w^T E(r|X) + λ w^T E(rr^T|X) w − λ w^T E(??|X) w]

SLIDE 36

A not-so-Bayesian stochastic control approach

This conditional form allows us to plug in reasonable guesses for E(r|X), E(rr^T|X), and E(??|X).

SLIDE 37

A not-so-Bayesian stochastic control approach

This conditional form allows us to plug in reasonable guesses for E(r|X), E(rr^T|X), and E(??|X). Develop the procedure in a Bayesian way and approximate it in a frequentist setting.

SLIDE 38

Embedding technique

Embedding technique¹: this Bayesian formulation can be shown to be equivalent to

E_X[−η w^T E(r|X) + λ w^T E(rr^T|X) w]

where η = 1 + 2λ E((w*_λ)^T r) and w*_λ is the solution to the original optimization problem with risk aversion λ.

¹ X. Y. Zhou and D. Li (2000)

SLIDE 39

How to proceed? A simple example

Take the prior Σ ∼ IW(Φ, v), so that

E(Σ|X) = αΦ + (1 − α)Σ̂
E(r|X) = E(µ|X) ≈ X̄
E(rr^T|X) = E(Σ|X) + var(µ|X, Σ) + E(r|X)E(r|X)^T ≈ ((n + 1)/n)[αΦ + (1 − α)Σ̂] + X̄X̄^T

"Semi-Empirical Bayes": estimate Φ by diag(Σ̂)
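A sketch of these approximations (the data and the fixed α below are made up; Φ is estimated by diag(Σ̂) as on the slide):

```python
import numpy as np

def semi_eb_moments(X, alpha=0.5):
    """Semi-empirical-Bayes approximations:
    E(Sigma|X) ~ alpha*Phi + (1-alpha)*Sigma_hat with Phi = diag(Sigma_hat),
    E(r|X) ~ Xbar, and
    E(r r'|X) ~ ((n+1)/n) * E(Sigma|X) + Xbar Xbar'."""
    n = X.shape[0]
    Xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    Phi = np.diag(np.diag(S))                  # semi-EB estimate of Phi
    ESigma = alpha * Phi + (1 - alpha) * S
    Err = (n + 1) / n * ESigma + np.outer(Xbar, Xbar)
    return Xbar, Err

rng = np.random.default_rng(2)
X = rng.normal(0.01, 0.05, size=(60, 4))       # hypothetical return history
m, Q = semi_eb_moments(X)
print(Q.shape, np.allclose(Q, Q.T))  # (4, 4) True
```

The returned pair (m, Q) then plays the role of (E(r|X), E(rr^T|X)) in the quadratic objective of the following slides.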

SLIDE 40

How to proceed? A simple example

Define a parametrized function w(X, λ, α, η) that minimizes −η w^T E(r|X) + λ w^T E(rr^T|X) w, with E(r|X) and E(rr^T|X) replaced by the approximations above.
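Ignoring weight constraints, this quadratic objective has a closed-form minimizer w = (η/2λ) Q⁻¹ m, with m ≈ E(r|X) and Q ≈ E(rr^T|X); a sketch with made-up inputs (the slides' w(X, λ, α, η) may additionally carry weight constraints):

```python
import numpy as np

def w_lambda(m, Q, lam, eta=1.0):
    """Unconstrained minimizer of -eta*w'm + lam*w'Qw:
    setting the gradient -eta*m + 2*lam*Q*w to zero gives
    w = (eta / (2*lam)) * Q^{-1} m."""
    return eta / (2 * lam) * np.linalg.solve(Q, m)

m = np.array([0.06, 0.04])                       # stand-in for E(r|X)
Q = np.array([[0.05, 0.01], [0.01, 0.03]])       # stand-in for E(rr'|X)
w = w_lambda(m, Q, lam=2.0)
# first-order condition: 2*lam*Q w = eta*m
print(np.allclose(2 * 2.0 * Q @ w, m))  # True
```

Larger λ (more risk aversion) scales the position down, while η rescales it up, which is exactly the role η plays in the embedding technique.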

SLIDE 41

How to proceed? A simple example

Having a parametrized procedure w(X, λ, α, η), we wish to evaluate

R(λ, α, η) = −E(w^T r) + λ var(w^T r) = E_X[−E(w^T r|X) + λ var(w^T r|X)] + λ var[E(w^T r|X)]

SLIDE 42

How to proceed? A simple example

Having a parametrized procedure w(X, λ, α, η), we wish to evaluate

R(λ, α, η) = −E(w^T r) + λ var(w^T r) = E_X[−E(w^T r|X) + λ var(w^T r|X)] + λ var[E(w^T r|X)]

Bootstrap: {X*_1, X*_2, ..., X*_B}

SLIDE 43

How to proceed? A simple example

The above can be approximated by the bootstrapped empirical risk

R̂(λ, α, η) = Ê_X[−E(w^T r|X)] + λ Ê_X[var(w^T r|X)] + λ v̂ar[E(w^T r|X)]

where

Ê_X[−E(w^T r|X)] = (1/B) Σ_{i=1}^{B} [−w^T(X*_i, λ, α, η) µ̂(X*_i)]
Ê_X[var(w^T r|X)] = (1/B) Σ_{i=1}^{B} [w^T(X*_i, λ, α, η) Σ̂(X*_i) w(X*_i, λ, α, η)]
v̂ar[E(w^T r|X)] = v̂ar[w^T(X*_i, λ, α, η) µ̂(X*_i)]

and µ̂(X*_i) and Σ̂(X*_i) are reasonable estimates of the mean and covariance matrix of X*_i.
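The three bootstrap averages can be sketched as follows (illustrative only: a global-minimum-variance rule stands in for the parametrized rule w(X*_i, λ, α, η)):

```python
import numpy as np

def bootstrap_risk(X, w_fn, lam, B=200, seed=0):
    """Bootstrapped empirical risk: average the conditional mean and
    variance terms over B resamples, plus lam times the variance of
    the conditional means. w_fn(Xb) stands in for w(X*_i, ...)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    means, varis = [], []
    for _ in range(B):
        Xb = X[rng.integers(0, n, size=n)]     # bootstrap sample X*_i
        w = w_fn(Xb)
        mu_b = Xb.mean(axis=0)                 # mu_hat(X*_i)
        S_b = np.cov(Xb, rowvar=False)         # Sigma_hat(X*_i)
        means.append(w @ mu_b)                 # estimate of E(w'r | X)
        varis.append(w @ S_b @ w)              # estimate of var(w'r | X)
    means, varis = np.array(means), np.array(varis)
    return -means.mean() + lam * varis.mean() + lam * means.var()

def gmv(Xb):
    """Stand-in plug-in rule: global-minimum-variance weights."""
    S = np.cov(Xb, rowvar=False)
    w = np.linalg.solve(S, np.ones(S.shape[0]))
    return w / w.sum()

rng = np.random.default_rng(3)
X = rng.normal(0.01, 0.05, size=(60, 4))       # hypothetical return history
R_hat = bootstrap_risk(X, gmv, lam=5.0)
print(float(R_hat))
```

In the slides' procedure this R̂ would then be minimized over (α, η) for each fixed λ.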

SLIDE 44

How to proceed? A simple example

Solve min_{α,η} R̂(λ, α, η) with reasonable initial values.

SLIDE 45

Example: n = 50, p = 5

[Plot: Std. vs. Return for the SBF, Shrinkage, Resampling, Sample, and True frontiers]

SLIDE 46

Example: n = 50, p = 30

[Plot: Std. vs. Return for the SBF, Shrinkage, Resampling, and Sample frontiers]

SLIDE 47

Example: n = 50, p = 50

[Plot: Std. vs. Return for the SBF, Shrinkage, and HCD frontiers]

SLIDE 48

Conclusion

The proposed high-dimensional covariance estimator is appropriate in sparse settings and shows better performance.

SLIDE 49

Conclusion

The proposed high-dimensional covariance estimator is appropriate in sparse settings and shows better performance. The modified Markowitz framework optimizes over the expected efficient frontier; it is a general framework and performs better in many practical scenarios.
