Performance assessment of optimal allocation for large portfolios - PowerPoint PPT Presentation



SLIDE 1

Performance assessment of optimal allocation for large portfolios

Luigi Grossi and Fabrizio Laurini

luigi.grossi@univr.it fabrizio.laurini@unipr.it

Dipartimento di Economie, Società e Istituzioni, Università di Verona; Dipartimento di Economia, Università di Parma

SLIDE 2

Basic idea of this talk

Idea: given 10,000 euros, we want to find an optimal way of investing this money in a set of, say, N given shares, so as to maximize our expected return and minimize risk after a fixed investment period. This investment should be:

- robust to estimation errors,
- stable over time.

SLIDE 3

Definitions

Let Pt = (P1t, . . . , PNt)′ be the vector of closing prices of N stocks at time t. Let yt = log(Pt/Pt−1) be the 1-period returns of the stocks. We assume that the period return yt ∼ N(µ, Σ). Let x = (x1, . . . , xN)′ denote the shares of our wealth invested in the stocks (vector of weights), such that ∑_{i=1}^{N} xi = 1.

Let µp = x′µ and σ²p = x′Σx be the expected return and the variance of the portfolio, respectively.
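In this notation, log returns and the portfolio moments are direct to compute. A minimal sketch in Python with NumPy; the price matrix and the weight vector are hypothetical, since the slides give no numerical values:

```python
import numpy as np

# Hypothetical closing prices for N = 3 stocks over 5 periods (rows = time t).
P = np.array([
    [100.0, 50.0, 20.0],
    [101.0, 49.5, 20.4],
    [102.5, 50.2, 20.1],
    [101.8, 51.0, 20.6],
    [103.0, 51.5, 20.9],
])

# 1-period log returns: y_t = log(P_t / P_{t-1})
y = np.log(P[1:] / P[:-1])

# Sample estimates of mu and Sigma from the return series
mu = y.mean(axis=0)
Sigma = np.cov(y, rowvar=False)

# Portfolio weights x, summing to 1
x = np.array([0.5, 0.3, 0.2])

mu_p = x @ mu           # portfolio expected return, x'mu
var_p = x @ Sigma @ x   # portfolio variance, x'Sigma x
```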

SLIDE 4

Mean-variance optimization procedure

In traditional mean-variance optimization of a portfolio with N assets, expected utility is maximized using the following Lagrangian function

L = µ′x − (1/2) γ x′Σx − λ(x′1 − 1)

where

γ is the parameter of relative risk aversion, λ is a Lagrange multiplier.

R1: (x′1 − 1) is the constraint requiring that the elements of the optimal weight vector sum to 1. Sometimes a no-short-selling constraint is imposed, that is xi ≥ 0, ∀i.
R2: µ and Σ are estimated by ML.
R3: Sensitivity to “outliers”.
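Without the no-short-selling constraint, the Lagrangian above has a closed-form maximizer: the first-order condition µ − γΣx − λ1 = 0 gives x = Σ⁻¹(µ − λ1)/γ, and the budget constraint x′1 = 1 pins down λ. A sketch under hypothetical parameter values (the slides give none):

```python
import numpy as np

def mean_variance_weights(mu, Sigma, gamma):
    """Closed-form maximizer of mu'x - (gamma/2) x'Sigma x  s.t.  x'1 = 1.
    From the first-order condition mu - gamma*Sigma*x - lam*1 = 0."""
    ones = np.ones(len(mu))
    Sinv = np.linalg.inv(Sigma)
    # Solve 1'x = 1 for the Lagrange multiplier lam
    lam = (ones @ Sinv @ mu - gamma) / (ones @ Sinv @ ones)
    return Sinv @ (mu - lam * ones) / gamma

# Hypothetical inputs for N = 3 assets
mu = np.array([0.010, 0.012, 0.008])
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.050, 0.005],
                  [0.004, 0.005, 0.030]])
x = mean_variance_weights(mu, Sigma, gamma=3.0)
# x sums to 1 by construction; short positions are allowed here (no R1 bound).
```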

SLIDE 5

The problem

Traditional mean-variance optimization (Markowitz) assumes that the expected returns µ and the covariance matrix Σ are known. In practice, µ and Σ must be estimated and therefore contain estimation error.

Outline:
- simulation to show the impact of estimation error on optimal asset allocation;
- new robust estimators of the portfolio mean and variance, and comparison with other robust estimators (basically MCD);
- application to real data.

SLIDE 6

Simulation experiment

N = 6, with true given parameters µ and Σ. T = 120, a 10-year time series.

Uncontaminated series:

yt ∼ N(µ, Σ)

Contaminated series:

y∗t = yt + θδt

where θ > 0 is the magnitude of the additive outlier and δt is a stochastic contamination process.
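A minimal simulation of this contamination scheme. The slide only says δt is stochastic, so the Bernoulli occurrence process and all parameter values below are illustrative assumptions, not the authors' actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 120, 6

# Illustrative true parameters (not those used in the talk)
mu = np.full(N, 0.01)
Sigma = 0.002 * np.eye(N) + 0.0005  # equicorrelated, positive definite

# Uncontaminated series: y_t ~ N(mu, Sigma)
y = rng.multivariate_normal(mu, Sigma, size=T)

# Additive-outlier contamination: y*_t = y_t + theta * delta_t,
# with delta_t a Bernoulli(0.05) occurrence process (an assumption).
theta = 0.3
delta = rng.binomial(1, 0.05, size=(T, 1))
y_star = y + theta * delta
```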

SLIDE 7

True, actual and estimated frontiers

The efficient frontier is obtained by linking points on the mean-variance plot. Different types of frontier can be drawn to compare estimators.

True efficient frontier:

µp = x′µ,  σ²p = x′Σx

Estimated frontier:

µ̂p = x̂′µ̂,  σ̂²p = x̂′Σ̂x̂

Actual frontier:

µp = x̂′µ,  σ²p = x̂′Σx̂

SLIDE 8

True, actual and estimated frontiers

[Figure: true, actual and estimated frontiers; x-axis: Risk, y-axis: Expected return.]

SLIDE 9

Uncontaminated data

True and actual frontiers with MLE.

[Figure: true frontier (solid black) and non-contaminated actual frontiers; x-axis: Risk, y-axis: Expected return.]

SLIDE 10

Contaminated data

True and actual frontiers with MLE.

[Figure: true frontier (solid black) and contaminated actual frontiers; x-axis: Risk, y-axis: Expected return.]

SLIDE 11

Assessing estimation errors

Quantitative measure of the estimation error using a given estimator:

∆µ(γ) = √{ (1/S) ∑_{s=1}^{S} [µp(γ) − µ̃ˢp(γ)]² }  ⇒ RMSE for µp

∆σ(γ) = √{ (1/S) ∑_{s=1}^{S} [σp(γ) − σ̃ˢp(γ)]² }  ⇒ RMSE for σp

where s = 1, . . . , S; S = number of simulations; (µp(γ), σp(γ)) is the target point on the true frontier (given γ) maximizing the function µp − γσp.
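Both ∆µ(γ) and ∆σ(γ) reduce to the same root-mean-squared-error computation over the S simulated estimates. A sketch with hypothetical numbers:

```python
import numpy as np

def rmse(target, estimates):
    """Delta(gamma): RMSE between the target point on the true frontier
    and its S simulated estimates."""
    estimates = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((estimates - target) ** 2)))

# Hypothetical example: target mu_p(gamma) and S = 4 simulated values
delta_mu = rmse(0.010, [0.011, 0.009, 0.012, 0.008])
```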

SLIDE 12

RMSE for MLE

RMSE for µ and for σ as γ increases. 1000 simulations.

[Figure, two panels: “Contaminated vs non-contaminated MLE estimates”; x-axis: log(gamma); y-axes: ∆(µ) and ∆(σ); lines: MLE, MLE-CONTAM.]

SLIDE 13

A new robust estimator

Target: to get a new robust version of the covariance matrix where observations are weighted according to their degree of outlyingness.

Method: distance of the Mahalanobis trajectories during the forward search and a distribution percentile

d²t = (yt − µ̂)′ Σ̂⁻¹ (yt − µ̂)

where µ̂ and Σ̂ are MLE obtained using all T observations ⇒ sensitivity to extreme observations. Multiple outliers could be masked.
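The full-sample distances above can be sketched as follows; dividing by T in the ML covariance is what ties each distance to the very extremes it is computed from (the masking problem the slide mentions). The data are simulated for illustration:

```python
import numpy as np

def mahalanobis_sq(Y):
    """Squared Mahalanobis distance of each row of Y from the ML
    estimates mu_hat, Sigma_hat computed on all T observations."""
    mu_hat = Y.mean(axis=0)
    Sigma_hat = np.cov(Y, rowvar=False, bias=True)  # ML: divide by T
    Sinv = np.linalg.inv(Sigma_hat)
    D = Y - mu_hat
    # One quadratic form (y_t - mu_hat)' Sinv (y_t - mu_hat) per row
    return np.einsum('ti,ij,tj->t', D, Sinv, D)

rng = np.random.default_rng(1)
Y = rng.normal(size=(100, 3))
d2 = mahalanobis_sq(Y)  # one squared distance per observation
```

A useful sanity check: with the ML covariance the squared distances always sum to T·N.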

SLIDE 14

Forward search Mahalanobis distances

d∗²(t) = (yt − µ̂∗m)′ (Σ̂∗m)⁻¹ (yt − µ̂∗m)

where µ̂∗m and Σ̂∗m are the mean and the covariance matrix estimated on an m-sized subset, with m < T.

R1: observations belonging to the subset S(m) are selected according to the forward search criteria.
R2: extreme multivariate observations are added to the subset in the very last steps of the procedure.
R3: the inclusion of extreme observations is pointed out by a sudden increase/decrease of the forward search trajectories. Influential observations show large values of d∗²(t) during the forward search.

SLIDE 15

Assessing the degree of outlyingness

Distribution of the Mahalanobis distance when m = T (Riani, Atkinson and Cerioli, 2009):

d∗²(t) ∼ [T/(T − 1)] [N(m − 1)/(m − N)] FN,T−N

Target: to get weights wt ∈ [0, 1] for t = 1, . . . , T to compute a weighted covariance matrix, such that the most outlying observations are down-weighted. In practice, a series of weighted returns is obtained, that is:

y⋆it = yit √wt.

The weighted covariance matrix is simply the covariance matrix based on the weighted observations, denoted W̃.

SLIDE 16

Weighted covariance matrix

The procedure: measure the distance between d∗²(t) and the quantile Fδ, as follows:

π(t)m = 0  if d∗²(t) ∈ [0, Fδ],
π(t)m = (d∗²(t) − Fδ)²  if d∗²(t) > Fδ;

get the overall distance of the t-th observation

πt = ∑_{m=m0}^{T} π(t)m / (T − m0);

map the distance to a weight

wt = exp(−πt) ∈ [0, 1].
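The π and w mappings above, together with the weighted covariance of the previous slide, can be sketched as follows. The forward search itself is not reproduced: the toy distance trajectories and the threshold `F_delta` are assumed inputs:

```python
import numpy as np

def forward_weights(d2, F_delta):
    """Map forward-search squared distances to weights in (0, 1].
    d2: array of shape (n_steps, T), d*2_(t) along the search;
    F_delta: reference quantile (assumed given).
    pi^(t)_m is 0 below the threshold, (d*2 - F_delta)^2 above it;
    pi_t averages over the search steps; w_t = exp(-pi_t)."""
    excess = np.clip(d2 - F_delta, 0.0, None) ** 2
    pi_t = excess.mean(axis=0)
    return np.exp(-pi_t)

def weighted_cov(Y, w):
    """Covariance of the weighted returns y*_it = y_it * sqrt(w_t)."""
    Y_star = Y * np.sqrt(w)[:, None]
    return np.cov(Y_star, rowvar=False)

# Toy trajectories: 2 forward-search steps (rows), T = 4 observations.
d2 = np.array([[1.0, 2.0, 3.0, 12.0],
               [2.0, 1.0, 4.0, 15.0]])
w = forward_weights(d2, F_delta=9.0)  # observation 4 is down-weighted

rng = np.random.default_rng(0)
Y = rng.normal(size=(4, 2))
S = weighted_cov(Y, w)
```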

SLIDE 17

Back to the example on simulated data

RMSE (µ and σ) for MCD, MLE and FWD with contaminated data. Single outliers.

[Figure, two panels: “Replicate of 6 isolated outliers in each series”; x-axis: log(gamma); y-axes: ∆(µ) and ∆(σ); lines: MLE, MCD, FWD.]

SLIDE 18

Back to the example on simulated data

RMSE (µ and σ) for MCD, MLE and FWD with contaminated data. Patches of outliers.

[Figure, two panels: “Patches of outliers in each series”; x-axis: log(gamma); y-axes: ∆(µ) and ∆(σ); lines: MLE, MCD, FWD.]

SLIDE 19

US market stocks

Monthly returns of six stocks of the US market (Boeing, General Electric, General Motors, IBM, Procter and Gamble, Walt Disney), with data from January 1973 to March 2009 included. Data come from Datastream.

[Figure, six panels of monthly return series: GM, DIS, BA, PG, IBM, GE; x-axis: Time; y-axis range: −0.4 to 0.4.]

SLIDE 20

Mahalanobis distances trajectories

[Figure: forward search Mahalanobis distance trajectories; x-axis: subset size (m), 50 to 300; y-axis: Mahalanobis distance.]

SLIDE 21

Example - Efficient frontiers

[Figure: efficient frontiers; x-axis: Portfolio Volatility; y-axis: Portfolio Expected Return.]

Gray line: MLE Markowitz frontier. Black dashed line: estimated frontier using the MCD robust estimator of the covariance matrix. Black solid line: estimated frontier through the forward weighted covariance matrix.

SLIDE 22

Example - Portfolio returns

[Figure: portfolio returns over 2006 for MLE, FWD and MCD; x-axis: Time.]

Portfolio performance using rolling windows in 2006. That is:
- estimate weights (MLE, MCD and FWD) using data for t = 1, . . . , T − 1 and get the average portfolio return in T;
- estimate weights (MLE, MCD and FWD) using data for t = 2, . . . , T and get the average portfolio return in T + 1;
- . . .
- estimate weights (MLE, MCD and FWD) using data for t = d, . . . , T + d − 2 and get the average portfolio return in T + d − 1.
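The rolling-window exercise can be sketched generically. The equal-weight estimator below is only a stand-in for the MLE/MCD/FWD weight estimators, and the data are simulated:

```python
import numpy as np

def rolling_performance(Y, T_win, weight_fn):
    """Rolling-window assessment: estimate weights on each window of
    length T_win, then record the portfolio return in the next period."""
    out = []
    for start in range(Y.shape[0] - T_win):
        window = Y[start:start + T_win]
        x = weight_fn(window)             # MLE / MCD / FWD would plug in here
        out.append(x @ Y[start + T_win])  # out-of-sample portfolio return
    return np.array(out)

rng = np.random.default_rng(2)
Y = rng.normal(0.01, 0.05, size=(60, 6))  # 60 months, 6 stocks (simulated)
perf = rolling_performance(Y, T_win=36,
                           weight_fn=lambda w: np.full(6, 1 / 6))
```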

SLIDE 23

Final remarks and open issues

- The robust estimator based on the forward search performs well with simulated data.
- Simulations with more general distributional assumptions.
- Theoretical problems should be fixed (distribution of the Mahalanobis distances during the search).
- New indexes to assess the superiority of robust methods in real applications.

SLIDE 24

References

[1] Atkinson, A. C., Riani, M. and Cerioli, A. (2004), Exploring Multivariate Data with the Forward Search. Springer-Verlag, New York.
[2] Atkinson, A. C., Riani, M. and Cerioli, A. (2009), Finding an unknown number of multivariate outliers. J. R. Statistical Society, Series B, 71(2), 447-466.
[3] DeMiguel, V. and Nogales, F.J. (2009), Portfolio selection with robust estimation, Operations Research, doi: 10.1287/opre.1080.0566.
[4] Fabozzi, F.J., Kolm, P.N., Pachamanova, D.A. and Focardi, S.M. (2007), Robust Portfolio Optimization and Management, Wiley, New York.
[5] Welsch, R.E. and Zhou, X. (2007), Application of robust statistics to asset allocation models. Revstat Statistical Journal, 5(1), 97-114.