SLIDE 1
Harmonic Means of Wishart Matrices
Hi! I'm Asad Lodhia, a postdoc at the University of Michigan. Feel free to email me at alodhia@umich.edu. Based on joint work with Keith Levin and Liza Levina. Thanks to the organizers for the invitation! I hope you enjoy it.
SLIDE 2
What's a Harmonic Mean?
Take two positive definite matrices P1 and P2. Their harmonic mean is
\[ \left( \frac{P_1^{-1} + P_2^{-1}}{2} \right)^{-1}. \]
The AM-HM inequality:
\[ \left( \frac{P_1^{-1} + P_2^{-1}}{2} \right)^{-1} \preceq \frac{P_1 + P_2}{2}, \]
so the operator norm is smaller...
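A quick numerical sanity check of the matrix AM-HM inequality is easy to run; this numpy sketch (with illustrative random matrices, not from the talk) verifies both the Loewner ordering and the operator-norm consequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pd(d):
    # random positive definite matrix: A A^T + I keeps eigenvalues away from 0
    a = rng.standard_normal((d, d))
    return a @ a.T + np.eye(d)

P1, P2 = random_pd(5), random_pd(5)
am = (P1 + P2) / 2
hm = 2 * np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))

# AM - HM should be positive semidefinite (Loewner order), hence the
# operator norm of the harmonic mean is no larger than that of the AM
gap_eigs = np.linalg.eigvalsh(am - hm)
assert gap_eigs.min() > -1e-10
assert np.linalg.norm(hm, 2) <= np.linalg.norm(am, 2) + 1e-10
```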
SLIDE 6
The Wishart Ensemble
Let X be P × N with i.i.d. complex standard normal entries, P < N, and
\[ \left| \frac{P}{N} - \gamma \right| \le \frac{K}{P^2}, \]
for some K > 0 and γ ∈ (0, 1). The matrix W = XX*/N has limiting spectral measure
\[ \rho_\gamma(x) := \frac{\sqrt{\left((1+\sqrt{\gamma})^2 - x\right)\left(x - (1-\sqrt{\gamma})^2\right)}}{2\gamma\pi x}. \]
W is invertible with probability 1.
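The Marchenko-Pastur support and the invertibility claim can both be checked by simulation; a numpy sketch (sizes chosen for speed):

```python
import numpy as np

rng = np.random.default_rng(1)
P, N = 400, 800            # gamma = P/N = 1/2
gamma = P / N

# complex standard normal entries, variance 1 per entry
X = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
W = X @ X.conj().T / N
eigs = np.linalg.eigvalsh(W)

lo, hi = (1 - np.sqrt(gamma))**2, (1 + np.sqrt(gamma))**2
# spectrum concentrates on the MP support [(1-sqrt(g))^2, (1+sqrt(g))^2]
assert eigs.min() > lo - 0.1 and eigs.max() < hi + 0.1
assert eigs.min() > 0      # W invertible, since P < N
```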
SLIDE 10
Covariance Estimation
The matrix W is an estimate of the covariance matrix (in this case I). The MP law shows W isn't good in this high-d setting. Quantitatively,
\[ \lim_{P,N\to\infty} \|W - I\| = \gamma + 2\sqrt{\gamma} \quad \text{a.s.} \]
Is there something closer to I in operator norm? Optimizing the Frobenius norm has been done (Ledoit, Péché, Wolf).
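The almost-sure limit γ + 2√γ is just the larger of the two MP edge deviations, max((1+√γ)² − 1, 1 − (1−√γ)²); a simulation sketch in numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
P, N = 500, 1000
gamma = P / N

X = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
W = X @ X.conj().T / N

# operator-norm error of the sample covariance around the truth I
err = np.linalg.norm(W - np.eye(P), 2)
limit = gamma + 2 * np.sqrt(gamma)
assert abs(err - limit) < 0.1
```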
SLIDE 16
Bear with me here...
Let's imagine I have multiple matrices {Xi}, i = 1, ..., n, all P × N. Suppose we form the average
\[ A = \frac{1}{n} \sum_{i=1}^{n} W_i, \quad \text{where } W_i = \frac{X_i X_i^*}{N}. \]
Just line up the Xi, side by side:
\[ \lim_{P,N\to\infty} \|A - I\| = \frac{\gamma}{n} + 2\sqrt{\frac{\gamma}{n}} \quad \text{a.s.} \]
Closer now.
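"Line up the Xi side by side" means A is itself a single Wishart matrix built from the P × nN block [X1 ... Xn], so its aspect ratio is γ/n; a numpy sketch checking the improved limit:

```python
import numpy as np

rng = np.random.default_rng(3)
P, N, n = 400, 800, 4
gamma = P / N

def wishart():
    X = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
    return X @ X.conj().T / N

A = sum(wishart() for _ in range(n)) / n
err = np.linalg.norm(A - np.eye(P), 2)
# same law as a single Wishart with ratio gamma/n
limit = gamma / n + 2 * np.sqrt(gamma / n)
assert abs(err - limit) < 0.1
```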
SLIDE 21
Alternative Means
Take their harmonic mean:
\[ H = n\left(W_1^{-1} + \cdots + W_n^{-1}\right)^{-1}. \]
Result: the limiting ESD is
\[ \frac{n\sqrt{(e_+ - x)(x - e_-)}}{2\pi\gamma x}, \quad \text{where} \quad e_\pm = 1 - \gamma + \frac{2\gamma}{n} \pm 2\sqrt{\frac{\gamma}{n}}\sqrt{1 - \gamma + \frac{\gamma}{n}}. \]
Also:
\[ \lim_{P,N\to\infty} \|H - I\| = 1 - e_- = \gamma - \frac{2\gamma}{n} + 2\sqrt{\frac{\gamma}{n}}\sqrt{1 - \gamma + \frac{\gamma}{n}}. \]
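Taking the edge formulas e± at face value, a simulation sketch in numpy can check both the limiting support of H and the norm limit 1 − e−:

```python
import numpy as np

rng = np.random.default_rng(4)
P, N, n = 400, 800, 4
gamma = P / N

def wishart():
    X = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
    return X @ X.conj().T / N

H = n * np.linalg.inv(sum(np.linalg.inv(wishart()) for _ in range(n)))
eigs = np.linalg.eigvalsh((H + H.conj().T) / 2)   # symmetrize numerical noise

s = np.sqrt(gamma / n)
root = np.sqrt(1 - gamma + gamma / n)
e_minus = 1 - gamma + 2 * gamma / n - 2 * s * root
e_plus = 1 - gamma + 2 * gamma / n + 2 * s * root

assert eigs.min() > e_minus - 0.1 and eigs.max() < e_plus + 0.1
assert abs(np.linalg.norm(H - np.eye(P), 2) - (1 - e_minus)) < 0.1
```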
SLIDE 25
[Figure: ESD vs LSD, P = 500, N = 1000 and n = 2]
SLIDE 26
Improved Operator Norm Estimate
We have the a.s. result
\[ \lim_{P,N\to\infty} \|H - I\| < \lim_{P,N\to\infty} \|A - I\| \]
for n ≤ n*(γ). Indeed this is equivalent to
\[ \gamma - \frac{2\gamma}{n} + 2\sqrt{\frac{\gamma}{n}}\sqrt{1 - \gamma + \frac{\gamma}{n}} < \frac{\gamma}{n} + 2\sqrt{\frac{\gamma}{n}}. \]
For n = 2 it's always true for γ ∈ (0, 1):
\[ 2\sqrt{\frac{\gamma}{2}}\sqrt{1 - \frac{\gamma}{2}} < \frac{\gamma}{2} + 2\sqrt{\frac{\gamma}{2}}. \]
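The n = 2 comparison needs no simulation at all; evaluating the two limit formulas on a grid of γ values shows the harmonic mean wins everywhere on (0, 1):

```python
import numpy as np

g = np.linspace(0.01, 0.99, 99)
n = 2
# harmonic-mean error limit vs averaging error limit, as functions of gamma
h_err = g - 2 * g / n + 2 * np.sqrt(g / n) * np.sqrt(1 - g + g / n)
a_err = g / n + 2 * np.sqrt(g / n)
assert (h_err < a_err).all()
```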
SLIDE 30
[Figure: error comparison for n = 2 as a function of γ]
SLIDE 31
Is this Just an Identity Matrix Fact?
Answer: No, but general covariance is tricky. Submultiplicative bound:
\[ \frac{\|\sqrt{\Sigma} H \sqrt{\Sigma} - \Sigma\|}{\|\sqrt{\Sigma} A \sqrt{\Sigma} - \Sigma\|} \le \|\Sigma\|\|\Sigma^{-1}\| \, \frac{\|H - I\|}{\|A - I\|}. \]
So if we have
\[ \limsup_{P,N\to\infty} \|\Sigma\|\|\Sigma^{-1}\| \, \frac{\|H - I\|}{\|A - I\|} < 1, \]
then
\[ \limsup_{P,N\to\infty} \frac{\|\sqrt{\Sigma} H \sqrt{\Sigma} - \Sigma\|}{\|\sqrt{\Sigma} A \sqrt{\Sigma} - \Sigma\|} < 1. \]
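The submultiplicative bound is deterministic: it only uses ‖√Σ M √Σ‖ ≤ ‖Σ‖‖M‖ and ‖M‖ ≤ ‖Σ⁻¹‖‖√Σ M √Σ‖ for Hermitian M. A numpy sketch with an illustrative Σ (assumed here for demonstration, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(5)
P = 100

def wishart(N):
    X = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
    return X @ X.conj().T / N

W1, W2 = wishart(200), wishart(200)
A = (W1 + W2) / 2
H = 2 * np.linalg.inv(np.linalg.inv(W1) + np.linalg.inv(W2))

# a positive definite Sigma with moderate condition number (illustrative)
vals = rng.uniform(0.5, 2.0, P)
Q, _ = np.linalg.qr(rng.standard_normal((P, P)))
Sigma = Q @ np.diag(vals) @ Q.T
sqrt_Sigma = Q @ np.diag(np.sqrt(vals)) @ Q.T

op = lambda M: np.linalg.norm(M, 2)
I = np.eye(P)
lhs = op(sqrt_Sigma @ H @ sqrt_Sigma - Sigma) / op(sqrt_Sigma @ A @ sqrt_Sigma - Sigma)
rhs = op(Sigma) * op(np.linalg.inv(Sigma)) * op(H - I) / op(A - I)
assert lhs <= rhs + 1e-9
```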
SLIDE 35
Condition on the Condition Number!
The ratio
\[ c := \|\Sigma\|\|\Sigma^{-1}\| = \frac{\lambda_{\max}(\Sigma)}{\lambda_{\min}(\Sigma)} \]
is the condition number of Σ. For γ = 1/2, the result still holds for any condition number
\[ c < \frac{5}{2\sqrt{3}} \approx 1.44337567\ldots \]
More on general Σ later.
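The printed decimal 1.44337567... matches the closed form 5/(2√3); a one-line check (assumption: that is the intended closed form of the garbled threshold):

```python
import numpy as np

# threshold on the condition number for gamma = 1/2
c_star = 5 / (2 * np.sqrt(3))
assert abs(c_star - 1.44337567) < 1e-8
```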
SLIDE 39
Applications: Data Splitting
Suppose T = nN is my total number of observations; define
\[ \Gamma := \lim_{P,T\to\infty} \frac{P}{T} = \frac{\gamma}{n} \in \left(0, \tfrac{1}{2}\right). \]
Then
\[ \lim_{P,T\to\infty} \|A - I\| = \Gamma + 2\sqrt{\Gamma} \]
and
\[ \lim_{P,T\to\infty} \|H - I\| = (n - 2)\Gamma + 2\sqrt{\Gamma\left(1 - (n-1)\Gamma\right)}. \]
The argmin is 2! If T is at least twice P, split your data in two and take the harmonic mean.
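With T fixed, the averaging error no longer depends on n, while the harmonic-mean error does; evaluating the two formulas for a sample value of Γ (chosen here for illustration) shows the minimum over n sits at n = 2:

```python
import numpy as np

Gamma = 0.1                 # illustrative limiting ratio P/T
ns = np.arange(2, 10)       # keep gamma = n * Gamma < 1
h_err = (ns - 2) * Gamma + 2 * np.sqrt(Gamma * (1 - (ns - 1) * Gamma))
a_err = Gamma + 2 * np.sqrt(Gamma)   # independent of how we split

assert ns[np.argmin(h_err)] == 2     # splitting in two is optimal
assert h_err[0] < a_err              # and beats plain averaging
```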
SLIDE 43
Proof of Results (Techniques)
Need the Xi to be P × N with complex Gaussian entries and
\[ \left| \frac{P}{N} - \gamma \right| \le \frac{K}{P^2} \]
for some K. Then the Wi are asymptotically free: for Q a non-commutative polynomial (result due to Donati-Martin and Capitaine),
\[ \lim_{P,N\to\infty} \|Q(W_1, \ldots, W_n)\| = \|Q(p_1, \ldots, p_n)\|, \]
where the variables pj are freely independent non-commutative free Poisson random variables.
SLIDE 46
Free Probability Calculation
The pi can be thought of as bounded linear operators on some Hilbert space whose spectrum is the MP law with parameter γ. There is a unit vector e0 in the Hilbert space such that
\[ \nu(p_j^k) := \langle e_0, p_j^k e_0 \rangle = \int x^k \rho_\gamma(x)\,dx. \]
Free independence means
\[ \nu\left[ \prod_{l=1}^{n} \left( Q_l(p_l) - \nu[Q_l(p_l)] \right) \right] = 0. \]
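The simplest instance of the freeness relation is the centered mixed moment ν[(p1 − ν(p1))(p2 − ν(p2))] = 0; its finite-P analogue with the normalized trace playing the role of ν is easy to check for independent Wisharts (a simulation sketch, with a loose tolerance since the vanishing is only asymptotic):

```python
import numpy as np

rng = np.random.default_rng(6)
P, N = 500, 1000

def wishart():
    X = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
    return X @ X.conj().T / N

W1, W2 = wishart(), wishart()
tau = lambda M: np.trace(M).real / P   # normalized trace, the analogue of nu
I = np.eye(P)

# centered mixed moment of two asymptotically free Wisharts vanishes
mixed = tau((W1 - tau(W1) * I) @ (W2 - tau(W2) * I))
assert abs(mixed) < 0.05
```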
SLIDE 50
More Proof Ingredients
Free random variables have an explicit way of computing their laws (Voiculescu). Let µ be a measure compactly supported on the positive reals,
\[ m_\mu(z) = \int \frac{\mu(dx)}{z - x}, \qquad K_\mu(m_\mu(z)) = m_\mu(K_\mu(z)) = z. \]
Define µ1 ⊞ µ2 as the measure such that
\[ R_\mu(z) := K_\mu(z) - \frac{1}{z}, \qquad R_{\mu_1 \boxplus \mu_2}(z) = R_{\mu_1}(z) + R_{\mu_2}(z). \]
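For the MP law in this normalization one can solve the quadratic on the next slides for z and check that R(z) = 1/(1 − γz); a numerical sketch of the definitions m, K, R at a sample point:

```python
import numpy as np

gamma = 0.5

def m_mp(z):
    # Stieltjes transform of MP(gamma): a root of  g z m^2 + (1 - z - g) m + 1 = 0
    a, b = gamma * z, 1 - z - gamma
    r1 = (-b + np.sqrt(b * b - 4 * a + 0j)) / (2 * a)
    r2 = (-b - np.sqrt(b * b - 4 * a + 0j)) / (2 * a)
    return r1 if r1.imag > 0 else r2

w = m_mp(3.0 + 0.5j)                                 # a point in the range of m
K = (1 + (1 - gamma) * w) / (w * (1 - gamma * w))    # functional inverse of m
R = K - 1 / w                                        # R-transform

assert abs(K - (3.0 + 0.5j)) < 1e-10                 # K(m(z)) = z
assert abs(R - 1 / (1 - gamma * w)) < 1e-10          # R_MP(z) = 1/(1 - gamma z)
```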
SLIDE 54
More Proof Ingredients
We add the R-transforms and work backwards to get the spectrum. We are studying H, which we want to say (more on this) converges to
\[ h := n\left(p_1^{-1} + \cdots + p_n^{-1}\right)^{-1}. \]
If we compute
\[ nR_{p^{-1}}(z) = \sum_{i=1}^{n} R_{p_i^{-1}}(z), \]
we can obtain the Stieltjes transform of n h^{-1}.
SLIDE 58
More Proof Ingredients
Trick due to Edelman and Rao. Each m_{p_j}(z) satisfies a quadratic
\[ \gamma z\, m_{p_j}(z)^2 + m_{p_j}(z)(1 - z - \gamma) + 1 = 0. \]
This means each p_j^{-1} satisfies a quadratic
\[ \gamma z^2\, m_{p_j^{-1}}(z)^2 - m_{p_j^{-1}}(z)\left[ z(1+\gamma) - 1 \right] + 1 = 0. \]
If you directly invert to get K_{p_j^{-1}}(z) you are doing more work than you need to. Plug in z = K_{p_j^{-1}}(w) and then plug in the R-transform.
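The passage from the quadratic for p_j to the one for p_j^{-1} goes through the standard resolvent identity m_{p^{-1}}(z) = 1/z − m_p(1/z)/z² (a known fact, not stated on the slide); plugging it in, the p^{-1} quadratic should vanish identically, which can be checked numerically:

```python
import numpy as np

gamma = 0.5

def m_mp(z):
    # MP(gamma) Stieltjes transform: a root of  g z m^2 + (1 - z - g) m + 1 = 0
    a, b = gamma * z, 1 - z - gamma
    r1 = (-b + np.sqrt(b * b - 4 * a + 0j)) / (2 * a)
    r2 = (-b - np.sqrt(b * b - 4 * a + 0j)) / (2 * a)
    return r1 if r1.imag > 0 else r2

z = 2.0 + 1.0j
m_inv = 1 / z - m_mp(1 / z) / z**2   # Stieltjes transform of p^{-1}

# the quadratic for p^{-1} from the slide should be satisfied exactly
residual = gamma * z**2 * m_inv**2 - m_inv * (z * (1 + gamma) - 1) + 1
assert abs(residual) < 1e-10
```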
SLIDE 64
More Proof Ingredients
You get the quadratic
\[ \gamma z R_{p^{-1}}(z)^2 + (\gamma - 1) R_{p^{-1}}(z) + 1 = 0. \]
Manipulate some more and you get
\[ \frac{\gamma z}{n} m_h(z)^2 + (1 - \gamma - z) m_h(z) + 1 = 0. \]
The calculation is quick, but how do we justify it, and why does the operator norm converge?
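The quadratic for m_h can be sanity-checked against a simulated harmonic mean, with the empirical Stieltjes transform m̂(z) = (1/P) tr (z − H)^{-1} standing in for m_h (a sketch; the residual only vanishes asymptotically, hence the loose tolerance):

```python
import numpy as np

rng = np.random.default_rng(7)
P, N, n = 400, 800, 3
gamma = P / N

def wishart():
    X = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
    return X @ X.conj().T / N

H = n * np.linalg.inv(sum(np.linalg.inv(wishart()) for _ in range(n)))

z = 2.0 + 0.5j
m = np.trace(np.linalg.inv(z * np.eye(P) - H)) / P  # empirical Stieltjes transform

residual = (gamma * z / n) * m**2 + (1 - gamma - z) * m + 1
assert abs(residual) < 0.1
```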
SLIDE 68
More Proof Ingredients
We only have the convergence result for non-commutative polynomials in the Wi. Use a Neumann series to approximate W_i^{-1} and then H. Use operator norm bounds on the smallest singular value of Xi due to Rudelson and Vershynin (these apply for subgaussian entries!). And the AM-HM inequality:
\[ \mathbb{P}(\|H\| > t) \le \mathbb{P}(\|A\| > t) \le n\,\mathbb{P}(\|W_1\| > t). \]
We prove that there exists a κ > 0 such that
\[ \mathbb{P}\left(\left\{ \max\left( \|W_i\|, \|W_i^{-1}\|, \|H\|, \|H^{-1}\| \right) > \kappa \right\}\right) \]
is summable in P.
SLIDE 74
General Covar Fixed Point Equation
Pair of fixed point equations for the limiting ESD e of √Σ H √Σ − Σ:
\[ m_e(z) = \int \frac{F(dx)}{z - \gamma x \left\{ \frac{1}{n}\left(z m_e(z) - 1\right) S_{h^{-1}}\left(z m_e(z) - 1\right) + \frac{1}{n} z m_e(z) - 1 \right\}}, \]
and
\[ \frac{\gamma}{n} z S_{h^{-1}}(z)^2 + \frac{\gamma(1+z)}{n} S_{h^{-1}}(z) - \gamma S_{h^{-1}}(z) - 1 = 0. \]
Can we identify the edge for any F?
SLIDE 78
Acknowledgments and Advertisement
Anna Maltsev is looking for a PhD student at Queen Mary University of London. Ask me more questions!
Thanks to Alice Guionnet, Alan Edelman and Jinho Baik for helpful comments and suggestions.