Can Random Matrices Change the Future of Machine Learning?
Malik TIOMOKO and Romain COUILLET
CentraleSupélec, L2S, University of Paris-Saclay, France
GSTATS IDEX DataScience Chair, GIPSA-lab, University Grenoble–Alpes, France
March 8, 2020
Basics of Random Matrix Theory
Motivation: Large Sample Covariance Matrices
Setting: x_1, . . . , x_n ∈ R^p i.i.d. with E[x_1] = 0 and E[x_1 x_1^*] = C_p.

Sample covariance matrix:

  Ĉ_p = (1/n) Σ_{i=1}^n x_i x_i^* = (1/n) X_p X_p^*,  with X_p = [x_1, . . . , x_n] ∈ R^{p×n}.

Classical regime (p fixed, n → ∞): by the strong law of large numbers, Ĉ_p → C_p a.s.

Random matrix regime (p, n → ∞ with p/n → c > 0): the entries of Ĉ_p still converge, but ‖Ĉ_p − C_p‖ does not vanish: the eigenvalues of Ĉ_p spread away from those of C_p.
Figure: Histogram of the eigenvalues of Ĉ_p for c = 1/4, C_p = I_p, with growing dimensions (p, n) = (50, 200), (100, 400), (250, 1000), (500, 2000), (1000, 4000). The histogram does not concentrate at the population eigenvalue 1; instead it stabilizes on a deterministic limiting shape: the Marčenko–Pastur law.
Theorem (Marčenko–Pastur). Let X_p ∈ R^{p×n} have i.i.d. entries with zero mean and unit variance, and let p, n → ∞ with p/n → c ∈ (0, ∞). Then the empirical spectral distribution µ_p = (1/p) Σ_{i=1}^p δ_{λ_i} of the eigenvalues of Ĉ_p = (1/n) X_p X_p^* satisfies

  µ_p → µ_c weakly, a.s.,

where the Marčenko–Pastur law µ_c has a point mass (1 − 1/c)^+ at zero and continuous density

  f_c(x) = √((x − a_−)(a_+ − x)) / (2π c x) on [a_−, a_+],  a_± = (1 ± √c)².
Figure: Marčenko–Pastur density f_c(x) for different limiting ratios c = lim_{p→∞} p/n (c = 0.1, 0.2, 0.5). The support [a_−, a_+] widens as c grows: the fewer samples per dimension, the more the eigenvalues spread.
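This limit is easy to verify numerically. The following minimal Python sketch (my addition, not from the slides; numpy and matplotlib assumed available) draws X_p with i.i.d. N(0, 1) entries, computes the spectrum of Ĉ_p, and overlays the Marčenko–Pastur density:

```python
import numpy as np
import matplotlib.pyplot as plt

# Dimensions with ratio c = p/n = 1/4, as in the histograms above
p, n = 1000, 4000
c = p / n

X = np.random.randn(p, n)                 # i.i.d. N(0,1) entries, so C_p = I_p
eigs = np.linalg.eigvalsh(X @ X.T / n)    # eigenvalues of the sample covariance

# Marcenko-Pastur density on its support [a_minus, a_plus]
a_minus, a_plus = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
x = np.linspace(a_minus, a_plus, 500)
f = np.sqrt((x - a_minus) * (a_plus - x)) / (2 * np.pi * c * x)

plt.hist(eigs, bins=50, density=True, alpha=0.5, label=r"eigenvalues of $\hat{C}_p$")
plt.plot(x, f, "r", linewidth=2, label="Marcenko-Pastur density")
plt.legend()
plt.show()
```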
Spiked Models
Figure: Eigenvalues of (1/n) Y_p Y_p^T for eig(C_p) = {1, . . . , 1 (p − 4 times), 2, 3, 4, 5}, p = 500, and ratios p/n = 1/4, 1/2, 1, 2. For small p/n, four isolated eigenvalues ("spikes") stand out of the bulk; as p/n grows, the bulk widens and progressively swallows them.
Spiked model: Y_p = C_p^{1/2} X_p, where X_p ∈ R^{p×n} has i.i.d. zero-mean, unit-variance entries with E|X_{ij}|^4 < ∞, and

  C_p = I_p + P,  P = Σ_{i=1}^M ω_i u_i u_i^*,  ω_1 > . . . > ω_M > 0.

Theorem (isolated eigenvalues). Let λ_m be the m-th largest eigenvalue of (1/n) Y_p Y_p^* (λ_m > λ_{m+1}). Then, as p, n → ∞ with p/n → c,

  λ_m → (1 + ω_m)(1 + c/ω_m) a.s. if ω_m > √c,
  λ_m → (1 + √c)² a.s. otherwise:

a spike is visible in the spectrum only beyond the phase transition ω_m > √c.
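A quick numerical check of this phase transition (again my own sketch; top_eig is a hypothetical helper, and the spike is placed on the first canonical axis so that applying C_p^{1/2} reduces to scaling one row):

```python
import numpy as np

p, n = 500, 1500                              # c = p/n = 1/3
c = p / n

def top_eig(omega):
    """Largest eigenvalue of (1/n) Y Y^T for C_p = I_p + omega * e1 e1^T."""
    X = np.random.randn(p, n)
    X[0, :] *= np.sqrt(1 + omega)             # Y = C_p^{1/2} X: only e1 is scaled
    return np.linalg.eigvalsh(X @ X.T / n)[-1]

for omega in (0.2, 0.5, 1.0, 2.0):            # transition at sqrt(c) ~ 0.577
    if omega > np.sqrt(c):
        limit = (1 + omega) * (1 + c / omega) # isolated spike
    else:
        limit = (1 + np.sqrt(c)) ** 2         # stuck at the bulk edge
    print(f"omega = {omega}: empirical {top_eig(omega):.3f}, limit {limit:.3f}")
```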
Same model: Y_p = C_p^{1/2} X_p with C_p = I_p + Σ_{i=1}^M ω_i u_i u_i^*, ω_1 > . . . > ω_M > 0.

Theorem (eigenvector alignment). Let û_i be the eigenvector of (1/n) Y_p Y_p^* associated with its i-th largest eigenvalue. Then, as p, n → ∞ with p/n → c,

  |û_i^* u_i|² → (1 − c ω_i^{−2}) / (1 + c ω_i^{−1}) · 1_{ω_i > √c} a.s.:

above the phase transition, û_i is partially aligned with the population spike u_i; below it, the alignment vanishes.
Figure: Simulated versus limiting |û_1^T u_1|² for Y_p = C_p^{1/2} X_p, C_p = I_p + ω_1 u_1 u_1^T, p/n = 1/3, p = 100, 200, 400, varying ω_1. As p grows, the simulations converge to the limit (1 − c/ω_1²)/(1 + c/ω_1).
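The same toy model reproduces the alignment curve of the figure (a sketch under the same assumptions; alignment is a hypothetical helper name):

```python
import numpy as np

p, n = 400, 1200                              # c = p/n = 1/3, as in the figure
c = p / n

def alignment(omega):
    """Empirical |u_hat^T u1|^2 for C_p = I_p + omega * u1 u1^T, u1 = e1."""
    X = np.random.randn(p, n)
    X[0, :] *= np.sqrt(1 + omega)             # apply C_p^{1/2}
    _, V = np.linalg.eigh(X @ X.T / n)        # eigenvalues in ascending order
    return V[0, -1] ** 2                      # top eigenvector, component along e1

for omega in (0.3, 1.0, 2.0, 4.0):
    theory = (1 - c / omega**2) / (1 + c / omega) if omega > np.sqrt(c) else 0.0
    print(f"omega = {omega}: empirical {alignment(omega):.3f}, limit {theory:.3f}")
```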
Other classical spiked models, with the same phase-transition phenomenology:

  (1/n) (I + P)^{1/2} X_p X_p^* (I + P)^{1/2},
  (1/n) X_p X_p^* + P,
  (1/n) X_p^* (I + P) X_p,
  (1/n) (X_p + P)^* (X_p + P).
Application to Machine Learning
Kernel Spectral Clustering

Setting: n data points x_1, . . . , x_n ∈ R^p in k classes C_1, . . . , C_k, the n_a points of class C_a being drawn as

  x_1^(a), . . . , x_{n_a}^(a) ∼ N(µ_a, C_a),  a = 1, . . . , k.

Kernel matrix:

  K = { f(‖x_i − x_j‖²/p) }_{i,j=1}^n,

for some smooth decreasing f; the classes are then retrieved from the dominant eigenvectors of K.
Gaussian kernel: K_{ij} = exp(−‖x_i − x_j‖²/(2p)), classification being read from the second dominant eigenvector v_2 of K.

Key high-dimensional phenomenon: under standard growth assumptions (in particular ‖µ_a − µ_b‖ = O(1)), as p, n → ∞,

  max_{1 ≤ i ≠ j ≤ n} | (1/p)‖x_i − x_j‖² − τ | → 0 a.s.

for a constant τ independent of the classes: all pairs of data points, whether from the same class or not, are asymptotically at the same distance. Clustering nonetheless remains possible, because the vanishing fluctuations around τ accumulate coherently in the dominant eigenvectors of K.
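The concentration of pairwise distances is easy to witness numerically. A small sketch (mine, not from the slides), assuming a two-class Gaussian mixture with means ±µ and ‖µ‖ = O(1):

```python
import numpy as np

p, n = 1000, 200
mu = np.zeros(p); mu[0] = 2.0                 # class means +/- mu, ||mu|| = O(1)

labels = np.random.randint(0, 2, n)
X = np.random.randn(n, p) + np.outer(2 * labels - 1, mu)

# normalized squared distances ||x_i - x_j||^2 / p via the Gram matrix
G = X @ X.T
sq = np.diag(G)
D = (sq[:, None] + sq[None, :] - 2 * G) / p
off = D[~np.eye(n, dtype=bool)]               # discard the zero diagonal

print(f"all pairwise distances in [{off.min():.3f}, {off.max():.3f}], "
      f"mean {off.mean():.3f}")               # tightly packed around tau = 2
```

Regardless of the class pattern, every normalized distance lands close to τ = 2 here: the class information survives only in the tiny fluctuations.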
Consequence: since all entries K_{ij} = f(‖x_i − x_j‖²/p) concentrate around f(τ), an entry-wise Taylor expansion of f around τ yields a random matrix equivalent K̂ of K: a tractable spiked model with the same asymptotic spectral behavior, "as if" the data were Gaussian.
Figure: Eigenvalues of K (red) and of the equivalent Gaussian model K̂ (white), MNIST data, p = 784, n = 192: the two spectra match closely.
Figure: Leading four eigenvectors of K for MNIST data (red) and theoretical findings (blue).
Figure: 2D representation of the eigenvectors of K for the MNIST dataset (eigenvector 2 vs eigenvector 1, and eigenvector 3 vs eigenvector 2). Theoretical means and 1- and 2-standard deviations in blue; Class 1 in red, Class 2 in black, Class 3 in green.
Semi-Supervised Learning
Setting: for each class a = 1, . . . , k,
◮ x_1^(a), . . . , x_{n_{a,[l]}}^(a) already labelled (few),
◮ x_{n_{a,[l]}+1}^(a), . . . , x_{n_a}^(a) unlabelled (a lot).

Score matrix F ∈ R^{n×k}, obtained from the (α-parametrized) Laplacian regularization

  min_F Σ_{i,j=1}^n K_{ij} ‖F_i D_{ii}^α − F_j D_{jj}^α‖²,  subject to F_{ia} = δ_{x_i∈C_a} for the labelled x_i,

with D = diag(K 1_n) the degree matrix (α = −1 recovers the classical Laplacian algorithm). Indexing by [l] the labelled and [u] the unlabelled data, the solution is explicit:

  F_[u] = D_[u]^{−α} (D_[u] − K_[uu])^{−1} K_[ul] D_[l]^{α} F_[l],

each unlabelled point being assigned to the class with the highest score (see the code sketch below).
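Below is a minimal transcription of this closed form (my own sketch, derived from the stated cost function; laplacian_ssl is a hypothetical name, and the kernel matrix is assumed precomputed with the labelled points listed first):

```python
import numpy as np

def laplacian_ssl(K, F_l, alpha=-1.0):
    """Closed-form alpha-parametrized Laplacian regularization.
    K   : (n, n) kernel matrix, labelled points first
    F_l : (n_l, k) one-hot labels, F_l[i, a] = 1 if x_i is in class a
    Returns the (n_u, k) scores of the unlabelled points."""
    n_l = F_l.shape[0]
    d = K.sum(axis=1)                              # degrees, D = diag(K 1_n)
    K_uu, K_ul = K[n_l:, n_l:], K[n_l:, :n_l]
    d_u, d_l = d[n_l:], d[:n_l]
    # F_u = D_u^{-alpha} (D_u - K_uu)^{-1} K_ul D_l^{alpha} F_l
    G_u = np.linalg.solve(np.diag(d_u) - K_uu, K_ul @ (d_l[:, None] ** alpha * F_l))
    return d_u[:, None] ** (-alpha) * G_u

# toy run: two Gaussian classes in R^20, 10 labelled + 190 unlabelled points
rng = np.random.default_rng(0)
X = np.r_[rng.normal(-1, 1, (100, 20)), rng.normal(+1, 1, (100, 20))]
y = np.r_[np.zeros(100, int), np.ones(100, int)]
order = np.r_[0:5, 100:105, 5:100, 105:200]        # put 10 labelled points first
X, y = X[order], y[order]
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / X.shape[1])
F_u = laplacian_ssl(K, np.eye(2)[y[:10]], alpha=-1.0)
print("accuracy on unlabelled data:", (F_u.argmax(1) == y[10:]).mean())
```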
Figure: Outcome F of the Laplacian algorithm (α = −1) for two classes N(±µ, I_p) with p = 1: the score vectors [F]·,1 (scores for C_1) and [F]·,2 (scores for C_2) neatly separate the two classes of unlabelled data.

Figure: Same experiment with p = 80: on the unlabelled data, the two score curves become nearly indistinguishable; in high dimensions, the vanilla Laplacian approach degenerates.
Figure: Score vectors [F^(u)]·,a, a = 1, 2, 3, for 3-class MNIST data (zeros, ones, twos), n = 192, p = 784, n_l/n = 1/16, Gaussian kernel.
Figure: Accuracy as a function of n_[u]/p, with n_[l]/p = 2, c_1 = c_2, p = 100, −µ_1 = µ_2 = [1; 0_{p−1}], [C]_{i,j} = 0.1^{|i−j|}; graph constructed with K_{ij} = e^{−‖x_i−x_j‖²/p}. Laplacian regularization fails to exploit the unlabelled data: as n_[u] grows, it is even outperformed by fully unsupervised spectral clustering.
Figure: Same setting, now including the proposed centered regularization: unlike the Laplacian approach, it improves with additional unlabelled data and dominates both competing methods on this example.
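The centered-regularization algorithm itself is developed in the authors' work and is not reproduced in these slides. Purely as an illustration of the underlying centering idea (an assumption on my part, not the authors' exact method; centered_scores is a hypothetical name): project the kernel away from the all-ones direction, which carries the non-informative common value f(τ) of the entries, before scoring the unlabelled points.

```python
import numpy as np

def centered_scores(K, F_l):
    """Illustrative stand-in (NOT the authors' exact algorithm): score
    unlabelled points with the doubly centered kernel P K P."""
    n, n_l = K.shape[0], F_l.shape[0]
    P = np.eye(n) - np.ones((n, n)) / n       # projector orthogonal to 1_n
    Kc = P @ K @ P                            # removes the flat f(tau) component
    return Kc[n_l:, :n_l] @ F_l               # centered similarity to each class
```

The rationale for centering is the distance-concentration result above: the O(1) common component of K (all entries ≈ f(τ)) otherwise drowns the vanishing class-informative fluctuations.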
Figure: Top: distribution of normalized pairwise distances (intra- vs inter-class) for noisy MNIST data (digits 8 and 9), at three increasing noise levels: the intra- and inter-class distance profiles progressively merge. Bottom: average accuracy as a function of n_[u] with n_[l] = 10, computed over 1000 random realizations; the proposed approach consistently outperforms the Laplacian one, all the more so as the noise level increases.
Table: Comparison of classification accuracy (%) on the German Traffic Sign dataset with n_l = 10, computed over 1000 random realizations for n_u = 100 and 100 realizations for n_u = 1000.

Class ID                         (2,7)       (9,10)      (11,18)
n_u = 100
 Centered kernel (RMT)           79.0±10.4   77.5±9.2    78.5±7.1
 Iterated centered kernel (RMT)  85.3±5.9    89.2±5.6    90.1±6.7
 Laplacian                       73.8±9.8    77.3±9.5    78.6±7.2
 Iterated Laplacian              83.7±7.2    88.0±6.8    87.1±8.8
 Manifold                        77.6±8.9    81.4±10.4   82.3±10.8
n_u = 1000
 Centered kernel (RMT)           83.6±2.4    84.6±2.4    88.7±9.4
 Iterated centered kernel (RMT)  84.8±3.8    88.0±5.5    96.4±3.0
 Laplacian                       72.7±4.2    88.9±5.7    95.8±3.2
 Iterated Laplacian              83.0±5.5    88.2±6.0    92.7±6.1
 Manifold                        77.7±5.8    85.0±9.0    90.6±8.1
A basic concentration fact behind what follows: for x ∈ R^p with i.i.d. zero-mean, unit-variance entries,

  (x_1 + · · · + x_p)/√p = O(1),

i.e., properly normalized linear forms of high-dimensional vectors neither blow up nor vanish as p grows; a quick check follows.
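A short numerical check of this scaling (my sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
for p in (10**2, 10**4, 10**6):
    x = rng.standard_normal(p)                # i.i.d. entries, mean 0, variance 1
    print(f"p = {p:>7}: sum/sqrt(p) = {x.sum() / np.sqrt(p):+.3f}")  # stays O(1)
```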