Neural Fields, Finite-Dimensional Approximation, Large Deviations, and SDE Continuation
Christian Kuehn, Vienna University of Technology
Outline
Part 1: Neural Fields (joint work with Martin Riedler, Linz/Vienna):
- 1. Neural Fields - Amari-type
- 2. Galerkin Approximation
- 3. Large Deviation Principle(s)
Part 2: SDE Continuation
- 1. Numerical Continuation
- 2. Extension to SODEs
- 3. Calculating Kramers’ Law
- 4. Extension to SPDEs
Neural Fields

Amari-type neural field model:

dU_t(x) = [ −α U_t(x) + ∫_B w(x, y) f(U_t(y)) dy ] dt + ε dW_t(x).

Ingredients:
◮ B ⊂ R^d bounded closed domain; Hilbert space X = L²(B).
◮ (x, t) ∈ B × [0, T], u = u(x, t) ∈ R, α > 0, 0 < ε ≪ 1.
◮ w : B × B → R kernel, modelling neural connectivity.
◮ f : R → (0, +∞) gain function, modelling neural input.
◮ Q : X → X trace-class, non-negative, symmetric operator with eigenvalues λ_i² ∈ R and eigenfunctions v_i.
◮ W_t(x) := Σ_{i=1}^∞ λ_i β_t^i v_i(x), where the β_t^i are iid Brownian motions.
Existence and Regularity

Assumptions:
◮ (Kg)(x) := ∫_B w(x, y) g(y) dy defines a compact self-adjoint operator K on L²(B).
◮ F(g)(x) := f(g(x)) defines a Lipschitz continuous Nemytskii operator F on L²(B).

Neural field as evolution equation:

dU_t = [−α U_t + K F(U_t)] dt + ε dW_t.

(Da Prato/Zabczyk 1992) ⇒ mild solution U ∈ C([0, T], L²(B)):

U_t = e^{−αt} U_0 + ∫_0^t e^{−α(t−s)} K F(U_s) ds + ε ∫_0^t e^{−α(t−s)} dW_s.

Lemma (K./Riedler, 2013)
If the v_i are Lipschitz with constants L_i and for some ρ ∈ (0, 1)

sup_{x∈B} Σ_{i=1}^∞ λ_i² v_i(x)² < ∞  and  sup_{x∈B} Σ_{i=1}^∞ λ_i² L_i^{2ρ} |v_i(x)|^{2(1−ρ)} < ∞,

then U ∈ C([0, T], C(B)).
Galerkin Approximation

Spectral representation of the solution:

U_t(x) = Σ_{i=1}^∞ u_t^i v_i(x).

Take the L²-inner product with v_i in the neural field model:

d⟨U_t, v_i⟩ = [−α⟨U_t, v_i⟩ + ⟨K F(U_t), v_i⟩] dt + ε d⟨W_t, v_i⟩,

⇒ du_t^i = [−α u_t^i + (KF)_i(u_t^1, u_t^2, …)] dt + ε λ_i dβ_t^i,

where

(KF)_i(u_t^1, u_t^2, …) := ∫_B f( Σ_{j=1}^∞ u_t^j v_j(x) ) ( ∫_B w(x, y) v_i(y) dy ) dx.
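Truncating the Galerkin system at N modes gives an N-dimensional SODE that can be simulated directly. Below is a minimal sketch under illustrative assumptions that are not from the talk: B = [0, π], cosine eigenfunctions v_i, kernel w(x, y) = cos(x − y), a sigmoidal gain f, decaying noise amplitudes λ_i, and Euler-Maruyama time stepping with quadrature for (KF)_i.

```python
import numpy as np

def galerkin_neural_field(N=8, M=200, T=1.0, dt=1e-3,
                          alpha=1.0, eps=0.05, seed=0):
    """Euler-Maruyama for the truncated Galerkin system
    du^i = [-alpha*u^i + (KF)_i(u)] dt + eps*lam_i dbeta^i."""
    rng = np.random.default_rng(seed)
    x = (np.arange(M) + 0.5) * np.pi / M        # quadrature nodes on B = [0, pi]
    dx = np.pi / M
    # orthonormal basis of L^2(0, pi): v_0 = 1/sqrt(pi), v_i = sqrt(2/pi) cos(i x)
    V = np.empty((N, M))
    V[0] = 1.0 / np.sqrt(np.pi)
    for i in range(1, N):
        V[i] = np.sqrt(2.0 / np.pi) * np.cos(i * x)
    lam = 1.0 / (1.0 + np.arange(N)) ** 2       # decaying noise eigenvalues
    w = np.cos(x[:, None] - x[None, :])         # connectivity kernel w(x, y)
    f = lambda u: 1.0 / (1.0 + np.exp(-u))      # sigmoidal gain
    u = np.zeros(N)                             # Galerkin coefficients u^i_t
    for _ in range(int(T / dt)):
        U = u @ V                               # U_t(x) on the grid
        KFU = (w @ f(U)) * dx                   # (K F(U_t))(x) by quadrature
        drift = -alpha * u + (V @ KFU) * dx     # project onto each v_i
        u = u + drift * dt + eps * lam * np.sqrt(dt) * rng.standard_normal(N)
    return u, x, u @ V

u_N, x_grid, U_final = galerkin_neural_field()
```

The same loop with ε = 0 recovers the deterministic Galerkin discretization.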
Approximation Accuracy

Theorem (K./Riedler, 2013)
For all T > 0,

lim_{N→∞} sup_{t∈[0,T]} ‖U_t − U_t^N‖_{L²(B)} = 0  a.s.

If the conditions of the regularity lemma hold and U_0 ∈ C(B) is such that

lim_{N→∞} ‖U_0 − P_N U_0‖_{C(B)} = 0,

then

lim_{N→∞} sup_{t∈[0,T]} ‖U_t − U_t^N‖_{C(B)} = 0  a.s.

Proof.
Lengthy calculation using a technique by Blömker/Jentzen (SINUM 2013).
Large Deviations Principle (LDP)

Example: stochastic ordinary differential equation

du_t = g(u_t) dt + ε G(u_t) dβ_t,

where
◮ u_t ∈ R^N, g : R^N → R^N, G : R^N → R^{N×k},
◮ β_t = (β_t^1, …, β_t^k)^T is a vector of k iid Brownian motions,
◮ u_0 ∈ D ⊂ R^N.

Goal: estimate the first-exit time τ_D^ε := inf{t > 0 : u_t = u_t^ε ∉ D}.
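As a baseline to compare against large-deviation estimates, the first-exit time can be approximated by crude Monte Carlo over Euler-Maruyama paths. The sketch below uses an illustrative 1d double-well example (potential, noise level, and domain are my choices, not from the talk):

```python
import numpy as np

def mean_first_exit(g, G, u0, in_domain, eps, dt=1e-2, t_max=50.0,
                    n_paths=50, seed=0):
    """Crude Monte Carlo estimate of E[tau_D^eps] for
    du = g(u) dt + eps*G(u) dbeta, using Euler-Maruyama paths
    (paths still inside D at t_max are censored at t_max)."""
    rng = np.random.default_rng(seed)
    k = G(np.atleast_1d(np.asarray(u0, dtype=float))).shape[1]
    taus = []
    for _ in range(n_paths):
        u, t = np.array(u0, dtype=float), 0.0
        while t < t_max and in_domain(u):
            dW = np.sqrt(dt) * rng.standard_normal(k)
            u = u + g(u) * dt + eps * G(u) @ dW
            t += dt
        taus.append(t)
    return float(np.mean(taus))

# 1d double-well toy problem: V(u) = u^4/4 - u^2/2, exit from D = (-inf, 0)
g = lambda u: u - u**3            # drift -V'(u)
G = lambda u: np.ones((1, 1))     # additive noise
tau_hat = mean_first_exit(g, G, [-1.0], lambda u: u[0] < 0.0, eps=0.8)
```

For small ε this sampling becomes prohibitive, which is exactly the motivation for the LDP and for the continuation approach in Part 2.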
An Abstract Theorem

◮ X := C_0([0, T], R^N) = {φ ∈ C([0, T], R^N) : φ(0) = u_0}.
◮ H_1^N := {φ : [0, T] → R^N : φ absolutely continuous, φ′ ∈ L², φ(0) = 0}.
◮ Diffusion matrix D(u) := G(u) G(u)^T ∈ R^{N×N}, assumed positive definite.

Theorem (Freidlin, Wentzell)
The SODE satisfies an LDP:

−inf_{Γ°} I ≤ liminf_{ε→0} ε² ln P((u_t^ε)_{t∈[0,T]} ∈ Γ) ≤ limsup_{ε→0} ε² ln P((u_t^ε)_{t∈[0,T]} ∈ Γ) ≤ −inf_{Γ̄} I

for any measurable set of paths Γ ⊂ X, with rate function

I(φ) = (1/2) ∫_0^T (φ_t′ − g(φ_t))^T D(φ_t)^{−1} (φ_t′ − g(φ_t)) dt  for φ ∈ u_0 + H_1^N,

and I(φ) = +∞ otherwise.
Arrhenius-Eyring-Kramers Formula

◮ Gradient structure and additive noise:

du_t = −∇V(u_t) dt + ε Id dβ_t.

◮ V has precisely two local minima u*_± and a single saddle point u*_s.
◮ The Hessian ∇²V(u*_s) at the saddle has eigenvalues

ρ_1(u*_s) < 0 < ρ_2(u*_s) < ⋯ < ρ_N(u*_s).

Theorem (Kramers' Formula)
The mean first-passage time from u*_− to u*_+ obeys

E[τ_{u*_− → u*_+}] ∼ (2π / |ρ_1(u*_s)|) √( |det ∇²V(u*_s)| / det ∇²V(u*_−) ) e^{2(V(u*_s) − V(u*_−))/ε²}.
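Once the Hessians and the barrier height are known, the right-hand side of Kramers' formula is a direct computation. A sketch for an illustrative 2d double well (the potential and parameter values are my assumptions, not from the talk):

```python
import numpy as np

def kramers_time(hess_saddle, hess_min, dV, eps):
    """Eyring-Kramers estimate of E[tau] for du = -grad V dt + eps dbeta,
    from the Hessians of V at the saddle and the starting minimum and
    the barrier height dV = V(saddle) - V(minimum)."""
    rho = np.linalg.eigvalsh(hess_saddle)   # ascending: rho_1 < 0 < rho_2 <= ...
    prefactor = (2.0 * np.pi / abs(rho[0])) * np.sqrt(
        abs(np.linalg.det(hess_saddle)) / np.linalg.det(hess_min))
    return prefactor * np.exp(2.0 * dV / eps**2)

# V(x, y) = x^4/4 - x^2/2 + y^2/2: minima at (+-1, 0), saddle at 0, barrier 1/4
H_s = np.diag([-1.0, 1.0])   # Hessian of V at the saddle
H_m = np.diag([2.0, 1.0])    # Hessian of V at the minimum (-1, 0)
tau = kramers_time(H_s, H_m, dV=0.25, eps=0.5)   # = 2*pi/sqrt(2) * e^2
```

Note the e^{2ΔV/ε²} scaling matches the noise convention ε dβ_t used throughout the talk (rather than √(2ε) dβ_t).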
Back to Neural Fields... Kramers' Formula and LDP

Observations (K./Riedler, 2013)
◮ From [Laing/Troy03, Enulescu/Bestehorn07]: for ε = 0 the neural field has an energy structure. Let g := f^{−1} and P(x, t) = f(U(x, t)); then

∂_t P(x, t) = − (1 / g′(P(x, t))) ∇E[P(x, t)].

But there are problems for ε > 0:
◮ The change of variables produces multiplicative noise.
◮ There is a space-time dependent factor 1/g′(P(x, t)).
◮ The noise is trace-class.

◮ An LDP follows from the evolution equation [Da Prato/Zabczyk 1992].
◮ The LDP can be approximated using the Galerkin method.
Part 2 SDE Continuation: Motivation

Consider the general differential equation

∂u/∂t = F(u; λ),

where λ ∈ R^p are parameters. F(u; λ) could lead to an ODE, DDE, PDE, SDE, SPDE, etc.

Problem: forward simulation is usually very restrictive!
- 1. Simulate over initial values u_0.
- 2. Simulate over parameter space λ ∈ R^p.
- 3. Simulate over noise realizations ω ∈ Ω.

Do you really understand the nonlinear dynamics from averages?
Deterministic DEs - Standard Method: Continuation

Consider the ODE x′ = f(x; µ), f : R^n × R → R^n. Let (x; µ) =: y. A curve y = γ(s) of equilibria satisfies f(γ(s)) = 0 (note: Df(γ(0))γ′(0) = 0).

[Figure: (a) prediction step from y_0 := γ(0) to the predictor ȳ_1; (b) correction step from ȳ_1 back to y_1 on the curve {f(y) = 0}.]

Important: step (a) provides an excellent initial guess for Newton's method in step (b).
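The two-step scheme can be sketched in its simplest (natural-parameter) form: the previous equilibrium serves as the predictor and Newton's method corrects it. The pitchfork normal form below is a toy example of my choosing; production codes would use pseudo-arclength continuation to pass folds.

```python
import numpy as np

def natural_continuation(f, dfx, x0, mus, newton_steps=10):
    """Predictor-corrector in its simplest form: the previous equilibrium
    is the prediction (a), Newton's method in x is the correction (b)."""
    branch, x = [], x0
    for mu in mus:
        for _ in range(newton_steps):      # correction step (b)
            x = x - f(x, mu) / dfx(x, mu)
        branch.append(x)                   # prediction (a) for the next mu
    return np.array(branch)

# pitchfork normal form f(x; mu) = mu*x - x^3: follow the branch x = sqrt(mu)
mus = np.linspace(1.0, 4.0, 61)
xs = natural_continuation(lambda x, m: m * x - x**3,
                          lambda x, m: m - 3.0 * x**2, 1.0, mus)
```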
Numerical Bifurcation Analysis for Stochastic Systems?

Consider the stochastic (ordinary) differential equation (SDE)

dx_t = g(x_t; µ) dt + ε G(x_t; µ) dW_t,  x_t ∈ R^n,

with W_t = (W_{1,t}, W_{2,t}, …, W_{k,t})^T a Brownian motion and G(x_t; µ) ∈ R^{n×k}; let D(x; µ) := G(x; µ) G(x; µ)^T.

◮ Approach 1: forward Monte Carlo simulation.
◮ Problems: sampling is often prohibitive.
◮ Approach 2: use the probability density p = p(x, t). Requires solving the Fokker-Planck equation

∂p/∂t = − Σ_{i=1}^n ∂/∂x_i (g_i(x; µ) p) + (ε²/2) Σ_{i,j=1}^n ∂²/(∂x_i ∂x_j) (D_{ij}(x; µ) p).

◮ Problems: high-dimensional PDE; not even the case ε = 0 is easy!
Strategy - Generalization to SDEs

Step 1: Recall dx_t = g(x_t; µ) dt + ε G(x_t; µ) dW_t.
Step 2: Expand near a (locally stable) deterministic equilibrium x*:

dX_t = A(x*; µ) X_t dt + ε G(x*; µ) dW_t,

where A(x; µ) = (D_x g)(x; µ) ∈ R^{n×n}.
Step 3: The covariance matrix C_t := Cov(X_t) solves

C_t′ = A(x*; µ) C_t + C_t A(x*; µ)^T + ε² G(x*; µ) G(x*; µ)^T,

and at equilibrium

0 = A(x*; µ) C + C A(x*; µ)^T + ε² G(x*; µ) G(x*; µ)^T.

Step 4: Define the covariance ellipsoid

B(h) := { x ∈ R^n : (x − x*)^T C^{−1} (x − x*) ≤ h² }.
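Step 3 amounts to solving a Lyapunov equation for C. A minimal sketch using the Kronecker/vec identity vec(AC + CAᵀ) = (I⊗A + A⊗I)vec(C), which is fine for small n (large systems would use the iterative Lyapunov solvers mentioned on the next slide); the example matrices are illustrative:

```python
import numpy as np

def stationary_covariance(A, G, eps):
    """Solve A C + C A^T + eps^2 G G^T = 0 for the stationary covariance
    of the linearized SDE (A must be a stable matrix)."""
    n = A.shape[0]
    B = eps**2 * (G @ G.T)
    L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    # symmetry of B and stability of A make the row-major vec safe here
    return np.linalg.solve(L, -B.ravel()).reshape(n, n)

# illustrative stable equilibrium of a 2d system
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
G = np.eye(2)
C = stationary_covariance(A, G, eps=0.1)
res = A @ C + C @ A.T + 0.01 * (G @ G.T)   # Lyapunov residual, ~ 0
```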
Covariance Ellipsoids via Continuation

Important observations:
◮ Continue the equilibrium x* = x*(µ) as usual.
◮ For the covariance ellipsoid one has to solve a Lyapunov equation

A C + C A^T + B = 0.

◮ During continuation the matrix D_x g(x*; µ) = A(x*; µ) = A is available as a submatrix of Dg(x*; µ).
◮ Efficient iterative methods for Lyapunov equations exist.
◮ A simple initial guess for C(µ_2) at (x*(µ_2), µ_2) is C(x*(µ_1); µ_1).
Ellipsoids and Distance

Question: What is the distance between ellipsoids?

Let Q be positive semi-definite; then

E := { x ∈ R^n : v^T x ≤ v^T x* + (v^T Q v)^{1/2} ∀ v ∈ R^n }

defines an ellipsoid centered at x*.

Fact: one may solve an optimization problem

δ = δ(E(x*_1, Q_1), E(x*_2, Q_2)) = max_{‖v‖=1} [ v^T x*_1 − (v^T Q_1 v)^{1/2} − v^T x*_2 − (v^T Q_2 v)^{1/2} ].

Idea: use an iterative method (e.g. SQP) with the initial guess from continuation to compute δ.
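The maximization over directions v can be sketched with an off-the-shelf optimizer in place of SQP; normalizing v inside the objective makes the problem unconstrained. The unit-ball test case is an illustrative choice:

```python
import numpy as np
from scipy.optimize import minimize

def ellipsoid_distance(x1, Q1, x2, Q2, v0=None):
    """delta = max over |v| = 1 of
    v^T x1 - (v^T Q1 v)^(1/2) - v^T x2 - (v^T Q2 v)^(1/2)."""
    def neg_obj(v):
        v = v / np.linalg.norm(v)      # objective is scale-invariant in v
        return -(v @ (x1 - x2) - np.sqrt(v @ Q1 @ v) - np.sqrt(v @ Q2 @ v))
    # the center difference is a natural initial direction
    v0 = (x1 - x2) if v0 is None else np.asarray(v0, dtype=float)
    return -minimize(neg_obj, v0).fun

# two unit balls (Q = I) with centers 4 apart: the gap is 4 - 1 - 1 = 2
d = ellipsoid_distance(np.array([4.0, 0.0]), np.eye(2),
                       np.array([0.0, 0.0]), np.eye(2))
```

In the continuation setting, the optimal v at parameter µ_1 would serve as `v0` at µ_2, mirroring the initial-guess idea on the slide.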
Neural Competition

Consider two neural populations:

x_1′ = −x_1 + S(I_c − βx_2 − gy_1),
x_2′ = −x_2 + S(I_c − βx_1 − gy_2),
y_1′ = ǫ(x_1 − y_1),
y_2′ = ǫ(x_2 − y_2),

where
◮ x_{1,2} = averaged firing rates,
◮ y_{1,2} = fatigue/reset variables,
◮ S(u) := 1 / (1 + exp(−r(u − θ))).

Look at the noisy fast subsystem (ǫ = 0):

d(x_1, x_2)^T = ( −x_1 + S(I_c − βx_2 − gy_1), −x_2 + S(I_c − βx_1 − gy_2) )^T dt + ε G(x) dW_t.
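The equilibria of the fast subsystem that continuation follows can be located numerically. A sketch using the slide's parameter values, with a damped fixed-point iteration as a stand-in for the Newton corrector a continuation code would use:

```python
import numpy as np

def S(u, r=10.0, theta=0.2):
    """Sigmoidal gain from the slide."""
    return 1.0 / (1.0 + np.exp(-r * (u - theta)))

def fast_rhs(x, Ic, beta=1.1, g=0.5, y=(0.7, 0.75)):
    """Right-hand side of the deterministic fast subsystem (epsilon = 0)."""
    return np.array([-x[0] + S(Ic - beta * x[1] - g * y[0]),
                     -x[1] + S(Ic - beta * x[0] - g * y[1])])

def equilibrium(Ic, x0, tol=1e-12, max_iter=10000):
    """Locate an equilibrium by damped fixed-point iteration
    (continuation codes would use Newton's method here)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        step = fast_rhs(x, Ic)
        x = x + 0.5 * step
        if np.linalg.norm(step) < tol:
            break
    return x

# winner-take-all equilibrium: population 2 active, population 1 suppressed
x_star = equilibrium(Ic=1.0, x0=[0.2, 0.8])
```

Feeding the Jacobian of `fast_rhs` at `x_star` into a Lyapunov solver then yields the covariance ellipsoid of the preceding slides.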
Numerical Continuation...

[Figure, panels (a)-(c): continued branches of x_1 and x_2 and the ellipsoid distance δ, each plotted against I_c.]

For parameter values y_1 = 0.7, y_2 = 0.75, β = 1.1, g = 0.5, r = 10, θ = 0.2, and

ε² G(x*) G(x*)^T = ε² [[1, 0.4], [0.4, 1]]  for ε² = 0.3.
Metastability and Noise-Induced Switching

Consider a gradient system

dx_t = −∇V_µ(x_t) dt + ε dW_t,  V_µ : R^n → R.  (1)

Assume
◮ two stable equilibria x* and y*,
◮ a saddle z* with one unstable direction, eigenvalue λ(z*; µ) > 0.

Kramers' Law:

E[τ_{x*→y*}] = (2π / |λ(z*; µ)|) √( |det A(z*; µ)| / det A(x*; µ) ) e^{2[V_µ(z*) − V_µ(x*)]/ε²},

where A(x*; µ) = D²V_µ(x*) ∈ R^{n×n}.
Continuation and Kramers' Law

Kramers' Law:

E[τ_{x*→y*}] = (2π / |λ(z*; µ)|) √( |det A(z*; µ)| / det A(x*; µ) ) e^{2[V_µ(z*) − V_µ(x*)]/ε²}.

Observations:
◮ Just continue the equilibria x*, y*, z* as usual.
◮ The Jacobian A(z*; µ) is available.
◮ Compute det A(x*; µ) via an LU decomposition.
◮ The leading eigenvalue λ(z*; µ) may be computed by Rayleigh iteration.
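The determinant-via-LU step can be sketched directly with SciPy's LU factorization; the only subtlety is the sign bookkeeping for the row permutations (the example matrix is illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor

def det_via_lu(A):
    """det(A) from an LU factorization: product of the diagonal of U
    times the sign of the row permutation."""
    lu, piv = lu_factor(A)
    # each pivot row swap flips the sign of the determinant
    sign = (-1.0) ** np.count_nonzero(piv != np.arange(A.shape[0]))
    return sign * np.prod(np.diag(lu))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
d = det_via_lu(A)    # agrees with np.linalg.det(A)
```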
Extension to SPDEs

Starting point: the (cubic-quintic) Allen-Cahn equation

∂u/∂t = ∆u − 4(µu + u³ − u⁵) + g(u)ξ,

with u = u(x, t), x ∈ Ω ⊂ R², given boundary conditions.

Main Steps:
- 1. Compute the bifurcation diagram for the PDE (e.g. → pde2path).
- 2. Consider the SPDE version (e.g. → trace-class noise).
- 3. Discretize in space (e.g. → FDM, FEM, Galerkin).
- 4. Apply numerical continuation for SDEs.
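Step 3 can be sketched by a finite-difference discretization; the unit square with zero Dirichlet boundary conditions is an illustrative choice of domain and BCs. This turns the drift of the SPDE into the drift of an M²-dimensional SDE system, to which the continuation machinery above applies:

```python
import numpy as np

def allen_cahn_drift(mu, M=20):
    """Finite-difference discretization of the cubic-quintic Allen-Cahn
    drift on the unit square with zero Dirichlet boundary conditions,
    giving the drift of an M^2-dimensional SDE system."""
    h = 1.0 / (M + 1)
    def drift(u_flat):
        u = u_flat.reshape(M, M)
        up = np.pad(u, 1)                      # zero boundary values
        lap = (up[:-2, 1:-1] + up[2:, 1:-1] +
               up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / h**2
        return (lap - 4.0 * (mu * u + u**3 - u**5)).ravel()
    return drift

drift = allen_cahn_drift(mu=-1.0)
r = drift(np.zeros(400))    # u = 0 remains an equilibrium after discretization
```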
PDE: Deterministic Numerical Continuation

[Figure: bifurcation diagram of ‖u‖² versus µ with branches R0-R4; insets (a)-(d) show solution profiles u(x, y) along the branches.]
SPDE: Stochastic Numerical Continuation

[Figure, panels (a)-(b): log-log plots of the variance against µ_f − µ near the bifurcation point.]

◮ Scaling law of the variance near the bifurcation point
◮ Link to early-warning signs
◮ Computation for SPDEs on a standard desktop computer
Overview

◮ Infinite-dimensional neural fields
◮ Numerical continuation methods for SODEs
◮ Numerics extends to SPDEs and SPIDEs

A general strategy:
- 1. Abstract stochastic analysis
- 2. Conversion into a deterministic numerical problem
- 3. Continuation and iterative methods

◮ See also: www.asc.tuwien.ac.at/∼ckuehn and arXiv.
Remark: Multiscale dynamics (almost) everywhere!
References

- K. Gowda and C. Kuehn. Warning signs for pattern-formation in SPDEs. Commun. Nonlinear Sci. Numer. Simul., 22(1):55-69, 2015.
- C. Kuehn. A mathematical framework for critical transitions: bifurcations, fast-slow systems and stochastic dynamics. Physica D, 240(12):1020-1035, 2011.
- C. Kuehn. Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids. SIAM J. Sci. Comput., 34(3):A1635-A1658, 2012.
- C. Kuehn. A mathematical framework for critical transitions: normal forms, variance and applications. J. Nonlinear Sci., 23(3):457-510, 2013.
- C. Kuehn. Numerical continuation and SPDE stability for the 2d cubic-quintic Allen-Cahn equation. arXiv:1408.4000, pages 1-26, 2014.
- C. Kuehn and M.G. Riedler. Large deviations for nonlocal stochastic neural fields. J. Math. Neurosci., 4(1):1-33, 2014.