Deep Learning for Partial Differential Equations William Wei - - PowerPoint PPT Presentation



SLIDE 1

Deep Learning for Partial Differential Equations

William Wei

National Center for Supercomputing Applications; Center for Artificial Intelligence Innovation; Department of Physics, University of Illinois Urbana-Champaign

Reference: http://www.dam.brown.edu/people/mraissi/research/1_physics_informed_neural_networks/

SLIDE 2

General formulation of partial differential equations (PDEs):

u_t + 𝒪[u; λ] = 0,  x ∈ Ω,  t ∈ [0, T]

where u(t, x) is the solution to the equation and 𝒪[u; λ] is a non-linear function of u(t, x) parameterized by λ.

Two problems are considered:

  • Data-driven solutions of PDEs
  • Data-driven discovery of PDEs
SLIDE 3

Data-driven solutions for PDEs: Continuous Time Models

Approximate the solution u(t, x) by a neural network taking (t, x) as input:

u(t, x) ≈ neural network(t, x)

For the PDE u_t + 𝒪[u; λ] = 0, define the residual

f := u_t + 𝒪[u; λ]

Minimize the loss function combining the PDE residual with the initial/boundary data mismatch.

SLIDE 4

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API, as on the slide

# neural_net, weights, biases are defined elsewhere in the full code
def u(t, x):
    u = neural_net(tf.concat([t, x], 1), weights, biases)
    return u

def f(t, x):
    u_val = u(t, x)  # renamed: `u = u(t, x)` would shadow the function u
    u_t = tf.gradients(u_val, t)[0]
    u_x = tf.gradients(u_val, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    f = u_t + u_val * u_x - (0.01 / np.pi) * u_xx  # np.pi: there is no tf.pi
    return f

SLIDE 5

Example: Burgers' equation

u_t + u u_x − (0.01/π) u_xx = 0,  x ∈ [−1, 1],  t ∈ [0, 1]

u(0, x) = −sin(πx),  u(t, −1) = u(t, 1) = 0

Neural network approximation: u(t, x) ≈ û_θ(t, x)

f(θ) := û_t + û û_x − (0.01/π) û_xx

L(θ) = ∥f(θ)∥² + ∥û(0, x) − u(0, x)∥² + ∥û(t, −1)∥² + ∥û(t, 1)∥²
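The loss terms above can be checked numerically. The snippet below is an illustrative sketch (not the slides' TensorFlow code): it plugs in a hypothetical trial function û(t, x) = −sin(πx)·e^{−t}, which satisfies the initial and boundary conditions exactly but not the PDE, and uses central finite differences in place of automatic differentiation:

```python
import numpy as np

def u_hat(t, x):
    # Hypothetical trial solution (NOT the true Burgers solution)
    return -np.sin(np.pi * x) * np.exp(-t)

def burgers_residual(t, x, h=1e-4):
    # f = u_t + u*u_x - (0.01/pi)*u_xx, derivatives by central differences
    u = u_hat(t, x)
    u_t = (u_hat(t + h, x) - u_hat(t - h, x)) / (2 * h)
    u_x = (u_hat(t, x + h) - u_hat(t, x - h)) / (2 * h)
    u_xx = (u_hat(t, x + h) - 2 * u + u_hat(t, x - h)) / h**2
    return u_t + u * u_x - (0.01 / np.pi) * u_xx

t = np.linspace(0.0, 1.0, 50)
x = np.linspace(-1.0, 1.0, 50)
T, X = np.meshgrid(t, x)

pde_loss = np.mean(burgers_residual(T, X) ** 2)
ic_loss = np.mean((u_hat(0.0, x) - (-np.sin(np.pi * x))) ** 2)
bc_loss = np.mean(u_hat(t, -1.0) ** 2 + u_hat(t, 1.0) ** 2)

print(pde_loss, ic_loss, bc_loss)  # ic_loss and bc_loss ~ 0; pde_loss is not
```

A network trained on L(θ) drives all three terms toward zero simultaneously; here only the data terms vanish, so the trial function would incur a large PDE-residual penalty.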

SLIDE 6

SLIDE 7

SLIDE 8

Data-driven solutions for PDEs: Discrete Time Models

Discretize u_t + 𝒪[u; λ] = 0 in time: u(t) ≈ {u^0, u^1, ..., u^n, ..., u^N}

Euler method:

u^{n+1} = u^n − Δt 𝒪[u^n; λ]

General Runge-Kutta methods with q stages:

u^{n+c_i} = u^n − Δt Σ_{j=1}^{q} a_{ij} 𝒪[u^{n+c_j}],  i = 1, ..., q

u^{n+1} = u^n − Δt Σ_{j=1}^{q} b_j 𝒪[u^{n+c_j}]
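The Euler update is easy to exercise on a toy problem. The sketch below (my illustration, not from the slides) takes 𝒪[u] = u, so the equation degenerates to the ODE u_t + u = 0 with exact solution u(t) = u^0·e^{−t}:

```python
import math

def euler_solve(u0, dt, n_steps):
    # u^{n+1} = u^n - dt * O[u^n]  with  O[u] = u
    u = u0
    for _ in range(n_steps):
        u = u - dt * u
    return u

u0, T, N = 1.0, 1.0, 1000
approx = euler_solve(u0, T / N, N)
exact = u0 * math.exp(-T)
print(approx, exact)  # close: Euler is first-order, error O(dt)
```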

SLIDE 9

Rearranging the Runge-Kutta equations, define

u^n_i := u^{n+c_i} + Δt Σ_{j=1}^{q} a_{ij} 𝒪[u^{n+c_j}],  i = 1, ..., q

u^n_{q+1} := u^{n+1} + Δt Σ_{j=1}^{q} b_j 𝒪[u^{n+c_j}]

Each of these q + 1 expressions must recover the same u^n:

u^n = u^n_i,  i = 1, ..., q

u^n = u^n_{q+1}
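These identities can be verified numerically for a scalar linear problem. The sketch below (my illustration, assuming 𝒪[u] = u and the q = 1 implicit midpoint rule with c₁ = 1/2, a₁₁ = 1/2, b₁ = 1) solves the stage equation in closed form and checks that both reconstructions give back the same u^n:

```python
dt, u_n = 0.1, 2.0      # illustrative step size and current state
a11, b1 = 0.5, 1.0      # implicit midpoint Butcher coefficients

# Stage equation u^{n+c1} = u^n - dt*a11*O[u^{n+c1}], with O[u] = u,
# solved exactly for the stage value:
u_stage = u_n / (1 + dt * a11)       # u^{n+c1}
u_next = u_n - dt * b1 * u_stage     # u^{n+1}

# Reconstructions of u^n from the stage value and from the update:
u_n_1 = u_stage + dt * a11 * u_stage   # u^n_1
u_n_q1 = u_next + dt * b1 * u_stage    # u^n_{q+1}

print(u_n_1, u_n_q1)  # both equal u_n (up to roundoff)
```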

SLIDE 10

A multi-output neural network with input x produces

[u^{n+c_1}(x), u^{n+c_2}(x), ..., u^{n+c_q}(x), u^{n+1}(x)]

and, through

u^n_i := u^{n+c_i} + Δt Σ_{j=1}^{q} a_{ij} 𝒪[u^{n+c_j}],  i = 1, ..., q

u^n_{q+1} := u^{n+1} + Δt Σ_{j=1}^{q} b_j 𝒪[u^{n+c_j}]

yields the q + 1 predictions

[u^n_1(x), u^n_2(x), ..., u^n_q(x), u^n_{q+1}(x)]
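Such a network can be sketched as a small MLP whose single input is x and whose q + 1 outputs stand in for u^{n+c_1}(x), ..., u^{n+c_q}(x), u^{n+1}(x). A minimal forward pass with random weights (purely illustrative; all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 4          # number of Runge-Kutta stages (illustrative)
hidden = 32    # hidden width (illustrative)

# One-hidden-layer MLP: R^1 -> R^{q+1}
W1 = rng.standard_normal((1, hidden))
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, q + 1))
b2 = np.zeros(q + 1)

def net(x):
    # x: (N, 1) column of spatial points; returns (N, q+1):
    # q stage values plus u^{n+1} at each x
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

x = np.linspace(-1.0, 1.0, 10).reshape(-1, 1)
out = net(x)
print(out.shape)  # (10, 5)
```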

SLIDE 11

Training enforces the consistency conditions

u^n = u^n_i,  i = 1, ..., q

u^n = u^n_{q+1}

Temporal error accumulation: O(Δt^{2q})

Example: Δt = 0.8, q = 500  ⇒  Δt^{2q} = 0.8^{1000} ≈ 10^{−97}
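The arithmetic behind this striking claim is easy to confirm:

```python
import math

dt, q = 0.8, 500
err = dt ** (2 * q)      # 0.8**1000
print(err)               # ≈ 1.2e-97
print(math.log10(err))   # ≈ -96.9
```

Even with the large step Δt = 0.8, the O(Δt^{2q}) temporal error of a q = 500 stage scheme is negligible, which is why a single large time step can suffice.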

SLIDE 12