SLIDE 1

A Potpourri of Nonlinear Algebra

Chris Hillar

ICERM | Computational Nonlinear Algebra | June 2014


SLIDE 2

Outline

Computational complexity of nonlinear algebra

"Real-life" examples:

Neuroscience: The Retina Equations
  • bipartite graphs, probability, matrix analysis

Tensor problems
  • graph theory, optimization, Groebner bases
SLIDE 3

Computational Nonlinear Algebra

Problem: Solve, on a finite computer in finite time, a finite set of polynomial (quadratic) equations.

ring | computability                 | reference
Z    | Undecidable ("Uncomputable")  | [Hilbert's 10th Problem] [Davis, Putnam, Robinson, Matijasevič '61/'70]
Q    | ?????                         | [Poonen '03]
R    | Decidable ("Computable")      | [Tarski-Seidenberg]
C    | Decidable ("Computable")      | [Hironaka '64, Buchberger '70]

SLIDE 4

Some "Random" Polynomial Systems:

a) A cubic system:

$$a_1a_2a_3 + c_1c_2c_3 = 2,\quad a_1a_3b_2 + c_1c_3d_2 = 0,\quad a_2a_3b_1 + c_2c_3d_1 = 0,\quad a_3b_1b_2 + c_3d_1d_2 = -4,$$
$$a_1a_2b_3 + c_1c_2d_3 = 0,\quad a_1b_2b_3 + c_1d_2d_3 = -4,\quad a_2b_1b_3 + c_2d_3d_1 = 4,\quad b_1b_2b_3 + d_1d_2d_3 = 0.$$

b) A quadratic system (35 homogeneous quadratics):

$$a_ic_i - b_id_i - u^2,\quad b_ic_i + a_id_i,\quad c_iu - a_i^2 + b_i^2,\quad d_iu - 2a_ib_i,\quad a_iu - c_i^2 + d_i^2,\quad b_iu - 2d_ic_i,\qquad i = 1,\dots,4,$$
$$a_1^2 - b_1^2 + a_1a_3 - b_1b_3 + a_3^2 - b_3^2,\quad a_1^2 - b_1^2 + a_1a_4 - b_1b_4 + a_4^2 - b_4^2,\quad a_1^2 - b_1^2 + a_1a_2 - b_1b_2 + a_2^2 - b_2^2,$$
$$a_2^2 - b_2^2 + a_2a_3 - b_2b_3 + a_3^2 - b_3^2,\quad a_3^2 - b_3^2 + a_3a_4 - b_3b_4 + a_4^2 - b_4^2,$$
$$2a_1b_1 + a_1b_2 + a_2b_1 + 2a_2b_2,\quad 2a_2b_2 + a_2b_3 + a_3b_2 + 2a_3b_3,\quad 2a_1b_1 + a_1b_3 + a_3b_1 + 2a_3b_3,$$
$$2a_1b_1 + a_1b_4 + a_4b_1 + 2a_4b_4,\quad 2a_3b_3 + a_3b_4 + a_4b_3 + 2a_4b_4,\quad w_1^2 + w_2^2 + \cdots + w_{17}^2 + w_{18}^2.$$

c) A bilinear system:

$$a_{000}x_0y_0 + a_{010}x_0y_1 + a_{100}x_1y_0 + a_{110}x_1y_1 = 0,\quad a_{001}x_0y_0 + a_{011}x_0y_1 + a_{101}x_1y_0 + a_{111}x_1y_1 = 0,$$
$$a_{000}x_0z_0 + a_{001}x_0z_1 + a_{100}x_1z_0 + a_{101}x_1z_1 = 0,\quad a_{010}x_0z_0 + a_{011}x_0z_1 + a_{110}x_1z_0 + a_{111}x_1z_1 = 0,$$
$$a_{000}y_0z_0 + a_{001}y_0z_1 + a_{010}y_1z_0 + a_{011}y_1z_1 = 0,\quad a_{100}y_0z_0 + a_{101}y_0z_1 + a_{110}y_1z_0 + a_{111}y_1z_1 = 0.$$

SLIDE 5

A Briefer on Computational Complexity

Alan Turing

  • I. Model of Computation
  • What are inputs / outputs?
  • What is a computation?
  • II. Model of Complexity
  • Cost of computation?
  • III. Model of Reducibility
  • What are equivalent problems?

Dick Karp Stephen Cook Leonid Levin

SLIDE 6

(Figure: the landscape of computational problems: P inside NP; NP-complete and NP-hard problems; matrix problems tend to lie in P, tensor problems NP-hard.)

(Photo: Turing machine replica built by Mike Davey.)

  • I. Model of Computation:

Turing Machine [Turing 1936]. Inputs: a finite list of rational numbers. Outputs: YES/NO or rational vectors.

  • II. Model of Complexity:

Time complexity: the number of tape-level moves.

  • III. Model of Reducibility:

Classes: P (polynomial-time), NP, NP-complete, NP-hard, ...

(Diagram: within the world of all computational problems, a problem P1 on input I reduces to a problem P2 on input I' via a polynomial-sized transformation that preserves the YES/NO answer.)

SLIDE 7

NP-complete decision problems

[Cook-Karp-Levin 1971/2]

Graph coloring: Given a graph G, is there a proper 3-coloring? This is an NP-complete problem: a YES answer can be verified quickly.

(Figure: two graphs on vertices 1, 2, 3, 4, one 3-colorable (YES) and one not (NO).)

Resolving P vs. NP carries a $1 million prize (Clay Math).

SLIDE 8

Connection to nonlinear algebra

Theorem [Bayer '82]: Whether or not a graph is 3-colorable can be encoded as whether a system of cubic equations over $\mathbb{C}$ has a nonzero solution.

Reformulation [H., Lim '13]: Whether or not a graph G on v vertices with edge set E is 3-colorable can be encoded as whether the following homogeneous quadratics have a nonzero solution over $\mathbb{C}$:

$$C_G = \Big\{\; x_iy_i - u^2,\ \ y_iu - x_i^2,\ \ x_iu - y_i^2,\quad i = 1,\dots,v;\qquad \sum_{j:\{i,j\}\in E} \big(x_i^2 + x_ix_j + x_j^2\big),\quad i = 1,\dots,v \;\Big\}.$$

Substituting $x_i = a_i + ib_i$, $y_i = c_i + id_i$ turns this into a quadratic system over the reals $\mathbb{R}$.

SLIDE 9

Example: The following graph on vertices 1, 2, 3, 4 is 3-colorable: assign each vertex a color $x_i \in \{1, \omega, \omega^2\}$, where $\omega$ is a primitive cube root of 1.

b) Accordingly, system b) above, 35 homogeneous quadratics in 35 indeterminates (equivalently, a tensor $A \in \mathbb{Q}^{35\times 35\times 35}$), has a nonzero solution over the reals.

SLIDE 10

Example: The following graph on vertices 1, 2, 3, 4 (the previous graph with edge {2, 4} added) is not 3-colorable. Adding the polynomials for the new edge,

$$a_2^2 - b_2^2 + a_2a_4 - b_2b_4 + a_4^2 - b_4^2,\qquad 2a_2b_2 + a_2b_4 + a_4b_2 + 2a_4b_4,$$

to system b) yields a system with no nonzero solution over $\mathbb{R}$.

SLIDE 11

Example: The graph G below is uniquely 3-colorable [example of Akbari, Mirrokni, Sadjad '01 disproving a conjecture of Xu '90]. The coloring ideal $I_G$ is trivial (< 2 sec computation) [H., Windfeldt '08].

SLIDE 12

Tensor eigenvalues

Problem: Given $A = [[a_{ijk}]] \in \mathbb{Q}^{n\times n\times n}$, find $(x, \lambda)$ with $x \neq 0$ such that:

$$\sum_{i,j=1}^{n} a_{ijk}\,x_ix_j = \lambda x_k, \qquad k = 1, \dots, n.$$

[Lim 2005], [Qi 2005], [Ni et al. 2007], [Qi 2007], [Cartwright and Sturmfels 2012]

Some facts:
  • Generic or random tensors over the complex numbers have a finite number of eigenvalues and eigenvectors (up to scaling equivalence), although their count is exponential in n.
  • Still, it is possible for a tensor to have an infinite number of non-equivalent eigenvalues; in that case they comprise a cofinite set of complex numbers.
  • Over the reals, every 3-tensor has a real eigenpair.
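A hedged illustration of the definition (code and example tensor are mine): for the diagonal tensor with $a_{iii} = 1$, every standard basis vector $e_k$ satisfies the eigenpair equations with $\lambda = 1$.

```python
import numpy as np

n = 3
A = np.zeros((n, n, n))
for i in range(n):
    A[i, i, i] = 1.0  # "identity-like" diagonal tensor

def tensor_map(A, x):
    # (A x x)_k = sum_{i,j} a_{ijk} x_i x_j
    return np.einsum("ijk,i,j->k", A, x, x)

x = np.zeros(n)
x[0] = 1.0
print(tensor_map(A, x))  # equals 1.0 * x, so (x, 1) is an eigenpair
```

For a generic tensor there is no such closed-form eigenpair, which is precisely where the computational questions below begin.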

SLIDE 13

Decision problem

Decidable (Computable on a Turing machine):

  • Quantifier elimination
  • Buchberger’s algorithm and Groebner bases
  • Multivariate resultants

All of these quickly become inefficient as n grows. Is there an efficient algorithm?

Problem: Given $A = [[a_{ijk}]] \in \mathbb{Q}^{n\times n\times n}$ and $\lambda \in \mathbb{Q}$, does there exist $0 \neq x \in \mathbb{C}^n$ such that

$$\sum_{i,j=1}^{n} a_{ijk}\,x_ix_j = \lambda x_k, \qquad k = 1, \dots, n\,?$$

SLIDE 14

No, because quadratic equations are hard to solve [Bayer 1982], [Lovász 1994], [Grenet et al. 2010], ...

(Diagram: matrix problems tend to sit in P; tensor problems tend to be NP-hard.)

Corollary: Deciding if $\lambda = 0$ is a tensor eigenvalue is NP-hard.

Corollary: Unless P = NP, there is no polynomial-time approximation scheme for finding tensor eigenvectors to within $\epsilon = 3/4$.

(Figure: the cube roots of unity $(1, 0)$, $(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2})$, $(-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2})$, used as the colors $x_i \in \{1, \omega, \omega^2\}$ in the hardness reduction.)

SLIDE 15

Computational complexity of tensor problems

[H., Lim ’13]

SLIDE 16

Tensor Rank

Definition: The rank of a tensor over a field $\mathbb{F}$ is

$$\mathrm{rank}_{\mathbb{F}}(A) = \min_r \Big\{\, r : A = \sum_{i=1}^{r} x_i \otimes y_i \otimes z_i \,\Big\}.$$

Rank-1 tensors $A = [[x_iy_jz_k]] = x \otimes y \otimes z$ with $x, y, z \in \mathbb{F}^n$ form the Segre variety.

Theorem [Håstad '90]: Tensor rank is NP-hard over $\mathbb{Q}$.

Note: rank can change when the field changes (not true in linear algebra).

Question: Is tensor rank different over the reals / the complex numbers?

SLIDE 17

a) Theorem [H., Lim '13]: There are rational tensors with different rank over the rationals versus the reals, i.e., $\mathrm{rank}_{\mathbb{R}}(A) < \mathrm{rank}_{\mathbb{Q}}(A)$. Take

$$A = z \otimes z \otimes z + \bar{z} \otimes \bar{z} \otimes \bar{z}, \qquad z = x + \sqrt{2}\,y,\quad \bar{z} = x - \sqrt{2}\,y,\quad x = [1, 0]^\top,\ y = [0, 1]^\top.$$

The cubic system a) above has no solution over the rationals [verified in Singular, Macaulay2, Maple, ...].
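A small sketch (assumptions mine, not the paper's code) of the phenomenon behind this theorem: the tensor $A = z \otimes z \otimes z + \bar{z} \otimes \bar{z} \otimes \bar{z}$ has integer entries even though both rank-one factors are irrational, because the $\sqrt{2}$ terms cancel.

```python
import numpy as np

s = np.sqrt(2.0)
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
z = x + s * y    # [1,  sqrt(2)]
zb = x - s * y   # [1, -sqrt(2)]

def cube(v):
    # Rank-one symmetric tensor v (x) v (x) v.
    return np.einsum("i,j,k->ijk", v, v, v)

A = cube(z) + cube(zb)
print(np.round(A, 10))  # every entry is an integer: the irrational parts cancel
```

So $A$ is a rational tensor admitting a rank-2 decomposition over $\mathbb{R}$ whose factors are irrational; the theorem says no rank-2 decomposition exists over $\mathbb{Q}$.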

SLIDE 18

c) The 2 x 2 x 2 hyperdeterminant of a tensor $A$ is zero exactly when there exists $(x, y, z) \neq 0$ solving the bilinear system c) above [Cayley 1845]. (It is the defining equation of the dual variety to the Segre variety.)

$$\mathrm{Det}_{2,2,2}(A) = \frac{1}{4}\left[\det\begin{pmatrix} a_{000}+a_{100} & a_{010}+a_{110} \\ a_{001}+a_{101} & a_{011}+a_{111} \end{pmatrix} - \det\begin{pmatrix} a_{000}-a_{100} & a_{010}-a_{110} \\ a_{001}-a_{101} & a_{011}-a_{111} \end{pmatrix}\right]^2 - 4\det\begin{pmatrix} a_{000} & a_{010} \\ a_{001} & a_{011} \end{pmatrix}\det\begin{pmatrix} a_{100} & a_{110} \\ a_{101} & a_{111} \end{pmatrix}.$$

Conjecture: It is NP-hard to compute the hyperdeterminant.
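A numeric check of Cayley's formula (function names mine): evaluate the hyperdeterminant both via the determinant formula above and via its classical expanded polynomial, confirm they agree on a random tensor, and confirm it vanishes on a (degenerate) rank-one tensor.

```python
import numpy as np

def hyperdet_slices(A):
    # Cayley's formula in terms of the two slices a_{0jk} and a_{1jk}.
    A0, A1 = A[0], A[1]
    d = np.linalg.det
    return 0.25 * (d(A0 + A1) - d(A0 - A1)) ** 2 - 4 * d(A0) * d(A1)

def hyperdet_expanded(A):
    # The same polynomial, fully expanded.
    a = A
    sq = (a[0,0,0]*a[1,1,1] + a[0,1,1]*a[1,0,0]
          - a[0,1,0]*a[1,0,1] - a[0,0,1]*a[1,1,0])
    return (sq ** 2
            - 4 * (a[0,0,0]*a[0,1,1] - a[0,0,1]*a[0,1,0])
                * (a[1,0,0]*a[1,1,1] - a[1,0,1]*a[1,1,0]))

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2, 2))
print(np.isclose(hyperdet_slices(A), hyperdet_expanded(A)))  # True

# Rank-one tensors are degenerate, so the hyperdeterminant vanishes on them:
x, y, zv = rng.standard_normal((3, 2))
R = np.einsum("i,j,k->ijk", x, y, zv)
print(np.isclose(hyperdet_slices(R), 0.0))  # True
```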

SLIDE 19

and now for something completely different ...

SLIDE 20

Neuroscience Motivation: Spike Coding of Continuous Signals

(Figure: a continuous signal in the world enters a neural sensor circuit, the retina, and leaves as binary spike trains of the ganglion neurons, e.g. 0100011, 1010011.)

Santiago Ramón y Cajal

SLIDE 21

Shannon Entropy (1948)

Claude Shannon

Father of “Entropy Theory”

"Theseus", the first robot mouse

SLIDE 22

Given a distribution $p = (p_1, \dots, p_N)$ on a finite number of states, where $p_i$ is the probability of being in state $i$:

Definition: The entropy of the distribution is

$$H(p) = \sum_{i=1}^{N} p_i \log \frac{1}{p_i}.$$

  • entropy is a measure of the uncertainty in a random variable
  • entropy provides an absolute limit on the best possible lossless encoding or compression of a communication
SLIDE 23

The entropy of any distribution on N states is bounded above by $\log N$:

$$H(p) = \sum_{i=1}^{N} p_i \log \frac{1}{p_i} \;\leq\; \log N.$$

SLIDE 24

Example: Flipping coins. A fair coin flip has 1 bit of entropy or information (n fair coin flips have n bits):

$$p_H = \frac{1}{2},\quad p_T = \frac{1}{2},\qquad H(p_H, p_T) = \frac{1}{2}\log 2 + \frac{1}{2}\log 2 = 1.$$

Example: The uniform distribution has the highest entropy:

$$p = \Big(\frac{1}{N}, \dots, \frac{1}{N}\Big),\qquad H(p) = \log N.$$
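Both examples can be checked directly (helper function mine), computing entropy in bits, i.e. with base-2 logarithms:

```python
import math

def entropy(p):
    # H(p) = sum_i p_i * log2(1/p_i), skipping zero-probability states.
    return sum(pi * math.log2(1.0 / pi) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))     # fair coin: 1.0 bit
N = 8
print(entropy([1.0 / N] * N))  # uniform on N = 8 states: log2(8) = 3.0 bits
```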

SLIDE 25

Example: Compressing letters over the interwebs:

(Bar chart: probability of each letter e, t, a, o, i, n, s, h, r, d, l, c, u, m, w, f, g, y, p, b, v, k, j, x, q, z appearing in the Oxford English Dictionary.)

H = 4.2 bits (per letter)

  • So it takes only 4.2 < 4.7 = log(26) bits per character to code English letters
  • E.g., e = "0", t = "01", a = "10", ...
SLIDE 26
Maximum Entropy

  • Statistical Physics (Jaynes, 1957)
  • Biology, Neuroscience (Bialek)

The MaxEnt model is the most random / generic distribution consistent with given constraints / measurements.

(Diagram: a statistical process produces measurements; the measurements constrain a MaxEnt model; new measurements are compared against the model's predictions.)

SLIDE 27

Example: Neural Processing

(Diagram: input drives a statistical process; a neural circuit learns, during development, a MaxEnt model from measurements, then computes an output from a new measurement.)

What about computation?

SLIDE 28

Distributions on graphs

Example: Erdős-Rényi (ER) distribution on graphs: an i.i.d. Bernoulli distribution with parameter p on each edge. For p = 1/2 on vertices 1, 2, 3, each of the 8 possible graphs occurs with probability 1/8.

SLIDE 29

Maximum Entropy for graphs

Example: The maximum entropy distribution on graphs is the ER distribution with p = 1/2.

SLIDE 30

Expected degree sequence

Given a distribution on graphs (with edge indicators $w_{ij}$), we can compute an expected degree sequence:

$$\mathbb{E}[d_i] = \mathbb{E}\Big[\sum_{j \neq i} w_{ij}\Big].$$

Example: For ER with p = 1/2, $\mathbb{E}[d_i] = \frac{n-1}{2}$.
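A quick simulation of this example (code mine): sample many ER(p = 1/2) neighborhoods and check that the average degree of a vertex is close to (n - 1)/2.

```python
import random

random.seed(0)
n, p, trials = 6, 0.5, 20000
total = 0
for _ in range(trials):
    # Degree of vertex 0 = number of present edges {0, j}, j = 1..n-1.
    total += sum(1 for j in range(1, n) if random.random() < p)
print(total / trials)  # close to (n - 1)/2 = 2.5
```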

SLIDE 31

Maximum Entropy for graphs

Problem: Given an expected degree sequence d, what is the maximum entropy distribution on graphs with this expected degree sequence?

Answer (classical): Include each edge $\{i, j\}$ independently with probability $\frac{1}{1 + e^{\theta_i + \theta_j}}$; the case $\theta = 0$ recovers Erdős-Rényi. For three vertices:

$$d_1 = \frac{1}{1 + e^{\theta_1+\theta_3}} + \frac{1}{1 + e^{\theta_1+\theta_2}},\quad d_2 = \frac{1}{1 + e^{\theta_2+\theta_3}} + \frac{1}{1 + e^{\theta_2+\theta_1}},\quad d_3 = \frac{1}{1 + e^{\theta_3+\theta_1}} + \frac{1}{1 + e^{\theta_3+\theta2}}.$$
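A sanity check of the classical answer (code mine): with $\theta = 0$ every edge probability is $1/(1 + e^0) = 1/2$, so the expected degrees reduce to the Erdős-Rényi value $(n-1)/2$.

```python
import math

def expected_degrees(theta):
    # d_i = sum_{j != i} 1 / (1 + exp(theta_i + theta_j))
    n = len(theta)
    return [sum(1.0 / (1.0 + math.exp(theta[i] + theta[j]))
                for j in range(n) if j != i)
            for i in range(n)]

print(expected_degrees([0.0, 0.0, 0.0]))  # [1.0, 1.0, 1.0] = (3 - 1)/2 each
```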

SLIDE 32

Chatterjee-Diaconis-Sly (2011)

Persi Diaconis

Theorem: One sample of a graph from such a maximum entropy distribution determines the distribution for large n.

(Plot: MLE estimate vs. true $\theta^*$, n = 300, r = 2.)

SLIDE 33

Application: Binary Coding of Continuous Signals

(Diagram: from a single graph sample drawn with original parameters $\theta$, compute the "empirical" degree sequence $\hat{d}$; inverting the expected-degree map $d$ reconstructs $\hat{\theta} \approx \theta$.)

SLIDE 34

What about graphs whose edges take [H., Wibisono]:
  • r different values {0, ..., r-1}
  • nonnegative integer values {0, 1, ...}
  • nonnegative real values?

In the nonnegative real case, the edges are exponential random variables with means $\frac{1}{\theta_i + \theta_j}$, so

$$d_i = \sum_{j \neq i} \frac{1}{\theta_i + \theta_j}.$$

SLIDE 35

Bernd Sturmfels

Sanyal-Sturmfels-Vinzant (2013): The "Retina Equations"

$$d_i = \sum_{j \neq i} \frac{1}{\theta_i + \theta_j}, \qquad i = 1, \dots, N. \qquad (1)$$

Studied (more generally) using matroid theory and algebraic geometry.

SLIDE 36

Theorem [H., Wibisono '13]: There is almost surely a unique nonnegative solution to any retina equation. Moreover, given one sample from a graph distribution, solving the equations recovers the original parameters for large n: with high probability,

$$|\theta - \hat{\theta}|_\infty \leq C\sqrt{\frac{\log n}{n}}.$$

Proof ingredients: (1) large deviation theory; (2) inverses of a special class of matrices (positive symmetric diagonally dominant matrices).
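In practice the retina equations can be solved numerically. The sketch below (solver and test parameters are mine; the paper's algorithm may differ) recovers $\theta$ from a degree sequence $d$ by Newton's method, using the explicit Jacobian of the map $F(\theta)_i = \sum_{j\neq i} 1/(\theta_i + \theta_j)$.

```python
import numpy as np

def F(theta):
    # F(theta)_i = sum_{j != i} 1 / (theta_i + theta_j)
    S = theta[:, None] + theta[None, :]
    np.fill_diagonal(S, np.inf)  # exclude j = i terms (1/inf = 0)
    return (1.0 / S).sum(axis=1)

def jacobian(theta):
    S = theta[:, None] + theta[None, :]
    np.fill_diagonal(S, np.inf)
    J = -1.0 / S ** 2                   # dF_i/dtheta_j for j != i
    np.fill_diagonal(J, J.sum(axis=1))  # dF_i/dtheta_i = -sum_{j != i} 1/(.)^2
    return J

theta_true = np.array([1.0, 2.0, 3.0])
d = F(theta_true)  # degrees generated from known parameters

theta = np.ones(3)  # Newton iteration for F(theta) = d
for _ in range(50):
    theta = theta + np.linalg.solve(jacobian(theta), d - F(theta))
print(theta)  # approximately [1. 2. 3.]
```

Note the Jacobian here is exactly a negated positive symmetric diagonally dominant matrix, which is where the matrix theorem on the next slide enters.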

SLIDE 37

Diagonally Dominant Matrices

Definition: A positive matrix is diagonally dominant if each off-diagonal row sum is at most the diagonal entry, e.g.

$$J = \begin{pmatrix} 6 & 1 & 2 & 3 \\ 1 & 8 & 1 & 5 \\ 2 & 1 & 4 & 1 \\ 3 & 5 & 1 & 10 \end{pmatrix}.$$

Theorem [H., Lin, Wibisono]: For a positive, symmetric, diagonally dominant n x n matrix J with smallest entry $\geq 1$:

$$\|J^{-1}\|_\infty \leq \|S_n^{-1}\|_\infty = \frac{3n-4}{2(n-2)(n-1)}, \qquad S_4 = \begin{pmatrix} 3 & 1 & 1 & 1 \\ 1 & 3 & 1 & 1 \\ 1 & 1 & 3 & 1 \\ 1 & 1 & 1 & 3 \end{pmatrix}.$$
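A numeric check of this bound (code mine): for n = 4 the extremal matrix $S_4$ attains $(3n-4)/(2(n-2)(n-1)) = 8/12 = 2/3$, and the example matrix J above satisfies the bound.

```python
import numpy as np

def inf_norm_of_inverse(M):
    # ||M^{-1}||_inf = maximum absolute row sum of the inverse.
    return np.abs(np.linalg.inv(M)).sum(axis=1).max()

n = 4
S4 = np.ones((n, n)) + 2 * np.eye(n)  # 3 on the diagonal, 1 elsewhere
J = np.array([[6, 1, 2, 3],
              [1, 8, 1, 5],
              [2, 1, 4, 1],
              [3, 5, 1, 10]], dtype=float)

bound = (3 * n - 4) / (2 * (n - 2) * (n - 1))
print(np.isclose(inf_norm_of_inverse(S4), bound))  # True: S4 is extremal
print(inf_norm_of_inverse(J) <= bound + 1e-12)     # True: J obeys the bound
```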

SLIDE 38

(Figure: three example graphs G, two bipartite and one non-bipartite.)

$$N = \lim_{t \to \infty} (S + tP)^{-1}$$

(P is the signless Laplacian of G.)

SLIDE 39

Proof sketch: Write $F(\theta) = (d_1, \dots, d_n)$ for the map $d_i = \sum_{j \neq i} \frac{1}{\theta_i + \theta_j}$, so that $F(\theta) \approx F(\hat{\theta}) + J \cdot (\theta - \hat{\theta})$, where J is the Jacobian of F. Combining large deviation theory (a strengthening of the central limit theorem) with the matrix theorem:

$$|\theta - \hat{\theta}|_\infty \;\leq\; |J^{-1}|_\infty\,|d - \hat{d}|_\infty \;\leq\; \frac{C}{n}\sqrt{n \log n} \;\leq\; C\sqrt{\frac{\log n}{n}}.$$

SLIDE 40

(Diagram: $\theta \xrightarrow{F} d$ and $\hat{d} \xrightarrow{F^{-1}} \hat{\theta}$.)

  • $\theta$: original parameters of the graph distribution
  • $d$: expected degree sequence of the graph distribution
  • $\hat{d}$: degrees computed from a single graph sample
  • $\hat{\theta}$: parameters inferred from the sampled degrees

$$|\theta - \hat{\theta}|_\infty \;\leq\; |J^{-1}|_\infty\,|d - \hat{d}|_\infty \;\leq\; C\sqrt{\frac{\log n}{n}}.$$

SLIDE 41

Problems: Solve these other "Retina Equations".

For maximum entropy graph distributions with binary edges:

$$d_i = \sum_{j \neq i} \frac{1}{\theta_i\theta_j + 1}, \qquad i = 1, \dots, N;$$

with nonnegative integer edges:

$$d_i = \sum_{j \neq i} \frac{1}{\theta_i\theta_j - 1}, \qquad i = 1, \dots, N;$$

and others ...

SLIDE 42

Mathematical Sciences Research Institute
National Science Foundation
Redwood Center for Theoretical Neuroscience (U.C. Berkeley)