
SLIDE 1

Relaxation and Hopfield Networks

Neural Networks

SLIDE 2

Bibliography

Hopfield, J. J., "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy

  • f Sciences 79:2554-2558, 1982.

Hopfield, J. J., "Neurons with graded response have collective computational properties like those of two-state neurons." Proceedings of the National Academy of Sciences 81: 3088-3092, 1984. Abu-Mostafa, and J. St. Jacques, Information Capacity of the Hopfield Model, IEEE Trans. on Information Theory, Vol. IT-31, No. 4, 1985.

SLIDE 3

Hopfield Networks

  • Relaxation
  • Totally connected
  • Bidirectional links (symmetric weights)
  • Auto-associator

[Figure: fully connected network with binary unit states]

  • Energy landscape formed by the weight settings
  • No learning - weights are programmed through an energy function

SLIDE 4

Early Hopfield

  • Each unit is a threshold unit with outputs in {0, 1}
  • Real-valued weights
  • Update rule: $V_j = 1$ if $\sum_i T_{ij} V_i + I_j > 0$, else $V_j = 0$

More recent models use a sigmoid rather than a hard threshold. They are similar in overall functionality, but the sigmoid gives improved performance. A comparison is sketched below.
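The contrast is easy to see numerically. A minimal sketch (function names and the gain value are my own choices, not from the slides):

```python
# Early model: hard threshold unit with discrete {0,1} outputs.
# Later model: graded sigmoid unit; the gain here is arbitrary.
import numpy as np

def threshold_unit(net):
    return 1.0 if net > 0 else 0.0

def sigmoid_unit(net, gain=5.0):
    return 1.0 / (1.0 + np.exp(-gain * net))

for net in (-0.4, -0.1, 0.1, 0.4):
    print(f"net={net:+.1f}  threshold={threshold_unit(net):.0f}  "
          f"sigmoid={sigmoid_unit(net):.3f}")
```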

SLIDE 5

System Energy equation

$$E = -\frac{1}{2}\sum_{i,j}^{n} T_{ij} V_i V_j \;-\; \sum_{j=0}^{n} I_j V_j$$

T: weights; V: outputs; I: bias

Correct correlation gives lower system energy. Thus, the minima must have the proper correlations fitting the weights.
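The energy is straightforward to compute. A minimal sketch using the slide's T/V/I notation (the toy weights are made up for illustration):

```python
import numpy as np

def energy(T, V, I):
    """E = -1/2 * sum_ij T_ij V_i V_j - sum_j I_j V_j"""
    return -0.5 * V @ T @ V - I @ V

# Two mutually excitatory units: the correlated state (1,1) has the
# lowest energy, illustrating "correct correlation gives lower energy".
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
I = np.zeros(2)
for state in ([0, 0], [0, 1], [1, 0], [1, 1]):
    V = np.array(state, dtype=float)
    print(state, energy(T, V, I))
```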

SLIDE 6

Programming the Hopfield Network

  • Derive a proper energy function
  • Stable local minima represent good states (memories)
  • Set connectivity and weights to match the energy function

SLIDE 7

Relaxation and Energy Contours

[Figure-only slide]

SLIDE 8

When does a node update?

$$V_j = \begin{cases} 1 & \text{if } \sum_i T_{ij} V_i + I_j > 0 \\ 0 & \text{otherwise} \end{cases}$$

  • Continuous update - the real (physical) system
  • Random update - discrete simulation; if updates are not random, oscillations can occur

Processing: start the system in an initial state (random, partial, or total); it will relax to the nearest stable minimum.
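A minimal sketch of this random, asynchronous relaxation, assuming the threshold update rule above (function and parameter names are mine):

```python
import numpy as np

def relax(T, I, V, rng, max_sweeps=100):
    """Asynchronously update randomly chosen units until no unit changes."""
    n = len(V)
    for _ in range(max_sweeps):
        changed = False
        for j in rng.permutation(n):          # random update order
            new_vj = 1.0 if T[j] @ V + I[j] > 0 else 0.0
            if new_vj != V[j]:
                V[j] = new_vj
                changed = True
        if not changed:                       # a stable minimum: done
            return V
    return V
```

Each accepted flip can only lower the energy defined earlier (for symmetric T with zero diagonal), which is why the state settles rather than oscillating.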

SLIDE 9

What are the stable minima in the following Hopfield network, assuming bipolar states? Each unit is a threshold unit (1 if net input > 0, else -1).

[Figure: small Hopfield network with numbered units and weight labels]

What would the weights be set to for an associative-memory Hopfield net programmed to remember the following patterns? Would the net be accurate?

a) 1 0 0 1   0 1 1 0
b) 1 0 1 1   0 1 1 1   1 1 1 0   0 0 0 1
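For the first exercise, the stable states of a small bipolar network can be found by brute force. A minimal sketch; the weight matrix below is hypothetical, since the slide's actual weights were in the lost figure:

```python
import itertools
import numpy as np

W = np.array([[ 0.0,  1.0, -2.0],
              [ 1.0,  0.0,  3.0],
              [-2.0,  3.0,  0.0]])    # hypothetical symmetric weights

def is_stable(W, V):
    """Stable iff every unit's update (1 if net > 0, else -1) keeps its value."""
    net = W @ V
    return all((1 if net[j] > 0 else -1) == V[j] for j in range(len(V)))

for bits in itertools.product([-1, 1], repeat=len(W)):
    if is_stable(W, np.array(bits, dtype=float)):
        print("stable state:", bits)
```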

SLIDE 10

Hopfield as a CAM (Content Addressable Memory)

  • Start with a totally connected network whose number of nodes equals the number of bits in the training-set patterns
  • Set the weights according to

$$T_{ij} = \sum_{s=1}^{n} (2V_i^s - 1)(2V_j^s - 1)$$

i.e., increment the weight between two nodes when they have the same value in a pattern, else decrement it. This can be viewed as a distributed learning mechanism (see the sketch below).

  • Number of storable patterns ≈ .15N
  • No guarantees; saturation
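A minimal sketch of this weight-setting rule, assuming {0,1} training patterns as on the slide (names are mine):

```python
import numpy as np

def store(patterns):
    """T_ij = sum_s (2*V_i^s - 1)(2*V_j^s - 1), with no self-connections."""
    P = 2 * np.asarray(patterns, dtype=float) - 1   # map {0,1} -> {-1,+1}
    T = P.T @ P                                     # sum of outer products
    np.fill_diagonal(T, 0)
    return T
```

With N-bit patterns this stores only about .15N of them reliably; beyond that, crosstalk and spurious minima set in, which is the saturation the slide warns about.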

SLIDE 11

Limited by lower-order constraints

  • No hidden nodes or higher-order units; all nodes are visible
  • Program as CAM: 0 0 0   0 1 1   1 0 1   1 1 0

However, relaxing auto-association allows a garbled input to return a clean output. Assume two patterns are trained: A -> X and B -> Y. Now enter the example .6A + .4B. A backprop model would output a blend of X and Y; the Hopfield auto-associator returns the clean X. A recall demo follows below.
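A minimal sketch of such a recall, reusing the store() and relax() sketches from the earlier slides; the two stored patterns are made up, and the probe is pattern A with two bits garbled:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], dtype=float)
B = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0], dtype=float)
T = store([A, B])                   # from the CAM slide's sketch
I = np.zeros(len(A))                # zero bias

probe = A.copy()
probe[[1, 4]] = 1 - probe[[1, 4]]   # garble two bits toward B
V = relax(T, I, probe, rng)         # from the update slide's sketch
print("recovered A:", np.array_equal(V, A))   # True
```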

SLIDE 12

Hopfield as a Computation Engine

  • Optimization: Travelling Salesman Problem (TSP)
  • NP-complete
  • "Good" vs. optimal solutions
  • Very fast processing

SLIDE 13

TSP

[Figure: six cities A-F with connecting paths]

Find the shortest cycle with no repeated cities. Example tour encoding (rows = cities, columns = tour positions):

      1  2  3  4  5  6
  A   0  0  0  0  1  0
  B   0  0  1  0  0  0
  C   1  0  0  0  0  0
  D   0  0  0  1  0  0
  E   0  1  0  0  0  0
  F   0  0  0  0  0  1

  • N cities require N² nodes
  • 2^(N²) possible states
  • N! legal paths
  • N!/(2N) distinct legal paths
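As a quick check of these counts on the six-city example (a worked computation, not from the slides): each distinct tour is counted $N$ times by choice of starting city and twice by direction, hence the division by $2N$:

$$N = 6:\quad N^2 = 36 \text{ nodes},\qquad 2^{36} \approx 6.9 \times 10^{10} \text{ states},\qquad 6! = 720 \text{ legal paths},\qquad \frac{720}{2 \cdot 6} = 60 \text{ distinct tours}.$$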

SLIDE 14

Derive an energy equation for the TSP that rewards:

  1. Legal states
  2. Good (short) states

Set the weights accordingly. How would we do it? One standard formulation is sketched below.
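The slides leave the function as an exercise, but the classic formulation from Hopfield and Tank (1985) has this shape. Here $V_{x,i} = 1$ means city $x$ occupies tour position $i$, $d_{xy}$ is the distance between cities $x$ and $y$, position indices wrap modulo $N$, and $A, B, C, D$ are hand-tuned penalty weights:

$$E = \frac{A}{2}\sum_{x}\sum_{i}\sum_{j \neq i} V_{x,i}V_{x,j} + \frac{B}{2}\sum_{i}\sum_{x}\sum_{y \neq x} V_{x,i}V_{y,i} + \frac{C}{2}\Bigl(\sum_{x,i} V_{x,i} - N\Bigr)^{2} + \frac{D}{2}\sum_{x}\sum_{y \neq x}\sum_{i} d_{xy}\,V_{x,i}\bigl(V_{y,i+1} + V_{y,i-1}\bigr)$$

The first term is zero only when no city appears in two positions, the second only when no position holds two cities, and the third only when exactly $N$ entries are on; together they enforce legality. On legal states the fourth term equals the tour length, so the lowest minima are the short tours.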

SLIDE 15

Network Weights
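The slide's weight diagram did not survive extraction. Matching the TSP energy above against the generic form $E = -\frac{1}{2}\sum T_{ij}V_iV_j - \sum I_jV_j$ gives the standard weights between unit $(x,i)$ and unit $(y,j)$ (again following Hopfield and Tank, 1985):

$$T_{(x,i),(y,j)} = -A\,\delta_{xy}(1-\delta_{ij}) - B\,\delta_{ij}(1-\delta_{xy}) - C - D\,d_{xy}\bigl(\delta_{j,i+1} + \delta_{j,i-1}\bigr)$$

with external input $I_{(x,i)} = CN$, where $\delta$ is the Kronecker delta: inhibition within each row and column, global inhibition from the exactly-$N$ constraint, and distance-weighted inhibition between adjacent tour positions.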

SLIDE 16

[Figure-only slide]

SLIDE 17

[Figure-only slide]

SLIDE 18

For N = 30 there are 4.4 × 10^30 distinct legal paths. The network typically finds one of the 10^7 best, thus pruning about 10^23 of the possibilities. How do you handle occasional bad minima?
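Both figures check out (a worked computation, not from the slides):

$$\frac{30!}{2 \cdot 30} \approx \frac{2.65 \times 10^{32}}{60} \approx 4.4 \times 10^{30}, \qquad \frac{4.4 \times 10^{30}}{10^{7}} \approx 4.4 \times 10^{23}.$$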

SLIDE 19

Summary

  • Much current work
  • Saturation and no convergence guarantees
  • For optimization, saturation is moot
  • Many important optimization problems
  • Non-learning, but reasonably intuitive programming; extensions to learning exist
  • Highly parallel, but expensive interconnect
  • Lots of physical-implementation work, e.g. optics