

slide-1
SLIDE 1

Probabilistic mechanisms in human sensorimotor control

Daniel Wolpert, University College London

  • Movement is the only way we have of

– Interacting with the world – Communicating: speech, gestures, writing

  • Sensory, memory and cognitive processes → future motor outputs
  • Q. Why do we have a brain?
  • A. To produce adaptable and complex movements

Sea Squirt
slide-2
SLIDE 2

Why study computational sensorimotor control?

[Figure: pages of Principles of Neural Science (Kandel et al.) plotted against year (1980–2050), with a linear fit, r²=0.96; experimental knowledge far outpaces theory.]

slide-3
SLIDE 3

The complexity of motor control

What to move where vs. actually moving

slide-4
SLIDE 4

Noise makes motor control hard

Noise = randomness. The motor system is noisy.

  • Perceptual noise

– Limits resolution

  • Motor noise

– Limits control

Sensory input is noisy, partial and ambiguous; motor output is noisy and variable.

slide-5
SLIDE 5

David Marr's levels of understanding (1982)

1) the level of computational theory: the goal and logic of the computation performed by the system

2) the level of algorithm and representation, which are used to carry out the computations

3) the level of implementation: the underlying hardware or "machinery" on which the computations are carried out.

slide-6
SLIDE 6

Tutorial Outline

– Sensorimotor integration

  • Static multi-sensory integration
  • Bayesian integration
  • Dynamic sensor fusion & the Kalman filter

– Action evaluation

  • Intrinsic loss function
  • Extrinsic loss functions

– Prediction

  • Internal model and likelihood estimation
  • Sensory filtering

– Control

  • Optimal feed forward control
  • Optimal feedback control

– Motor learning of predictable and stochastic environments

Review papers on www.wolpertlab.com

slide-7
SLIDE 7

Multi-sensory integration

Multiple modalities can provide information about the same quantity

  • e.g. location of hand in space

– Vision – Proprioception

  • Sensory input can be

– Ambiguous – Noisy

  • What are the computations used in

integrating these sources?

slide-8
SLIDE 8

Ideal Observers

Maximum likelihood estimation (MLE)

Consider signals $x_i,\ i \in \{1 \ldots n\}$:

$$x_i = x + \varepsilon_i, \qquad \varepsilon_i = N(0, \sigma_i^2)$$

$$P(x_1, x_2, \ldots, x_n \,|\, x) = \prod_{i=1}^{n} P(x_i \,|\, x)$$

The MLE combines the signals weighted by their inverse variances:

$$\hat{x} = \sum_{i=1}^{n} w_i x_i \quad \text{with} \quad w_i = \frac{\sigma_i^{-2}}{\sum_j \sigma_j^{-2}}$$

$$\sigma_{\hat{x}}^2 = \Big(\sum_j \sigma_j^{-2}\Big)^{-1} < \sigma_k^2 \quad \forall k$$

Two examples of multi-sensory integration
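In code, the MLE combination above is inverse-variance weighting; the cue values and noise levels below are purely illustrative:

```python
import numpy as np

def fuse_mle(estimates, sigmas):
    """Inverse-variance (MLE) fusion of independent Gaussian cues."""
    prec = np.array(sigmas, dtype=float) ** -2   # precisions sigma_i^-2
    w = prec / prec.sum()                        # weights w_i
    x_hat = float(np.dot(w, estimates))          # fused estimate
    var_hat = float(1.0 / prec.sum())            # fused variance
    return x_hat, var_hat

# Illustrative cues about hand position: vision (sigma=1), proprioception (sigma=2)
x_hat, var_hat = fuse_mle([10.0, 12.0], [1.0, 2.0])
# x_hat = 10.4, var_hat = 0.8 (smaller than either cue's own variance)
```

Note that the fused variance is below the best single cue's variance, which is the testable signature of optimal integration.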

slide-9
SLIDE 9

Visual-haptic integration (Ernst & Banks 2002)

Two alternative force choice size judgment

  • Visual
  • Haptic
  • Visual-haptic (with discrepancy)
slide-10
SLIDE 10

Visual-haptic integration

Measure

  • Visual reliability $\sigma_V^2$
  • Haptic reliability $\sigma_H^2$

Predict

  • Combined visual-haptic noise
  • Weighting of each modality

[Figures: probability distributions over size for each cue; psychometric functions over size difference.]

slide-11
SLIDE 11

Visual-haptic integration

$$w_V = \frac{\sigma_V^{-2}}{\sigma_V^{-2} + \sigma_H^{-2}} = \frac{\sigma_H^2}{\sigma_V^2 + \sigma_H^2}, \qquad w_H = 1 - w_V$$

$$\sigma_{\hat{x}}^2 = \left(\sigma_V^{-2} + \sigma_H^{-2}\right)^{-1} = \frac{\sigma_V^2 \sigma_H^2}{\sigma_V^2 + \sigma_H^2}$$

Weights and standard deviations (~thresholds) match the predictions: optimal integration of visual and haptic information in size judgement.

slide-12
SLIDE 12

Visual-proprioceptive integration

Classical claim from prism adaptation “vision dominates proprioception”

slide-13
SLIDE 13

Reliability of proprioception depends on location

(Van Beers, 1998)

Reliability of visual localization is anisotropic

slide-14
SLIDE 14

Integration models with discrepancy

Three candidate models: winner takes all; linear weighting of the means; optimal integration.

$$\hat{\mathbf{x}} = w\,\mathbf{x}_V + (1-w)\,\mathbf{x}_H \qquad\qquad \hat{\mathbf{x}} = A\,\mathbf{x}_V + B\,\mathbf{x}_H$$

$$\Sigma_{PV}^{-1} = \Sigma_P^{-1} + \Sigma_V^{-1}, \qquad \mu_{PV} = \Sigma_{PV}\left(\Sigma_P^{-1}\mu_P + \Sigma_V^{-1}\mu_V\right)$$

slide-15
SLIDE 15

Prisms displace along the azimuth

  • Measure V and P
  • Apply visuomotor discrepancy during right hand reach
  • Measure change in V and P to get relative adaptation

Vision 0.33 Prop 0.67

(Van Beers, Wolpert & Haggard, 2002)

slide-16
SLIDE 16

Visual-proprioceptive discrepancy in depth

Adaptation Vision 0.72 Prop 0.28

Visual adaptation in depth > visual adaptation in azimuth (p<0.01) > Proprioceptive adaptation in depth (p<0.05) Proprioception dominates vision in depth

slide-17
SLIDE 17

Priors and Reverend Thomas Bayes

“I now send you an essay which I have found among the papers of our deceased friend Mr Bayes, and which, in my opinion, has great merit....” Essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 1764.

1702-1761

slide-18
SLIDE 18

Bayes rule

  • For two events A and B:

$$P(A, B) = P(A|B)P(B) = P(B|A)P(A)$$

so

$$\underbrace{P(A|B)}_{\text{posterior}} = \frac{\overbrace{P(B|A)}^{\text{likelihood}}\ \overbrace{P(A)}^{\text{prior}}}{\underbrace{P(B)}_{\text{evidence}}}$$

  • Posterior: belief in state AFTER sensory input, P(state | sensory input)
  • Prior: belief in state BEFORE sensory input, P(state)
  • Neuroscience: A = state of the world, B = sensory input (or A = disease, B = positive blood test)
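A minimal numerical instance of Bayes rule, using the slide's disease/blood-test pairing; the probabilities are made up for illustration:

```python
# Bayes rule: A = disease, B = positive blood test. Illustrative numbers.
p_disease = 0.01            # prior P(A)
p_pos_given_disease = 0.95  # likelihood P(B|A)
p_pos_given_healthy = 0.05  # false-positive rate P(B|not A)

# Evidence P(B) by marginalizing over the two states of the world
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
# Posterior P(A|B) = P(B|A) P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
# despite the accurate test, the posterior stays low because the prior is low
```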
slide-19
SLIDE 19

Bayesian Motor Learning

Optimal estimate (Posterior) = Task statistics (Prior) combined with Sensory feedback (Evidence) via Bayes rule:

  • Prior: not all locations are equally likely
  • Evidence: combine multiple cues to reduce uncertainty

$$P(\text{state}\,|\,\text{sensory input}) \propto P(\text{sensory input}\,|\,\text{state})\,P(\text{state})$$

Real-world tasks have variability, e.g. estimating a ball's bounce location.

Does sensorimotor learning use Bayes rule? If so, is it implemented

  • Implicitly: mapping sensory inputs to motor outputs to minimize error?
  • Explicitly: using separate representations of the statistics of the prior and sensory noise?
slide-20
SLIDE 20

(Körding & Wolpert, Nature, 2004)

Prior

Lateral shift (cm) Probability

0 1 2

Task in which we control 1) prior statistics of the task 2) sensory uncertainty

slide-21
SLIDE 21

(Körding & Wolpert, Nature, 2004)

Prior

Lateral shift (cm) Probability

0 1 2

Task in which we control 1) prior statistics of the task 2) sensory uncertainty

slide-22
SLIDE 22

Sensory Feedback

Likelihood

Generalization Learning

slide-23
SLIDE 23

After 1000 trials

[Figure: learned estimates probed with a 2 cm shift, a 1 cm shift, and no visual feedback; prior distribution shown over lateral shift (cm), 0–2.]
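For a Gaussian prior and Gaussian likelihood the posterior has a closed form, which is the computation this paradigm probes; the numbers below are illustrative, not the experiment's parameters:

```python
def posterior_gaussian(mu_prior, var_prior, x_sensed, var_sense):
    """Posterior mean and variance for a Gaussian prior and Gaussian likelihood."""
    k = var_prior / (var_prior + var_sense)        # weight on the evidence
    mu_post = mu_prior + k * (x_sensed - mu_prior)
    var_post = var_prior * var_sense / (var_prior + var_sense)
    return mu_post, var_post

# Prior over the lateral shift centred on 1 cm; one noisy visual sample at 2 cm.
# With equal variances the estimate lands halfway between prior mean and evidence.
mu_post, var_post = posterior_gaussian(1.0, 0.25, 2.0, 0.25)
```

Larger sensory noise pushes the estimate toward the prior mean, which is the bias the experiment measures.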

slide-24
SLIDE 24

Models

  • Full compensation
  • Bayesian compensation
  • Mapping

[Figures: for each model, predicted average error bias (cm) against imposed lateral shift (cm, 0–2).]
slide-25
SLIDE 25

Results: single subject

Supports model 2: Bayesian

[Figure: average error bias (cm) vs lateral shift (cm), with Full, Bayes and Map model predictions.]

slide-26
SLIDE 26

Results: 10 subjects

Supports model 2: Bayesian

[Figures: average error bias (cm) vs lateral shift (cm) with Full, Bayes and Map predictions; the inferred prior (normalized) over lateral shift matches the imposed prior.]

slide-27
SLIDE 27

Bayesian integration

Subjects can learn

  • multimodal priors
  • priors over forces
  • different priors one after the other

(Körding & Wolpert NIPS 2004, Körding, Ku & Wolpert J. Neurophysiol. 2004)

slide-28
SLIDE 28

Statistics of the world shape our brain: objects, configurations of our body

  • Statistics of visual/auditory stimuli → representation in visual/auditory cortex
  • Statistics of early experience → what can be perceived in later life

(e.g. statistics of spoken language)

slide-29
SLIDE 29

Statistics of action

  • 4 x 6-DOF electromagnetic sensors
  • battery & notebook PC

With limited neural resources, the statistics of motor tasks shape motor performance.

slide-30
SLIDE 30

Phase relationships and symmetry bias

slide-31
SLIDE 31

Multi-sensory integration

  • CNS

– In general the relative weightings of the senses are sensitive to their direction-dependent variability – Represents the distribution of tasks – Estimates its own sensory uncertainty – Combines these two sources in a Bayesian way

  • Supports an optimal integration model
slide-32
SLIDE 32

Loss Functions in the Sensorimotor System

What is the performance criterion (loss, cost, utility, reward)?

  • Often assumed in statistics & machine learning

– that we wish to minimize squared error, for analytic or algorithmic tractability

  • What measure of error does the brain care about?

$$P(\text{state}\,|\,\text{sensory input}) \propto P(\text{sensory input}\,|\,\text{state})\,P(\text{state})$$

The Bayes estimator picks the action minimizing the expected loss under the posterior:

$$E[\text{Loss}] = \int \text{Loss}(\text{state}, \text{action})\, P(\text{state}\,|\,\text{sensory input})\, d\,\text{state}$$

$$\hat{x}_B(s) = \arg\min_{\text{actions}} E[\text{Loss}]$$

[Figure: posterior probability over target position.]
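The arg-min over actions can be carried out numerically for any loss function; a sketch with an assumed Gaussian posterior (all values illustrative):

```python
import numpy as np

# Posterior over states: an arbitrary Gaussian, for illustration only.
states = np.linspace(-3.0, 3.0, 601)
post = np.exp(-0.5 * ((states - 0.5) / 0.8) ** 2)
post /= post.sum()                                # normalized posterior

def expected_loss(action, loss):
    """Posterior-expected loss of choosing `action`."""
    return float(np.sum(loss(states - action) * post))

actions = np.linspace(-3.0, 3.0, 601)
best_sq = min(actions, key=lambda a: expected_loss(a, np.square))  # -> posterior mean
best_abs = min(actions, key=lambda a: expected_loss(a, np.abs))    # -> posterior median
```

For a symmetric posterior both minimizers coincide; they separate once the posterior is skewed, which is what the pea-shooter experiments exploit.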

slide-33
SLIDE 33

Loss function

Loss = f(error). Two scenarios, each with two errors from the target: Scenario 1: errors of 2 and 2; Scenario 2: errors of 1 and 3.

  • Loss = error²: Scenario 1: 4 + 4 = 8; Scenario 2: 1 + 9 = 10
  • Loss = |error|: Scenario 1: 2 + 2 = 4; Scenario 2: 1 + 3 = 4
  • Loss = |error|^½: Scenario 1: 1.4 + 1.4 = 2.8; Scenario 2: 1 + 1.7 = 2.7

Different loss functions rank the same pair of outcomes differently.

slide-34
SLIDE 34

Virtual pea shooter

[Figure: distribution of landing positions, with the mean and the starting location marked; axis: position (cm), −0.2 to 0.2.]

(Körding & Wolpert, PNAS, 2004)

slide-35
SLIDE 35

Probed distributions and optimal means

Possible loss functions and their optimal estimates:

  • Loss = error² → MEAN
  • Maximize hits → MODE
  • Loss = |error| → MEDIAN (robust estimator)

[Figure: skewed distributions (ρ = 0.2, 0.3, 0.5, 0.8) over error (−2 to 2 cm), with mean, mode and median marked.]
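Which statistic each loss function favours can be checked by simulation on a skewed distribution; the Gamma distribution here is an arbitrary stand-in for the probed distributions:

```python
import numpy as np

# Optimal estimates under different losses, for a right-skewed distribution.
# Gamma(2, 1) is illustrative: mode = 1, median ~ 1.68, mean = 2.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=1.0, size=100_000)

opt_mean = samples.mean()            # minimizes squared error
opt_median = np.median(samples)      # minimizes absolute error
hist, edges = np.histogram(samples, bins=200)
opt_mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])  # maximizes hits
# for a right-skewed distribution: mode < median < mean
```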

slide-36
SLIDE 36

Shift of mean against asymmetry (n=8)

Mean squared error with robustness to outliers

slide-37
SLIDE 37

Personalised loss function

$$\text{Loss} = \sum_i |\text{error}_i|^{\alpha}$$

[Figure: fitted loss-function exponent α; axis from 1.0 to 3.9.]

slide-38
SLIDE 38

Bayesian decision theory

Increasing probability of avoiding keeper Increasing probability of being within the net

slide-39
SLIDE 39

Imposed loss function (Trommershäuser et al 2003)

Reward and penalty regions: +100 for hitting the target region, −100 or −500 for the penalty regions.

slide-40
SLIDE 40

Optimal performance with complex regions

slide-41
SLIDE 41

State estimation

  • State of the body/world

– Set of time-varying parameters which together with

  • Dynamic equations of motion
  • Fixed parameters of the system (e.g. mass)

– Allow prediction of the future behaviour

  • Tennis ball

– Position – Velocity – Spin

slide-42
SLIDE 42

State estimation

[Diagram: motor commands and sensory feedback are both corrupted by noise; an observer estimates the state from them.]

slide-43
SLIDE 43

Kalman filter

  • Minimum variance estimator

– Estimate a time-varying state – Can't directly observe the state, only a measurement

Dynamics and measurement:

$$\mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t + \mathbf{w}_t$$
$$\mathbf{y}_t = C\mathbf{x}_t + \mathbf{v}_t$$

State estimate update:

$$\hat{\mathbf{x}}_{t+1} = A\hat{\mathbf{x}}_t + B\mathbf{u}_t + K_t\,[\mathbf{y}_t - C\hat{\mathbf{x}}_t]$$
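A scalar Kalman filter sketch of these equations, written in the common predict/update form (the slide's single-line update folds both steps together; all constants here are illustrative):

```python
import numpy as np

# Scalar Kalman filter for x_{t+1} = a x_t + b u_t + w_t, y_t = c x_t + v_t.
def kalman_step(x_hat, P, u, y, a, b, c, q, r):
    x_pred = a * x_hat + b * u                 # predict with the forward model
    P_pred = a * P * a + q
    K = P_pred * c / (c * P_pred * c + r)      # Kalman gain
    x_new = x_pred + K * (y - c * x_pred)      # correct with the innovation
    P_new = (1 - K * c) * P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
a, b, c, q, r = 1.0, 1.0, 1.0, 0.01, 0.25
x, x_hat, P = 0.0, 0.0, 1.0
for _ in range(200):
    u = 0.1                                            # motor command
    x = a * x + b * u + rng.normal(0.0, np.sqrt(q))    # true state
    y = c * x + rng.normal(0.0, np.sqrt(r))            # noisy measurement
    x_hat, P = kalman_step(x_hat, P, u, y, a, b, c, q, r)
# posterior variance settles well below the measurement noise variance
```

The steady-state gain blends the forward-model prediction (FF) with sensory feedback (FB), exactly the mixture the next slide describes.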

slide-44
SLIDE 44

State estimation

Forward Dynamic Model:

$$\hat{\mathbf{x}}_{t+1} = A\hat{\mathbf{x}}_t + B\mathbf{u}_t + K_t\,[\mathbf{y}_t - C\hat{\mathbf{x}}_t]$$

slide-45
SLIDE 45

Kalman Filter

  • Optimal state estimation is a mixture

– Predictive estimation (FF) – Sensory feedback (FB)

slide-46
SLIDE 46

Eye position

Location of an object is computed from retinal location and gaze direction. A forward model (FM) of the motor command provides the eye-position estimate, so percept and actual position can differ.

slide-47
SLIDE 47

Sensory likelihood

(Wolpert & Kawato, Neural Networks 1998 Haruno, Wolpert, Kawato, Neural Computation 2001)

$$P(\text{state}\,|\,\text{sensory input}) \propto P(\text{sensory input}\,|\,\text{state})\,P(\text{state})$$

slide-48
SLIDE 48

Sensory prediction

Our sensors report

  • Afferent information:

changes in outside world

  • Re-afferent information:

changes we cause

Total sensory input = internal source (re-afference) + external source (afference)

slide-49
SLIDE 49

Tickling

Self-administered tactile stimuli rated as less ticklish than externally administered tactile stimuli. (Weiskrantz et al, 1971)

slide-50
SLIDE 50

Does prediction underlie tactile cancellation in tickle?

[Figure: tickle rating rank for self-produced vs robot-produced tactile stimuli; robot-produced rated more ticklish, P<0.001.]

Gain control or precise spatio-temporal prediction?

slide-51
SLIDE 51

Spatio-temporal prediction

[Figures: tickle rating rank rises with the delay between movement and stimulus (0, 100, 200, 300 ms; robot-produced highest) and with trajectory rotation (0, 30, 60, 90 degrees; external highest); P<0.001 for both.]

(Blakemore, Frith & Wolpert. J. Cog. Neurosci. 1999)

slide-52
SLIDE 52

The escalation of force

slide-53
SLIDE 53

Tit-for-tat

Force escalates under rules designed to achieve parity: Increase by ~40% per turn

(Shergill, Bays, Frith & Wolpert, Science, 2003)
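The escalation can be reproduced with a toy model: if self-generated force is perceived attenuated, then matching percepts inflates the actual force every turn. The attenuation factor below is an assumption, not a value from the paper:

```python
# Tit-for-tat escalation (sketch). Each player tries to reproduce the force
# they just felt, but perceives self-generated force attenuated by `atten`
# (0.7 is an illustrative value). Matching percepts then escalates the
# actual force by a factor of 1/atten (~1.43, i.e. roughly 40%) per turn.
atten = 0.7
forces = [1.0]                       # force applied on the first turn
for _ in range(6):
    felt = forces[-1]                # force received from the other player
    produced = felt / atten          # press until the self-percept matches
    forces.append(produced)
growth = forces[-1] / forces[-2]     # escalation factor per turn
```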

slide-54
SLIDE 54

Perception of force

70% overestimate in force

slide-55
SLIDE 55

Perception of force

slide-56
SLIDE 56

Labeling of movements

Large sensory discrepancy

slide-57
SLIDE 57

Defective prediction in patients with schizophrenia

  • The CNS predicts sensory

consequences

  • Sensory cancellation in

Force production

  • Defects may be related to

delusions of control

Patients Controls

slide-58
SLIDE 58

Motor Learning

Required if:

  • the organism's environment, body or task change
  • changes are unpredictable so cannot be pre-specified
  • we want to master socially conventional skills, e.g. writing

Trade off between:

– innate behaviour (evolution)

  • hard wired
  • fast
  • resistant to change

– learning (intra-life)

  • adaptable
  • slow
  • malleable
slide-59
SLIDE 59

Motor Learning

Actual behaviour

slide-60
SLIDE 60

Predicted outcome can be compared to actual outcome to generate an error

Supervised learning is good for forward models

slide-61
SLIDE 61

Weakly electric fish (Bell 2001)

Produce electric pulses

  • to recognize objects in the dark or in murky habitats
  • for social communication

The fish electric organ is composed of electrocytes,

  • modified muscle cells producing action potentials
  • EOD = electric organ discharge
  • Amplitude of the signal is between 30 mV and 7 V
  • Driven by a pacemaker in the medulla, which triggers each discharge
slide-62
SLIDE 62

Sensory filtering

Skin receptors are derived from the lateral line system

Removal of expected or predicted sensory input is one of the very general functions of sensory processing. Predictive/associative mechanisms for changing environments

slide-63
SLIDE 63

Primary afferents terminate in cerebellar-like structures

Primary afferents terminate on principal cells either directly or via interneurons

slide-64
SLIDE 64

Block EOD discharge with curare

Specific for Timing (120ms), Polarity, Amplitude & Spatial distribution

slide-65
SLIDE 65

Proprioceptive Prediction

Tail bend affects sensory feedback; passive bend phase-locked to the stimulus.

slide-66
SLIDE 66

Learning rule

Changes in synaptic strength require a principal cell spike discharge. The change depends on the timing of the EPSP relative to the spike (T2−T1): anti-Hebbian learning.

  • A forward model can be learned through self-supervised learning
  • Anti-Hebbian rule in the cerebellar-like structure of the electric fish
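A toy version of this anti-Hebbian cancellation: weights driven by a corollary-discharge signal learn a "negative image" of the predictable reafference, so only unpredicted input drives the cell. All parameters are illustrative:

```python
import numpy as np

# Anti-Hebbian learning of a negative image (sketch, illustrative parameters).
rng = np.random.default_rng(0)
n = 50
cd = rng.random(n)                  # corollary discharge activity pattern
w_true = rng.random(n)              # mapping from discharge to reafference
w = np.zeros(n)                     # learned negative-image weights
eta = 0.02                          # learning rate
for _ in range(2000):
    reafference = w_true @ cd       # predictable sensory input
    out = reafference - w @ cd      # principal cell output after subtraction
    w += eta * out * cd             # weaken the response wherever the cell still fires
residual = abs(w_true @ cd - w @ cd)
# the response to fully predictable input is almost completely cancelled
```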
slide-67
SLIDE 67

Motor planning (what is the goal of motor control)

Levels of specification: duration, hand trajectory, joint motion, muscle activations

  • Tasks are usually specified at a symbolic level
  • Motor system works at a detailed level, specifying muscle activations
  • Gap between high and low-level specification
  • Any high level task can be achieved in infinitely many low-level ways
slide-68
SLIDE 68

Eye saccades and arm movements

Motor evolution/learning results in stereotypy

Stereotypy between repetitions and individuals

  • Main sequence
  • Donders' law
  • Listing's law
  • 2/3 power law
  • Fitts' law
slide-69
SLIDE 69

Models

HOW models

– Neurophysiological or black box models – Explain roles of brain areas/processing units in generating behavior

WHY models

– Why did the How system get to be the way it is? – Unifying principles of movement production

  • Evolutionary/Learning

– Assume few neural constraints

slide-70
SLIDE 70

The Assumption of Optimality

Movements have evolved to maximize fitness

– improve through evolution/learning – every possible movement which can achieve a task has a cost – we select movement with the lowest cost

Overall cost = cost1 + cost2 + cost3 ….

slide-71
SLIDE 71

Optimality principles

  • Parsimonious performance criteria → elaborate predictions

  • Requires

– Admissible control laws – Musculoskeletal & world model – Scalar quantitative definition of task performance – usually time integral of f(state, action)

slide-72
SLIDE 72

Open-loop

  • What is the cost

– Occasionally task specifies cost

  • Jump as high as possible
  • Exert maximal force

– Usually task does not specify the cost directly

  • Locomotion well modelled by energy minimization
  • Energy alone is not good for eyes or arms
slide-73
SLIDE 73

What is the cost?

Saccadic eye movements

  • little vision over 4 deg/sec
  • frequent: 2-3 per sec
  • deprive us of vision for ~90 minutes/day

⇒ Minimize time

slide-74
SLIDE 74

Arm movements

Movements are smooth – Minimum jerk (rate of change of acceleration) of the hand (Flash & Hogan 1985)

$$\text{Cost} = \int_0^T \dddot{x}(t)^2 + \dddot{y}(t)^2 \, dt$$

The minimum-jerk trajectory is, with $\tau = t/T$,

$$x(t) = x_0 + (x_T - x_0)\left(10\tau^3 - 15\tau^4 + 6\tau^5\right)$$

and similarly for $y(t)$.
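The minimum-jerk profile can be evaluated directly:

```python
import numpy as np

# Minimum-jerk position profile between x0 and xT in time T (Flash & Hogan 1985).
def min_jerk(x0, xT, T, t):
    tau = t / T
    return x0 + (xT - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

t = np.linspace(0.0, 1.0, 1001)
x = min_jerk(0.0, 10.0, 1.0, t)    # 10 cm movement in 1 s (illustrative)
v = np.gradient(x, t)              # numerical velocity
# starts and ends at rest, with a bell-shaped speed profile peaking mid-movement
```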

slide-75
SLIDE 75

Smoothness

  • Minimum torque change (Uno et al, 1989)

With shoulder torque $\tau_s$ and elbow torque $\tau_e$:

$$\text{Cost} = \int_0^T \dot{\tau}_s(t)^2 + \dot{\tau}_e(t)^2 \, dt$$

slide-76
SLIDE 76

The ideal cost for goal-directed movement

  • Makes sense - some evolutionary/learning advantage
  • Simple for CNS to measure
  • Generalizes to different systems

– e.g. eye, head, arm

  • Generalizes to different tasks

– e.g. pointing, grasping, drawing

→ Reproduces & predicts behavior

slide-77
SLIDE 77

Motor command noise

Motor system noise ⇒ position error; minimized by rapidity

slide-78
SLIDE 78

Fundamental constraint=Signal-dependent noise

  • Signal-dependent noise:

– Constant coefficient of variation – SD (motor command) ~ Mean (motor command)

  • Evidence from

– Experiments: SD (Force) ~ Mean (Force) – Modelling

  • Spikes drawn from a renewal process
  • Recruitment properties of motor units

(Jones, Hamilton & Wolpert , J. Neurophysiol., 2002)
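Signal-dependent noise, i.e. a constant coefficient of variation, in simulation (the cv value is an assumption for illustration):

```python
import numpy as np

# Signal-dependent noise: the SD of produced force grows with its mean,
# giving a roughly constant coefficient of variation across force levels.
rng = np.random.default_rng(2)
cv = 0.1
means = np.array([5.0, 10.0, 20.0, 40.0])
samples = rng.normal(means, cv * means, size=(200_000, means.size))
sds = samples.std(axis=0)
ratio = sds / means        # ~cv for every force level
```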

slide-79
SLIDE 79

Task optimization in the presence of SDN

Given SDN, optimizing the task ≡ optimizing f(statistics).

An average motor command ⇒ a probability distribution (statistics) of movement.

Controlling the statistics of action.

slide-80
SLIDE 80

Finding optimal trajectories for linear systems

Signal-dependent noise: $\sigma_{u(t)}^2 = k^2 u(t)^2$

For a system with impulse response $p(t)$, the mean endpoint and its derivatives are linear in the command $u$:

$$E[x(M)] = \int u(\tau)\, p(M - \tau)\, d\tau = A$$
$$E[x^{(n)}(M)] = \int u(\tau)\, p^{(n)}(M - \tau)\, d\tau$$

while signal-dependent noise makes the positional variance quadratic in $u$:

$$\mathrm{Var}[x(T)] = \int \mathrm{Var}[u(\tau)]\, p(T-\tau)^2\, d\tau = k^2 \int u(\tau)^2\, p(T-\tau)^2\, d\tau = \int w(\tau)\, u(\tau)^2\, d\tau$$

Linear constraints with quadratic cost: can use quadratic programming or isoperimetric optimization.
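The discretized version of this optimization has a closed-form Lagrangian solution; the first-order impulse response and all constants below are assumptions for illustration, not the actual saccade model:

```python
import numpy as np

# Minimum-variance command under signal-dependent noise (sketch).
# Discretizing: minimize u^T W u (endpoint variance) subject to p^T u = A
# (required mean endpoint). Solution: u* = A W^{-1} p / (p^T W^{-1} p).
n, dt = 100, 0.01
t = np.arange(n) * dt
T = n * dt
p = np.exp(-(T - t) / 0.3) * dt       # discretized p(T - tau), assumed 1st-order
k = 0.5                               # noise scale: SD(u) = k u
W = np.diag(k**2 * p**2)              # Var[x(T)] = sum_i k^2 p_i^2 u_i^2
A = 1.0                               # required mean endpoint

Winv_p = np.linalg.solve(W, p)
u = A * Winv_p / (p @ Winv_p)         # optimal command profile
endpoint = p @ u                      # achieved mean endpoint
var_opt = u @ W @ u                   # achieved endpoint variance
```

For this diagonal W the optimal variance works out to A²k²/n, far below what a single impulsive command would produce.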

slide-81
SLIDE 81

Saccade predictions

SDN

Motor command Jerk 3rd order linear system

slide-82
SLIDE 82

Prediction: very slow saccade

22 degree saccade in 270 ms (normally ~70 ms)

[Figure: eye position (degrees) vs time (ms).]

slide-83
SLIDE 83

Head free saccade

[Figures: eye, head and gaze position (degrees) vs time (ms), model and data. Model time constants Τ1=0.3, Τ2=0.3 and Τ1=0.15, Τ2=0.08; free parameter, eye:head noise ratio.]

(Tomlinson & Bahra, 1986)

slide-84
SLIDE 84

Coordination: Head and eye

For a fixed duration T, an amplitude-A movement with a single effector has Var(A) = kA². Splitting the amplitude between eye and head:

  • Eye only: Var(A) = kA²
  • Head only: Var(A) = kA²
  • Eye & head: Var(A) = k(A/2)² + k(A/2)² = kA²/2

slide-85
SLIDE 85

Movement extent vs. target eccentricity

[Figures: eye and head contributions to gaze amplitude, and angular deviation at acquisition, plotted against gaze amplitude.]

slide-86
SLIDE 86

Arm movements

  • Drawing (⅔ power law): f = path error
  • Obstacle avoidance: f = limit on probability of collision
  • Smoothness: non-smooth movement ⇒ abrupt change in velocity ⇒ given a low-pass system ⇒ large motor command ⇒ increased noise. Smoothness ⇒ accuracy.

Feedforward control

  • Ignores role of feedback
  • Generates desired movements
  • Cannot model trial-to-trial variability
slide-87
SLIDE 87

Optimal feedback control (Todorov 2004)

– Optimize performance over all possible feedback control laws – Treats feedback law as fully programmable

  • command=f(state)
  • Models based on reinforcement learning: optimal cost-to-go functions
  • Requires a Bayesian state estimator
slide-88
SLIDE 88

Minimal intervention principle

  • Do not correct deviations

from average behaviour unless they affect task performance

– Acting is expensive

  • energetically
  • noise

– Leads to

  • uncontrolled manifold
  • synergies


slide-89
SLIDE 89

Optimal control with SDN

  • Biologically plausible theoretical underpinning

for both eye, head, arm movements

  • No need to construct highly derived signals to

estimate the cost of the movement

  • Controlling statistics in the presence of noise
slide-90
SLIDE 90

What is being adapted?

  • Possible to break down the control process:
  • Visuomotor rearrangements
  • Dynamic perturbations
  • [timing, coordination , sequencing]
  • Internal models capture the relationship

between sensory and motor variables

slide-91
SLIDE 91

Altering dynamics

slide-92
SLIDE 92

Altering Kinematics

slide-93
SLIDE 93

Representation of transformations

Three possible representations of the transformation from x to θ:

  • Look-up table: high storage, high flexibility, low generalization
  • Non-physical parameters θ = f(x,ω)
  • Physical parameters θ = acos(x/L): low storage, low flexibility, high generalization

[Diagram: computing θ from x and limb length L; look-up table of (x, θ) pairs.]

slide-94
SLIDE 94

Generalization paradigm

  • Baseline

– Assess performance over domain of interest – (e.g. workspace)

  • Exposure

– Perturbation: New task – Limitation: Limit the exposure to a subdomain

  • Test

– Re-assess performance over the entire domain of interest

slide-95
SLIDE 95

Difficulty of learning

(Cunningham 1989, JEPP-HPP)

  • Rotations of the visual field from 0–180 degrees
  • Difficulty increases from 0 to 90 and decreases from 120 to 180
  • What is the natural parameterization?
slide-96
SLIDE 96

Viscous curl field

(Shadmehr & Mussa-Ivaldi 1994, J. Neurosci.)

slide-97
SLIDE 97

Representation from generalization: Dynamic

1. Test: movements over entire workspace
2. Learning: right-hand workspace, viscous field
3. Test: movements over left workspace

Two possible interpretations: force = f(hand velocity) or torque = f(joint velocity)

Joint-based learning of dynamics

(Shadmehr & Mussa-Ivaldi 1994, J. Neurosci.)

Left hand workspace Before After with Cartesian field

slide-98
SLIDE 98

Visuomotor coordinates

z y x (x,y,z) Cartesian z y x Spherical Polar θ

φ

r

(r,φ,θ)

α1 α2 α3

(α1,α2,α3) Joint angles

slide-99
SLIDE 99

Representation- Visuomotor

1. Test: Pointing accuracy to a set of targets 2. Learning

– visuomotor remapping – feedback only at one target

3. Test: Pointing accuracy to a set of targets

[Figures: pointing errors at target planes y = 16.2–36.2 cm, z = −43.6 to −27.6 cm; x axis in cm.]

Predictions of eye-centred spherical coordinates (Vetter et al, J. Neurophys, 1999)

  • Generalization paradigms can be used to assess

– Extent of generalization – Coordinate system of transformations

slide-100
SLIDE 100

Altering dynamics: Viscous curl field

Before Early with force Late with force Removal of force

slide-101
SLIDE 101

Stiffness control

Muscle activation level sets the spring constant k (or resting length) of the muscle

Equilibrium point

slide-102
SLIDE 102

Equilibrium point control

  • Set of muscle activations (k1,k2,k3…) defines a posture
  • CNS learns a spatial mapping

– e.g. hand positions → muscle activations: (x,y,z) → (k1,k2,k3…)

slide-103
SLIDE 103

Equilibrium control

The hand stiffness can vary with muscle activation levels.

Low stiffness High stiffness

slide-104
SLIDE 104

Controlling stiffness

Burdet et al (Nature, 2001)

slide-105
SLIDE 105

Stiffness ellipses

  • Internal models to learn stable tasks
  • Stiffness for unpredictable tasks
slide-106
SLIDE 106

Summary

– Sensorimotor integration

  • Static multi-sensory integration
  • Bayesian integration
  • Dynamic sensor fusion & the Kalman filter

– Action evaluation

  • Intrinsic loss function
  • Extrinsic loss functions

– Prediction

  • Internal model and likelihood estimation
  • Sensory filtering

– Control

  • Optimal feed forward control
  • Optimal feedback control

– Motor learning of predictable and stochastic environments

Wolpert-lab papers on www.wolpertlab.com

slide-107
SLIDE 107

References

  • Bell, C. (2001). Memory-based expectations in electrosensory systems. Current Opinion in Neurobiology 11: 481-487.
  • Burdet, E., Osu, R., et al. (2001). The central nervous system stabilizes unstable dynamics by learning optimal impedance. Nature 414(6862): 446-449.
  • Cunningham, H. A. (1989). Aiming error under transformed spatial maps suggests a structure for visual-motor maps. J. Exp. Psychol. 15(3): 493-506.
  • Ernst, M. O. and Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415(6870): 429-433.
  • Flash, T. and Hogan, N. (1985). The co-ordination of arm movements: An experimentally confirmed mathematical model. J. Neurosci. 5: 1688-1703.
  • Shadmehr, R. and Mussa-Ivaldi, F. (1994). Adaptive representation of dynamics during learning of a motor task. J. Neurosci. 14(5): 3208-3224.
  • Todorov, E. (2004). Optimality principles in sensorimotor control. Nat. Neurosci. 7(9): 907-915.
  • Trommershauser, J., Maloney, L. T., et al. (2003). Statistical decision theory and the selection of rapid, goal-directed movements. J. Opt. Soc. Am. A 20(7): 1419-1433.
  • Uno, Y., Kawato, M., et al. (1989). Formation and control of optimal trajectories in human multijoint arm movements: Minimum torque-change model. Biological Cybernetics 61: 89-101.
  • van Beers, R. J., Sittig, A. C., et al. (1998). The precision of proprioceptive position sense. Exp. Brain Res. 122(4): 367-377.
  • Weiskrantz, L., Elliott, J., et al. (1971). Preliminary observations on tickling oneself. Nature 230(5296): 598-599.