Hidden Markov processes can explain complex sequencing rules of birdsong



  1. Hidden Markov processes can explain complex sequencing rules of birdsong: A statistical analysis and neural network modeling Kentaro Katahira 1,2,3 , Kenta Suzuki 3,4 , Kazuo Okanoya 1,2,3 , and Masato Okada 1,2,3 1. JST ERATO, Okanoya Emotional Information Project, 2. The University of Tokyo, 3. RIKEN Brain Science Institute, 4. Saitama University

  2. Motivation - What are the neural substrates for sequential behavior? Sequential behavior: speech, playing music, dancing. (Perception / Generation / Learning)

  3. Motivation - What are the neural substrates for sequential behavior? Birdsong: a sequence of discrete syllables (a, b, c, d). [Spectrogram with syllables labeled.] (Perception / Generation / Learning)

  4. Outline 1. Introduction – Neural substrates of birdsong – Neural network models 2. Statistics of birdsong – Higher-order history dependency 3. Statistical models for birdsong 4. Discussion – Neural implementation – Future directions

  5. Neural activity pattern during singing (Zebra finch) Hahnloser, Kozhevnikov and Fee, Nature, 2002

  6. Feedforward chain hypothesis • Spikes propagate along a feedforward chain network. Li & Greenside, Phys. Rev. E, 2006; Jin, Ramazanoglu, & Seung, J. Comput. Neurosci., 2007. Experimental evidence: Long & Fee, Nature, 2008; Long, Jin & Fee, Nature, 2010. This mechanism is suitable for fixed sequences, but what about variable sequences?
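The feedforward-chain idea can be sketched in a few lines: with purely feedforward weights, a single packet of activity sweeps down the chain one group per time step, which is exactly why this architecture yields a fixed sequence. This is a toy sketch with assumed parameters, not the published Li & Greenside or Jin et al. models.

```python
# Toy feedforward chain: group i projects only to group i + 1, so one
# packet of activity propagates down the chain, one group per time step.
# All parameters here are illustrative assumptions, not the published models.
n_groups = 6
W = [[1.0 if j == i + 1 else 0.0 for j in range(n_groups)]  # feedforward weights
     for i in range(n_groups)]
theta = 0.5                                  # firing threshold
state = [1.0] + [0.0] * (n_groups - 1)       # group 0 fires at t = 0

firing_order = []
for t in range(n_groups):
    firing_order.append(state.index(max(state)))
    drive = [sum(state[i] * W[i][j] for i in range(n_groups))
             for j in range(n_groups)]
    state = [1.0 if d > theta else 0.0 for d in drive]
```

Activity visits the groups strictly in order (0, 1, 2, …), illustrating why a pure feedforward chain can only produce a fixed sequence.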

  7. Song of Bengalese finch - Variable sequences including branching points

  8. Branching-chain hypothesis • Mutual inhibition between branching chains (Jin, Phys Rev E, 2009). [Schematic: neuron index vs. time; chain a branches into chains b and c, which inhibit each other.]
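The winner-take-all decision at a branch point can be sketched as follows (illustrative parameters, not Jin's spiking model): both candidate chains receive the same drive, small noise breaks the tie, and mutual inhibition amplifies the difference so exactly one chain wins.

```python
import random

def branch_choice(drive_b, drive_c, inhibition=2.0, noise=0.1, rng=random):
    """One branch decision: equal drive plus noise, then one round of
    mutual inhibition; the chain left with higher activity wins.
    Parameter values are illustrative assumptions."""
    b = drive_b + rng.gauss(0, noise)
    c = drive_c + rng.gauss(0, noise)
    b, c = b - inhibition * c, c - inhibition * b   # mutual suppression
    return "B" if b > c else "C"

rng = random.Random(42)
choices = [branch_choice(1.0, 1.0, rng=rng) for _ in range(1000)]
# With equal drive both branches occur; biasing the drive biases the choice.
```

Because the inhibition step amplifies whichever chain started ahead, the decision is all-or-none even though the underlying drive is symmetric.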

  9. Limitation of branching-chain model • The transition is a simple Markov process: the currently active chain depends only on the last active chain; earlier chains do not affect it. [Diagram: Chains A–E.] Question: Are syllable sequences of Bengalese finch songs Markov processes?

  10. Outline 1. Introduction – Neural substrates of birdsong – Neural network models 2. Statistics of birdsong – Higher-order history dependency 3. Statistical models for birdsong 4. Discussion – Neural implementation – Future directions

  11. Test of (first-order) Markov assumption Null hypothesis: the transition probability to the next syllable does not depend on the syllable preceding the current one (Markov assumption). Overall transition probabilities from "b": c 0.495, d 0.408, e 0.097. For the case where "a" precedes "b": c 0.385, d 0.422, e 0.193. χ² goodness-of-fit test: significant difference → second-order history dependency.
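The logic of this test can be sketched as follows (a simplified illustration, not the authors' analysis code): compare the next-syllable counts after a two-syllable context against the counts expected from the single-syllable (first-order) transition probabilities.

```python
from collections import Counter

def chi2_second_order(seq, first, second):
    """Chi-square statistic testing whether the syllable after `second`
    depends on whether `first` came immediately before it."""
    # next-syllable counts after `second` alone (first-order null)
    after1 = Counter(seq[i + 1] for i in range(len(seq) - 1)
                     if seq[i] == second)
    # next-syllable counts after the two-syllable context (first, second)
    after2 = Counter(seq[i + 2] for i in range(len(seq) - 2)
                     if seq[i] == first and seq[i + 1] == second)
    n1, n2 = sum(after1.values()), sum(after2.values())
    stat = 0.0
    for syll, c1 in after1.items():
        expected = n2 * c1 / n1          # expectation under the Markov null
        observed = after2.get(syll, 0)
        stat += (observed - expected) ** 2 / expected
    return stat

# Synthetic song with a built-in second-order rule: after "ab" comes "c",
# after "db" comes "e".
song = list("abcdbe") * 50
stat = chi2_second_order(song, "a", "b")
# Here df = 1 (two possible next syllables); the chi-square critical value
# at p = 0.01 is about 6.63, so the Markov null is strongly rejected.
```

The same computation, run per branching syllable with a Bonferroni-corrected threshold, is the shape of the test the slide describes.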

  12. Result We found more than one significant second-order history dependency in all 16 birds (p < 0.01 with Bonferroni correction). Example: χ²(2) = 187.49, p < 0.0001. [Transition diagram with second-order probabilities for syllables a–d.]

  13. Then… is the branching-chain model incorrect? [Diagram: chains A, B, C with mutual inhibition.]

  14. Two possible mechanisms for history dependency Hypothesis 1: Chain transitions with higher-order dependency (Chains 1–4, mapping onto syllables a, b, c, d). Hypothesis 2: Many-to-one mapping from chains to syllables (Chains 1–5 map onto the four syllables a, b, c, d) (Katahira, Okanoya and Okada, Biol. Cybern. 2007). [Spectrogram: frequency (Hz) vs. time (sec).]
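Hypothesis 2 is easy to demonstrate with a toy generator (hypothetical state names and transitions, for illustration only): a strictly first-order hidden chain in which two states emit the same syllable "b" produces syllable sequences with apparent second-order structure.

```python
import random

# First-order hidden chain; states "b1" and "b2" both emit syllable "b".
TRANS = {"a": ["b1"], "b1": ["c"], "c": ["a", "d"],
         "d": ["b2"], "b2": ["e"], "e": ["a", "d"]}
EMIT = {"a": "a", "b1": "b", "b2": "b", "c": "c", "d": "d", "e": "e"}

def sing(n, seed=0):
    """Generate n syllables; the next state depends ONLY on the current
    hidden state (first-order), yet the emitted syllables look second-order."""
    rng, state, syllables = random.Random(seed), "a", []
    for _ in range(n):
        syllables.append(EMIT[state])
        state = rng.choice(TRANS[state])
    return syllables

song = sing(600)
# At the syllable level the sequence looks second-order: "b" after "a" is
# always followed by "c", while "b" after "d" is always followed by "e".
```

The hidden dynamics never look more than one step back; the higher-order statistics arise purely from the many-to-one emission mapping.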

  15. However… • Neural activity data from HVC of singing Bengalese finches are not available (such recordings exist for the zebra finch, but not the Bengalese finch). [Spectrogram: frequency (Hz) vs. time (sec).] • We therefore examined the two hypotheses from song data alone, using statistical models.

  16. Outline 1. Introduction – Neural substrates of birdsong – Neural network models 2. Statistics of birdsong – Higher-order history dependency 3. Statistical models for birdsong 4. Discussion – Neural implementation – Future directions

  17. Feature extraction - Auditory features: spectral entropy, duration, mean frequency (cf. Tchernichovski et al. 2000). [Scatter plot of syllables in feature space: spectral entropy (z-score) vs. duration (z-score).]
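For concreteness, here is one common way to compute two of these features from a waveform snippet. This is a generic sketch; the paper follows Tchernichovski et al.'s definitions, which may differ in detail.

```python
import cmath, math

def power_spectrum(x):
    """Naive DFT power spectrum over positive-frequency bins."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(1, n // 2)]

def spectral_entropy(x):
    """Entropy of the normalized power spectrum, scaled to [0, 1]:
    near 0 for a pure tone, near 1 for white noise."""
    p = power_spectrum(x)
    total = sum(p)
    probs = [v / total for v in p if v > 0]
    return -sum(q * math.log(q) for q in probs) / math.log(len(p))

def mean_frequency(x, fs):
    """Power-weighted average frequency in Hz (fs = sampling rate)."""
    p = power_spectrum(x)
    freqs = [(k + 1) * fs / len(x) for k in range(len(p))]
    return sum(f * v for f, v in zip(freqs, p)) / sum(p)

# A pure 1000 Hz tone sampled at 8000 Hz: low entropy, mean freq ~1000 Hz.
tone = [math.cos(2 * math.pi * 1000 * t / 8000) for t in range(64)]
```

Duration, the third feature on the slide, is simply the length of the segmented syllable and needs no spectral computation.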

  18. Hidden Markov Model (HMM) Hidden states (State 1, State 2, State 3, State 4, …) with transition probabilities a_ij (a_11, a_12, a_22, a_23, a_24, a_33, a_41, …); each hidden state generates an observable output. [State-transition diagram: hidden layer vs. observable layer.]

  19. State transition dynamics in HMM 1st-order HMM: p(z_t | z_{t-1}). 2nd-order HMM: p(z_t | z_{t-1}, z_{t-2}). 0th-order HMM (Gaussian mixture): p(z_t), independent of history.
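The three transition structures can be sketched with a single sampler whose lookup key is the last L states (the transition tables below are toy examples, purely illustrative):

```python
import random

def sample_states(n, trans, order, start, seed=0):
    """Sample a state sequence; the next state depends on the last
    `order` states (order 0: no history; 1: Markov; 2: second order)."""
    rng = random.Random(seed)
    states = list(start)
    while len(states) < n:
        key = tuple(states[-order:]) if order else ()
        dist = trans[key]                          # {next_state: probability}
        states.append(rng.choices(list(dist), list(dist.values()))[0])
    return states

# Second-order table: after (1, 2) go to 3, but after (4, 2) go to 5.
trans2 = {(3, 1): {2: 1.0}, (5, 1): {2: 1.0}, (3, 4): {2: 1.0},
          (5, 4): {2: 1.0}, (1, 2): {3: 1.0}, (4, 2): {5: 1.0},
          (2, 3): {1: 0.5, 4: 0.5}, (2, 5): {1: 0.5, 4: 0.5}}
path = sample_states(200, trans2, order=2, start=(3, 1))
```

A 1st-order model would use single-state keys, and a 0th-order model the empty key, so the same sampler covers all three cases on the slide.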

  20. Relationship between the two hypotheses and statistical models Hypothesis 1: Chain transitions with higher-order dependency (Chains 1–4 → syllables a, b, c, d) → 2nd-order HMM. Hypothesis 2: Many-to-one mapping from chains to syllables (Chains 1–5 → syllables a, b, c, d) → 1st-order HMM.

  21. Bayesian model selection Given data X (auditory features) and model structure m: • L: Markov order (0, 1, 2) • K: the number of hidden states. Model posterior: p(m | X) ∝ p(X | m) p(m). Marginal likelihood: p(X | m) = ∫ p(X | θ, m) p(θ | m) dθ (θ: model parameter set) → difficult to compute! Approximation: a lower bound (variational free energy), which can be computed by the variational Bayes method.
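The paper scores models by a variational lower bound on the log marginal likelihood. As a much cruder stand-in, one can compare Markov orders of a visible symbol sequence with BIC, which is also an asymptotic approximation to the log marginal likelihood (illustrative code, not the variational Bayes procedure used in the paper):

```python
import math
from collections import Counter

def markov_bic(seq, order, alphabet_size):
    """BIC score (higher is better) for an order-`order` Markov model:
    maximized log-likelihood minus a complexity penalty."""
    ctx_counts, pair_counts = Counter(), Counter()
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])
        ctx_counts[ctx] += 1
        pair_counts[ctx, seq[i]] += 1
    loglik = sum(c * math.log(c / ctx_counts[ctx])
                 for (ctx, s), c in pair_counts.items())
    n_params = len(ctx_counts) * (alphabet_size - 1)
    n = len(seq) - order
    return loglik - 0.5 * n_params * math.log(n)

# On a sequence with a genuine second-order rule, the order-2 model wins.
song = list("abcdbe") * 50
scores = {L: markov_bic(song, L, alphabet_size=5) for L in (0, 1, 2)}
```

Like the variational bound, BIC trades fit against complexity, so it can prefer a richer model only when the data genuinely support it.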

  22. Result – model selection (one bird) [Plot: lower bound on log-marginal likelihood vs. number of states K, for 0th-, 1st-, and 2nd-order HMMs; higher is better.] • With a small number of states → 2nd-order HMM is the best model structure. • With a large number of states → 1st-order HMM is the best model structure.

  23. Results – model selection, cross validation (averages over 16 birds) [Two plots comparing 0th-, 1st-, and 2nd-order HMMs: lower bound on log-marginal likelihood (z-score), and predictive likelihood from cross validation.]
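Cross-validation makes the same comparison on held-out data. A simplified sketch for visible Markov chains, with add-alpha smoothing (an assumption for illustration, not the paper's HMM procedure):

```python
import math
from collections import Counter

def predictive_loglik(train, test, order, alphabet_size, alpha=1.0):
    """Average per-symbol log-likelihood of `test` under an order-`order`
    Markov model fit on `train`, with add-alpha smoothing."""
    ctx_counts, pair_counts = Counter(), Counter()
    for i in range(order, len(train)):
        ctx = tuple(train[i - order:i])
        ctx_counts[ctx] += 1
        pair_counts[ctx, train[i]] += 1
    total = 0.0
    for i in range(order, len(test)):
        ctx = tuple(test[i - order:i])
        p = ((pair_counts[ctx, test[i]] + alpha)
             / (ctx_counts[ctx] + alpha * alphabet_size))
        total += math.log(p)
    return total / (len(test) - order)

# Held-out comparison on a sequence with a second-order rule.
song = list("abcdbe") * 50
train, test = song[:240], song[240:]
held_out = {L: predictive_loglik(train, test, L, 5) for L in (0, 1, 2)}
```

Unlike BIC, this needs no explicit complexity penalty: an over- or under-parameterized model simply predicts the held-out data worse.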

  24. HMM learns many-to-one mapping Many-to-one mapping from the states to a syllable "b". (Similar results were obtained for 30 of the 54 syllables where significant second-order dependency was found.)

  25. Outline 1. Introduction – Neural substrates of birdsong – Neural network models 2. Statistics of birdsong – Higher-order history dependency 3. Statistical models for birdsong 4. Discussion – Neural implementation – Future directions

  26. Summary of results • Bengalese finch songs have at least second-order history dependency. [Spectrogram: frequency (Hz) vs. time (sec).] • Two candidate mechanisms: state transitions with higher-order dependency (2nd-order HMM) vs. many-to-one mapping from states to syllables (1st-order HMM; states 1–5 map onto syllables a, b, c, d). • The many-to-one mapping mechanism is sufficient for Bengalese finch song.

  27. Mapping onto neuroanatomy • HVC - hidden state (branch ⇔ state ) • RA - auditory features of each syllable (Katahira, Okanoya and Okada, 2007)

  28. Future directions (ongoing research) • How can the brain learn this representation? – Analysis of song development from the juvenile period. – Developing a network model with synaptic plasticity for learning the many-to-one mapping (e.g., Doya & Sejnowski, NIPS, 1995; Troyer & Doupe, J Neurophysiol, 2000; Fiete, Fee & Seung, J Neurophysiol, 2007). • Applying HMMs to spike data recorded from songbirds (Katahira, Nishikawa, Okanoya & Okada, Neural Comput, 2010).

  29. Overview of our approach Behavior (song data) → statistical model, via parameter fitting and model selection. The statistical model provides support, refinement, and constraints for the neural network model. The neural network model maps onto anatomy and physiology, which in turn provide constraints. [Diagram linking behavior, statistical model, neural network model, and anatomy/physiology; spectrogram of song.]
