Hidden Markov processes can explain complex sequencing rules of birdsong: A statistical analysis and neural network modeling
Kentaro Katahira 1,2,3, Kenta Suzuki 3,4, Kazuo Okanoya 1,2,3, and Masato Okada 1,2,3
1. JST ERATO, Okanoya Emotional Information Project; 2. The University of Tokyo; 3. RIKEN Brain Science Institute; 4. Saitama University
Motivation - What are the neural substrates of sequential behavior?
Sequential behavior:
• Speech
• Playing music
• Dancing
(Perception / Generation / Learning)
Motivation - What are the neural substrates of sequential behavior?
Birdsong as a model system: a song is a sequence of syllables (e.g., a b c d), visible in the spectrogram (frequency vs. time).
(Perception / Generation / Learning)
Outline
1. Introduction
   – Neural substrates of birdsong
   – Neural network models
2. Statistics of birdsong
   – Higher-order history dependency
3. Statistical models for birdsong
4. Discussion
   – Neural implementation
   – Future directions
Neural activity pattern during singing (zebra finch)
(Hahnloser, Kozhevnikov and Fee, Nature, 2002)
Feedforward chain hypothesis
• Spikes propagate along a feedforward chain network (Li & Greenside, Phys. Rev. E, 2006; Jin, Ramazanoglu & Seung, J. Comput. Neurosci., 2007)
• Experimental evidence: Long & Fee, Nature, 2008; Long, Jin & Fee, Nature, 2010
This architecture is suitable for fixed sequences, but how about variable sequences?
Song of Bengalese finch - Variable sequences including branching points
Branching-chain hypothesis
• Mutual inhibition between branching chains (Jin, Phys. Rev. E, 2009)
[Figure: neuron index vs. time; chain a feeds branches b and c, which inhibit each other]
Limitation of the branching-chain model
• The transition is a simple first-order Markov process: which chain becomes active depends only on the last active chain, not on earlier chains in the sequence.
Question: Are syllable sequences of Bengalese finch songs Markov processes?
Outline
1. Introduction
   – Neural substrates of birdsong
   – Neural network models
2. Statistics of birdsong
   – Higher-order history dependency
3. Statistical models for birdsong
4. Discussion
   – Neural implementation
   – Future directions
Test of the (first-order) Markov assumption
Null hypothesis: the transition probability to the next syllable depends only on the current syllable, not on the syllable before it (Markov assumption).
Example: overall, syllable "b" is followed by c (0.495), d (0.408), or e (0.097); but when "a" precedes "b", the next syllable is c (0.385), d (0.422), or e (0.193).
χ² goodness-of-fit test: a significant difference → second-order history dependency.
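The slide's test can be sketched in code. A minimal version (toy sequence and the hypothetical helper `second_order_chi2` are illustrative, not the talk's actual pipeline) builds the contingency table of next-syllable counts split by the syllable two steps back, and computes the χ² statistic:

```python
from collections import Counter, defaultdict

def second_order_chi2(seq, b):
    """Chi-square statistic testing whether the distribution of the
    syllable following `b` depends on the syllable that preceded `b`
    (a violation of the first-order Markov assumption)."""
    # Next-syllable counts for each two-back context a in triples (a, b, ?)
    table = defaultdict(Counter)
    for a, mid, nxt in zip(seq, seq[1:], seq[2:]):
        if mid == b:
            table[a][nxt] += 1
    contexts = sorted(table)
    nexts = sorted({n for c in table.values() for n in c})
    # Standard chi-square statistic over the contexts x next-syllables table
    row = {a: sum(table[a].values()) for a in contexts}
    col = {n: sum(table[a][n] for a in contexts) for n in nexts}
    total = sum(row.values())
    chi2 = 0.0
    for a in contexts:
        for n in nexts:
            expected = row[a] * col[n] / total
            if expected > 0:
                chi2 += (table[a][n] - expected) ** 2 / expected
    return chi2
```

On a toy sequence where "ab" is always followed by "c" and "cb" by "d" (`"abccbd" * 50`), the statistic is large; on `"ab" * 50`, where only one context exists, it is zero.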
Result
We found more than one significant second-order history dependency in all 16 birds (p < 0.01 with Bonferroni correction).
Example: χ²(2) = 187.49, p < 0.0001.
Then… is the branching-chain model incorrect?
Two possible mechanisms for history dependency
Hypothesis 1: Chain transition with higher-order dependency - which branch chain is chosen depends on chains active earlier in the sequence.
Hypothesis 2: Many-to-one mapping from chains to syllables - several distinct chains produce the same syllable (Katahira, Okanoya and Okada, Biol. Cybern., 2007).
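Hypothesis 2 can be illustrated with a toy simulation (the transition and emission tables below are made up for illustration, not fitted to data): a strictly first-order hidden chain whose states map many-to-one onto syllables produces syllable statistics that look second-order.

```python
from collections import Counter

# Toy Hypothesis-2 model: states 2 and 3 both emit syllable "b"
EMIT = {1: "a", 2: "b", 3: "b", 4: "c", 5: "d"}
NEXT = {1: 2, 2: 4, 4: 3, 3: 5, 5: 1}  # deterministic 1st-order transitions

def song(T):
    """Generate T syllables by running the hidden first-order chain."""
    s, out = 1, []
    for _ in range(T):
        out.append(EMIT[s])
        s = NEXT[s]
    return out

def next_after(seq, prev, cur):
    """Empirical distribution of the syllable following the bigram (prev, cur)."""
    c = Counter(n for p, m, n in zip(seq, seq[1:], seq[2:]) if (p, m) == (prev, cur))
    total = sum(c.values())
    return {k: v / total for k, v in c.items()}
```

Running `song(200)` gives `next_after(seq, "a", "b") == {"c": 1.0}` but `next_after(seq, "c", "b") == {"d": 1.0}`: the syllable sequence shows second-order dependency even though the hidden dynamics are purely first-order.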
However…
• Neural activity data from HVC of singing Bengalese finches are not available (the recordings above are from zebra finch HVC).
• We therefore examined the two hypotheses from song data alone, using statistical models.
Outline
1. Introduction
   – Neural substrates of birdsong
   – Neural network models
2. Statistics of birdsong
   – Higher-order history dependency
3. Statistical models for birdsong
4. Discussion
   – Neural implementation
   – Future directions
Feature extraction - Auditory features
Each syllable is represented by auditory features (z-scored):
• Spectral entropy
• Duration
• Mean frequency
(cf. Tchernichovski et al., 2000)
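The z-scoring shown on the feature axes is just per-feature standardization across syllables; a minimal sketch:

```python
def zscore(xs):
    """Standardize one feature across syllables: zero mean, unit variance
    (population standard deviation)."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]
```

Standardizing each feature puts spectral entropy, duration, and mean frequency on a common scale before model fitting.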
Hidden Markov Model (HMM)
[Diagram: hidden states 1-4 connected by transition probabilities a_ij (a_11, a_12, a_22, a_23, a_24, a_33, a_41, …); each hidden state emits the observable auditory features]
State transition dynamics in HMM
1st-order HMM: p(z_t | z_{t-1})
2nd-order HMM: p(z_t | z_{t-1}, z_{t-2})
0th-order HMM (Gaussian mixture): p(z_t)
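These three dynamics differ only in how much hidden-state history conditions the next transition. A minimal sketch (the table format and helper name are illustrative, not the talk's implementation): the next hidden state is drawn from a distribution conditioned on the last L states, with L = 0, 1, or 2.

```python
import random

def sample_chain(T, order, trans, seed=0):
    """Sample T hidden states from an order-L Markov chain.
    `trans` maps a tuple of the last `order` states (shorter near the
    start of the sequence; () for order 0) to a list of (state, prob)."""
    rng = random.Random(seed)
    path = []
    for _ in range(T):
        hist = tuple(path[-order:]) if order else ()
        states, probs = zip(*trans[hist])
        path.append(rng.choices(states, weights=probs)[0])
    return path
```

With a deterministic first-order table the path is forced, e.g. `trans = {(): [("A", 1.0)], ("A",): [("B", 1.0)], ("B",): [("A", 1.0)]}` yields the alternating path A, B, A, B, …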
Relationship between the two hypotheses and the statistical models
Hypothesis 1 (chain transition with higher-order dependency) → 2nd-order HMM
Hypothesis 2 (many-to-one mapping from chains to syllables) → 1st-order HMM
Bayesian model selection
Given data X (auditory features) and model structure m, defined by:
• L: Markov order (0, 1, 2)
• K: the number of hidden states
Model posterior: p(m | X) ∝ p(X | m) p(m)
Marginal likelihood: p(X | m) = ∫ p(X | θ, m) p(θ | m) dθ (θ: model parameter set) → difficult to compute!
Approximation: a lower bound (the variational free energy), computable by the variational Bayes method.
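Once each model structure's (approximate) log marginal likelihood is in hand, the model posterior itself is easy to compute. A small sketch (the helper name and uniform default prior are assumptions), using log-sum-exp for numerical stability:

```python
import math

def model_posterior(log_evidence, log_prior=None):
    """p(m | X) proportional to p(X | m) p(m), from log marginal
    likelihoods (or their variational lower bounds), via log-sum-exp."""
    names = sorted(log_evidence)
    lp = log_prior or {m: 0.0 for m in names}  # uniform prior by default
    logs = {m: log_evidence[m] + lp[m] for m in names}
    mx = max(logs.values())
    z = mx + math.log(sum(math.exp(v - mx) for v in logs.values()))
    return {m: math.exp(v - z) for m, v in logs.items()}
```

For example, if one model's marginal likelihood is three times another's, their posterior probabilities under a uniform prior are 0.75 and 0.25.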
Result - model selection (one bird)
Lower bound on the log-marginal likelihood vs. number of states K (higher = better model):
• With a small number of states → the 2nd-order HMM wins
• With a large number of states → the 1st-order HMM wins ("best model structure")
Results - model selection and cross-validation (averages over 16 birds)
Both the lower bound on the log-marginal likelihood (z-scored) and the predictive likelihood (cross-validation) compare the 0th-, 1st-, and 2nd-order HMMs; the 1st-order HMM performs best, consistent with the single-bird result.
HMM learns a many-to-one mapping
The fitted 1st-order HMM assigns several hidden states to the same syllable "b" - a many-to-one mapping from states to syllables.
(Similar results were obtained for 30 of the 54 syllables for which significant second-order dependency was found.)
Outline
1. Introduction
   – Neural substrates of birdsong
   – Neural network models
2. Statistics of birdsong
   – Higher-order history dependency
3. Statistical models for birdsong
4. Discussion
   – Neural implementation
   – Future directions
Summary of results
• Bengalese finch songs have at least second-order history dependency.
• Model comparison favors the 1st-order HMM with a many-to-one mapping from states to syllables (e.g., five states for four syllables) over the 2nd-order HMM with higher-order state transitions.
• The many-to-one mapping mechanism is sufficient to account for Bengalese finch song.
Mapping onto neuroanatomy
• HVC - hidden state (branching chain ⇔ state)
• RA - auditory features of each syllable
(Katahira, Okanoya and Okada, 2007)
Future directions (ongoing research)
• How can the brain learn this representation?
   – Analysis of song development from the juvenile period.
   – Developing a network model with synaptic plasticity that learns the many-to-one mapping (e.g., Doya & Sejnowski, NIPS, 1995; Troyer & Doupe, J. Neurophysiol., 2000; Fiete, Fee & Seung, J. Neurophysiol., 2007).
• Applying HMMs to spike data recorded from songbirds (Katahira, Nishikawa, Okanoya & Okada, Neural Comput., 2010).
Overview of our approach
Behavior (song data) → parameter fitting and model selection → statistical model. The statistical model constrains and refines the neural network model; the network model in turn supports the statistical model and maps onto anatomy and physiology, which provide further constraints.