

SLIDE 1

Overfitting, Cross-Validation

Recommended reading:

  • Neural nets: Mitchell Chapter 4
  • Decision trees: Mitchell Chapter 3

Machine Learning 10-701 Tom M. Mitchell Carnegie Mellon University

SLIDE 2

Overview

  • Followup on neural networks
    – Example: Face classification
  • Cross validation
    – Training error
    – Test error
    – True error
  • Decision trees
    – ID3, C4.5
    – Trees and rules

SLIDE 3

SLIDE 4

SLIDE 5

SLIDE 6

[Figure; x-axis: # of gradient descent steps]

SLIDE 7

[Figure; x-axis: # of gradient descent steps]

SLIDE 8

[Figure; x-axis: # of gradient descent steps]

SLIDE 9

SLIDE 10

SLIDE 11

Cognitive Neuroscience Models Based on ANNs

[McClelland & Rogers, Nature 2003]

SLIDE 12

SLIDE 13

SLIDE 14

How should we choose the number of weight updates?

SLIDE 15

SLIDE 16

  • How should we choose the number of weight updates?
  • How should we allocate N examples to training and validation sets?
  • How will the curves change if we double the training set size?
  • How will the curves change if we double the validation set size?
  • What is our best unbiased estimate of the true network error?
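One standard answer to the first question above is early stopping: track error on a held-out validation set and keep the weights from the step where validation error is lowest. The following is a hedged sketch of that idea; the model is a toy one-dimensional logistic regression, not the networks from the slides, and all names are illustrative.

```python
# Hedged sketch of early stopping: choose the number of gradient descent
# steps by tracking error on a held-out validation set. Toy logistic
# regression stands in for a neural network; all names are illustrative.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def error_rate(w, b, data):
    """Fraction of examples misclassified by the threshold-0.5 rule."""
    wrong = sum(1 for x, y in data if (sigmoid(w * x + b) >= 0.5) != (y == 1))
    return wrong / len(data)

random.seed(0)
make = lambda n: [(x, 1 if x > 0 else 0)
                  for x in (random.uniform(-1, 1) for _ in range(n))]
train, valid = make(30), make(30)   # disjoint training and validation sets

w, b, lr = 0.0, 0.0, 0.5
best_err, best_step = float("inf"), 0
for step in range(1, 201):
    # one full-batch gradient step on the logistic loss
    gw = sum((sigmoid(w * x + b) - y) * x for x, y in train) / len(train)
    gb = sum((sigmoid(w * x + b) - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb
    v_err = error_rate(w, b, valid)
    if v_err < best_err:                    # validation error improved:
        best_err, best_step = v_err, step   # remember this stopping point
print("stop after", best_step, "steps; validation error", best_err)
```

Note that once the stopping point is chosen using the validation set, the validation error at that point is no longer an unbiased estimate of true error, which is why the slides distinguish a separate test set.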

SLIDE 17

Overfitting and Cross Validation

Overfitting: a learning algorithm overfits the training data if it outputs a hypothesis h ∈ H when there exists an alternative h′ ∈ H such that:

  error_train(h) < error_train(h′)  but  error_true(h) > error_true(h′)

where error_train denotes error measured on the training data and error_true denotes the true (generalization) error.

SLIDE 18

Three types of error

  • True error, error_true(h): the probability that h misclassifies a new example drawn at random from the underlying distribution.
  • Train set error, error_train(h): the fraction of training examples that h misclassifies.
  • Test set error, error_test(h): the fraction of held-out test examples that h misclassifies.

SLIDE 19

Bias in estimates

  • error_train(h) gives an optimistically biased estimate of the true error error_true(h), because h was chosen specifically to fit these examples.
  • error_test(h) gives an unbiased estimate of error_true(h), provided the test set was not used in any way to train or select h.
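The optimism of training-set error can be made concrete with an extreme case. A 1-nearest-neighbor classifier (an illustrative choice, not from the slides) scores perfectly on its own training data, since each point is its own nearest neighbor, yet it still makes errors on fresh data when labels are noisy; this is a sketch under those assumptions.

```python
# Hedged sketch: training-set error is optimistic. A 1-nearest-neighbor
# classifier has zero error on its own training data (each point is its
# own nearest neighbor) but nonzero error on fresh noisy data.
import random

random.seed(1)

def nn_predict(train, x):
    """Label of the training point closest to x."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def noisy_label(x):
    """True concept is sign(x); flip the label with probability 0.2."""
    y = 1 if x > 0 else 0
    return 1 - y if random.random() < 0.2 else y

def sample(n):
    return [(x, noisy_label(x)) for x in (random.uniform(-1, 1) for _ in range(n))]

def error_rate(train, data):
    return sum(1 for x, y in data if nn_predict(train, x) != y) / len(data)

train, test = sample(50), sample(50)
train_err = error_rate(train, train)  # optimistically biased: exactly 0.0
test_err = error_rate(train, test)    # unbiased estimate of true error
print("train error:", train_err, " test error:", test_err)
```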

SLIDE 20

Leave one out cross validation

Method for estimating the true error of h′:

  • e = 0
  • For each training example z:
    – Train on {data − z}
    – Test on the single example z; if there is an error, then e ← e + 1

The final error estimate (for training on samples of size |data| − 1) is:

e / |data|
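The procedure above can be sketched directly in code. The learner here is a hand-rolled 1-nearest-neighbor classifier, chosen only so the sketch is self-contained; the slides do not specify a particular learner.

```python
# Minimal leave-one-out cross-validation sketch, following the e / |data|
# procedure above. The 1-nearest-neighbor learner is an illustrative
# stand-in, not prescribed by the slides.

def nn_predict(train, x):
    """Label of the training point closest to x (squared Euclidean)."""
    nearest = min(train,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def leave_one_out_error(data):
    """Return e / |data| for models trained on |data| - 1 examples."""
    e = 0
    for i, (x, y) in enumerate(data):
        held_out = data[:i] + data[i + 1:]   # train on {data - z}
        if nn_predict(held_out, x) != y:     # test on the single example z
            e += 1                           # e <- e + 1
    return e / len(data)

data = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
print(leave_one_out_error(data))  # -> 0.0 on this well-separated toy set
```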

SLIDE 21

Leave one out cross validation

The leave-one-out error, e / |data|, gives an almost unbiased estimate of the true error of a hypothesis h′ trained on |data| − 1 examples.

SLIDE 22

Leave one out cross validation

In fact, the e / |data| estimate from leave-one-out cross validation is a slightly pessimistic estimate of the true error of the hypothesis h′ trained on the full |data| examples, since each leave-one-out model is trained on one fewer example.

SLIDE 23

  • How should we choose the number of weight updates?
  • How should we allocate N examples to training and validation sets?
  • How will the curves change if we double the training set size?
  • How will the curves change if we double the validation set size?
  • What is our best unbiased estimate of the true network error?

SLIDE 24

What you should know:

  • Neural networks
    – Hidden layer representations
  • Cross validation
    – Training error, test error, true error
    – Cross validation as a low-bias estimator