

Slide 1

Ensembles of Classifiers

Larry Holder
CSE 6363 – Machine Learning
Computer Science and Engineering
University of Texas at Arlington

Slide 2

References

Dietterich, “Machine Learning Research: Four Current Directions,” AI Magazine, pp. 97-105, Winter 1997.

Slide 3

Learning Task

Given a set S of training examples {(x1,y1),…,(xm,ym)}
Sampled from an unknown function y = f(x)
Each xi is a feature vector <xi,1,…,xi,n> of n discrete or real-valued features
Class y ∈ {1,…,K}
Examples may contain noise
Find a hypothesis h approximating f

Slide 4

Ensemble of Classifiers

Goal
Improve accuracy of supervised learning task

Approach
Use an ensemble of classifiers, rather than just one

Challenges
How to construct ensemble
How to use individual hypotheses of ensemble to produce a classification

Slide 5

Ensembles of Classifiers

Given ensemble of L classifiers h1,…,hL
Decisions based on combination of individual hl
E.g., weighted or unweighted voting
How to construct ensemble whose accuracy is better than any individual classifier?

Slide 6

Ensembles of Classifiers

Ensemble requirements:
Individual classifiers disagree
Each classifier’s error < 0.5
Classifiers’ errors uncorrelated
THEN, ensemble will outperform any hl

Slide 7

Ensembles of Classifiers (Fig. 1)

P(l of 21 hypotheses errant)
Each hypothesis has error 0.3
Errors independent
P(11 or more errant) = 0.026
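
The 0.026 figure is just a binomial tail: with independent errors of 0.3, a majority vote over 21 hypotheses is wrong only when 11 or more of them err. A minimal check in Python (not part of the original slides):

    from math import comb

    # Tail of Binomial(21, 0.3): probability that a majority (11 or more) of the
    # 21 independent hypotheses err, which is exactly when the vote is wrong.
    p, L = 0.3, 21
    tail = sum(comb(L, k) * p**k * (1 - p)**(L - k) for k in range(11, L + 1))
    print(f"P(11 or more of {L} errant) = {tail:.3f}")   # ~0.026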

Slide 8

Constructing Ensembles

Sub-sampling the training examples
One learning algorithm run on different sub-samples of the training data to produce different classifiers
Works well for unstable learners, i.e., the output classifier undergoes major changes given only small changes in the training data

Unstable learners: decision tree, neural network, rule learners
Stable learners: linear regression, nearest-neighbor, linear-threshold (perceptron)

Slide 9

Sub-sampling the Training Set

Methods:
Cross-validated committees: k-fold cross-validation to generate k different training sets; learn k classifiers (sketched below)
Bagging
Boosting
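
A minimal sketch of the cross-validated committee idea; the decision-tree base learner, the scikit-learn utilities, and numpy arrays for X and y are illustrative assumptions, not prescribed by the slides:

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.tree import DecisionTreeClassifier

    def cv_committee(X, y, k=5, seed=0):
        # One classifier per fold, each trained on the (k-1)/k portion kept in
        # by that fold, so the k training sets overlap but differ.
        committee = []
        for train_idx, _ in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
            committee.append(DecisionTreeClassifier().fit(X[train_idx], y[train_idx]))
        return committee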

Slide 10

Bagging

Given m training examples
Construct L random samples of size m, with replacement
Each sample called a bootstrap replicate
On average, each replicate contains 63.2% of the training data
Learn a classifier hl for each of the L samples
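
A minimal sketch of this procedure; the decision-tree base learner, the majority-vote combiner, and integer class labels 0..K-1 are illustrative assumptions:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging(X, y, L=25, seed=0):
        # Learn L classifiers, each on a bootstrap replicate: m draws with replacement.
        rng = np.random.default_rng(seed)
        m = len(X)
        return [DecisionTreeClassifier().fit(X[idx], y[idx])
                for idx in (rng.integers(0, m, size=m) for _ in range(L))]

    def bagged_predict(ensemble, X):
        # Unweighted majority vote; assumes integer class labels 0..K-1.
        votes = np.stack([h.predict(X) for h in ensemble]).astype(int)   # shape (L, n)
        return np.array([np.bincount(col).argmax() for col in votes.T])

The 63.2% figure is 1 - (1 - 1/m)^m, the chance that a given example appears at least once in a replicate, which approaches 1 - 1/e ≈ 0.632 for large m.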

Slide 11

Boosting

Each of the m training examples weighted according to classification difficulty pl(x)
Initially uniform: 1/m
Training sample of size m for iteration l drawn with replacement according to distribution pl(x)
Learner biased toward higher-weight training examples – if learner can use pl(x)
Error εl of classifier hl used to bias pl+1(x)
Learn L classifiers; each used to modify the weights for the next learned classifier
Final classifier a weighted vote of the individual classifiers
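
These steps are instantiated by AdaBoost (Fig. 2 on the next slide). A minimal binary-classification sketch in the direct-reweighting style; the scikit-learn decision stump, labels in {-1, +1}, and numpy arrays for X and y are illustrative assumptions:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost(X, y, L=50):
        m = len(X)
        p = np.full(m, 1.0 / m)                      # p_1(x): initially uniform
        classifiers, alphas = [], []
        for _ in range(L):
            # Base learner uses p_l directly via sample weights.
            h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
            pred = h.predict(X)
            eps = max(p[pred != y].sum(), 1e-12)     # weighted error epsilon_l of h_l
            if eps >= 0.5:                           # each classifier's error must stay < 0.5
                break
            alpha = 0.5 * np.log((1 - eps) / eps)    # weight of h_l in the final vote
            p *= np.exp(-alpha * y * pred)           # up-weight examples h_l misclassified
            p /= p.sum()                             # renormalize to obtain p_{l+1}(x)
            classifiers.append(h)
            alphas.append(alpha)
        return classifiers, alphas

    def boosted_predict(classifiers, alphas, X):
        # Final classifier: weighted vote of the individual classifiers.
        return np.sign(sum(a * h.predict(X) for h, a in zip(classifiers, alphas)))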

Slide 12

AdaBoost (Fig. 2)

Slide 13

C4.5 with/without Boosting

Each point represents 1 of 27 test domains.

Slide 14

C4.5 with/without Bagging

Slide 15

Boosting vs. Bagging

Slide 16

Constructing Ensembles

Manipulating input features
Classifiers constructed using different subsets of the features
Works only when there is some redundancy in the features
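
A minimal sketch of this idea; the subset size (half the features) and the decision-tree base learner are illustrative assumptions:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def feature_subset_ensemble(X, y, L=10, n_sub=None, seed=0):
        # Train each ensemble member on a different random subset of the input features.
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        n_sub = n_sub or max(1, n_features // 2)
        ensemble = []
        for _ in range(L):
            cols = rng.choice(n_features, size=n_sub, replace=False)
            ensemble.append((cols, DecisionTreeClassifier().fit(X[:, cols], y)))
        return ensemble   # each member records which feature columns it uses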

Slide 17

Constructing Ensembles

Manipulating output targets
When there is a large number K of classes
Generate L binary partitions of the K classes
Generate L classifiers for these 2-class problems
Classify according to the class whose partitions received the most votes
Similar to error-correcting codes
Generally improves performance
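
A minimal sketch using random binary partitions of the classes; integer class labels 0..K-1, the number of partitions, and the base learner are illustrative assumptions:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def output_partition_ensemble(X, y, K, L=15, seed=0):
        # Each classifier solves a 2-class problem: which side of a random
        # partition of the K classes does the example's class fall on?
        rng = np.random.default_rng(seed)
        members = []
        for _ in range(L):
            side = rng.integers(0, 2, size=K)                 # class k -> side 0 or side 1
            members.append((side, DecisionTreeClassifier().fit(X, side[y])))
        return members

    def partition_predict(members, X, K):
        # Each class receives a vote whenever it lies on the side the classifier predicted.
        votes = np.zeros((len(X), K))
        for side, h in members:
            pred = h.predict(X)                               # predicted side per example
            votes += (side[None, :] == pred[:, None])
        return votes.argmax(axis=1)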

Slide 18

Constructing Ensembles

Injecting randomness
Multiple neural nets with different random initial weights
Randomly-selected split attribute among the top 20 in C4.5
Randomly-selected condition among the top 20% in FOIL (Prolog rule learner)
Adding Gaussian noise to the input features
Make random modifications to the current h and use these classifiers weighted by their posterior probability (accuracy on the training set)

Slide 19

Constructing Ensembles using Neural Networks

Train multiple neural networks, minimizing error and correlation with other networks’ predictions
Use a genetic algorithm to generate multiple, diverse networks
Have networks also predict various sub-tasks (e.g., one of the input features)

Slide 20

Constructing Ensembles

Use several different types of learning algorithms
E.g., decision tree, neural network, nearest neighbor
Some learners’ error rates may be bad (i.e., > 0.5)
Some learners’ predictions may be correlated
Need to check using, e.g., cross-validation

Slide 21

Combining Classifiers

Unweighted vote
Majority vote
If the hl produce class probability distributions P(f(x)=k | hl), average them:
P(f(x)=k) = (1/L) Σ_{l=1..L} P(f(x)=k | hl)

Weighted vote
Classifier weights proportional to accuracy on training data

Learning the combination
Gating function (learn classifier weights)
Stacking (learn how to vote)
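
A minimal sketch of the unweighted-average and weighted-vote rules; scikit-learn-style predict/predict_proba classifiers, integer class labels 0..K-1, and every classifier having seen all K classes are illustrative assumptions:

    import numpy as np

    def average_proba_vote(ensemble, X):
        # Unweighted combination: average P(f(x)=k | h_l) over the L classifiers.
        probs = np.stack([h.predict_proba(X) for h in ensemble])   # shape (L, n, K)
        return probs.mean(axis=0).argmax(axis=1)

    def weighted_vote(ensemble, weights, X, K):
        # Weighted vote; weights could be, e.g., each classifier's training-set accuracy.
        votes = np.zeros((len(X), K))
        for h, w in zip(ensemble, weights):
            votes[np.arange(len(X)), h.predict(X)] += w
        return votes.argmax(axis=1)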

Slide 22

Why Ensembles Work

Uncorrelated errors made by individual classifiers can be overcome by voting
How difficult is it to find a set of uncorrelated classifiers?
Why can’t we find a single classifier that does as well?

Slide 23

Finding Good Ensembles

Typical hypothesis spaces H are large
Need a large number (ideally lg(|H|)) of training examples to narrow the search through H
Typically, the sample S has size m << lg(|H|)
The subset of hypotheses in H consistent with S forms a good ensemble

Slide 24

Finding Good Ensembles

Typical learning algorithms L employ greedy search
Not guaranteed to find the optimal hypothesis (minimal size and/or minimal error)
Generating hypotheses using different perturbations of L produces good ensembles

Slide 25

Finding Good Ensembles

Typically, the hypothesis space H does not contain the target function f
Weighted combinations of several approximations may represent classifiers outside of H
(Figure: decision surfaces defined by individually learned decision trees vs. the decision surface defined by a vote over the learned decision trees)

Slide 26

Summary

Advantages
Ensemble of classifiers typically outperforms any one classifier

Disadvantages
Difficult to measure correlation between classifiers from different types of learners
Learning time and memory constraints
Learned concept difficult to understand