

SLIDE 1

Applied Machine Learning in Biomedicine

Enrico Grisan enrico.grisan@dei.unipd.it

SLIDE 2

fMRI experiment

Test whether a classifier could distinguish the activation resulting from seeing words that were either kinds of tools or kinds of buildings. The subject was shown one word per trial and performed the following task: think about the item and its properties while the word was displayed (3 s), then try to clear her mind afterwards (8 s of blank screen).

(Pereira, Mitchell, Botvinick, Neuroimage, 2009)

For each patient and each task (word recognition), the fMRI data provide a signal correlated with metabolism at each voxel of the acquired 3D brain volume: 16000 features per example, 42 examples for the «building» class and 42 examples for the «tools» class.

SLIDE 3

fMRI feature selection

classifier training & feature selection

The goal (see the sketch after this list) is to:

  • reduce the ratio of features to examples,
  • decrease the chance of overfitting,
  • get rid of uninformative features
  • let the classifier focus on informative ones.
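
A minimal sketch of this kind of voxel selection (not code from the lecture): univariate feature selection on synthetic fMRI-like data with the shapes quoted above, using scikit-learn's SelectKBest to keep only the voxels with the highest ANOVA F-score against the class label. The value k=500 is an arbitrary illustrative choice.

```python
# A minimal sketch: univariate feature selection before classification.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((84, 16000))      # 42 "building" + 42 "tools" trials (synthetic)
y = np.repeat([0, 1], 42)
X[y == 1, :50] += 0.5                     # a handful of informative voxels

# Keep the 500 voxels with the highest ANOVA F-score, then classify.
model = make_pipeline(SelectKBest(f_classif, k=500),
                      LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())
```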
SLIDE 4

fMRI feature selection

16000 features!!!

SLIDE 5

Regularization

  • We do not need a validation set to know that some fits are silly
  • Discourage solutions we do not like
  • Formalize the cost of the solutions we do not like
SLIDE 6

Shrinkage

Minimizing the objective function

$\mathbf{w} = \arg\min_{\mathbf{w}} \; L\big(\hat{y}(\mathbf{x}^*); y^*\big) + \lambda \, \|\mathbf{w}\|_p$

or constraining the estimates

$\mathbf{w} = \arg\min_{\mathbf{w}} \; L\big(\hat{y}(\mathbf{x}^*); y^*\big) \quad \text{subject to} \quad \|\mathbf{w}\|_p \le t$

SLIDE 7

Ridge regression

p = 2

$\mathbf{w}_{\mathrm{ridge}} = \arg\min_{\mathbf{w}} \; L\big(\hat{y}(\mathbf{x}^*); y^*\big) + \lambda \, \|\mathbf{w}\|_2$

$\mathbf{w}_{\mathrm{ridge}} = \arg\min_{\mathbf{w}} \; \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{D} w_j^2$

$\mathrm{RSS}(\lambda) = (\mathbf{y} - \mathbf{X}\mathbf{w})^T (\mathbf{y} - \mathbf{X}\mathbf{w}) + \lambda \, \mathbf{w}^T \mathbf{w}$

$\mathbf{w}_{\mathrm{ridge}} = (\mathbf{X}^T \mathbf{X} + \lambda \mathbf{I})^{-1} \mathbf{X}^T \mathbf{y}$
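
As a sketch, the closed-form solution above can be implemented directly with NumPy; the data here are synthetic and assumed centered, and lam = 10.0 is an arbitrary choice of λ.

```python
# A minimal sketch of the closed-form ridge solution.
import numpy as np

def ridge_fit(X, y, lam):
    """Return w_ridge = (X^T X + lam*I)^{-1} X^T y."""
    D = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))        # more features than examples
w_true = np.zeros(200); w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(50)

w = ridge_fit(X, y, lam=10.0)             # shrinks all weights toward zero
print(np.round(w[:8], 2))
```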

SLIDE 8

LASSO

Least Absolute Shrinkage and Selection Operator, p = 1

$\mathbf{w}_{\mathrm{lasso}} = \arg\min_{\mathbf{w}} \; L\big(\hat{y}(\mathbf{x}^*); y^*\big) + \lambda \, \|\mathbf{w}\|_1$

$\mathbf{w}_{\mathrm{lasso}} = \arg\min_{\mathbf{w}} \; \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{D} |w_j|$
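
A hedged sketch of the same idea with scikit-learn's Lasso; its alpha corresponds to λ up to scikit-learn's 1/(2N) scaling of the squared-error term. Note how most fitted weights come out exactly zero.

```python
# A minimal sketch: the p = 1 penalty via scikit-learn's Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
w_true = np.zeros(200); w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(50)

lasso = Lasso(alpha=0.1).fit(X, y)
print("non-zero weights:", np.sum(lasso.coef_ != 0))   # most weights are exactly zero
```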

SLIDE 9

Geometry of shrinkage

[Figure: contours of the loss $L\big(\hat{y}(\mathbf{x}^*); y^*\big)$ together with the constraint regions of the penalties $\lambda\|\mathbf{w}\|_1$ and $\lambda\|\mathbf{w}\|_2$]

SLIDE 10

Other shrinkage norms

[Figure: unit balls of $\|\mathbf{w}\|_p$ for $p = 4, 2, 1, 0.5, 0.2$, and the $p = 1.2$ ball compared to the elastic-net ball with $\alpha = 0.2$]

Elastic net: $\alpha \, \|\mathbf{w}\|_2 + (1 - \alpha) \, \|\mathbf{w}\|_1$
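
A hedged sketch with scikit-learn's ElasticNet, which mixes the two penalties; note that its l1_ratio parameter weights the ℓ1 term, so it roughly plays the role of (1 − α) in the expression above, while alpha sets the overall penalty strength.

```python
# A minimal sketch: the elastic-net penalty via scikit-learn's ElasticNet.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
w_true = np.zeros(200); w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(50)

enet = ElasticNet(alpha=0.1, l1_ratio=0.8).fit(X, y)   # l1_ratio weights the L1 term
print("non-zero weights:", np.sum(enet.coef_ != 0))
```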

SLIDE 11

Regularization constants

How do we pick $\lambda$ or $t$? (option 1 is sketched below)

1) Based on validation
2) Based on bounds on the generalization error
3) Based on empirical Bayes
4) Reinterpreting $\lambda$
5) Going fully Bayesian
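
A minimal sketch of option 1: scikit-learn's RidgeCV cross-validates over a grid of candidate λ values (called alphas there) and keeps the best one. The grid below is an arbitrary illustrative choice.

```python
# A minimal sketch: choose the regularization constant on held-out data.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(100)

model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)
print("selected lambda:", model.alpha_)
```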

SLIDE 12

Least angle regression (LAR)

1. Center and standardize all features
2. Start with $\mathbf{r} = \mathbf{y} - \bar{y}$ and $w_i = 0$, $i = 1, \dots, D$
3. Find the feature $\mathbf{X}_k$ most correlated with $\mathbf{r}$ and add it to the active set $\mathcal{A}_0$
4. At each iteration $\tau$ evaluate the least squares direction: $\boldsymbol{\delta}_\tau = (\mathbf{X}_{\mathcal{A}_\tau}^T \mathbf{X}_{\mathcal{A}_\tau})^{-1} \mathbf{X}_{\mathcal{A}_\tau}^T \mathbf{r}_\tau$
5. Update the weights of the features in the active set: $\mathbf{w}_{\mathcal{A}_{\tau+1}} = \mathbf{w}_{\mathcal{A}_\tau} + \eta \, \boldsymbol{\delta}_\tau$
6. Evaluate the least squares fit of $\mathbf{X}_{\mathcal{A}_\tau}$ and update the residuals $\mathbf{r}_\tau$
7. Repeat 4-6 until some other variable $\mathbf{X}_j$ is as correlated with $\mathbf{r}_\tau$ as $\mathbf{X}_{\mathcal{A}_\tau}$
8. Add $\mathbf{X}_j$ to $\mathbf{X}_{\mathcal{A}_\tau}$
9. Repeat 4-8
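
Rather than re-implementing the steps above, a quick sketch can lean on scikit-learn's Lars estimator; n_nonzero_coefs=5 simply stops the path after five features have entered the active set, and the data are synthetic.

```python
# A minimal sketch: least angle regression via scikit-learn's Lars.
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
y = 2 * X[:, 3] - X[:, 7] + 0.1 * rng.standard_normal(100)

lar = Lars(n_nonzero_coefs=5).fit(X, y)
print("active features:", lar.active_)              # order in which features entered
print("weights:", np.round(lar.coef_[lar.active_], 2))
```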

SLIDE 13

Incremental Forward Stagewise regression

1. Center and standardize all features
2. Start with $\mathbf{r} = \mathbf{y}$ and $w_i = 0$, $i = 1, \dots, D$
3. Find the feature $\mathbf{X}_k$ most correlated with $\mathbf{r}$
4. Evaluate the change $\delta = \varepsilon \cdot \mathrm{sign}\big(\langle \mathbf{X}_k, \mathbf{r} \rangle\big)$
5. Update the weight of that feature: $w_k = w_k + \delta$
6. Update the residuals: $\mathbf{r} = \mathbf{r} - \delta \, \mathbf{X}_k$
7. Repeat 3-6 until the residuals are uncorrelated with the features
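
Since the loop above is so simple, here is a minimal NumPy sketch of it; eps is the small step size ε, and the stopping test uses an arbitrary tolerance instead of exact zero correlation.

```python
# A minimal sketch of incremental forward stagewise regression.
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=5000):
    N, D = X.shape
    w = np.zeros(D)
    r = y.copy()                                  # start with r = y
    for _ in range(n_steps):
        corr = X.T @ r                            # correlation of each feature with r
        k = np.argmax(np.abs(corr))               # most correlated feature
        if np.abs(corr[k]) < 1e-8:                # residuals ~ uncorrelated: stop
            break
        delta = eps * np.sign(corr[k])
        w[k] += delta                             # small update of one weight
        r -= delta * X[:, k]                      # update the residuals
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
X = (X - X.mean(0)) / X.std(0)                    # center and standardize
y = 3 * X[:, 0] - 2 * X[:, 5] + 0.1 * rng.standard_normal(100)
print(np.round(forward_stagewise(X, y - y.mean())[:8], 2))
```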

SLIDE 14

Preprocessing

Centering

– Might have all features at 500 ± 10
– Hard to predict the ball park of the bias
– Subtract the mean from all input features

Rescaling

– Heights can be measured in cm or m
– Rescale inputs to have unit variance … or interquartile ranges

Care at test time:

apply the same scaling to the test inputs and reverse the scaling on the predictions
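
A minimal sketch of this recipe with scikit-learn's StandardScaler: the scaling statistics are estimated on the training set only, reused on the test inputs, and inverted on the predictions. The data and the Ridge model are placeholders.

```python
# A minimal sketch: fit scaling on training data, reuse it at test time.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(500, 10, (80, 5)), rng.normal(500, 10, (20, 5))
y_train = X_train[:, 0] - 500 + rng.standard_normal(80)

x_scaler = StandardScaler().fit(X_train)          # statistics from training data only
y_scaler = StandardScaler().fit(y_train[:, None])

model = Ridge(alpha=1.0).fit(x_scaler.transform(X_train),
                             y_scaler.transform(y_train[:, None]).ravel())

# Test time: same input scaling, then reverse the scaling on the prediction.
y_pred = y_scaler.inverse_transform(
    model.predict(x_scaler.transform(X_test))[:, None]).ravel()
print(np.round(y_pred[:3], 2))
```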

SLIDE 15

Some tricks of the trade

  • Preprocessing
  • Transformations
  • Features
SLIDE 16

Log transform inputs

Positive quantities are often highly skewed. The log domain is often much more natural.
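
A small sketch, assuming a synthetic log-normal feature: np.log1p makes the distribution far less skewed (log1p rather than log so that zeros remain valid).

```python
# A minimal sketch: a skewed positive quantity before and after a log transform.
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.5, size=1000)     # highly skewed positive quantity
x_log = np.log1p(x)

def skew(v):
    return ((v - v.mean()) ** 3).mean() / v.std() ** 3

print("skewness before/after:", round(skew(x), 2), round(skew(x_log), 2))
```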

SLIDE 17

Creating extra data

Dirty trick: create more training ‘data’ by corrupting examples in the real training set. The changes can respect invariances that would be difficult or burdensome to measure directly.
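
A minimal sketch of the trick, assuming that adding small Gaussian noise is an invariance of the problem (for images one would instead use shifts, flips, small rotations, and so on); the helper augment below is illustrative, not a library function.

```python
# A minimal sketch: stack noisy copies of the training examples, keeping the labels.
import numpy as np

def augment(X, y, n_copies=3, noise=0.05, seed=0):
    """Return (X, y) extended with n_copies noisy replicas; labels are unchanged."""
    rng = np.random.default_rng(seed)
    X_aug = [X] + [X + noise * rng.standard_normal(X.shape) for _ in range(n_copies)]
    y_aug = [y] * (n_copies + 1)
    return np.vstack(X_aug), np.concatenate(y_aug)

X = np.arange(12, dtype=float).reshape(4, 3)
y = np.array([0, 0, 1, 1])
X_big, y_big = augment(X, y)
print(X_big.shape, y_big.shape)        # (16, 3) (16,)
```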

SLIDE 18

Encoding attributes

  • Categorical variables

– A study has three individuals
– Three different colours
– Possible encoding: 100, 010, 001

  • Ordinal variables

– Movie rating, stars
– Tissue anomaly rating, expert scores 1-3
– Possible encoding: 00, 10, 11
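
A small sketch of both encodings in plain NumPy: broadcasting against the list of categories gives the one-hot code, and cumulative thresholds give the 00/10/11 ordinal code. The colour and score values are made up.

```python
# A minimal sketch: one-hot encoding and a thermometer-style ordinal encoding.
import numpy as np

colours = np.array(["red", "green", "blue", "green"])
categories = np.array(["red", "green", "blue"])
one_hot = (colours[:, None] == categories[None, :]).astype(int)
print(one_hot)            # red -> [1 0 0], green -> [0 1 0], blue -> [0 0 1]

scores = np.array([1, 3, 2])                      # expert scores 1-3
thermometer = np.stack([(scores >= 2).astype(int),
                        (scores >= 3).astype(int)], axis=1)
print(thermometer)        # 1 -> [0 0], 2 -> [1 0], 3 -> [1 1]
```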

SLIDE 19

Basis function features

In the regression and classification examples we used polynomials $x, x^2, x^3, \dots$ This is often a bad choice. Polynomials of sparse binary features may make sense: $x_1 x_2, x_1 x_3, \dots, x_1 x_2 x_3$. Other options:

  • radial basis functions: $e^{-\|x - \mu\|^2 / h^2}$
  • sigmoids: $1/(1 + e^{-\mathbf{v}^T \mathbf{x}})$
  • Fourier bases, wavelets, …
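
A minimal sketch of a radial-basis-function expansion on a 1-D input: the centres μ are placed on a grid, h is the bandwidth, and a linear (ridge) model is fit on the expanded features. All values are illustrative.

```python
# A minimal sketch: RBF basis features plus a linear model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(100)

centres = np.linspace(0, 1, 10)                   # the mu's on a grid
h = 0.1                                           # bandwidth
Phi = np.exp(-(x[:, None] - centres[None, :]) ** 2 / h ** 2)

model = Ridge(alpha=1e-3).fit(Phi, y)
print(round(model.score(Phi, y), 3))              # fit quality on the RBF features
```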
SLIDE 20

Feature engineering

The difference in performance in an application can depend more on the original features than on the algorithms. Working out clever ways of making features from complex objects like images can be worthwhile, is hard, and is not always respected …

SLIDE 21

The SIFT story

Taken from http://yann.lecun.com/ex/pamphlets/publishing-models.html Many of us have horror stories about how some of our best papers have been rejected by conferences. Perhaps the best case in point of the last few years is David Lowe's work on the SIFT method. After years of being rejected from conferences starting in 1997, the journal version published in 2004 went on to become the most highly cited paper in all of engineering sciences in 2005. David Lowe relates the story: I did submit papers on earlier versions of SIFT to both ICCV and CVPR (around 1997/98) and both were rejected. I then added more of a systems flavor and the paper was published at ICCV 1999, but just as a poster. By then I had decided the computer vision community was not interested, so I applied for a patent and intended to promote it just for industrial applications.

SLIDE 22

A rant about least squares

Whenever a person eagerly inquires if my computer can solve a set of 300 equations in 300 unknowns. . . The odds are all too high that our inquiring friend. . . has collected a set of experimental data and is now attempting to fit a 300-parameter model to it - by Least Squares! The sooner this guy can be eased out of your office, the sooner you will be able to get back to useful work - but these chaps are persistent. . . you end up by getting angry and throwing the guy out of your office. There is usually a reasonable procedure. Unfortunately, it is undramatic, laborious, and requires thought - which most of these charlatans avoid like the plague. They should merely fit a five-parameter model, then a six-parameter one. . . Somewhere along the line - and it will be much closer to 15 parameters than to 300 - the significant improvement will cease and the fitting operation is over. There is no system of 300 equations, no 300 parameters, and no glamor. The computer center's director must prevent the looting of valuable computer time by these would-be fitters of many parameters. The task is not a pleasant one, but the legitimate computer users have rights, too. . . the impasse finally has to be broken by violence - which therefore might as well be used in the very beginning.

Numerical methods that (mostly) work, Forman S. Acton (1990, original edition 1970)

SLIDE 23

Dumb and dumber

What happens if the feature distribution does not allow simple classifiers to work well?

Simple classifiers (few parameters, simple structure …):
1) Are good: they do not usually overfit
2) Are bad: they cannot solve hard problems

SLIDE 24

Exploiting weak classifiers

Instead of learning a single classifier, learn many weak classifiers that are good at different parts of the input space. The output class is obtained by a vote over the classifiers.

SLIDE 25

Ensemble methods

We like that:

  • Classifiers that are most ‘sure’ will vote with more conviction
  • Classifiers will be most ‘sure’ about a particular part of the space
  • On average, they do better than single classifiers

How?

  • Force a classifier $h_t$ to learn a different part of the input space?
  • Weight the votes of each classifier by $\alpha_t$?
SLIDE 26

Boosting

Idea: given a weak classifier, run it multiple times on (reweighted) training data, then combine by voting. At each iteration $t$:

  • Weight each training sample $\mathbf{x}_i$ by how incorrectly it has been classified
  • Learn a weak classifier $h_t$
  • Estimate the strength $\alpha_t$ of $h_t$

Evaluate the final classification: $H(\mathbf{x}) = \mathrm{sign}\Big(\sum_t \alpha_t h_t(\mathbf{x})\Big)$

SLIDE 27

Learning from weighted data

  • Weighted data set $\{(\mathbf{x}_i, y_i, d_i)\}$

– $d_i$ is the weight of the training sample $(\mathbf{x}_i, y_i)$
– the $i$-th example counts as $d_i$ samples

Unweighted loss function:

$L(\hat{y}; y) = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$

Weighted loss function:

$L_d(\hat{y}; y) = \sum_{i=1}^{N} d_i \, (y_i - \hat{y}_i)^2$
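
A tiny numeric sketch of the difference: the same residuals, scored with and without the weights $d_i$ (the values are made up).

```python
# A minimal sketch: unweighted vs. weighted squared loss on the same predictions.
import numpy as np

y = np.array([1.0, -1.0, 1.0, 1.0])
y_hat = np.array([0.8, -0.9, -0.2, 1.1])
d = np.array([0.1, 0.1, 0.7, 0.1])                # weights summing to 1

unweighted = np.sum((y - y_hat) ** 2)
weighted = np.sum(d * (y - y_hat) ** 2)           # mistakes on heavy examples cost more
print(round(unweighted, 3), round(weighted, 3))
```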

SLIDE 28

Ada-Boost

Training set: $(\mathbf{x}_1, y_1), \dots, (\mathbf{x}_N, y_N)$, with $y_i \in \{-1, 1\}$. Initialize $d_i = \frac{1}{N}$.

  • Train a simple classifier $h_t$ minimizing $\varepsilon_t = L_d(\hat{y}; y)$
  • Set $\alpha_t = \frac{1}{2} \ln \frac{1 - \varepsilon_t}{\varepsilon_t}$
  • Update the weights: $d_{i,t+1} = d_{i,t} \, e^{-\alpha_t y_i h_t(\mathbf{x}_i)}$
  • Renormalize the weights so that $\sum_{i=1}^{N} d_{i,t+1} = 1$

SLIDE 29

Final Ada-Boost classifier

$\hat{y}_i = H(\mathbf{x}_i) = \mathrm{sign}\Big(\sum_t \alpha_t h_t(\mathbf{x}_i)\Big)$
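
A minimal sketch of the whole loop, using depth-1 decision trees (stumps) as the weak classifiers; scikit-learn's sample_weight argument plays the role of the weights $d_i$. The synthetic data are illustrative.

```python
# A minimal sketch of the Ada-Boost loop with decision stumps as weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=50):
    """y must be in {-1, +1}; returns the stumps h_t and their strengths alpha_t."""
    N = len(y)
    d = np.full(N, 1.0 / N)                       # initialize d_i = 1/N
    stumps, alphas = [], []
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=d)
        pred = h.predict(X)
        eps = np.clip(np.sum(d * (pred != y)), 1e-10, 1 - 1e-10)   # weighted error
        alpha = 0.5 * np.log((1 - eps) / eps)     # strength of this weak classifier
        d = d * np.exp(-alpha * y * pred)         # up-weight misclassified examples
        d /= d.sum()                              # renormalize
        stumps.append(h); alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Final classifier: sign of the alpha-weighted vote."""
    score = sum(a * h.predict(X) for h, a in zip(stumps, alphas))
    return np.sign(score)

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0, 1, -1)   # no single stump separates this
stumps, alphas = adaboost(X, y)
print("training accuracy:", (adaboost_predict(stumps, alphas, X) == y).mean())
```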

SLIDE 30

Ada-Boost in action

SLIDE 31

Ada-Boost in action

SLIDE 32

Training error

The training error of the combined classifier is bounded by the exponential loss:

$\frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big[\hat{y}_i \ne y_i\big] \;\le\; \frac{1}{N} \sum_{i=1}^{N} e^{-y_i f(\mathbf{x}_i)}, \qquad f(\mathbf{x}_i) = \sum_t \alpha_t h_t(\mathbf{x}_i)$

SLIDE 33

Boosting and logistic regression

SLIDE 34

Bagging

1) Learn independent weak classifiers on bootstrap replicates of the training set
2) Average/vote over the different classifiers
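
A minimal sketch, assuming decision trees as the weak classifiers and labels in {-1, +1} so that the vote is just the sign of the summed predictions; the data are synthetic and illustrative.

```python
# A minimal sketch of bagging: train trees on bootstrap replicates and vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_models=25, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), size=len(y))     # bootstrap replicate (with replacement)
        models.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    votes = np.stack([m.predict(X) for m in models])   # one row of votes per model
    return np.sign(votes.sum(axis=0))                  # majority vote for labels in {-1, +1}

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)
models = bagging_fit(X, y)
print("training accuracy:", (bagging_predict(models, X) == y).mean())
```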