Applied Machine Learning in Biomedicine
Enrico Grisan
enrico.grisan@dei.unipd.it
fMRI experiment
Test whether a classifier can distinguish the activation resulting from seeing words that were either kinds of tool or kinds of building. The subject was shown one word per trial and performed the following task: think about the item and its properties while the word was displayed (3 s), then try to clear her mind afterwards (8 s of blank screen).
(Pereira, Mitchell, Botvinick, Neuroimage, 2009)
For each subject and each task (word recognition), fMRI data provide a signal correlated with metabolism at each voxel of the acquired 3D brain volume:
- 16000 features per example
- 42 examples for the «building» class
- 42 examples for the «tools» class
fMRI feature selection
classifier training & feature selection
the goal is to:
- reduce the ratio of features to examples,
- decrease the chance of overfitting,
- get rid of uninformative features,
- let the classifier focus on the informative ones.
fMRI feature selection
16000 features!!!
Regularization
- No validation set is needed to know that some fits are silly
- Discourage solutions we don't like
- Formalize the cost of solutions we do not like
Shrinkage
Minimizing the penalized objective function:
$\hat{w} = \arg\min_w \sum_{i=1}^{N} L(\hat{y}(x_i), y_i) + \lambda \|w\|_p$
Or constraining the estimates:
$\hat{w} = \arg\min_w \sum_{i=1}^{N} L(\hat{y}(x_i), y_i)$ subject to $\|w\|_p = t$
Ridge regression
p=2
$\hat{w}_{ridge} = \arg\min_w \sum_{i=1}^{N} L(\hat{y}(x_i), y_i) + \lambda \|w\|_2^2$
$\hat{w}_{ridge} = \arg\min_w \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{D} w_j^2$
$RSS(\lambda) = (y - Xw)^T (y - Xw) + \lambda\, w^T w$
$\hat{w}_{ridge} = (X^T X + \lambda I)^{-1} X^T y$
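As a minimal sketch (variable names are mine, not from the slides), the ridge closed form $(X^T X + \lambda I)^{-1} X^T y$ can be computed and its shrinkage effect checked in a few lines of NumPy:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam * I)^(-1) X^T y."""
    D = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

w_ols = ridge_fit(X, y, 0.0)      # lam = 0 recovers ordinary least squares
w_ridge = ridge_fit(X, y, 10.0)   # lam > 0 shrinks the weights toward zero
```

Solving the regularized normal equations with `solve` avoids forming an explicit inverse; setting `lam = 0` reduces to plain least squares.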
LASSO
Least Absolute Shrinkage and Selection Operator p=1
$\hat{w}_{lasso} = \arg\min_w \sum_{i=1}^{N} L(\hat{y}(x_i), y_i) + \lambda \|w\|_1$
$\hat{w}_{lasso} = \arg\min_w \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{D} |w_j|$
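One common way to solve the LASSO objective is coordinate descent with soft thresholding; this is a sketch (not the course's code), assuming the objective $\frac{1}{2}\|y - Xw\|^2 + \lambda\|w\|_1$:

```python
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t, setting it exactly to zero if |z| <= t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5 * ||y - X w||^2 + lam * ||w||_1."""
    N, D = X.shape
    w = np.zeros(D)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(D):
            r_j = y - X @ w + X[:, j] * w[j]         # residual without feature j
            w[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, 0.0, 0.0, -3.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=100)
w = lasso_cd(X, y, lam=50.0)
# unlike ridge, several coefficients come out exactly zero
```

The exact zeros are what make the LASSO a feature-selection method as well as a shrinkage method.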
Geometry of shrinkage
[Figure: contours of the loss $\sum_i L(\hat{y}(x_i), y_i)$ together with the $\lambda \|w\|_1$ and $\lambda \|w\|_2$ penalty regions]
Other shrinkage norms
[Figure: unit balls of the $\ell_p$ penalty for $p = 4,\ 2,\ 1.2,\ 1,\ 0.5,\ 0.2$, and the elastic-net penalty with $\alpha = 0.2$]
Elastic net: $\alpha \|w\|_2 + (1 - \alpha) \|w\|_1$
Regularization constants
How do we pick $\lambda$ or $t$?
1) Based on validation
2) Based on bounds on the generalization error
3) Based on empirical Bayes
4) Reinterpreting $\lambda$
5) Going fully Bayesian
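Option 1, picking $\lambda$ on a held-out validation set, can be sketched as follows (a toy setup, not the course's fMRI data):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 30))          # few examples, many features
w_true = np.zeros(30)
w_true[:3] = [2.0, -1.0, 1.5]
y = X @ w_true + 0.5 * rng.normal(size=60)

X_tr, y_tr = X[:40], y[:40]            # training split
X_va, y_va = X[40:], y[40:]            # validation split

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
errs = [np.mean((y_va - X_va @ ridge_fit(X_tr, y_tr, l)) ** 2) for l in lams]
best_lam = lams[int(np.argmin(errs))]  # keep the lam that generalizes best
```

In practice k-fold cross-validation is preferred over a single split when examples are as scarce as in the fMRI setting.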
Least angle regression (LAR)
1. Center and standardize all features
2. Start with $r = y - \bar{y}$ and $w_j = 0$, $j = 1, \dots, D$
3. Find the feature $X_k$ most correlated with $r$ and add it to the active set
4. At each iteration $\tau$ evaluate the least squares direction: $\delta_\tau = (X_\tau^T X_\tau)^{-1} X_\tau^T r_\tau$
5. Update the weights of the features in the active set: $w_{\tau+1} = w_\tau + \eta\, \delta_\tau$
6. Evaluate the least squares fit of $X_\tau$ and update the residuals $r_\tau$
7. Repeat 4-6 until some other variable $X_j$ is as correlated with $r_\tau$ as $X_\tau$
8. Add $X_j$ to $X_\tau$
9. Repeat 4-8
Incremental Forward Stagewise regression
1. Center and standardize all features
2. Start with $r = y$ and $w_j = 0$, $j = 1, \dots, D$
3. Find the feature $X_k$ most correlated with $r$
4. Evaluate the change $\delta = \epsilon \cdot \mathrm{sign}(\langle X_k, r \rangle)$
5. Update the weight of the selected feature: $w_k = w_k + \delta$
6. Update the residuals: $r = r - \delta X_k$
7. Repeat 3-6 until the residuals are uncorrelated with all the features
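The incremental forward stagewise steps can be sketched directly in NumPy ($\epsilon$ is the small step size; names and stopping tolerance are mine):

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=5000):
    """Repeatedly nudge the weight of the feature most correlated with the
    residual by eps * sign(correlation), updating the residual each time."""
    w = np.zeros(X.shape[1])
    r = y.copy()
    for _ in range(n_steps):
        c = X.T @ r                    # correlation of each feature with r
        k = int(np.argmax(np.abs(c)))
        if np.abs(c[k]) < 1e-6:        # residuals uncorrelated: stop
            break
        delta = eps * np.sign(c[k])
        w[k] += delta                  # update only the selected weight
        r -= delta * X[:, k]           # update the residuals
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 4))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # center and standardize
y = X @ np.array([1.0, 0.0, -2.0, 0.0]) + 0.1 * rng.normal(size=80)
w = forward_stagewise(X, y - y.mean())
```

With a small enough step size the weight path traced by this procedure closely resembles the LASSO solution path.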
Preprocessing
Centering
– Features might all sit around 500 ± 10
– Hard to predict the ballpark of the bias term
– Subtract the mean from all input features
Rescaling
– Heights can be measured in cm or m
– Rescale inputs to have unit variance … or unit interquartile range
Care at test time:
apply the same scaling to the test inputs and reverse the scaling on predictions
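A minimal sketch of the bookkeeping: statistics are fitted on the training data only, reused unchanged at test time, and reversed on predictions.

```python
import numpy as np

rng = np.random.default_rng(3)
X_tr = rng.normal(loc=500.0, scale=10.0, size=(50, 3))  # features near 500 +/- 10
X_te = rng.normal(loc=500.0, scale=10.0, size=(10, 3))

mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
Z_tr = (X_tr - mu) / sd          # fit centering/scaling on the training set
Z_te = (X_te - mu) / sd          # reuse the SAME mu and sd on test inputs

# if targets are standardized too, undo the scaling on predictions
y_tr = X_tr @ np.array([1.0, 2.0, 3.0])
y_mu, y_sd = y_tr.mean(), y_tr.std()
pred_scaled = 0.0                # some model output in standardized units
pred = pred_scaled * y_sd + y_mu # reverse the scaling: back to original units
```

Recomputing `mu` and `sd` on the test set would silently change the model's inputs, which is the mistake the slide warns against.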
Some tricks of the trade
- Preprocessing
- Transformations
- Features
Log transform inputs
Positive quantities are often highly skewed. The log domain is often much more natural.
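A quick numerical illustration (the lognormal data here are my own assumption): the skewness of a positive, right-skewed sample drops sharply after taking logs.

```python
import numpy as np

def skewness(v):
    """Sample skewness: third moment of the standardized values."""
    z = (v - v.mean()) / v.std()
    return float(np.mean(z ** 3))

rng = np.random.default_rng(4)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10000)  # positive, heavily skewed

s_raw = skewness(x)           # large positive skew
s_log = skewness(np.log(x))   # roughly symmetric after the transform
```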
Creating extra data
Dirty trick: create more training 'data' by corrupting examples in the real training set. The corruptions can respect invariances that would be difficult or burdensome to measure directly.
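A generic sketch of the trick, jittering real examples with small Gaussian noise (the right corruption is domain-specific; this noise model is just an assumption):

```python
import numpy as np

def augment(X, y, n_copies=4, noise=0.05, seed=0):
    """Create extra training 'data' by adding small perturbations to real
    examples; the labels of the corrupted copies stay unchanged."""
    rng = np.random.default_rng(seed)
    X_new = [X] + [X + noise * rng.normal(size=X.shape) for _ in range(n_copies)]
    y_new = [y] * (n_copies + 1)
    return np.concatenate(X_new), np.concatenate(y_new)

X = np.arange(12.0).reshape(6, 2)
y = np.array([0, 0, 0, 1, 1, 1])
X_aug, y_aug = augment(X, y)     # 5x the examples, labels preserved
```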
Encoding attributes
- Categorical variables
– A study has three individuals
– Three different colours
– Possible encoding: 100, 010, 001
- Ordinal variables
– Movie rating, in stars
– Tissue anomaly rating, expert scores 1-3
– Possible encoding: 00, 10, 11
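Both encodings can be written out explicitly (a small sketch; the function names are mine):

```python
import numpy as np

def one_hot(values, categories):
    """Categorical: one indicator column per category (100, 010, 001)."""
    return np.array([[1 if v == c else 0 for c in categories] for v in values])

def thermometer(scores, n_levels):
    """Ordinal: score k maps to k-1 leading ones (1 -> 00, 2 -> 10, 3 -> 11)."""
    return np.array([[1 if j < s - 1 else 0 for j in range(n_levels - 1)]
                     for s in scores])

colours = one_hot(["red", "blue"], ["red", "green", "blue"])
ratings = thermometer([1, 2, 3], n_levels=3)
```

The thermometer code preserves the ordering of the levels, which a one-hot encoding of an ordinal variable would throw away.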
Basis function features
In the regression and classification examples we used polynomials $x, x^2, x^3, \dots$ Often a bad choice. Polynomials of sparse binary features may make sense: $x_1 x_2$, $x_1 x_3$, …, $x_1 x_2 x_3$
Other options:
- radial basis functions: $e^{-(x - \mu)^2 / h^2}$
- sigmoids: $1 / (1 + e^{-v^T x})$
- Fourier bases, wavelets, …
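The two listed basis functions can be generated as feature maps (a sketch with assumed centers, bandwidth, and weights):

```python
import numpy as np

def rbf_features(x, centers, h):
    """Radial basis functions exp(-(x - mu)^2 / h^2), one column per center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / h ** 2)

def sigmoid_features(x, v, b):
    """Sigmoid features 1 / (1 + exp(-(v * x + b))), one column per (v, b)."""
    return 1.0 / (1.0 + np.exp(-(np.outer(x, v) + b)))

x = np.linspace(-1.0, 1.0, 5)
Phi_rbf = rbf_features(x, centers=np.array([-0.5, 0.0, 0.5]), h=0.5)
Phi_sig = sigmoid_features(x, v=np.array([1.0, 5.0]), b=np.array([0.0, 0.0]))
```

Each column of `Phi_rbf` peaks at 1.0 when the input sits on its center, while the sigmoid features are bounded in (0, 1); either matrix can replace the raw input in a linear model.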
Feature engineering
The difference in performance of an application can depend more on the original features than on the algorithm. Working out clever ways of making features from complex objects like images can be worthwhile, is hard, and is not always respected …
The SIFT story
Taken from http://yann.lecun.com/ex/pamphlets/publishing-models.html
Many of us have horror stories about how some of our best papers have been rejected by conferences. Perhaps the best case in point of the last few years is David Lowe's work on the SIFT method. After years of being rejected from conferences starting in 1997, the journal version published in 2004 went on to become the most highly cited paper in all of engineering sciences in 2005. David Lowe relates the story:
I did submit papers on earlier versions of SIFT to both ICCV and CVPR (around 1997/98) and both were rejected. I then added more of a systems flavor and the paper was published at ICCV 1999, but just as a poster. By then I had decided the computer vision community was not interested, so I applied for a patent and intended to promote it just for industrial applications.
A rant about least squares
Whenever a person eagerly inquires if my computer can solve a set of 300 equations in 300 unknowns... The odds are all too high that our inquiring friend... has collected a set of experimental data and is now attempting to fit a 300-parameter model to it - by Least Squares! The sooner this guy can be eased out of your office, the sooner you will be able to get back to useful work - but these chaps are persistent... You end up by getting angry and throwing the guy out of your office. There is usually a reasonable procedure. Unfortunately, it is undramatic, laborious, and requires thought - which most of these charlatans avoid like the plague. They should merely fit a five-parameter model, then a six-parameter one... Somewhere along the line - and it will be much closer to 15 parameters than to 300 - the significant improvement will cease and the fitting operation is over. There is no system of 300 equations, no 300 parameters, and no glamor. The computer center's director must prevent the looting of valuable computer time by these would-be fitters of many parameters. The task is not a pleasant one, but the legitimate computer users have rights, too... The impasse finally has to be broken by violence - which therefore might as well be used in the very beginning.
Numerical methods that (mostly) work, Forman S. Acton (1990, original edition 1970)
Dumb and dumber
What happens if the feature distribution does not allow simple classifiers to work well?
Simple classifiers (few parameters, simple structure …):
1) Are good: they do not usually overfit
2) Are bad: they cannot solve hard problems
Exploiting weak classifiers
Instead of learning a single classifier, learn many weak classifiers that are good at different parts of the input space. Output class: decided by the vote of each classifier.
Ensemble methods
We would like:
- Classifiers that are most 'sure' to vote with more conviction
- Each classifier to be most 'sure' about a particular part of the space
- The ensemble, on average, to do better than a single classifier
How?
- Force each classifier $h_t$ to learn a different part of the input space?
- Weight the votes of each classifier by $\alpha_t$?
Boosting
Idea: given a weak learner, run it multiple times on (reweighted) training data, then let the resulting classifiers vote.
At each iteration $t$:
- Weight each training sample $x_i$ by how incorrectly it has been classified
- Learn a weak classifier $h_t$
- Estimate the strength $\alpha_t$ of $h_t$
Evaluate the final classification: $H(x_i) = \mathrm{sign}\left( \sum_t \alpha_t h_t(x_i) \right)$
Learning from weighted data
- Weighted data set $(x_i, y_i, d_i)$
– $d_i$ is the weight of the training sample $(x_i, y_i)$
– the $i$-th example counts as $d_i$ samples
Unweighted loss function:
$L(\hat{y}; y) = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$
Weighted loss function:
$L_d(\hat{y}; y) = \sum_{i=1}^{N} d_i\, (y_i - \hat{y}_i)^2$
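The "counts as $d_i$ samples" interpretation can be checked with weighted least squares, which has a closed form with $W = \mathrm{diag}(d)$ (a sketch, not the slides' code): an example with weight 2 yields exactly the same fit as physically duplicating it.

```python
import numpy as np

def weighted_least_squares(X, y, d):
    """Minimize sum_i d_i * (y_i - x_i^T w)^2; closed form with W = diag(d)."""
    W = np.diag(d)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 1.0, 5.0])

# weighting the third example by 2 ...
w_weighted = weighted_least_squares(X, y, np.array([1.0, 1.0, 2.0]))
# ... equals fitting with that example duplicated and unit weights
w_duplicated = weighted_least_squares(np.vstack([X, X[2:]]),
                                      np.append(y, y[2]), np.ones(4))
```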
Ada-Boost
Training set: $(x_1, y_1), \dots, (x_N, y_N)$, $y_i \in \{-1, 1\}$. Initialize $d_i = \frac{1}{N}$.
- Train a simple classifier $h_t$ minimizing the weighted error $\epsilon_t = L_d(\hat{y}; y)$
- Set $\alpha_t = \frac{1}{2} \ln \frac{1 - \epsilon_t}{\epsilon_t}$
- Update the weights: $d_{i,t+1} = d_{i,t}\, e^{-\alpha_t y_i h_t(x_i)}$
- Renormalize the weights so that $\sum_{i=1}^{N} d_{i,t+1} = 1$
Final Ada-Boost classifier
$\hat{y}_i = H(x_i) = \mathrm{sign}\left( \sum_t \alpha_t h_t(x_i) \right)$
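The full loop can be sketched with decision stumps as the weak classifiers (stumps are my choice of weak learner here; the slides only assume "a simple classifier"):

```python
import numpy as np

def stump_predict(X, j, thr, pol):
    """Decision stump: predict pol where feature j >= thr, -pol elsewhere."""
    return pol * np.where(X[:, j] >= thr, 1, -1)

def stump_fit(X, y, d):
    """Exhaustively pick the stump with the smallest weighted error."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                err = d[stump_predict(X, j, thr, pol) != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, T=5):
    N = X.shape[0]
    d = np.full(N, 1.0 / N)                  # initialize d_i = 1/N
    stumps, alphas = [], []
    for _ in range(T):
        err, j, thr, pol = stump_fit(X, y, d)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        d = d * np.exp(-alpha * y * stump_predict(X, j, thr, pol))
        d = d / d.sum()                      # renormalize the weights
        stumps.append((j, thr, pol))
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    f = sum(a * stump_predict(X, *s) for s, a in zip(stumps, alphas))
    return np.sign(f)

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] > 0.0, 1, -1)           # separable toy labels
stumps, alphas = adaboost(X, y)
acc = np.mean(adaboost_predict(X, stumps, alphas) == y)
```

The weight update is the one on the slide: misclassified points (where $y_i h_t(x_i) = -1$) get multiplied by $e^{\alpha_t} > 1$, so later rounds concentrate on the hard examples.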
Ada-Boost in action
Training error
$\frac{1}{N} \sum_{i=1}^{N} \mathbb{1}[\hat{y}_i \neq y_i] \;\le\; \frac{1}{N} \sum_{i=1}^{N} e^{-y_i f(x_i)}, \qquad f(x_i) = \sum_t \alpha_t h_t(x_i)$
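The bound holds pointwise, since $\mathbb{1}[m \le 0] \le e^{-m}$ for every margin $m = y_i f(x_i)$; a quick numerical check on made-up scores (the scores are an assumption, not AdaBoost output):

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.choice([-1.0, 1.0], size=200)  # labels in {-1, +1}
f = rng.normal(size=200)               # made-up scores f(x_i)

margins = y * f
err01 = np.mean(margins <= 0)          # 0-1 training error
bound = np.mean(np.exp(-margins))      # exponential upper bound
# err01 <= bound always, since 1[m <= 0] <= exp(-m) pointwise
```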