SLIDE 1

Applied Machine Learning in Biomedicine

Enrico Grisan enrico.grisan@dei.unipd.it

SLIDE 2

From biology to models

SLIDE 3

(Artificial) Neural Networks

SLIDE 4

Backpropagation

  • Loop:

– Sample a batch of data
– Backpropagate the loss function to calculate the analytical gradient
– Perform the parameter update (a minimal sketch follows below)
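A minimal MATLAB sketch of this loop, assuming hypothetical helpers sampleBatch and backprop (not toolbox functions) that return a mini-batch and the analytical gradient of the loss:

% Plain gradient descent on parameters w.
% sampleBatch and backprop are hypothetical helpers, shown only
% to make the structure of the loop concrete.
eta = 1e-2; batchSize = 32; numIters = 1000;   % assumed hyperparameters
for it = 1:numIters
    [Xb, Yb] = sampleBatch(X, Y, batchSize);   % sample a batch of data
    [loss, grad] = backprop(w, Xb, Yb);        % analytical gradient of the loss
    w = w - eta*grad;                          % parameter update
end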

SLIDE 5

ANN Practicalities

1) Preprocess the data!

% X = NxD test cases
% Standardize each feature: zero mean, unit variance
mx = mean(X);
sx = std(X);
X = X - ones(size(X,1),1)*mx;
X = X ./ (ones(size(X,1),1)*sx);

% Equivalent row-by-row version
mx = mean(X); sx = std(X);
for ct = 1:size(X,1)
    X(ct,:) = X(ct,:) - mx;
    X(ct,:) = X(ct,:) ./ sx;
end
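If the Statistics and Machine Learning Toolbox is available, zscore(X) performs the same per-feature standardization in a single call.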

SLIDE 6

ANN Practicalities

1) Preprocess the data!

% X = NxD test cases (assumed zero-mean)
Sx = cov(X);          % DxD covariance matrix
[E,D] = eig(Sx);      % eigen-decomposition: Sx = E*D*E'
R = E';               % rotate the data onto the principal axes
W = sqrt(inv(D));     % rescale each axis to unit variance
Xw = (W*(R*X'))';     % whitened data, NxD
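One practical caveat (not on the slide): near-zero eigenvalues make inv(D) blow up, so whitening is usually stabilized with a small constant, e.g. W = diag(1 ./ sqrt(diag(D) + 1e-5)).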

SLIDE 7

ANN Practicalities

SLIDE 8

ANN Practicalities

  • Run cross-validation across many tips & tricks
  • Use visualization (training/validation loss curves, weight updates, weight plots, …) to guide the hyperparameter ranges to cross-validate
  • Ensemble multiple models
SLIDE 9

ANN in Matlab

% X = NxD test cases, Y = Nx1 target values
% (train expects samples as columns, hence the transposes)
% Initialize the network: one hidden layer with 10 neurons
net = feedforwardnet(10);
net = train(net, X', Y');
Yhat = net(X');
perfs = mse(net, Y', Yhat);

% Adding L2 regularization
net.performParam.regularization = 0.5;
net_reg = train(net, X', Y');
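Note that by default train splits the samples at random into training, validation, and test subsets (net.divideFcn = 'dividerand') and stops early when the validation error stops improving.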

SLIDE 10

Pancreatic tissue

Clinical Gastroenterology and Hepatology, 2012

SLIDE 11

Removing bones from X-ray images

Chen et al. IEEE TMI 2014

SLIDE 12

Autoencoder

Encoder → Decoder

SLIDE 13

Convolutional Neural Networks

SLIDE 14

Hierarchical organization

SLIDE 15

Shared weights

SLIDE 16

Local connectivity

RGB image: 32x32x3
Before (full connectivity): each neuron connects to all 32x32x3 inputs, i.e. 32x32x3 weights.
Now (local connectivity): each neuron connects to, e.g., a 5x5x3 patch, and has 5x5x3 weights.
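The savings per neuron are easy to check; a quick MATLAB sketch with the numbers above:

w_full  = 32*32*3   % 3072 weights per neuron with full connectivity
w_local = 5*5*3     %   75 weights per neuron with a 5x5x3 local patch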

SLIDE 17

Depth

SLIDE 18

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 1.

SLIDE 19

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 1.

SLIDE 20

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 1.

SLIDE 21

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 1.

SLIDE 22

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 1 → 5x5 output.

SLIDE 23

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 2.

SLIDE 24

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 2.

SLIDE 25

Receptive fields

Replicate the column of hidden neurons across space, with some stride: 7x7 input, 3x3 connectivity (receptive field), stride 2 → 3x3 output.

SLIDE 26

Receptive fields

Output size: (N − F)/stride + 1

For N = 7, F = 3:
  • stride 1: (7 − 3)/1 + 1 = 5
  • stride 2: (7 − 3)/2 + 1 = 3
  • stride 3: (7 − 3)/3 + 1 = 2.33… → does not fit!
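A one-line MATLAB check of the formula:

outSize = @(N, F, stride) (N - F)/stride + 1;
outSize(7, 3, 1)   % = 5
outSize(7, 3, 2)   % = 3
outSize(7, 3, 3)   % = 2.33... not an integer: stride 3 does not fit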

SLIDE 27

From columns to volume

All stacked neurons form [1x1xdepth] columns that combine into the output volume.

SLIDE 28

Volume dimension

Input image: 32x32x3
Receptive field: 5x5, stride 1
Depth (neurons per column): 5
Output size: (32 − 5)/1 + 1 = 28
Number of neurons: 28x28x5
Weights per neuron: 5x5x3
Total weights: 28x28x5 × 5x5x3 = 294,000

SLIDE 29

Padding

Zero-pad the image: 7x7 input, 3x3 connectivity (receptive field), stride 1 → 7x7 output.

SLIDE 30

Volume dimension

Input image: 32x32x3, zero-padded
Receptive field: 5x5, stride 1
Depth (neurons per column): 30
Output size: 32
Number of neurons: 32x32x30
Weights per neuron: 5x5x3
Total weights: 32x32x30 × 5x5x3 = 2,304,000

SLIDE 31

Weight sharing

If all neurons at the same depth of the output volume share the same weights, we get «slices»: each activation map (a depth slice) is computed with only a single set of weights.
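With the numbers from the previous slides, weight sharing cuts the parameter count by three orders of magnitude:

w_no_sharing = 32*32*30 * (5*5*3)   % 2,304,000: one weight set per neuron
w_shared     = 30 * (5*5*3)         % 2,250: one filter per depth slice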

SLIDE 32

Convolution and filters

When all weights at the same depth are equal, the activation map can be computed through a convolution:

(g ∗ h)(y, z) = Σ_{o1} Σ_{o2} g(o1, o2) · h(y − o1, z − o2)

The weights represent a filter! The slices (filtered images) represent feature maps!
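In MATLAB this is what conv2 computes; a sketch for a single input slice (for a 3-channel input, sum the per-channel convolutions):

g = randn(32, 32);          % input slice
h = randn(5, 5);            % filter, i.e. the shared weights
a = conv2(g, h, 'valid');   % 28x28 activation map: (32-5)/1+1 = 28
b = conv2(g, h, 'same');    % zero-padded variant: 32x32 output, as on slide 29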

SLIDE 33

CNN

SLIDE 34

CNN architecture

SLIDE 35

CNN architecture

SLIDE 36

Weight interpretation

SLIDE 37

Max pooling

In convolutional NN architectures, convolutional layers are often followed by pooling (downsampling) layers.
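A minimal sketch of 2x2 max pooling with stride 2 on a 28x28 activation map:

A = randn(28, 28);                         % activation map from a conv layer
P = zeros(14, 14);                         % pooled map at half resolution
for i = 1:14
    for j = 1:14
        block = A(2*i-1:2*i, 2*j-1:2*j);   % 2x2 window, stride 2
        P(i,j) = max(block(:));            % keep the strongest activation
    end
end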

SLIDE 38

ICPR 2012 Mitosis detection

Wang et al. JMI 2014

SLIDE 39

MICCAI 2013 Mitosis Detection

Ciresan et al. MICCAI 2013

SLIDE 40

Lymph node detection

Roth et al. MICCAI 2014