SLIDE 1
On Discriminative Learning of Prediction Uncertainty
Vojtěch Franc, Daniel Průša
Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague
SLIDE 2
SLIDE 3
Selective classifier:
(h, c)(x) = h(x) with probability c(x), reject with probability 1 − c(x),
where h: X → Y is a classifier and c: X → [0, 1] is a selection function.
Example: linear SVM
h(x) = sign(⟨φ(x), w⟩ + b), c(x) = [[ |⟨φ(x), w⟩ + b| ≥ θ ]]
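The linear-SVM example above can be sketched in a few lines: predict the sign of the linear score, and reject whenever the score's magnitude falls below the threshold θ. This is a minimal illustrative sketch (function and variable names are ours, not from the paper); φ(x) is taken to be the raw feature vector.

```python
import numpy as np

def selective_classify(X, w, b, theta):
    """Selective classifier (h, c) for a linear model.

    h(x) = sign(<x, w> + b); c(x) = 1 if |<x, w> + b| >= theta, else 0.
    Returns +1/-1 where c(x) = 1, and None (reject) otherwise.
    """
    scores = X @ w + b                  # linear scores
    accept = np.abs(scores) >= theta    # selection function c(x)
    preds = np.sign(scores)             # classifier h(x)
    return [int(p) if a else None for p, a in zip(preds, accept)]

# toy example: 1-D features, w = 1, b = 0, reject band |score| < 0.5
out = selective_classify(np.array([[2.0], [-0.1], [-3.0]]),
                         np.array([1.0]), 0.0, 0.5)
print(out)  # [1, None, -1]
```

The example near the decision boundary (score −0.1) is rejected; the two confident examples are classified.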
SLIDE 4
Selective classifier:
(h, c)(x) = h(x) with probability c(x), reject with probability 1 − c(x),
where h: X → Y is a classifier and c: X → [0, 1] is a selection function.
Example: linear SVM
h(x) = sign(⟨φ(x), w⟩ + b), c(x) = [[ |⟨φ(x), w⟩ + b| ≥ θ ]]

Coverage: φ(c) = E_{x∼p}[c(x)]

Selective risk: R_S(h, c) = E_{(x,y)∼p}[ℓ(y, h(x)) c(x)] / φ(c)

[Figure: risk-coverage curve, coverage [%] (20-100) vs. selective risk [%] (2-8), labeled "SVM + distance to hyperplane", annotated R_S = 2.1%.]
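The two quantities above have direct empirical counterparts: coverage is the fraction of accepted examples, and selective risk is the average loss restricted to those examples. A minimal sketch with 0/1 loss (our names, not the paper's):

```python
import numpy as np

def coverage_and_selective_risk(y_true, y_pred, accepted):
    """Empirical coverage phi(c) and selective risk R_S(h, c) with 0/1 loss.

    coverage      = fraction of examples with c(x) = 1
    selective risk = mean loss over accepted examples only
    """
    accepted = np.asarray(accepted, dtype=bool)
    coverage = accepted.mean()
    if coverage == 0:
        return 0.0, 0.0  # nothing accepted; selective risk undefined
    losses = (np.asarray(y_true) != np.asarray(y_pred)).astype(float)
    sel_risk = (losses * accepted).mean() / coverage
    return coverage, sel_risk

# toy example: 4 examples, 1 rejected, 1 error among the 3 accepted
cov, risk = coverage_and_selective_risk([1, -1, 1, -1],
                                        [1, -1, -1, -1],
                                        [True, True, True, False])
print(cov, risk)  # 0.75, 0.333...
```

Note the normalization by coverage: rejecting hard examples lowers the selective risk while lowering the coverage, which is exactly the trade-off the risk-coverage curve traces out.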
SLIDE 5
Selective classifier:
(h, c)(x) = h(x) with probability c(x), reject with probability 1 − c(x),
where h: X → Y is a classifier and c: X → [0, 1] is a selection function.
Example: linear SVM
h(x) = sign(⟨φ(x), w⟩ + b), c(x) = [[ |⟨φ(x), w⟩ + b| ≥ θ ]]

[Figure: risk-coverage curve, labeled "SVM + distance to hyperplane", annotated R_S = 2.1%.]
In our paper we show:
1) what the optimal selection function c(x) is
2) how to learn c(x) discriminatively
SLIDE 6