Efficient Product Sampling using Hierarchical Thresholding

Fabrice Rousselle (EPFL / University of Montreal), Petrik Clarberg (Lund University), Luc Leblanc, Victor Ostromoukhov, Pierre Poulin (University of Montreal)



slide-57
SLIDE 57

1

Efficient Product Sampling using Hierarchical Thresholding

Luc Leblanc, Victor Ostromoukhov, Pierre Poulin

University of Montreal

Fabrice Rousselle

EPFL / University of Montreal

Petrik Clarberg

Lund University

slide-58
SLIDE 58

2

Objective

The objective of our work is to produce photo-realistic renderings such as this one, using a wide range of surface reflectances, from the very diffuse (such as the ground) to the highly specular (such as the mirror ball in the front row), while lighting the scene with HDR environment maps. This map has a dynamic range of ~1:10^6.

slide-59
SLIDE 59

3

Plan

1. Introduction 2. Description of Hierarchical Thresholding

  • Basic algorithm (Ostromoukhov et al, Siggraph 2004)
  • Our extensions

3. Application to direct illumination 4. Results 5. Conclusion and future work

slide-60
SLIDE 60

4

Introduction

  • Monte Carlo ray tracing

– Widely used in photo-realistic rendering
– Many samples are needed for noise-free results

  • Importance sampling

– Offers significant noise reduction by concentrating sampling in important regions
– Ideally, the importance function should be proportional to the sampled function

$$\langle F \rangle = \frac{1}{n} \sum_{i=1}^{n} f(x_i), \quad \text{with } n \text{ the number of samples}$$

$$\langle F \rangle = \frac{1}{n} \sum_{i=1}^{n} \frac{f(x_i)}{p(x_i)}, \quad \text{with } \int p(x)\,dx = 1$$

Standard Monte Carlo methods use random sampling, while importance sampling uses samples drawn from an importance function. Our method addresses the question of defining the importance function and drawing samples from it.
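The two estimators in the note can be sketched numerically. Below is a minimal Python illustration with a hypothetical integrand f(x) = 3x^2 on [0, 1] (exact integral 1) and the ideal importance density p proportional to f; all names are ours, not the paper's:

```python
import random

def mc_uniform(f, n):
    """Standard Monte Carlo: <F> = (1/n) * sum f(x_i), x_i uniform on [0, 1]."""
    return sum(f(random.random()) for _ in range(n)) / n

def mc_importance(f, p, sample_p, n):
    """Importance sampling: <F> = (1/n) * sum f(x_i) / p(x_i),
    with the x_i drawn from the normalized density p."""
    return sum(f(x) / p(x) for x in (sample_p() for _ in range(n))) / n

# Hypothetical integrand: f(x) = 3x^2 on [0, 1], whose integral is exactly 1.
f = lambda x: 3.0 * x * x
# Ideal importance function: p proportional to f (here p == f, already normalized).
p = lambda x: 3.0 * x * x
# Draw from p by inverting its CDF x^3; 1 - random() keeps x strictly positive.
sample_p = lambda: (1.0 - random.random()) ** (1.0 / 3.0)

if __name__ == "__main__":
    print(mc_uniform(f, 10_000))              # noisy estimate near 1
    print(mc_importance(f, p, sample_p, 10))  # exactly 1: zero variance when p is proportional to f
```

When p matches f exactly, every term f(x_i)/p(x_i) equals 1, which is the "ideal" case the bullet describes; in practice p only approximates f and the variance reduction is partial.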

slide-61
SLIDE 61

5

Introduction

  • Proposition: Hierarchical Thresholding (HT)

– A simple sampling scheme
– Applicable to products of multiple functions
– Can be easily integrated in a Monte Carlo ray tracer

slide-62
SLIDE 62

6

HT: Basic Algorithm

  • HT is a rejection sampling scheme

– Generate a uniform distribution of candidate points
– Reject all points outside the volume under the function
– The accepted sample density is proportional to the function value
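The three steps above can be sketched in a few lines, assuming a simple 1D function on [0, 1] (a plain, non-hierarchical rejection sampler for illustration; names are ours):

```python
import random

def rejection_sample(f, f_max, n_candidates):
    """Uniform candidates (x, y) in [0, 1] x [0, f_max]; keep x when y < f(x).

    The density of accepted samples is proportional to f."""
    accepted = []
    for _ in range(n_candidates):
        x = random.random()
        y = random.uniform(0.0, f_max)
        if y < f(x):               # reject points above the graph of f
            accepted.append(x)
    return accepted

if __name__ == "__main__":
    ramp = lambda x: x             # linear ramp on [0, 1]
    samples = rejection_sample(ramp, 1.0, 20_000)
    print(len(samples) / 20_000)   # acceptance rate near integral(f)/f_max = 0.5
```

For the ramp, about three quarters of the accepted samples land in [0.5, 1], matching the density being proportional to f.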


slide-67
SLIDE 67

11

HT: Basic Algorithm

  • Samples are generated with low-discrepancy sequences

– Samples are iteratively added, while remaining uniformly distributed
– In practice, we use the Van der Corput sequence

VDC sequence in base 2

Here, only the first 8 points of the sequence are shown. We see that, as samples are added, the distribution remains relatively uniform.
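The Van der Corput sequence itself is easy to sketch; this radical-inverse implementation is a standard one, not taken from the paper:

```python
def van_der_corput(i, base=2):
    """Radical inverse: mirror the base-b digits of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

if __name__ == "__main__":
    # First 8 points in base 2: each new point falls into a largest remaining
    # gap, so the distribution stays roughly uniform as samples are added.
    print([van_der_corput(i) for i in range(8)])
    # -> [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```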


slide-75
SLIDE 75

19

HT: Basic Algorithm

  • The number of candidate samples generated is N = b^L

– with b the base of the VDC sequence (2 in this example)
– with L the maximum level of subdivision (4 in this example)

  • A sample's value is i/N, with i its index in the sequence

Van der Corput sequence in base 2

The VDC sequence generates points in 1D; we need to project them into 2D. As the projection uses the sample index as its value, we end up progressively filling the sampling domain from bottom to top. This property is key and will be exploited to improve the efficiency of the algorithm.
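The 1D-to-2D projection described in the note can be sketched as follows (our reading of the note, with names of our choosing): candidate i is placed at horizontal position vdc(i) with vertical value i/N, so the unit square fills from bottom to top:

```python
def van_der_corput(i, base=2):
    # Radical inverse of i in the given base.
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def candidates(base=2, max_level=4):
    """N = base**max_level candidates; candidate i lies at x = vdc(i) with
    threshold value i/N, so the unit square fills from bottom to top."""
    n = base ** max_level
    return [(van_der_corput(i, base), i / n) for i in range(n)]

if __name__ == "__main__":
    pts = candidates(base=2, max_level=4)  # 16 candidates, as in the example
    print(len(pts))        # 16
    print(pts[:3])         # [(0.0, 0.0), (0.5, 0.0625), (0.25, 0.125)]
```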


slide-80
SLIDE 80

24

HT: Basic Algorithm

  • Exploiting the hierarchy

– Prune a branch when it reaches the local maximum
– Fewer candidate samples imply a lower rejection rate

Van der Corput sequence in base 2

We add a constraint to the algorithm: we subdivide nodes until we reach the maximum level of subdivision (4 in this example) or the local maximum value.

Here we see that the last node (on the right) has reached the local maximum. Its sample will necessarily be rejected and, since we progressively fill the domain, all subsequent samples in this branch would be placed higher and therefore also rejected. We therefore prune that branch (i.e., we will not subdivide it any further).

In this example, we ended up with 11 instead of 16 candidate samples.

slide-86
SLIDE 86

30

HT: Avoiding Bias

  • Low discrepancy sequences are deterministic

[Figure: deterministic sequence, 16 vs. 64 samples, exaggerated difference (biased)]

The sample distribution is deterministic. This induces a local coherence in the image, which creates a bias. Shown here: the visual difference between images rendered using 16 and 64 samples. The bias in sample placement yields a color shift (blue in the top right, yellow in the middle).

slide-87
SLIDE 87

31

HT: Avoiding Bias

  • Low discrepancy sequences are deterministic

– Randomize key steps of the process (see paper for details)

[Figure: deterministic (biased) vs. randomized (unbiased), 16 vs. 64 samples, exaggerated difference]

By randomizing some key steps of the algorithm, the distribution is effectively randomized, yielding an unbiased estimate. The visual difference is now white noise, illustrating the unbiased estimation. The randomization affects neither the uniformity of the sample distribution nor its progressive filling of the domain.

slide-88
SLIDE 88

32

HT: Use of Max/Avg Tree

  • Local maximum value

– Needed for pruning branches

  • Local average value

– Predict the number of selected samples
– Speed up the rejection test (avoiding BRDF evaluation)
– Enable further performance optimizations
– Yields a piecewise-constant importance function h(x)

  • The local maximum and average values are pre-computed for all nodes of the hierarchy
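The pre-computation can be sketched as a bottom-up pass, assuming f is tabulated at the finest level (a generic max/avg tree with our own naming, not the paper's data structure):

```python
def build_max_avg_tree(values, branching=2):
    """Bottom-up max/avg tree: levels[0] holds one (max, avg) pair per finest
    cell (just the tabulated values themselves); levels[-1] is the root."""
    levels = [[(v, v) for v in values]]
    while len(levels[-1]) > 1:
        prev, nxt = levels[-1], []
        for j in range(0, len(prev), branching):
            group = prev[j:j + branching]
            # A node's max bounds its children; its avg is the children's mean.
            nxt.append((max(m for m, _ in group),
                        sum(a for _, a in group) / len(group)))
        levels.append(nxt)
    return levels

if __name__ == "__main__":
    levels = build_max_avg_tree([0.2, 0.8, 0.4, 0.6])
    print(levels[-1][0])   # root holds the global (max, avg): (0.8, 0.5)
```

The per-node max drives branch pruning, while the per-node avg gives the piecewise-constant importance function used to predict sample counts and short-circuit rejection tests.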

slide-89
SLIDE 89

33

HT: Product of Functions

  • Direct illumination: sampling a product of functions

– Environment only (works for diffuse surfaces)
– BRDF only (works for specular surfaces)
– Environment x BRDF (works well, but occlusions remain problematic)

[Figure: Environment, BRDF, Env x BRDF; rendered using 16 samples]

The point of this slide is to illustrate what can be gained by sampling the product of functions.
slide-90
SLIDE 90

34

HT: Product of Functions

  • Direct illumination: sampling a product of functions

– Environment only (works for diffuse surfaces)
– BRDF only (works for specular surfaces)
– Environment x BRDF (works well, but occlusions remain problematic)
– Environment x BRDF x visibility

[Figure: Environment, BRDF, Env x BRDF, Env x BRDF x Vis; rendered using 16 samples]

slide-91
SLIDE 91

35

HT: Product of Functions

  • Our algorithm extends to a product of functions:

$$f(x) = \prod_{i=1}^{n} f_i(x) \qquad (1)$$

  • The local maximum is conservatively approximated:

$$\max f(x) \le \prod_{i=1}^{n} \big(\max f_i(x)\big), \quad x \in [a, b] \qquad (2)$$

  • The local average is similarly approximated:

$$\operatorname{avg} f(x) \approx \prod_{i=1}^{n} \big(\operatorname{avg} f_i(x)\big), \quad x \in [a, b] \qquad (3)$$

– The error of this approximation only affects the variance of the estimate; the estimator still converges to the correct result
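Equations (2) and (3) amount to multiplying per-function statistics over the same interval; a tiny sketch with two hypothetical tabulated factors shows the max bound staying conservative (the data and names are ours, for illustration only):

```python
from math import prod

def product_max_bound(maxes):
    """Equation (2): max of a product <= product of the per-function maxima."""
    return prod(maxes)

def product_avg_estimate(avgs):
    """Equation (3): avg of a product ~= product of the per-function averages
    (an approximation; exact only when the factors are uncorrelated)."""
    return prod(avgs)

if __name__ == "__main__":
    # Two hypothetical factors tabulated over the same interval:
    f1 = [0.2, 0.8, 0.4, 0.6]
    f2 = [0.5, 0.5, 1.0, 1.0]
    product = [a * b for a, b in zip(f1, f2)]
    # The bound over-estimates the true max (0.8 >= 0.6), which is safe for
    # pruning: a branch is only pruned when it is certainly exhausted.
    print(max(product), product_max_bound([max(f1), max(f2)]))
```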

slide-92
SLIDE 92

36

HT: Sampling in 3D

  • Samples are positioned on a 2D plane

Sample indexing according to the VDC sequence in base 4

slide-93
SLIDE 93

37

Application: Direct Illumination

  • The mapping used: HEALPix

– hierarchical representation (needed for our algorithm)
– area preservation
– low distortion (allows dynamic rotations)

  • Each face is sampled individually
slide-96
SLIDE 96

40

HEALPix: Rotations

  • Environment Map in world space
  • BRDF in local space

– Fetching the BRDF value implies a rotation
– The max area is enlarged to remain conservative
– The avg is interpolated from the four neighbors

slide-97
SLIDE 97

41

Application: Visibility Computation

  • Computed from a simplified model using inner spheres

The inner sphere model gives a conservative estimate of the real model shadow.
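The paper's visibility model is not detailed here; as an illustration of the inner-sphere idea, a sample direction can be tested against the cone subtended by each sphere (a hypothetical helper with our own naming), so any direction flagged as occluded is guaranteed to be blocked by the real model:

```python
import math

def sphere_occludes(p, center, radius, d):
    """True when the unit direction d from point p enters the sphere's cone.

    An inner sphere gives a conservative occlusion estimate: it only flags
    directions that the enclosing real geometry certainly blocks."""
    v = [c - q for c, q in zip(center, p)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist <= radius:
        return True                           # p lies inside the sphere
    # Half-angle of the cone subtended by the sphere, via its cosine.
    cos_cone = math.sqrt(1.0 - (radius / dist) ** 2)
    cos_angle = sum(x * y for x, y in zip(v, d)) / dist   # d assumed unit length
    return cos_angle >= cos_cone

if __name__ == "__main__":
    # Unit sphere two units above the origin: straight up is occluded,
    # sideways is not.
    print(sphere_occludes((0, 0, 0), (0, 0, 2), 1.0, (0, 0, 1)))   # True
    print(sphere_occludes((0, 0, 0), (0, 0, 2), 1.0, (1, 0, 0)))   # False
```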

slide-99
SLIDE 99

43

Application: Visibility Computation

no visibility

occluded rays

16 samples

  • Computed from a simplified model using inner spheres

The noise level in the shadow is increased: samples are drawn toward the sun, which is occluded by the Happy Buddha. This shows in the color-coded image at the bottom (the darker the image, the higher the percentage of occluded rays).

slide-100
SLIDE 100

44

Application: Visibility Computation

no visibility / with visibility

occluded rays

16 samples

  • Computed from a simplified model using inner spheres

Using the occlusion information of the spheres, we can significantly reduce the percentage of occluded rays. As we use a conservative estimate, we cannot improve the behavior at the edges of the shadow or between the feet of the Buddha.

slide-101
SLIDE 101

45

Results

  • Comparison with other product sampling methods

[Figure: 64 samples; WIS (biased), TSIS (unbiased), HT without visibility, HT with visibility]

Just show the four useful ones. If the previous slide is removed, the biased results should also be removed from this comparison.

slide-102
SLIDE 102

46

Wavelet Importance Sampling

Clarberg et al. Siggraph 2005 - 16 samples

WIS relies on relatively low-resolution environment maps, resulting in a less accurate importance function and therefore a higher level of noise.

slide-103
SLIDE 103

47

HT: no visibility

16 samples

slide-104
SLIDE 104

48

Two Stage Importance Sampling

Cline et al. EGSR 2006 - 16 samples

While TSIS uses high-resolution environment maps (which allow for high-quality rendering of the plane), its refining approach does not work well for highly specular surfaces when using a small number of samples.

slide-105
SLIDE 105

49

HT: no visibility

16 samples

slide-106
SLIDE 106

50

HT: with visibility

16 samples

This is the ideal case for our method: the visibility can be computed accurately, as the scene contains only an analytical sphere on an infinite plane. As such, this represents the extent to which visibility could improve the estimation.

slide-107
SLIDE 107

51

Conclusion

  • Positive

– Fast sample generation
– Simple (based on rejection sampling)
– Flexible (functions must only be bounded)
– Extendable to n functions

  • Negative

– Relies on precomputed BRDFs

  • Future work

– Compute the BRDF max on the fly (removes the precomputation and the dynamic rotations)
– Smarter visibility computation

slide-108
SLIDE 108

52

Rendering Times

  • Core 2 Duo 2.4GHz (1 core)

All methods have comparable speeds. The "HT biased" method is detailed in the paper and illustrated at the end of the presentation.

slide-109
SLIDE 109

53

Questions

slide-110
SLIDE 110

54

Results

  • Comparison with other state-of-the-art methods

Shown here for WIS is the biased rendering only, which does not converge to the right value. The unbiased version would have higher variance than all the methods shown here. This graph also illustrates the seamless blend from biased to unbiased values for our biased implementation, ensuring that we converge to the same value.
slide-111
SLIDE 111

55

HT: Improving Performance

  • Relies on more aggressive branch pruning: faster rendering
  • Uses the local average instead of computing the exact sample contribution: introduces a bias
  • Seamlessly blends to the exact sample contribution if the local average resolution is too coarse

unbiased

slide-112
SLIDE 112

56

HT: Improving Performance

  • Relies on more aggressive branch pruning: faster rendering
  • Uses the local average instead of computing the exact sample contribution: introduces a bias
  • Seamlessly blends to the exact sample contribution if the local average resolution is too coarse

biased