INFOGR – Computer Graphics
- J. Bikker - April-July 2016 - Lecture 8: “Engine Fundamentals”
Welcome! Today's agenda: Rendering Overview – Matrices – Transforms
INFOGR – Lecture 8 – “Engine Fundamentals”

Topics covered so far:
- Basics
- Ray tracing
- Shading
Rendering – Functional Overview

The rasterization pipeline processes meshes in four stages:
- Transform (meshes → vertices): translating / rotating / scaling meshes
- Project (vertices → vertices): calculating 2D screen positions
- Rasterize (vertices → fragment positions): determining affected pixels
- Shade (fragment positions → pixels): calculating a color per affected pixel

Before the pipeline: animation, culling, tessellation, ... After it: postprocessing.
Rendering – Data Overview

[Figure: scenegraph of an example scene — a world node with children: a car (four wheels, turret), a plane, a second car (four wheels, turret), a buggy (four wheels), three 'dude' characters, and the camera. Each node carries a transform: T_camera, T_car1, T_plane1, T_car2, T_plane2, T_buggy.]
Objects are organized in a hierarchy: the scenegraph. In this hierarchy, objects have translations and orientations relative to their parent node.

Relative translations and orientations are specified using matrices. Mesh vertices are defined in a coordinate system known as object space.
Transform takes our meshes from object space (3D) to camera space (3D). Project takes the vertex data from camera space (3D) to screen space (2D). Along the way, the pipeline consumes vertices, transforms, the camera transform, connectivity data, textures, shaders and lights, and writes its output to the screen buffers.
The screen is represented by (at least) two buffers: a color buffer, which stores the color of each pixel, and a z-buffer, which stores the depth of each pixel.
Rendering – Components

- Scenegraph; culling
- Vertex transform pipeline: matrices to convert from one space to another; perspective
- Rasterization: interpolation; clipping; depth sorting (z-buffer)
- Shading: light / material interaction; complex materials

(These topics are covered in Lectures 8, 9 and 11, and in practical assignments P2 and P3.)
Bases in ℝ² and ℝ³

Recall: any point b can be reached using a linear combination of two base vectors v and w:

b = λ₁v + λ₂w

If v and w are perpendicular unit vectors, the base is orthonormal. The Cartesian coordinate system is an example of this, with v = (1,0) and w = (0,1). By manipulating v and w, we can create a 'coordinate system' within a coordinate system.
This extends naturally to ℝ³: three vectors v, w and x allow us to reach any point in 3D space:

b = λ₁v + λ₂w + λ₃x

Again, manipulating v, w and x changes where coordinates specified as (λ₁, λ₂, λ₃) end up.
Matrices

A vector is an ordered set of d scalar values (i.e., a d-tuple):

w = (w₁, w₂, w₃)

An m × n matrix is an array of m · n scalar values, arranged in m rows and n columns:

M = | a₁₁ a₁₂ |
    | a₂₁ a₂₂ |

The elements aᵢⱼ are referred to as the coefficients of the matrix (or elements, entries). Note that here i is the row; j is the column.
Terminology – Special Matrices

Before we continue, what is a matrix? Some special cases:

A = | 1.5  |    a single column (a 3 × 1 matrix) is a column vector
    | 0.99 |
    | 3.14 |

A = | 1 0 0 |   the identity matrix I: ones on the diagonal,
    | 0 1 0 |   zeroes elsewhere
    | 0 0 1 |
Matrices - Operations

Matrix addition is defined as: A = B + C, with aᵢⱼ = bᵢⱼ + cᵢⱼ.

Note that addition is only defined for matrices with the same dimensions. Example:

| 1 0 | + | 2 2 | = | 3 2 |
| 0 1 |   | 4 4 |   | 4 5 |

Subtraction works the same way.
Multiplying a matrix by a scalar is defined as: A = λB, with aᵢⱼ = λbᵢⱼ. Example:

2 · | 1 0 | = | 2 0 |
    | 0 1 |   | 0 2 |
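Both element-wise operations can be sketched in a few lines of plain Python (the helper names `mat_add` and `mat_scale` are our own, not from the lecture):

```python
# Element-wise matrix addition and scalar multiplication, following the
# definitions a_ij = b_ij + c_ij and a_ij = λ·b_ij.

def mat_add(B, C):
    assert len(B) == len(C) and len(B[0]) == len(C[0]), "same dimensions required"
    return [[b + c for b, c in zip(rb, rc)] for rb, rc in zip(B, C)]

def mat_scale(lam, B):
    return [[lam * b for b in row] for row in B]

I = [[1, 0], [0, 1]]
print(mat_add(I, [[2, 2], [4, 4]]))  # [[3, 2], [4, 5]]
print(mat_scale(2, I))               # [[2, 0], [0, 2]]
```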
Multiplying a matrix A (dimensions wA × hA) with another matrix B (dimensions wB × hB) gives C = AB, with:

cᵢⱼ = Σ (l = 1 .. wA) aᵢₗ · bₗⱼ

Matrix multiplication is only defined if wA = hB (i.e., the width of A is equal to the height of B). Note the dimensions of the resulting matrix: hA × wB. Example:

| 2 6 1 |   | 1 4 |   | 17 44 |
| 5 2 4 | · | 2 5 | = | 21 54 |
            | 3 6 |

c₁₁ = 2·1 + 6·2 + 1·3 = 17
c₂₁ = 5·1 + 2·2 + 4·3 = 21
c₁₂ = 2·4 + 6·5 + 1·6 = 44
c₂₂ = 5·4 + 2·5 + 4·6 = 54
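The sum formula translates directly into code. A minimal pure-Python sketch (`mat_mul` is our own helper name), applied to the example above:

```python
# Dense matrix multiply: c_ij = Σ_l a_il · b_lj.

def mat_mul(A, B):
    hA, wA = len(A), len(A[0])
    hB, wB = len(B), len(B[0])
    assert wA == hB, "width of A must equal height of B"
    return [[sum(A[i][l] * B[l][j] for l in range(wA)) for j in range(wB)]
            for i in range(hA)]

A = [[2, 6, 1],
     [5, 2, 4]]
B = [[1, 4],
     [2, 5],
     [3, 6]]
print(mat_mul(A, B))  # [[17, 44], [21, 54]]
```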
Doing matrix multiplication manually: note that each cell in the resulting matrix is essentially the dot product of a row and a column.

Some properties. Matrix multiplication is distributive over addition:

A(B + C) = AB + AC
(A + B)C = AC + BC

...and associative:

(AB)C = A(BC)

However, matrix multiplication is not commutative; in general, AB ≠ BA.
Multiplying by the zero matrix yields the zero matrix: 0A = A0 = 0. Multiplying by the identity matrix yields the original matrix: IA = AI = A.
The transpose Aᵀ of an m × n matrix is an n × m matrix obtained by interchanging rows and columns: aᵢⱼ becomes aⱼᵢ for all i, j:

A = | a₁₁ a₁₂ a₁₃ |    Aᵀ = | a₁₁ a₂₁ a₃₁ |
    | a₂₁ a₂₂ a₂₃ |         | a₁₂ a₂₂ a₃₂ |
    | a₃₁ a₃₂ a₃₃ |         | a₁₃ a₂₃ a₃₃ |

The transpose of the product of two matrices is: (AB)ᵀ = BᵀAᵀ.
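The product rule (AB)ᵀ = BᵀAᵀ can be checked numerically with the matrices from the multiplication example (pure-Python sketch; helper names are our own):

```python
# Verifying (AB)ᵀ = BᵀAᵀ for a 2×3 matrix A and a 3×2 matrix B.

def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 6, 1], [5, 2, 4]]
B = [[1, 4], [2, 5], [3, 6]]
lhs = transpose(mat_mul(A, B))           # (AB)ᵀ
rhs = mat_mul(transpose(B), transpose(A))  # BᵀAᵀ
print(lhs == rhs)  # True
```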
The inverse of a matrix A is a matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I. Note: only square matrices can have an inverse.
We can multiply a d-dimensional vector by an m × d matrix:

| a₁₁ ⋯ a₁d |   | w₁ |   | a₁₁w₁ + ⋯ + a₁dwd |
| ⋮   ⋱   ⋮ | · | ⋮  | = | ⋮                  |
| am₁ ⋯ amd |   | wd |   | am₁w₁ + ⋯ + amdwd |

Example: multiply a 3D vector by a 3 × 3 matrix:

| a₁₁ a₁₂ a₁₃ |   | x |   | a₁₁x + a₁₂y + a₁₃z |
| a₂₁ a₂₂ a₂₃ | · | y | = | a₂₁x + a₂₂y + a₂₃z |
| a₃₁ a₃₂ a₃₃ |   | z |   | a₃₁x + a₃₂y + a₃₃z |

Note: this is the same as matrix concatenation; the vector is simply a d × 1 matrix.
If the columns of the matrix are base vectors u, v and w:

| ux vx wx |   | x |   | ux·x + vx·y + wx·z |
| uy vy wy | · | y | = | uy·x + vy·y + wy·z | = x·u + y·v + z·w
| uz vz wz |   | z |   | uz·x + vz·y + wz·z |

i.e., the matrix–vector product is a linear combination of the matrix columns.
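The column-combination view can be checked numerically (pure-Python sketch; `mat_vec` is our own helper name):

```python
# A matrix-vector product equals the linear combination of the matrix
# columns weighted by the vector components.

def mat_vec(A, p):
    return [sum(a * x for a, x in zip(row, p)) for row in A]

A = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]
p = [2, 1, 3]

direct = mat_vec(A, p)
u, v, w = zip(*A)  # the three columns of A
combo = [p[0] * a + p[1] * b + p[2] * c for a, b, c in zip(u, v, w)]
print(direct == combo)  # True
print(direct)           # [27, 33, 39]
```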
Matrices – Determinant

The determinant |A| of an n × n matrix A is the signed area or volume spanned by its column vectors. Example (in ℝ²):

A = | a₁₁ a₁₂ |    det A = |A|
    | a₂₁ a₂₂ |

In this case, the determinant is the oriented area of the parallelogram defined by the two column vectors (a₁₁, a₂₁) and (a₁₂, a₂₂). The determinant is positive if the vectors are counter-clockwise, and negative if they are clockwise. Therefore:

det |a₁ a₂| = −det |a₂ a₁|
In ℝ³, the determinant is the oriented volume of the parallelepiped defined by the three column vectors:

det A = |A| = det | a₁₁ a₁₂ a₁₃ |
                  | a₂₁ a₂₂ a₂₃ |
                  | a₃₁ a₃₂ a₃₃ |
Calculating determinants: Laplace's expansion. The determinant of a matrix is the sum of the products of the entries of one row (or column) with their cofactors.

The cofactor of an entry aᵢⱼ in an n × n matrix A is the determinant of the (n−1) × (n−1) matrix A′ obtained by deleting the i-th row and j-th column, multiplied by (−1)^(i+j).

Example, expanding along the first row:

A = | a₁₁ a₁₂ a₁₃ |
    | a₂₁ a₂₂ a₂₃ |
    | a₃₁ a₃₂ a₃₃ |

a₁₁ᶜ = (−1)² · det | a₂₂ a₂₃ |
                   | a₃₂ a₃₃ |

a₁₂ᶜ = (−1)³ · det | a₂₁ a₂₃ |
                   | a₃₁ a₃₃ |

a₁₃ᶜ = (−1)⁴ · det | a₂₁ a₂₂ |
                   | a₃₁ a₃₂ |

|A| = a₁₁a₁₁ᶜ + a₁₂a₁₂ᶜ + a₁₃a₁₃ᶜ
Full example for a 3 × 3 matrix:

| 0 1 2 |
| 3 4 5 | = 0 · det | 4 5 | − 1 · det | 3 5 | + 2 · det | 3 4 |
| 6 7 8 |           | 7 8 |           | 6 8 |           | 6 7 |

det | 3 5 | = 3·8 − 5·6 = −6        det | 3 4 | = 3·7 − 4·6 = −3
    | 6 8 |                             | 6 7 |

|A| = 0 − 1·(−6) + 2·(−3) = 0.

Generic approach for a 3 × 3 matrix (rule of Sarrus):

| a b c |
| d e f | = aei + bfg + cdh − (ceg + afh + bdi)
| g h i |

For a 2 × 2 matrix:

| a b | = ad − bc
| c d |
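Laplace's expansion along the first row translates into a short recursive routine; this is an illustrative sketch (our own helper name), fine for the small matrices used in graphics:

```python
# Determinant via recursive Laplace expansion along the first row:
# |A| = Σ_j (−1)^j · a_0j · |minor_0j|.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # drop row 0, column j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[0, 1, 2], [3, 4, 5], [6, 7, 8]]))  # 0
print(det([[2, 5], [1, 3]]))                   # 1
```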
Matrices – Adjoint

The adjoint (or adjugate) adj A of matrix A is the transpose of the cofactor matrix C of A. Example:

A = | 2 5 |
    | 1 3 |

C = | 3·(−1)²  1·(−1)³ | = |  3 −1 |
    | 5·(−1)³  2·(−1)⁴ |   | −5  2 |

adj A = Cᵀ = |  3 −5 |
             | −1  2 |

(Recall: the cofactor of an entry aᵢⱼ in an n × n matrix A is the determinant of the (n−1) × (n−1) matrix A′ obtained by deleting the i-th row and j-th column, multiplied by (−1)^(i+j).)
Matrices – Inverse

The adjoint is used to calculate the inverse A⁻¹ of a matrix A:

A⁻¹ = adj A / |A|
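For a 2×2 matrix this formula is small enough to write out by hand; a minimal sketch (our own helper name), using the adjoint example above:

```python
# 2×2 inverse via the adjoint: A⁻¹ = adj A / |A|.

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "a singular matrix has no inverse"
    adj = [[d, -b], [-c, a]]  # transpose of the cofactor matrix
    return [[e / det for e in row] for row in adj]

A = [[2, 5], [1, 3]]       # det = 1, so A⁻¹ equals adj A
print(inverse_2x2(A))      # [[3.0, -5.0], [-1.0, 2.0]]
```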
Matrices – Overview

A = | a₁₁ a₁₂ a₁₃ |    an m × n matrix has m rows and n columns
    | a₂₁ a₂₂ a₂₃ |
    | a₃₁ a₃₂ a₃₃ |

Identity matrix I: ones on the diagonal, zeroes elsewhere; det I = 1.

Determinant (rule of Sarrus for 3 × 3):

det A = |A| = aei + bfg + cdh − (ceg + afh + bdi)

Note: det | −1 0 | = −1, and: det |a₁ a₂| = −det |a₂ a₁|
          |  0 1 |

Cofactor of a₁₁: a₁₁ᶜ = (−1)² · det | a₂₂ a₂₃ |
                                    | a₃₂ a₃₃ |

Adjoint adj A of A is Cᵀ (the transposed cofactor matrix); the inverse A⁻¹ is adj A / |A|.
Spaces - Introduction

As we have seen before, we can multiply a matrix with a vector.

In 2D:

| a₁₁ a₁₂ |   | x |   | a₁₁x + a₁₂y |      | a₁₁ |      | a₁₂ |
| a₂₁ a₂₂ | · | y | = | a₂₁x + a₂₂y | = x·| a₂₁ | + y·| a₂₂ |

In 3D:

| a₁₁ a₁₂ a₁₃ |   | x |   | a₁₁x + a₁₂y + a₁₃z |      | a₁₁ |      | a₁₂ |      | a₁₃ |
| a₂₁ a₂₂ a₂₃ | · | y | = | a₂₁x + a₂₂y + a₂₃z | = x·| a₂₁ | + y·| a₂₂ | + z·| a₂₃ |
| a₃₁ a₃₂ a₃₃ |   | z |   | a₃₁x + a₃₂y + a₃₃z |      | a₃₁ |      | a₃₂ |      | a₃₃ |

Geometric interpretation: scalar multiplication of the first column by x, plus scalar multiplication of the second column by y (plus, in 3D, the third column by z), yields the transformed point.
A matrix allows us to transform a coordinate system, e.g. a rotation combined with a scale.
Spaces – Scaling

To scale by a factor 2 with respect to the origin, we apply the matrix

| 2 0 |
| 0 2 |

Applied to a vector, we get:

| 2 0 |   | x |   | 2x + 0y |   | 2x |
| 0 2 | · | y | = | 0x + 2y | = | 2y |

This is called uniform scaling. E.g., (x, y) = (2, 1) maps to (2x, 2y) = (4, 2).
Spaces – Projection

If we set one of the diagonal elements aᵢᵢ to 0, we get a projection, e.g.:

| 1 0 |
| 0 0 |

This is useful for projecting a shadow onto a plane, or a 3D object on a 2D screen.
Spaces – Reflection

We can construct a matrix that will swap x and y coordinates to get a reflection in the line y = x:

| 0 1 |   | x |   | 0x + 1y |   | y |
| 1 0 | · | y | = | 1x + 0y | = | x |
Spaces – Shearing

Pushing things sideways:

| 1 1 |   | x |   | 1x + 1y |   | x + y |
| 0 1 | · | y | = | 0x + 1y | = |   y   |

This is called shearing.
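The 2×2 matrices from the last few slides behave as claimed when applied to a concrete point (pure-Python sketch; the helper name is our own):

```python
# Applying 2×2 scaling, reflection and shearing matrices to the point (2, 1).

def apply(M, p):
    return [M[0][0] * p[0] + M[0][1] * p[1],
            M[1][0] * p[0] + M[1][1] * p[1]]

p = [2, 1]
print(apply([[2, 0], [0, 2]], p))  # uniform scale    -> [4, 2]
print(apply([[0, 1], [1, 0]], p))  # reflect in y = x -> [1, 2]
print(apply([[1, 1], [0, 1]], p))  # shear            -> [3, 1]
```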
Spaces – Rotation

To rotate counter-clockwise about the origin by an angle φ, we use:

| cos φ  −sin φ |
| sin φ   cos φ |

For clockwise rotation, we use:

|  cos φ  sin φ |
| −sin φ  cos φ |
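A quick sanity check of the counter-clockwise matrix: rotating (1, 0) by 90° should land on (0, 1). A sketch with our own helper name:

```python
# Counter-clockwise rotation of a 2D point by angle phi.
from math import cos, sin, pi, isclose

def rotate(phi, p):
    x, y = p
    return [cos(phi) * x - sin(phi) * y,
            sin(phi) * x + cos(phi) * y]

x, y = rotate(pi / 2, [1, 0])
print(round(x, 9), round(y, 9))  # 0.0 1.0 (up to floating-point noise)
```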
Spaces – Linear transformations

A function T: ℝⁿ → ℝᵐ is called a linear transformation if it satisfies:

1. T(v + w) = T(v) + T(w) for all v, w ∈ ℝⁿ, and
2. T(cw) = c·T(w) for all w ∈ ℝⁿ and all scalars c.

Linear transformations can be represented by matrices. We can summarize both conditions in a single equation:

T(c₁v + c₂w) = c₁T(v) + c₂T(w) for all v, w ∈ ℝⁿ and all scalars c₁, c₂.
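The combined condition can be checked numerically for any matrix transform; here a sketch using the shear transform from earlier (helper names are our own):

```python
# Checking T(c1·v + c2·w) = c1·T(v) + c2·T(w) for the shear T(x, y) = (x+y, y).

def T(p):
    x, y = p
    return [x + y, y]

def lin(c1, v, c2, w):
    return [c1 * a + c2 * b for a, b in zip(v, w)]

v, w = [2, 1], [-1, 3]
c1, c2 = 3, -2
print(T(lin(c1, v, c2, w)) == lin(c1, T(v), c2, T(w)))  # True
```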
Remember Cartesian coordinates, where each vector p = (x, y) can be expressed as a linear combination of base vectors:

p = | x | = x·| 1 | + y·| 0 |
    | y |     | 0 |     | 1 |

If we apply a linear transform T to this vector, we get:

T(p) = T( x·(1,0) + y·(0,1) ) = x·T(1,0) + y·T(0,1)
Matrices are thus constructed conveniently from two base vectors: the columns of the matrix are the transformed base vectors T(v) and T(w).
Spaces – Transforming normals

Unfortunately, normals are not always transformed correctly by the matrix that transforms the mesh. To transform a normal vector n correctly under a given linear transformation A, we have to apply the matrix (A⁻¹)ᵀ.

Why? Note: if the transform is orthonormal, A⁻¹ = Aᵀ; therefore (A⁻¹)ᵀ = A.
We know that tangent vectors are transformed correctly: A t = t_A (the transformed tangent). But: A n ≠ n_A.

Goal: find a matrix N that transforms n correctly, i.e. N n = n_N, where n_N is the correct normal of the transformed surface.

Because the original normal vector n is perpendicular to the original tangent vector t, we know that nᵀ t = 0. This is the same as nᵀ I t = 0. Since I = A⁻¹A, this is the same as nᵀ (A⁻¹A) t = 0. Because A t = t_A is the correctly transformed tangent vector, we have (nᵀ A⁻¹) t_A = 0. Because their scalar product is 0, nᵀ A⁻¹ must be orthogonal to t_A. So, the vector we are looking for must be: n_Nᵀ = nᵀ A⁻¹.

Because of how matrix multiplication is defined, this gives a transposed (row) vector. We can rewrite it as n_N = (nᵀ A⁻¹)ᵀ. And finally, remember that (AB)ᵀ = BᵀAᵀ, which gets us n_N = (A⁻¹)ᵀ n.
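The derivation can be verified numerically. The sketch below uses a non-uniform scale A = diag(2, 1) in 2D, for which (A⁻¹)ᵀ = diag(0.5, 1) is easy to compute by hand (helper names are our own):

```python
# The inverse-transpose keeps the normal perpendicular to the transformed
# tangent; transforming the normal by A itself does not.

def mat_vec(A, p):
    return [sum(a * x for a, x in zip(row, p)) for row in A]

A      = [[2.0, 0.0], [0.0, 1.0]]  # non-uniform scale
A_invT = [[0.5, 0.0], [0.0, 1.0]]  # (A⁻¹)ᵀ, computed by hand

t = [1.0, 1.0]    # tangent of the line y = x
n = [1.0, -1.0]   # its normal: n·t = 0

tA   = mat_vec(A, t)        # transformed tangent: [2, 1]
bad  = mat_vec(A, n)        # naively transformed normal
good = mat_vec(A_invT, n)   # correctly transformed normal

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(dot(bad, tA), dot(good, tA))  # 3.0 (wrong) vs 0.0 (correct)
```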
Spaces – Needful things

Three things left undiscussed:

- Reverting a transform: invert the matrix. Note: this doesn't always work; e.g., the matrix for orthographic projection has no inverse.
- Combining transforms: use matrix multiplication. Note: matrix multiplication is not commutative, so mind the order!