Virtual Reality Modeling (PowerPoint PPT Presentation)
Electrical and Computer Engineering Dept.
Virtual Reality Modeling (images from http://www.okino.com/)

Modeling & VR Toolkits
System architecture
The VR object modeling cycle:
I/O mapping (drivers); Geometric modeling; Kinematics modeling; Physical modeling; Object behavior (intelligent agents); Model management.
The VR modeling cycle
The VR geometric modeling:
Object surface shape: polygonal meshes (vast majority); splines (for curved surfaces). Object appearance: lighting (shading); texture mapping.
The surface polygonal (triangle) mesh
[Figure: triangle mesh with shared and non-shared vertices, labeled (X0, Y0, Z0) through (X5, Y5, Z5)]
Triangle meshes are preferred since they are memory- and computationally efficient (shared vertices).
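A minimal sketch of why shared vertices save memory: an indexed triangle mesh stores each vertex once and lets triangles reference it by index. The vertex values below are illustrative, not from the slides.

```python
# Indexed triangle mesh: two triangles sharing an edge need 4 stored
# vertices instead of 6 (the shared vertices appear once in the list).

vertices = [
    (0.0, 0.0, 0.0),  # v0
    (1.0, 0.0, 0.0),  # v1
    (1.0, 1.0, 0.0),  # v2
    (0.0, 1.0, 0.0),  # v3
]
# Each triangle is a triple of indices into the vertex list.
triangles = [(0, 1, 2), (0, 2, 3)]  # the edge v0-v2 is shared

# Storage comparison, counting floats needed:
indexed   = 3 * len(vertices)   # 12 floats (plus 6 small integer indices)
unindexed = 9 * len(triangles)  # 18 floats if each triangle stored its own vertices
print(indexed, unindexed)  # 12 18
```

This is exactly the layout used by graphics APIs for indexed drawing, which is why shared-vertex meshes are both memory- and computation-friendly (each shared vertex is transformed once).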
Object spline-based shape:
Another way of representing virtual objects;
Functions are of higher degree than the linear functions describing a polygon – they use less storage and provide increased surface smoothness;
Parametric splines are represented by points x(t), y(t), z(t), with t ∈ [0, 1], where each coordinate is a polynomial in t (e.g., x(t) = a·t² + b·t + c) and a, b, c are constant coefficients.
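A short sketch of evaluating such a parametric spline. The quadratic per-coordinate polynomial and the coefficient values are illustrative assumptions (the slide names constant coefficients a, b, c but the original equation image is lost).

```python
# Evaluate a parametric spline point (x(t), y(t), z(t)) for t in [0, 1],
# assuming each coordinate is a quadratic a*t^2 + b*t + c (illustrative form).

def spline_point(coeffs, t):
    """coeffs is ((ax, bx, cx), (ay, by, cy), (az, bz, cz))."""
    return tuple(a * t * t + b * t + c for (a, b, c) in coeffs)

coeffs = ((1.0, 0.0, 0.0),   # x(t) = t^2
          (0.0, 1.0, 0.0),   # y(t) = t
          (0.0, 0.0, 2.0))   # z(t) = 2 (constant)
print(spline_point(coeffs, 0.5))  # (0.25, 0.5, 2.0)
```

Sampling many t values yields a smooth curve from a handful of coefficients, which is the storage advantage over a dense polyline.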
Object spline-based shape:
Parametric surfaces are extensions of parametric splines, with point coordinates given by x(s,t), y(s,t), z(s,t), with s ∈ [0, 1] and t ∈ [0, 1]. β-splines are controlled indirectly through four control points (more in the physical modeling section).
Object polygonal shape:
Can be programmed from scratch using OpenGL or other toolkit editors; this is tedious and requires skill;
Can be obtained from CAD files;
Can be created using a 3-D digitizer (stylus), or a 3-D
scanner (tracker, cameras and laser);
Can be purchased from existing online databases
(Viewpoint database). Files have vertex location and connectivity information, but are static.
CAD-file based models: Done using AutoCAD; each moving part is a separate file; files need to be converted to formats compatible with VR toolkits; advantage – use of preexisting models in manufacturing applications.
Geometric Modeling
Venus de Milo created using the HyperSpace 3D digitizer: 4,200 textured polygons, using the NuGraph toolkit.
Polhemus 3-D scanners:
Eliminate direct contact with the object;
Use two cameras, a laser, and one magnetic tracker (two if movable objects are scanned);
Scanning resolution is 0.5 mm at 200 mm range; scanning speed is 50 lines/sec; scanner-object range is 75-680 mm.
Geometric Modeling
Polhemus FastScan 3D scanner (can scan objects up to 3 m long).
DeltaSphere 3000 3D scanner (www.3rdtech.com): Large models need large-volume scanners. The 3rdTech scanner uses a time-of-flight modulated laser beam to determine position. Features: scanning range up to 40 ft; resolution 0.01 in; accuracy 0.3 in; scan density of up to 7,200 samples/360º; complete scene scanning in 10-30 minutes (the scene has to be static); optional digital color camera (2008x1504 resolution) to add color to models – requires a second scan and reduces elevation to 77º.
[Figure: DeltaSphere 3000 3D scanner (www.3rdtech.com) – 360º horizontal, 150º elevation, electrical motor and CPU; Polhemus scanner]
Feature      | Polhemus scanner | DeltaSphere scanner
Range        | 0.56 m           | 14.6 m
Resolution   | 0.5 mm @ 0.2 m   | 0.25 mm
Control      | manual           | automatic
Speed        | 50 lines/sec     | 25,000 samples/sec
[Figure: DeltaSphere 3000 image and software-compensated image (www.3rdtech.com)]
Conversion of scanner data:
Scanners produce a dense "cloud" of vertices (x, y, z). Using packages such as Wrap (www.geomagic.com), the point data is transformed into surface data (including editing and decimation).
[Figure: point cloud from scanner; polygonal mesh after decimation]
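A hedged sketch of the decimation idea mentioned above, using simple vertex clustering: snap points to a grid and keep one averaged point per cell. Packages like Wrap use far more sophisticated algorithms; this only illustrates how a dense scanner cloud gets reduced.

```python
# Toy point-cloud decimation by vertex clustering (illustrative, not Wrap's
# algorithm): bucket points by grid cell, then replace each bucket with its
# centroid.

from collections import defaultdict

def decimate(points, cell=1.0):
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)   # grid cell containing p
        buckets[key].append(p)
    # One centroid per occupied cell.
    return [tuple(sum(c) / len(ps) for c in zip(*ps))
            for ps in buckets.values()]

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.0), (5.0, 5.0, 0.0)]
small = decimate(cloud, cell=1.0)
print(len(small))  # 2  (the two nearby points collapse into one)
```

The cell size trades fidelity for polygon count, the same trade-off the slide's "mesh after decimation" figure shows.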
[Figure: polygonal surface; NURBS (non-uniform rational β-spline) surface patches]
Higher-resolution model: > 20,000 polygons. Low-resolution model: 600 polygons.
Geometric Modeling – using online databases
Object Visual Appearance
Scene illumination (local or global);
Texture mapping; multi-textures;
Use of textures to do illumination in the rasterizing stage of the pipeline.
Scene illumination
Local methods (flat shading, Gouraud shading, Phong shading) treat objects in isolation. They are computationally faster than global illumination methods;
Global illumination treats the influence of one object on another object's appearance. It is more demanding computationally but produces more realistic scenes.
[Figure: flat, Gouraud, and Phong shading models]
Gouraud scanline interpolation: I_p = I_b - (I_b - I_a) · (x_b - x_p) / (x_b - x_a)
Local illumination methods
[Figure: flat-shaded vs. Phong-shaded Utah Teapot]
Global scene illumination
Accounts for the inter-reflections and shadows cast by objects on each other.
Radiosity illumination
Results in a more realistic looking scene. [Figure: scene without radiosity vs. with radiosity]
… but until recently radiosity worked only for fly-throughs (fixed geometry). A second process was added so that scene geometry can be altered.
Texture mapping
It is done in the rasterizer phase of the graphics pipeline, by assigning texture-space coordinates to polygon vertices (or splines), then mapping these to pixel coordinates;
Textures increase scene realism;
Textures provide better 3-D spatial cues (they are perspective-transformed);
They reduce the number of polygons in the scene, giving increased frame rates (example: tree models).
[Figure: textured room image for increased realism, from http://www.okino.com/]
How to create textures:
Models are available online in texture "libraries" of cars, people, construction materials, etc.;
Custom textures from scanned photographs, or using an interactive paint program to create bitmaps.
VR Geometric Modeling
[Figure: tree as a higher-resolution model, 45,992 polygons, vs. tree represented as a texture – 1 polygon, 1246x1280 pixels (www.imagecels.com)]
Multi-texturing:
Several texels can be overlaid on one pixel;
A texture blending cascade is made up of a series of texture stages (from "Real-Time Rendering").
[Diagram: interpolated vertex values → Stage 0 (texture value 0) → Stage 1 (texture value 1) → Stage 2 (texture value 2) → polygon/image buffer]
Multi-texturing allows more complex textures:
[Figure: bump maps, transparency texture, normal texture, background texture, reflectivity texture]
Multi-texturing for bump mapping:
Lighting effects caused by irregularities on the object surface are simulated through "bump mapping";
This encodes surface irregularities as textures;
No change in model geometry, and no added computations at the geometry stage;
Done as part of the per-pixel shading operations of the NSR.
Multi-texturing for lighting:
Several texels can be overlaid on one pixel;
One application is more realistic lighting;
Polygonal lighting is real-time but requires many polygons (triangles) for a realistic appearance.
[Figure: vertex lighting of a low-polygon-count surface – lights are diffuse, tessellated; vertex lighting of a high-polygon-count surface – lights have realistic appearance, but high computation load]
(from NVIDIA technical brief)
Multi-texturing (texture blending):
Realistic-looking lighting can be done with 2-D textures called "light maps";
Not applicable to real-time use (they need to be recomputed when the object moves).
[Figure: standard lighting-map 2-D texture; light-map texture overlaid on top of wall texture – realistic and low polygon count, but not real-time]
(from NVIDIA technical brief)
KINEMATICS MODELING:
Homogeneous transformation matrices; Object position; Transformation invariants; Object hierarchies; Viewing the 3-D world.
Object Hierarchies:
Allow models to be partitioned into a hierarchy and become dynamic;
Segments are either parents (higher-level objects) or children (lower-level objects);
The motion of a parent is replicated by its children, but not the other way around;
Examples: the virtual human and the virtual hand;
At the top of the hierarchy is the "world global transformation" that determines the view of the scene.
VR Kinematics Modeling
[Figure: model hierarchy – a) static model (Viewpoint Datalabs); b) segmented model]
[Figure: virtual hand hierarchy with transforms T_global←fingertip(t), T_W←palm(t), T_3←fingertip, T_2←3(t), T_1←2(t), T_palm←1(t); world, camera, receiver, and source systems of coordinates]
T_global←fingertip(t) = T_global←W(t) · T_W←source · T_source←palm(t) · T_palm←1(t) · T_1←2(t) · T_2←3(t) · T_3←fingertip
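The fingertip transform chain above can be sketched by composing homogeneous 4x4 matrices. For brevity this illustration uses pure translations with made-up offsets; real links would also carry rotations.

```python
# Compose a chain of homogeneous 4x4 transforms, world -> fingertip.
# Only translations are used here, with hypothetical offsets.

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T_global<-fingertip = T_global<-W . T_W<-source . T_source<-palm
#                       . T_palm<-1 . T_1<-2 . T_2<-3 . T_3<-fingertip
chain = [translation(1, 0, 0),    # T_global<-W      (hypothetical values)
         translation(0, 2, 0),    # T_W<-source
         translation(0, 0, 3),    # T_source<-palm
         translation(0.1, 0, 0),  # T_palm<-1
         translation(0.1, 0, 0),  # T_1<-2
         translation(0.1, 0, 0),  # T_2<-3
         translation(0.1, 0, 0)]  # T_3<-fingertip

T = chain[0]
for M in chain[1:]:
    T = matmul(T, M)
print(T[0][3], T[1][3], T[2][3])  # approximately (1.4, 2, 3)
```

Moving a parent link changes one matrix and every child transform downstream inherits it, which is exactly the parent-to-child motion propagation described in the hierarchy slides.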
Physical modeling
Physical characteristics of the object and the way they change – inertia, surface roughness and texture, compliance (hard/soft), deformation mode (elastic/plastic);
Handled by the haptics rendering pipeline
(should be synchronized with the graphics pipeline)
The Haptics Rendering Pipeline (revisited)
[Diagram: haptic pipeline – Traversal → Force (Collision Detection, Force Calculation, Force Smoothing, Force Mapping, Haptic Texturing) → Tactile Display; graphics pipeline – Application (Scene Traversal) → Geometry (View Transform, Lighting, Projection) → Rasterizer (Texturing) → Display]
adapted from (Popescu, 2001)
The Haptics Rendering Pipeline
[Diagram: Traversal → Force (Collision Detection, Force Calculation, Force Smoothing, Force Mapping, Haptic Texturing) → Tactile Display]
Collision detection:
Equivalent to haptic clipping;
Uses bounding-box collision detection for fast response;
Two types of bounding boxes: fixed size, or variable size (depending on the enclosed object's orientation);
Fixed size is computationally faster, but less precise.
[Figure: variable-size vs. fixed-size bounding box]
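The fixed-size bounding-box test above reduces to a cheap interval check: a minimal sketch of axis-aligned bounding box (AABB) overlap, with illustrative box coordinates.

```python
# AABB collision test: two axis-aligned boxes overlap iff their intervals
# overlap on every axis. Each box is given by its min and max corners.

def aabb_overlap(min_a, max_a, min_b, max_b):
    return all(max_a[i] >= min_b[i] and max_b[i] >= min_a[i]
               for i in range(3))

# Two unit cubes, the second shifted by 0.5 on x: they intersect.
print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0, 0), (1.5, 1, 1)))  # True
# Shifted by 2 on x: no intersection.
print(aabb_overlap((0, 0, 0), (1, 1, 1), (2, 0, 0), (3, 1, 1)))      # False
```

Six comparisons per pair is why bounding boxes give the fast response the slide mentions; a positive result would then trigger an exact (slower) test on the enclosed geometry.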
Collision response
Object deformation (if objects are non-rigid);
Parametric surfaces vs. polygonal meshes.
Surface cutting:
An extreme case of surface "deformation" is surface cutting. This happens when the contact force exceeds a given threshold;
When cutting, one vertex gets a co-located twin. Subsequently the twin vertices separate based on spring/damper laws and the cut enlarges.
[Figure: cutting instrument; mesh before and after the cut, with vertices V1 and V2]
Collision response – surface deformation
The Haptics Rendering Pipeline
[Diagram: Traversal → Force (Collision Detection, Force Calculation, Force Smoothing, Force Mapping, Haptic Texturing) → Tactile Display]
[Figure: haptic interface point (HIP) I penetrating an object polygon; penetration distance between the haptic interface and the surface]
Force output for homogeneous elastic objects:

F = K · d,      for 0 ≤ d ≤ d_max
F = F_max,      for d_max < d

where F_max is the haptic interface's maximum output force.

[Graph: force F vs. penetration distance d; a hard object saturates at d_max1, a soft object at d_max2]
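The elastic force law above is a one-line piecewise function; here is a minimal sketch with illustrative stiffness and saturation values (units are hypothetical).

```python
# Elastic contact force with device saturation: F = K*d up to d_max,
# then clamped at F_max (the interface cannot output more force).

def elastic_force(d, K, d_max, F_max):
    if 0 <= d <= d_max:
        return K * d
    return F_max

K, d_max = 1.5, 5.0    # stiffness (N/mm) and saturation depth (mm), illustrative
F_max = K * d_max      # 7.5 N at saturation
print(elastic_force(2.0, K, d_max, F_max))  # 3.0
print(elastic_force(9.0, K, d_max, F_max))  # 7.5
```

A large K makes the object feel hard (it saturates at small d, the d_max1 curve in the graph); a small K gives the soft-object curve that saturates at d_max2.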
Force Calculation – Elastic objects with harder interior:

F = K_1 · d,      for 0 ≤ d ≤ d_discontinuity
F = K_1 · d_discontinuity + K_2 · (d - d_discontinuity),      for d_discontinuity ≤ d

where d_discontinuity is the object's stiffness change point.

[Graph: force F vs. penetration distance d, with a slope change at d_discontinuity]
Force Calculation – Virtual pushbutton:

F = K_1 · d · (1 - u_m) + F_r · u_m + K_2 · (d - n) · u_n

where u_m and u_n are unit step functions at m and n.

[Graph: force F vs. penetration distance d, with thresholds m and n and the virtual wall]
Force Calculation – Plastic deformation:

F_initial = K · d,      for 0 ≤ d ≤ m;      F = 0 during relaxation
F_subsequent = K_1 · d · u_m,      for 0 ≤ d ≤ n;      F = 0 during relaxation

where u_m is the unit step function at m.

[Graph: initial and subsequent loading curves, with thresholds m and n]
Force Calculation – Virtual wall:
Virtual walls generate energy due to sampling time; to avoid system instabilities we add a damping term.
[Figure: wall with insufficient stiffness]
[Graph: moving into the wall (v < 0) vs. moving away from the wall (v ≥ 0), force F over time]

F = K_wall · Δx + B · v,      for v < 0
F = K_wall · Δx,      for v ≥ 0

where B is a directional damper.
Wallness: crispness of initial contact,
cleanliness of the final release.
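A minimal sketch of the directional-damper wall law above: damping applies only while moving into the wall (v < 0), giving a crisp contact and a clean release. Gains and inputs are illustrative.

```python
# Virtual wall with a directional damper: F = K_wall*dx + B*v only while
# penetrating (v < 0); stiffness alone while withdrawing (v >= 0).

def wall_force(K_wall, B, dx, v):
    if v < 0:                    # moving into the wall: add damping
        return K_wall * dx + B * v
    return K_wall * dx           # moving away: stiffness only

K_wall, B = 1000.0, 10.0         # illustrative gains
print(wall_force(K_wall, B, 0.5, -2.0))  # 480.0  (damping opposes entry)
print(wall_force(K_wall, B, 0.5, 2.0))   # 500.0  (no damping on release)
```

Making B directional is what preserves "wallness": the damper dissipates the energy injected by discrete sampling without dragging on the hand as it leaves the wall.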
The Haptics Rendering Pipeline
[Diagram: Traversal → Force (Collision Detection, Force Calculation, Force Smoothing, Force Mapping, Haptic Texturing) → Tactile Display]
Force shading:

F_smoothed = K_object · d · N,      for 0 ≤ d ≤ d_max
F_smoothed = F_max · N,      for d_max < d

where N is the direction of the contact force, based on vertex normal interpolation.

[Figure: real cylinder contact forces; non-shaded contact forces; contact forces after shading]
The haptic mesh:
A single HIP is not sufficient to capture the geometry of fingertip-object contact, as in a haptic glove;
The curvature of the fingertip and the object deformation need to be realistically modeled.
Screen sequence for squeezing an elastic virtual ball
[Figure: haptic mesh over the fingertip; haptic interface point i, with the penetration distance for mesh point i]

For each haptic interface point of the mesh:

F_haptic-mesh_i = K_object · d_mesh_i · N_surface

where d_mesh_i are the interpenetration distances at the mesh points, and N_surface is the weighted surface normal of the contact polygon.

Haptic mesh force calculation
The Haptics Rendering Pipeline
[Diagram: Traversal → Force (Collision Detection, Force Calculation, Force Smoothing, Force Mapping, Haptic Texturing) → Tactile Display]
Force mapping
Force displayed by the Rutgers Master interface:

F_displayed = (Σ F_haptic-mesh) · cos θ

where θ is the angle between the mesh force resultant and the piston.
The Haptics Rendering Pipeline
[Diagram: Traversal → Force (Collision Detection, Force Calculation, Force Smoothing, Force Mapping, Haptic Texturing) → Tactile Display]
Tactile mouse: forces only in the z direction.
[Figure: tactile patterns produced by the Logitech mouse – force vs. time]
Haptic mouse texture simulation: textures can change according to movement direction (example: velvet).
Surface haptic texture produced by the PHANToM interface:
Forces in all directions;
Friction simulation: force proportional to the normal force;
Viscosity: force proportional to velocity;
Inertia: F = m·a;
Equivalent to a displacement (bump) map.
[Figure: haptic interface point above an object polygon, axes X, Y, Z]

F_texture = A · sin(m·x) · sin(n·y)

where A, m, n are constants:
- A gives the magnitude of vibrations;
- m and n modulate the frequency of vibrations in the x and y directions;
- F can be perceived as shape or friction.
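The sinusoidal texture force above is easy to sketch; the constants A, m, n below are illustrative choices (one bump per unit in x and y).

```python
# Sinusoidal haptic texture: F = A * sin(m*x) * sin(n*y).
# A sets vibration magnitude; m and n set spatial frequency in x and y.

import math

def texture_force(x, y, A=1.0, m=2 * math.pi, n=2 * math.pi):
    return A * math.sin(m * x) * math.sin(n * y)

print(round(texture_force(0.25, 0.25), 6))  # 1.0  (on a bump peak)
print(round(texture_force(0.50, 0.25), 6))  # 0.0  (between bumps)
```

Sweeping the HIP across the surface makes this force oscillate, which the user perceives as surface shape or friction, as the slide notes.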
BEHAVIOR MODELING
The simulation level of autonomy (LOA) is a function of its components;
Thalmann et al. (2000) distinguish three levels of autonomy: simulation components can be "guided" (lowest), "programmed" (intermediate), or "autonomous" (highest).

Simulation LOA = f(LOA(Objects), LOA(Agents), LOA(Groups))

[Table: interactive object / intelligent agent / group of agents, each rated guided, programmed, or autonomous]
adapted from (Thalmann et al., 2000)
Interactive objects:
Have behavior independent of the user's input (example: a clock);
This is needed in large virtual environments, where it is impossible for the user to provide all required inputs.
[Figure: system clock; automatic door – reflex behavior]
Interactive objects:
The fireflies in NVIDIA's Grove have behavior independent of the user's input; the user controls the virtual camera.
Agent behavior:
A behavior model composed of perception, emotions, behavior, and actions;
Perception (through virtual sensors) makes the agent aware of its surroundings.
[Diagram: virtual world → perception → emotions → behavior → actions]
Reflex behavior:
A direct link between perception and actions (following behavior rules, or "cells");
Does not involve emotions.
[Diagram: perception → behavior → actions, bypassing emotions]
Object behavior
Another example of reflex behavior – "Dexter" at MIT [Johnson, 1991]: a handshake, followed by a head turn.
[Figure: autonomous virtual human and user-controlled hand avatar]
Agent behavior – avatars
If the user maps to a full-body avatar, then virtual human agents react through body expression recognition (example: dance). Swiss Institute of Technology, 1999 (credit: Daniel Thalmann).
[Figure: autonomous virtual human and user-controlled hand avatar]
Emotional behavior:
A subjective strong feeling (anger, fear) following perception;
Two different agents can have different emotions in response to the same perception, and thus take different actions.
[Diagram: one virtual world perceived by two agents, producing Emotions 1 → Actions 1 and Emotions 2 → Actions 2]
Crowd behavior
(Thalmann et al., 2000) Crowd behavior emphasizes group (rather than individual) actions; crowds can have guided LOA, when their behavior is defined explicitly by the user, or autonomous LOA, with behaviors specified by rules and other complex methods (including memory).
[Figure: political demonstration – guided crowd]
Guided crowd: the user needs to specify intermediate path points.
Autonomous crowd: the group perceives information about its environment and decides a path to follow to reach the goal.
MODEL MANAGEMENT
It is necessary to maintain interactivity and constant frame rates when rendering complex models. Several techniques exist:
Level of detail management; cell segmentation; off-line computations; lighting and bump mapping at the rendering stage; portals.
Level of detail management:
Level of detail (LOD) relates to the number of polygons on the object's surface. Even if the object has high complexity, its detail may not be visible if the object is too far from the virtual camera (observer).
[Figure: tree with 27,000 polygons near the camera, and the same tree far away, where details are not perceived]
Static level of detail management:
We should therefore use a simplified version of the object (fewer polygons) when it is far from the camera;
There are several approaches: discrete geometry LOD; alpha LOD; geometric morphing ("geo-morph") LOD.
Discrete Geometry LOD:
Uses several discrete models of the same virtual object;
Models are switched based on their distance r from the camera (r < r0; r0 < r < r1; r1 < r < r2; r2 < r).
[Figure: LOD 0, LOD 1, LOD 2 at distance thresholds r0, r1, r2]
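The distance-band switching above reduces to a few comparisons; a minimal sketch with illustrative thresholds (r0 < r1 < r2) follows. Treating objects beyond r2 as culled is an assumption, since the slide only shows LOD 0-2.

```python
# Discrete-geometry LOD selection: pick the model for the distance band
# the object falls into. Thresholds are illustrative.

def select_lod(r, r0=10.0, r1=30.0, r2=60.0):
    if r < r0:
        return 0          # full-detail model
    if r < r1:
        return 1
    if r < r2:
        return 2
    return None           # beyond r2: not rendered (assumed culled)

print(select_lod(5.0))    # 0
print(select_lod(45.0))   # 2
print(select_lod(80.0))   # None
```

The hard switches at exactly r0, r1, r2 are what cause the "popping" discussed next, which alpha blending and hysteresis address.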
Alpha Blending LOD:
Discrete LOD has problems on the r = r0, r = r1, r = r2 circles, leading to "popping" and cycling: objects appear and disappear suddenly. One solution to cycling is distance hysteresis;
A solution to popping is alpha blending, changing the transparency of the object. Fully transparent objects are not rendered.
[Figure: opaque (LOD 0), less opaque (LOD 1), fully transparent (LOD 2) at thresholds r0, r1, r2, with a hysteresis zone]
Geometric Morphing LOD:
Unlike geometric LOD, which uses several models of the same object, geometric morphing uses only one complex model;
Various LODs are obtained from the base model through mesh simplification;
A triangulated polygon mesh with n vertices has 2n faces and 3n edges.
[Figure: collapsing edge V1-V2 – mesh before and after simplification]
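A toy sketch of one edge-collapse step on an indexed mesh: merge vertex V2 into its "parent" V1 and drop the triangles that become degenerate. Real geo-morph pipelines also update vertex positions and choose which edge to collapse by an error metric; both are omitted here.

```python
# One edge-collapse simplification step: remap v2 -> v1 and discard
# triangles that collapse to a line (degenerate).

def collapse_edge(triangles, v1, v2):
    out = []
    for tri in triangles:
        tri = tuple(v1 if v == v2 else v for v in tri)  # remap v2 to v1
        if len(set(tri)) == 3:                          # drop degenerate tris
            out.append(tri)
    return out

# Four triangles around edge (1, 2): collapsing removes the two sharing it.
tris = [(0, 1, 2), (1, 3, 2), (0, 4, 1), (2, 3, 5)]
print(collapse_edge(tris, 1, 2))  # [(0, 4, 1), (1, 3, 5)]
```

Recording each collapse lets the mesh be refined again by the inverse "vertex split", which is the basis of the adaptive scheme described next.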
Single-object adaptive LOD:
Used where there is a single highly complex object that the user wants to inspect (such as in interactive scientific visualization);
Static LOD will not work, since detail is lost where it is needed: for example, the sphere on the right loses shadow sharpness after LOD simplification.
[Figure: sphere with 8,192 triangles – uniform high density; sphere with 512 triangles – static LOD simplification (from Xia et al., 1997)]
Single-object adaptive LOD:
Sometimes edge collapse leads to problems, so vertices need to be split again to regain detail where needed. Xia et al. (1997) developed an adaptive algorithm that determines the level of detail based on the distance to the viewer as well as the normal direction (lighting).
[Figure: edge collapse and vertex split between a refined mesh and a simplified mesh; V1 is the "parent" vertex (adapted from Xia et al., 1997)]
Single-object adaptive LOD:
[Figure: sphere with 537 triangles – adaptive LOD, 0.024 sec to render; sphere with 8,192 triangles – uniform high density, 0.115 sec to render (SGI RE2, single R10000 workstation; from Xia et al., 1997)]
Single-object adaptive LOD:
[Figure: bunny with 3,615 triangles – adaptive LOD, 0.110 sec to render; bunny with 69,451 triangles – uniform high density, 0.420 sec to render (SGI RE2, single R10000 workstation; from Xia et al., 1997)]
Static LOD:
Geometric LOD, alpha blending, and morphing have problems maintaining a constant frame rate. This happens when new complex objects suddenly enter the scene through the viewing frustum.
[Figure: camera "fly-by" – LOD 1 and LOD 2 objects crossing the frustum between frame i and frame i+1]
Architectural "walk-through" (UC Berkeley Soda Hall):
Camera path through the auditorium; no LOD management, 72,570 polygons. from (Funkhouser and Sequin, 1993)
[Figure: camera path from Start to End past objects A, B, C; frame-time plots over 250 frames, at 0.2 sec and 1.0 sec, comparing no LOD management with static LOD management]
Adaptive LOD Management:
An algorithm that selects the LOD of visible objects based on a specified frame rate;
The algorithm (Funkhouser and Sequin, 1993) is based on a benefit-to-cost analysis, where cost is the time needed to render object O at level of detail L and rendering mode R;
The cost for the whole scene must satisfy

Σ Cost(O, L, R) ≤ Target frame time

and the cost for a single object is

Cost(O, L, R) = max(c1 · Polygons(O, L) + c2 · Vertices(O, L), c3 · Pixels(O, L))

where c1, c2, c3 are experimental constants, depending on R and the type of computer.
Adaptive LOD Management:
Similarly, the benefit for a scene is the sum of the visible objects' benefits, Σ Benefit(O, L, R), where the benefit of a given object is

Benefit(O, L, R) = Size(O) · Accuracy(O, L, R) · Importance(O) · Focus(O) · Motion(O) · Hysteresis(O, L, R)

Objects with higher value, Value = Benefit(O, L, R) / Cost(O, L, R), are rendered first: sort according to value and display objects until the target cost is reached.
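A hedged sketch of the sort-by-value selection just described. The benefit and cost numbers are made up, and this simple greedy first-fit pass is only an approximation of the Funkhouser-Sequin optimization, which treats the choice as a constrained knapsack-style problem.

```python
# Greedy LOD selection in the style of Funkhouser & Sequin (1993):
# rank (object, LOD) candidates by Benefit/Cost, keep at most one LOD
# per object, and stop adding once the frame-time budget is spent.

def select_lods(candidates, budget):
    """candidates: list of (obj, lod, benefit, cost) tuples."""
    chosen, spent = {}, 0.0
    for obj, lod, benefit, cost in sorted(
            candidates, key=lambda c: c[2] / c[3], reverse=True):
        if obj not in chosen and spent + cost <= budget:
            chosen[obj] = lod
            spent += cost
    return chosen, spent

cands = [("tree", 0, 10.0, 8.0), ("tree", 1, 6.0, 2.0),
         ("desk", 0, 4.0, 4.0), ("desk", 1, 3.0, 1.0)]
print(select_lods(cands, budget=5.0))  # ({'tree': 1, 'desk': 1}, 3.0)
```

With a 5-unit budget, both full-detail models are too expensive per unit of benefit, so the coarser LODs win, exactly the trade the adaptive algorithm makes every frame.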
Level of detail segmentation – elision
[Figure: no detail elision, 72,570 polygons vs. optimization algorithm, 5,300 polygons at a 0.1 sec target frame time (10 fps); from (Funkhouser and Sequin, 1993)]
[Figure: frame-time plots over 250 frames, at 0.2 sec and 1.0 sec]
Level of detail segmentation – rendering mode: optimization, 1,389 polygons, 0.1 sec target frame time; no detail elision, 19,821 polygons. Level of detail shown in gray – darker gray means more detail. from (Funkhouser and Sequin, 1993)
Cell segmentation:
Another method of model management, used in architectural walk-throughs;
To maintain the "virtual building" illusion it is necessary to have at least 6 fps (Airey et al., 1990);
Necessary to maintain interactivity and constant frame rates when rendering complex models;
Only the current "universe" needs to be rendered: PVS (Potentially Visible Sets) and portals.
Cell segmentation – increased frame rate
Buildings are large models that can be partitioned into "cells" automatically and off-line to speed up simulations at run time; cells approximate rooms; partitioning algorithms use a "priority" factor that favors occlusions (partitioning along walls).
[Figure: automatic floor plan partition (Airey et al., 1990)]
Cell segmentation
From (Funkhouser, 1993)
The building model resides in a fully associative cache. But cell segmentation alone will not work if the model is so large that it exceeds the available RAM; in this case large delays will occur when there is a page fault and data has to be retrieved from the hard disk.
[Graph: frame time vs. frames, with spikes at page faults]
Combined Cell, LOD and database methods
[Figure: floor plan partition (Funkhouser, 1993); system diagram – user interface, visibility determination, detail elision, render, monitor, look-ahead determination, cache management, I/O operations, database]
Database management
It is possible to add database management techniques to prevent page faults and improve fps uniformity during the walk-through: estimate how far the virtual camera will rotate and translate over the next N frames and pre-fetch the appropriate objects from the hard disk;
LOD 0 is the lowest level of detail (loaded first) … LOD 3 is the highest level of detail (loaded last);
High LODs are loaded for adjacent cells only.
[Figure: floor plan visibility and highest LOD (Funkhouser, 1990); frame time vs. frames]
Distributed VR architectures
Single-user systems: multiple side-by-side displays; multiple LAN-networked computers. Multi-user systems:
(3DLabs Inc.)
Single-user, multiple displays
Side-by-side displays:
Used in VR workstations (desktop), or in large-volume displays (the CAVE or the "Wall");
One solution is to use one PC with a graphics accelerator for every projector;
This results in a "rack-mounted" architecture, such as the MetaVR "Channel Surfer" used in flight simulators, or the Princeton Display Wall.
Genlock:
If the output of two or more graphics pipes is used to drive monitors placed side-by-side, then the display channels need to be synchronized pixel-by-pixel;
Moreover, the edges have to be blended by creating a region of overlap.
(Courtesy of Quantum3D Inc.)
Problems with non-synchronized displays:
CRTs that are side-by-side induce fields in each other, resulting in electron-beam distortion and flicker – they need to be shielded;
Image artifacts reduce simulation realism, increase latencies, and induce "simulation sickness."
(Courtesy of Quantum3D Inc.)
Synchronization of displays:
Software synchronized – the system commands that frame processing start at the same time on the different rendering pipes;
Does not work if one pipe is overloaded – one image lags behind the other.
[Diagram: two Application → Geometry → Rasterizer → Buffer → CRT pipes, with the synchronization command at the application stage]
Synchronization of displays:
Frame-buffer synchronized – the system commands that frame-buffer swaps start at the same time on the different rendering pipes;
Does not work because swapping depends on the electron-gun refresh – one buffer will swap up to 1/72 sec before the other.
[Diagram: two pipes, with the synchronization command at the buffer-swap stage]
Synchronization of displays:
Video synchronized – the system commands that the CRT vertical beams start at the same time; one CRT becomes the "master".
[Diagram: master and slave CRTs, with the synchronization command at the video stage]
Video-synchronized displays (three PCs): Wildcat 4210 (Digital Video Interface – video out).
Synchronization of displays:
The best method is to have software + buffer + video synchronization of the two (or more) rendering pipes.
[Diagram: master and slave pipes, with synchronization commands at the application, buffer, and video stages]
(Courtesy of Quantum3D Inc.)