
The Rendering Pipeline
CS 445: Introduction to Computer Graphics
David Luebke
University of Virginia
Admin

Call roll
Forums signup
Demo: Ogre
Recap:
Display Technology: DMDs

Digital Micromirror Devices (projectors)
– Microelectromechanical (MEM) devices, fabricated with VLSI techniques
Recap:
Display Technology: DMDs

DMDs are truly digital pixels
Vary grey levels by modulating pulse length
Color: multiple chips, or color-wheel
Great resolution
Very bright
Flicker problems
Display Technologies:
Organic LED Arrays

Organic Light-Emitting Diode (OLED) Arrays
– The display of the future? Many think so.
– OLEDs function like regular semiconductor LEDs
– But with thin-film polymer construction:
  Thin-film deposition of organic, light-emitting molecules through vapor sublimation in a vacuum
  Dope emissive layers with fluorescent molecules to create color
  Not grown like a crystal, no high-temperature doping
  Thus, easier to create large-area OLEDs
Display Technologies:
Organic LED Arrays

OLED pros:
– Transparent
– Flexible
– Light-emitting, and quite bright (daylight visible)
– Large viewing angle
– Fast (< 1 microsecond off-on-off)
– Can be made large or small
Display Technologies:
Organic LED Arrays

OLED cons:
– Not quite there yet (96x64 displays) except niche markets
  Cell phones (especially back display)
  Car stereos
– Not very robust; display lifetime a key issue
– Currently only passive matrix displays
  Passive matrix: Pixels are illuminated in scanline order (like a raster display), but the lack of phosphorescence causes flicker
  Active matrix: A polysilicate layer provides thin film transistors at each pixel, allowing direct pixel access and constant illumination
  See http://www.howstuffworks.com/lcd4.htm for more info
– Hard to compete with LCDs, a moving target
Display Technologies:
Other

Liquid Crystal On Silicon (LCOS)
– “Next big thing” for projectors
– Don’t know much about this one

E-Ink
– Tiny black-and-white spheres embedded in matrix
– Slow refresh, very high resolution
– Over 200 dpi eBook devices available now in Japan

Others…
Framebuffers

So far we’ve talked about the physical display device
How does the interface between the device and the computer’s notion of an image look?
Framebuffer: A memory array in which the computer stores an image
– On most computers, separate memory bank from main memory (why?)
– Many different variations, motivated by cost of memory
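
As a rough illustration of “a memory array in which the computer stores an image,” here is a minimal C sketch; the dimensions and the set_pixel helper are invented for the example, and on real hardware this array would sit in dedicated video memory rather than an ordinary global.

/* Minimal sketch of a framebuffer as a plain memory array:
 * one 32-bit word per pixel, indexed by row and column. */
#include <stdint.h>

#define FB_WIDTH  640
#define FB_HEIGHT 480

static uint32_t framebuffer[FB_HEIGHT][FB_WIDTH];   /* the image lives here */

/* Write one pixel; the display hardware scans this array out to the screen. */
void set_pixel(int x, int y, uint32_t color)
{
    if (x >= 0 && x < FB_WIDTH && y >= 0 && y < FB_HEIGHT)
        framebuffer[y][x] = color;
}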
Framebuffers: True-Color

A true-color (aka 24-bit or 32-bit) framebuffer stores one byte each for red, green, and blue
Each pixel can thus be one of 2^24 colors
Pay attention to endian-ness
How can 24-bit and 32-bit mean the same thing here?
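
A hedged sketch of what “one byte each for red, green, and blue” looks like in practice, and why 24-bit and 32-bit can describe the same framebuffer: the extra byte is typically padding or alpha. The 0x00RRGGBB layout below is just one common convention; the actual byte order depends on the pixel format and the machine’s endianness.

#include <stdint.h>

uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b)
{
    /* One common layout: 0x00RRGGBB (the high byte is padding or alpha),
     * so a "32-bit" pixel still carries only 24 bits of color. */
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}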
Framebuffers: Indexed-Color

An indexed-color (8-bit or PseudoColor) framebuffer stores one byte per pixel (also: GIF image format)
This byte indexes into a color map:
How many colors can a pixel be?
Still common on low-end displays (cell phones, PDAs, GameBoys)
Cute trick: color-map animation
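
A sketch of the indexing, under assumed names (color_map, indexed_fb): each 8-bit pixel selects one of 256 palette entries, and rewriting the palette (the “cute trick”) recolors every pixel that references it without touching the framebuffer.

#include <stdint.h>

static uint32_t color_map[256];            /* palette: index -> true color */
static uint8_t  indexed_fb[480][640];      /* one byte per pixel           */

uint32_t lookup_pixel(int x, int y)
{
    return color_map[indexed_fb[y][x]];    /* indirection through the map  */
}

/* Color-map animation: rotating palette entries changes every pixel that
 * references them, without rewriting the framebuffer itself. */
void cycle_palette(void)
{
    uint32_t first = color_map[0];
    for (int i = 0; i < 255; i++)
        color_map[i] = color_map[i + 1];
    color_map[255] = first;
}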
Framebuffers: Hi-Color

Hi-Color was a popular PC SVGA standard
Packs pixels into 16 bits:
– 5 Red, 6 Green, 5 Blue (why would green get more?)
– Sometimes just 5, 5, 5
Each pixel can be one of 2^16 colors
Hi-color images can exhibit worse quantization artifacts than a well-mapped 8-bit image
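
One way the 5-6-5 packing might look in C (a sketch, not a specification of any particular SVGA mode):

#include <stdint.h>

uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    /* Green keeps an extra bit because the eye is most sensitive to green. */
    return (uint16_t)(((r >> 3) << 11) |   /* top 5 bits of red   */
                      ((g >> 2) << 5)  |   /* top 6 bits of green */
                      (b >> 3));           /* top 5 bits of blue  */
}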
The Rendering Pipeline:
A Whirlwind Tour

[Pipeline diagram: Model & Camera Parameters → Rendering Pipeline (Transform → Illuminate → Transform → Clip → Project → Rasterize) → Framebuffer → Display]
The Display You Know

[Pipeline diagram as above]
The Framebuffer You Know

[Pipeline diagram as above]
The Rendering Pipeline

[Pipeline diagram as above]
2-D Rendering:
Rasterization
(Coming Soon)

[Pipeline diagram as above]
The Rendering Pipeline:
3-D

[Pipeline diagram as above]
The Rendering Pipeline:
3-D

Input: scene graph, object geometry

Stage → Result:
Modeling Transforms → all vertices of scene in a shared 3-D “world” coordinate system
Lighting Calculations → vertices shaded according to lighting model
Viewing Transform → scene vertices in 3-D “view” or “camera” coordinate system
Clipping → exactly those vertices & portions of polygons in the view frustum
Projection Transform → 2-D screen coordinates of clipped vertices
Rendering:
Transformations

So far, discussion has been in screen space
But model is stored in model space (a.k.a. object space or world space)
Three sets of geometric transformations:
– Modeling transforms
– Viewing transforms
– Projection transforms
Rendering:
Transformations

Modeling transforms
– Size, place, scale, and rotate objects and parts of the model w.r.t. each other
– Object coordinates → world coordinates

[Figure: object-space axes (X, Y, Z) transformed into the world coordinate system]
Rendering:
Transformations

Viewing transform
– Rotate & translate the world to lie directly in front of the camera
  Typically place camera at origin
  Typically looking down -Z axis
– World coordinates → view coordinates
Rendering:
Transformations

Projection transform
– Apply perspective foreshortening
  Distant = small: the pinhole camera model
– View coordinates → screen coordinates
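
A tiny sketch of the foreshortening itself, assuming a point at a positive distance z in front of the pinhole and an image plane at an assumed distance d from it:

/* Pinhole projection: screen position scales with d/z, so distant points
 * (large z) project to smaller screen extents. */
void project_pinhole(float x, float y, float z, float d,
                     float *sx, float *sy)
{
    *sx = d * x / z;   /* view coordinates -> screen coordinates */
    *sy = d * y / z;
}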
Rendering:
Transformations

All these transformations involve shifting coordinate systems (i.e., basis sets)
Oh yeah, that’s what matrices do…
Represent coordinates as vectors, transforms as matrices, e.g. a 2-D rotation:

$$\begin{bmatrix} X' \\ Y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} X \\ Y \end{bmatrix}$$

Multiply matrices = concatenate transforms!
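
A hedged C sketch of the same idea: represent the 2-D rotation above as a matrix, apply it to a vector, and concatenate two transforms by multiplying their matrices. The Mat2 type and helper names are invented for the example.

#include <math.h>

typedef struct { float m[2][2]; } Mat2;

/* The rotation matrix from the slide. */
Mat2 rotation(float theta)
{
    Mat2 r = {{{ cosf(theta), -sinf(theta) },
               { sinf(theta),  cosf(theta) }}};
    return r;
}

/* Multiplying matrices concatenates the transforms they represent. */
Mat2 mat_mul(Mat2 a, Mat2 b)
{
    Mat2 c;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            c.m[i][j] = a.m[i][0] * b.m[0][j] + a.m[i][1] * b.m[1][j];
    return c;
}

/* Apply a transform to a coordinate vector. */
void transform(Mat2 t, float x, float y, float *xo, float *yo)
{
    *xo = t.m[0][0] * x + t.m[0][1] * y;
    *yo = t.m[1][0] * x + t.m[1][1] * y;
}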
Rendering:
Transformations

Homogeneous coordinates: represent coordinates in 3 dimensions with a 4-vector
– Denoted [x, y, z, w]^T
  Note that w = 1 in model coordinates
– To get 3-D coordinates, divide by w:
  [x', y', z']^T = [x/w, y/w, z/w]^T
Transformations are 4x4 matrices
Why? To handle translation and projection
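
A sketch of the homogeneous-coordinate mechanics under assumed type names (Vec4, Mat4): multiply a 4-vector by a 4x4 matrix, then divide by w to get back to 3-D.

typedef struct { float v[4]; }    Vec4;
typedef struct { float m[4][4]; } Mat4;

/* Apply a 4x4 transform to a homogeneous point [x, y, z, w]. */
Vec4 mat4_mul_vec4(const Mat4 *t, Vec4 p)
{
    Vec4 r = {{0, 0, 0, 0}};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r.v[i] += t->m[i][j] * p.v[j];
    return r;
}

/* Divide by w: [x, y, z, w] -> [x/w, y/w, z/w]. */
void to_3d(Vec4 p, float *x, float *y, float *z)
{
    *x = p.v[0] / p.v[3];
    *y = p.v[1] / p.v[3];
    *z = p.v[2] / p.v[3];
}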
The Rendering Pipeline:
3-D

Input: scene graph, object geometry

Stage → Result:
Modeling Transforms → all vertices of scene in a shared 3-D “world” coordinate system
Lighting Calculations → vertices shaded according to lighting model
Viewing Transform → scene vertices in 3-D “view” or “camera” coordinate system
Clipping → exactly those vertices & portions of polygons in the view frustum
Projection Transform → 2-D screen coordinates of clipped vertices
Rendering: Lighting

Illuminating a scene: coloring pixels according to some approximation of lighting
– Global illumination: solves for lighting of the whole scene at once
– Local illumination: local approximation, typically lighting each polygon separately

Interactive graphics (e.g., hardware) does only local illumination at run time
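
For concreteness, a sketch of one very simple local lighting calculation (Lambertian diffuse, chosen here only as a common example of a local model, not necessarily the model used later in the course): a vertex is shaded from its own normal and a light direction, with no knowledge of the rest of the scene.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* intensity = max(N . L, 0) scaled by the light's brightness;
 * normal and to_light are assumed to be unit vectors. */
float diffuse_intensity(Vec3 normal, Vec3 to_light, float light_intensity)
{
    float ndotl = dot(normal, to_light);
    return light_intensity * fmaxf(ndotl, 0.0f);
}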
The Rendering Pipeline:
3-D

Input: scene graph, object geometry

Stage → Result:
Modeling Transforms → all vertices of scene in a shared 3-D “world” coordinate system
Lighting Calculations → vertices shaded according to lighting model
Viewing Transform → scene vertices in 3-D “view” or “camera” coordinate system
Clipping → exactly those vertices & portions of polygons in the view frustum
Projection Transform → 2-D screen coordinates of clipped vertices
Rendering: Clipping

Clipping a 3-D primitive returns its intersection with the view frustum:

Rendering: Clipping

Clipping is tricky!
– We will have a whole assignment on clipping

[Figure: clipping examples. Clip: in 3 vertices, out 6 vertices. Clip: in 1 polygon, out 2 polygons.]
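
A sketch of one ingredient, to show how clipping can add vertices: clipping a polygon against a single plane (the frustum repeats this for each of its six planes). This is a generic Sutherland-Hodgman-style step, not necessarily the algorithm the assignment will use; the types and names are invented for the example.

typedef struct { float x, y, z; } Vertex;

/* Signed distance to the plane n.p + d = 0; >= 0 means "inside". */
static float side(Vertex p, Vertex n, float d)
{
    return n.x * p.x + n.y * p.y + n.z * p.z + d;
}

/* Point where the edge a->b crosses the plane. */
static Vertex intersect(Vertex a, Vertex b, float da, float db)
{
    float t = da / (da - db);
    Vertex r = { a.x + t * (b.x - a.x),
                 a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z) };
    return r;
}

/* Clip polygon 'in' (count n) against one plane; returns the new vertex
 * count. 'out' must have room for 2*n vertices (at most two per edge). */
int clip_plane(const Vertex *in, int n, Vertex n_plane, float d, Vertex *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Vertex a = in[i], b = in[(i + 1) % n];
        float da = side(a, n_plane, d), db = side(b, n_plane, d);
        if (da >= 0) out[m++] = a;                   /* keep inside vertices */
        if ((da >= 0) != (db >= 0))                  /* edge crosses plane   */
            out[m++] = intersect(a, b, da, db);
    }
    return m;   /* clipping can add vertices: 3 in can become 4 out here */
}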
The Rendering Pipeline:
3-D

[Pipeline diagram as above]
Modeling: The Basics

Common interactive 3-D primitives: points, lines, polygons (i.e., triangles)
Organized into objects
– Collection of primitives, other objects
– Associated matrix for transformations

Instancing: using same geometry for multiple objects
– 4 wheels on a car, 2 arms on a robot
Modeling: The Scene Graph

The scene graph captures transformations and object-object relationships in a DAG
Nodes are objects; arcs indicate instancing
– Each has a matrix

[Figure: example scene graph for a robot, with nodes Robot, Head, Mouth, Body, Eye, Leg, Trunk, Arm]
Modeling: The Scene Graph

Traverse the scene graph in depth-first order, concatenating transformations
Maintain a matrix stack of transformations

[Figure: depth-first traversal of the robot scene graph, showing visited, active, and unvisited nodes (Robot, Head, Body, Mouth, Eye, Leg, Foot, Trunk, Arm) and the current matrix stack]
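
A hedged sketch of that traversal in C: push a concatenated matrix before descending into a node, draw and recurse, then pop to restore the parent's transform. Node, Mat4, mat4_mul, and draw_geometry are placeholders invented for this example.

#define MAX_DEPTH 64

typedef struct { float m[4][4]; } Mat4;

typedef struct Node {
    Mat4         matrix;        /* transform relative to the parent */
    struct Node *children[8];
    int          num_children;
    /* ... geometry for this object ... */
} Node;

static Mat4 stack[MAX_DEPTH];   /* stack[0] assumed to hold the identity */
static int  top = 0;            /* stack[top] is the current composite matrix */

Mat4 mat4_mul(const Mat4 *a, const Mat4 *b);          /* assumed elsewhere */
void draw_geometry(const Node *n, const Mat4 *world); /* assumed elsewhere */

void traverse(const Node *node)
{
    if (top + 1 >= MAX_DEPTH) return;                /* guard the stack     */

    stack[top + 1] = mat4_mul(&stack[top], &node->matrix);  /* push         */
    top++;

    draw_geometry(node, &stack[top]);     /* draw with concatenated transform */
    for (int i = 0; i < node->num_children; i++)
        traverse(node->children[i]);      /* depth-first recursion            */

    top--;                                /* pop: back to the parent's matrix */
}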
Modeling: The Camera

Finally: need a model of the virtual camera
– Can be very sophisticated
  Field of view, depth of field, distortion, chromatic aberration…
– Interactive graphics (OpenGL):
  Camera pose: position & orientation
    Captured in viewing transform (i.e., modelview matrix)
  Pinhole camera model
    Field of view
    Aspect ratio
    Near & far clipping planes
Modeling: The Camera

Camera parameters (FOV, etc.) are encapsulated in a projection matrix
– Homogeneous coordinates → 4x4 matrix!
– See OpenGL Appendix F for the matrix

The projection matrix premultiplies the viewing matrix, which premultiplies the modeling matrices
– Actually, OpenGL lumps viewing and modeling transforms into the modelview matrix
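
For reference, a sketch of how this looks in classic fixed-function OpenGL: camera parameters go into the projection matrix, while viewing and modeling transforms are concatenated into the modelview matrix. The numeric values are placeholders.

#include <GL/gl.h>
#include <GL/glu.h>

void setup_matrices(void)
{
    /* Projection matrix: field of view, aspect ratio, near & far planes. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);

    /* Modelview matrix: viewing transform first ... */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    /* eye position          */
              0.0, 0.0, 0.0,    /* point being looked at */
              0.0, 1.0, 0.0);   /* up vector             */

    /* ... then modeling transforms are concatenated onto it. */
    glTranslatef(1.0f, 0.0f, 0.0f);
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
    /* Vertices issued now are transformed by Projection * Modelview. */
}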