Video Graphics Card and Video Systems
Video Graphics Card and Video Systems
Mark Joseph Garrovillas
CE 141 B1
Outline:
Graphics Card Definition
Hardware Components and Monitors
Operating Modes and Memory
3D Graphics Computations
What's a Graphics Card?
A modern graphics card is a circuit board with memory and
a dedicated processor. The processor is designed
specifically to handle the intense computational
requirements of displaying graphics. Most of these graphics
processors have special command sets for graphics
manipulation built right into the chip.
Graphics cards are known by many names, such as:
– Video cards
– Video boards
– Video display boards
– Graphics boards
– Graphics adapter cards
– Video adapter cards
Here are the three basic components of a graphics
card and what they do:
Memory - The first thing that a graphics card needs is memory.
The memory holds the color of each pixel. In the simplest case, each
pixel is either on or off, so one bit is enough to store its color. Since a
byte holds 8 bits, you need (640/8) 80 bytes to store the pixel colors for
one line of pixels on the display. You need (480 x 80) 38,400 bytes of
memory to hold all of the pixels visible on the display. (A short sketch
after the Video Interface description below works through these numbers.)
Computer Interface - The second thing a graphics card needs is a
way for the computer to change the graphics card's memory. This
is normally done by connecting the graphics card to the card bus
on the motherboard. The computer can send signals through the
bus to alter the memory.
Video Interface - The next thing that the graphics card needs is a
way to generate the signals for the monitor. The card must
generate the color signals that drive the cathode ray tube (CRT)
electron beam, as well as the horizontal and vertical synchronization
signals. Let's say that the screen is refreshing at 60 frames per
second. This means that the graphics card scans the entire memory
array 1 bit at a time and does this 60 times per second. It sends
signals to the monitor for each pixel on each line, and then sends a
horizontal sync pulse; it does this repeatedly for all 480 lines, and
then sends a vertical sync pulse, as sketched below.
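As a rough sketch of the arithmetic behind the Memory and Video Interface
descriptions above - assuming the 1-bit-per-pixel 640x480 case and ignoring the
blanking intervals that real video timings add - the numbers work out like this:

#include <stdio.h>

int main(void) {
    int width = 640, height = 480, refresh = 60;

    /* Memory: 1 bit per pixel, so 8 pixels fit in one byte */
    int bytes_per_line = width / 8;               /* 80 bytes per line */
    int frame_bytes    = bytes_per_line * height; /* 38,400 bytes in total */

    /* Video interface: one hsync pulse per line, one vsync pulse per frame */
    long hsync_per_second  = (long)height * refresh;          /* 28,800 */
    long pixels_per_second = (long)width * height * refresh;  /* 18,432,000 */

    printf("frame buffer size : %d bytes\n", frame_bytes);
    printf("hsync pulses/sec  : %ld\n", hsync_per_second);
    printf("vsync pulses/sec  : %d\n", refresh);
    printf("pixels sent/sec   : %ld\n", pixels_per_second);
    return 0;
}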
Hardware Components
Graphics Card Interface
The CPU sends data through the AGP, where it is received by the video
chipset, which converts the data into a form appropriate for display.
The data is held temporarily in the video RAM so that the output is
continuous. Cards with 3D accelerators "accelerate" the data, enabling
us to see 3D objects smoothly.
For traditional colored CRT monitors, the output has to be analog; for
the technology of the time, these signals allowed more variation than
the digital signals employed for monochrome displays.
Use of AGP
AGP has proven itself better than PCI as the display interface, as
reported earlier.
Monitors
The LCD screen is flat, since it contains no cathode ray tube (CRT).
Instead, the screen image is generated on a flat plastic panel, where
millions of transistors create the pixels.
Digital flat panel monitors are also called "soft" screens, since
their images seem to have a "softer" quality than those from
traditional CRT monitors. The image does not flicker and therefore
causes less eye strain.
A flat panel monitor is digital by nature. There are no analog electronics
involved, and that is the big advantage of this technology. Hence, the
monitor should not be connected through an analog interface. In fact,
using the analog interface, you get two conversions, which both add noise
to the final image. First the graphics adapter has to convert the digital
data of the PC to analog electronic signals. Then these analog signals
have to be converted back to digital information to feed the display.
Using the digital interface, each pixel consists of three transistors, each
of which is mapped to the corresponding memory cell holding the image
info. A purely digital-to-digital transmission with no electrical noise
involved - that is the way to produce a stunning image!
Video Display Modes
Date   Standard   Description                      Resolution           No. colours
1981   CGA        Colour Graphics Adapter          640x200 / 160x200    None / 16
1984   EGA        Enhanced Graphics Adapter        640x350              16 from 64
1987   VGA        Video Graphics Array             640x480 / 320x200    16 from 262,144 / 256
1990   XGA        Extended Graphics Array          1024x768             16.7 million
       SXGA       Super Extended Graphics Array    1280x1024            16.7 million
       UXGA       Ultra XGA                        1600x1200            16.7 million
Resolution     Bit map size with 16-bit colors    Necessary RAM on the video card
640 x 480      614,400 bytes                      1 MB
800 x 600      960,000 bytes                      1.5 MB
1024 x 768     1,572,864 bytes                    2 MB
1152 x 864     1,990,656 bytes                    2.5 MB
1280 x 1024    2,621,440 bytes                    3 MB
1600 x 1200    3,840,000 bytes                    4 MB
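The bit map sizes in the table follow directly from the resolution: with 16-bit
color there are 2 bytes per pixel, so the size is simply width x height x 2. A
quick check of the table's figures:

#include <stdio.h>

int main(void) {
    int res[][2] = { {640, 480}, {800, 600}, {1024, 768},
                     {1152, 864}, {1280, 1024}, {1600, 1200} };
    for (int i = 0; i < 6; i++) {
        /* 16-bit color = 2 bytes per pixel */
        long bytes = (long)res[i][0] * res[i][1] * 2;
        printf("%4d x %4d -> %ld bytes\n", res[i][0], res[i][1], bytes);
    }
    return 0;
}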
When you look at a screen image, it actually consists of thousands of tiny dots.
If you look closely you can spot them.
Each of these dots is called a pixel. That is a contraction of the term "picture
element".
In an ordinary screen, each pixel consists of three colors: red, green and blue.
Thus, there are actually three "sub-dots" in each pixel. But they are so small
that they "melt" together as one dot.
The individual pixel or dot thus consists of three mini dots, also called a trio dot.
Some screens do not have round dots, but they work the same way. With the
three basic colors, each of which can be assigned a varying intensity, you can
create many different colors.
Video Memory
The memory that holds the video image is also referred to as the frame buffer and is usually
implemented on the graphics card itself. Early systems implemented video memory in
standard DRAM. However, this requires continual refreshing of the data to prevent it from
being lost and cannot be modified during this refresh process. The consequence,
particularly at the very fast clock speeds demanded by modern graphics cards, is that
performance is badly degraded.
An advantage of implementing video memory on the graphics board itself is that it can be
customised for its specific task and, indeed, this has resulted in a proliferation of new
memory technologies:
•Video RAM (VRAM): a special type of dual-ported DRAM, which can be written to and
read from at the same time. It also requires far less frequent refreshing than ordinary
DRAM and consequently performs much better
•Windows RAM (WRAM): as used by the hugely successful Matrox Millennium card, is
also dual-ported and can run slightly faster than conventional VRAM
•EDO DRAM: which provides a higher bandwidth than DRAM, can be clocked higher
than normal DRAM and manages the read/write cycles more efficiently
•SDRAM: Similar to EDO RAM except the memory and graphics chips run on a common
clock used to latch data, allowing SDRAM to run faster than regular EDO RAM
•SGRAM: Same as SDRAM but also supports block writes and write-per-bit, which yield
better performance on graphics chips that support these enhanced features
•DRDRAM: Direct RDRAM is a totally new, general-purpose memory architecture which
promises a 20-fold performance improvement over conventional DRAM.
Some designs integrate the graphics circuitry into the motherboard
itself and use a portion of the system's RAM for the frame buffer. This
is called unified memory architecture and is used for reasons of cost
reduction only. Since such implementations cannot take advantage of
specialised video memory technologies they will always result in
inferior graphics performance.
The information in the video memory frame buffer is an image of what
appears on the screen, stored as a digital bitmap. But while the video
memory contains digital information its output medium, the monitor,
uses analogue signals. The analogue signal requires more than just an
on or off signal, as it's used to determine where, when and with what
intensity the electron guns should be fired as they scan across and
down the front of the monitor. This is where the RAMDAC comes in.
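As a hedged sketch of the RAMDAC's job - the palette size, names and values here
are illustrative, and 0-0.7 V is the usual analogue video swing - the idea is a
lookup from a digital pixel value to three analogue colour levels, repeated for
every pixel at the dot clock:

#include <stdio.h>

typedef struct { unsigned char r, g, b; } PaletteEntry;

/* Convert an 8-bit digital colour value to a notional analogue level. */
double to_voltage(unsigned char value) {
    return (value / 255.0) * 0.7;
}

int main(void) {
    PaletteEntry palette[256] = {{0}};
    palette[15] = (PaletteEntry){255, 255, 255};   /* entry 15 = white */

    unsigned char pixel = 15;                      /* index read from video memory */
    PaletteEntry c = palette[pixel];
    printf("R=%.2fV G=%.2fV B=%.2fV\n",
           to_voltage(c.r), to_voltage(c.g), to_voltage(c.b));
    return 0;
}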
The table below summarises the characteristics of six popular types of
memory used in graphics subsystems:
         Max. throughput (MBps)   Dual- or single-ported   Typical data width   Speed (typical)
EDO      400                      single                   64                   50-60ns
VRAM     400                      dual                     64                   50-60ns
WRAM     960                      dual                     64                   50-60ns
SDRAM    800                      single                   64                   10-15ns
SGRAM    800                      single                   64                   8-10ns
RDRAM    600                      single                   8                    330MHz clock speed
1998 saw dramatic changes in the graphics memory market and a pronounced
market shift toward SDRAMs, caused by the price collapse of SDRAMs and the
resulting price gap with SGRAMs. However, delays in the introduction of RDRAM,
coupled with its significant cost premium, saw SGRAM - and in particular DDR
SGRAM, which performs I/O transactions on both the rising and falling edges of
the clock cycle - recover its position as the graphics memory of choice during
the following year.
Memory Calculation
Lets say you would like to display 256 colors on a screen resolution of 640x480. At this
resolution, there is 307,200 dots, or pixels. 256 colors requires 8 bits or data for each pixel.
You can figure this because with an eight digit binary, there are 256 possible combinations.
For two colors, you need only 1 bit, either on or off. For 16 colors, you need 4 bits, 2 to the
4th power. 256 colors requires 8 bits, and it goes up from there. Anyway, multiply the
number of dots by the number of bits per pixel to get the number of bits for the entire
screen.
307,000 x 8 = 2,457,600 bits.
There are eight bits per byte and 1,024 bytes per kilobyte. So...
2,457,600 / 8 = 307,200 bytes = 300K
Therefore it requires exactly 300K of memory to display 256 colors at 640x480 resolution.
But, after calculating this, you must consider the available amounts. You cannot buy a
video card with 300K of memory. They were available at either 256K or 512K. So, to get
this resolution and color scheme, you must buy a card with 512K of memory on-board.
Today, a screen resolution of 1024 x 768 defines the lowest point of "high
resolution." That means that there are 786,432 picture elements, or pixels, to
be painted on the screen. If there are 32 bits of color available, multiplying by
32 shows that 25,165,824 bits have to be dealt with to make a single image.
Moving at a rate of 60 frames per second demands that the computer handle
1,509,949,440 bits of information every second just to put the image onto the
screen. And this is completely separate from the work the computer has to do
to decide about the content, colors, shapes, lighting and everything else about
the image so that the pixels put on the screen actually show the right image.
When you think about all the processing that has to happen just to get the
image painted, it’s easy to understand why graphics display boards are moving
more and more of the graphics processing away from the computer’s central
processing unit (CPU). The CPU needs all the help it can get.
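A minimal sketch of both calculations above - the 256-colour case at 640x480 and
the 32-bit, 60 frames-per-second case at 1024x768:

#include <stdio.h>

int main(void) {
    /* 256 colours at 640x480: 8 bits per pixel */
    long pixels = 640L * 480;           /* 307,200 pixels */
    long bits   = pixels * 8;           /* 2,457,600 bits */
    long bytes  = bits / 8;             /* 307,200 bytes */
    printf("640x480 @ 8 bpp : %ld bytes = %ldK\n", bytes, bytes / 1024);

    /* 32-bit colour at 1024x768, refreshed 60 times per second */
    long long frame_bits = 1024LL * 768 * 32;   /* 25,165,824 bits per frame */
    long long bits_per_s = frame_bits * 60;     /* 1,509,949,440 bits per second */
    printf("1024x768 @ 32 bpp, 60 fps: %lld bits/s\n", bits_per_s);
    return 0;
}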
What Are 3-D Graphics?
For many of us, games on a computer or advanced game system are the
most common ways we see 3-D graphics. These games, or movies made
with computer-generated images, have to go through three major steps to
create and present a realistic 3-D scene:
•Creating a virtual 3-D world.
•Determining what part of the world will be shown on the screen.
•Determining how every pixel on the screen will look so that the
whole image appears as realistic as possible.
3D Techniques
Texture mapping is a technique for adding extra detail to a 3D
object. It is best described as wrapping a 2D coloured paper over a
3D object. For instance, given a 3D image of a car on-screen, a
texture would be wrapped over it to depict coloured metallic paint.
This process is painstaking, as it has to be repeated for every pixel
on the object and each pixel of the texture - known as a texel - which
lies on top. Many textures can be wrapped over the same object; this
is multitexturing.
Mip mapping can be viewed as a cut-down form of texture-mapping
in which more texels are created without performing the equivalent
number of calculations. If a mip-map is one fourth the size of the
original texture, reading a single texel from this mip-map is the
same as reading four texels from the original texture. If applied
using proper filters, the image quality is actually higher, as it
smoothes out jagged edges.
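As a hedged sketch of the idea - assuming an 8-bit greyscale texture and a simple
2x2 box filter, which is only one of several possible filters - each mip level can
be built by averaging blocks of four texels from the level above it:

#include <stdlib.h>

/* Build the next (quarter-sized) mip level from an 8-bit greyscale texture.
   Each destination texel is the average of a 2x2 block of source texels. */
unsigned char *build_mip_level(const unsigned char *src, int w, int h) {
    int mw = w / 2, mh = h / 2;
    unsigned char *mip = malloc((size_t)mw * mh);
    if (!mip) return NULL;
    for (int y = 0; y < mh; y++) {
        for (int x = 0; x < mw; x++) {
            int sum = src[(2 * y)     * w + 2 * x] + src[(2 * y)     * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            mip[y * mw + x] = (unsigned char)(sum / 4);
        }
    }
    return mip;
}

Reading one texel from this mip level stands in for the four texels it was
averaged from, which is where the saving in calculations comes from.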
Bi-linear filtering reads four texels and calculates their average -
weighting each texel's colour by its position relative to the sample
point - and displays the result as a single screen texel. This results
in blurring at close quarters, which in turn reduces an otherwise blocky,
pixelated appearance. Bi-linear filtering is now standard on most PC
graphics cards.
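A minimal sketch of bilinear sampling, assuming an 8-bit greyscale texture and
texture coordinates (u, v) that already lie inside the image:

/* Blend the four texels around (u, v) by their fractional distances
   and return a single smoothed sample. */
unsigned char sample_bilinear(const unsigned char *tex, int w, int h,
                              float u, float v) {
    int x0 = (int)u, y0 = (int)v;
    int x1 = (x0 + 1 < w) ? x0 + 1 : x0;   /* clamp at the right/bottom edge */
    int y1 = (y0 + 1 < h) ? y0 + 1 : y0;
    float fx = u - x0, fy = v - y0;        /* fractional position inside the cell */

    float top    = tex[y0 * w + x0] * (1 - fx) + tex[y0 * w + x1] * fx;
    float bottom = tex[y1 * w + x0] * (1 - fx) + tex[y1 * w + x1] * fx;
    return (unsigned char)(top * (1 - fy) + bottom * fy);
}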
Z-buffering is a method of determining which pixels have to be loaded into the
frame buffer, the memory that stores soon-to-be-displayed data. 3D
accelerator chips take one pixel, render it, and proceed to the next one. The
problem with this method is that the accelerator has no way of knowing
whether the calculated pixel is to be displayed immediately or later.
Z-buffering attaches a "Z" (depth) value to every calculated pixel. If the Z
value of a pixel is smaller than that of another pixel at the same screen
position, the pixel with the smaller Z value is the closer one and is the one
displayed.
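A minimal sketch of the per-pixel depth test, assuming a Z-buffer that has been
cleared to a very large value and the convention that a smaller Z means closer
to the viewer:

/* Write a pixel only if it is nearer than whatever is already stored there. */
void plot_with_ztest(unsigned char *frame, float *zbuf, int width,
                     int x, int y, float z, unsigned char colour) {
    int i = y * width + x;
    if (z < zbuf[i]) {       /* nearer than the current occupant? */
        zbuf[i]  = z;        /* remember the new depth */
        frame[i] = colour;   /* and make this pixel the visible one */
    }
}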
Anti-aliasing is a technique to reduce the "noise" present in an image. To
represent any image, a certain amount of information is needed. If the object
is in motion, ideally, that information should include its every possible
position, colour, size changes etc. But if this information is not available, the
CPU often fills in the missing segments with meaningless noise. Anti-aliasing,
along with mip mapping, removes this noise.
Gouraud shading makes objects appear more solid by applying shadows to the
surface of the object. The algorithm determines the colours of adjacent
polygons and makes a smooth transition between them. This ensures that
there is no sudden change in colour over the object.
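As a hedged sketch of the interpolation behind Gouraud shading - greyscale
intensities and illustrative names, applied along a single scanline between two
already-lit endpoints:

/* Linearly interpolate the intensity between the two ends of a span,
   so there is no sudden jump in shading across the surface. */
void shade_span(unsigned char *row, int x0, int x1,
                float intensity0, float intensity1) {
    for (int x = x0; x <= x1; x++) {
        float t = (x1 == x0) ? 0.0f : (float)(x - x0) / (float)(x1 - x0);
        row[x] = (unsigned char)(intensity0 + t * (intensity1 - intensity0));
    }
}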
Bump mapping is an improvement on the more common "embossing"
technique used to give a "bumpy" look to surfaces. It uses three distinct
texture maps to create the illusion of depth on a surface and can be used to
create effects such as pockmarked, bullet-riddled walls and rough terrain.
However, the industry is yet to arrive at a standard set of procedures to render
this visually impressive feature.
[Figures: drawn with polygons; anti-aliased texture map; perspective, lighting, shadows and surfaces added.]
3D Transforms:
The first part of the process has several important variables:
•X = 758 -- the height of the "world" we're looking at.
•Y = 1024 -- the width of the world we're looking at
•Z = 2 -- the depth (front to back) of the world we're looking at
•Sx = height of our window into the world
•Sy = width of our window into the world
•Sz = a depth variable that determines which objects are visible in front of other, hidden objects
•D = .75 -- the distance between our eye and the window in this imaginary world.
First, we calculate the size of the window into the imaginary world.
Now that the window size has been calculated, a perspective transform is used to move a step closer to projecting the world onto a
monitor screen. In this next step, we add some more variables.
So, a point (X, Y, Z, 1.0) in the three-dimensional imaginary world would have a transformed position of (X', Y', Z', W'), which we get
from the perspective-projection equations; a generic sketch of this kind of transform follows the next paragraph.
At this point, another transform must be applied before the image can be projected onto the monitor's screen, but you begin to see the
level of computation involved -- and this is all for a single vector (line) in the image! Imagine the calculations in a complex scene with
many objects and characters, and imagine doing all this 60 times a second. Aren’t you glad someone invented computers?
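The equations themselves are not reproduced here, so as a rough illustration only, here is a generic perspective transform in
homogeneous coordinates: a point (X, Y, Z, 1.0) keeps its coordinates but picks up W' = Z / D, and dividing by W' projects it onto the
window plane at distance D from the eye. This is a standard form, not necessarily the precise one used in the original presentation.

#include <stdio.h>

typedef struct { double x, y, z, w; } Vec4;

/* Generic perspective transform: deeper points (larger Z) get a larger W',
   so they shrink toward the centre of the window after the divide. */
Vec4 perspective(Vec4 p, double d) {
    Vec4 out = { p.x, p.y, p.z, p.z / d };
    return out;
}

int main(void) {
    Vec4 p = {100.0, 50.0, 2.0, 1.0};      /* a point at depth Z = 2 */
    Vec4 t = perspective(p, 0.75);         /* D = .75, as in the variable list */
    printf("projected: (%.2f, %.2f)\n", t.x / t.w, t.y / t.w);   /* (37.50, 18.75) */
    return 0;
}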
Texture Mapping Example

Our basic texture mapped line routine looks like this:

    Calculate the x step value
    Calculate the y step value
    Make a coordinate variable equal to the left endpoint's texture coordinate
    For x = x1 to x2
        Read a pixel from the texture
        Put pixel on screen
        Add x step value to the texture coordinate
        Add y step value to the texture coordinate
    End for

/* Calculate the step value for the x and y coordinates.
   For every pixel on the destination, the x coordinate on the texture
   will move xincr pixels. */
xpos = tmapx1 << 8;
ypos = tmapy1 << 8;

asm {
    .386
    push ds
    cld
    mov  cx, word ptr length      /* Set length */
    shr  cx, 1
    les  di, dest                 /* Set destination ptr */
    lds  si, src                  /* Set source ptr */
    mov  dx, word ptr ypos        /* Put the y in the low word */
    shl  edx, 16                  /* Move the y to the high word */
    mov  dx, word ptr xpos        /* Put the x in the low word */
    mov  si, word ptr yincr       /* Set up the increments the */
    shl  esi, 16                  /*   same way */
    mov  si, word ptr xincr

    /* Now to advance one pixel, we can add edx and esi together to
       advance the x and y at the same time, with the fractional
       portion automatically carrying at 256. */
    cmp  cx, 0
    je   onepixel
}
tlineloop:
;
asm {
    mov  ebx, edx
    shr  ebx, 16                  /* BH now contains the y coordinate */
    mov  bl, dh                   /* Store the x value in BL; BX is now an offset
                                     into the texture image, between 0 and 65535 */
    mov  al, ds:[bx]              /* Get the color from the texture image */
    add  edx, esi                 /* Advance one pixel */
    mov  ebx, edx                 /* Repeat the above, and get another pixel */
    shr  ebx, 16
    mov  bl, dh
    mov  ah, ds:[bx]
    add  edx, esi
    stosw                         /* Store a word to the destination */
    dec  cx                       /* Decrease length */
    jnz  tlineloop                /* Repeat for all pixels */
}
onepixel:
asm {
    mov  cx, word ptr length      /* If the length was odd, one pixel is left over */
    and  cx, 1
    jz   tlinedone
    mov  ebx, edx
    shr  ebx, 16                  /* BH now contains the y coordinate */
    mov  bl, dh                   /* Store the x value in BL; BX is an offset
                                     into the texture image, between 0 and 65535 */
    mov  al, ds:[bx]              /* Get the color from the texture image */
    mov  es:[di], al              /* Store the final pixel */
}
tlinedone:
asm {
    pop ds
}
}                                 /* end of the enclosing routine */
END OF PRESENTATION