Programming assignment #2 - Polygon scan converter
CS 248 - Introduction to Computer Graphics
Autumn Quarter, 2004
Marc Levoy
Handout #7
Demos on Wednesday, October 27
Writeups due on Thursday, October 28 by 2:00pm
Your assignment is to write a rasterizer for scan converting polygons drawn
using a graphical user interface that we provide. Our interface also allows
you to create simple animations from your polygons. Your rasterizer will be
capable of spatial supersampling and motion blurring, to reduce aliasing
artifacts in space and time.
What we give you
-
A graphical user interface for drawing polygons.
If you run /usr/class/cs248/assignments/assignment2/bin/animgui on the Sweet
Hall graphics lab PCs, it will pop up a user interface based on the GLUI
toolkit, which is in turn built on OpenGL. This interface allows you to draw
polygons having any number of sides and having vertices that lie at integer
pixel coordinates. Each polygon will be assigned a unique random bright color.
The interface also allows you to change the shape of each polygon, by dragging
individual vertices or by translating the entire polygon, and to associate
shapes with specific points in time, called keyframes. Finally, the interface
allows you to load or store your polygons to files. This last capability lets
you test your rasterizer repeatedly on the same polygons; it lets you test it
remotely, i.e. without sitting at a Sweet Hall PC; it lets us test your
rasterizer on "grader data" at the demos; and, most importantly, it lets you
create and store cool animations!
-
The stub for a rasterizer.
Included in the interface is a "Render" button and a field for specifying which
frames to render. When you press the button, the interface will loop through
each frame to be rendered, calling a routine named "Rasterize" (for which a
stub is included in our package) and passing it an empty canvas like the one
you used in project #1. Your task is to expand this stub to include a polygon
rasterizer. Specifically, Rasterize should loop through each polygon, scan
converting it into the canvas array, then return control to the interface,
which will display the updated canvas on the screen. If you specify a filename
in the "Render To:" field, each frame will also be output to a .ppm file, and a
list of frames will be output to a .list file. We explain later how to use
this .list file to create movies. If animgui is invoked with certain
command-line arguments (see animgui.README for details), it becomes a
standalone (non-graphical) driver for your rasterizer. In this mode, it will
load polygons from a file you specify, then call Rasterize, perhaps multiple
times. Each time you return control to the driver, it will output your canvas
to a .ppm file.
-
A keyframe interpolator.
When the interface calls your rasterizer, it will also pass it an integer frame
number. As your rasterizer loops through each polygon, it should call our
interpolator routine (GetVertices), passing it the polygon number and a frame
number. GetVertices will compute for you the shape of that polygon at that
moment in time, by interpolating vertex coordinates linearly between the
keyframe positions you defined interactively in the drawing interface. What it returns to you
is a list of vertex X,Y coordinates for the polygon. It is this interpolated
polygon shape that you should scan convert into the canvas array. By the way,
the frame number you specify when you call GetVertices can be a floating-point
number, allowing you to specify times that fall between frames. You'll need
this flexibility for computing motion blur.
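To make the control flow concrete, here is a minimal sketch of a Rasterize
body that loops over the polygons and calls GetVertices. The actual signatures
are defined by the stubs in our package; the types and helper names below
(Canvas, Vertex, MAX_VERTICES, numPolygons, PolygonColor, ScanConvert) are
illustrative assumptions only.

    // Sketch only -- consult the stub in objects.cpp for the real
    // signatures of Rasterize and GetVertices.
    struct Vertex { float x, y; };                 // assumed layout

    // Provided by the package; assumed here to fill 'verts' with the
    // interpolated shape of polygon 'poly' at time 'frame' and to
    // return the vertex count.
    int GetVertices(int poly, float frame, Vertex *verts);

    void Rasterize(Canvas *canvas, int frame)
    {
        for (int poly = 0; poly < numPolygons; poly++) {
            Vertex verts[MAX_VERTICES];
            int nverts = GetVertices(poly, (float)frame, verts);
            ScanConvert(canvas, verts, nverts, PolygonColor(poly));  // your code
        }
        // On return, the interface displays (and optionally saves) the canvas.
    }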
What you need to write
-
Clipping and scan conversion.
Once your rasterizer obtains the vertex X,Y coordinates for an interpolated
polygon, scan convert it using the algorithm presented in class, which is also
covered in section 3.6 of your textbook. In lieu of polygon clipping (as
described in section 3.14), simply scissor the polygon to the boundaries of the
canvas. In other words, scan convert the polygon even if some of its vertices
lie outside the canvas, then test each pixel you generate to make sure it lies
within the canvas. If the pixel lies outside the canvas, discard it.
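Scissoring amounts to a bounds check as each pixel is emitted. A minimal
sketch, assuming a WIDTH x HEIGHT canvas; Color and WritePixel are
hypothetical names, not part of the provided package:

    // Discard any pixel that falls outside the canvas.
    void EmitPixel(int x, int y, Color color)
    {
        if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
            return;                        // outside the canvas: discard
        WritePixel(x, y, color);
    }

In practice you can clamp each span's endpoints to the canvas instead of
testing every pixel individually, which costs two comparisons per span rather
than two per pixel.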
-
Spatial supersampling and averaging down.
The user interface contains a check box labeled "antialias". If checked (a
flag will be passed into your rasterizer), you should step through a pattern of
subpixel positions, shift the vertices of all polygons as appropriate for that
subpixel position, scan convert them at the shifted position, and combine the
results into the canvas using the accumulation buffer algorithm, as summarized
in Appendix A of this handout. Use any pattern of subpixel positions and any
corresponding weights (summing to 1) that you like. Some patterns work well,
and some do not, as we have discussed in class. The user interface contains a
field in which you can enter the number of supersamples (variable s in Appendix
A). Use this value (which will be passed into Rasterize) to control the number
of samples per pixel. Feel free to round this number to the nearest square
(e.g. 4, 9, 16, or 25), but make sure your code supports at least 50 samples
per pixel. A sketch of one such pattern follows.
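For example (one reasonable choice among many, not a requirement), you might
round s to the nearest square k*k and use a regular k x k grid of subpixel
offsets, each with weight 1/(k*k):

    #include <cmath>

    // Round sqrt(s) to the nearest integer and return the grid dimension k,
    // so that k*k is the square nearest to s.
    int GridSize(int s)
    {
        return (int)floor(sqrt((double)s) + 0.5);
    }

    // Offset of sample i (0 <= i < k*k) within the pixel, in [0,1) x [0,1).
    // Each sample carries equal weight 1.0 / (k*k); jittering each sample
    // within its grid cell is a common refinement.
    void SubpixelOffset(int k, int i, float *dx, float *dy)
    {
        *dx = (i % k + 0.5f) / k;
        *dy = (i / k + 0.5f) / k;
    }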
-
Motion blurring.
The user interface also contains a check box labeled "motion blur". If
checked, you should step through a sequence of fractional frame times, call
GetVertices to obtain the vertex coordinates for each polygon at these times,
scan convert the resulting shapes, and combine the results into the canvas
using the same accumulation buffer algorithm. Once again, use any pattern of
samples and weights that you like. The user interface contains a field in
which you can enter the number of motion blur samples (variable t in Appendix
A). Make sure your code supports at least 50 samples per frame; a sketch of
one way to place the time samples follows this item. If the "antialias" and
"motion blur" boxes are both checked, your implementation should perform both
spatial supersampling and motion blurring. For both forms
of antialiasing, be prepared at the demo to describe your sampling and
weighting solutions and to characterize (in informal terms) the quality of
antialiasing that they produce. Include these descriptions in your README file
when you submit your code.
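As a sketch of one way to place the time samples (the shutter placement is a
design choice; a centered interval is equally defensible), spread t samples
uniformly over the one-frame interval ending at the current frame:

    // Fractional frame time of temporal sample j (0 <= j < t) for 'frame'.
    // Each sample carries weight 1/t. The clamp keeps us from calling
    // GetVertices with a time below 1.0 (see the hints below).
    float SampleTime(int frame, int j, int t)
    {
        float time = frame - 1.0f + (j + 0.5f) / t;
        return time < 1.0f ? 1.0f : time;
    }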
A few hints
-
Polygons should be drawn one atop another in the order they appear in the
array. If you correctly implement the algorithm in section 3.6 of the
textbook, it will work for convex or concave polygons having any number of
sides. Your rasterizer needn't handle polygons with holes. However, it should
handle self-intersecting polygons. In these cases, follow the even-odd rule:
if a line drawn from a sample position to infinity crosses an odd number of
polygon boundaries, the sample should be filled; otherwise it should not. (A
code sketch of this rule appears at the end of this item.)
In general, your rasterizer should fill exactly those sample positions that
fall inside the polygon's edges, and no others. If a sample coincides with an
edge or vertex, you may fill it or not, as you like. Your rasterizer should
also handle degenerate polygons, i.e. polygons that collapse to a line along
part or all of their boundary, or to a point.
As with general edges and vertices, your rasterizer can fill sample positions
if they coincide with degenerate edges or vertices, or not, as you like. The
reason you need to handle self-intersecting and degenerate polygons is that, no
matter how well behaved your polygons are at the keyframes, their shapes may be
ill-behaved at interpolated frames. The goal is to make the animation look as
continuous as possible, with no sudden popping of polygons or large parts of
polygons. By the way, sliver triangles, if rendered without antialiasing,
might have gaps on individual frames, and tiny polygons might disappear
entirely. This is to be expected. Don't try to fill these in. That is what
antialiasing is intended to do.
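Here is the even-odd rule expressed as a standalone point-in-polygon test.
This is a sketch only: a scanline rasterizer gets the same effect by filling
between pairs of sorted edge crossings on each scanline, rather than testing
each sample independently.

    // Even-odd rule: cast a horizontal ray from (px,py) toward +infinity
    // and count edge crossings; an odd count means the sample is inside.
    struct Vertex { float x, y; };

    bool InsideEvenOdd(const Vertex *v, int n, float px, float py)
    {
        bool inside = false;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            if ((v[i].y > py) != (v[j].y > py)) {        // edge spans the ray's y
                float x = v[j].x + (py - v[j].y) *
                          (v[i].x - v[j].x) / (v[i].y - v[j].y);
                if (x > px)                              // crossing lies to the right
                    inside = !inside;
            }
        }
        return inside;
    }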
-
While the vertices you draw using our interface will lie on pixel centers, the
vertices we return to you after interpolation may not. Moreover, the spatial
supersampling portion of the assignment requires that you scan convert
triangles whose vertices are positioned with subpixel precision. This means
that your rasterizer needs to support non-integer vertex coordinates. Page 948
of the textbook describes two solutions to non-integer vertices: a
floating-point algorithm and a fixed-point algorithm (i.e. multiplying
coordinates by a fixed constant; the book suggests 1000). You may use either
algorithm. We suggest the floating-point algorithm, mainly because it is
easier to code.
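As a sketch of the floating-point approach, each entry in the active edge
table carries a floating-point x intercept and a per-scanline increment
(field names are illustrative):

    // Floating-point active-edge record for the scanline algorithm.
    struct Edge {
        float x;       // x intercept of the edge at the current scanline
        float dxdy;    // inverse slope: (x1 - x0) / (y1 - y0)
        float ymax;    // y at which the edge leaves the active list
    };

    // Advancing an edge to the next scanline is a single addition:
    //     e.x += e.dxdy;
    // Spans are then filled between pairs of x intercepts sorted left to
    // right, rounded consistently so that adjacent polygons neither
    // overlap nor leave gaps along shared edges.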
-
A few hints about computing motion blur. First, think carefully about the
filter kernel you employ, i.e. the pattern of samples and weights. Your goal
is not to simulate a video camera, or a cartoonist; your goal is to reduce
temporal aliasing artifacts. Second, if your filter kernel calls for drawing
samples before frame 1 or after your last keyframe, you should assume that the
polygon is stationary at those times. In fact, don't call GetVertices with a
frame number less than 1.0; it will malfunction. (A clamping sketch follows
this hint.) Third, don't assume that
polygons stop moving at intermediate keyframes. If your rasterization is
working correctly, then if your input dataset specifies a polygon at frame 1
and the same polygon shifted to the right 10 and 20 pixels at frames 11 and 21,
respectively, then the resulting animation should show the polygon moving
smoothly across the screen, not pausing at frame 11. If you are designing an
animation in which you want the action to pause, then specify the same set of
polygons twice, with a gap of 1 or more between their assigned frame numbers.
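A minimal guard for the second hint above, assuming a variable lastFrame that
holds your final keyframe time:

    // Clamp a motion blur sample time into the valid animation range;
    // outside that range the polygon is treated as stationary.
    float ClampTime(float time, float lastFrame)
    {
        if (time < 1.0f)      return 1.0f;      // GetVertices requires >= 1.0
        if (time > lastFrame) return lastFrame; // hold the final pose
        return time;
    }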
-
You should begin the assignment by examining the files in
/usr/class/cs248/assignments/assignment2. Start by looking at the main()
function in src/agui/animgui.cpp. This file contains the graphical user
interface. For some bells/whistles, you may need to modify this file.
If you need more information on the GLUI toolkit, you can find documentation at
http://www.cs.unc.edu/~rademach/glui/src/release/glui_manual_v2_beta.pdf.
Alternatively, if you can demonstrate your bell/whistle using animgui in
standalone (non-graphical) mode, possibly by adding more command-line
arguments, then you may do so. If you add no bells/whistles, the only function
you should need to modify is Rasterize in objects.cpp. The code we provide
contains parameterized constants for the maximum number of polygons, vertices
per polygon, keyframes per polygon, frames in the animation, and numbers of
supersamples. Feel free to increase these constants, but don't decrease them.
Also, be careful not to break the ability to load polygons from files; we will
use this to test your code.
-
We have provided (in /usr/class/cs248/data/assignment2) several polygon
datasets for you to play with. For some but not all of these, we have computed
matching .ppm files to give you an idea of what they should look like. An
accompanying README file describes the supersampling used to produce each
image. Note that you will probably not match these images pixel for pixel,
depending on how you handle degeneracies and how you perform antialiasing.
These are by no means the only data sets on which we will test your program.
Try to anticipate every nasty polygon that might be in our test files, as well
as every nasty polygon that might result from interpolation.
-
The speed at which you can rasterize frames will depend on the number of
polygons to be drawn and on the number of spatial and temporal samples you
specify. If you want to view your animation in real-time, but your frames
don't scan convert quickly enough, we provide a utility,
/usr/class/cs248/support/bin/i386-linux/ppm2fli, that accepts a .list file (as
described earlier), reads in the listed .ppm files, converts them to an FLC
animation, and outputs a .fli file. The syntax is "ppm2fli in.list out.fli".
To play your movie, use "/usr/class/cs248/support/bin/i386-linux/xanim
out.fli". To view your movies on a non-Linux machine, download one of the
players linked at http://woodshole.er.usgs.gov/operations/modeling/flc.html
or, for Windows, the Moyager player (http://www.gromada.com/moyager.html).
Submission requirements
As with project #1, submitting your rasterizer has two parts: giving a demo on
Wednesday, October 27, and submitting an online writeup by 2:00pm on Thursday,
October 28.
For the demos, signups will be handled electronically, as before. All demos
must be given in the Sweet Hall graphics labs. To keep the demos fair, you
must freeze your executable by 7:30am on demo day and give your demo from that
executable. To freeze your executable, log into a raptor or firebird, then run
the script /usr/class/cs248/bin/submit. Since you will be demoing from the
executable you submitted, make sure it is a Linux executable that runs on the
lab PCs. Be sure to include your compiled binary named 'animgui' in the /bin
directory of your submission (this is the same filename generated with the
skeleton and default makefile)!
Writeups consist of a commented copy of your source code, a Linux executable
that runs on the Sweet Hall graphics lab PCs, a Makefile, and a README
describing the functionality implemented. The README should also describe your
supersampling solutions, including the weights you assign to each supersample
in space or time, as outlined earlier. Be brief but complete; one screenful is
enough. If you did something especially clever, tell us about it. To submit
your code and README, place all these files in one directory (and optionally
subdirectories) and once again run the script /usr/class/cs248/bin/submit. If
you included the required files for the writeup when you first submitted, then
you do not need to resubmit. You may make multiple submissions; we will
consider only your latest submission while grading. If the submit script
reports an error, please email the error message with a gzipped tarball of your
submission directory to green81@stanford.edu.
The assignment will be graded on correctness (60 points), efficiency (20
points), and programming style (20 points). As before, the functionality of
your program will be graded based only on what you show us during the demo.
Functionality you don't show us or that doesn't work will not be considered.
Only your source code and your README file will be graded from your submitted
writeup. Efficiency will be judged mainly by looking at your code, although if
your rasterizer runs horribly slowly on demo day, we'll notice it.
Extra credit
If you have completed the assignment you can earn extra credit by adding bells
and whistles. Here are some suggestions. Some are harder than others. Points
will be awarded in proportion to difficulty. Feel free to make up your own.
(If you aren't sure your idea is worth extra credit, run it by the staff and
we'll tell you.)
-
Modify the user interface to allow interactive scaling and rotation of polygons
around an interactively selected point in the canvas as a way of changing their
shape. This one is easy.
-
Modify the user interface to allow insertion and deletion of vertices in an
already-defined polygon. This is also easy.
-
For a bigger challenge than bell/whistle #2, modify the user interface, the
underlying polygon data structure, and the linear interpolation routine to
allow the number of vertices in a polygon to change at any keyframe. If the
additional or deleted vertices are collinear with their two immediate neighbors
in the polygon, then the polygon's shape will not change suddenly at the
keyframe. Being able to add or delete vertices in this way greatly increases
the expressiveness of keyframe interpolation. Show off this improved
expressiveness with some animations you create before demo day.
-
Modify the user interface to allow the input of polygons having "curved"
boundaries, where these curves are approximated by a sequence of closely-spaced
vertices. We suggest using button-down to initiate a new curve and button-up
to terminate it. Line segments can connect every pair of successive mouse
positions, or only those successive positions whose distance apart exceeds a
specified threshold. This one is relatively easy.
-
Back to a hard one. Combining the ideas in bell/whistles #3 and #4, allow the
user to draw a different curved boundary for each keyframe. Since the numbers
of vertices in your keyframes may now differ substantially, you will need to
resample these curves so that they have the same number of vertices at the
beginning and end of each linear interpolation. This is how professional
keyframe animation systems work, and it greatly increases their expressiveness.
The resampling procedure is straightforward; we'll explain it during Friday's
help session.
-
Linear interpolation tends to produce animations in which the rhythm of the
keyframes is visually evident, and sometimes objectionable. To reduce this
effect, replace linear interpolation between pairs of keyframes with splined
interpolation between larger numbers of keyframes, as described in section
21.1.3 of the textbook. For example, cubic B-splines, described in section
11.2.3 of the textbook, would use the vertex positions of keyframes 1,2,3,4 to
drive the interpolation between keyframes 2 and 3, the positions in keyframes
2,3,4,5 to interpolate between 3 and 4, and so on.
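For reference, a sketch of the standard uniform cubic B-spline blend: with u
in [0,1] the fraction of the way from keyframe 2 to keyframe 3, and P_1
through P_4 the corresponding vertex positions at keyframes 1 through 4,

    P(u) = \frac{1}{6}\left[ (1-u)^3 P_1 + (3u^3 - 6u^2 + 4)\,P_2
           + (-3u^3 + 3u^2 + 3u + 1)\,P_3 + u^3 P_4 \right]

Note that a B-spline approximates rather than passes through the keyframe
positions, which is exactly what smooths the motion; if you want the curve to
hit each keyframe exactly, a Catmull-Rom spline is a common alternative.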
-
Another problem with linear interpolation is that, if two keyframes differ
substantially in shape, the polygon tends to collapse and reform during the
course of the interpolation. To reduce this effect, define a skeleton
consisting of connected line segments, replace the X,Y coordinates of vertices
with U,V offsets relative to one of these segments, then transform the skeleton
(using rotation, splines, or other techniques), while interpolating vertex
offsets, as shown in figure 21.3 (page 1062) of the textbook.
-
Implement full polygon clipping, thereby allowing your polygons to translate
offscreen during the interval between two keyframes. Use any (reasonably
efficient) clipping algorithm you like, such as the Sutherland-Hodgman
algorithm described in section 3.14.1 of the textbook.
-
Implement unweighted area sampling as described in section 3.17.2 of the
textbook as a user-selectable alternative to accumulation buffer antialiasing.
For more fun, implement weighted area sampling as described in section 3.17.3.
You may assume (for this bell/whistle only) that your polygons do not overlap.
-
Last but not least, create a cool animation to show us at the demo.
We'll give you extra credit for it!
Appendix A: the accumulation buffer algorithm
Let "canvas" be a 16x3-bit pixel array, "temp" be an 8x3-bit pixel array, and
"polygon color" be an 8x3-bit color that is unique to each polygon.
 1  clear canvas to black
 2  n = 0                                    (number of samples taken so far)
 3  for (i = 1; i <= s; i++)                 (for s subpixel positions)
 4      for (j = 1; j <= t; j++)             (for t fractional frame times)
 5          clear temp to black
 6          n = n + 1
 7          for each polygon
 8              translate vertices for this subpixel position and fractional frame time
 9              for each pixel in polygon (using your scan converter)
10                  temp color <-- polygon color
11          for each pixel in canvas
12              canvas color <-- canvas color + temp color
13  canvas color <-- canvas color / n
14  convert canvas to 8x3 bits and display on screen (by exiting from your rasterizer)
Line 7 causes later polygons to occlude earlier polygons in temp. This
"painter's algorithm" occlusion ordering is in lieu of a hidden-surface
algorithm, which you will learn about later in the course.
Line 12 (together with the normalization on line 13) implements a box filter,
i.e. all samples are given equal weight. By modifying this accumulation
formula to include a weight that varies with sample position relative to the
center of the filter kernel, you can implement other reconstruction filters.
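For example, line 12 with a per-sample weight w might look like this in C++
(a sketch; the canvas and temp layouts are assumptions, and the weights across
all samples should sum to 1):

    // Weighted version of line 12. Accumulating w*temp directly would
    // truncate to zero for small weights, so scale by 256; since the
    // weights sum to 1, the 16-bit canvas cannot overflow (max 255*256).
    // The 16-to-8-bit conversion on line 14 then becomes a shift right by 8.
    void Accumulate(unsigned short *canvas, const unsigned char *temp,
                    int npixels, float w)
    {
        for (int p = 0; p < 3 * npixels; p++)
            canvas[p] += (unsigned short)(w * 256.0f * temp[p]);
    }

With equal weights w = 1/n this reproduces the box filter of lines 12-13,
folding the final division into the weight.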
If your algorithm is working correctly, what you should see on the screen is a
nicely antialiased image, like those linked from the October 12 lecture notes.
Look also at figure 3.56 in the text. The method used to produce that figure
is different (prefiltering rather than supersampling and averaging down), but
the visual effect is similar.
Finally, for small numbers of samples, (i.e. small values of s and t), it is
possible to use an 8x3-bit canvas by replacing lines 12 and 13 with
canvas color <-- ((n-1)/n)*canvas color + (1/n)*temp color
This approach reduces memory use, avoids the need for normalization and
16-to-8-bit conversion, and allows the canvas to be displayed after each
sample has been processed, providing continuous user feedback during
rasterization. However, for large numbers of samples, the method yields
significant quantization artifacts. We would like your algorithm to be capable
of using large numbers of samples, so we require that you employ a 16x3-bit
canvas.
Reference:
Haeberli, P. and Akeley, K., "The Accumulation Buffer: Hardware Support for
High-Quality Rendering," Computer Graphics (Proc. SIGGRAPH '90), 24(4), 1990,
pp. 309-318.
levoy@cs.stanford.edu
Copyright © 2004 Marc Levoy
Last update: October 18, 2004 10:38:00 PM