Project 3 FAQ
CS348B Spring 99
Prof: Frank Crow
TA: Elena Vileshina
 

Q:

Is there any help for Composer and i3dm?

A:

There is no help for Composer.  You can view the i3dm documentation with the command
    showcase -v /usr/class/cs348b/i3dm/i3dm_help.sc

Q:

How can I add fields to the label passed by Composer?

A:

You need to do it manually in an editor, or use a script to automate the process (see below).
The modified Inventor file can then be loaded into Composer.  You cannot modify this label in
Composer, but Composer will export it in the "name" field.

----------------------------------------------------------------------

Q:

It's really hard to position stuff with the mouse in composer instead
of using i3dm. Is there any way I can both get the texture coordinates
right and use i3dm to position multiple objects, without having
everything get merged together by i3dm_clean?
 

A:

If you're willing to type a little bit in an editor, yes. The only
point of i3dm_clean is to add a Label node with the texture filename
information in it, which is wrapped with a Separator node (which is
just a container). The fact that it merges multiple objects into one
object is a minus, not a plus, for most of you. There's probably
already a Separator around each of your objects, so you can just add
the Label by hand in an editor. That is, add:
    Label {
    label "<filename>"
    }

where <filename> comes from the Texture2 field.

I know this is a bit of a pain, but it's definitely much less work
than trying to use the editor to type in object coordinates!

------------------------------------------------------------------------

Q:

What is the formula for bump mapping?

A:

Here's the answer from Maple.
 
 

BS =

[
  (t2 - t3) x1   (t1 - t3) x2    (t1 - t2) x3
- ------------ + ------------ - ------------ ,
       D               D              D

  (t2 - t3) y1   (t1 - t3) y2   (t1 - t2) y3
- ------------ + ------------ - ------------ ,
       D               D               D

  (t2 - t3) z1   (t1 - t3) z2   (t1 - t2) z3
- ------------ + ------------ - ------------
       D               D              D
]

BT =

[

(s2 - s3) x1   (s1 - s3) x2   (s1 - s2) x3
------------ - ------------ + ------------ ,
     D              D              D

(s2 - s3) y1   (s1 - s3) y2   (s1 - s2) y3
------------ - ------------ + ------------ ,
    D               D             D

(s2 - s3) z1   (s1 - s3) z2   (s1 - s2) z3
------------ - ------------ + ------------
    D               D                D
]

D := -s1 t2 + s1 t3 + s2 t1 - s2 t3 - s3 t1 + s3 t2
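
Here's a minimal C sketch of that computation for a single triangle. The
Vec3 type and the argument layout are made-up helpers, not part of the
provided libraries; adapt them to whatever your ray tracer already uses.

typedef struct { double x, y, z; } Vec3;   /* hypothetical helper type */

/* Compute BS = dP/ds and BT = dP/dt for a triangle with world-space
   vertices p[0..2] and per-vertex texture coordinates (s[i], t[i]),
   following the Maple result above.  Returns 0 if the (s,t) mapping
   is degenerate (D == 0). */
int TriangleBasis(const Vec3 p[3], const double s[3], const double t[3],
                  Vec3 *BS, Vec3 *BT)
{
    double D = -s[0]*t[1] + s[0]*t[2] + s[1]*t[0]
               - s[1]*t[2] - s[2]*t[0] + s[2]*t[1];
    if (D == 0.0) return 0;

    BS->x = (-(t[1]-t[2])*p[0].x + (t[0]-t[2])*p[1].x - (t[0]-t[1])*p[2].x) / D;
    BS->y = (-(t[1]-t[2])*p[0].y + (t[0]-t[2])*p[1].y - (t[0]-t[1])*p[2].y) / D;
    BS->z = (-(t[1]-t[2])*p[0].z + (t[0]-t[2])*p[1].z - (t[0]-t[1])*p[2].z) / D;

    BT->x = ( (s[1]-s[2])*p[0].x - (s[0]-s[2])*p[1].x + (s[0]-s[1])*p[2].x) / D;
    BT->y = ( (s[1]-s[2])*p[0].y - (s[0]-s[2])*p[1].y + (s[0]-s[1])*p[2].y) / D;
    BT->z = ( (s[1]-s[2])*p[0].z - (s[0]-s[2])*p[1].z + (s[0]-s[1])*p[2].z) / D;
    return 1;
}

Since BS and BT depend only on the triangle's vertices and texture
coordinates, you can compute them once per triangle and store them (see
the note at the end of the longer derivation further down).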
 
 

Q:

I know that (s,t) values less than 0 result in a repeating texture while
values greater than 1 result in a fraction of the texture being sampled,
but how do we figure out the mapping? Is it something like 1/s, s >= 1?

A:

No. Anything outside of (0,1) means it's a repeating texture. Just
think of it as a mod by 1, so you simply take the fractional part.
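
In code that's just the fractional part, e.g. (a sketch, assuming your
texture lookup wants s and t in [0,1)):

#include <math.h>

/* Wrap an arbitrary texture coordinate into [0,1) by taking the
   fractional part; works for negative values too (-0.25 -> 0.75). */
double WrapCoord(double s)
{
    return s - floor(s);
}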

Q:

When I try to map a circle into a rectangle it looks wrong. Any
suggestion for mapping from circle->circle ?

A:

You can do it rectangle->rectangle by modulating the color texture map
by a transparency map. Just create a very simple square map which is
just an opaque circle and use it to control whether the color texture
map is applied.
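
If you'd rather generate that mask programmatically than paint it, here is
a minimal sketch that writes an ASCII PGM (the filename and size are
arbitrary placeholders; how you read the mask back in is up to your own
image loader):

#include <stdio.h>

/* Write a size x size greyscale PGM that is white (opaque) inside the
   inscribed circle and black (transparent) outside it. */
int WriteCircleMask(const char *filename, int size)
{
    FILE *fp = fopen(filename, "w");
    int x, y;
    double r = size / 2.0;

    if (!fp) return 0;
    fprintf(fp, "P2\n%d %d\n255\n", size, size);
    for (y = 0; y < size; y++)
        for (x = 0; x < size; x++) {
            double dx = x + 0.5 - r, dy = y + 0.5 - r;
            fprintf(fp, "%d\n", (dx*dx + dy*dy <= r*r) ? 255 : 0);
        }
    fclose(fp);
    return 1;
}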
 
 

Q:

For i3dm, sometimes I want different textures on the same object,
e.g. say one cylinder, one cap, and one bottom. If I give a
separate label to each one, does that name get passed on through
Composer to the raytracer OK as 3 separate textures?

A:

You'll have to save them as different files from i3dm since i3dm_clean
merges all the objects in one file into a single mesh. Then combine
them in composer. The output will have separate objects which have
separate textures.

------------------------------------------------------------------------

[Here's a batch of BRDF questions answered by Prof Levoy, which might
be useful to the rest of you as well.]

Q:

I have read the materials on distribution ray tracing but I am not sure
that I understand how exactly we should distribute the rays according to
the BRDF. I have a few questions (for gloss):
1. Do we distribute rays into the entire hemisphere, not just around the
direction of specular reflection?

A:

You have many choices. You can: (1) replace only your reflected rays (at the
mirror direction) with a small distribution of rays around the mirror
direction, or (2) replace these reflected rays with a combined distribution
that includes a diffuse term (over the hemisphere) + a specular term (a small
distribution around the mirror direction), or (3) replace even your rays cast
directly to the light sources with a combined distribution as in (2). To
produce good images (i.e. accurate and noise-free), (3) requires many rays
(more than are probably practical), (2) requires fewer, and (1) requires fewer
still. I suggest starting with (1).
 
 

Q:

What's a good way to divide the goniometric diagram into areas of
equal volumes?

A:

Place a disk (not a physical one) at some distance along your mirror ray and
perpendicular to it, choose a distribution of points on this disk, and trace
rays from the point on the surface through each of the points on the disk.
Think carefully about the distribution of points on the disk. You want it to
fall off smoothly to zero, and to integrate plausibly to something that looks
like the bumps I have drawn on goniometric diagrams.
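
As one concrete sketch of such a distribution: pick Gaussian-distributed
offsets on the disk, so the density falls off smoothly away from the mirror
direction. The Vec3 type, the disk distance, and the spread are all made-up
parameters you would tune yourself; U and V are any unit vectors spanning
the plane perpendicular to the mirror direction R.

#include <math.h>
#include <stdlib.h>

typedef struct { double x, y, z; } Vec3;   /* hypothetical helper type */

static double frand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* Jitter the mirror direction R by picking a Gaussian-distributed point
   on a disk placed 'dist' away along R; 'spread' controls glossiness
   (small spread = near-mirror). */
Vec3 GlossyDirection(Vec3 R, Vec3 U, Vec3 V, double dist, double spread)
{
    /* Box-Muller transform: a 2D Gaussian offset on the disk. */
    double u1 = frand() + 1e-12, u2 = frand();
    double rad = spread * sqrt(-2.0 * log(u1));
    double du  = rad * cos(2.0 * M_PI * u2);
    double dv  = rad * sin(2.0 * M_PI * u2);
    Vec3 d;
    double len;

    d.x = dist * R.x + du * U.x + dv * V.x;
    d.y = dist * R.y + du * U.y + dv * V.y;
    d.z = dist * R.z + du * U.z + dv * V.z;
    len = sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    d.x /= len;  d.y /= len;  d.z /= len;   /* normalize before tracing */
    return d;
}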
 

Q:

How do we determine the weight of a ray using the BRDF? I am not sure I
understand the notion of "weight" here.

A:
 

If you choose a distribution of points on your disk that is denser where the
goniometric bump is longest, then you may simply average the colors you get.
In other words, you are incorporating the weighting into the probability of
tracing a ray in a certain direction.

If you craft your complexity metric carefully, it will be more sensitive to
brighter reflections, hence subdividing (and tracing more rays) where the lobes
of the BRDF are, i.e. importance sampling.

Q:

4. Where can I find references on various BRDFs, or shall we just use
that complicated Torrance-Blinn-Cook one in the notes?

A:

If you're using Ward's model, just play with the parameters until it looks
nice (or looks like your real object). If not, then simply implement
some plausible distribution of reflectance (hence weighting, hence the
probability of subdivision, hence the density of rays) on the disk or
quadrilateral. You want it to fall off smoothly to zero, and to
integrate plausibly to something that looks like the bumps I have
drawn on goniometric diagrams.
 

Q:

I'm having trouble figuring out how directional lights would fit
into distributed shadow and specular calculations. I think it should be
OK to assume directional lights don't create soft shadows (because they
don't have an area) but I know they should have specular highlights. For
area lights I simply take a bunch of samples from the BRDF and determine
the percentage of samples that hit the light and weigh accordingly, but
this won't work for directional lights (I'm assuming all my point lights
actually have a radius). Am I going about this in the wrong way? Or do
you have any advice on how I can get directional highlights working?

A:

Directional lights will only be seen if a ray is traced exactly in that
direction, which is extremely unlikely. Therefore, it is inappropriate to use
a distribution ray tracer to capture them. Your rays traced directly to light
sources, which you should continue to use, will capture them. These so-called
direct rays should contain diffuse and specular terms, just as they did
in programming assignment #1.

--------------------
Q:

I've read up on the coordinates, both in the FAQ and in showcase,
but I can't seem to get i3dm to give me the right coordinates for a
cube. It splits each face into 4 triangles and doesn't pick
coordinates so that the whole face appears as the image; each
triangle has a different mapping system. Surely i3dm can map it as a
planar solid, right?

A:

Well, I saw somewhat different but equally pathetic behavior: the cube
is actually a surface of revolution, so you can get nice texture
behavior in one direction but not in the other. For instance, typing
"surface st 0 0 4 2" does a reasonable job with the 4 side faces but
the top and bottom ones are hopeless.

It makes sense given his code base (he didn't even have cubes in
earlier versions of the modeller) but it's pretty annoying. I think
your options are either creating 6 different patches, moving them to
the right places and grouping them together, or just punting and
constructing the Inventor file by hand in the fabulous emacs modelling
system.

Q:

I am wondering why my bump map appears like a hole instead of a
bump. What determines whether the bump is "in" or "out" ?

A:

It's just a convention. You can either say that bright areas are out
and dark areas are in, or vice versa. You can either invert your
images with /usr/pubsw/bin/pnminvert or change the conventions of your
code.

------------------------------------------------------------------------

Q:

How do we find the surface partial derivatives for polygons, in order
to do bump mapping?

A:

A bunch of people have asked this in the last two days. Here's my
attempt at an answer.

For the record, there's a pretty good bumpmap explanation at
http://www.cs.helsinki.fi/group/goa/render/surfmapping/bumpmap.html
which saves me from having to type a lot of context. Ignore the part
about embossing, just read the top part. (Although I do recommend that
people take a look at Blinn's Siggraph78 paper just because it's nice
to read the originals. The Watt and Watt book mentioned at the bottom
of this reference also has a nice section on bump mapping, pages
199-201.)

Sadly, all these explanations assume you've already got your surface
partials, which are derivatives with respect to texture coordinates
(s,t). You only need to worry about this if you have a general
triangle mesh. If you just have a single quadrilateral with the most
obvious mapping where the (s,t) axes are exactly aligned with the
quadrilateral axes, then you're set. So the problem is to find
(BS,BT): the mapping of the texture coordinate basis vectors into
world space.

There's some function that maps (s,t) in texture space into (x,y,z) in
world space. That function can be expressed as a 3x3 matrix:

                | m11 m12 m13 |
    [s, t, 1] * | m21 m22 m23 | = [x, y, z]
                | m31 m32 m33 |

The basis vectors (BS,BT) of texture space are (1,0)-(0,0) and
(0,1)-(0,0). In the explanation at the URL above, these are called O_u
and O_v. If we plug that in for s and t, we find that

BS = [m11, m12, m13]
BT = [m21, m22, m23]

So what's M? Well, we also know that TM=V, so M = T^-1 * V, where
T = { {s1,t1,1},{s2,t2,1},{s3,t3,1}}
V = { {x1,y1,z1},{x2,y2,z2},{x3,y3,z3}}

A quick spin through Mathematica tells us that

BS =

[ ((t2 - t3) x1 + (t3 - t1) x2 + (t1 - t2) x3) / D ,
  ((t2 - t3) y1 + (t3 - t1) y2 + (t1 - t2) y3) / D ,
  ((t2 - t3) z1 + (t3 - t1) z2 + (t1 - t2) z3) / D ]

and BT =

[ ((s3 - s2) x1 + (s1 - s3) x2 + (s2 - s1) x3) / D ,
  ((s3 - s2) y1 + (s1 - s3) y2 + (s2 - s1) y3) / D ,
  ((s3 - s2) z1 + (s1 - s3) z2 + (s2 - s1) z3) / D ]

where D = -(s2 t1) + s3 t1 + s1 t2 - s3 t2 - s1 t3 + s2 t3

(the same result as the Maple output near the top of this FAQ, up to an
overall sign flip in D).

Note that if you precompute this info you can save it with each
triangle since it's invariant.
 
 

Q:

If I want to texture map a PPM onto some object, I have to have some
code to read in the PPM, right? Do I have to write this myself?

A:

No. Use the LoadPPM call in the xsupport libraries, which will load a
PPM file into a Canvas data structure.
 
 

Q:

We need to read a ppm or rgb format file when we implement texture
mapping, but we are only provided a function to read a ppm into a canvas. I
was trying to use an RGBColorImage class I wrote for another class but
ran into some problems, so I am wondering: are there PPM utility functions
available that we can use?

A:

/usr/pubsw/bin/sgitopnm should work fine. You could just use it in a
system command, and then read in the resulting file.
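
For example, something along these lines (the file names are placeholders;
the converted PPM can then be read with whatever loader you already have,
such as the LoadPPM call mentioned above):

#include <stdio.h>
#include <stdlib.h>

/* Convert an SGI .rgb texture to PPM by shelling out to sgitopnm; the
   resulting file can then be read with your existing PPM loader. */
int ConvertRgbToPpm(const char *rgbFile, const char *ppmFile)
{
    char cmd[1024];

    sprintf(cmd, "/usr/pubsw/bin/sgitopnm %s > %s", rgbFile, ppmFile);
    if (system(cmd) != 0) {
        fprintf(stderr, "sgitopnm failed on %s\n", rgbFile);
        return 0;
    }
    return 1;
}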
 
 

Q:

Is there a way I can get at my own textures using i3dm?

A:

You can set up your own texture.index file (copy over the one from
/usr/class/cs348b/i3dm as a base). The textures need to be in SGI
(.rgb) format; use pnmtosgi (the inverse of sgitopnm) to convert PPMs to .rgb.
 
 

Q:

What would the desired behavior be if a model had both per-vertex
materials and a color texture map specified? Should one just be
ignored?

A:

In a full-featured renderer you'd have some mode flag, often called
"decal" vs. "modulate", in order to choose which way to go. It's fine
with us if you just implement "decal" mode, where you ignore the
per-vertex material properties and just use the texture to decide the
color of the surface at that point. (You'd still do the shading
calculations so that the surface looks like it's lit at that point.
There's one more texture mode where you don't do any lighting at all,
but that's not well-suited to your real-object purposes.)

You could also instead choose to do modulate mode, where you blend the
two, if that would serve your purposes better for your real-world
scene. Modulate is probably most useful if your texture is greyscale
instead of full color. Of course if you want both modes in your scene
you would need to have a mode flag, which you can pass through i3dm
and composer with a multi-part name/label (i.e.
"colortext=mycolor.ppm mode=decal otherparam=foo "). Just document your
choice in your writeup.
 

Q:

Help! I can't get the textures to work in i3dm, it complains about not
finding files in /usr/demos/data!

A:

It will work after you source the class init.csh file which sets
the environment variable I3DM_DIR to /usr/class/cs348b/i3dm. (For the
curious, the texture index file in the I3DM_DIR directory had entries
with /usr/demos/data instead of /usr/class/cs348b.) Let me know if
there are any more problems.
 
 

Q:

Help! There are all these non-CS348B people using the lab so there's
no room for me to work.

A:

Graphics class students do in fact have priority for both SGI labs -
that means you actually get to kick off non-graphics students if it's
filled up. I just double-checked this with the Sweet Hall consultants
and it's still true. Be polite but firm, and let me know if a problem
situation develops.
 
 

Q:

Extra credit is meaningless if it results in simply grading out of
120% instead of 100%. This unlimited extra credit system rewards those
who have no other courses and penalizes those who have other things to
do as well as this project. Is that a fair grading system?

A:

To avoid the potential unfairness you refer to, we omit extra credit points
when computing the class curve. Extra credit can raise your grade, but it
won't lower anybody else's.

------------------------------------------------------------------------

Q:

It says in the handout for Texture Map
"Required primitives: spheres and triangle meshes (and quads)"
What do you mean by triangle mesh, just a regular triangle?
I would imagine composer output will break everything into triangles,
with the corresponding s and t texture coords, right?
So why do we still have to deal with quads?

A:

Yes, a triangle mesh is just a bunch of triangles, and is generated by
composer for any object made in i3dm. If you assign a texture to an
object in i3dm, the triangle mesh (s,t) texture coordinates will be
properly output from composer.

A quadrilateral is just a simple test case that will allow us to
easily check that your texture mapping code is correct. (On a more
complex model it might not always be immediately obvious.) Just make a
single flat rectangular patch in i3dm and assign a texture to it.
Composer will split it into two triangles with proper texture
coordinates. Just make a simple test image with the camera pointed at
the quadrilateral to show us during the face-to-face grading session.
 

Q:

The SphereIO structure doesn't have any s,t co-ords.
Does this mean the texture map is "procedurally" defined in our
Intersect_Sphere function ?

A:

You can just say that all spheres are parametrized with texture
coordinates in the same way. The mapping on p. 48 of Glassner is fine.
So yes, the mapping is found by hardwired code built into your
intersect_sphere function, with no explicit (s,t) coordinates needed.
But the word "procedural" has a different specific meaning, namely
that the texture itself is computed on the fly.
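
One common hard-wired parameterization (a sketch only; check for yourself
whether it matches the Glassner mapping exactly) uses longitude for s and
latitude for t:

#include <math.h>

/* Given the unit vector (nx,ny,nz) from the sphere's center to the hit
   point, compute latitude/longitude style texture coordinates in [0,1]. */
void SphereST(double nx, double ny, double nz, double *s, double *t)
{
    double phi   = atan2(ny, nx);        /* longitude, in (-pi, pi] */
    double theta = acos(nz);             /* colatitude, in [0, pi]  */

    *s = (phi + M_PI) / (2.0 * M_PI);    /* 0..1 going around the equator */
    *t = 1.0 - theta / M_PI;             /* 0 at one pole, 1 at the other */
}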
 

Q:

Say I want to implement Bump Mapping.
How can I add a flag to the Object_IO, e.g. hasBumpMap?
I guess in that case I need the names of 2 image files,
one for the texture map and one for the greyscale bump map.
How do I pass two names?

A:

You can encode multiple items into the single Label field, just like the
tetrahedron example in the help session notes:
label "Table bumpmap=woodbmp.ppm texmap=woodtex.ppm"

In the scene_io data structures you'll be passed that string in the
name field. Then you can parse that string to recover the information.
You'll probably want to assign some texture in composer in order to
force the creation of texture coordinates. Then when you run
i3dm_clean the texture filename will be encoded into the name field.
It would be a pain to go in and change that filename to your desired
string (with either just your own texture name or some combination of
fields) by hand with an editor every time you change the model. If you
always use the same texture from i3dm, then you can easily automate
this with a trivial line of sed or perl, like:

perl -pe 's|"/usr/demos/data/i3dm/textures/bark.rgb"|"mytext=foo.rgb mybump=bar.rgb"|g' < label.out
 

You don't need an explicit bump map flag - the presence of the token
called "mybump" can be your signal to yourself that there's bump
mapping.
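
A minimal sketch of parsing such a label (the key names bumpmap/mybump are
whatever convention you picked; nothing here is part of the provided
libraries):

#include <string.h>

/* Scan a label like "Table bumpmap=woodbmp.ppm texmap=woodtex.ppm" for a
   key such as "bumpmap" and copy its value into 'value'.  Returns 1 if
   the key was present, 0 if not. */
int GetLabelField(const char *label, const char *key, char *value, int maxlen)
{
    char buf[1024], *tok;
    size_t keylen = strlen(key);

    strncpy(buf, label, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    for (tok = strtok(buf, " \t"); tok != NULL; tok = strtok(NULL, " \t")) {
        if (strncmp(tok, key, keylen) == 0 && tok[keylen] == '=') {
            strncpy(value, tok + keylen + 1, maxlen - 1);
            value[maxlen - 1] = '\0';
            return 1;
        }
    }
    return 0;
}

The "no bump map" case is then simply GetLabelField(...) returning 0.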
 
 

Q:

Is there a way to make the mouse pointer visible in the canvas window?
It would be useful to correlate the microscopic and macroscopic
views.

A:

An easy way to do this is to use the MouseInCanvas routines. The one
that came with the example draws a single pixel into the framebuffer
at the spot where you click in the image - red if you're holding down
shift, green if not. It's true that you'll leave behind colored pixels
if you do this. The trivial fix is to not worry about it and just
redraw the image from time to time; a slightly nicer approach would be
to save off the old pixel value and replace it the next time the user
clicks to turn a different pixel green. (You probably want to delete
the printf line after you make sure everything works right.) It's
pretty clear what's going on when you just look at the code:
 

static void MouseInCanvas1(int X, int Y, unsigned int ButtonDown)
{
    /* Diagnostic output; delete this once everything works. */
    printf("Mouse in canvas 1 at %d %d; buttons/modifiers %d\n",
           X, Y, ButtonDown);

    if (ButtonDown &&
        X >= 0 && X < Canvases[0].Width &&
        Y >= 0 && Y < Canvases[0].Height)
    {
        /* Red if shift is held down, green otherwise. */
        if (ButtonDown & ShiftMask)
            SET_RED(PIXEL(&Canvases[0], X, Y), 255);
        else
            SET_GREEN(PIXEL(&Canvases[0], X, Y), 255);
        UpdateCanvas(&Canvases[0], X, X, Y, Y);
    }
}
 

Q:

i3dm_clean seems to destroy any objects beyond the first object in
the scene. If I have a cube and a cylinder, it deletes the cube. Am
I using it incorrectly?

A:

If there are multiple objects in a file, i3dm_clean will merge them together
into a single triangle strip, with labels suitable for passing on to
composer. If you load the cleaned file into composer you should still
see both. However, if you have multiple objects with different texture
maps, that info will be lost. So you need to create each one
independently and save them out into separate files. Think of i3dm as a
modeller for making a single object, and then put the objects together
into a scene using composer.
 

------------------------------------------------------------------------
 

Q:

I have gone through most of the material on adaptive stochastic
sampling, and I still find myself a little confused about a certain
issue. That is, how one determines when and where to subdivide. Let's
say I randomly sample a pixel in 4 spots. Should I simply average
those values together to get the mean, and subdivide on any sample
that deviates too far from the mean? It seems error prone. Is there a
more accurate way?

A:

I assume by "subdivide on any sample that deviates too far" you mean
"subdivide if any sample deviates too far". That's right. It's
certainly not guaranteed to be perfect; that's why you have noise. You
can improve your chances of getting well-distributed sample points by
constraining the locations of your samples.

You can cast each sample point at a location which is random but
constrained to be within a different quadrant of the square. It's
particularly obvious in this scheme that you can/should reuse samples
instead of throwing them away if you have to subdivide.
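
Here's a rough sketch of that jittered scheme for a single pixel; the
TraceLuminance hook and the threshold are placeholders for whatever your
ray tracer already provides and whatever tolerance you tune.

#include <math.h>
#include <stdlib.h>

extern double TraceLuminance(double px, double py);  /* hypothetical: trace a
                                                         ray through image
                                                         point (px, py) */
static double frand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* One jittered sample per quadrant of the pixel whose lower-left corner
   is (x, y); flag the pixel for subdivision if any sample strays too far
   from the mean. */
double SamplePixel(double x, double y, double threshold)
{
    double v[4], mean = 0.0;
    int i, needSubdiv = 0;

    for (i = 0; i < 4; i++) {
        double qx = (i & 1) * 0.5, qy = (i >> 1) * 0.5;   /* quadrant corner */
        v[i] = TraceLuminance(x + qx + 0.5 * frand(),
                              y + qy + 0.5 * frand());
        mean += 0.25 * v[i];
    }
    for (i = 0; i < 4; i++)
        if (fabs(v[i] - mean) > threshold)
            needSubdiv = 1;

    if (needSubdiv) {
        /* Recurse (or add more samples) per quadrant here, reusing the
           four samples you already have. */
    }
    return mean;
}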
 

Q:

Say originally I have 4 samples. If I find out the Standard Deviation
> Threshold, I subdivide. Do people usually subdivide all 4 of them ?
Or do they use something like checking | sample value - Average |
against some other threshold, and then only subdivide that particular
region? Can you give a hint about which one might get better results?

A:

Well, you *have* to subdivide the whole thing unless you have explicit
bounds on subregions that you used when casting your 4 samples. For
example, if you cast 4 totally random rays within a square you can't
just subdivide part of the square, since you don't have any notion of
"part". But if you cast each of the four sample points at a location which
is random but constrained to be within a different quadrant of the
square, then you could use a scheme where you subdivide only some
quadrants. Offhand, the methods that I can think of subdivide all four,
but you could certainly run some experiments.
 

Q:

How do we do animation for proj3? After generating a sequence of
images, how do we compile them into a movie?
 

A:

There's a tool on the SGIs called mediaconvert. The UI is a bit hard
to decipher, but once you convince it to accept your input files it
does a nice job of making them into a movie. You can then play that
movie (you probably want QuickTime, not MPEG) using movieplayer.

Enter a directory name in the "Video to Convert" field. Then pick
Numbered Images from the radio buttons that appear, which will cause
many other fields to appear. Enter the start and stop frame numbers
and a "file template". If you had files myframe00.tiff through
myframe59.tiff, you'd enter 'myframe##.tiff' in that field. If it's
happy that your entries correspond to real files, it will then throw
up a video input parameter window and ungrey all the output stuff on
the right of the UI. It often balks - I sometimes get better results
when I start it up from the directory that the files are in, and thus
use '.' for the directory name.
 
 

------------------------------------------------------------------------

Q:

I'm sending you my submission via email.

A:

No!! Do *not* send me your submission via email. Use the submit
program, that's what it's for. If you realize a few minutes after
submitting that you made a mistake on some file, you can make a new
directory, copy over only that file, and submit that. Note that the
name of the tar file that the submit program creates is
<directoryname>.tar.gz, so "submit" is a bad directory name; something
more unique like "proj3" or "proj3.retry" is a much safer name.

It's fine to send me mail explaining why you've submitted something
past the deadline. Just don't include the submission itself in that
mail!

Q:

I had a question about texture mapping. When you have a triangular
mesh along which you want a texture to be mapped, how do you specify
the texture coordinates for each vertex of the triangle? Is that
something we have to figure out manually, or do we have a procedure
that figures it out? I think it'd be very difficult to "stretch" the
2-D square texture over the 3-D mesh in just the right way.

A:

If you're texturing a triangle mesh you need "texture coordinates" in
addition to vertices and normals. Usually these are passed into the
rendering system from the modelling system. So you can assume that
each vertex of the triangle mesh has texture coordinates (s,t) which
you can use as input to your barycentric interpolation code.
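
For example, if your intersection routine already hands you the barycentric
coordinates (b0, b1, b2) of the hit point, the interpolation is just
(a sketch):

/* Interpolate per-vertex texture coordinates at a hit point with
   barycentric coordinates b0, b1, b2 (where b0 + b1 + b2 == 1). */
void InterpolateST(const double sv[3], const double tv[3],
                   double b0, double b1, double b2,
                   double *s, double *t)
{
    *s = b0 * sv[0] + b1 * sv[1] + b2 * sv[2];
    *t = b0 * tv[0] + b1 * tv[1] + b2 * tv[2];
}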

The i3dm modeller will create the texture coordinates for you
automatically. There are various i3dm options for tweaking the
mappings. Take a look at the Texture and Attributes: Texture
Correction sections of the i3dm documentation, using the command
showcase -v /usr/class/cs348b/i3dm/i3dm_help.sc

Executive summary: default is (0,0) in lower left corner, (1,1) in
upper right corner. Going outside that range leads to repeated
textures. The tools menu has a nice interactive texture coordinate
editor. You may need to investigate texture correction (under
attributes menu) if you do nonuniform scaling or similar operations.

If the i3dm capabilities are not enough, you can always write
something custom yourself.
 

Q:

Is it true that composer cannot read its own format (.out) back?

A:

Yes, that's true. You want to keep around Inventor (.iv) format files,
which is the output of i3dm. If you do any processing or file
creation, Inventor is the right file format to use. Then just think of
composer as the end of the line, where you simply *place* objects and
lights and viewpoint.
 

Q:

How do I find the partial derivatives of the sphere surface at a given
intersection point, in order to do bump mapping?

A:

It depends on which mapping you do. If it's the same as the
latitude/longitude mapping used for earth, then the partial
derivatives would be the horizontal tangent (i.e. the direction of a
latitude line on the globe) and the vertical tangent (i.e. the
direction of a longitudinal line on the globe).
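
As a sketch, using the same latitude/longitude parameterization as in the
sphere texture-coordinate answer above (the Vec3 type is again a made-up
helper):

#include <math.h>

typedef struct { double x, y, z; } Vec3;   /* hypothetical helper type */

/* Partial derivatives of a lat/long-parameterized sphere of radius r at
   the point with colatitude theta and longitude phi:
   P = r * (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)).
   dPdphi points along a latitude line, dPdtheta along a longitude line. */
void SphereTangents(double r, double theta, double phi,
                    Vec3 *dPdphi, Vec3 *dPdtheta)
{
    dPdphi->x   = -r * sin(theta) * sin(phi);
    dPdphi->y   =  r * sin(theta) * cos(phi);
    dPdphi->z   =  0.0;

    dPdtheta->x =  r * cos(theta) * cos(phi);
    dPdtheta->y =  r * cos(theta) * sin(phi);
    dPdtheta->z = -r * sin(theta);
}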
 

Q:

I have a question about distribution ray tracing for gloss : Suppose I
use a BRDF like the isotropic Gaussian one given in Ward's paper about
anisotropy (in the reader). If I decide not to do path ray tracing
(because we're already doing adaptive stochastic supersampling), then
I need to (a) figure out how many rays to send out (b) figure out
their directions. Part (a) is no problem. However, I'm not sure I
completely understand how to use the BRDF to decide which direction
these new rays go in. Should I be dividing my volume up into equal
solid angles, sending out rays, and then doing a weighted average of
the returned values, with the weights determined by the BRDF?

A:

That's one way to do it. However, with near-mirror surfaces, all of
the weights (except one) will be nearly zero. Thus you'll be doing a
lot of work to get rays that contribute very little to the overall
image.

Another way to do it would be to figure out how to divide the
goniometric diagram into areas of equal volume, and send a random ray
into each area of equal volume. Or, if you prefer, for each ray, just
make its probability of going in a certain direction proportional to
the BRDF for that direction (given the incoming eye ray direction).
This way, each ray will be weighted roughly equally, spreading out the
work more. However, it may lead to more noise, meaning that it may
require more samples.
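
One well-known way to get that "probability proportional to the lobe"
behavior is the inversion formula for a Phong-style cosine-power lobe around
the mirror direction (only an approximation of Ward's Gaussian lobe, and all
the names below are hypothetical helpers):

#include <math.h>
#include <stdlib.h>

typedef struct { double x, y, z; } Vec3;   /* hypothetical helper type */

static double frand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* Pick a direction distributed as cos^n(alpha) around the mirror direction
   R, where alpha is the angle from R.  U and V are unit vectors spanning
   the plane perpendicular to R; larger n means a tighter (shinier) lobe. */
Vec3 SampleLobe(Vec3 R, Vec3 U, Vec3 V, double n)
{
    double u1 = frand(), u2 = frand();
    double cosA = pow(u1, 1.0 / (n + 1.0));   /* inversion of the cos^n pdf */
    double sinA = sqrt(1.0 - cosA * cosA);
    double phi  = 2.0 * M_PI * u2;
    Vec3 d;

    d.x = sinA * cos(phi) * U.x + sinA * sin(phi) * V.x + cosA * R.x;
    d.y = sinA * cos(phi) * U.y + sinA * sin(phi) * V.y + cosA * R.y;
    d.z = sinA * cos(phi) * U.z + sinA * sin(phi) * V.z + cosA * R.z;
    return d;
}

Since each direction is already drawn with probability proportional to the
lobe, you can average the returned colors with roughly equal weights, as
described above.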
 
 

Q:

When i3dm creates a face, it outputs an .iv file containing faces with
more than 3 vertices. If a face is non-convex, composer will get confused.

A:

Here's the solution. Let's say I had a file, concave.iv,
which contained a non-convex polygon. To convert it to triangles:

1. Edit the .iv file. Find the lines:

ShapeHints {
hints (SURFACE | ORDERED | CONVEX)
}

Here, i3dm is lying: the polygon is not convex. So delete the CONVEX
flag:

ShapeHints {
hints (SURFACE | ORDERED)
}

2. Run ivfix on the file, to convert the faces to triangles.
Use "-a" to output ASCII:

ivfix -a < concave.iv > convex2_1.iv

3. Unfortunately, ivfix outputs Inventor 2.1, which composer can't
read. So run ivdowngrade on the file, to convert it to Inventor 2.0:
 

ivdowngrade -v 2.0 convex2_1.iv convex.iv

Now convex.iv is readable by composer, and it should not contain
polygons with more than 3 vertices. However, ivfix over-optimizes
things, so the triangles may not be using the material. Delete
the following lines from the .iv file, if they exist:

PackedColor {
}
TextureCoordinate2 {
point [ ]
}

And finally, you should have a reasonable .iv file.
 
 

Q:

I was just wondering what all we had to do during the demo... are we
supposed to show our raytracer rendering the image? Are we supposed to
show our algorithms or our interface? What will the judges be looking
for? Do we need to dress up? Should we have a rehearsed demo, or will
we just converse? Will you want a code walkthrough?

A:

For the competition, you'll just show your final images, explain what
you did, etc. There's no need to dress up.

For the 15-minute face-to-face grading, you'll explain what extensions
you did, show us your pictures, show us your real object, and explain
your algorithms. Talk about what tradeoffs you made, and why. We'll
ask questions, too, about how you approached things.

If your extensions involve visualizations or user-interface stuff,
you definitely should be ready to give a demo. If not, then perhaps
you should just be ready to show it raytracing a small picture.

We will not have time for a code walkthrough. Your images (or possibly
animations) will show us what you have done. Remember, if we can't see
it the day of the demo, you don't get credit for it.

Should you rehearse? Not formally. You should know what extensions
you did, and be ready to explain them. It will be mostly conversation,
but it's nice for you to have your thoughts organized, since we only
have 15 minutes to evaluate your many days of work... (Overhead
slides or PowerPoint presentations are *not* required. :-)
 

------------------------------------------------------------------------
This FAQ is mainly based on actual questions sent to TAs in previous
years. Thanks to last year's TA Lucas Pereira for the answers, and
last year's students for the questions. Thanks in advance to you for
your questions. New questions are added to the top of the document
with timestamps, so that it's easy to find the new stuff.
-Tamara

Since the programming assignment has not changed, this document
is an extremely useful source of information.  Continuing the tradition,
we will update the document with your questions. Thanks to last year's TA for
the answers and for providing help.
 Elena

Copyright 1998 Marc Levoy
updated by lena@cs