Project 1 FAQ
CS348B Spring 98
Prof: Marc Levoy
TA: Tamara Munzner

------------------------------------------------------------------------
Mon Apr 20 11:54:30 1998

Q: What am I supposed to submit?

A: A README, your five images, your two models (both .iv and .out files), your source, and an executable. No object files. The submit script wants the name of a directory with all and only these files in it, and will create a tar.gz file. Even though your individual images might be .75MB apiece, they compress well, so the compressed tar file will probably be < 1MB. It would be useful to name your directory something like "p1" or "project1" as opposed to "submit", since you will also use this script to submit project 2. (Don't worry about the name if you've already submitted, though.)

Q: What should I put in my README file?

A: A brief description of your models, a *brief* outline of your code, and all choices you made in your implementation that we should know about when judging your images. For example, definitely include any global parameter tweaks, like the values for scaling of ambient lights, if you did so. Definitely include assumptions and algorithmic choices, like "I used just ktran" or "I used ktran*diffColor" or "I used ktran*diffColorNorm". It doesn't have to be long; a page is fine.

------------------------------------------------------------------------
Sun Apr 19 22:48:38 1998

Q: The semi-transparent objects in my test3 image are much darker than in the sample image.

A: A previous answer below told you to use "ktran*diffColor" to attenuate the effect of the light ray so that your shadows would be properly colored. That answer has just been updated. You will get better results with "ktran*diffColorNorm", where diffColorNorm is the NORMALIZED diffColor. Just divide all three channels by the biggest component. For instance, if diffColor is (.2, .5, .2), diffColorNorm should be (.4, 1.0, .4).

Q: I can't get gdb to work.

A: Use dbx instead; gdb appears to be broken on the firebirds and raptors. If you're having trouble even with dbx, try using SGI "cc" or "CC" instead of GNU "gcc" or "g++".

------------------------------------------------------------------------
Fri Apr 17 23:21:50 1998

Q: In another question you told us not to "always blindly flip the normals .... [or] your lighting will be wrong". But then how come in the Glassner book, p. 283, inside the Trace function, it says:

    if (VecDot(ray->D, N) > 0) VecNegate(N, N);
    Shade(...);

A: My previous answer was misleading, and I removed it. Here's a better one:

Short answer: it's safe to flip the normals *if* you've already recorded whether you're entering or exiting a surface. Glassner keeps track of this in his Isect data structure. Glassner flips his normals and then assumes later in his shading routines that the normal is always facing outward. If you don't flip your normals, you'll need to deal with the fact that you might have a negative dot product, and so you should take the absolute value before using it.

Long answer: So why doesn't this mess up the lighting? If the normal was already facing out, it's the front of an object and you're fine. If the normal was facing back, it's the back face on the inside of an object. (This can't occur on the primary rays traced from the image plane, but it could happen on the recursive traces.) If the object is totally opaque, you'll discover this when you trace the shadow ray and find that the light is blocked by the front of the object. Thus your lighting won't be wrong. (Although your TA can be sometimes - this is what I forgot about in my earlier answer...) If the object is semi-transparent, then there will in fact be a lighting contribution from that inside surface. You *do* need to know whether you're inside or outside the object so that you know whether you're entering an object (moving from refractive index 1.0 of air to 1.5 of the object) or leaving it (moving from the object back to air). You'll also need it for the situation discussed in the next question. That's why you either keep track that you flipped the normal or have a field in your intersection data structure that tells you where you are.
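Here's a minimal sketch of that bookkeeping, in the spirit of Glassner's Trace snippet above. The type and function names here (Vec3, Isect, VecDot, VecNegate, OrientNormal) are placeholders, not part of scene_io.h or Glassner's actual code; adapt them to your own structures.

    typedef struct { double x, y, z; } Vec3;

    typedef struct {
        Vec3 N;        /* surface normal at the hit point */
        int  entering; /* 1 = hit a front face, 0 = hit a back face */
    } Isect;

    double VecDot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    void   VecNegate(Vec3 *v)     { v->x = -v->x; v->y = -v->y; v->z = -v->z; }

    /* Record entering/exiting FIRST, then flip the normal so that the
       shading code can assume it always faces the incoming ray. */
    void OrientNormal(Vec3 rayDir, Isect *hit)
    {
        if (VecDot(rayDir, hit->N) > 0) { /* back face: ray and normal agree */
            hit->entering = 0;
            VecNegate(&hit->N);           /* safe, since we recorded it first */
        } else {
            hit->entering = 1;
        }
    }

Later on, hit->entering tells you whether the index-of-refraction transition is 1.0 -> 1.5 (entering) or 1.5 -> 1.0 (leaving).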
Q: What should I do about "absorption" when a ray passes through a semi-transparent object?

A: Rays intersect semi-transparent objects twice: once in front, and once in back. For HW#1, you should only scale by k_trans *once*, not twice: don't scale when entering the object, just when leaving it. (You can think of the entering case as scaling by the k_trans of the air you came from, which is 1.0.) This is essentially a hack, since you'll get the same results no matter how thick your object is. It's good enough for now, since we're ignoring other things too, like the effects of refraction when computing your shadow rays. If you're feeling motivated, you could do something that's closer to the right thing: use the equation alpha_total = 1 - (1 - alpha_perunitdist)^d, where d is the total distance travelled inside the object and alpha_perunitdist is k_trans. The even more motivated might choose to deal with the so-called "participating media" problem for HW#3. Prof Levoy will cover the real mathematics of this near the end of the class.

Q: Do we have to deal with one transparent object inside another with a different refractive index?

A: No, you get to assume everything that's not air has index 1.5 for HW#1.

Q: In the Graphics Gems II handout, what's the correspondence between Glassner's variables and the ones in scene_io.h?

A: In my online version I forgot to add that part. Sorry, it's there now. (And I now know how to do tables in HTML :)

Q: Should I apply diffuse/specular shading to interior surfaces of transparent objects?

A: Yes.

Q: Do I spawn reflection rays from interior surfaces?

A: Yes.

Q: If a solid object is within a transparent object, does the transparent object cast a shadow on the object inside?

A: Yes, if you interpret "cast a shadow" to mean "block some of the light so that the final color of the inside thing is darker than it would be if the outer translucent object weren't there".

Q: You said the diffuse color should be scaled by (1-Kt) on semitransparent objects. Do I also scale the specular highlight in the same way? Do I also scale the reflected contribution by (1-Kt)? Do I scale the ambient contribution by (1-Kt)?

A: For semitransparent objects, scale diffuse and ambient, but not reflective or specular. Think of a piece of almost totally clear glass - you see very little of its "color", which is the combination of the diffuse and ambient terms. Although it's almost colorless, it may still be very shiny, which is why you *don't* scale the specular term or the reflected ray by (1-Ktran).
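As a minimal sketch of how those terms might be combined (the Color type, CombineTerms name, and argument layout are hypothetical placeholders, not our framework's API):

    typedef struct { float r, g, b; } Color;

    /* ambient and diffuse get scaled by (1 - ktran); specular, reflected,
       and transmitted do NOT.  Here reflected is assumed to be already
       weighted by specColor, and transmitted by ktran (applied once, when
       leaving the object, per the absorption answer above). */
    Color CombineTerms(Color ambient, Color diffuse, Color specular,
                       Color reflected, Color transmitted, float ktran)
    {
        float s = 1.0f - ktran;
        Color c;
        c.r = s*(ambient.r + diffuse.r) + specular.r + reflected.r + transmitted.r;
        c.g = s*(ambient.g + diffuse.g) + specular.g + reflected.g + transmitted.g;
        c.b = s*(ambient.b + diffuse.b) + specular.b + reflected.b + transmitted.b;
        return c;
    }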
------------------------------------------------------------------------
Wed Apr 15 20:01:28 1998

Q: Load PPM causes a crash.

A: Check the file size with "imginfo". You can't load files which are bigger than your canvas size. Either increase the canvas size in your code or crop the ppm image in xv.

Q: The Read Composer Scene button doesn't work.

A: Try running composer from the same directory as your own binary. You might see a spurious error message

    Error: could not find Composer, using composer.out

but the new scene should indeed be dumped to the file "./composer.out" and loaded.

Q: My reflections are too bright. I'm scaling the reflected ray color by material.shininess. Is this right?

A: Nope. You should scale by any of the components of specColor (all 3 will always be the same). The material.shininess field is the *exponent* in the specular term, stored as a number between 0 and 1. Multiply it by 128 and use it for "n" in equation 16.55.

------------------------------------------------------------------------
Tue Apr 14 21:43:05 1998

Q: 1. When I hit a mirror sphere and spawn a reflected ray, I want to be sure this ray doesn't intersect the sphere that spawned it; otherwise it will look like lots of little dots. 2. But when I spawn a transmitted ray off of a glass sphere, do I want it to be able to intersect the same sphere (but on the different side) to simulate the moving of the ray out of the glass and into the air?

A: 1. Right. If you had infinite precision you wouldn't have this problem - it's an artifact of the limited precision of your representation. You should use a tolerance value (something like .000001) instead of an exact comparison so that slight roundoff errors don't cause these problems. See pp. 46-47 in the Glassner book for more details. 2. Yes, you definitely want that intersection with the other side. It should be fine if you check whether the ray origin is on the surface of the sphere vs. outside it using a tolerance instead of an exact compare.

------------------------------------------------------------------------
Tue Apr 14 20:13:59 1998

Q: With directional lights, how do we determine the width of the cone? Is it in dropOffRate or cutOffAngle? (The comment in the scene_io.h file says cutOffAngle is for spot lights.) And this brings me to another question. Do we support spot lights? What are those, anyway?

A: You do not need to support spot lights for HW#1. In the immortal words of the project handout: "You only need to handle directional and point light sources." I think you're confused because your mental picture of a directional light actually corresponds to a spot light. Spot lights have cones and a dropoff rate. Directional lights don't have either. They're infinitely far away, and all rays of incident light from them are parallel. (This might sound weird, but think of it like sunlight - the sun is so far away that for all practical purposes its rays are parallel when they hit the earth.) Directional lights only have a direction (which is affected by the current transformation matrix). Point lights do have a location and radiate in all directions. Since we use the Inventor data structures, there is no field for attenuation. The README.composer file tells you to use Eqn 16.8 for the attenuation factor "fatt" and suggests starting values for the constants (although you might have to tweak them). Spot lights are more sophisticated light models that correspond to lights as you know them in the real world. Luckily for you, they're beyond the scope of this assignment. Thus ignore the cutOffAngle and dropOffRate fields for now - they should not occur in any of the test scenes. But do implement attenuation for point lights.
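A minimal sketch of that attenuation, assuming Eqn 16.8's form fatt = min(1, 1/(c1 + c2*d + c3*d^2)). The constant values below are made-up placeholders; use the starting values suggested in README.composer instead.

    /* Attenuation for a point light at distance d (Eqn 16.8).
       c1, c2, c3 are placeholder values -- tweak them per README.composer. */
    double Fatt(double d)
    {
        double c1 = 0.25, c2 = 0.1, c3 = 0.01;
        double f = 1.0 / (c1 + c2*d + c3*d*d);
        return (f < 1.0) ? f : 1.0;   /* clamp: never amplify the light */
    }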
Below is some ASCII art that may shed some light on the subject :)

directional light

    --------------->
    --------------->
    --------------->
    --------------->

fields: direction
position is infinitely far away. no light ray attenuation.

point light

        ^       ^
         \     /
          \   /
           \ /
      <-----*----->
           / \
          /   \
         /     \
        v       v

fields: position
specific position. use fatt for light ray attenuation.

spot light (don't implement now!)

           ^
          /.
         / .
        /  .
       *-------->
        \  .
         \ .
          \.
           v

fields: location, direction, dropOffRate, cutOffAngle
specific position, direction, attenuation, and cone.

Q: Will it be important down the road (in Projects 2 and 3) for us to stick strictly to the Heckbert data structures? The handouts seem to hint that read_scene() should replace Heckbert's SceneRead(). If so, this also seems to imply that we should use SceneIO * rather than Comp * as our main scene root. This has a semi-ripple effect when attempting to port Heckbert's code. I.e., our Intersect prototype should take an ObjIO * rather than a Prim *. Is this intentional? Are we going to make it harder and harder for ourselves to leverage the examples as we go along by doing this?

A: You definitely want to start out by using the SceneIO data structures instead of Heckbert's structures, since that's the code framework that we've given you. When we encouraged you to use his code as a base, we meant the subroutine breakdown and parts of the code inside his subroutines, but not his data structures. For instance, there's no need to mess around with the Comp* structure, since you're not doing a CSG tree - you're just doing simple primitives. There is some of his code you can use verbatim, but where he uses structures like comp and prim, you need to use your own sceneio structures. You might choose to use some of the *ideas* from his data structures, but don't use the actual names etc. verbatim. Where he says prim*, you say objio*. On the idea side, though, you might for example decide that you like his idea of having a top-level "object/primitive" class that breaks down into subclasses like triangle meshes, quad meshes, quadrics, spheres, etc. for HW #3. Some people have decided to build their own data structures and copy fields from SceneIO into their own stuff. Most don't do this for HW#1, though - it's more of a HW#2 and HW#3 thing.

Q: I've read the README.composer file, but I still have some questions about how to translate the composer data structure to a more convenient form. First, will I have to externally specify what k_diffuse is, since diffColor is pre-multiplied? Second, the README says to take ambColor as Ia. I had thought Ia is a global constant - am I wrong? And would I also have to specify k_ambient externally as well?

A: No, you don't need to specify Kd separately. Just use diffColor in place of the entire "Kd*Od" term in Whitted's eqn. (In Prof Levoy's simplified eqn of HO#5, he just calls that whole term "Kd".) Think of this term as the "base color" for the object. Keep in mind that the whole concept of the "ambient contribution" is a total hack! It's just a fast, cheap way to avoid solving the global illumination problem without looking totally horrible. There are many ways of splitting it up: one is that there's a global quantity GA which is like an "ambient lightsource" (totally nondirectional and never shadowed), and then each material has a 3-channel "ambient color" and a floating-point "ambient coefficient". Multiplying them all together gives you the "ambient contribution". That's in the Whitted eqn. A somewhat simpler way to think about it is that each material has a "base color" which can be modified by a 3-channel "ambient scaling factor/color". The base color is diffColor, and the ambient scaling factor is ambColor. Remember that since none of this is physically correct, we're just doing what's convenient! In composer the ambColor is set to something like (.2, .2, .2) unless changed. That gives the result of having an ambient contribution that's the same color as the base color but scaled way down, which is often what you want in a scene. So no, you don't need to specify Ka externally either. Just use diffColor*ambColor for the ambient contribution.
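A minimal sketch of that per-channel product (the three-float array layout and function name are placeholders; use your own color type):

    /* Ambient contribution as described above: per-channel product of the
       material's diffColor ("base color") and ambColor (scaling factor). */
    void AmbientTerm(const float diffColor[3], const float ambColor[3],
                     float out[3])
    {
        int i;
        for (i = 0; i < 3; i++)
            out[i] = diffColor[i] * ambColor[i]; /* e.g. scaled by (.2,.2,.2) */
    }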
------------------------------------------------------------------------
Fri Apr 10 02:59:41 1998

Q: I'm having problems compiling.

A: Before you try to compile, make sure that you have set the environment variable $HOSTTYPE to the right value, and then run the initialization setup script:

    source /usr/class/cs348b/init/init.csh

Choices for HOSTTYPE are iris4d, sun4, alpha, decstation, rs6000, hp9000s700. The default is iris4d, which is for SGIs.

Q: I've looked at Glassner's book and I don't quite understand why his "intersect" routine returns a list of all intersections. It seems to me that the only intersection we really care about is the first one (lowest positive time past a threshold).

A: For forward rays (starting ray, reflections, and transmissions), you are right. Rays that we cast to the lights from each intersection, though, might pass through multiple transparent objects, and you'll want to accumulate the shadow factor (Si in Whitted's model). Thus if the ray passes through several transparent objects on the way to a light, Si would be the product of the material transparency factors and their normalized base colors (ktran*diffColorNorm). You can find diffColorNorm by dividing all three channels by the largest one, i.e. (.2, .5, .2) would become (.4, 1.0, .4).

Q: For every semi-transparent object between the light and the point being investigated, do we just multiply their ktran values together and use this to scale the color contribution by the light? We don't have to deal with the effects of refraction on the shadow rays?

A: Yes. Refract only the primary ray but not the secondary rays for HW 1. See the above answer for exactly how to find the color contribution.
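As a minimal sketch of that accumulation (all names and the array layout are placeholders; it assumes you've already collected the transparent objects the shadow ray passes through):

    /* Accumulate Si along a shadow ray: start at full light and multiply
       in ktran * diffColorNorm for each transparent occluder.  An opaque
       occluder (ktran = 0) zeroes Si out entirely. */
    void ShadowFactor(int nOccluders, const float ktran[],
                      const float diffColor[][3], float Si[3])
    {
        int i, k;
        Si[0] = Si[1] = Si[2] = 1.0f;               /* fully lit to start */
        for (i = 0; i < nOccluders; i++) {
            float m = diffColor[i][0];              /* largest channel */
            if (diffColor[i][1] > m) m = diffColor[i][1];
            if (diffColor[i][2] > m) m = diffColor[i][2];
            for (k = 0; k < 3; k++) {
                float norm = (m > 0.0f) ? diffColor[i][k] / m : 0.0f;
                Si[k] *= ktran[i] * norm;           /* ktran * diffColorNorm */
            }
        }
    }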
Q: Are semi-transparent objects solid volumes or "shells"?

A: Solids.

Q: Should ambient light affect only primary rays or also secondary rays?

A: Also secondary rays. Consider a desk which has no direct light underneath it, only ambient. If you want to see the area under the desk reflected in a mirror, it would be pitch black without the ambient light factor on the secondary ray reflected from the mirror to a surface under the desk.

Q: For HW 1, is the PolySetType always a triangle mesh?

A: Yes.

Q: For HW 1, do I have to deal with per-vertex normals for triangle meshes?

A: No. Wait until HW 2 to deal with this.

Q: For HW 1, do I have to deal with different axis weights for the sphere object so that they are ellipsoids? Triangle meshes?

A: No. Wait until HW 2 to deal with this.

Q: Are we assuming the aspect ratio of the screen is 1, so that the horizontal FOV is equal to the vertical FOV?

A: Yes.

Q: Does fatt, the light attenuation factor (Eqn 16.8), only apply to point lights?

A: Yes.

Q: How do I change the size of the windows containing the canvases for the ray tracer? I tried resizing the window, and I got this error:

    X Error of failed request: BadDrawable (invalid Pixmap or Window...

A: Don't try to resize the window. When you start up, create a canvas window of the biggest size on your slider (512x512 is a good number). If the user selects a smaller size "window", just render to a portion of the window. Don't worry about resizing it.

Q: What's the idea with multiple canvases? Should I implement rendering the same thing into both windows?

A: No, just deal with one rendering window. You might choose to use the other one to visualize auxiliary information, for instance sampling patterns for HW 3. Don't worry about it for now.

Q: What size images should we submit?

A: Big enough so that we can see what's going on. 400x400 is fine.

Q: If I get a color value > 1.0, should I just clamp it to 1.0?

A: It's better to clamp it to 1 than to let the value wrap around and result in weird dark spots, but if you have to clamp any more than a few scattered values, you need to take some action. You should scale either the material properties of the scene or the computed pixel values in your final image. It's up to you which approach to take - document it in your README file. If you go the pixel route, then don't clamp intermediate values, and add a step at the end where you remap the color range from the dimmest to the brightest pixel in your image into the range (0,1). If you go the material-tweaking route, play around with the surface/light properties until you don't have to clamp. Your goal should be to match the test pictures as closely as possible.
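If you go the pixel route, the final remap might look something like this minimal sketch (a linear rescale from the image's own min/max; the flat float buffer layout is an assumption, not our framework's):

    /* Linearly remap all pixel values from [lo, hi] (the dimmest and
       brightest values actually present in the image) into (0, 1). */
    void RemapImage(float *pix, int n)   /* n = width * height * 3 */
    {
        int i;
        float lo = pix[0], hi = pix[0];
        for (i = 1; i < n; i++) {
            if (pix[i] < lo) lo = pix[i];
            if (pix[i] > hi) hi = pix[i];
        }
        if (hi > lo)
            for (i = 0; i < n; i++)
                pix[i] = (pix[i] - lo) / (hi - lo);
    }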
Q: On page 283 of Glassner, in function Trace(), shouldn't the line

    (*prim->procs->normal) (&hit[0], P, N);

read

    (*prim->procs->normal) (hit[0].prim, P, N);

A: Yes.

Q: What's our policy regarding code in the public domain? There is lots of ray-tracing code on the internet, and in various ray-tracing books, such as "Graphics Gems". Can we use it freely in our programs?

A: You should write the ray-tracing code. You are free to use the Graphics Gems code (in fact, the vector and matrix routines are in cs348b/useful-code), or any general-purpose (non-raytracer-specific) libraries. The framework, though, should be written by you. If you have any specific questions, feel free to ask. As a rule of thumb, though, you should only use code that you could easily write if you had the time. We don't want to waste your time rewriting a dot-product routine for the 5th time in your life, but we also want you to learn enough that you could rewrite your raytracer from scratch if you had to.

Q: Can we assume that Composer orders the vertices of triangles such that a consistently taken cross product will properly indicate "insideness" of the object described by the mesh? (I'm already assuming that the mesh is closed.)

A: Yes. You can assume the mesh is closed, and that the vertices of each triangle are listed counterclockwise (CCW) when viewed from the outside. This is true of all the testcases in /usr/class/cs348b/proj1/tests, and will be true for any other testcase we might use for grading.

Caveat: Creating your own scenes. When composer translates a cube to a .out file, it creates CCW triangles. However, if you give it an arbitrary mesh (such as from i3dm), it will not change the order of the triangle vertices. Thus, you must be careful that your input .iv files have only CCW triangles. i3dm will output either clockwise or CCW triangles. For example, for surfaces of revolution, it depends on which way you draw your original line. If you draw a profile bottom-to-top and then revolve it around the Y axis (as I did at the help session), the vertices will be ordered the wrong way, clockwise. If I had drawn the profile top-to-bottom, it would have been correct (CCW). To fix clockwise vertices in i3dm, go to "Attrib -> Reverse Vertices". Also, when you save models you create with i3dm, you'll probably want to save the file both as an Inventor and as an i3dm file, so that you can go back and reverse the vertex order if you get it wrong. i3dm also creates per-vertex normals for surfaces of revolution. If your raytracer supports per-vertex normals (not required), then both "Attrib -> Reverse Vertices" and "Attrib -> Reverse Normals" will flip the normals. (If you do both, the normals will be back where they started!) I suggest always using "Attrib -> Reverse Vertices", since this will fix the normals for both per-face and per-vertex modes.

------------------------------------------------------------------------

This FAQ is mainly based on actual questions sent to TAs in previous years. Thanks to last year's TA Lucas Pereira for the answers, and to last year's students for the questions. Thanks in advance to you for your questions; new ones are added to the top of the document with timestamps, so that it's easy to find the new stuff.

-Tamara

Copyright 1998 Marc Levoy