Lucas Pereira
David Redkey
cs348c Project Proposal
February 13, 1996

Depth Information for Light Fields

For our project we propose to investigate methods for generating and using
depth information in light fields. We will attempt to generate depth
information for light fields acquired from a static real-world scene, and
we will use that depth information to combine the light field with
synthetically rendered polygon models, thus merging the real and virtual
worlds in a new way. Our overall goal is to assess the feasibility,
benefits, and limitations of this approach.

PART I: Depth Generation
------------------------

Our goal is to generate depth values for any arbitrary light field, whether
it was collected from the real world or synthetically generated. To test
the accuracy of our depth reconstruction, we can record the Z values for a
synthetically generated light field, compare our generated values to these
actual Z values, and use the difference to gauge the effectiveness of our
algorithms (see the first sketch in the appendix below).

The depth-generation step will expand on current computer-vision research,
such as Prof. Carlo Tomasi's vision system. With light fields, however,
several key factors should make our algorithm simpler and more robust.
First, we know the exact position and orientation of the camera, so we do
not have to do pose estimation. Second, we have access to the view from
virtually any camera direction within the light field. Thus, rather than
trying to reconstruct depth from only two images, we can find the depth of
each ray from a whole contiguous plane of images (see the second sketch in
the appendix). The coherence of these images should make it easier to
calculate depth reliably, and to recognize and discard artifacts caused by
occlusion in other viewpoints.

If we find that we are unable to reconstruct accurate Z values, we may fall
back on the depth values from a synthetically generated light field (i.e.,
store the Z-buffer values from each rendering). This would still allow us
to test PART II.

PART II: Using Depth Information
--------------------------------

We also wish to explore how depth information can be used to combine light
fields with other representations, such as polygonal objects. We will
examine the feasibility of this approach and determine whether it suffers
from major problems such as depth aliasing or other artifacts (see the
third sketch in the appendix).

For our final presentation, we will build a (possibly interactive) display
that shows how the light field can be merged with other objects. This
display should demonstrate how well our approach simulates proper occlusion
between the real (light-field) and virtual (polygon) worlds. Time
permitting, we would also like to make a stereo display (either on the
workbench or a stereo monitor) to give the viewer a better feel for how
well the real and virtual worlds merge in 3D.
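
APPENDIX: Illustrative Sketches
-------------------------------

The following sketches are not part of the proposed system itself; they
illustrate, under stated assumptions, the kinds of computations the proposal
describes.

First, a minimal sketch of the accuracy check described in PART I: comparing
a reconstructed depth map against the ground-truth Z-buffer recorded when a
synthetic light field was rendered. The function name depth_error_stats and
the error measures chosen (RMS, mean absolute error, fraction within 1% of
truth) are our own illustrative assumptions.

    import numpy as np

    def depth_error_stats(z_estimated, z_true):
        """Return (RMS error, mean abs error, fraction within 1% of truth)."""
        z_estimated = np.asarray(z_estimated, dtype=float)
        z_true = np.asarray(z_true, dtype=float)
        diff = z_estimated - z_true
        rms = np.sqrt(np.mean(diff ** 2))
        mae = np.mean(np.abs(diff))
        within_1pct = np.mean(np.abs(diff) <= 0.01 * z_true)
        return rms, mae, within_1pct

    # Example: a 256x256 depth image with small reconstruction noise.
    z_true = np.full((256, 256), 5.0)
    z_est = z_true + np.random.normal(scale=0.02, size=z_true.shape)
    rms, mae, frac = depth_error_stats(z_est, z_true)
    print(f"RMS = {rms:.4f}, MAE = {mae:.4f}, within 1%: {100 * frac:.1f}%")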
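
Second, a minimal sketch of the multi-view depth idea from PART I: because
the camera pose for every light-field image is known exactly, we can
hypothesize a depth for a reference pixel, project the resulting 3D point
into many other views, and keep the depth at which the sampled colors agree
best. This is essentially multi-baseline (plane-sweep) stereo, not
necessarily the algorithm we will end up with; the simple translated-camera
geometry and all names below are illustrative assumptions.

    import numpy as np

    def estimate_depth(images, offsets, focal, px, py, depth_candidates):
        """Pick the depth whose reprojections are most color-consistent.

        images:  list of HxW grayscale arrays, all cameras facing +z.
        offsets: (dx, dy) of each camera relative to the reference camera,
                 in world units (the light-field camera plane).
        focal:   focal length in pixels.
        px, py:  pixel in the reference image whose depth we want.
        """
        best_depth, best_score = None, np.inf
        for d in depth_candidates:
            samples = []
            for img, (dx, dy) in zip(images, offsets):
                # A point at depth d seen from a camera translated by
                # (dx, dy) shifts by the stereo disparity focal * t / d.
                u = int(round(px - focal * dx / d))
                v = int(round(py - focal * dy / d))
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    samples.append(float(img[v, u]))
            if len(samples) < 2:
                continue
            # Low variance means the views agree on this depth. A robust
            # statistic (e.g., variance of the best half of the samples)
            # could discard views where the point is occluded.
            score = np.var(samples)
            if score < best_score:
                best_score, best_depth = score, d
        return best_depth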
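
Third, a minimal sketch of the merging step in PART II: once each
light-field ray carries a depth value, compositing it with a polygon
rendering reduces to a per-pixel Z comparison, just as in ordinary
Z-buffering. The array names are illustrative assumptions (colors HxWx3,
depths HxW, smaller depth = nearer).

    import numpy as np

    def composite(lf_color, lf_depth, poly_color, poly_depth):
        """Per pixel, keep whichever source (light field or polygons) is nearer."""
        lf_wins = lf_depth <= poly_depth                    # HxW boolean mask
        out_color = np.where(lf_wins[..., None], lf_color, poly_color)
        out_depth = np.where(lf_wins, lf_depth, poly_depth)
        return out_color, out_depth

Note that if the reconstructed light-field depths are noisy or coarsely
quantized, this hard per-pixel test is exactly where depth aliasing would
show up, at silhouettes where the two depth values are nearly equal; that is
one of the artifacts we intend to look for.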