We've seen now all the different components of our view-based rendering approach. This slide summarizes how our viewer works.
First, we choose the three views that surround the current viewpoint and render them from the perspective of the virtual camera. Then we process each pixel independently: each pixel has three candidate inputs and one output. We first apply soft z-buffering to the input pixels; the samples that survive are combined with a weighted average, where each weight is the product of the three weights we introduced (view direction, surface sampling, and boundary blending). The result is written to the output image.
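A minimal numpy sketch of this per-pixel stage, assuming the three chosen views have already been rendered into color, depth, and combined-weight buffers; the function name `soft_z_blend` and the threshold `depth_eps` are hypothetical, not names from the actual system.

```python
import numpy as np

def soft_z_blend(colors, depths, weights, depth_eps=0.01):
    """Blend three candidate views per pixel (illustrative sketch).

    colors:  (3, H, W, 3) candidate colors from the three views
    depths:  (3, H, W)    candidate depths (np.inf where no sample)
    weights: (3, H, W)    product of the view-direction, surface-
                          sampling, and boundary-blending weights
    """
    # Soft z-buffering: keep only samples within depth_eps of the
    # nearest depth at each pixel; farther samples are occluded.
    z_min = depths.min(axis=0)                  # (H, W)
    survives = depths <= z_min + depth_eps      # (3, H, W)

    # Zero out the weights of occluded samples, then normalize.
    w = np.where(survives, weights, 0.0)
    w_sum = w.sum(axis=0)                       # (H, W)
    w_safe = np.where(w_sum > 0, w_sum, 1.0)    # avoid divide-by-zero

    # Weighted average of the surviving samples per pixel.
    return (w[..., None] * colors).sum(axis=0) / w_safe[..., None]
```

For a pixel where only the nearest view's sample passes the depth test, the output is simply that sample; where two or three samples lie on the same surface, their colors are blended in proportion to the combined weights.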