Recent advances in realtime image compression and decompression hardware make it possible for a high-performance graphics engine to operate as a rendering server in a networked environment. If the client is a low-end workstation or set-top box, then the rendering task can be split across the two devices. In this paper, we explore one strategy for doing this.
For each frame, the server generates a high-quality rendering and a low-quality rendering, subtracts the two, and sends the difference in compressed form. The client generates a matching low-quality rendering, adds the decompressed difference image, and displays the composite. Within this paradigm, there is wide latitude to choose what constitutes a high-quality versus low-quality rendering. We have experimented with textured versus untextured surfaces, fine versus coarse tessellation of curved surfaces, Phong versus Gouraud interpolated shading, and antialiased versus nonantialiased edges.
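The split can be sketched as follows (a minimal illustration in Python/NumPy; the function names, and the bias-by-128 packing of the signed difference into 8-bit channels, are our own assumptions for the sketch rather than the exact encoding used here):

```python
import numpy as np

def server_frame(render_high, render_low, encode):
    """Server side: compress the difference between the two renderings.

    render_high, render_low: uint8 RGB images of the same size.
    encode: any lossy still- or motion-image encoder (e.g., JPEG or MPEG-1).
    The signed difference is biased by 128 and clamped so that it fits
    into ordinary 8-bit channels before encoding (an assumed packing).
    """
    diff = render_high.astype(np.int16) - render_low.astype(np.int16)
    biased = np.clip(diff + 128, 0, 255).astype(np.uint8)
    return encode(biased)

def client_frame(render_low, code, decode):
    """Client side: render the matching low-quality image locally,
    add the decoded difference, and display the composite."""
    biased = decode(code).astype(np.int16)
    composite = render_low.astype(np.int16) + (biased - 128)
    return np.clip(composite, 0, 255).astype(np.uint8)
```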
In all cases, our polygon-assisted compression looks subjectively better for a fixed network bandwidth than compressing and sending the high-quality rendering. We describe a software simulation that uses JPEG and MPEG-1 compression, and we show results for a variety of scenes.
These cubes were modeled and rendered using RenderMan. For each frame of the animation sequence, a high-quality and a low-quality rendering were computed. The high-quality renderings contain texturing, antialiasing, and smooth shading. The low-quality renderings contain antialiasing and smooth shading, but no texturing.
The first movie demonstrates image-based compression (MPEG) of the high-quality rendering using q-scales of 20, 24, and 28 for the I, B, and P frames, respectively. The second movie demonstrates polygon-assisted compression, i.e., the low-quality rendering plus an MPEG-compressed difference image. To give the same MPEG code size for the two cases, the difference image was compressed using q-scales of 7, 9, and 11. These lower q-scales for the difference image (relative to the q-scales used to compress the high-quality rendering in the first case) lead to a better rendition of the textures in the scene. This improvement is possible because the difference image contains less information than the high-quality rendering. In addition to better textures, the polygon-assisted compression also has fewer artifacts along edges and in smoothly shaded areas. These two improvements demonstrate the advantage of polygon-assisted compression.
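One plausible way to equalize the two code sizes is to search for the highest quality (lowest q-scale) whose encoded stream still fits the size budget of the reference stream. The sketch below is hypothetical: encode_at_qscale stands in for an invocation of an MPEG encoder and is not one of our tools.

```python
def match_code_size(frames, encode_at_qscale, target_bytes):
    """Return the smallest MPEG q-scale (highest quality) whose encoded
    stream does not exceed the byte budget of the reference stream.
    encode_at_qscale(frames, q) is a placeholder for running the encoder."""
    for q in range(1, 32):                  # MPEG-1 q-scales run from 1 to 31
        code = encode_at_qscale(frames, q)
        if len(code) <= target_bytes:
            return q, code
    return 31, encode_at_qscale(frames, 31)  # fall back to the coarsest scale
```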
For the first movie, you are looking at the actual image-based (MPEG) compression of the high-quality rendering. For the second movie, the polygon-assisted compression was re-compressed using MPEG and very low q-scales (1, 2, and 3) to allow it to be viewed as a movie loop over the WWW. Hence, the sizes of these two MPEG movie files differ.
Unlike most of the images in our web hierarchy, the GIF and TIFF images on this page have been gamma corrected in preparation for printing. The gamma used was 1/1.4. This will make them look too bright when viewed on an SGI display, which is hardware corrected to 1/1.7, but not as dark as our other images look when viewed on an uncorrected display. For correct viewing on an SGI, set your gamma correction value (using the "gamma" command) to 1.7/1.4, which is approximately 1.2.
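A quick check of that arithmetic (an illustrative sketch; the pixel values are made up):

```python
def apply_gamma(value, gamma):
    """Apply a gamma correction of 'gamma' to a normalized pixel value in [0, 1],
    i.e. raise it to the power 1/gamma, following the convention of the SGI
    "gamma" command."""
    return value ** (1.0 / gamma)

# The images on this page were pre-corrected with gamma 1.4; an SGI display
# adds a hardware correction of 1.7 by default. Compensating for the
# difference means setting the "gamma" command to 1.7/1.4, about 1.21.
compensation = 1.7 / 1.4
print(round(compensation, 2))              # -> 1.21
print(apply_gamma(0.5, compensation))      # example: mid-gray after compensation
```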