Stanford Real-Time Programmable Shading Project



Introduction

Recent trends in graphics hardware have expanded real-time shading capabilities from Gouraud shading and simple texture mapping to multipass rendering with multitexturing and texture combiners. These capabilities have enabled a wide variety of interesting shading techniques and effects, such as bump maps, cubical environment maps, texture-based BRDFs, reflections, refractions, and shadows, and the entertainment industry has used them extensively to create stunning immersive environments.

While graphics hardware is becoming more powerful, it is also becoming more complicated and more difficult to use. It is time-consuming both to program the hardware at the level of passes and texture units and to optimize for different chipsets, each of which exposes a different set of functionality.

Shading languages have been used successfully in non-real-time applications such as creating imagery for motion pictures and rendering data for scientific visualization. Shading languages provide a proven, flexible, and logical means for describing appearance.

The goal of our project is to bring real-time programmable shading to mainstream graphics hardware.

Project History and Status

Our project started in June 1999. Our initial investigations were motivated by hardware features such as multitexturing, advanced texture combiner functions, and fragment lighting, as well as by the increased use of and support for multipass rendering, especially as evidenced by Quake 3 Arena and the PlayStation 2. We focused on the problem of implementing a simple language and compiler to serve as an abstraction layer between the programmer and OpenGL hardware, with the intent of mapping shading computations to multiple passes and multiple hardware platforms.

The first system we implemented provided a simple, Lisp-like language for describing surfaces using arbitrary expressions of colors and textures given three operators: add, multiply, and over. The language allowed for the indirect specification of colors using material properties as well as the configuration of textures using texture coordinate generation and texture matrices. A compiler translated shader expressions into rendering passes, making use of multitexturing and texture combiner add and over operations when available. A revision of the first system replaced the Lisp-like language with one more like C.
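As a rough illustration of the flavor of that first language, a surface that modulated a material color by a base texture and then layered a decal over the result might have been written along the lines of the sketch below. The syntax is our reconstruction for illustration only: the add, multiply, and over operators come from the description above, while the material and texture forms are hypothetical.

    ; illustrative sketch, not the system's actual grammar
    (over (texture "decal")
          (multiply (material diffuse) (texture "base")))

Given such an expression, the compiler could fold the multiply into a multitexturing stage and map the over either to a texture combiner operation or to a framebuffer blend in a separate pass, depending on what the hardware supported.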

Our first system implemented what we call pure fragment programmability, since every programmable operation it supported was evaluated per fragment, either by a texture combiner or by the framebuffer blend unit. While the system provided a useful abstraction of the programmability made possible by multipass rendering, we found its pure fragment programmability to be severely limited by the hardware available to us. In particular, the hardware supported only a few simple operators and fixed-point data types, and we found ourselves unable to adequately express many computations that were of interest to us, such as new lighting models and custom texture coordinate generation. Moreover, we noted that although future hardware could provide sufficient support for the operations we were missing, use of those operations would be overly expensive for many computations, especially ones that did not vary much across a surface.

Our solution to these issues arose from the observation that polygon-based graphics systems generally perform a large number of computations at frequencies lower than once per fragment, trading off evaluation frequency for more complex operations and floating-point arithmetic. Lighting and texture coordinate generation, for example, are typically evaluated per vertex, while transformation matrices are typically computed per group of primitives. We decided that by exposing programmability at multiple computation frequencies, we could enable an enormous amount of additional functionality without sacrificing interactivity. In particular, we found that many of the computations we could not perform using pure fragment programmability could be mapped to vertex computations evaluated on the host processor. We ended up defining four computation frequencies: constant (evaluated at compile time), per primitive group, per vertex, and per fragment.

[Figure: Multiple computation frequencies. Our system allows programmability at four computation frequencies: constant, per primitive group, per vertex, and per fragment.]
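In a language with multiple computation frequencies, each variable is bound to the frequency at which it is recomputed. The declarations below sketch how this might look in a C-like syntax; the qualifier spellings are illustrative assumptions derived from the frequency names above, not necessarily the exact keywords of our system.

    constant float pi = 3.14159;    // fixed when the shader is compiled
    primitive group float time;     // recomputed once per group of primitives
    vertex float3 normal;           // recomputed at every vertex
    fragment float4 baseColor;      // recomputed at every fragment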

After identifying the potential usefulness of having multiple computation frequencies, we implemented a second system that provided a C-like language with support for multiple computation frequencies, including a variety of types and operators appropriate for each frequency. The language also supported separate surface and light shaders, tying the two together using a linear integrate operator. As with our first system, a compiler translated shaders into multiple rendering passes, supporting multitexturing and texture combiner add and over operations when available; however, only fragment computations were mapped to rendering passes. Per-primitive-group and per-vertex computations were mapped to executable host processor code, either by using an external C compiler or, on x86 platforms, by generating object code directly.
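To make the surface/light split concrete, a matching pair of shaders might look like the sketch below. The surface and light shader kinds and the integrate operator come from the description above; everything else (the perlight qualifier, the built-ins Cl, N, L, uv, dot, and max, and the texref and texture constructs) is a hypothetical reconstruction for illustration.

    // Light shader: computes the color arriving from this light.
    light shader float4 simple_light (constant float4 color) {
        return color;
    }

    // Surface shader: forms a per-light diffuse term, sums it over
    // all active lights with integrate, and modulates by a texture.
    surface shader float4 diffuse_surface (texref base) {
        perlight float4 d = Cl * max(dot(N, L), 0);
        return integrate(d) * texture(base, uv);
    }

Because integrate is linear in its per-light argument, a sum like this one can be accumulated one light at a time, for example with one additive rendering pass per light.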

While our second system provided good support for multiple computation frequencies, it did not provide much support for the newer features of graphics hardware. When designing the system, we decided to restrict ourselves to widely supported OpenGL 1.1 functionality, with the intent of making the system easier to implement and easier to port. As a result, our second system has two limitations: first, it is difficult to add support for new operations, especially ones that are not supported on all hardware, and second, it is difficult to efficiently target the wide variety of complex hardware architectures available. We are currently working to extend the system to address these two limitations.

Downloads

We are distributing executable versions of our system for Win32, IRIX, and Linux. You can use this system to experiment with shaders written in our shading language.

We are also making available some working documents that were written for internal use but may be useful to users of our shading system and to others who are interested:

Papers and Talks

Papers related to our project:

We have given a number of talks related to our project. Slides for the following talks are available online:

Current Project Sponsors

Additional support provided via the IMTV project.

Former project sponsors: SGI, 3dfx.

People

At Stanford:

Eric Chan
Pat Hanrahan
Yi-Ren Ng
Kekoa Proudfoot
Philipp Schloter
Pradeep Sen

Formerly at Stanford:

Bill Mark
Philipp Slusallek
Svetoslav Tzvetkov

At other institutions:

David Ebert

Reaching Us

For more information, please contact Eric Chan (ericchan@graphics.stanford.edu).

