CS348B - Image Synthesis
Barbara Sweet
Date submitted: 9 June 2004
Final Rendered Image
I was interested in rendering the effect of lights viewed through fog at night.
An example photograph (an actual photo, not a rendering) containing some of the
elements I wanted to model appears below:
The photo is of an aircraft landing at night. The light sources probably include
multiple runway edge lights, runway centerline lights, and runway threshold lights,
plus the aircraft's wingtip and landing lights. To render these effects efficiently, I implemented an
algorithm developed by Narasimhan and Nayar (see full reference below). The paper describes a method for representing the appearance of an isotropic light source viewed through a homogeneous scattering medium, without volumetric integration. The function incorporates a term for the optical thickness of the medium, as well as a forward scattering parameter from the Henyey-Greenstein phase function.
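For reference, a minimal evaluation of that phase function is sketched below; this is the phase function alone, not the paper's full multiple-scattering series (see the reference for the complete expression).

    #include <cmath>

    // Henyey-Greenstein phase function.  q in (-1, 1) is the forward
    // scattering parameter: 0 gives isotropic scattering, values near 1
    // give strongly forward-peaked scattering.  cosTheta is the cosine
    // of the angle between the incident and scattered directions.
    double HenyeyGreenstein(double cosTheta, double q) {
        const double kPi = 3.14159265358979323846;
        const double denom = 1.0 + q * q - 2.0 * q * cosTheta;
        return (1.0 - q * q) / (4.0 * kPi * std::pow(denom, 1.5));
    }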
The paper describes an approximation that can be made when the camera is located far
from the light source (a nearly orthographic viewing frustum). With the approximation, the light can be represented as a
volumetric light source. To accomplish this in pbrt, I modified pbrt's arealight
class to calculate the atmospheric point spread
function described in the paper, and used that function to modulate the radiance returned by the member AreaLight::L.
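A minimal sketch of the idea follows, assuming pbrt's AreaLight interface; FoggyAreaLight, Apsf(), T, and q are my own illustrative names (for the modified class, an evaluation of the paper's series, the optical thickness, and the forward scattering parameter), not the project's actual code.

    // Hypothetical sketch.  pbrt's stock AreaLight::L returns the
    // constant emitted radiance Lemit whenever the query direction
    // faces the light; the modification scales that value by the
    // atmospheric point spread function.  Apsf() is a placeholder for
    // an evaluation of the paper's series; T (optical thickness) and
    // q (forward scattering parameter) would be new class members.
    Spectrum FoggyAreaLight::L(const Point &p, const Normal &n,
                               const Vector &w) const {
        if (Dot(n, w) <= 0.f) return Spectrum(0.f);
        // Use the angle between the outgoing direction and the sphere's
        // normal as the angular offset fed to the point spread function.
        float cosTheta = Dot(n, w);            // w assumed normalized
        return Lemit * Apsf(T, cosTheta, q);   // glow-modulated radiance
    }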
be "seen" when sampled, and that could also be seen "through". The only original light class in
pbrt that allows the light to be "seen" is the arealight class, so I chose to modify it rather
than another light class. In order for the light to be seen "through", the material properties
were specified to be "glass" with indices of refraction and reflection of 1.0. This choice of
material properties did not appear to be compatible with any of the integrators defined in pbrt,
probably because of the arealight class was never intended to be seen through. I found that it
worked with one light in the scene, not multiple lights; specifically, you could see objects through the lights, but the spheres defining the lights themselves would actually occlude each other.
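The scene-file construction looked roughly like the following sketch (parameter names follow pbrt's stock glass material and area light; the radiance and radius values are illustrative, not the ones used in the final image):

    AttributeBegin
        # "Glass" with an index of refraction of 1.0, so rays pass
        # through the light sphere undeviated and the scene remains
        # visible behind it.
        Material "glass" "float index" [1.0]
        AreaLightSource "area" "color L" [10 10 2]   # illustrative value
        Shape "sphere" "float radius" [0.15]
    AttributeEnd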
Because of this occlusion problem, the effect of each light was
rendered separately, and the renders were later composited into the final image. The requirement to do
compositing turned out to be beneficial: it made it possible to adjust the balance of the different
light sources and create a more attractive image. The initial, unweighted composite (see below)
looked somewhat flat, overwhelmed by the somewhat yellow runway edge and centerline lighting.
The final image was rendered with increased weight on the aircraft
wingtip lights (red and green), the aircraft landing light (bluish white), and the runway
threshold lights (red), and diminished weight on the runway centerline and edge lights.
A total of 59 lights were used in the image.
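The compositing step itself is simple. A minimal sketch, assuming one linear RGB float buffer per light rendered over black (the buffer layout and function name are my own, not part of pbrt):

    #include <algorithm>
    #include <vector>

    // Weighted sum of per-light layers into a single output image.
    // Each layer and the output hold the same number of float samples;
    // the per-light weights are the knobs used to rebalance the image.
    void CompositeWeighted(const std::vector<std::vector<float>> &layers,
                           const std::vector<float> &weights,
                           std::vector<float> &out) {
        std::fill(out.begin(), out.end(), 0.f);
        for (std::size_t i = 0; i < layers.size(); ++i)
            for (std::size_t p = 0; p < out.size(); ++p)
                out[p] += weights[i] * layers[i][p];
    }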
While the Narasimhan and Nayar algorithm describes the spatial attenuation of light
through a homogeneous medium, I was also interested in simulating the effect of a nonhomogeneous
medium, as seen in the wingtip vortices in the photograph. To accomplish this,
I created a volume grid whose density distribution was proportional to the velocity
distribution in a wingtip vortex: rotational velocity grows linearly with radial
distance from the vortex center up to a maximum at the core radius, beyond which it decays toward zero (see below). This volume was rendered using
the emission integrator in pbrt.
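A hedged sketch of how such a grid can be filled, assuming a Rankine-style profile (linear inside a core radius rc, decaying as rc/r outside) with the vortex axis along x; the x-fastest grid ordering and all constants here are illustrative assumptions:

    #include <cmath>
    #include <vector>

    // Fill an nx-by-ny-by-nz density grid with a Rankine-style vortex
    // profile, density taken proportional to tangential velocity.  The
    // vortex axis runs along x through the grid center in (y, z); rc is
    // the core radius in the grid's normalized [-0.5, 0.5] cross section.
    std::vector<float> MakeVortexGrid(int nx, int ny, int nz,
                                      float rc, float maxDensity) {
        std::vector<float> d(nx * ny * nz, 0.f);
        for (int z = 0; z < nz; ++z)
            for (int y = 0; y < ny; ++y) {
                float dy = (y + 0.5f) / ny - 0.5f;
                float dz = (z + 0.5f) / nz - 0.5f;
                float r = std::sqrt(dy * dy + dz * dz);
                // Rankine profile: linear inside the core, 1/r outside.
                float v = (r <= rc) ? r / rc : rc / r;
                for (int x = 0; x < nx; ++x)
                    d[(z * ny + y) * nx + x] = maxDensity * v;
            }
        return d;
    }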
The use of isotropic light sources to render the airport scene is not strictly accurate;
centerline, edge, and threshold lights are typically not isotropic. It would
be possible to superimpose the effect of a goniometric diagram on the fog attenuation,
but the result would still not be representative of a physically based rendering computed by
volumetric integration.
Another effect that was not simulated is the atmospheric attenuation of light reflected from surfaces;
attenuation was properly computed only along the paths between the light and a surface and between
the light and the camera, not between a surface and the camera.
If I were to continue working on this, I would consider modifying the integrator and light
classes to properly account for multiple lights in the scene. To implement the
non-approximated version of the algorithm, a light class would be needed that does not rely on a shape class to return an intersection (similar to an environment light, but one that is localized and directly visible). As with environment lighting, it would be beneficial to incorporate
importance sampling into this new class. A method would also need to be devised to attenuate the light scattered from surfaces (probably by modifying the BRDF functions).
Original Image before weighted composite
Density distribution of one "slice" of the volumegrid representing the wake vortex.
Narasimhan, S. G. and Nayar, S. K. (2003). "Shedding Light on the Weather." Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR).