Multiperspective Imaging for Automated Urban Visualization

Augusto Román
Ph.D. Dissertation
Stanford University
September 2006

Download

PDF (6 MB)
Defense slides (coming soon)

Abstract

Traditional perspective images (such as those produced by a typical camera) cannot consistently display detail across all parts of an entire city block. It is inevitable that detail is lost in areas of the scene most distant from the camera. A multiperspective image generated from a collection of photographs or a video stream can be used to effectively summarize long, roughly planar scenes such as city streets. For example, we have generated a single continuous image of a street spanning approximately 10 city blocks. This image is over 300,000 pixels wide.
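As a concrete illustration of the simplest form of such an image, the sketch below builds a pushbroom-style strip panorama by concatenating one pixel column from each frame of a sideways-looking video. This is only a minimal illustration of the general idea, not the blended crossed-slits construction described below; it assumes OpenCV is available, and the input filename street.mp4 is hypothetical.

    import cv2
    import numpy as np

    def strip_panorama(video_path, column=None):
        # Collect one vertical pixel column per frame; the camera's motion
        # along the street supplies the horizontal axis of the output.
        cap = cv2.VideoCapture(video_path)
        strips = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            col = frame.shape[1] // 2 if column is None else column
            strips.append(frame[:, col:col + 1])  # one vertical slice
        cap.release()
        if not strips:
            raise ValueError("no frames read from " + video_path)
        return np.hstack(strips)  # output width equals the frame count

    pano = strip_panorama("street.mp4")  # hypothetical input video
    cv2.imwrite("street_panorama.png", pano)

Because every column comes from a different camera position, no single viewpoint explains the whole image; that is what makes the result multiperspective, and also what introduces the distortions discussed next.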

This single-image representation has several advantages over alternatives such as 360-degree panoramas, individual photographs, 3D models, and satellite maps: it is continuous, compact, and high resolution, and it requires no special viewing software. However, multiperspective images also suffer from distortions caused by their deviation from the familiar perspective projection.

Constructing multiperspective images with minimal distortion is typically done manually by an artist; however, this is not practical for large-scale projects such as creating images along every street in an entire city. We describe how these images can be constructed automatically, including a technique to evaluate and minimize the distortion without user intervention.

This thesis presents three contributions toward the use of multiperspective images in urban visualization. The first is a method of constructing images from serially blended crossed-slits mosaics, which significantly reduces the distortion in the final output. The second is an efficient method of rendering high-quality multiperspective images, along with an interactive GUI program that allows a user to quickly manipulate the perspective structure of a multiperspective image and gain intuition about the parameters of such images. Finally, we present a metric for quantifying the distortion in these images, along with an optimization that automatically minimizes it.
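The thesis defines its own distortion metric; purely as an illustration of the kind of quantity such a metric can capture, the sketch below scores local distortion from the 2x2 Jacobian of the scene-to-image mapping, penalizing any deviation from a uniform scale. The Jacobians are assumed to be given (a hypothetical input); this is not the metric or optimization defined in the thesis.

    import numpy as np

    def local_distortion(J):
        # Singular values of the 2x2 Jacobian are the two local stretch
        # factors; they are equal exactly when the mapping is locally a
        # uniform scale plus rotation (no aspect or shear distortion).
        s1, s2 = np.linalg.svd(J, compute_uv=False)
        return np.log(s1 / s2) ** 2

    def total_distortion(jacobians):
        # Sum the per-point penalties into a single scalar objective.
        return sum(local_distortion(J) for J in jacobians)

Summing such penalties over the image yields a scalar objective that an optimizer could reduce by adjusting the perspective structure, which is the spirit of the automatic minimization described above.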