3-D 'Happy Buddha' takes fax trip for reincarnation
Researchers at Stanford's Computer Graphics Laboratory have demonstrated
the capability to make and fax three-dimensional computer models of objects
that are so detailed they can be used to make accurate physical facsimiles
of the original.
Such an ability has a number of potential applications, including computer
graphics for movies and video games, home shopping, and the duplication
of rare artifacts or engineering prototypes.
The researchers scanned a 6-inch-high plastic sculpture of a "Happy
Buddha" and converted it into a 3-D computer model - a process that
took about six hours. They then transmitted the model electronically to
3-D Systems in Valencia, Calif., a company that uses stereolithography to create plastic
models. Over a weekend the company created a facsimile - a process that
took 12 to 15 hours - and mailed it to the researchers.
According to Marc Levoy, professor of computer science and electrical engineering
and head of the 3-D fax project, the basic process began with placing the
Buddha on a black platform. A line of sparkling, ruby-red light traced the
statue's surface as the platform carried it through the plane of laser light.
Scans were made from dozens of different orientations to get enough information
to convert them into a highly detailed three-dimensional computer model.
Once such a model is created, it can be sent electronically to factories
that have the equipment to convert it into a physical object.
The key step in the 3-D fax process is producing high-quality, three-dimensional
computer models of physical objects. That is the area on which the Stanford
researchers have concentrated. Several years of effort have enabled them
to generate 3-D computer models far faster, and with much greater fidelity,
than was previously possible.
Another application for these models is in Hollywood movies. Since their
introduction in the film "Jurassic Park", 3-D computer models
increasingly are being used in the movie industry. The recent movie "Toy
Story" was the first full-length feature film entirely produced through
computer animation.
Interest from commercial operations, including Industrial Light & Magic,
means that the Stanford modeling techniques are likely to appear in movies
in a few years in the form of more realistic and detailed computer-made
creatures and scenery. There are other potential applications.
- Museum artifacts that are in demand could be 3-D faxed to those who
are interested in examining them. Such models could be measured and manipulated
in ways that are not possible with ordinary pictures or holograms.
- Home shoppers might be able to "download" computer models
of products that they are interested in purchasing to their home computers
or interactive television sets so that they can inspect them in greater
detail.
- Engineers who make physical prototypes of parts - such as automobile
cylinder heads, which cannot be designed without being tested and tuned in
real engine blocks - could generate a computer model of their best prototype,
which could then be used to mass-produce the part using computer-aided design
and manufacturing systems.
A critical obstacle that must be overcome to make such applications possible
is reducing the time and expense involved in creating 3-D models. In "Toy
Story", the 3-D computer models of the toys, people and animals were
created by artists working directly from their imaginations, a process that
took dozens of people working for about a year.
Another way to generate these models is to begin with a physical object.
A number of companies currently generate computer models from physical objects
by hand. Individuals painstakingly digitize the surface of physical models
using touch probes, a process that takes days.
Levoy's group has been working for several years to automate this process.
They begin with a laser range scanner. By illuminating the object with laser
light from one angle and recording it with a video camera from another
angle, the scanner determines the distance of each point along the line of
light by triangulation. This information is stored in a "range
image" that consists of a series of pixels, each of which is associated
with the distance of the surface at that point.
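The triangulation step can be sketched as follows. This is a minimal illustration, not the scanner's actual implementation: it assumes an idealized geometry in which the laser and camera lie a known baseline apart and each observes the surface point at a known angle from that baseline.

```python
import math

def triangulate_depth(baseline, laser_angle, camera_angle):
    """Depth of a surface point from laser-camera triangulation.

    The laser source and camera sit `baseline` apart; each sees the
    surface point at a known angle from the baseline. The two sight
    lines and the baseline form a triangle, so the law of sines
    fixes the point's position.
    """
    # Angle at the surface point, between the two sight lines.
    apex = math.pi - laser_angle - camera_angle
    # Distance from the camera to the surface point (law of sines).
    range_cam = baseline * math.sin(laser_angle) / math.sin(apex)
    # Perpendicular depth of the point from the baseline.
    return range_cam * math.sin(camera_angle)

# One pixel of a "range image": the depth recorded for a point that
# the laser illuminates at 60 degrees and the camera sees at 70
# degrees, with a 0.5 m baseline (all values illustrative).
depth = triangulate_depth(0.5, math.radians(60), math.radians(70))
```

Repeating this computation for every pixel along the laser stripe, as the platform carries the object through the light plane, yields the range image described above.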
There are a variety of different types of range scanners, priced from $1,000
to more than $50,000, that are capable of producing such images.
Going from range data to seamless, three-dimensional images is a complex
process. Researchers must combine range images taken from a number of different
viewpoints. The more complex the object, the more range images they need.
In their first effort, postdoctoral researcher W. Greg Turk, who is now at
the University of North Carolina, began by converting each range image into
a 3-D irregular polygon mesh that traces the object's surface as seen from
each viewpoint.
Next, Turk developed an algorithm that aligns the different meshes accurately.
As there is frequently considerable overlap between meshes, he also created
a program that "eats away" at the mesh boundaries until the overlaps
are eliminated. Finally, he developed a routine that "zippers"
the different meshes together, filling in gaps with new polygons as needed.
This produces a seamless 3-D mesh. These meshes can be very fine, with
a single model containing hundreds of thousands of polygons. Nevertheless,
the algorithms that Turk developed can construct such a mesh in a matter
of hours.
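The alignment step amounts to finding the rigid rotation and translation that best map the overlapping region of one mesh onto another. Turk's actual alignment algorithm is not detailed here; the sketch below shows only the standard SVD-based (Kabsch) solution for a set of corresponding point pairs, the computation that iterative alignment schemes apply repeatedly.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points from two
    overlapping scans. Returns R, t such that R @ p + t moves each
    src point approximately onto its dst partner, using the
    SVD-based (Kabsch) least-squares solution.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection (determinant -1) solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a 30-degree rotation about z plus a shift (synthetic data).
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
src = np.random.default_rng(0).random((50, 3))
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(src, dst)
```

With exact correspondences, as in this synthetic example, one application recovers the transform; with real scans, methods in the ICP family re-estimate correspondences and repeat.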
"The polygon mesh does a pretty good job of capturing the surface,
but it fails at extreme corners and sharp points," Levoy said.
Turk's work was presented at the 1994 SIGGRAPH meeting and Cyberware, a
scanner manufacturer, now distributes it with its products.
Since then, however, the Stanford researchers have developed a method that
is as fast as the zippering approach, does not have its limitations, and
can capture detail as small as 0.2 millimeters.
This "volumetric" approach - developed by Brian Curless, a doctoral
student in electrical engineering - begins by dividing up the space of the
object into thousands of tiny cubes, dubbed voxels, or volume pixels. Each
voxel can have one of three values: occupied, unoccupied or occupancy unknown.
If a voxel is occupied, it is on the surface of the object.
Curless developed an algorithm that takes the information in a range image
and determines the state of the voxels that are visible from the image's
viewpoint. Each successive range image fills in more of the voxels until
a complete, 3-D representation of the object is built up. The result is
a fuzzy, or probabilistic, representation of the object.
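A much-simplified sketch of the volumetric idea follows, assuming an orthographic range image viewed straight down the z-axis. (Curless's actual algorithm is more sophisticated, accumulating weighted signed distances rather than the discrete three-state labels used here.) Each depth measurement carves out the free space in front of the surface, marks the surface voxel occupied, and leaves everything behind it unknown until another view reveals it.

```python
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def carve(volume, range_image, voxel_size=1.0):
    """Update a voxel grid from one orthographic range image.

    volume: (X, Y, Z) int array of per-voxel states, viewed along +z.
    range_image: (X, Y) array of measured depths for each pixel;
    NaN where the scanner saw nothing.
    """
    for x in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            d = range_image[x, y]
            if np.isnan(d):
                continue
            k = int(d / voxel_size)      # voxel index holding the surface
            volume[x, y, :k] = EMPTY     # free space along the line of sight
            if k < volume.shape[2]:
                volume[x, y, k] = OCCUPIED
            # Voxels beyond the surface stay UNKNOWN.
    return volume

# A flat surface 2.5 units deep, seen over a 4x4 range image.
vol = np.zeros((4, 4, 8), dtype=int)
carve(vol, np.full((4, 4), 2.5))
```

Applying `carve` once per viewpoint (after transforming each scan into a common frame) fills in more of the grid with each successive range image, as described above.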
The last step is to extract the "most probable" surface from
this representation using a contour-extraction algorithm known as the
marching cubes method.
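At the heart of marching cubes is locating where the surface crosses each edge of the voxel grid by linearly interpolating the sampled values; the full algorithm then connects those crossings into triangles using a 256-entry case table. The sketch below shows only the edge-crossing step, run on a signed-distance field for a sphere, and is an illustration rather than the researchers' implementation.

```python
import numpy as np

def edge_crossings(field, level=0.0):
    """Surface points where an isosurface crosses grid edges.

    field: (X, Y, Z) scalar samples (e.g. signed distance to the
    surface). For every axis-aligned grid edge whose endpoint
    values straddle `level`, linearly interpolate the crossing
    point. Returns a list of 3-D points in grid-index coordinates.
    """
    points = []
    for axis in range(3):
        a = field
        b = np.roll(field, -1, axis=axis)   # neighbor along this axis
        sl = [slice(None)] * 3
        sl[axis] = slice(0, field.shape[axis] - 1)
        a, b = a[tuple(sl)], b[tuple(sl)]   # drop the wrapped slice
        cross = (a - level) * (b - level) < 0
        for idx in zip(*np.nonzero(cross)):
            t = (level - a[idx]) / (b[idx] - a[idx])  # interp weight
            p = np.array(idx, dtype=float)
            p[axis] += t
            points.append(p)
    return points

# Signed distance to a sphere of radius 2.2, sampled on a 6^3 grid
# centered at index (2.5, 2.5, 2.5).
g = np.arange(6) - 2.5
xs, ys, zs = np.meshgrid(g, g, g, indexing="ij")
sdf = np.sqrt(xs**2 + ys**2 + zs**2) - 2.2
pts = edge_crossings(sdf)
```

The recovered points lie very close to the true sphere even on this coarse grid, which is why the extracted mesh can be much finer than the voxel spacing suggests.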
The algorithm's author is Bill Lorensen, a senior scientist from General
Electric, who is spending six months at Stanford working with Levoy and
his students.