Marc Levoy

VMware Founders Professor of Computer Science and Electrical Engineering, Emeritus
Computer Graphics Laboratory
Computer Systems Laboratory
Computer Science Department
Electrical Engineering Department
School of Engineering
Stanford University
Gates Computer Science Building
Room 374, Wing 3B
Stanford University
Stanford, CA 94305
Press here for directions.
Personal data:
Born in New York City
B. Architecture, Cornell, 1976
M.S. in Architecture, Cornell, 1978
PhD in Computer Science, Univ. North Carolina, 1989
Office hours:
By appointment only
(650) 725-4089 (answered by arrangement only)
(650) 723-0033 (fax)
Email (the best way to reach me):
Web address:
Administrative assistant:

I have retired from Stanford to lead a team at Google. My team is in Google Research, and we work broadly on cameras and photography. Among our projects were burst-mode photography for Project Glass (see also this talk) and HDR+ mode for Nexus and Google Pixel smartphones. The French agency DxO gave the 2016 Pixel the highest rating ever given to a smartphone camera, and an even higher rating to the Pixel 2 in 2017. Here are some albums of photos I've shot with the Pixel XL: Desolation Wilderness, Cologne and Paris, New York City, Fort Ross and Sonoma County. Press "i" ("Info") for per-picture captions. And here is an album of photos captured with the Pixel 2, including the portrait mode my team worked on. See also this interview in The Verge, this video about the Pixel 2 camera, and these papers in SIGGRAPH Asia 2016 on HDR+ and SIGGRAPH 2018 on portrait mode.

The Pixel 3 also contains computational photography features my team worked on, including Super Res Zoom, synthetic fill-flash, learning-based portrait mode, and Night Sight (based on my prototype SeeInTheDark app). Night Sight has won numerous awards, including DP Review's 2018 Innovation of the Year and the Disruptive Device Innovation Award at Mobile World Congress 2019. See also these interviews by DP Review and CNET, and this article written by Google team member Florian Kainz about nighttime photography using cell phones.

My team also worked on underlying technologies for Project Jump, a light field camera that captures stereo panoramic videos for VR headsets such as Google Cardboard. See this May 2015 presentation at Google I/O.

Being Emeritus, I will unfortunately not be offering CS 178 (Digital Photography) again at Stanford. However, this course was given at Google in Spring of 2016, and the lectures were recorded for free public distribution. Here is a link. I will also not be taking on any new PhD students or supervising summer interns.
That said, I visit Stanford weekly, and am happy to meet with students (or other members of the university community) by appointment.

Biographical sketch

Marc Levoy is the VMware Founders Professor of Computer Science (Emeritus) at Stanford University and a Distinguished Engineer at Google. In previous lives he worked on computer-assisted cartoon animation (1970s), volume rendering (1980s), 3D scanning (1990s), light field imaging (2000s), and computational photography (2010s). At Stanford he taught computer graphics, digital photography, and the science of art. At Google he launched Street View, co-designed the library book scanner, and currently leads a team whose projects include HDR+ mode, Portrait mode, and Night Sight mode on Pixel smartphones, the Jump light field camera, and the Halide image processing language. Awards: Cornell University Charles Goodwin Sands Medal for best undergraduate thesis (1976), National Science Foundation Presidential Young Investigator (1991), ACM SIGGRAPH Computer Graphics Achievement Award (1996), ACM Fellow (2007). His Google team's software for Pixel phones has won DP Review Innovation of the Year (2018) and Smartphone Camera of the Year (2019), Mobile World Congress Disruptive Device Innovation Award (2019), and other awards.

Professional stuff

List of publications
(with pictures, abstracts, and links to papers)


I love teaching. After becoming full-time at Google, and finding people there who wanted to know more about photography, I decided to teach a revised but nearly full version of my Stanford course CS 178 (Digital Photography) at Google. These lectures were recorded and edited to remove proprietary material, at which point Google permitted me to make them public. Here is a link to this course, which I called Lectures on Digital Photography. I also uploaded the lecture videos as a YouTube playlist, where to my surprise they went viral in September 2016. A Googler made a word cloud (at left) algorithmically from the comments on those videos and sent it to me as a gift. In a word cloud the size of each word is proportional to the number of times it appears in the text being processed by the algorithm. It's one of the nicest gifts I have ever received.

I am fortunate that my research (and my teaching) are also my hobby - photography. Also fortunately, California is a wonderful place to practice this art. At left is a shot taken along the coast near Davenport, using HDR+ mode on the Nexus 6P. Here are some of my Google Photos albums. (Yes, this is the ancient, deprecated Picasa Web user interface. With Google's move from Google+ to Google Photos, this is the only way I can share all my public albums at once, including to people who might not have Google+ profiles.) Here are some more recent albums: a tour of the Baltics, a bike tour of the North Coast (of California), a camping tour of Croatia and Italy, and an expedition to Antarctica in 2017. By the way, here are some group photos with colleagues and students.

Although my favorite camera is a Canon 5D III, I'm gradually moving towards mirrorless interchangeable-lens cameras like the Panasonic GX1, Olympus OMD EM-5, Sony NEX-7, or Sony a7II/a9. Here are recent albums from Thailand and Cambodia using the NEX-7, and Chile using the a7II. Of course the best camera is the one you have with you, and cell phone cameras have been getting better - especially as they incorporate burst-mode capture and other kinds of computational photography. HDR+ mode on the Nexus 6 and Google Pixel smartphones is one example; it captures, aligns, and combines multiple shots to improve performance in low light and high dynamic range situations. The pictures at left were taken with the Nexus 6: the top image with HDR+ mode off, and the bottom image with HDR+ mode on. Click on the thumbnails to see them at full resolution. Look how much cleaner, brighter, and sharper the HDR+ image is. Also, the stained-glass window at the end of the nave is not over-exposed, and there is more detail in the side arches. This software was written by my team at Google.
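The real HDR+ pipeline is far more sophisticated (it works on raw sensor data, with tile-based alignment and robust frequency-domain merging), but the core capture-align-combine idea can be sketched in a few lines. This is my own toy illustration, not Google's code; the function name and brute-force search are mine:

```python
import numpy as np

def merge_burst(frames, search=8):
    """Toy burst merge: align each frame to the first by brute-force
    integer translation, then average.  Random sensor noise cancels
    in the average while the (registered) scene reinforces."""
    ref = frames[0].astype(np.float32)
    acc, count = ref.copy(), 1
    for f in frames[1:]:
        f = f.astype(np.float32)
        best, best_err = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(np.roll(f, dy, axis=0), dx, axis=1)
                err = np.mean((shifted - ref) ** 2)
                if err < best_err:
                    best_err, best = err, (dy, dx)
        acc += np.roll(np.roll(f, best[0], axis=0), best[1], axis=1)
        count += 1
    return acc / count
```

Averaging N well-aligned frames cuts noise by roughly a factor of the square root of N, which is why merged bursts look so much cleaner in low light.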

For more recent results, here are some albums of photos I've shot with the 2016 Google Pixel XL smartphone: Desolation Wilderness, Cologne and Paris, New York City, Fort Ross and Sonoma County. Press "i" ("Info") for per-picture captions. And here is an album of photos captured with the Pixel 2, including the portrait mode my team worked on. See also this review by DxO of the 2016 Pixel and 2017 Pixel 2, this interview in The Verge, this video about the Pixel 2 camera, these papers in SIGGRAPH Asia 2016 and SIGGRAPH 2018, and this article written by Google team member Florian Kainz about nighttime photography using cell phones. The 2018 Pixel 3 also contains computational photography features my team worked on, including Super Res Zoom, synthetic fill-flash, learning-based portrait mode, and Night Sight based on my prototype SeeInTheDark app. The pictures at left compare a scene in Central Park after sunset captured handheld and without flash using iPhone XS (top) versus Night Sight on a Pixel 3 (bottom). Night Sight has won numerous awards, including DP Review's 2018 Innovation of the Year and the Disruptive Device Innovation Award at Mobile World Congress 2019. See also these interviews by DP Review and CNET.

My research has recently focused on making cameras programmable. One concrete outcome of this project is our Frankencamera architecture, published in this SIGGRAPH 2010 paper and commercialized in Android's Camera2/HAL3 APIs. To help me understand the challenges of building photographic applications for a mobile platform, I tried writing a cell phone app myself. The result is SynthCam. By capturing, tracking, aligning, and blending a sequence of video frames, the app makes the near-pinhole aperture on an iPhone camera act like the large aperture of a single-lens-reflex (SLR) camera. This includes the SLR's shallow depth of field and resistance to noise in low light. The app is available for $0.99 in the iTunes app store. I don't expect to get rich from this app, but I learned a lot by writing it, and seeing it appear in the app store was a thrill. (Update - it's now free.) Here are a few of my favorite reviews of the app: MIT Technology Review, WiReD, The Economist.
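SynthCam's internals aren't reproduced here, but the synthetic-aperture trick it relies on can be sketched (a hypothetical illustration of the principle, not the app's code; the function name and inputs are my own): shift each frame so a tracked subject stays registered, then average. The subject stays sharp, while objects at other depths, whose per-frame offsets differ because of parallax, blur out - mimicking the shallow depth of field of a large-aperture lens.

```python
import numpy as np

def synthetic_aperture(frames, subject_offsets):
    """Toy synthetic aperture: undo each frame's tracked subject
    offset (dy, dx), then average the registered frames."""
    acc = np.zeros_like(frames[0], dtype=np.float32)
    for f, (dy, dx) in zip(frames, subject_offsets):
        acc += np.roll(np.roll(f.astype(np.float32), -dy, axis=0),
                       -dx, axis=1)
    return acc / len(frames)
```

Averaging the registered frames also reduces noise, which is the second SLR-like benefit the text mentions.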

In 1999 the National Academies published Funding a Revolution: Government Support for Computing Research. This landmark study, sometimes called the Brooks-Sutherland report, argued that research in computer science often takes 15 years to pay off. The iconic illustration from that report is reproduced at right. In 1996 Pat Hanrahan and I began working on light fields and synthetic focusing, supported by the National Science Foundation. In 2005 Ren Ng, a PhD student in our lab, worked out an optical design that allowed dense light fields to be captured by a handheld camera. This design enabled everyday photographs to be refocused after they are captured. In 2006 Ren started a company called Refocus Imaging to commercialize this technology. In 2011 that company, now called Lytro, announced its first camera. So 15 years from initial idea to first product. An exciting ride, but a long wait. At left is a picture I took using the Lytro. Click here for my public page of Lytro pictures.
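Ren's method operates on real plenoptic-camera data with fractional shifts, but the underlying shift-and-add idea is simple enough to sketch (my own toy version on an idealized 4D light field array, not Lytro's code): shift each sub-aperture view in proportion to its (u, v) position and a depth parameter alpha, then average; varying alpha moves the focal plane after capture.

```python
import numpy as np

def refocus(light_field, alpha):
    """Toy shift-and-add refocusing of a 4D light field L[u, v, y, x].
    Each sub-aperture image is translated by alpha times its offset
    from the aperture center, then all views are averaged."""
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W), dtype=np.float32)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            acc += np.roll(np.roll(light_field[u, v].astype(np.float32),
                                   dy, axis=0), dx, axis=1)
    return acc / (U * V)
```

Objects whose parallax matches alpha line up across views and come into focus; everything else is averaged over misaligned copies and blurs.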


In 2012 I was invited to give the commencement address at the Doctoral Hooding Ceremony of the University of North Carolina (from which I graduated with a doctorate in 1989). After the ceremony a number of people asked for the text of my address. Here it is, retrospectively titled "Where do disruptive ideas come from?". Or here is UNC's version, with more photos like the one at left. And here is the video.
Portrait photographer Louis Fabian Bachrach took some nice photographs (here and here) of me in 1997 for the Computer Museum in Boston (now closed). I do occasionally wear something besides blue dress shirts. Here are shots with other shirts, from August 2001 and July 2003. Yes, that's a Death Ride T-shirt in the last shot. I also rode in 2005, and yes, I finished all 5 passes - 15,000 feet of climbing. That's why I'm smiling in the official ride photograph (shown at left), taken at the top of Carson Pass after 12 hours of cycling.
During a 1998-99 sabbatical in Italy, my students and I digitized 10 of Michelangelo's statues in Florence. We called this the Digital Michelangelo Project. Here are some photographic essays about personal aspects of the sabbatical. In particular, I spent the year learning to carve in marble. At left is my first piece - a mortar with decorated supports. Here are some sculptures by my mother, who unlike me had real talent.
The Digital Michelangelo Project was not my first foray into measuring and rendering 3D objects. Here are some drawings I made in college for the Historic American Buildings Survey. In this project the measuring was done by hand - using rulers, architect's combs, and similar devices.
I still like finding and measuring old objects, especially if it involves getting dirty. This photographic essay describes a week I spent on an archaeological dig in the Roman Forum. The image at left, taken during the dig, graces the front cover of The Bluffer's Guide to Archaeology.
My favorite radio interview, by Noah Adams of National Public Radio's All Things Considered - about the diverging gaze directions of the two eyes in Michelangelo's David (June 13, 2000). (Click to hear the interview using RealAudio at 14.4 Kbps or 28.8 Kbps, or as a .wav file.)
This interview, by Guy Raz, weekend host of All Things Considered, runs a close second. It's about the Frankencamera (October 11, 2009), pictured at left. (Click here for NPR's web page containing the story and pictures, and here for a direct link to the audio as an .mp3 file.)
Finally, here's a text-only interview at SIGGRAPH 2003, by Wendy Ju, with reminiscences of my early mentors in computer graphics.
Speaking of computer graphics, I'm fond of the front cover of the SIGGRAPH 2001 proceedings. The image is from a paper (in the proceedings) on subsurface scattering, co-authored with Henrik Wann Jensen, Steve Marschner, and Pat Hanrahan. And check out this milk. This paper won a Technical Academy Award in 2004. Subsurface scattering is now ubiquitous in CG-intensive movies.
However, not everything went smoothly at SIGGRAPH 2001. A more-strenuous-than-expected after-SIGGRAPH hike inspired Pat Hanrahan's and my students to create this humorous movie poster. Click here for the innocent version of this story. And here for the real story.


A lot of my research relates to volume data. The cause may be genetic. My mother's cousin David Chesler is credited with the first demonstration of filtered backprojection, the dominant method used in computed tomography (CT) and positron emission tomography (PET) for combining multiple projections to yield 3D medical data. Here is a description of his contributions.
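Filtered backprojection itself is compact enough to sketch. This is a toy 2D parallel-beam version of my own (not Chesler's implementation, and not production CT code): ramp-filter each 1D projection in the frequency domain, then smear each filtered projection back across the image along its viewing direction and sum over all angles.

```python
import numpy as np

def fbp(sino, thetas, n):
    """Toy parallel-beam filtered backprojection.  sino[i] is the 1D
    projection taken at angle thetas[i]; returns an n-by-n image."""
    # Step 1: ramp-filter each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(sino.shape[1]))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    # Step 2: backproject - smear each filtered projection across the
    # image along its viewing direction, and sum over all angles.
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for p, t in zip(filtered, thetas):
        # detector coordinate seen by each pixel for this view angle
        s = (xx - c) * np.cos(t) + (yy - c) * np.sin(t) + c
        si = np.clip(np.round(s).astype(int), 0, n - 1)
        recon += p[si]
    return recon * np.pi / len(thetas)
```

Without the ramp filter, plain backprojection produces a blurry reconstruction; the filter is what makes the combined projections converge to the original cross-section.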
My father's genes also seem to be guiding my research tastes. Optics has been in my family for four generations. My father Barton Monroe Levoy and my grandfather Monroe Benjamin Levoy were opticians and sellers of eyeglasses through their company, Tura. At left is an early brochure.
Going back further, my great-grandfather Benjamin Monroe Levoy sold eyeglasses, cameras, microscopes, and other optical instruments in New York City a century ago. Here is a piece of stationery from his store. He later moved to 42nd street, as evidenced by the address on the case of these eyeglasses, remade as pince-nez with a retractor. (At left is a closeup of the embossed address.) And here is a wooden box he used to mail eyeglasses to customers. The stamp is dated 1902.
In the drawing (at left) illustrating that stationery, I believe you can see a microscope. In any case I have an old microscope from his store. This specimen of a silkworm mouth, which accompanied the microscope, appears in our SIGGRAPH 2006 paper on light field microscopy.
My great-grandfather apparently also sold binoculars from the store. This pair, made by Jena Glass about 80 years ago and inscribed with the name B.M. Levoy, New York, was recovered by a SWAT team during a drug raid in South Florida in 2008. They have undoubtedly passed through many hands during their long and storied life. It would be fascinating to watch a video of everything these lenses have seen.
When my students work at the optical bench, they're perplexed by my arcane knowledge of mirror technology. Returning to my mother's side of the family, my great-grandfather Jacob Chesler built a factory in Brooklyn (pictured at left) that made hardware. The building passed to my grandfather Nathan Chesler, who converted it to making decoratively bevelled mirrors. I spent many pleasant hours studying the factory's machinery, designed by my uncle Bertram Chesler, for silvering large plate-glass mirrors.

© 1994-2010 Marc Levoy
Last update: March 18, 2020 04:25:18 PM