Computational photography

Computational imaging refers to any image formation method that involves a digital computer. In contrast with digital imaging, which refers simply to representation of image data using symbols, computational imaging integrates sensing and data processing. Computational imaging is under development by a large and diverse community, as represented in a series of Optical Society of America conferences beginning with the 2001 Integrated Computational Imaging Systems topical meeting and continuing through the 2005, 2007 and 2009 Computational Optical Sensing and Imaging meetings.

Computational photography refers broadly to computational imaging techniques that enhance or extend the capabilities of digital photography. The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera.

The term was first used by Steve Mann, and possibly others, to describe their own research. Its current definition, which stems from a 2004 course at Stanford University and a 2005 symposium at MIT (see the links below), has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics. These areas are given below, organized according to a taxonomy proposed by Shree Nayar. Within each area is a list of techniques, and for each technique one or two representative papers or books are cited. Deliberately omitted from the taxonomy are image processing techniques (see also digital image processing) applied to traditionally captured images in order to produce better images; examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects. Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields; 4D, 6D, or 8D BRDFs; or other high-dimensional image-based representations.

Computational illumination

Controlling photographic illumination in a structured fashion, then processing the captured images to create new ones. Applications include image-based relighting, image enhancement, geometry and material recovery, and so forth.
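A minimal sketch of the image-based relighting idea, assuming a stack of photographs of a static scene, each captured with exactly one known light source switched on and with a linear (non-gamma-encoded) sensor response; by the linearity of light transport, a new lighting condition is then a weighted sum of the basis images. The function name and array layout below are illustrative, not taken from any particular system.

import numpy as np

def relight(basis_images, weights):
    """Synthesize a new illumination condition from per-light basis images.

    basis_images: array of shape (n_lights, H, W, 3), one photo per light source
    weights:      array of shape (n_lights,), desired intensity of each light
    """
    basis = np.asarray(basis_images, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    # Weighted sum over the light axis; clamp negative values that could
    # arise when negative weights are used.
    out = np.tensordot(w, basis, axes=([0], [0]))
    return np.clip(out, 0.0, None)

# Example: dim the first light, keep the second, add a quarter of the third.
# new_image = relight([img_light1, img_light2, img_light3], [0.5, 1.0, 0.25])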

Computational optics

Capture of optically coded images, followed by computational decoding to produce new images. Coded aperture imaging was mainly applied in astronomy and X-ray imaging to improve image quality: instead of a single pinhole, a pattern of pinholes forms the image, and deconvolution is performed to recover it. In coded exposure imaging, the on/off state of the shutter is coded during the exposure to modify the kernel of the motion blur; in this way motion deblurring becomes a well-conditioned problem. Similarly, in a lens-based coded aperture, the aperture can be modified by inserting a broadband mask, so that out-of-focus deblurring becomes a well-conditioned problem. A coded aperture can also improve the quality of light field acquisition using Hadamard transform optics.
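As a concrete illustration of the decoding step, the sketch below recovers a sharp image from a coded-exposure (flutter-shutter) capture of purely horizontal motion blur using frequency-domain deconvolution. The binary shutter code, the blur extent, and the Wiener-style regularizer are assumptions made for illustration; a real pipeline would calibrate the blur kernel and may use a different solver.

import numpy as np

def coded_exposure_deblur(blurred, shutter_code, blur_len, snr=100.0):
    """Deconvolve horizontal motion blur whose kernel follows the shutter code.

    blurred:      2-D grayscale image blurred by horizontal motion
    shutter_code: binary on/off sequence of the shutter during the exposure
    blur_len:     extent of the motion in pixels
    snr:          rough signal-to-noise ratio used for Wiener regularization
    """
    # Resample the shutter code to the blur length and normalize it to unit
    # sum, giving the motion-blur kernel implied by the coded exposure.
    code = np.asarray(shutter_code, dtype=np.float64)
    idx = np.round(np.linspace(0, len(code) - 1, blur_len)).astype(int)
    kernel = code[idx]
    kernel /= kernel.sum()

    # Embed the 1-D kernel in an image-sized point spread function and apply
    # a Wiener filter in the Fourier domain.
    psf = np.zeros_like(blurred, dtype=np.float64)
    psf[0, :blur_len] = kernel
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred.astype(np.float64))
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * wiener))

# Example (any broadband binary on/off sequence works for the sketch):
# sharp = coded_exposure_deblur(blurred_img, [1,0,1,1,0,1,0,0,1,1,1,0,1], blur_len=30)

The benefit of the coded exposure shows up in the filter: a broadband on/off pattern keeps the magnitude of H away from zero across frequencies, so the inversion remains well conditioned, whereas a plain box-shaped blur kernel has deep spectral nulls.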

Computational processing

Processing of non-optically coded images to produce new images.

Computational sensors

Detectors that combine sensing and processing, typically in hardware.

Early work in computer vision

Although computational photography is a currently popular buzzword, many of its techniques first appeared in the computer vision literature, either under other names or within papers aimed at 3D shape analysis. A few examples are:

External links

Overviews

Symposia

Tools

Courses