Lighting Deep G-Buffers: Single-Pass, Layered Depth Images
with Minimum Separation Applied to Indirect Illumination

Michael Mara, NVIDIA and Williams College
Morgan McGuire, NVIDIA and Williams College
David Luebke, NVIDIA

A newer technical report on this work is available at

NVIDIA technical report (51 MB PDF)
NVIDIA technical report (low res) (1.5 MB PDF)
Video results (coming soon)


We introduce a new method for computing two-level Layered Depth Images (LDIs) [Shade et al. 1998] that is designed for modern GPUs. The method is order-independent, can guarantee a minimum depth separation between the layers, operates within small, bounded memory, and requires no explicit sorting. Critically, it also operates in a single pass over scene geometry. This is important because the cost of streaming geometry through a modern game engine pipeline can be high due to work expansion (from patches to triangles to pixels), matrix-skinning for animation, and the relative scarcity of main memory bandwidth compared to caches and registers.
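The per-pixel layer-selection rule can be illustrated with a small CPU sketch. This is only an illustration of the selection predicate under our own assumptions (the names `two_layer_ldi` and `min_sep` are ours): the first layer keeps the nearest fragment, and the second layer keeps the nearest fragment at least `min_sep` behind it, so near-coplanar surfaces such as decals do not occupy both layers. The report's GPU method makes an equivalent choice in a single geometry pass; this sketch is not that algorithm.

```python
import math

def two_layer_ldi(fragment_depths, min_sep):
    """Select two depth layers with a guaranteed minimum separation.

    fragment_depths: depths rasterized to one pixel, in any order.
    Returns (z1, z2): z1 is the nearest depth; z2 is the nearest depth
    at least `min_sep` behind z1, or math.inf if no such fragment exists.
    """
    # First layer: ordinary depth test (nearest fragment wins).
    z1 = min(fragment_depths, default=math.inf)
    # Second layer: nearest fragment that clears the separation threshold,
    # which rejects coplanar/decal fragments that sit just behind z1.
    z2 = min((z for z in fragment_depths if z >= z1 + min_sep),
             default=math.inf)
    return z1, z2
```

For example, a wall at depth 1.0 with a decal at 1.001 and a second wall at 5.0 yields layers (1.0, 5.0) with `min_sep = 0.1`; the decal is correctly excluded from the second layer.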

We apply the new LDI method to create Deep Geometry Buffers for deferred shading and show that two layers with a minimum depth separation make a variety of screen-space illumination effects surprisingly robust. We specifically demonstrate improved robustness for Scalable Ambient Occlusion [McGuire et al. 2012b], an extended multibounce screen-space radiosity [Soler et al. 2009], and screen-space reflection ray tracing. All of these produce results that are necessarily view-dependent, but in a manner that is plausible based on visible geometry and more temporally coherent than results without layers.
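To suggest why a second layer makes screen-space ray tracing more robust, here is a simplified 1-D ray-march sketch. All names, the fixed-thickness heuristic, and the scene setup are our own illustration, not the report's algorithm: a ray sample counts as a hit when its depth falls inside the assumed-thickness interval of either layer, so surfaces hidden behind the first depth layer can still be intersected.

```python
def march_reflection(layer1, layer2, x0, z0, dx, dz, steps, thickness):
    """March a reflection ray across a 1-D screen.

    layer1, layer2: per-pixel depths of the two G-buffer layers.
    (x0, z0): ray start in (pixel, depth); (dx, dz): per-step increments.
    Returns the pixel index of the first hit, or None on a miss.
    """
    x, z = x0, z0
    for _ in range(steps):
        x += dx
        z += dz
        px = int(round(x))
        if not (0 <= px < len(layer1)):
            return None  # ray left the screen
        # A sample hits if its depth lies inside the occupied interval
        # of either depth layer at this pixel.
        for layer in (layer1, layer2):
            zl = layer[px]
            if zl <= z <= zl + thickness:
                return px
    return None
```

With a foreground object at depth 0.5 occupying the first layer and a wall at depth 2.0 recorded only in the second layer, a ray traveling at depth 2.0 misses entirely when traced against the first layer alone but finds the wall when the second layer is included.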

Selected Images


BibTeX

@techreport{Mara2013LightingDeepGBuffers,
  author = {Michael Mara and Morgan McGuire and David Luebke},
  title = {Lighting Deep G-Buffers: Single-Pass, Layered Depth Images with Minimum Separation Applied to Indirect Illumination},
  month = {December},
  day = {13},
  year = {2013},
  pages = {17},
  institution = {NVIDIA Corporation},
  number = {NVR-2013-004},
  url = {}
}