We describe a method that can produce two-layer G-buffers in a single pass over geometry on a GPU
and guarantee a minimum depth separation between them. We then apply this to computing robust
ambient obscurance, radiosity, and specular reflections in screen space, in real time, for
complex scenes such as San Miguel.
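The key invariant is easy to state per pixel: the second layer holds the nearest fragment that lies at least a minimum distance behind the first. Below is a minimal sketch of that selection rule from a sorted fragment list; the helper name and list-based formulation are hypothetical, since the actual method enforces this in a single rasterization pass on the GPU rather than from per-pixel lists.

```python
def two_layer_depths(fragment_depths, min_separation):
    """Select first- and second-layer depths for one pixel.

    Keeps the nearest fragment, then the nearest fragment that lies at
    least `min_separation` behind it, guaranteeing the depth separation
    between the two G-buffer layers. Illustrative CPU sketch only; the
    paper's method does this in one geometry pass on the GPU.
    """
    depths = sorted(fragment_depths)
    if not depths:
        return None, None
    first = depths[0]
    second = next((d for d in depths if d >= first + min_separation), None)
    return first, second
```

Note that nearby fragments such as decals or coplanar geometry collapse into the first layer instead of wasting the second layer on a surface at essentially the same depth.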
High-quality motion blur is an increasingly important and pervasive effect in interactive graphics that, even in the context of offline rendering, is often approximated as a post process. Recent motion blur post-process filters (e.g., [MHBO12, Sou13]) efficiently generate plausible results suitable for modern interactive rendering pipelines. However, these approaches may produce distracting artifacts, for instance, when different motions overlap in depth or when both large- and fine-scale features undergo motion. We address these artifacts with a more robust sampling and filtering scheme that incurs only a small additional runtime cost. We render plausible, temporally-coherent motion blur on several complex animation sequences, all in just 3 ms at a resolution of 1280×720. Moreover, our filter is designed to integrate seamlessly with post-process anti-aliasing and depth of field.
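The general shape of such a post-process filter is a gather: each output pixel averages color samples taken along a screen-space velocity vector. The sketch below shows only that basic gather step in Python; the function name is hypothetical, and it deliberately omits the per-tile dominant velocities and depth-aware sample weights that the robust filters described above use to handle overlapping motions.

```python
def motion_blur_pixel(image, x, y, velocity, num_samples=8):
    """Average scalar samples along a pixel's screen-space velocity.

    A minimal gather-style blur sketch: steps along the (vx, vy)
    vector centered on the pixel and averages the values it lands on,
    clamping sample positions to the image bounds. Real filters add
    tile-based velocity dilation and depth-aware weighting.
    """
    h, w = len(image), len(image[0])
    vx, vy = velocity
    total = 0.0
    for i in range(num_samples):
        t = (i + 0.5) / num_samples - 0.5   # offsets in [-0.5, 0.5)
        sx = min(max(int(round(x + t * vx)), 0), w - 1)
        sy = min(max(int(round(y + t * vy)), 0), h - 1)
        total += image[sy][sx]
    return total / num_samples
```

With zero velocity this degenerates to the original pixel value, which is why static regions pass through such filters unchanged.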
The bandwidth cost and memory footprint of vector buffers are limiting factors for GPU rendering in many applications. This article surveys time- and space-efficient representations for the important case of non-register, in-core, statistically independent unit vectors, with emphasis on GPU encoding and decoding. These representations are appropriate for unit vectors in a geometry buffer or attribute stream, where no correlation between adjacent vectors is easily available, or for those in a normal map where quality higher than that of DXN is required. We do not address out-of-core and register-storage vectors because they favor minimum-space and maximum-speed alternatives, respectively.
We evaluate precision and its qualitative impact across these techniques and give CPU reference implementations. For those methods with good quality and reasonable performance, we provide optimized GLSL GPU implementations of encoding and decoding.
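One representative member of this family is the octahedral encoding, which maps the unit sphere onto an octahedron and unfolds it into a square, so a unit vector round-trips through just two bounded values. A CPU sketch of the mapping, assuming the standard fold-over-the-diagonals construction (this is an illustration, not one of the article's optimized GLSL implementations):

```python
import math

def oct_encode(v):
    """Octahedral mapping: unit vector -> two values in [-1, 1]."""
    x, y, z = v
    s = abs(x) + abs(y) + abs(z)       # project onto the octahedron
    px, py = x / s, y / s
    if z < 0:                          # fold the lower hemisphere outward
        px, py = ((1 - abs(py)) * math.copysign(1, px),
                  (1 - abs(px)) * math.copysign(1, py))
    return px, py

def oct_decode(e):
    """Inverse mapping: two values in [-1, 1] -> unit vector."""
    px, py = e
    z = 1 - abs(px) - abs(py)
    if z < 0:                          # undo the fold
        px, py = ((1 - abs(py)) * math.copysign(1, px),
                  (1 - abs(px)) * math.copysign(1, py))
    n = math.sqrt(px * px + py * py + z * z)
    return px / n, py / n, z / n
```

In practice the two encoded values are then quantized, e.g., to two 16-bit components, and the precision and quality trade-offs of that quantization are exactly what the survey measures.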
We describe the technique used in the G3D Innovation Engine 9.00 to produce plausible real-time environment lighting. It adds two lines of code to a pixel shader to reasonably approximate Lambertian and Blinn-Phong glossy reflection of a standard cube map environment with a MIP chain, without preprocessing. That is, we combine Blinn's BSDF with Blinn's environment mapping in a modern physically-based way.
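The underlying idea is that the cube map's MIP chain already contains progressively blurred versions of the environment, so a rougher surface can simply read a coarser level: a high Blinn-Phong exponent (a tight lobe) samples a fine MIP, a low exponent samples a coarse one, and the coarsest level acts as an irradiance-like average for the Lambertian term. The sketch below illustrates one plausible exponent-to-level mapping; the specific formula is an assumption for illustration, not the one from the article.

```python
import math

def glossy_mip_level(blinn_phong_exponent, num_mip_levels):
    """Choose a cube-map MIP level from a Blinn-Phong exponent.

    Rougher surfaces (lower exponents) read coarser, blurrier MIP
    levels; exponent 0 degenerates to the coarsest level, which
    approximates diffuse (Lambertian) environment lighting. The
    mapping below is an illustrative assumption, not G3D's formula.
    """
    # Treat the lobe as narrowing roughly with log2(exponent + 1).
    sharpness = math.log2(blinn_phong_exponent + 1.0)
    level = (num_mip_levels - 1) - sharpness
    return min(max(level, 0.0), num_mip_levels - 1.0)
```

In a shader this level would be passed to an explicit-LOD fetch such as GLSL's `textureLod`, which is what makes the technique only a couple of lines long at the point of use.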