This paper describes the CloudLight system for computing indirect
lighting asynchronously on an abstracted, computational "cloud,"
in support of real-time rendering for interactive 3D applications
on a mobile device, virtual reality HMD, gaming desktop, or other local client device.
Project Rocket Golfing is a game of space travel and discovery with simple touch-and-drag gameplay. It contains an infinite, procedurally generated universe.
The further you explore, the more the game changes. You'll encounter new game features as you reach more distant galaxies. Find ice planets, wormholes, aliens, verdant worlds, binary star systems, lost civilizations, and more.
Multisample antialiasing (MSAA) computes pixel coverage at high resolution and shading at low resolution
for efficient forward rendering without jagged edges or flickering.
Deferred shading addresses the combinatorial explosion of materials and lights
and is more efficient for renderers that use a prepass. These techniques are inherently incompatible.
We present a solution that allows high-resolution sampling of coverage and materials, but aggregates
those samples into clusters for fast, deferred shading. Our technique has higher quality and lower
space costs than 16x MSAA.
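The aggregation idea can be illustrated with a toy CPU sketch (our own simplification, not the paper's GPU pipeline; all function names and the normal-similarity heuristic are illustrative assumptions): group a pixel's subsamples into a few clusters of similar orientation, then shade once per cluster instead of once per sample, weighting by coverage.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def aggregate_samples(samples, cos_threshold=0.9, max_clusters=2):
    """Group subsamples (normal, albedo) into at most `max_clusters`
    clusters of similar orientation; return one aggregate per cluster:
    (mean normal, mean albedo, coverage weight)."""
    clusters = []
    for normal, albedo in samples:
        target = None
        for c in clusters:
            if dot(normal, normalize(c["n_sum"])) > cos_threshold:
                target = c
                break
        if target is None:
            if len(clusters) < max_clusters:
                target = {"n_sum": (0.0, 0.0, 0.0), "a_sum": 0.0, "count": 0}
                clusters.append(target)
            else:
                # Forced merge: fall back to the most similar cluster.
                target = max(clusters,
                             key=lambda c: dot(normal, normalize(c["n_sum"])))
        target["n_sum"] = tuple(s + x for s, x in zip(target["n_sum"], normal))
        target["a_sum"] += albedo
        target["count"] += 1
    total = len(samples)
    return [(normalize(c["n_sum"]), c["a_sum"] / c["count"], c["count"] / total)
            for c in clusters]
```

With 8 subsamples split across two surfaces, shading cost drops from 8 evaluations to 2, while coverage weights preserve the edge blend.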
We present an efficient GPU solution for screen-space 3D ray tracing against a depth buffer by adapting the perspective-correct DDA line rasterization algorithm. Compared to linear ray marching, this ensures sampling at a contiguous set of pixels and no oversampling. This paper provides, for the first time, full implementation details of a method that has been proven in the production of recent major game titles. After explaining the optimizations, we extend the method to support multiple depth layers for robustness. We include GLSL code and examples of pixel-shader ray tracing for several applications.
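The core idea behind the DDA march can be sketched on the CPU (a simplified illustration, not the paper's optimized GLSL: it assumes the ray's depth can be linearly interpolated in screen space, uses a simple thickness test, and all names are invented for this sketch):

```python
def trace_screen_space_ray(p0, p1, z0, z1, depth_buffer, thickness=0.05):
    """March from pixel p0 to p1, advancing one pixel per step along
    the major axis (DDA), and return the first pixel whose stored
    depth lies within `thickness` in front of the interpolated ray
    depth. Depths increase away from the camera."""
    x0, y0 = p0
    x1, y1 = p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)  # one pixel per step
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + (x1 - x0) * t)
        y = round(y0 + (y1 - y0) * t)
        ray_z = z0 + (z1 - z0) * t
        scene_z = depth_buffer[y][x]
        # Hit: the ray has passed behind the surface, but not by more
        # than the surface's assumed thickness.
        if scene_z <= ray_z <= scene_z + thickness:
            return (x, y)
    return None
```

Because the step count is the major-axis pixel extent, each pixel along that axis is sampled exactly once, which is the contiguous, no-oversampling property the abstract refers to.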
We describe a method that can produce two-layer G-buffers in a single pass over geometry on a GPU
and guarantee a minimum depth separation between them. We then apply this to computing robust
ambient obscurance, radiosity, and specular reflections in screen space in real time for
complex scenes like San Miguel.
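The minimum-separation guarantee can be stated as a simple per-pixel selection rule (a CPU sketch of the definition only, with invented names; the paper's contribution is achieving this in a single geometry pass on the GPU, which this toy code does not attempt):

```python
def two_layer_depths(fragment_depths, min_separation):
    """Given all fragment depths at one pixel, return (d1, d2):
    d1 is the nearest depth, and d2 is the nearest depth at least
    `min_separation` behind d1, or None if no fragment qualifies."""
    if not fragment_depths:
        return None, None
    d1 = min(fragment_depths)
    behind = [z for z in fragment_depths if z >= d1 + min_separation]
    d2 = min(behind) if behind else None
    return d1, d2
```

The separation constraint is what makes the second layer useful: it skips near-coincident geometry (decals, coplanar backfaces) and instead captures the surface that screen-space shading would otherwise miss.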
High-quality motion blur is an increasingly important and pervasive effect in interactive graphics that, even in the context of offline rendering, is often approximated using a post process. Recent motion blur post-process filters (e.g., [MHBO12, Sou13]) efficiently generate plausible results suitable for modern interactive rendering pipelines. However, these approaches may produce distracting artifacts, for instance, when different motions overlap in depth or when both large- and fine-scale features undergo motion. We address these artifacts with a more robust sampling and filtering scheme that incurs only a small additional runtime cost. We render plausible, temporally-coherent motion blur on several complex animation sequences, all in just 3 ms at a resolution of 1280x720. Moreover, our filter is designed to integrate seamlessly with post-process anti-aliasing and depth of field.
The bandwidth cost and memory footprint of vector buffers are limiting factors for GPU rendering in many applications. This article surveys time- and space-efficient representations for the important case of non-register, in-core, statistically independent unit vectors, with emphasis on GPU encoding and decoding. These representations are appropriate for unit vectors in a geometry buffer or attribute stream--where no correlation between adjacent vectors is easily available--or for those in a normal map where quality higher than that of DXN is required. We do not address out-of-core and register storage vectors because they favor minimum-space and maximum-speed alternatives, respectively.
We evaluate precision and its qualitative impact across these techniques and give CPU reference implementations. For those methods with good quality and reasonable performance, we provide optimized GLSL GPU implementations of encoding and decoding.
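As one concrete point in the design space such a survey covers, the octahedral mapping flattens the unit sphere onto a square with cheap encode and decode; a Python reference sketch in the spirit of the CPU reference implementations mentioned above (function names are ours):

```python
import math

def oct_encode(v):
    """Map a unit 3-vector to the square [-1,1]^2 by projecting the
    octahedron |x|+|y|+|z|=1 and folding the lower hemisphere."""
    x, y, z = v
    s = abs(x) + abs(y) + abs(z)
    u, w = x / s, y / s
    if z < 0.0:
        # Fold the lower hemisphere over the square's diagonals.
        u, w = ((1.0 - abs(w)) * (1.0 if u >= 0.0 else -1.0),
                (1.0 - abs(u)) * (1.0 if w >= 0.0 else -1.0))
    return u, w

def oct_decode(u, w):
    """Inverse of oct_encode; returns a renormalized unit 3-vector."""
    z = 1.0 - abs(u) - abs(w)
    if z < 0.0:
        u, w = ((1.0 - abs(w)) * (1.0 if u >= 0.0 else -1.0),
                (1.0 - abs(u)) * (1.0 if w >= 0.0 else -1.0))
    n = math.sqrt(u * u + w * w + z * z)
    return u / n, w / n, z / n
```

The two stored components are then quantized to a fixed bit budget; the trade-off between bit count and worst-case angular error is exactly the kind of precision analysis described above.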