Faculty: Prof. Morgan McGuire
Graphics Lab: TPL013

Computational Graphics is the science of enabling visual communication through computation. It is used in film, video games, medical imaging, engineering, and machine vision.

Williams has a world-class research program in computer graphics and offers several related undergraduate courses for students of all interests and abilities.


Our Latest Research Results   [All Papers...]

Weighted Blended Order-Independent Transparency, McGuire and Bavoil, JCGT, 2013
I3D'14 Best Presentation Award

We present a new method for rendering scenes with complex transparency that works on a wide range of existing hardware and consumes low bandwidth compared to alternative order-independent approaches.
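The weighted blended formulation replaces sorted over-compositing with two order-independent sums plus a product of transmittances, so fragments may arrive in any order. A minimal CPU sketch of the compositing math (the per-fragment weight is left as caller-supplied data, since the paper evaluates several depth-based weight functions):

```python
def composite_wboit(fragments, background):
    """Order-independent transparency composite in the weighted blended style.

    fragments: iterable of ((r, g, b), alpha, weight) tuples; the result
    must not depend on their order. background: (r, g, b) behind them.
    """
    accum = [0.0, 0.0, 0.0]   # sum of weighted, premultiplied colors
    accum_a = 0.0             # sum of weighted alphas (normalizer)
    revealage = 1.0           # product of (1 - alpha): background visibility
    for (r, g, b), a, w in fragments:
        accum[0] += r * a * w
        accum[1] += g * a * w
        accum[2] += b * a * w
        accum_a += a * w
        revealage *= (1.0 - a)
    if accum_a == 0.0:
        return background
    avg = [c / accum_a for c in accum]   # weighted average transparent color
    return tuple(avg[i] * (1.0 - revealage) + background[i] * revealage
                 for i in range(3))
```

Because both running sums and the transmittance product are commutative, swapping fragment order leaves the result bit-identical, which is what lets the GPU version use plain additive and multiplicative blending with no sort.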

Lighting Deep G-Buffers: Single-Pass, Layered Depth Images with Minimum Separation Applied to Indirect Illumination, Mara, McGuire, and Luebke, NVIDIA Technical Report, 2013

We describe a method that can produce two-layer G-buffers in a single pass over geometry on a GPU and guarantee a minimum depth separation between them. We then apply this to computing robust ambient obscurance, radiosity, and specular reflections in screen-space in real time for complex scenes like San Miguel.
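The selection rule behind the minimum-separation guarantee is compact: the second layer is the nearest fragment at least some distance behind the first. A sketch of that per-pixel rule (the paper evaluates this predicate during a single geometry pass, using the previous frame's first-layer depth so no extra pass is needed; this sketch only shows the selection logic):

```python
def two_layer_depths(frag_depths, min_sep):
    """Pick first- and second-layer depths for one pixel from all fragments
    that rasterized there, guaranteeing at least min_sep between layers."""
    if not frag_depths:
        return None, None
    first = min(frag_depths)                              # nearest fragment
    behind = [z for z in frag_depths if z >= first + min_sep]
    second = min(behind) if behind else None              # nearest qualifying
    return first, second
```

The minimum-separation constraint is what keeps the second layer from collapsing onto the back faces of thin geometry, which would add little information for indirect illumination.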

A Fast and Stable Feature-Aware Motion Blur Filter, Guertin, McGuire, and Nowrouzezahrai, NVIDIA Technical Report, 2013

High-quality motion blur is an increasingly important and pervasive effect in interactive graphics that, even in the context of offline rendering, is often approximated using a post process. Recent motion blur post-process filters (e.g., [MHBO12, Sou13]) efficiently generate plausible results suitable for modern interactive rendering pipelines. However, these approaches may produce distracting artifacts, for instance, when different motions overlap in depth or when both large- and fine-scale features undergo motion. We address these artifacts with a more robust sampling and filtering scheme that incurs only small additional runtime cost. We render plausible, temporally-coherent motion blur on several complex animation sequences, all in just 3ms at a resolution of 1280x720. Moreover, our filter is designed to integrate seamlessly with post-process anti-aliasing and depth of field.
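At its core, a motion blur post-process gathers color samples along a per-pixel screen-space velocity. A deliberately simplified scalar sketch of that gather (grayscale image, one velocity, no depth ordering or feature awareness, which is precisely the robustness this paper's filter adds on top):

```python
def motion_blur_pixel(image, x, y, velocity, num_samples=5):
    """Average samples along the pixel's velocity over shutter time
    t in [-0.5, 0.5), approximating the integral a real shutter records.

    image: 2D list of grayscale floats; velocity: (vx, vy) in pixels/frame.
    """
    h, w = len(image), len(image[0])
    vx, vy = velocity
    total = 0.0
    for i in range(num_samples):
        t = (i + 0.5) / num_samples - 0.5       # centered shutter times
        sx = min(w - 1, max(0, int(round(x + t * vx))))   # clamp to image
        sy = min(h - 1, max(0, int(round(y + t * vy))))
        total += image[sy][sx]
    return total / num_samples
```

Artifacts like those the abstract describes appear exactly when this naive gather crosses a depth discontinuity or mixes two different motions; the paper's contribution is deciding, per sample, which motion and layer should dominate.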

Plausible Blinn-Phong Reflection of Standard Cube MIP-Maps, McGuire, Evangelakos, Wilcox, Donow, and Mara, Technical Report, 2013

We describe the technique used in the G3D Innovation Engine 9.00 to produce reasonable real-time environment lighting. It adds two lines of code to a pixel shader to reasonably approximate Lambertian and Blinn-Phong glossy reflection of a standard cube map environment with a MIP-chain without preprocessing. That is, we combine Blinn’s BSDF with Blinn’s environment mapping in a modern physically-based way.
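The idea is to map the Blinn-Phong exponent to a cube-map MIP level, letting the coarser box-filtered levels stand in for broader specular lobes. A sketch of the general shape of such a mapping (the specific constants here are illustrative assumptions, not the report's derived values):

```python
import math

def glossy_mip_level(cube_face_width, smoothness_exponent):
    """Map a Blinn-Phong exponent to a cube-map MIP level.

    Exponent 0 (fully diffuse) reads the coarsest level; larger exponents
    (tighter lobes) read progressively finer levels of the MIP chain.
    """
    max_mip = math.log2(cube_face_width)      # coarsest level index
    return max(0.0, max_mip - 0.5 * math.log2(smoothness_exponent + 1.0))
```

In a shader this collapses to the advertised two lines: compute the level from the material exponent, then fetch the reflection vector with an explicit-LOD cube-map lookup.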

2D Polyhedral Bounds of a Clipped, Perspective-Projected 3D Sphere, Mara and McGuire, JCGT 2013

We show how to efficiently compute 2D polyhedral bounds of the (elliptic) perspective projection of a 3D sphere that has been clipped to the near plane. For the special case of a 2D axis-aligned bounding box, the algorithm is especially efficient.
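For the axis-aligned box case, the projected extent along each screen axis comes from the two tangent lines through the eye, computed independently in the xz and yz planes. A minimal unclipped sketch of that per-axis computation (the paper additionally handles spheres crossing the near plane, which this sketch assumes away):

```python
import math

def projected_interval(c, z, r):
    """Exact 1D screen-space extent (at unit focal length) of a sphere's
    perspective silhouette along one camera axis.

    Eye at the origin; c = sphere center coordinate on this axis,
    z = center depth, r = radius. Assumes z > r (sphere fully in front,
    no near-plane clipping). A tangent line through the eye with slope x
    satisfies (c - z*x)^2 = r^2 * (x^2 + 1); solving the quadratic gives
    the closed form below.
    """
    disc = r * math.sqrt(c * c + z * z - r * r)
    denom = z * z - r * r
    return (c * z - disc) / denom, (c * z + disc) / denom
```

Running this once per axis yields the 2D axis-aligned bounding box of the projected ellipse, the especially efficient special case the abstract mentions.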

GPU Ray Tracing, Parker et al., CACM 2013

The NVIDIA OptiX ray tracing engine is a programmable system designed for NVIDIA GPUs and other highly parallel architectures. The OptiX engine builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls.
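The "small set of programmable operations" composes naturally as higher-order functions. A toy sketch of that structure (plain Python standing in for OptiX's JIT-compiled GPU kernels; the primitive and hit representations here are hypothetical, not the OptiX API):

```python
def make_tracer(intersect, closest_hit, miss):
    """Combine user-supplied programs into a trace function, echoing how
    OptiX fuses intersection, closest-hit, and miss programs into one
    kernel. Here each 'program' is just a Python callable."""
    def trace(ray, primitives):
        best = None
        for prim in primitives:
            hit = intersect(ray, prim)        # user intersection program
            if hit is not None and (best is None or hit[0] < best[0]):
                best = hit                    # keep the nearest hit
        return closest_hit(ray, best) if best else miss(ray)
    return trace
```

The real engine does the same composition at compile time, specializing one kernel per combination of programs rather than dispatching through function values at runtime.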

Toward Practical Real-Time Photon Mapping: Efficient GPU Density Estimation.
Mara, Luebke, McGuire, I3D 2013

We describe the design space for real-time photon density estimation, the key step of rendering global illumination (GI) via photon mapping. We then detail and analyze efficient GPU implementations of four best-of-breed algorithms. All produce reasonable results in real time on an NVIDIA GeForce 670 at 1920x1080 for complex scenes with multiple-bounce diffuse effects, caustics, and glossy reflection. Across the designs we conclude that tiled, deferred photon gathering in a compute shader gives the best combination of performance and quality.
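The winning tiled design bins photons by screen tile so each pixel gathers only from nearby bins rather than the whole photon set. A minimal CPU sketch of that gathering pattern (2D photon positions and scalar power are simplifying assumptions; the compute-shader version bins projected photon volumes per tile):

```python
from collections import defaultdict

def build_photon_tiles(photons, tile_size):
    """Bin (x, y, power) photons by screen tile for localized gathering."""
    tiles = defaultdict(list)
    for x, y, power in photons:
        tiles[(int(x) // tile_size, int(y) // tile_size)].append((x, y, power))
    return tiles

def gather_radiance(px, py, tiles, tile_size, radius):
    """Sum photon power within `radius` of pixel (px, py), scanning only
    the 3x3 block of tiles around it (assumes radius <= tile_size)."""
    tx, ty = int(px) // tile_size, int(py) // tile_size
    total = 0.0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for x, y, power in tiles.get((tx + dx, ty + dy), ()):
                if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2:
                    total += power
    return total
```

On a GPU the same structure maps one workgroup per tile, with the tile's photon list staged in shared memory so every pixel in the tile reuses it.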

The Augmented Artist: Computation & Content Creation.
McGuire, I3D 2013 invited talk

Modern game production is in crisis. The primary bottleneck on quality, budgets, and schedules for most games is the limited available time of experienced artists. There is little opportunity to increase the number of artists because inflation-adjusted profit margins are stagnant or falling. Simultaneously, consumer expectations for the fidelity of the entertainment experience continue to rise. The solution to this crisis is to recognize that the digital artist's ultimate tool is computation, and that newer GPUs and the cloud disrupt historical trends in available computation for production workflow. We can't make more artists, but we can augment existing ones through computation to increase their effective skill and multiply their efforts. In this talk, I chart this space through case studies of production realities and research possibilities, including scalable assets and algorithms, procedural assets, digital content creation tools, and cloud & crowd resources.

Scalable High Quality Motion Blur and Ambient Occlusion.
Bukowski, Hennessy, McGuire, Osman, SIGGRAPH 2012 Courses

Producing visual effects that scale with resolution and hardware capabilities is a major challenge in real-time 3D graphics production today. Effects must run on older hardware, cost no more than linear time in resolution to support HD displays, and exhibit an increase in quality on faster hardware (that may not even exist today) without artist intervention. These course notes describe practical implementation details of two phenomenologically based algorithms for motion blur and ambient occlusion that exhibit this scalable property.

I've been preparing clean, easy-to-use versions of popular graphics research and education data for distribution. About half of the data is available now and the rest will be coming online throughout 2013.

Scalable Ambient Obscurance.
McGuire, Mara, and Luebke, HPG 2012

SAO is a new screen-space ambient occlusion/obscurance algorithm that produces radiosity-like ambient shadowing over distances from centimeters to meters. Most previous screen-space AO algorithms run disproportionately slowly as sampling radius and pixel density increase because they become dominated by main-memory DRAM operations (vs. on-chip L1 and L2 cache), thus limiting them to centimeter scale. SAO exhibits perfect scaling in the number of pixels sampled and has performance-per-pixel independent of screen density or world-space sampling radius.
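The key to that scaling is fetching depth from a MIP hierarchy, so samples far from the shaded pixel read coarse levels that stay resident in cache instead of scattering across full-resolution DRAM. A sketch of the level-selection rule (both constants here are tunable assumptions for illustration, not the paper's exact values):

```python
import math

def sao_mip_level(sample_distance_px, log_offset=3, max_mip=5):
    """Pick a depth-buffer MIP level for an AO sample.

    sample_distance_px: screen-space distance from the shaded pixel.
    Samples within 2**log_offset pixels read full resolution; beyond
    that, each doubling of distance steps one level coarser, capped
    at max_mip so normals reconstructed from depth stay usable.
    """
    if sample_distance_px < 1.0:
        return 0
    return min(max_mip, max(0, int(math.log2(sample_distance_px)) - log_offset))
```

Because the level grows with the log of sample distance, the working set touched per pixel is roughly constant regardless of the world-space radius, which is the cache behavior behind the claimed performance independence.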

A Reconstruction Filter for Plausible Motion Blur.
McGuire, Hennessy, Bukowski, and Osman, I3D 2012

This paper describes a novel filter for simulating motion blur phenomena in real time by applying ideas from offline stochastic reconstruction. The filter operates as a 2D post-process on a conventional framebuffer augmented with a screen-space velocity buffer. We demonstrate results on video game scenes rendered and reconstructed in real time on NVIDIA GeForce 480 and Xbox 360 platforms, and show that the same filter can be applied to cinematic post-processing of offline-rendered images and real photographs. The technique is fast and robust enough that we deployed it in a production game engine used at Vicarious Visions.
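The filter's first pass reduces the velocity buffer to a dominant velocity per tile (the paper's TileMax pass), which later passes dilate and gather along. A minimal sketch of that reduction:

```python
def tile_max_velocities(velocity, tile_size):
    """Reduce a per-pixel (vx, vy) velocity buffer to the largest-magnitude
    velocity in each tile, so the blur can gather along the neighborhood's
    dominant motion rather than per-pixel noise."""
    h, w = len(velocity), len(velocity[0])
    tiles = []
    for ty in range(0, h, tile_size):
        row = []
        for tx in range(0, w, tile_size):
            best = (0.0, 0.0)
            for y in range(ty, min(ty + tile_size, h)):
                for x in range(tx, min(tx + tile_size, w)):
                    vx, vy = velocity[y][x]
                    if vx * vx + vy * vy > best[0] ** 2 + best[1] ** 2:
                        best = (vx, vy)
            row.append(best)
        tiles.append(row)
    return tiles
```

Working at tile granularity keeps the expensive reconstruction filter's search radius bounded while still letting fast-moving objects smear beyond their own silhouettes.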
