Darryl Gouder's portfolio

High Performance PBR for Stereoscopic Virtual Light Fields


Generating an image with physically based rendering is computationally intensive and time-consuming, and stereoscopy compounds the cost because separate images are required for the left and right eye. This dissertation tackles these problems by parallelising the renderer with distributed computing to reduce computation time. Building on existing stereo panoramic techniques, the project generates a pair of spherical stereo panoramas of a scene that can be viewed from a fixed position through an HMD. Because that viewpoint is fixed, a second solution is proposed using light fields and image-based rendering: the user can move about the scene while new views are synthesised on the fly from pre-rendered images using interpolation, again viewed stereoscopically through an HMD. Both solutions were implemented and evaluated, with some promising results.

Accelerating PAR

PAR, the proprietary University of Warwick physically based renderer, was accelerated using distributed computing to render images in a shorter time. A master-slave scheme was adopted, with the master distributing tiled jobs to each worker node. Since this was not a shared-memory implementation, the scene had to be copied to every node.
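PAR itself is proprietary, but the master-slave tiling scheme can be sketched in a few lines. The snippet below is a minimal illustration, not PAR's actual code: the tile size, function names, and the placeholder `render_tile` are all assumptions, and threads stand in for the cluster nodes that each hold their own copy of the scene.

```python
from concurrent.futures import ThreadPoolExecutor

TILE = 32  # tile width/height in pixels (illustrative choice)

def make_tiles(width, height, tile=TILE):
    """Split the image plane into (x, y, w, h) tile jobs for the master to hand out."""
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

def render_tile(job):
    """Stand-in for the per-node renderer: in the real system each node
    holds its own copy of the scene (no shared memory) and shades its tile."""
    x, y, w, h = job
    return x, y, [[0.0] * w for _ in range(h)]  # placeholder pixel data

def render_image(width, height, workers=4):
    """Master: farm tile jobs out to the workers and stitch the results."""
    framebuffer = [[0.0] * width for _ in range(height)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for x, y, pixels in pool.map(render_tile, make_tiles(width, height)):
            for row, scanline in enumerate(pixels):
                framebuffer[y + row][x:x + len(scanline)] = scanline
    return framebuffer
```

Tiling keeps the job granularity fine enough that a slow node does not stall the whole frame, at the cost of duplicating the scene on every node.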

Rendering Stereo Panorama Pairs

To render a stereo panorama we implemented the technique from Paul Bourke's paper: a rotating pair of cameras, each rendering a one-degree-wide vertical slice of the image. The cameras are rotated about a common centre to simulate the effect of a turning head, and the vertical field of view was 180 degrees. The two generated images were then viewed on the Oculus Rift using a viewer written in Unity. The image at the top of this article is one of the rendered stereo panoramas.
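The camera geometry for one slice can be sketched as follows. This is an illustrative reconstruction of the rotating-pair setup, not our actual renderer code: the eye separation value and the function names are assumptions, and the maths is shown in 2D (the horizontal plane only).

```python
import math

IPD = 0.065  # assumed interpupillary distance in metres

def slice_cameras(degrees, eye_sep=IPD):
    """Eye positions and view direction for one one-degree panorama slice.
    The rig rotates about the origin; each eye is offset half the eye
    separation along the axis perpendicular to the viewing direction."""
    theta = math.radians(degrees)
    view = (math.cos(theta), math.sin(theta))    # viewing direction for this slice
    right = (math.sin(theta), -math.cos(theta))  # perpendicular offset axis
    half = eye_sep / 2.0
    left_eye = (-right[0] * half, -right[1] * half)
    right_eye = (right[0] * half, right[1] * half)
    return left_eye, right_eye, view

def panorama_slices(step_deg=1):
    """360 slices, each rendered with a one-degree horizontal field of view."""
    return [slice_cameras(d) for d in range(0, 360, step_deg)]
```

Stitching the 360 left-eye slices and 360 right-eye slices side by side yields the two spherical panoramas that the HMD viewer displays.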

Rendering Virtual Light Fields

The stereo panoramas are only correct as long as the viewer's position in the scene does not change. To address this limitation we propose a new technique based on light field theory. Several spherical images of the scene are rendered and mapped onto a 3D grid; the user's eyes are positioned within the grid, and as the user moves their head, an interpolated image is synthesised for each eye from the nearby grid images deemed to be locally contributing. We experimented with various interpolation techniques and achieved real-time performance, since synthesising an image from the light field is not computationally heavy and the implementation made use of compute shaders. Below we show an interpolated image alongside a path-traced image of the same viewpoint. Although visual fidelity was not fully preserved, the method proved useful for quick prototyping.
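One simple form of the distance-weighted blending can be sketched as below. This is a hypothetical inverse-distance scheme shown on a single pixel; the report evaluates several interpolation variants, and in the real system the blend runs per pixel in a compute shader rather than in Python.

```python
import math

def blend_weights(eye, image_positions):
    """Normalised inverse-distance weights for the locally contributing
    grid images around the eye position."""
    dists = [math.dist(eye, p) for p in image_positions]
    if any(d == 0 for d in dists):  # eye sits exactly on a sample point
        return [1.0 if d == 0 else 0.0 for d in dists]
    inv = [1.0 / d for d in dists]
    total = sum(inv)
    return [w / total for w in inv]

def interpolate_pixel(eye, samples):
    """Blend one RGB pixel from (grid_position, colour) samples in the cell."""
    weights = blend_weights(eye, [p for p, _ in samples])
    return tuple(sum(w * colour[i] for w, (_, colour) in zip(weights, samples))
                 for i in range(3))
```

Because the weights sum to one, the synthesised view fades smoothly between grid images as the eye moves, and degenerates to an exact lookup when the eye coincides with a sample point.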


The image above was created by interpolating a group of eight images in the light field, each weighted according to its distance from the eye position.
The same viewpoint rendered with the path tracer, shown for comparison of visual quality.

Final Remarks

The final report can be found here. This is not an exact copy of the one I submitted: one of the 3D scenes is proprietary to a particular company and images of it may not be published, so I have removed the sections that describe it.

More Images

Distortion caused by the spherical projection. The HMD viewing tool was written in Unity; the HMD of choice was the Oculus Rift.