This document covers a variety of topics related to working with pbrt-v4, the rendering system described in the forthcoming fourth edition of Physically Based Rendering: From Theory to Implementation, by Matt Pharr, Wenzel Jakob, and Greg Humphreys. Because most users of pbrt are also developers who work with the system's source code, this guide also includes coverage of a number of topics related to the system's structure and organization.
If you find errors in this text or have ideas for topics that should be discussed, please either submit a bug in the pbrt-v4 issue tracker, or send the authors an email.
The system has seen many changes since the third edition. To figure out how to use the new features, you may want to look at the example scene files and read through the source code to learn more details of the following.
Major changes include:
- Spectral rendering: rendering computations are always performed using point-sampled spectra; the use of RGB color is limited to the scene description (e.g., image texture maps) and final image output.
- Modernized volumetric scattering
- An all-new VolPathIntegrator based on the null-scattering path integral formulation of Miller et al. 2019 has been added.
- Tighter majorants are used for null-scattering with the GridDensityMedium via a separate low-resolution grid of majorants.
- Both emissive volumes and volumes with RGB-valued absorption and scattering coefficients are now supported.
- Support for rendering on GPUs is available on systems that have CUDA and OptiX.
- The GPU path provides all of the functionality of the CPU-based VolPathIntegrator, including volumetric scattering, subsurface scattering, all of pbrt's cameras, samplers, shapes, lights, materials and BxDFs, etc.
- Performance is substantially better than rendering on the CPU.
- New BxDFs and Materials
- The provided BxDFs and Materials have been redesigned to be more closely tied to physical scattering processes, along the lines of Mitsuba's materials. (Among other things, the kitchen-sink UberMaterial is now gone.)
- Measured BRDFs are now represented using Dupuy and Jakob's approach.
- Scattering from layered materials is accurately simulated using Monte Carlo random walks (after Guo et al. 2018).
- A variety of light sampling improvements have been implemented.
- "Many-light" sampling is available via light BVHs (Conty and Kulla 2018).
- Solid angle sampling is used for triangle (Arvo 1995) and quadrilateral (Ureña et al. 2013) light sources.
- A single ray is now traced for both indirect lighting and BSDF-sampled direct lighting.
- Warp product sampling is used for approximate cosine-weighted solid angle sampling (Hart et al. 2019).
- An implementation of Bitterli et al.'s environment light portal sampling technique is included.
- Rendering can now be performed in absolute physical units with modeling of real cameras as per Langlands and Fascione 2020.
- And also...
- Various improvements have been made to the Sampler classes, including better randomization and a new sampler that implements Ahmed and Wonka's blue noise Sobol' sampler.
- A new GBufferFilm that provides position, normal, albedo, etc., at each pixel is now available. (This is particularly useful for denoising and ML training.)
- Optional path regularization.
- A bilinear patch primitive has been added (Reshetov 2019).
- Various improvements to ray–shape intersection precision.
- Most of the low-level sampling code has been factored out into stand-alone functions for easier reuse. Also, functions that invert many sampling techniques are provided.
- Unit test coverage has been substantially increased.
We have also made a refactoring pass throughout the entire system, cleaning up various APIs and data types to improve both readability and usability.
File format and scene description
We have tried to keep the scene description file format as close to pbrt-v3's as possible. However, progress in other parts of the system required changes to the scene description format. pbrt now provides an --upgrade command-line option that can usually automatically update pbrt-v3 scene files for use with pbrt-v4. See the pbrt-v4 File Format documentation for more information.
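For example, upgrading a pbrt-v3 scene file might look like the following; we assume here, as with the --toply example later in this guide, that the upgraded scene is written to standard output and so is redirected to a new file (the file names are placeholders):

$ pbrt --upgrade scene-v3.pbrt > scene-v4.pbrt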
Images that encode directional distributions (such as environment maps) should now be represented using Clarberg's equal-area mapping. pbrt's imgtool utility provides a makeequiarea operation that converts equirectangular environment maps (as used in pbrt-v3) to this parameterization.
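Such a conversion might look like the following sketch; the --outfile flag name is an assumption on our part, so run imgtool help makeequiarea for the exact options:

$ imgtool makeequiarea envmap-latlong.exr --outfile envmap-equiarea.exr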
Building pbrt
Please see the README.md file in the pbrt-v4 distribution for general information about how to check out and compile the system.
Porting to different targets
pbrt should compile out of the box for reasonably recent versions of Linux, FreeBSD, OpenBSD, OS X, and Windows. A C++ compiler with support for C++17 is required.
The CMakeLists.txt file does its best to automatically determine the capabilities and limitations of the system's compiler and to determine which header files are available. If pbrt doesn't build out of the box on your system and you're able to figure out the changes needed in the CMakeLists.txt file, we'd be delighted to receive a github pull request. Alternatively, open a bug in the issue tracker that includes the compiler output from your failed build and we'll try to help get it running.
Note that if extensive changes to pbrt are required to build it on a new target, we may not accept the pull request, as it's also important that the source code on github be as close as possible to the source code in the physical book.
Debugging
Debugging a ray tracer can be its own special kind of fun. When the system crashes, it may take hours of computation to reach the point where the crash occurs. When an incorrect image is generated, chasing down why the image is usually, but not always, correct can be a very tricky exercise.
When trouble strikes, it's usually best to start by rendering the scene again using a debug build of pbrt. Debug builds of the system not only include debugging symbols and aren't highly optimized (so that the program can be effectively debugged in a debugger), but also include more runtime assertions, which may help narrow down the issue. We find that debug builds are generally three to five times slower than optimized builds; thus, if you're not debugging, make sure you're not using a debug build! (See the pbrt-v4 README.md file for information about how to create a debug build.)
One of the best cases for debugging is a failing assertion. This at least gives an initial clue to the problem: assuming that the assertion is not itself buggy, the debugging task is just to figure out the sequence of events that led to it failing. In general, we've found that taking the time to add carefully considered assertions to the system more than pays off in terms of reducing future time working backward when things go wrong.
There are a number of improvements in pbrt-v4 that make debugging easier than it was in previous versions:
- The renderer is now deterministic, which means that if one renders a crop window of an image (or even a single pixel), a bug that manifested itself when rendering the full image should still appear when a small part of the image is rendered. The --pixel and --pixelbounds command-line options can be used to isolate small regions of images for debugging; an example invocation is shown after this list.
- If pbrt crashes or an assertion fails during rendering, the error message will often include directions to re-run the renderer using a specified --debugstart command-line option. Doing so will cause pbrt to trace just the single ray path that failed, which often makes it possible to quickly reproduce a bug or test a fix.
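As a sketch of the first of those options, a single pixel or a small region of pixels can be selected as follows; the exact coordinate syntax is an assumption on our part, so check pbrt --help for the precise argument formats:

$ pbrt --pixel 120,200 scene.pbrt
$ pbrt --pixelbounds 100,140,180,220 scene.pbrt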
If the system crashes outright (e.g., with a segmentation violation), then the issue is likely corruption in the memory heap or another problem related to dynamic memory management. For these sorts of bugs, we've found valgrind and address sanitizer to be effective.
If the crash happens intermittently and especially if it doesn't present itself when running with a single thread (--nthreads=1), the issue is likely due to a race condition or other issue related to parallel execution. We have found the helgrind tool to be very useful for debugging threading-related bugs. (See also ThreadSanitizer.)
For help with chasing down more subtle errors, most of the classes in the system provide a ToString() method that returns a std::string describing the values stored in an object. This can be useful when printing out debugging information. If you find it useful to add these methods to a class that doesn't currently have them implemented, please send us a pull request with the corresponding changes so that they can be made available to other pbrt users.
Debugging on the GPU can be more difficult than on the CPU due to the massive parallelism of GPUs and more limited printing and logging capabilities. In this case, it can be useful to try rendering the scene on the CPU using the --wavefront command-line option; this causes the CPU to run the same code as the GPU. The program's execution may not follow exactly the same path and compute the same results, however, due to slight differences in intersection points returned by the GPU ray tracing hardware compared to pbrt's CPU-based ray intersection code.
Unit tests
We have written unit tests for some parts of the system (primarily for new functionality added with pbrt-v4, but some to test preexisting functionality). Running the pbrt_test executable, which is built as part of the regular build process, causes all tests to be executed. Unit tests are written using the Google C++ Testing Framework, which is included with the pbrt distribution. See the Google Test Primer for more information about how to write new tests.
We have found these tests to be quite useful when developing new features, testing the system after code changes, and when porting the system to new targets. We are always happy to receive pull requests with additions to the system's tests.
Pull requests
We're always happy to get pull requests or patches that improve pbrt. However, we are unlikely to accept pull requests that significantly change the system's structure, as we don't want the "master" branch to diverge too far from the contents of the book. (In such cases, however, we certainly encourage you to maintain a separate fork of the system on github and to let people know about it on the pbrt mailing list.)
Pull requests that fix bugs in the system or improve portability are always welcome.
Example scenes
See the resources page for links to a variety of sources of interesting scenes to render with pbrt-v4.
Choosing an integrator
pbrt-v4 provides approximately ten choices for the "integrator" that is used to compute solutions to the rendering equation. In most cases, the default "volpath" integrator is the most effective choice; it handles complex direct and indirect lighting and offers state-of-the-art algorithms for rendering participating media. (When rendering on the GPU or using the "wavefront" integrator on the CPU, there is no choice of integrator and the integrator used has equivalent functionality to the "volpath" integrator.)
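For reference, the integrator is selected in the scene description file before WorldBegin; a minimal declaration looks like the following (the maxdepth value is just an illustrative choice):

Integrator "volpath" "integer maxdepth" [ 8 ]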
In scenes with focused indirect light due to specular reflection (caustics) or with other forms of difficult-to-sample indirect lighting, the "bdpt" integrator, which uses bidirectional path tracing, may be a better choice. However, its sampling algorithms for volumetric media (and especially chromatic volumetric media) are not as good as those of the "volpath" integrator. Further, it has the disadvantage that its performance doesn't scale well as the maximum depth parameter is increased, since the number of shadow rays traced to connect the camera and light paths grows quadratically with path length.
For scenes with especially challenging indirect lighting or caustics, the "mlt" integrator may be more effective than the "bdpt" integrator. It applies Metropolis sampling algorithms, which allow the reuse of light-carrying paths; this can improve results when such paths are challenging to sample. However, this benefit comes at the cost of increased correlation between paths, which can manifest itself as low-frequency noise in images. Further, the "mlt" integrator is built on top of the "bdpt" integrator and so inherits its shortcomings with respect to chromatic media.
The "sppm" integrator is an alternative to the "bdpt" integrator forscenes with tricky indirect lighting and caustics; it is based on thestochastic progressive photon mapping algorithm. When it is used, errorin images is manifested with low-frequency noise at low sampling rates.This integrator does not support volumetric scattering.
The remaining integrators are primarily useful for pedagogical purposes or as baselines for evaluating other integration algorithms.
Choosing a sampler
pbrt-v4 also provides a variety of samplers that are used by integrators to generate sample points for Monte Carlo integration. When rendering complex scenes, most of them give similar results, especially at higher sampling rates.
The default "zsobol" sampler is especially effective at low samplingrates; it decorrelates sample values at nearby pixels which tends tocause error in the image to have a "blue noise" (i.e., high frequency)distribution. This tends to be more visually pleasing to humanobservers than lower-frequency noise and is generally more friendly inputto provide to denoising algorithms.
If higher sampling rates are to be used, the "halton" or "sobol" sampler is likely to give slightly better results. Note that the "independent" and "stratified" samplers should not generally be used except as a baseline for comparing the performance of more sophisticated samplers.
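As with the integrator, the sampler is specified before WorldBegin; a minimal example follows (the sample count is an arbitrary illustrative value):

Sampler "zsobol" "integer pixelsamples" [ 64 ]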
GPU rendering
If your system has a supported GPU and if pbrt was compiled with GPU support, the GPU rendering path can be selected using the --gpu command-line option. When used, any integrator specified in the scene description file is ignored, and an integrator that matches the functionality of the VolPathIntegrator on the CPU is used: unidirectional path tracing with state-of-the-art support for volumetric scattering.
The GPU rendering path can also execute on the CPU: specify the --wavefront command-line option to use it. This capability can be especially useful for debugging. We note, however, that execution on the CPU and GPU will not give precisely the same results, largely due to ray-triangle intersections being computed using different algorithms, leading to minor floating-point differences.
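Selecting between the two paths is just a matter of the command-line flag; the scene file name here is a placeholder:

$ pbrt --gpu scene.pbrt
$ pbrt --wavefront scene.pbrt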
Interactive rendering
pbrt-v4 provides an interactive rendering mode that is enabled using the --interactive command-line option. When it is used, pbrt opens a window and displays the image as it is rendered. Rendering restarts from the first sample after the camera moves, and pbrt exits as usual, writing out the final image once the specified number of pixel samples have been taken. (Thus, you may want to specify a very large number of pixel samples if you do not want pbrt to exit.)
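For example (the scene file name is a placeholder):

$ pbrt --interactive scene.pbrt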
The following keyboard controls are provided:
- w, a, s, d: move the camera forward and back, left and right.
- q, e: move the camera down and up, respectively.
- Arrow keys: adjust the camera orientation.
- B, b: respectively increase and decrease the exposure ("brightness").
- c: print the transformation matrix for the current camera position.
- -, =: respectively decrease and increase the rate of camera movement.
Note that using many CPU cores, using pbrt's GPU rendering path, or rendering low-resolution images will be necessary for interactive performance in practice.
Working with images
pbrt is able to write images in a variety of formats, including OpenEXR, PNG, and PFM. We recommend the use of OpenEXR if possible, as it is able to represent high dynamic range images and has the flexibility to encode images with additional geometric information at each pixel as well as spectral images (see below).
While many image viewers can display RGB OpenEXR images, it is worthwhile to use a capable viewer that allows close inspection of pixel values, makes it easy to determine the coordinates of a pixel, and supports display of additional image channels. We have found Thomas Müller's tev image viewer to be an excellent tool for working with images rendered by pbrt; it runs on Windows, Linux, and OS X.
The pbrt distribution also includes a command-line program for working with images, imgtool. It provides a range of useful functionality, including conversion between image formats and various ways of computing the error of images with respect to a reference. Run imgtool to see a list of the commands it offers. Further documentation about a particular command can be found by running, e.g., imgtool help convert.
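For instance, converting an EXR rendering to a PNG might look like the following sketch; the --outfile flag name is an assumption on our part, so consult imgtool help convert for the actual options:

$ imgtool convert render.exr --outfile render.png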
"Deep" images with auxiliary information
When the "gbuffer" film is used, pbrt will write out EXR images thatinclude the following image channels. The auxiliary information theyinclude is especially useful for denoising rendered images and for theuse of rendered images for ML training:
- {R,G,B}: Pixel color (as is also output by the regular "rgb" film).
- Albedo.{R,G,B}: red, green, and blue albedo of the first visible surface.
- P{x,y,z}: x, y, and z components of the position.
- u, v: u and v coordinates of the surface parameterization.
- dzd{x,y}: partial derivatives of camera-space depth z with respect to raster-space x and y.
- N{x,y,z}: x, y, and z components of the geometric surface normal.
- Ns{x,y,z}: x, y, and z components of the shading normal, including the effect of interpolated per-vertex normals and bump or normal mapping, if present.
- Variance.{R,G,B}: sample variance of the red, green, and blue pixel color.
- RelativeVariance.{R,G,B}: relative sample variance of red, green, and blue.
The geometric quantities (position, normal, and screen-space z derivatives) are in camera space by default. Alternatively, the "gbuffer" film's "coordinatesystem" parameter can be used to specify world space output; see the file format documentation for details. Note also that pbrt records the camera-from-world and NDC-from-world transformation matrices in the metadata of the EXR files that it generates; these can be helpful when working with those geometric values.
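Selecting this film in the scene description is a small change; in the sketch below, the "world" value for "coordinatesystem" is our reading of the option described above (see the file format documentation for the authoritative parameter list):

Film "gbuffer" "string filename" "render.exr"
    "string coordinatesystem" "world"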
Spectral output
The "spectral" film records images where a specified range ofwavelengths is discretized into buckets the radiance in each wavelengthrange is recorded separately. The images are then stored using OpenEXRbased on an encoding proposedby Fichet et al. (Italso stores "R", "G", and "B" color channels so that the images itgenerates can be viewed in non-spectral-aware image viewers.)
Converting scenes to pbrt’s format
For scenes in a format that can be read by assimp, we have found that converting them to pbrt's format using assimp's pbrt-v4 exporter usually works well. It is the preferred approach for formats like FBX. See also the scene exporters section in the resources page for information about exporters from specific modeling and animation systems.
However, a scene exported using assimp or one of the other exporters is unlikely to render beautifully immediately after export. Here are some suggestions for how to take an initial export and turn it into something that looks great.
First, you may find it useful to run
$ pbrt --toply scene.pbrt > newscene.pbrt
This will convert triangle meshes into more compact binary PLY files, giving you a much smaller pbrt scene file to edit and a scene that will be faster to parse, leading to shorter start-up times.
The Camera
Next, if the exporter doesn't include camera information, the first thing to do is to find a good view. If your computer has sufficient performance, the --interactive option to pbrt allows you to interactively position the camera and then print the transformation matrix that positions the camera (see above).
Lacking interactive rendering, the "spherical" camera (which renders an image in all directions) can be useful for orienting yourself and for finding a good initial position for the camera. (Setting the "string mapping" parameter to "equirectangular" gives a spherical image that is easier to reason about than the default equal-area mapping.) Keep rendering images and adjusting the camera position to taste. (For efficiency, you may want to use as few pixel samples as you can tolerate and learn to squint and interpret noisy renderings!) Then, you can use the camera position you've chosen as the basis for specifying a LookAt transformation for a more conventional camera model.
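Concretely, the exploratory and final setups might look like the following sketch; the eye, look-at, and up vectors are placeholder values to be replaced with whatever viewpoint you settle on:

# Exploratory rendering in all directions:
Camera "spherical" "string mapping" "equirectangular"

# Once a good viewpoint has been found:
LookAt 3 4 1.5   0 0.5 0   0 0 1
Camera "perspective" "float fov" [ 45 ]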
Lighting
If the lighting hasn't been set up, it can be helpful to have a point light source at the camera's position while you're placing the camera. Adding a light source like the following to your scene file does this in a way that ensures that the light moves appropriately to wherever the camera has been placed. (You may need to scale the intensity up or down for good results; remember the radius-squared falloff!)
AttributeBegin
    CoordSysTransform "camera"
    LightSource "point" "color I" [10 10 10]
AttributeEnd
Once the camera is placed, we have found that it's next useful to set up approximate light sources. For outdoor scenes, a good HDR environment map is often all that is needed for lighting. (You may want to consider using imgtool makesky to make a realistic HDR sky environment map, or see polyhaven, which has a wide variety of high-quality and freely-available HDR environment maps.)
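However the map is obtained (imgtool makesky or polyhaven), it is then referenced from the scene with an "infinite" light source; the file name below is a placeholder:

LightSource "infinite" "string filename" "sky.exr"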
For indoor scenes, you may want a combination of an environment map for the outside and point and/or area light sources for interior lights. You may find it useful to examine the scene in the modeling system that it came from to determine which geometry corresponds to area light sources and to try adding AreaLightSource properties to those. (Note that in pbrt, area light sources only emit light on the side that the surface normal points; you may need a ReverseOrientation directive to make the light come out in the right direction.)
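A minimal sketch of an emissive object follows; the sphere stands in for whatever light-fixture geometry your scene actually uses, and the radiance value is an arbitrary starting point:

AttributeBegin
    # Uncomment if the light emits from the wrong side:
    # ReverseOrientation
    AreaLightSource "diffuse" "rgb L" [ 10 10 10 ]
    Shape "sphere" "float radius" [ 0.25 ]
AttributeEnd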
Materials
Given good lighting, the next step is to tune the materials (or set them from scratch). It can be helpful to pick a material and set it to an extreme value (such as a "diffuse" material that is pure red) and render the scene; this quickly shows which geometric models have that material associated with them.
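For example, temporarily giving a suspect material an extreme definition like the following makes the affected geometry easy to spot in a rendering:

Material "diffuse" "rgb reflectance" [ 1 0 0 ]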
To find out which material is visible in a given pixel, the --pixelmaterial command-line option can be useful: for each intersection along a ray through the center of the pixel, from front to back, it prints geometric information about the intersection along with a description of its material. For a material specified using Material in the scene description file, the output will be of the form:
Intersection depth 1
World-space p: [ 386.153, 381.153, 560 ]
World-space n: [ 0, 0, 1 ]
World-space ns: [ 0, 0, 1 ]
Distance from camera: 1070.9781
[ DiffuseMaterial displacement: (nullptr) normapMap: (nullptr)
reflectance: [ SpectrumImageTexture filename: ../textures/grid.exr
[...further output elided...]
In this case we can see that the "diffuse" material was used with no bump or normal map, and ../textures/grid.exr as a texture map for the reflectance.
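For reference, such a query might be invoked as follows; the x,y pixel-coordinate syntax is an assumption on our part, so check pbrt --help for the exact format:

$ pbrt --pixelmaterial 600,400 scene.pbrt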
If a material has been defined using pbrt's MakeNamedMaterial directive, the output will be along the lines of:
Intersection depth 1
World-space p: [ 0.95191336, 1.1569455, 0.50672007 ]
World-space n: [ 0.032488506, -0.851931, -0.5226453 ]
World-space ns: [ 0.02753538, -0.8475513, -0.5299988 ]
Distance from camera: 6.3228507
Named material: MeshBlack_phong_SG
As you figure out which material names correspond to what geometry, watch for objects that are missing texture maps and add Texture specifications for them and use them in their materials. (The good news is that such objects generally do have correct texture coordinates with them, so this usually goes smoothly. In some cases it may be necessary to specify a custom UV mapping like "planar", "spherical", or "cylindrical".)
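A sketch of that pattern follows; the texture name and image path are placeholders for whatever your scene actually uses:

Texture "wood-diffuse" "spectrum" "imagemap"
    "string filename" "textures/wood.png"
Material "diffuse" "texture reflectance" "wood-diffuse"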