Path Tracing

In the last article on rendering, I explained how ray tracing works. As mentioned there, this rendering technique simulates the interaction of light with the objects in a scene using a construct called a shadow ray. These rays originate at the objects' intersection points and are directed at the light sources in the scene; with them we calculate the amount of light that is incident on the surface. This technique is called direct lighting, since it only accounts for light that comes directly from light sources.
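For contrast with what follows, here is a minimal sketch of that direct lighting step, written in the same vector-helper style as the code in this article; the light.position field and the directLight function are assumptions for illustration, not part of this article's scene format:

function directLight(point, normal, light, trace, vector) {
  // Direction from the shaded point toward the light source.
  const toLight = vector(light.position).add(vector(point).scale(-1)).unit();
  // Nudge the origin along the normal to avoid self-intersection.
  const origin = vector(point).add(vector(normal).scale(0.01)).value();
  const shadowRay = { origin, direction: toLight.value() };
  // If any geometry blocks the shadow ray, the point is in shadow.
  const { hit } = trace(shadowRay);
  if (hit) {
    return 0;
  }
  // Lambertian term: incidence is proportional to the cosine of the
  // angle between the surface normal and the light direction.
  return Math.max(0, toLight.dot(normal).value());
}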

Direct lighting is a first approximation of how light interacts with the objects in a scene. Its main problem is that it does not account for indirect illumination, which is a very important part of rendering realistic scenes. Several techniques have been created to include indirect light while rendering; here I'll explain, at a very high level, how one of them works: path tracing.

Path tracing (wiki), in its most basic form, means forgoing direct lighting in favor of a Monte Carlo simulation of the interactions of light rays in a scene, and using it to obtain an approximation of how the scene objects are illuminated. Algorithms of this type are stochastic in nature, so the time they take to converge can grow without bound if not limited correctly.
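As a quick refresher, Monte Carlo simulation approximates a quantity by averaging many random samples; the more samples, the closer the average gets to the true value. The following is a minimal, hypothetical sketch of the idea (estimatePi is not part of the renderer, just an illustration of Monte Carlo averaging):

// Estimate pi by sampling random points in the unit square and
// counting how many fall inside the quarter circle of radius 1.
function estimatePi(samples) {
  let inside = 0;
  for (let i = 0; i < samples; i++) {
    const x = Math.random();
    const y = Math.random();
    if (x * x + y * y <= 1) {
      inside++;
    }
  }
  // The ratio inside/samples approximates pi/4.
  return (4 * inside) / samples;
}

console.log(estimatePi(1000000)); // ~3.14; noisier with fewer samples

Path tracing applies the same principle, except each "sample" is a whole path of light bouncing through the scene.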

Basic Path Tracing Algorithm

The basic path tracing algorithm is composed of a few simple modifications to regular ray tracing (for the full source code, check the previous articles or look at the source of this page); a condensed sketch of the resulting loop follows the list:

  1. The shadow rays are no longer needed, as there is no need to directly check for light incidence.
  2. On every ray-object collision, recursively generate a new ray pointing in a random direction in the upper hemisphere around the intersection normal, keeping track of the recursion depth and stopping when the maximum depth is reached.
  3. The previous code that calculates the amount of incident illumination is now used after every collision, and not only when the shadow ray found no obstructions.
  4. Lights can no longer be points; they are replaced by geometry configured as emissive. Also, when a light is found, no more random rays are generated.
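Put together, these changes boil down to a short recursive loop. This is only a condensed sketch of what the functions in the rest of this article implement, reusing the trace and randomDirection helpers defined below:

function tracePath(ray, depth, maxDepth) {
  // Stop following the path once the recursion limit is reached.
  if (depth === maxDepth) {
    return [0, 0, 0];
  }
  const { hit, sphere, normal, point } = trace(ray);
  if (!hit) {
    // The ray escaped the scene; no light comes back along this path.
    return [0, 0, 0];
  }
  if (sphere.isLight) {
    // Emissive geometry terminates the path and contributes its color.
    return sphere.color;
  }
  // Bounce in a random direction and modulate the gathered light
  // by the surface color.
  const bounce = { origin: point, direction: randomDirection(normal) };
  const light = tracePath(bounce, depth + 1, maxDepth);
  return [
    sphere.color[0] * light[0],
    sphere.color[1] * light[1],
    sphere.color[2] * light[2],
  ];
}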

Given these modifications, the updates to the ray tracing code are as follows:

Add a function that randomly generates vectors on the upper hemisphere around a given normal. These random vectors are used as ray directions while tracing paths.

function createRandomDirectionFunction(args) {
  const { vector } = args;
  const randomDirection = (normal) => {
    while (true) {
      // Pick a random direction by sampling a point in the
      // [-1, 1] cube and normalizing it.
      let p = vector([
        2 * Math.random() - 1,
        2 * Math.random() - 1,
        2 * Math.random() - 1,
      ]).unit();
      // Keep only directions in the upper hemisphere around the
      // normal; otherwise try again.
      if (p.dot(normal).value() >= 0) {
        return p.value();
      }
    }
  };
  return {
    randomDirection,
  };
}
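One thing to note: normalizing points sampled from a cube does not produce a perfectly uniform distribution of directions (directions toward the cube's corners are slightly more likely). A common alternative, sketched below as an assumption rather than part of this article's pipeline, rejects points outside the unit sphere before normalizing:

// Hypothetical variant: rejection-sample inside the unit sphere first,
// which yields uniformly distributed directions after normalization.
function uniformRandomDirection(vector, normal) {
  while (true) {
    const p = vector([
      2 * Math.random() - 1,
      2 * Math.random() - 1,
      2 * Math.random() - 1,
    ]);
    // Reject points outside the unit sphere (squared length > 1).
    if (p.dot(p.value()).value() > 1) {
      continue;
    }
    const d = p.unit();
    // Flip directions that fall below the surface instead of
    // rejecting them, halving the number of retries.
    if (d.dot(normal).value() < 0) {
      return d.scale(-1).value();
    }
    return d.value();
  }
}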

Add an isLight property to the objects and add a light sphere. The light sphere is positioned so that it will not show in the render output, and is big enough to shine lots of light.

The isLight flag is just a hack standing in for a more complete material engine that is not going to be implemented here.

function createScene() {
  return {
    scene: [
      // Blue sphere at the center of the scene.
      {
        center: [0.0, 0.0, 0.0],
        radius: 5.0,
        color: [0, 0, 1.0],
        isLight: false,
      },
      // Huge sphere that acts as the floor.
      {
        center: [0.0, -9000.0 - 5.0, 0.0],
        radius: 9000.0,
        color: [1.0, 1.0, 1.0],
        isLight: false,
      },
      // Emissive sphere; color components above 1.0 make it shine
      // brightly even though it sits outside the camera's view.
      {
        center: [20.0, 20.0, -20.0],
        radius: 10,
        color: [3.0, 3.0, 3.0],
        isLight: true,
      },
    ],
  };
}

Update the code that calculates the incident light on a given point to generate a random ray and use the resulting color as the light source.

The shading function is updated to no longer trace shadow rays to determine light incidence; instead, it traces a new ray from the given point and normal in a random direction, and recursively keeps doing the same, following a path of rays until one of the following happens:

  • The ray path hits a light.
  • The recursion limit is reached, in this case 5 levels.
  • The ray hits nothing, i.e. escapes the scene.

Another change is that instead of returning a scalar for illumination, it now returns a color vector.

function createShadingFunction(args) {
  const { vector, trace, randomDirection } = args;
  const shading = (shadingPoint, pointNormal, depth) => {
    // Stop following the path once the recursion limit is reached.
    if (depth === 5) {
      return [0, 0, 0];
    }
    // Nudge the origin along the normal to avoid the new ray
    // immediately intersecting the surface it starts from.
    const origin = vector(shadingPoint)
      .add(vector(pointNormal).scale(0.01))
      .value();
    const direction = randomDirection(pointNormal);
    const ray = { origin, direction };
    const { hit, sphere, normal, point } = trace(ray);
    if (!hit) {
      // The ray escaped the scene.
      return [0, 0, 0];
    }
    if (sphere.isLight) {
      // Emissive geometry terminates the path.
      return sphere.color;
    }
    // Modulate the light gathered along the rest of the path by the
    // surface color; .value() keeps the return type a plain array,
    // consistent with the other branches.
    return vector(sphere.color).mul(shading(point, normal, depth + 1)).value();
  };

  return {
    shading,
  };
}

Because the shading function no longer returns a scalar, the generateBitmap function needs to handle this new case. You'll also notice that the return value is an array of colors (each itself an array with 3 elements) instead of the Uint32Array used in the ray tracing version.

function* generateBitmap(args) {
  const {
    traceResults,
    imageGeometry: { width },
    shading,
    vector,
  } = args;

  const bitmap = [];
  let count = 0;
  let idx = 0;
  for (const result of traceResults) {
    const { intersection } = result;
    const { hit, sphere, point, normal } = intersection;
    let pixel = [0, 0, 0];
    if (hit) {
      // Lights contribute their emissive color directly; everything
      // else is modulated by the estimated incident light.
      pixel = sphere.color;
      if (!sphere.isLight) {
        const intensity = shading(point, normal, 0);
        pixel = vector(pixel).mul(intensity).value();
      }
    }
    bitmap[idx++] = pixel;
    if (++count === width * 10) {
      count = 0;
      // Yield every 10 rows so the caller can keep the page responsive.
      yield;
    }
  }

  return {
    bitmap,
  };
}

The pipeline is basically the same as in the ray tracing implementation, but with the inclusion of the createAspectRatioFunction and createRandomDirectionFunction steps.

var renderingPipeline = pipeline([
  createAspectRatioFunction,
  createScene,
  createCamera,
  createImageGeometry,
  createVectorFunction,
  createRandomDirectionFunction,
  calculatePrimaryRays,
  createIntersectionFunction,
  createTraceFunction,
  createShadingFunction,
  tracePrimaryRays,
  generateBitmap,
]);

Finally, we execute the pipeline, and to render the results, we take each pixel of the bitmap and convert it to a format compatible with the canvas.

const canvas = document.getElementById("canvas-1");
const ctx = canvas.getContext("2d");
const width = canvas.width;
const height = canvas.height;

renderingPipeline({ width, height }).then((result) => {
    const { bitmap } = result;
    const finalBitmap = new Uint32Array(width * height);
    for (let i = 0; i < bitmap.length; i++) {
        const [r, g, b] = bitmap[i];
        // Clamp each channel to 255 so bright (emissive) pixels do not
        // overflow into neighboring channels, then pack the color as
        // ABGR, the byte order the canvas expects on little-endian
        // machines.
        finalBitmap[i] = (255 << 24)
            | (Math.min(255, Math.floor(b * 255)) << 16)
            | (Math.min(255, Math.floor(g * 255)) << 8)
            | Math.min(255, Math.floor(r * 255));
    }

    const imageData = new ImageData(new Uint8ClampedArray(finalBitmap.buffer), width);
    ctx.putImageData(imageData, 0, 0);
});

As you can see in the canvas, the image looks grainy. This is because path tracing requires several samples per pixel to better approximate the light interactions, but here only one sample per pixel has been calculated. To do better, keep reading.

Bonus

To get much better results, we will update the code to keep rendering one image after another, accumulating the values of each pixel and dividing them by the number of samples taken to get their average.

Accumulating the results will eventually converge to a more accurate representation of the light interactions in the scene; how long this takes varies with the complexity of the scene, the parameters used for the rendering algorithm, and the pseudo-random number generator behind Math.random().
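The averaging itself is just a running mean. The code below keeps a per-pixel sum and divides by the frame count; an equivalent incremental form, shown here only as a hypothetical sketch, updates the average in place without storing the sum:

// Hypothetical incremental-mean update: given the previous average and
// the n-th sample, produce the new average without keeping a sum.
function updateAverage(avg, sample, n) {
  return [
    avg[0] + (sample[0] - avg[0]) / n,
    avg[1] + (sample[1] - avg[1]) / n,
    avg[2] + (sample[2] - avg[2]) / n,
  ];
}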

const canvas = document.getElementById("canvas-2");
const ctx = canvas.getContext("2d");
const width = canvas.width;
const height = canvas.height;

const accBuffer = [];
let framesCount = 0;

function renderFrame() {
    console.time("renderFrame");
    renderingPipeline({ width, height }).then((result) => {
        const { bitmap } = result;
        const finalBitmap = new Uint32Array(width * height);
        framesCount++;
        for (let i = 0; i < bitmap.length; i++) {
            const accPixel = accBuffer[i] || [0, 0, 0];
            const bitmapPixel = bitmap[i];
            // Accumulate this frame's sample into the running sum.
            const acc = [accPixel[0] + bitmapPixel[0], accPixel[1] + bitmapPixel[1], accPixel[2] + bitmapPixel[2]];
            accBuffer[i] = acc;
            let [r, g, b] = acc;
            // Average the accumulated samples over the frames rendered.
            r /= framesCount;
            g /= framesCount;
            b /= framesCount;
            // Clamp and pack as ABGR, as in the single-frame version.
            finalBitmap[i] = (255 << 24)
                | (Math.min(255, Math.floor(b * 255)) << 16)
                | (Math.min(255, Math.floor(g * 255)) << 8)
                | Math.min(255, Math.floor(r * 255));
        }

        const imageData = new ImageData(new Uint8ClampedArray(finalBitmap.buffer), width);
        ctx.putImageData(imageData, 0, 0);
        window.requestAnimationFrame(renderFrame);

        console.timeEnd("renderFrame");
    });
}

window.requestAnimationFrame(renderFrame);

This is what changed: first, we add accBuffer and framesCount to accumulate the pixels across consecutive renders; then, before generating the finalBitmap, the accumulated pixels are divided by framesCount and passed to the canvas for display.

Since we don't want to block the browser, we use window.requestAnimationFrame to wait for an animation frame before rendering each image; there are also other tricks in the code, if you spotted them, but I will talk about those in another post.

Finally, the code editor is no longer capable of correctly measuring the time it takes to render each image; to keep some visibility on it, the console.time instruction is introduced, but the results can only be seen in the console.

Click the Run button and see the result below.

If you leave this running for a while, you will start noticing things like soft shadows and color bleeding (the blue showing on the floor), which are lighting characteristics that we did not explicitly implement, but are inherent to the path tracing algorithm.

Final Thoughts

The path tracing algorithm is a very simple concept but also a very powerful one for approximating the light interactions in a scene; all it required was some small changes to the ray tracing algorithm, and the results are vastly superior.

It is very slow, as you have noticed, and while part of that is because the code presented here is not optimized, the main performance hit comes from the roughly 2.5x increase in the number of rays used per pixel compared with ray tracing, and that is only for a single pass.

Simple, powerful, and a lot of fun.