Eccentric Developments


Orbital Camera

Introduction

In the previous entry, I tried to implement an orbital camera for the viewport used to visualize the path tracer output. But if you played around with it for a bit, you must have noticed that it quickly started behaving erratically.

I'll try to fix it in this entry.

The problem

The initial orbital camera implementation works as follows:

  1. On vertical mouse drag, the camera will rotate on the X axis
  2. On horizontal mouse drag, the camera will rotate on the Y axis

Which so far is fine; the problem, though, is that the camera uses flawed reference vectors:

  • The X axis is always (1.0, 0.0, 0.0)
  • The Y axis is always (0.0, 1.0, 0.0)

Since these vectors are fixed in world space, they stop matching the camera's actual orientation as soon as it moves, and the rotations quickly become incorrect.
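
To make the problem concrete, here is a small standalone sketch (plain arrays, not Light's actual types) of what happens after the camera yaws 90 degrees around the world Y axis: the view direction ends up parallel to the world X axis, so rotating around (1.0, 0.0, 0.0) no longer pitches the camera, it only rolls it.

type V3 = [f32; 3];

fn rotate_x(v: V3, o: f32) -> V3 {
    let (s, c) = o.sin_cos();
    [v[0], v[1] * c - v[2] * s, v[1] * s + v[2] * c]
}

fn rotate_y(v: V3, o: f32) -> V3 {
    let (s, c) = o.sin_cos();
    [v[0] * c + v[2] * s, v[1], -v[0] * s + v[2] * c]
}

fn main() {
    let forward: V3 = [0.0, 0.0, -1.0]; // camera looking down -Z
    let up: V3 = [0.0, 1.0, 0.0];

    // Horizontal drag: yaw 90 degrees around the world Y axis.
    let forward = rotate_y(forward, std::f32::consts::FRAC_PI_2); // now roughly (-1, 0, 0)

    // Vertical drag: rotate around the world X axis, expecting a pitch.
    let pitched = rotate_x(forward, 0.5);
    let new_up = rotate_x(up, 0.5);

    // The view direction is parallel to the world X axis, so it barely moves;
    // only the up vector changes, i.e. the camera rolls instead of pitching.
    println!("forward: {:?}", pitched); // still roughly (-1, 0, 0)
    println!("up:      {:?}", new_up);  // (0, cos(0.5), sin(0.5))
}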

The fix

To avoid this issue, I need to update the transformations code to allow using arbitrary vectors as axes for the rotation operations, and then make sure that the camera rotation always uses the correct axes according to the current camera position.

Calculating the correct axes for the camera rotation is easy: all I need to do is use the coordinate system structure that is already computed when the camera is initialized:

impl CoordinateSystem {
    pub fn new(u: &Vector, v: &Vector) -> CoordinateSystem {
        let mut cs = CoordinateSystem::default();
        cs.u = u.unit().into();
        cs.v = v.unit().into();
        cs.w = cs.u.cross(&cs.v);
        cs
    }
}

(source)

The coordinate system is computed using the left-top, right-top and left-bottom points that define the camera.

let edge1 = &self.right_top - &self.left_top;
let edge2 = &self.left_bottom - &self.left_top;
self.coordinate_system = CoordinateSystem::new(&edge1, &edge2);
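
To get a feel for what those axes are, here is a quick standalone example (plain arrays and made-up corner points, not Light's defaults) for a camera whose image plane sits at z = -1 and is viewed from the origin: u points to the right, v points down the image plane, and w ends up along the view direction.

type V3 = [f32; 3];

fn sub(a: V3, b: V3) -> V3 {
    [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
}

fn unit(v: V3) -> V3 {
    let l = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / l, v[1] / l, v[2] / l]
}

fn cross(a: V3, b: V3) -> V3 {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

fn main() {
    // Hypothetical camera corners, not taken from Light.
    let left_top: V3 = [-1.0, 1.0, -1.0];
    let right_top: V3 = [1.0, 1.0, -1.0];
    let left_bottom: V3 = [-1.0, -1.0, -1.0];

    let u = unit(sub(right_top, left_top));   // (1, 0, 0): camera right
    let v = unit(sub(left_bottom, left_top)); // (0, -1, 0): down the image plane
    let w = cross(u, v);                      // (0, 0, -1): along the view direction for this setup

    println!("u = {:?}, v = {:?}, w = {:?}", u, v, w);
}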

Now that this part is clear, the next step is to fix the rotation code.

Rotation

The rotation implementation in Light (code) works by applying rotation matrices with fixed axes: X (1.0, 0.0, 0.0), Y (0.0, 1.0, 0.0) and Z (0.0, 0.0, 1.0). These define three rotation matrices:

    X rotation          Y rotation         Z rotation
|   1    0     0  | |  cos   0   sin | |  cos  -sin   0 |
|   0   cos  -sin | |   0    1    0  | |  sin   cos   0 |
|   0   sin   cos | | -sin   0   cos | |   0     0    1 |

To support arbitrary rotation axes, I use Rodrigues' rotation formula. Given an angle, this formula rotates a vector (v) around another vector (k), as long as this other vector is normalized:

vr = R v
R = I + sin(o) K + (1 - cos(o)) K^2

Where I is the identity matrix and K is the cross-product matrix of k:

|  0   -kz   ky |
|  kz   0   -kx |
| -ky   kx   0  |

The final matrix is as follows (source):

| kx * kx * (1 - cos(o)) + cos(o)       kx * ky * (1 - cos(o)) - kz * sin(o)    kx * kz * (1 - cos(o)) + ky * sin(o)   |
| kx * ky * (1 - cos(o)) + kz * sin(o)  ky * ky * (1 - cos(o)) + cos(o)         ky * kz * (1 - cos(o)) - kx * sin(o)   |
| kx * kz * (1 - cos(o)) - ky * sin(o)  ky * kz * (1 - cos(o)) + kx * sin(o)    kz * kz * (1 - cos(o)) + cos(o)        |
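
As a quick sanity check of the formula (a standalone sketch, separate from the Light code, multiplying column vectors), building this matrix for k = (0, 0, 1) and a 90 degree angle and applying it to (1, 0, 0) should land on (0, 1, 0), i.e. the X axis rotated onto the Y axis around Z:

fn rodrigues(k: [f32; 3], o: f32) -> [[f32; 3]; 3] {
    let (s, c) = o.sin_cos();
    let cr = 1.0 - c;
    let (kx, ky, kz) = (k[0], k[1], k[2]);
    [
        [kx * kx * cr + c, kx * ky * cr - kz * s, kx * kz * cr + ky * s],
        [kx * ky * cr + kz * s, ky * ky * cr + c, ky * kz * cr - kx * s],
        [kx * kz * cr - ky * s, ky * kz * cr + kx * s, kz * kz * cr + c],
    ]
}

fn apply(m: [[f32; 3]; 3], v: [f32; 3]) -> [f32; 3] {
    [
        m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
        m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
        m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2],
    ]
}

fn main() {
    // Rotate the X axis 90 degrees around Z; expected result is roughly (0, 1, 0).
    let r = rodrigues([0.0, 0.0, 1.0], std::f32::consts::FRAC_PI_2);
    println!("{:?}", apply(r, [1.0, 0.0, 0.0]));
}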

Implementation

When the camera is created, the initialization process builds a coordinate system with three axes. These initial axes are used during the first rotation of the camera. Then, after each rotation, the axes are recreated and stored for the next rotation.

The rotation code is this:

let (s, c) = o.sin_cos();
let cr = 1.0 - c;
let R: Matrix = [
    // First row of the matrix above
    v.0 * v.0 * cr + c,
    v.0 * v.1 * cr - v.2 * s,
    v.0 * v.2 * cr + v.1 * s,
    0.0,
    // Second row
    v.0 * v.1 * cr + v.2 * s,
    v.1 * v.1 * cr + c,
    v.1 * v.2 * cr - v.0 * s,
    0.0,
    // Third row
    v.0 * v.2 * cr - v.1 * s,
    v.1 * v.2 * cr + v.0 * s,
    v.2 * v.2 * cr + c,
    0.0,
    // Homogeneous row
    0.0,
    0.0,
    0.0,
    1.0,
];

(source)

In the Light-Wasm implementation, I added a new function, camera_rotate_orbital, that receives the rotation angles for the three axes and applies them to the camera using the current coordinate system.

#[no_mangle]
pub unsafe fn camera_rotate_orbital(x: f32, y: f32, z: f32) {
    if let Some(renderer) = &mut RENDERER {
        let transform = Transform::combine(&[
            Transform::rotate_around_vector(x, renderer.camera.coordinate_system.u),
            Transform::rotate_around_vector(y, renderer.camera.coordinate_system.v),
            Transform::rotate_around_vector(z, renderer.camera.coordinate_system.w),
        ]);
        renderer.camera.apply_transform(&transform);
        // Reset the accumulated frames so the new viewpoint renders from scratch.
        TOTAL_FRAMES = 0.0;
        FRAMES_ACC = Some(vec![Color::default(); LEN]);
    }
}

(source)

After the transform is applied, the camera reinitializes itself and calculates a new coordinate system: source.
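
To see why that refresh matters, here is one last standalone check (same made-up corners as before, plain arrays): after a 90 degree horizontal orbit around the world Y axis, rebuilding the coordinate system from the moved corners gives axes that no longer line up with the world axes, which is exactly what the next drag needs:

type V3 = [f32; 3];

fn sub(a: V3, b: V3) -> V3 {
    [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
}

fn unit(v: V3) -> V3 {
    let l = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / l, v[1] / l, v[2] / l]
}

fn cross(a: V3, b: V3) -> V3 {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

fn main() {
    // The hypothetical corners from the earlier sketch, after rotating them
    // 90 degrees around the world Y axis: (x, y, z) -> (z, y, -x).
    let left_top: V3 = [-1.0, 1.0, 1.0];
    let right_top: V3 = [-1.0, 1.0, -1.0];
    let left_bottom: V3 = [-1.0, -1.0, 1.0];

    // Rebuild the coordinate system from the moved corners.
    let u = unit(sub(right_top, left_top));   // (0, 0, -1)
    let v = unit(sub(left_bottom, left_top)); // (0, -1, 0)
    let w = cross(u, v);                      // (-1, 0, 0)

    // The camera's horizontal axis u is now parallel to -Z, nowhere near world X,
    // so the next vertical drag has to rotate around u, not around (1.0, 0.0, 0.0).
    println!("u = {:?}, v = {:?}, w = {:?}", u, v, w);
}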

The final result is a correctly behaving orbital camera; you can see it in action below:

Conclusion

This new camera behaves much better than the previous implementation; the movement is more natural and lets me explore 3D models more easily.

Now I wonder if I can use this to build a free-roam camera next.


Enrique CR - 2025-06-22