Syntopia mentioned that he reduces noise significantly with another technique: stratification. That means randomness is applied not as raw chaos, but tamed to have a desirable characteristic: sample points should neither cluster too closely together nor leave gaps that are too large.

Thinking about this some more, I realized that the brute force raytracer does not need to be based on randomness at all. We need randomness only to decouple adjacent pixels from one another, so that we don't get Moiré patterns or other objectionable artifacts.

So what we need is a way to place sample points in an interval that has the following qualities:

- cheap to compute

- sample density is reasonably uniform across the interval (i.e. stratified)

- works for any number of points N (because we keep shortening the sampling interval and then start over)

- the method is incremental, i.e. we can add another point to an existing set of samples, and maintain the other qualities

As it turns out, something like that actually exists. It is related to the Golden Ratio, or more specifically to the Golden Angle. If you look at the stylized flower on the wikipedia page

http://en.wikipedia.org/wiki/Golden_angle

you can probably see how and why this works. New petals are always being added into the largest existing gap. All we need to do is crack the circle open, and roll out its circumference into our [0.0 .. 1.0] interval.

So the algorithm for sampling along the ray becomes something like this:

- prepare a constant for the (normalized) Golden Angle:

const float GoldenAngle = 2.0f - 0.5f*(1.0f + sqrtf(5.0f)); // = 2 - phi, approx. 0.381966 of a full turn

- once per ray, a random seed value from [0.0 .. 1.0] is passed in as a parameter RandSeed to decouple the rays from each other

- then the core loop for one ray looks like this

for (number of samples ...) {
    RandSeed += GoldenAngle;
    if (RandSeed >= 1.0f) {
        RandSeed -= 1.0f;               // wrap back into [0.0 .. 1.0)
    }

    where = RandSeed;                   // current sample location in original parameter interval
    where = (2.0f - where) * where;     // bias sample locations as described above in the thread
    where = closest * where;            // shorten to current ray length

    if (iterate(rayBase + where*rayDirection) == hit) {
        closest = where;                // remember closest hit
    }
}

return closest;

With this method, I could halve the number of samples to 150 per ray, and get visually identical results to the example above with 300 biased random samples.

So, to recap: we started with 500 to 1000 samples for pure random sampling. Biasing sample locations towards the closest hit reduced this to 300 samples. The stratification gained from GoldenAngle sampling then enabled a further reduction to 150 samples per ray. All in all, the number of samples could be reduced to somewhere between a quarter and an eighth of the original count.