I've been testing an interesting approach to computing deep zoom movies: rather than brute-calculate every single frame (slow!) or even calculate a regularly-spaced series of large keyframes to interpolate, compute a wallpaper-like vertical strip in a coordinate system like *i* ln(*z* - *P*), where *P* is the target of the zooming. The *x* coordinate in the strip decreases from 2π to zero, while the *y* coordinate increases from ln(min_zoom) to ln(max_zoom). A horizontal row of pixels in the strip corresponds to a circle about the target point, of a radius dependent on the *y* coordinate of the row via *r* = *e*^(-2π*y*/*w*), where *w* is the chunk width. The radii bunch closer together with increasing *y*, tending asymptotically to 0.
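As a rough sketch of that mapping (Python, with illustrative names -- the actual tools' interfaces aren't given here), a strip-space point converts to a complex-plane point like so:

```python
import cmath
import math

def strip_to_plane(x, y, w, P):
    """Map a strip-space point to the complex plane.

    x runs from 2*pi down to 0 across a row (taken directly as the
    angle here), y is the row position measured down the strip in
    pixel units, w is the chunk width in pixels, and P is the zoom
    target.  Illustrative sketch, not the tools' actual code.
    """
    r = math.exp(-2 * math.pi * y / w)   # radius shrinks as y grows
    return P + r * cmath.exp(1j * x)     # point on the circle about P
```

Note that with the chunk proportions described below (height w·ln 10 / 2π), moving down by one full chunk height divides the radius by exactly ten.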

At this point, I have the following components:

- A Mandelbrot-calculation tool rendering chunks of the strip that are roughly ten thousand pixels wide and four thousand high; the exact ratio is that of 2π to ln 10. The result is that a new chunk adds a factor of ten to the zoom range spanned. This tool uses a new type of solid guessing that is compatible with smoothed iterations: rather than look for *solids*, it looks for a pixel (at a given pass) not contrasting too strongly with its neighbors, allowing a difference of up to 2 points in whichever of the red, green, and blue channels differs the most (in an overall 0-255 range). If any channel of the pixel contrasts with any channel of any of its eight neighbors by more than that, the block gets subdivided; otherwise the block is painted by linearly interpolating it with its neighbors. A smooth, shallow gradient thus avoids getting subdivided but still comes out looking accurate.
- A tool that can synthesize an image of any in-range magnification centered on *P* from a directory full of sequentially-numbered chunks. It converts image pixels into chunk-space coordinates, then breaks *y* into a quotient and remainder by the height of one chunk to get a chunk number and an intra-chunk *y* coordinate. This tool also does antialiasing: it can subsample pixels in its output image, transforming them into the chunk coordinates and averaging the colors of the samples found in chunkspace. With a chunk nearly 10,000 pixels wide it can generate sharp, noise-free 1280x720 images -- which can serve as 720p movie frames. The effective AA degree is around 3 in the corners and can be *extremely* high towards the center; the central region always looks very good. Subsample pixels are subjected to random jitter by as much as the width of a subpixel, giving a fairly uniform distribution of samples within each output pixel.
- A tool that can synthesize an Archimedean spiral wrapping the entire strip around into a coil, so as to cram as much area as possible into an image with dimensions and an aspect ratio more reasonable than ten thousand by ten million (or whatever). Pixels are again subsampled. A 10,000-wide strip should allow a spiral image up to several thousand pixels across that's still decently antialiased.
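The frame synthesizer's pixel-to-chunk conversion -- invert the strip mapping, then take a quotient and remainder by the chunk height -- might look something like this (an illustrative Python sketch; function and parameter names are made up):

```python
import cmath
import math

def plane_to_chunk(z, P, w, chunk_h):
    """Map a complex-plane point to (chunk number, x, intra-chunk y).

    Inverse of the strip mapping: w pixels of x span 2*pi of angle,
    and the same scale applies vertically, so ln(r) converts to a
    global y in pixel units.  The quotient/remainder by chunk_h
    picks the sequentially-numbered chunk and the row within it.
    """
    d = z - P
    theta = cmath.phase(d) % (2 * math.pi)        # angle about the target
    r = abs(d)
    x = w * theta / (2 * math.pi)                 # position along the row
    y_global = -w * math.log(r) / (2 * math.pi)   # deeper zoom -> larger y
    chunk, intra_y = divmod(y_global, chunk_h)    # which chunk, and where in it
    return int(chunk), x, intra_y
```

For antialiasing, the same transform would simply be applied to several jittered subsamples per output pixel and the resulting chunkspace colors averaged, as described above.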

These have all been tested up to a shallow but arbitrary-precision depth and seem to work. The chunk generator is, for a low-detail image with large regions lacking high-frequency material, about 4x faster with the super-solid-guessing enabled than with it disabled. The speedup amount will obviously drop, eventually to 1x, for higher-detail areas. The frame synthesizer produces beautiful results as long as the chunk resolution is high enough for the desired output resolution and the zoom is in the range spanned by the chunk data. (Go too deep and a black circle appears in the center of the image where the next needed chunk is missing.) And the spiral generator's results look good so far, though its real test won't come until I have chunk data going a lot deeper.
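The contrast test behind the super-solid-guessing can be sketched as follows (reading the rule channel-wise, which the "whichever channel differs the most" wording suggests; this is an illustrative sketch, not the tool's actual code):

```python
def needs_subdivision(pixel, neighbors, tolerance=2):
    """Decide whether a block must be subdivided or can be interpolated.

    pixel and each neighbor are (r, g, b) tuples in the 0-255 range.
    The block is subdivided if the pixel differs from any of its (up
    to eight) neighbors by more than `tolerance` in whichever channel
    differs the most; otherwise it is smooth enough to paint by
    interpolation without visible error.
    """
    for nb in neighbors:
        if max(abs(a - b) for a, b in zip(pixel, nb)) > tolerance:
            return True   # strong contrast somewhere: compute for real
    return False          # shallow gradient: safe to interpolate
```

A smooth gradient passes this test everywhere even though no two pixels are equal, which is what makes the scheme compatible with smoothed iteration counts.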

The whole setup is amenable to gradual, incremental extension of a zoom sequence, movie, and so on; more chunks can be added at any time to extend the chunk data, and then new, deeper frames that need the new data can be synthesized. The spiral generator's output is a handy way to visualize progress and serves as a "map" of sorts (it can be given the final depth of the zoom sequence and output a spiral that winds in as far as the chunk data exists and then goes black, with the outermost part of the spiral not changing as data is added -- the shallowest magnifications remain in the same place while the deeper material extends the spiral inward).

It's also amenable to experimentation: once the color scheme and zoom target point are finalized, the chunk data need not be recalculated to redo a frame at a different magnification, a spiral with adjusted parameters, a movie with a changed mapping of time to magnification, or any of those at a changed resolution. And it's the chunk data that's expensive to recalculate, of course.

Note also that the frame generator can rotate as well as zoom -- rotation, in fact, just requires replacing *x* with (*x* + offset) mod chunk-width just before sampling from the chunk data, and of course calculating the offset appropriately: chunk-width * *rho* / 360 if *rho* is the desired rotation in degrees. (What, not even a pi in there? Of course there's plenty of pi elsewhere -- in calculating *x* from the pixel's coordinates to begin with, in the chunk generator to convert the *x* coordinate into a complex number for the Mandelbrot calculations, and there are even pi-squareds here and there in the spiral generator. The spiral generator also has the distinction of containing a logarithm of a logarithm -- to keep right angles right-angled requires the curves of constant *y* be perpendicular everywhere to the family of Archimedean spirals of some scale factor, and those curves turn out to be described by *theta* = -ln(*k* ln *r*), where *r* is the radial distance from the center of the spiral and *k* is a scaling factor calculated to fit the right number of windings into the image.)
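The rotation trick really is that small -- a cyclic shift of the angular coordinate. A minimal sketch (illustrative names, not the generator's actual code):

```python
def rotate_x(x, rho_degrees, chunk_w):
    """Rotate a synthesized frame by shifting x in chunk space.

    Because x wraps around the zoom target, rotating the frame by
    rho degrees is just x -> (x + offset) mod chunk_w, where
    offset = chunk_w * rho / 360.
    """
    offset = chunk_w * rho_degrees / 360.0
    return (x + offset) % chunk_w
```

Applied just before sampling from the chunk data, this rotates the whole frame about *P* with no extra resampling cost.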

There may be some images coming soon. (Fans of Mandelbrot Safari, fear not -- most of my CPU power is being thrown at rendering the next, slow image in that sequence, now about 1/3 done, with only a few short interruptions now and again for testing this stuff or adding a chunk to the test input.)