A slightly early Christmas present to y'all: a (slightly) deep M-set zoom I ran into on the way to deeper things. Because it only barely requires bignum arithmetic and the iterations are so low, the image could be rendered in only half an hour or so on my equipment. Consider it a bonus of sorts.

Without further ado...

<Quoted Image Removed>A straightforward one-layer image colored by smoothed iterations, of a microbrot on the spike in the big minibrot's Elephant Valley. To find it, zoom into a "cumulus" mini-julia on the spike, then into the spike between the left pair of big spirals, then into another mini-julia, where a microbrot similar to this one appears.

This one was right on the border between long double and bignum arithmetic, to the point that the 640x480 preview didn't use bignums but the final 2048x1536 render did.
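For anyone curious where that border falls: an IEEE double carries roughly 16 significant decimal digits and an x86 long double roughly 19, so at a ~21-decimal zoom a pixel-sized step can vanish entirely when added to an O(1) center coordinate. A minimal sketch of the effect, using Python floats (which are IEEE doubles) against the stdlib decimal module as a stand-in for bignum arithmetic:

```python
from decimal import Decimal, getcontext

# A pixel-scale step of ~1e-21 next to an O(1) center coordinate:
center, step = 0.25, 1e-21

# Python floats are IEEE doubles (~16 digits): the step is lost.
print(center + step == center)   # True

# With ~30 digits of working precision, the step survives.
getcontext().prec = 30
c, s = Decimal("0.25"), Decimal("1e-21")
print(c + s == c)                # False
```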

Freely redistributable and usable subject to the Creative Commons Attribution license, version 3.0.

Detailed statistics:

Name: All Roads Lead To Rome

Date: December 23, 2009

Fractal: Mandelbrot

Location: Elephant Valley of big spike minibrot

Depth: Moderately Deep (21 decimals)

Min Iterations: 269

Max Iterations: 17,257

Layers: 1

Anti-aliasing: 3x3, threshold 0.10, depth 1

Preparation time: 10 minutes

Calculation time: 35 minutes (2.5GHz dual-core E5200)

So it takes half an hour to render this as a 3x3 super-sampled 2048x1536 image (6144x4608 samples)? And that is using "bignum" arithmetic? How many bits of precision are in "bignum"? Is it arbitrary?

Where did you get the math library for your bignum calculations? My app uses long doubles to calculate pixel coordinates and plain doubles for iterating; I haven't resorted to long doubles in the inner loop because I want it to be fast. I've debated recompiling with long doubles, or even finding a higher precision than that. I think I would add a preference to switch between types so I could still offer fast calculations.
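For readers who want to experiment before committing to a library (GMP and MPFR are common real-world choices for this): the inner loop looks the same at any precision, only the number type changes. A sketch of an arbitrary-precision escape-time iteration using Python's stdlib decimal module as a stand-in for a bignum package:

```python
from decimal import Decimal, getcontext

def mandelbrot_escape(cr, ci, max_iter, digits=30):
    """Escape-time iteration z -> z^2 + c at a chosen decimal
    precision; a stand-in for a GMP/MPFR-style bignum inner loop."""
    getcontext().prec = digits
    zr = zi = Decimal(0)
    four = Decimal(4)
    for n in range(max_iter):
        zr2, zi2 = zr * zr, zi * zi
        if zr2 + zi2 > four:          # |z|^2 > 4: the orbit escapes
            return n
        zr, zi = zr2 - zi2 + cr, 2 * zr * zi + ci
    return max_iter                    # never escaped: treated as inside

# c = 0 never escapes; c = 2 escapes almost immediately.
print(mandelbrot_escape(Decimal(0), Decimal(0), 50))  # 50
print(mandelbrot_escape(Decimal(2), Decimal(0), 50))  # 2
```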

For comparison, I rendered a plot that looks a whole lot like yours, at lower zoom. I don't support super-sampling, so I rendered it at 6144x4608 pixels, and it took about 4 minutes on a 2.4 GHz Core 2 Duo (2-core) Mac. It's definitely worth using higher precision only when you have to, since native double precision on current hardware is VERY fast.
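A rough rule of thumb for when you "have to": the working precision must resolve the spacing between adjacent pixels, plus a few guard digits for round-off accumulated over the iterations. A hypothetical helper (the function name and the guard value of 4 are my own illustration, not from either renderer):

```python
import math

def digits_needed(view_width, pixels_across, guard=4):
    """Decimal digits required so adjacent pixel coordinates stay
    distinct, plus `guard` extra digits for iteration round-off.
    (Both the name and the guard value are illustrative choices.)"""
    pixel_spacing = view_width / pixels_across
    return math.ceil(-math.log10(pixel_spacing)) + guard

# A full-set view is comfortable in native doubles (~16 digits)...
print(digits_needed(4.0, 2048))    # 7
# ...but a ~1e-21-wide view calls for bignum precision.
print(digits_needed(1e-21, 2048))  # 29
```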

I used fractional iteration counts to get smooth coloring. If I had turned that off, it would have been a lot faster, because I could have used boundary following; that would let me skip calculating quite a few of the pixels entirely.

Duncan C