I have made what may be a crucial breakthrough in the automatic correction of glitches -- in particular, the "noisy" glitches that have previously proved difficult to detect.
I started from the hypothesis that glitches occur when two nearby delta-orbits (the perturbation calculations for adjacent pixels, say) get too close together relative to their overall magnitude, so that they either "snap together" (solid-color glitches) or, worse, get close enough to lose important bits of precision in the difference between them without quite snapping all the way together (noisy glitches).
I formerly had Nanoscope checking many local iteration maxima in the image for noisy glitches by computing a non-perturbative point and comparing; if the computed point's iteration count differed substantially (usually higher) from the perturbation calculation of the same point using the original reference point, it became the replacement main reference point. But this was unsatisfactory for three reasons. One, it required changing the main reference point, and thus there was no way to cope with two different types of noisy glitch in the same image, should such an eventuality occur. Two, the "solid" glitch correction misbehaved on large solid glitches: each is surrounded by a narrow "fringe" of noisy glitch where the precision loss is slightly less fatal, and that fringe could remain noticeably distorted even after the solid glitch within it was corrected. Finally, small noisy glitches occasionally snuck past Nanoscope entirely. So I have been looking for a way to turn all glitches solid. And I found one.
I applied perturbation theory to the perturbation theory iteration!
The perturbation iteration is:

  Δ_{n+1} = 2 z_n Δ_n + Δ_n² + d

where z_n is the reference orbit, Δ_n is the current pixel's delta-orbit, and d is the pixel's offset from the reference point. Perturbing that -- replacing Δ_n with Δ_n + ε_n for a second, nearby pixel at offset d + e -- yields:

  ε_{n+1} = 2 (z_n + Δ_n) ε_n + ε_n² + e

So,

  ε_{n+1} ≈ 2 (z_n + Δ_n) ε_n,

which gets small in comparison to ε_n when z_n + Δ_n is small.
But that's just the actual (unperturbed) orbit of the current pixel! Whose size we need to check anyway, to detect bailout. It's when this gets very small that glitches can occur. This fits the observation that glitches hit at a) deep minibrots and b) deep "peanuts", embedded Julias, etc. (peanuts of course are just especially sparse embedded Julias) that are associated with deep minibrots. So, this suggests checking for the current orbit point to be sufficiently closer to 0 than the corresponding iterate of the reference orbit.
The breakthrough: checking for |z_n + Δ_n| < 10⁻³ |z_n|. The implementation actually precomputes 10⁻³ |z_n| for all points of the reference orbit and keeps this data in an array, which means we only have to check for the current orbit point magnitude to be less than the value looked up for the current iteration number.
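As a minimal sketch of this check (in Python; the names and structure here are my assumptions, not Nanoscope's actual code), the per-iteration cost is one array lookup and one comparison, with the one-thousandth sensitivity exposed as a parameter:

```python
def precompute_thresholds(reference_orbit, sensitivity=1e-3):
    """One threshold per reference iterate: sensitivity * |z_n|."""
    return [sensitivity * abs(z) for z in reference_orbit]

def perturbed_iterate(reference_orbit, thresholds, d, max_iter,
                      bailout_radius=2.0):
    """Iterate delta_{n+1} = 2*z_n*delta_n + delta_n**2 + d, flagging glitches.

    Returns (iterations, glitched). d is the pixel's offset from the
    reference point. A glitch is flagged as soon as the pixel's actual
    orbit point z_n + delta_n gets too close to 0 relative to |z_n|.
    """
    delta = 0j
    for n in range(min(max_iter, len(reference_orbit))):
        z = reference_orbit[n] + delta        # actual orbit of this pixel
        if abs(z) > bailout_radius:
            return n, False                   # normal bailout
        if abs(z) < thresholds[n]:
            return n, True                    # glitch: bail promptly
        delta = 2 * reference_orbit[n] * delta + delta * delta + d
    return max_iter, False
```

Bailing on the flag (rather than continuing) is what turns a would-be noisy glitch into a flat one, since every affected pixel exits at the same iteration.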
Bailing promptly when this happens turns a noisy glitch into a flat glitch that's somewhat larger, and also expands flat glitches while de-noising their borders. Detecting flat glitches (by simply finding two identical pixels in a row), then finding the blob's bounding box (its extent left, right, up, and down before a different pixel value appears), then applying the "contracting net" algorithm I've previously described to locate the center, sort-of works. It was necessary to "fingerprint" blobs with a hash of the specific reference-point calculation in use when each was encountered, so blobs hit at the same iteration using different reference points were treated as different. It was also necessary to chain this whole system: it might calculate with reference point A, find a blob, look up that blob's fingerprint in a hash and get reference point B stored earlier, recalculate that point with B, and repeat as necessary; if the blob is not in the hash, it creates a reference point at the blob's center, uses it, and adds it to the hash.
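The blob bookkeeping might look roughly like this, assuming the image is a grid of iteration values; `blob_bbox` and `fingerprint` are hypothetical names of my own, and the contracting-net step itself is omitted:

```python
def blob_bbox(image, x, y):
    """Extent of the flat blob containing (x, y): scan left, right, up,
    and down from (x, y) until the pixel value changes.
    Returns (x0, y0, x1, y1), inclusive."""
    v = image[y][x]
    x0 = x
    while x0 > 0 and image[y][x0 - 1] == v:
        x0 -= 1
    x1 = x
    while x1 < len(image[0]) - 1 and image[y][x1 + 1] == v:
        x1 += 1
    y0 = y
    while y0 > 0 and image[y0 - 1][x] == v:
        y0 -= 1
    y1 = y
    while y1 < len(image) - 1 and image[y1 + 1][x] == v:
        y1 += 1
    return x0, y0, x1, y1

def fingerprint(iteration, ref_id):
    """Blobs hit at the same iteration under different reference points
    must be treated as distinct, so the reference point identity is
    part of the hash key."""
    return (iteration, ref_id)
```

The fingerprints then key the hash that chains blob -> stored reference point, as described above.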
That worked, but it generated upwards of 70 reference points for smallish test images, even in non-glitchy areas (the sensitivity of one-thousandth can't be turned down any further without missing real glitches in some of my test cases). And it actually caused a glitch or two in some cases, even in images that had no glitches before.
So I hybridized the two approaches! Since many of the "blobs" caught with this method calculate fine otherwise, I decided to test which ones need what. I iterate with glitch-catching on, and perhaps land in a "proto-blob". If it's already in the hash with a reference point to shrink or fix it, I switch to that reference point and redo the point. If it's already in the hash with a special value ":ignore", I recalculate with the last reference point and glitch-catching turned off. If it's not in the hash yet, I discover the blob's extent, then apply the contracting net to that region with a temporary copy of the hash that adds ":ignore" for this proto-blob, thus zeroing in on either a local iteration maximum or a smaller, *solid* glitch. Then I calculate a reference orbit at this high-iteration point. If it was a solid glitch, I use the new orbit for this proto-blob; otherwise I compare the non-perturbative iteration count with the perturbation value obtained at the same spot using that temporary ":ignore" directive, looking for a discrepancy. If the difference is less than 10 iterations, I make the temporary ":ignore" permanent and discard the new reference orbit; otherwise I treat it as in the solid-glitch case and use the new reference orbit for this proto-blob.
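The decision chain might be organized like this. Everything fractal-specific is hidden behind a `probe_blob` callable, so this sketch (my names, not Nanoscope's) captures only the control flow of the hybrid dispatch:

```python
IGNORE = ":ignore"  # sentinel marking a blob that calculates fine anyway

def resolve_blob(fp, table, current_ref, probe_blob, iter_gap_limit=10):
    """Decide how to recalculate a proto-blob with fingerprint fp.

    probe_blob() stands in for the expensive part: apply the contracting
    net to the blob's extent and compute a reference orbit at the located
    center, returning (new_ref, gap), where gap is the discrepancy between
    the non-perturbative and perturbative iteration counts there, or None
    for a solid glitch (which always needs the new reference).
    """
    if fp in table:
        entry = table[fp]
        if entry == IGNORE:
            # Known-benign blob: redo with glitch-catching off.
            return ("recalc-no-catching", current_ref)
        # Known-glitchy blob: switch to its stored reference and redo.
        return ("recalc", entry)
    new_ref, gap = probe_blob()
    if gap is not None and gap < iter_gap_limit:
        table[fp] = IGNORE          # benign: remember, discard new_ref
        return ("recalc-no-catching", current_ref)
    table[fp] = new_ref             # solid or genuinely noisy: keep new_ref
    return ("recalc", new_ref)
```

On a repeat hit the hash answers immediately, so the contracting-net probe runs at most once per blob.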
This method computes a 1280x960 unantialiased rendering of Dinkydau's "Flake" in 12 minutes ... correctly. It ends up with about 24 secondary reference points, though I suspect only one or two of them are doing most of the work.
I think I'm close to having something open-sourcable. When that point is reached I'll announce it here, so that the authors of Kalles Fractaler and other perturbation engines (I think most of them are watching this thread) can benefit from looking over Nanoscope's implementation of the algorithm sketched above for detecting possible noisy glitches on the fly and testing whether they're really noisy.