Welcome to Fractal Forums

Fractal Software => Announcements & News => Topic started by: Pauldelbrot on April 15, 2014, 10:59:00 PM




Title: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on April 15, 2014, 10:59:00 PM
I have made what may be a crucial breakthrough in the automatic correction of glitches -- particularly, the "noisy glitches" that have previously proved difficult to detect.

What I did was, I started from the hypothesis that glitches occur when two nearby delta-orbits (perturbation calculations for adjacent pixels, say) are getting too close together compared to their overall magnitude, so that they "snap together" (solid-color glitches) or worse, get close enough to lose important bits of precision of the difference between them without quite snapping all the way together (noisy glitches).

I formerly had Nanoscope check many local iteration maxima in the image for noisy glitches by computing a non-perturbative point and comparing, and using the computed point as a replacement main reference point if its iteration count differed substantially (usually higher) from the perturbation calculation of the same point using the original reference point. But this was unsatisfactory for three reasons. One, it required changing the main reference point, so there was no way to cope with two different types of noisy glitch, should such an eventuality occur. Two, the "solid" glitch correction misbehaved on large solid glitches, because of a narrow "fringe" of noisy glitch where the precision loss is slightly less fatal; this fringe could remain noticeably distorted even with the solid glitch within it corrected. Finally, small noisy glitches occasionally snuck past Nanoscope. So I have been looking for a way to turn all glitches solid. And I found one.

I applied perturbation theory to the perturbation theory iteration!

The perturbation iteration is:

\delta_{n+1} = 2z_n\delta_n + \delta_n^2 + \delta_0

Perturbing that yields:

2z_n(\delta_n + \epsilon_n) + (\delta_n + \epsilon_n)^2 + \delta_0 = 2z_n\delta_n + 2z_n\epsilon_n + \delta_n^2 + 2\delta_n\epsilon_n + \epsilon_n^2 + \delta_0
= \delta_{n+1} + 2z_n\epsilon_n + 2\delta_n\epsilon_n + \epsilon_n^2

So,

\epsilon_{n+1} = 2z_n\epsilon_n + 2\delta_n\epsilon_n + \epsilon_n^2
= \epsilon_n(2(z_n + \delta_n) + \epsilon_n)

which gets small in comparison to \epsilon_n when z_n + \delta_n is small.
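The algebra above can be spot-checked numerically. Here is a minimal sketch (not from Nanoscope; all values are arbitrary test inputs) that advances two delta-orbits sharing the same \delta_0 and confirms their difference follows the derived recurrence:

```python
def delta_step(z, d, d0):
    # One perturbation iteration: d' = 2*z*d + d^2 + d0
    return 2 * z * d + d * d + d0

z  = 0.3 + 0.1j    # arbitrary reference-orbit point
d  = 1e-4 + 2e-5j  # delta-orbit of pixel A
e  = 3e-7 - 1e-7j  # epsilon: pixel B's delta is d + e
d0 = 1e-4 + 5e-5j  # shared initial delta, as in the derivation above

d_next_a = delta_step(z, d, d0)         # advance orbit A
d_next_b = delta_step(z, d + e, d0)     # advance orbit B
e_next_direct  = d_next_b - d_next_a    # difference of the two orbits
e_next_formula = e * (2 * (z + d) + e)  # closed form: eps*(2*(z+d)+eps)
assert abs(e_next_direct - e_next_formula) < 1e-18
```

Since |2(z_n + \delta_n)| < 1 in this example, \epsilon shrinks relative to its previous value, matching the contraction argument above.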

But that's just the actual (unperturbed) orbit of the current pixel, whose size we need to check anyway to detect bailout. It's when this gets very small that glitches can occur. This fits the observation that glitches hit at a) deep minibrots and b) deep "peanuts", embedded Julias, etc. (peanuts, of course, are just especially sparse embedded Julias) that are associated with deep minibrots. So this suggests checking whether the current orbit point is sufficiently closer to 0 than the corresponding iterate of the reference orbit.

The breakthrough: checking for

\frac{|z_n + \delta_n|}{|z_n|} < 10^{-3}

The implementation actually precomputes 10^{-3} |z_n| for all points of the reference orbit and keeps this data in an array, which means we only have to check whether the current orbit point's magnitude is less than the value looked up for the current iteration number.
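A minimal sketch of that precomputation (names are illustrative, not from Nanoscope's source; a real renderer would compute the reference orbit in arbitrary precision and round each iterate to doubles):

```python
def reference_orbit(c, max_iter):
    # Plain double-precision reference orbit z_0, z_1, ... for z -> z^2 + c.
    z, orbit = 0j, []
    for _ in range(max_iter):
        orbit.append(z)
        z = z * z + c
    return orbit

def glitch_tolerances(orbit, ratio=1e-3):
    # Precompute ratio * |z_n| once per reference orbit.
    return [ratio * abs(z) for z in orbit]

ref = reference_orbit(-0.1 + 0.1j, 50)
tol = glitch_tolerances(ref)
# Per-pixel inner loop: flag iteration n as a possible glitch when
# |z_n + delta_n| < tol[n] -- one array lookup and one comparison.
```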

Bailing promptly if this happens turns a noisy glitch into a flat glitch that's somewhat larger, and also expands flat glitches while de-noising their borders. Detecting flat glitches (by simply finding two identical pixels in a row), then finding the blob bounding box (find its extent left, right, up, down before a different pixel value), then applying the "contracting net" algorithm I've previously described to locate the center, sort-of works. It was necessary to "fingerprint" blobs with a hash of the specific reference point calculation used when encountering it, so blobs hit at the same iteration using different reference points were treated as different; and to chain this whole system, so that it might calculate with reference point A, find a blob, look up that blob's fingerprint in a hash and get reference point B stored earlier, recalculate that point with B, and repeat as necessary, and if the blob is not in the hash, create a reference point at its center and both use it and add it to the hash.
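The bounding-box step can be sketched like this: a toy version on an iteration-count grid, scanning "left, right, up, down before a different pixel value" as described above. The grid layout and names are assumptions, not Nanoscope's actual code.

```python
def blob_bbox(iters, x, y):
    # Grow the extent of a flat blob from a seed pixel (x, y) in each of
    # the four directions while the iteration value matches the seed's.
    v = iters[y][x]
    h, w = len(iters), len(iters[0])
    left = right = x
    top = bottom = y
    while left > 0 and iters[y][left - 1] == v:
        left -= 1
    while right < w - 1 and iters[y][right + 1] == v:
        right += 1
    while top > 0 and iters[top - 1][x] == v:
        top -= 1
    while bottom < h - 1 and iters[bottom + 1][x] == v:
        bottom += 1
    return left, top, right, bottom

grid = [[1, 1, 1, 1],
        [1, 7, 7, 1],
        [1, 7, 7, 1],
        [1, 1, 1, 1]]
bbox = blob_bbox(grid, 1, 1)  # bounding box of the blob of 7s
```

The "contracting net" search would then run inside this rectangle to locate the blob's center.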

That worked, but it generated upwards of 70 reference points for smallish test images, even in non-glitchy areas (the sensitivity of one thousandth can't be turned down any further without missing real glitches in some of my test cases). And it actually caused a glitch or two in some cases, even in images that had no glitches before.

So I hybridized the two approaches! Since many of the "blobs" caught with this method calculate fine otherwise, I decided to test which need what. So I iterate with glitch-catching on, and maybe land in a "proto-blob". If it's already in the hash with a reference point to shrink/fix it, switch to that and redo the point. If it's already in the hash with a special value ":ignore", recalculate with that last reference point and glitch-catching turned off. If it's not already in the hash, discover the blob's extent, apply the contracting net to that region with a temporary copy of the hash adding ":ignore" for this proto-blob, and thus zero in on a local iteration maximum or a smaller, *solid* glitch. Then calculate a reference orbit at this high-iteration point and use it for this proto-blob if it's a solid glitch; otherwise compare the non-perturbative iteration count with the perturbation value obtained at the same spot using that temporary ":ignore" directive, looking for a discrepancy. If the difference is less than 10 iterations, make the temporary ":ignore" permanent and discard the new reference orbit; otherwise treat it as in the solid-glitch case and use the new reference orbit for this proto-blob.

This method computes a 1280x960 unantialiased image of Dinkydau's "Flake" image in 12 minutes ... correctly. It ends up with about 24 secondary reference points, though I suspect only one or two of them are doing most of the work.

I think I'm close to having something open-sourceable soon. When that point is reached I'll announce it here so that the authors of Kalles Fraktaler and other perturbation engines (I think most of them are watching this thread) can benefit from looking over Nanoscope's implementation of the algorithm sketched above for detecting possible noisy glitches on the fly and testing whether they are really noisy.


Title: Re: Perturbation Theory Glitches Improvement
Post by: cKleinhuis on April 15, 2014, 11:26:25 PM
@all, i extracted this important posting from the kalles fractaler thread, please continue discussion here!


Title: Re: Perturbation Theory Glitches Improvement
Post by: Sockratease on April 15, 2014, 11:51:08 PM
@all, i extracted this important posting from the kalles fractaler thread, please continue discussion here!

And I made it a sticky thread!

I wish I still had the Math Chops to understand all of that - but if I did I'd probably just make a nuisance of myself writing fractal generators that do silly things to images...


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on April 16, 2014, 05:07:12 AM
Update: I've got a couple of ideas for further improvements, which I might test in the future (though not now). Meanwhile, I'd appreciate anyone posting noisy-glitch locations to serve as additional test cases (anywhere Kalles Fraktaler, Superfractalthing, or Mandel Machine fouls up that's not just a unicolor blob is potentially useful).

To better visualize what it is doing, here is Dinkydau's "Flake" location with the same color gradient, three times.

First, calculated with only the main reference point. Nanoscope produces the same glitch as KF, probably because it uses the same FP calculations under the hood.

(http://i1248.photobucket.com/albums/hh496/Paul_Derbyshire/Miscellaneous/test_dinkydau3_a.png~original)

Next, this is what happens if the "glitch warning system" is engaged, but only the primary reference point is used (no autocorrection):

(http://i1248.photobucket.com/albums/hh496/Paul_Derbyshire/Miscellaneous/test_dinkydau3_magtest1e-3.png~original)

Note that the "noisy" glitches have been replaced by somewhat expanded, uniform blobs of color, with small satellite blobs. It's not just the obvious areas near the center, either; the upper left corner showed whitish stripes in curls and became small solid blobs. It turns out that these were noisy glitches too. Here is the version with auto-correction on:

(http://i1248.photobucket.com/albums/hh496/Paul_Derbyshire/Miscellaneous/test_dinkydau3.png~original)

Note that the corner curls now have orange spirals which were missing before. The center region is most spectacularly corrected, showing normal Mandelbrot spirals.

This is what Nanoscope now produces with zero human intervention if given only the "Flake" center coordinates and magnification. No manual placement of added reference points is necessary. It's completely automated. It reported calculating about three dozen auxiliary reference points for this image. Just three dozen points iterated at 163 decimals of precision, instead of the one-and-one-quarter-million at that precision to render this same image in conventional software. :)


Title: Re: Perturbation Theory Glitches Improvement
Post by: hobold on April 16, 2014, 10:10:44 AM
http://xkcd.com/54/

 :)


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on April 16, 2014, 05:30:06 PM
How many iterations does your program skip with Series Approximation?
It looks very promising, but unfortunately I found that the iterations at which the glitch is detectable can be among those skipped by SA.

An example is if the center of flake is moved slightly
Code:
Re: -1.9999661944503703041843468850635057967553124154072485151176192294480158424234268438137612977886891381228704640656094986435381057574477216648567249609280392009771725847367351850324630769742779025339580147325194
Im: -0.0000000000000000000000000000000003001382436790938324072497303977592498734683119077333527017425728012047497561482358118564729928841407551922418650497818162547805194830526529629935073843651444194932083970534961
Zoom: 2.56203307883E157

KF uses 5 terms and skips 23653 iterations.
The first image is a closeup of the area where the structured glitch occurs; detected glitches are coded in yellow.
The second image is the same area without SA; there the glitch is detected.
The glitch is only partly detected even when using 3 terms and skipping 15769 iterations...


Title: Re: Perturbation Theory Glitches Improvement
Post by: ellarien on April 16, 2014, 05:52:48 PM
This is the weirdest one I've come across, at 2.25E15 on the zoom-out from the attached location. Note the difference in the lower right area (Kalles Fraktaler did sort it out, but only with the 'no approximation' option.) The erroneous version looks almost plausible out of context.



Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on April 17, 2014, 05:33:00 AM
Nanoscope is only skipping 6230 iterations in the Flake image with series approximation. The smallest iteration where errors are occurring seems to be in the seven thousands. Looks like going too far with series approximation can cause problems subtler than previously noted. The odd thing is that the series approximation shouldn't be able to "make the error" when it "hits" those iterations!


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on April 17, 2014, 05:22:19 PM
I have investigated this a little bit more. Oh yes, this is awesome!!!
Even though the glitch detection can fall within the iteration span skipped by Series Approximation, this is, as far as I can see, the bulletproof glitch detection we have all been waiting for for so long.
Thanks a lot Pauldelbrot, you have done a really good job on this!!!
For almost all locations the glitch detection is not within the SA span -- so far I have found none where it is, except Flake.

I don't fully understand your glitch-solving method, but I think you make it a little too complicated.
KF uses a simple flood-fill algorithm to detect one-colored blobs, examining both the iteration count and the smoothing coefficient, then adds a reference in the center of the biggest one and recalculates all pixels with the same iteration count.
With your new glitch detection, all the detected pixels are now set to the same iteration count and smoothing coefficient.
A new reference is put in the center of the largest area, and all those pixels are recalculated.
This is repeated until no more blobs are found.
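A hedged sketch of that flood-fill step (a toy version comparing iteration counts only; a faithful implementation would also compare the smoothing coefficient, and the names are illustrative rather than from KF's source):

```python
from collections import deque

def largest_blob_center(iters, glitch_value):
    # Flood-fill all connected blobs of glitch-marked pixels and return
    # the center of the largest one's bounding box as the candidate
    # location for the next reference point.
    h, w = len(iters), len(iters[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or iters[sy][sx] != glitch_value:
                continue
            blob, q = [], deque([(sx, sy)])
            seen[sy][sx] = True
            while q:
                x, y = q.popleft()
                blob.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                            and iters[ny][nx] == glitch_value:
                        seen[ny][nx] = True
                        q.append((nx, ny))
            if len(blob) > len(best):
                best = blob
    xs = [p[0] for p in best]
    ys = [p[1] for p in best]
    return (min(xs) + max(xs)) // 2, (min(ys) + max(ys)) // 2

glitched = [[0, -1, -1, 0, 0],
            [0, -1, -1, 0, -1],
            [0, 0, 0, 0, 0]]
cx, cy = largest_blob_center(glitched, -1)  # next reference goes here
```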

By doing so, KF with Series Approximation turned off automatically creates a perfect 1280x720 Flake image, including all the tiny spirals in the corners, with 7 additional references in just under 1.5 minutes:
(http://www.chillheimer.de/kallesfraktaler/flake.jpg)


Title: Re: Perturbation Theory Glitches Improvement
Post by: hapf on April 17, 2014, 05:44:05 PM
This method computes a 1280x960 unantialiased image of Dinkydau's "Flake" image in 12 minutes ... correctly. It ends up with about 24 secondary reference points, though I suspect only one or two of them are doing most of the work.
This image needs no more than 3 reference points. Automatic detection of the issue would be progress, though.
I will look into your approach. Would be cool if it's generally working.
A region for testing:
-1.41036459426074570658817618676297211879321324385433824208227598E+00
1.36711010515164632751932900773139846402453359380892643209313503E-01
2.865303424E-53


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on April 17, 2014, 09:10:38 PM
This image needs no more than 3 reference points. Automatic detection of the issue would be progress, though.
I will look into your approach. Would be cool if it's generally working.
A region for testing:
-1.41036459426074570658817618676297211879321324385433824208227598E+00
1.36711010515164632751932900773139846402453359380892643209313503E-01
2.865303424E-53

Thanks hapf.
Yep, your location breaks this method.
If the main reference is calculated in the center of the big Julia, the outer ring of Julia glitches is not detected with this method.
The center is at the slightly changed parameters:
-1.410364594260745706588176186762972118793213243854338242161867741
-0.136711010515164632751932900773139846402453359380892643209313503
1.39601271072E53


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on April 18, 2014, 01:13:24 AM
A more refined method, but with more per-iteration overhead, would be to compute epsilon alongside delta and watch for epsilon/delta to get too small. The first post in this thread already shows how to compute each iteration's epsilon from its previous iterate and delta. This could be combined with series approximation by using series approximation to generate the delta for the current pixel and for an adjacent pixel, then using the difference between those deltas as the epsilon with which the first "real" iteration begins. That method would be slower, but it might detect even more glitches (maybe all of them) reliably, and perhaps with fewer false positives as well.
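A rough sketch of that refinement, under stated assumptions (the collapse threshold, the toy reference orbit, and all names are illustrative, not tested renderer code):

```python
def ref_orbit(c, n):
    # Toy double-precision reference orbit for z -> z^2 + c.
    z, out = 0j, []
    for _ in range(n):
        out.append(z)
        z = z * z + c
    return out

def iterate_with_epsilon(ref, d0, e0, collapse=1e-12, bailout=4.0):
    # Carry epsilon (the difference between this pixel's delta-orbit and a
    # neighbour's) alongside delta; raise an alarm when eps/delta collapses,
    # i.e. when nearby delta-orbits are bunching together.
    d, e = d0, e0
    for n, z in enumerate(ref):
        if abs(z + d) > bailout:
            return n, False                # escaped; no glitch flagged
        if abs(d) > 0 and abs(e) < collapse * abs(d):
            return n, True                 # possible glitch at iteration n
        e = e * (2 * (z + d) + e)          # eps' = eps * (2*(z+d) + eps)
        d = 2 * z * d + d * d + d0         # usual perturbation step
    return len(ref), False

# In a locally contracting region (here: inside the set, near an attracting
# fixed point), epsilon shrinks much faster than delta and trips the alarm.
n, flagged = iterate_with_epsilon(ref_orbit(-0.1 + 0.1j, 30),
                                  1e-3 + 0j, 1e-6 + 0j)
```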


Title: Re: Perturbation Theory Glitches Improvement
Post by: Dinkydau on April 18, 2014, 01:54:46 AM
Great work, Pauldelbrot.


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on April 18, 2014, 01:58:38 AM
Great work, Pauldelbrot.

Thanks! But more testing and work is needed...


Title: Re: Perturbation Theory Glitches Improvement
Post by: hapf on April 18, 2014, 08:01:52 PM
Thanks! But more testing and work is needed...
As far as I can see at the moment, the basic idea (using the absolute value of the 'perturbed' iterate versus the reference iterate) is very good. The idea of a fixed threshold (0.001 etc.) less so. Better results are possible by not aborting early but looking at the statistics when all pixels finish their run.
Corruption happens due to rounding errors. Rounding errors happen when bits are lost by adding numbers not as close together as one would wish them to be. As z_n * delta_n gets bigger, rounding errors get bigger. The delta_n get bigger on average as the reference orbit and the orbit computed via differences go out of sync more or less quickly. One way to judge "out of sync" is to look at the absolute value of the 'perturbed' iterate versus the reference iterate, as suggested.
One could use a threshold as suggested, or find the minimum value and compare it with the minima of all other pixels. One could use the sum and compare, or instead the sum of the delta_n, or the max. The results seem to be comparable. What these methods don't provide is a clear yes or no to the question: is a pixel corrupted? Only when hard clipping occurs does one know for sure.


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on April 18, 2014, 10:30:47 PM
Sorry for a stupid non-mathematical suggestion, but what if |Zn+dn|/|Zn| > 10^3?
Is there any point in testing that too?

I am not near a computer at the moment but I will give it a test when I am...


Title: Re: Perturbation Theory Glitches Improvement
Post by: hapf on April 19, 2014, 09:23:05 AM
Sorry for a stupid non-mathematical suggestion, but what if |Zn+dn|/|Zn| > 10^3?
Is there any point in testing that too?

I am not near a computer at the moment but I will give it a test when I am...
There is a point in testing everything that could make sense. :-) But going for the max does not work nearly as well as going for the min in my tests.


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on April 20, 2014, 06:19:40 PM
Checking |Zn+dn|/|Zn| > 10^3 only made things worse.
Only checking |Zn+dn|/|Zn| < 10^-3 is useful.

For your reference, here is my collection of locations that I have ever had problems with, which I (hrm... often) use as a regression test for new versions of Kalles Fraktaler.
http://www.chillheimer.de/kallesfraktaler/glitches.zip


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on April 27, 2014, 08:17:01 PM
I just want to mention in this thread as well that I strongly believe this is the bulletproof method we have all been searching for since mrflay published the perturbation and series approximation method just over a year ago.

The trick is to test every iteration (after series approximation), since the interval where the condition |Zn+Dn|/|Zn| < 10^-3 holds can be very small.

This method doesn't just detect noisy borders around flat glitches and non-flat glitches with structure in them; it also prevents a program from examining one-colored areas that are valid, and all of this in a reliable way.
This method can even detect one-pixel-sized glitches, and there is no need to compare a render with a full-precision render anymore. :)


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on April 28, 2014, 12:24:35 AM
I just want to mention in this thread as well that I strongly believe this is the bulletproof method we have all been searching for since mrflay published the perturbation and series approximation method just over a year ago.

The trick is to test every iteration (after series approximation), since the interval where the condition |Zn+Dn|/|Zn| < 10^-3 holds can be very small.

In theory, you could check only on iterations that are multiples of certain numbers -- I think the periods of minibrots that are smaller than about 10^-10 wide but large in comparison with their distance from the reference orbit ... where one would need a more precise definition of "large in comparison with" that isn't very intuitive, since the deeper you are, the tinier and farther away the minibrot can be and still "count"; but "the reference orbit is far enough inside the period-doubling zone of the minibrot that it's inside an eight-fold repetition" seems like it might suffice. More simply: iterations where the reference orbit point is within some sufficiently small distance of zero, which could be precomputed into a table when the reference orbit is calculated.

(Nanoscope already does something similar, but more stringent, for a different purpose. It saves a 16-bit-mantissa, unlimited-exponent-width reference orbit point for each iteration where the reference orbit point is close enough to zero that it denormalizes or snaps to zero when stored in doubles. These are kept in a hash table indexed by iteration number and used in an unlimited-exponent-width recalculation of an iteration if ndx or ndy denormalizes (this is also when delta squared gets calculated, when below 10^-308); otherwise, flying past a very, very deep minibrot (below e308) and then far enough to reach iterations that are the next multiple of the minibrot's period produces a precision-blockied image and then, shortly, nothing. Note: ndx and ndy may sometimes denormalize on iterations where the reference point doesn't; then Nanoscope uses the normal stored reference orbit point, in doubles, copying it into temporaries with unlimited exponent width -- that is, it looks in that hash table at the current iteration for a wide-exponent reference point, and failing that creates one from the "regular" one. Then it uses it, and delta with its rescaling descaled, in an exact (delta squared included, unlimited exponent width) next-iteration calculation, before rescaling delta again so that it is O(1) before continuing, or just leaving it descaled if it has gone above e-308 at that point.)

In practice, it's probably easier to just check every iteration.
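Still, the precomputed table idea can be sketched in a few lines. The 1e-2 radius is an illustrative assumption, and the sample c below is the real parameter of a superstable period-3 orbit, whose critical orbit returns to a neighbourhood of zero every third iteration:

```python
def near_zero_iterations(ref_orbit, radius=1e-2):
    # Iterations at which the reference iterate comes close to zero;
    # glitch checks could (in principle) be restricted to these.
    return [n for n, z in enumerate(ref_orbit) if abs(z) < radius]

c = -1.7548776662466927     # real root of c^3 + 2c^2 + c + 1 = 0
z, orbit = 0.0, []
for _ in range(12):
    orbit.append(z)
    z = z * z + c
hits = near_zero_iterations(orbit)  # the orbit returns near 0 with period 3
```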


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on April 28, 2014, 12:34:19 AM
Hahaha yeah I agree with your last statement!  :D lol
Thanks so much for your discovery!


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on April 28, 2014, 01:03:12 AM
Hahaha yeah I agree with your last statement!  :D lol
Thanks so much for your discovery!

yw


Title: Re: Perturbation Theory Glitches Improvement
Post by: hapf on May 05, 2014, 02:17:53 PM
I looked into iterating the epsilons, but that showed no benefit so far. Checking minima of |Zn+Dn|/|Zn| or maxima of |Zn|/|Zn+Dn| (single values or sums) seems to be the most efficient metric so far. While there is definitely correlation between this metric and actual error/corruption, it is far from perfect. So the question remains where to set thresholds and how best to deal with all the pixels that are potentially corrupted. With a given threshold there can be 10000s, even 100000s of blobs (for large images) that need to be dealt with, or not.  :crazyeyes:


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on May 05, 2014, 03:04:01 PM
I looked into iterating the epsilons, but that showed no benefit so far. Checking minima of |Zn+Dn|/|Zn| or maxima of |Zn|/|Zn+Dn| (single values or sums) seems to be the most efficient metric so far. While there is definitely correlation between this metric and actual error/corruption, it is far from perfect. So the question remains where to set thresholds and how best to deal with all the pixels that are potentially corrupted. With a given threshold there can be 10000s, even 100000s of blobs (for large images) that need to be dealt with, or not.  :crazyeyes:

I noticed. Nanoscope maintains a memory of iteration numbers it encountered potential glitches at, as detected by these minima. When it encounters a new one (novel iteration count) it finds the approximate center of the region (using an algorithm previously described) and computes a high precision orbit there, as well as computing a point there while ignoring the "glitch alarm", and compares the iteration counts. If there's a significant discrepancy, it considers the glitch "real" and will use the just-computed orbit as reference for regions where the "glitch alarm" trips on that same iter. If there's no significant discrepancy, it marks that iteration as "ignore" and ignores future "glitch alarms" that trip on that iter. There's a separate such registry of alarm iters and ignore/use-this-ref-point info for each reference point.
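As a toy sketch of that per-reference registry (the 10-iteration cutoff follows the earlier post in this thread; the class and all names are illustrative assumptions, not Nanoscope's actual data structures):

```python
IGNORE = "ignore"

class AlarmRegistry:
    # One registry per reference point: maps an alarm iteration number to
    # either IGNORE (false alarm) or the id of a replacement reference.
    def __init__(self):
        self.by_iteration = {}

    def decide(self, n, perturbed_iters, exact_iters, new_ref_id,
               min_discrepancy=10):
        # First alarm at iteration n: compare the perturbative iteration
        # count with the full-precision count at the blob's center.
        if abs(perturbed_iters - exact_iters) < min_discrepancy:
            self.by_iteration[n] = IGNORE      # no real glitch here
        else:
            self.by_iteration[n] = new_ref_id  # real glitch: reuse this ref
        return self.by_iteration[n]

    def lookup(self, n):
        return self.by_iteration.get(n)

reg = AlarmRegistry()
reg.decide(4321, perturbed_iters=15002, exact_iters=15005,
           new_ref_id="refB")   # small discrepancy -> ignore future alarms
reg.decide(7777, perturbed_iters=15002, exact_iters=19456,
           new_ref_id="refB")   # large discrepancy -> switch reference
```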

The results looked accurate to my eye and fixed all the usual-suspect glitch cases (including noisy glitches) while generating relatively few high precision orbits per image, even for large (30ish megapixel) images. Most of the tiny "glitch alarms" end up being ignored, while the bigger ones sometimes end up getting their own local reference points, including (seemingly) all the ones hiding real actual glitches.


Title: Re: Perturbation Theory Glitches Improvement
Post by: hapf on May 05, 2014, 06:50:07 PM
That's an interesting approach. It is indeed the case that even complex images usually don't need tons of new references, just a few, the right ones. I tried to characterise blobs with all kinds of measures (average iterations, shape properties etc.), but making sure that blobs with the same properties can be fixed with the same (new) reference proved difficult in the general case (though not in the case of a minibrot at the center of the image). So I will look into this new property to see if it helps generally.


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on May 05, 2014, 09:51:14 PM
100000 glitches in the same image? I guess almost all of them are one pixel in size?

I set all pixels identified as glitches to the same iteration value and recalculate all of them for every new reference added, regardless. This glitch detection method is so good that if the new reference doesn't solve a glitch, it is detected again as a glitch and might get solved by the next reference.
I try to add the new reference in the center of the largest blob. But that is often ineffective, especially on glitches caused by a close passage of a minibrot's elephant valley, where the spirals are at the edge of the blob glitches.
But you could just as well place the new references randomly inside any glitch and recalculate them all. Eventually they all get solved -- with far fewer full-precision calculations than if they were calculated one by one.

And for one-pixel-sized glitches, I just cover them by using a nearby pixel's values :)
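That last trick is trivial but worth writing down; a one-line sketch (the neighbour choice here is arbitrary):

```python
def patch_single_pixel(iters, x, y):
    # Cover an isolated one-pixel glitch with a neighbouring pixel's value
    # instead of spending a whole extra reference orbit on it.
    iters[y][x] = iters[y][x - 1] if x > 0 else iters[y][x + 1]

row = [[10, -1, 12]]       # -1 marks the lone glitched pixel
patch_single_pixel(row, 1, 0)
```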


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on May 06, 2014, 09:33:08 AM
100000 glitches in the same image? I guess almost all of them are one pixel in size?

I set all pixels identified as glitches to the same iteration value and recalculate all of them for every new reference added, regardless. This glitch detection method is so good that if the new reference doesn't solve a glitch, it is detected again as a glitch and might get solved by the next reference.
I try to add the new reference in the center of the largest blob. But that is often ineffective, especially on glitches caused by a close passage of a minibrot's elephant valley, where the spirals are at the edge of the blob glitches.

Nanoscope uses an iterative refinement approach here. It determines a blob region (a left-top-right-bottom rectangle around it), computes a reference at its center, and checks for a discrepancy against calculating that center with the previous reference and the "glitch alarm" disabled. If it finds a substantial discrepancy, it uses the hill-climbing net-contracting thingy in that rectangle with the new reference to find a smaller blob or high-iteration point, and if it hits a blob or the glitch alarm goes off, this results in refining the choice of reference point. If it gets refined, the process repeats. In the end it comes back with a reference point that's recursively in sub-sub-sub-peanuts, or in a spiral at the edge of the blob glitch, or wherever it needs to be. There might even end up being a chain of reference points if necessary; the really old only-two-reference-points versions sometimes ended up with annular glitches, with the center corrected but not a ring around the outside, particularly at one of Dinkydau's test locations (not Flake).


Title: Re: Perturbation Theory Glitches Improvement
Post by: hapf on May 06, 2014, 10:48:46 AM
The measure is very good in the sense that it will catch all potentially corrupted pixels as long as one does not set the threshold too low. So applying an unsuitable reference to corrupted pixels, or to pixels suspected of being corrupted, just wastes time but does not add undetectable new corruption, which is very important. There is another measure/test that is pretty much 100% accurate in the sense that if it says corrupt, then the pixel is corrupt. Unfortunately that measure does not identify all the noisy corrupted pixels. It can be used to prioritise blobs, though. The most efficient method is one that uses no more references than needed, applies a suitable reference every time pixels are recalculated, and calculates every pixel at most twice. I'm not sure this is feasible, but it might be. And then there is the issue of how to obtain a new reference most efficiently.


Title: Re: Perturbation Theory Glitches Improvement
Post by: Kalles Fraktaler on May 11, 2014, 03:01:36 PM
First I just want to mention that this glitch detection method also works on the 3rd-degree Mandelbrot. Super!!

Secondly, I have found a location (in the 2nd-degree Mandelbrot) where pixels are detected as glitches and don't get solved even with an additional reference at the very same pixel!

In the latest version of Kalles Fraktaler all glitches larger than 1 pixel are solved and up to 30 references are added; on the last reference no glitch detection is done, in the hope that by then the remaining glitches are so small that they won't be noticed by the viewer.

Also, the location from hapf earlier in this thread has 2-pixel-sized glitches that are always detected. They remain visible while KF adds references, until the last reference is added; then they disappear and look good, because detection was not applied.


Title: Re: Perturbation Theory Glitches Improvement
Post by: hapf on May 12, 2014, 11:24:47 AM
It all depends on how you define "glitch" and "detected". The measure does not detect glitches per se; it just gives you an estimate of the likelihood of a glitch. And what is a glitch or corruption? Any deviation from the true value above some percentage threshold? Any deviation that is actually visible with a specific colour map and colouring algorithm? Depending on the threshold used with the measure, you will get more or fewer corrupted pixels that are false alerts, or bad pixels missed.


Title: Re: Perturbation Theory Glitches Improvement
Post by: Pauldelbrot on May 12, 2014, 06:49:30 PM
First I just want to mention that this glitch detection method also works on the 3rd-degree Mandelbrot. Super!!

Secondly, I have found a location (in the 2nd-degree Mandelbrot) where pixels are detected as glitches and don't get solved even with an additional reference at the very same pixel!

In the latest version of Kalles Fraktaler all glitches larger than 1 pixel are solved and up to 30 references are added; on the last reference no glitch detection is done, in the hope that by then the remaining glitches are so small that they won't be noticed by the viewer.

Also, the location from hapf earlier in this thread has 2-pixel-sized glitches that are always detected. They remain visible while KF adds references, until the last reference is added; then they disappear and look good, because detection was not applied.

This is why Nanoscope checks a high-iteration point inside a possible glitch for whether there's truly a significant discrepancy between the perturbatively calculated value and the full-precision calculated value. If there's none, it sticks with that reference point for that class of "glitches"; but if there is one, it uses the orbit computed at full precision at that high-iteration point as a new reference point for that class of "glitches".


Title: Re: Pertubation Theory Glitches Improvement
Post by: laser blaster on May 13, 2014, 12:04:00 AM
Couldn't you just assume a pixel is glitched if its delta orbit (distance from the reference orbit) gets sufficiently large? 10^14 times the width of an image pixel seems like a good value. I think this method should be able to detect all glitches, not just noisy ones. I actually have some well-thought-out reasons for this suggestion, but I won't bore you with the details if you've already tried this method (and you probably have).

I'm probably just not understanding the complexity of the situation.


Title: Re: Pertubation Theory Glitches Improvement
Post by: Pauldelbrot on May 13, 2014, 04:46:36 AM
Couldn't you just assume a pixel is glitched if its delta orbit (distance from the reference orbit) gets sufficiently large? 10^14 times the width of an image pixel seems like a good value. I think this method should be able to detect all glitches, not just noisy ones. I actually have some well-thought-out reasons for this suggestion, but I won't bore you with the details if you've already tried this method (and you probably have).

I'm probably just not understanding the complexity of the situation.

It will naturally get that large on the way to escaping, for most pixels, whether they glitch or not. Problems occur if too many of the sig figs of delta for adjacent pixels end up the same before the first digits that differ; i.e., if they get much closer to each other than they are to the reference orbit; i.e., if they "bunch together". That requires that they enter an area of dynamics that locally re-contracts amid the general expansion among escaping points, but where this re-contracting area does not include the reference orbit. Such locally-contracting areas are found in mini Julias and related structures, and the test I described at the start of this thread essentially checks a proxy for local contraction like this.


Title: Re: Pertubation Theory Glitches Improvement
Post by: laser blaster on May 13, 2014, 06:12:04 PM
Okay, that makes sense to me. Your method seems pretty foolproof.


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on May 28, 2015, 04:05:42 PM
...

But that's just the actual (unperturbed) orbit of the current pixel! Whose size we need to check anyway, to detect bailout. It's when this gets very small that glitches can occur. This fits the observation that glitches hit at a) deep minibrots and b) deep "peanuts", embedded Julias, etc. (peanuts of course are just especially sparse embedded Julias) that are associated with deep minibrots. So, this suggests checking for the current orbit point to be sufficiently closer to 0 than the corresponding iterate of the reference orbit.

The breakthrough: checking for

<Quoted Image Removed>

...

I've been doing some experiments.

The current orbit gets close to 0 at glitches.  I observed that at the iteration number when this happens, the glitch contains a (possibly symmetrically equivalent) minibrot of that period.  The minibrot's orbit will in fact hit 0 exactly at that iteration (definition of periodic).

So when that happens, find the smallest (deltaZ + refZ) iterate; its (deltaC + refC) value should be close to (one of the symmetrically equivalent) minibrots.  A few iterations of Newton's method refine it to the exact periodic nucleus refCNew.  Compute the difference of refCNew and the original refC at high precision, round it to double, then set each glitched deltaZ to (deltaZ + refZ), and the corresponding deltaC to (deltaC + diffC).  Now you have two sets of pixels to iterate, each with their own reference orbit - no need to restart the iterations (either pixels or references) from scratch; they're already in the right place to continue.

I found that it's important to have the reference orbits at high precision (I've been using twice the number of digits required to resolve individual pixels), because the minibrots are so much smaller, and the deviations from perfectly periodic reference orbits get magnified.

I was also getting ugly "seam" artifacts between different regions, but that was mostly down to a sign error when computing diffC (oops). Need to figure out how to eliminate them completely....


Title: Re: Pertubation Theory Glitches Improvement
Post by: hapf on January 02, 2016, 10:08:40 PM
What is the current state of the art of automatic glitch removal? Let's say how much overhead is needed for a really large image (16K) with (many) thousands of glitches even when using an initial reference that is well chosen? Typically how many additional references? And is it really clean afterwards?


Title: Re: Pertubation Theory Glitches Improvement
Post by: Pauldelbrot on January 03, 2016, 01:22:47 AM
What is the current state of the art of automatic glitch removal? Let's say how much overhead is needed for a really large image (16K) with (many) thousands of glitches even when using an initial reference that is well chosen? Typically how many additional references? And is it really clean afterwards?

So far as I am aware there have been no new developments here in a year or so. In the case of Nanoscope on a >10k image it is likely to use a couple of hundred additional references and produce a flawless output image.


Title: Re: Pertubation Theory Glitches Improvement
Post by: hapf on January 03, 2016, 10:57:35 AM
Thanks. After a long break I have started experimenting again with automatic procedures and a couple of hundred references sounds right for that size.


Title: Re: Pertubation Theory Glitches Improvement
Post by: Pauldelbrot on January 03, 2016, 11:37:30 AM
Thanks. After a long break I have started experimenting again with automatic procedures and a couple of hundred references sounds right for that size.

According to my email notifications, hapf, you posted two replies to this thread, the second within a minute after the first. For some reason I can only seem to view one of them, no matter how much I reload, shift-reload, etc. page 3 of this thread (and as of this writing it's not got a page 4). The reply of yours that I can see is the one I just quoted. Please repost the other one, or pm me a copy of it. Actually copy and paste -- not just a permalink to the article! I doubt the latter will work, due to whatever it is that is hiding the article in question from me, but whatever that is might not work versus a separate copy.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on January 03, 2016, 01:59:39 PM
Typically how many additional references? And is it really clean afterwards?

it should be perfect if you let it run its full course.  i'll set it to bail early on the glitch correction for casual exploration, for saving nice images i just go ahead and let it run down every last pixel  :)


Title: Re: Pertubation Theory Glitches Improvement
Post by: hapf on January 03, 2016, 04:32:54 PM
I only posted one reply.  :confused:


Title: Re: Pertubation Theory Glitches Improvement
Post by: Pauldelbrot on January 04, 2016, 05:58:12 AM
I only posted one reply.  :confused:

No, I definitely got two emails, about 40 seconds apart. I got one, saying hapf posted something to this thread, and loaded the thread in the browser. While I was waiting for the browser to render the page I got a new mail notification, checked my inbox, and found a second email saying hapf posted something to this thread. The only sequence of events that should generate that outcome is:

1. I check this thread, thus reactivating notifications if anything is subsequently added.
2. You post once, triggering a notification email and deactivating notifications.
3. I reload this thread in the browser, thus reactivating notifications.
4. Within seconds, you make another post, just after notifications were reactivated, triggering another email.

It can't happen if you only post once (or if I don't start reloading the page in between your first post and the second).


Title: Re: Pertubation Theory Glitches Improvement
Post by: hapf on January 04, 2016, 09:05:03 AM
Yesterday and today I made 4 posts, counting this one. That's all there is. There simply are no more. I deleted no postings.


Title: Re: Pertubation Theory Glitches Improvement (alternative derivation)
Post by: claude on April 09, 2016, 06:21:00 PM
I didn't quite understand the first post about applying perturbation to perturbation, so I've come up with an alternative derivation of the key result, which is (recap):

\frac{|z+\delta|}{|z|} < 10^{-3} means there is likely to be a glitchy problem

I started by looking at catastrophic cancellation, https://en.wikipedia.org/wiki/Loss_of_significance#Loss_of_significant_bits - this can be rewritten as

N = - \log_2 \left|1 - \frac{y}{x}\right| is the number of bits lost in a subtraction

but we have an addition, z + \delta, so set x = z and y = -\delta.  Then the following simplifies:

N = -\log_2 \left|1 + \frac{\delta}{z} \right|

2^{-N} = \left|1 + \frac{\delta}{z} \right|

2^{-N} = \left|\frac{z + \delta}{z} \right|

2^{-N} = \frac{|z + \delta|}{|z|}

The smaller the right hand side the larger N is, so the correct inequality for checking if there are more than N bits of accuracy lost is:

\frac{|z + \delta|}{|z|} < 2^{-N}

Pauldelbrot's 10^{-3} threshold is pretty close to 2^{-10} meaning more than 10 bits are lost.  For comparison a double has 53 bits of precision.


Title: Re: Pertubation Theory Glitches Improvement
Post by: hapf on April 13, 2016, 10:35:10 AM
Which explains why this test with 1/10000 cannot tell what will really happen. 10 bits lost may be irrelevant in the context, or even 5 bits may be relevant. It all depends on how often what loss occurs and where in the computation: the bigger and the earlier, the worse. I had a case that went bad and never went below 1/500 or so, and many that hit 1/10000 and are fine in the end. The only thing that is very safe and reasonably efficient is to use something like 1/100 and test each region that hits at the same iteration, plus the one that never hits, explicitly for corruption. It requires one full-precision iteration to the end per region, and which pixel to test is crucial, of course. But fortunately all pixels in the same region behave more or less the same.


Title: Re: Pertubation Theory Glitches Improvement
Post by: Pauldelbrot on June 22, 2016, 01:44:07 PM
Nanoscope's current threshold is actually for the squared magnitude to be 1/1000 or less; that corresponds to an unsquared magnitude ratio of approximately 1/32.

I am interested to know what the glitch detection logic is in the other major perturbation renderers, KF and Mandel Machine.


Title: Re: Pertubation Theory Glitches Improvement
Post by: Kalles Fraktaler on June 23, 2016, 09:05:59 AM
Nanoscope's current threshold is actually for the squared magnitude to be 1/1000 or less; that corresponds to an unsquared magnitude ratio of approximately 1/32.

I am interested to know what the glitch detection logic is in the other major perturbation renderers, KF and Mandel Machine.
The code of KF is available on my site; even though it has not been updated for a while, the glitch methods are the same.
KF uses 0.0000001, which corresponds to an unsquared magnitude of 0.0003162...
Further, I use a buffer of integers to represent the iteration value for each pixel, and a buffer of floats to represent the smooth value for each pixel, with values between 0 and 1.
If a glitch is detected, the smooth value is set to 2 for that pixel.
The next reference is then used to re-calculate all pixels marked as glitched with smooth value 2.
By doing so, one could add the new references randomly within these glitch-marked pixels, even though that may be ineffective.
But it is probably faster than trying to use high precision to make any advanced decision on where to put the next reference.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on June 23, 2016, 01:09:39 PM
Quote from: dr paul
Nanoscope's current threshold is actually for the squared magnitude to be 1/1000 or less

well, i had read your original post to say 1e-3, so thats what ive rolled with all this time, 1e-6 for the squared magnitude, and it seems fine.

Quote from: dr kalle
By doing so, one could add the new references randomly in these glitch indicated pixels, even though it may be ineffective.

is that to say this is what you do?  originally when i was trying to figure out what to do, i decided to pick the "deepest" glitched point to use for the next reference, simply the point with the greatest distance to the nearest non-glitched point.  ive been meaning to take a look at this again, what a more perfect approach might be.  i think claude has discussed some more straightforward, more proper approach.  i forget if it was based on which point glitched first, or based on the derivative, or something else.  have to dig through his posts again sometime.  i know he also mentioned applying newton's method to further refine the reference point once youve chosen it.  these things are all well and good when you are calculating the derivative, though it would be nice to also have as proper a method as possible for use with a non-DE mode.


Title: Re: Pertubation Theory Glitches Improvement
Post by: Kalles Fraktaler on June 23, 2016, 07:16:37 PM
is that to say this is what you do?  originally when i was trying to figure out what to do, i decided to pick the "deepest" glitched point to use for the next reference, simply the point with the greatest distance to the nearest non-glitched point.  ive been meaning to take a look at this again, what a more perfect approach might be.  i think claude has discussed some more straightforward, more proper approach.  i forget if it was based on which point glitched first, or based on the derivative, or something else.  have to dig through his posts again sometime.  i know he also mentioned applying newton's method to further refine the reference point once youve chosen it.  these things are all well and good when you are calculating the derivative, though it would be nice to also have as proper a method as possible for use with a non-DE mode.
No, I am trying to use some kind of flood fill function to identify the largest separated glitch area, and then try to add the next reference in the center of it.
But the flood fill does not measure overly large areas, since with large AA that would make it extremely slow.
So not random, but not much more either.
I think Newton's method would be best, but it is not practical when going deep.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on June 23, 2016, 08:28:17 PM
ah, so our methods are basically the same i think.  i am using an OpenCV distance function for it, and i think my opencv is set to use the GPU, so it is pretty fast and does not incur a noticeable slowdown.  i suspect if i ran it on the cpu it might be noticeably slower.  still, it would be nice to eliminate this entirely if you can select as good or better reference points by simply using the iteration and/or derivative data.


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on June 23, 2016, 09:26:51 PM
well, i had read your original post to say 1e-3, so thats what ive rolled with all this time, 1e-6 for the squared magnitude, and it seems fine.

I use this value too, 1e-6 for squared magnitude, 1e-3 for magnitude.

Quote
i think claude has discussed some more straightforward, more proper approach.  i forget if it was based on which point glitched first, or based on the derivative, or something else.  have to dig through his posts again sometime.  i know he also mentioned applying newton's method to further refine the reference point once youve chosen it.  these things are all well and good when you are calculating the derivative, though it would be nice to also have as proper a method as possible for use with a non-DE mode.

In mandelbrot-perturbator[1] I use a tree structure for recursively solving glitches.  At the iteration P when a glitch occurs, I find the pixel with the minimum |Z+deltaZ| value, whose C is near the minibrot of period P whose influence is causing the glitch.  At the minibrot's nucleus the |Z+deltaZ| would be 0, because of the periodicity, so using the derivative calculated anyway for DE, you can do one step of Newton's method very cheaply (no need to do any more iterations) to get a better new reference (said minibrot of period P).  Then rebase all the pixels that glitched (at the same iteration number with the same parent reference) to the new reference, no need to restart from the beginning, because of periodicity:  the new deltaZ is just Z+deltaZ, the new deltaC is a translation by the difference of the old and new reference (be sure to use the correct sign).

[1] https://code.mathr.co.uk/mandelbrot-perturbator - very messy pre-alpha code, not really ready for public consumption yet (in particular the dependencies on my other projects are a bit fiddly to get set up build-wise)


Title: Re: Pertubation Theory Glitches Improvement
Post by: Pauldelbrot on June 24, 2016, 01:46:22 AM
No, I am trying to use some kind of flood fill function to identify the largest separated glitch area, and then try to add the next reference in the center of it.
But the flood fill does not measure overly large areas, since with large AA that would make it extremely slow.
So not random, but not much more either.
I think Newton's method would be best, but it is not practical when going deep.

Since KF seems to work fine with 1e-7 sensitivity instead of the much more picky 1e-3, I'm trialling that value in Nanoscope.

Nanoscope now uses the same method to pick references, instead of the original fairly complex contracting-net scheme. The flood fill recently had to be adapted to use a set instead of a stack to avoid excessive memory consumption on large glitches -- it was blowing up on a glitch that took up about 40% of a 30-megapixel image until I made that change. Pixels would get added to the stack, then added again from a different direction and visited, leaving one instance still on the stack, which wouldn't be revisited for a very long time. Checking that a pixel was visited and not recursing was fast, but the accumulation of dead pixels on the stack got into the tens of millions for that giant glitch. I wasn't sanguine about a queue being much better, but a set discards duplicate elements. A hash set grew only into the tens of thousands, but was very slow because of poor locality of memory accesses to the image bitmap. Changing from a hash set to a tree set sorted on coordinates made it very fast and cheap -- only around 9000 entries at its largest while finding the middle of the same glitch -- because it restored locality of successive memory accesses and further reduced the set's size and complexity (the frontier grew haphazardly and contained weird holes with a hash set, versus returning to the same scanning behavior as the stack with the tree set).

Of course, averaging the coordinates of the glitch member pixels to find the barycenter is only the first step. If the glitch is ring shaped (as one is in one of Dinkydau's test locations) the barycenter won't be glitched! So if the selected center pixel is non-glitched, it walks a straight line from the initial glitched pixel to the barycenter, one pixel width at a time (using FP coordinates instead of integers for this bit), until it hits a nonglitched pixel and then returns the pixel that is halfway along this line segment. So, with a ring it's going to pick a point halfway from the ring's outer rim to its inner rim, for example.


Title: Re: Pertubation Theory Glitches Improvement
Post by: Pauldelbrot on June 24, 2016, 03:42:28 AM
The code of KF is available on my site; even though it has not been updated for a while, the glitch methods are the same.
KF uses 0.0000001

?????

In my testing, this does not catch the glitch in Dinkydau's "Flake" image. OTOH, KF 2.10 renders the "Flake" image correctly. The image is shallow enough that differences in the approaches taken to handling large exponents (over e308) shouldn't enter into it. Is KF using anything unusual, beyond ordinary double-precision FP, when calculating the "Flake" image?


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on June 24, 2016, 03:51:16 AM
?????
could possibly be down to differences in reference point selection?  I don't know..


Title: Re: Pertubation Theory Glitches Improvement
Post by: Kalles Fraktaler on June 27, 2016, 01:18:36 PM
?????

In my testing, this does not catch the glitch in Dinkydau's "Flake" image. OTOH, KF 2.10 renders the "Flake" image correctly. The image is shallow enough that differences in the approaches taken to handling large exponents (over e308) shouldn't enter into it. Is KF using anything unusual, beyond ordinary double-precision FP, when calculating the "Flake" image?
No, there is no magic, and the code (somewhat outdated, but this part has not been changed for a long time) is available on http://www.chillheimer.de/kallesfraktaler/

Code:
for (i = 0; i < nMaxIter && !m_bStop; i++) {
    // imaginary part: 2*xr*xi + ci, computed as (xr+xi)^2 - xr^2 - xi^2 + ci
    xin = (xr + xi).Square() - sr - si + m_iref;
    // real part: xr^2 - xi^2 + cr (sr, si hold the previous squares)
    xrn = sr - si + m_rref;
    xr = xrn;
    xi = xin;
    sr = xr.Square();
    si = xi.Square();
    m_nRDone++;

    m_db_dxr[i] = xr.ToDouble();
    m_db_dxi[i] = xi.ToDouble();
    abs_val = (g_real * m_db_dxr[i] * m_db_dxr[i] + g_imag * m_db_dxi[i] * m_db_dxi[i]);
    m_db_z[i] = abs_val * 0.0000001;   // per-iteration glitch threshold
    if (abs_val >= terminate) {
        if (nMaxIter == m_nMaxIter) {
            nMaxIter = i + 3;
            if (nMaxIter > m_nMaxIter)
                nMaxIter = m_nMaxIter;
            m_nGlitchIter = nMaxIter;
        }
    }
}


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on September 16, 2016, 08:35:42 PM
i forget if this has been discussed before, but going over these parts of my code again got me to thinking, is there a variation on paul's glitch test that would not involve taking the magnitude?  since this requires you to compute squared values, it seems that this should halve the precision range that the test will actually be good for.  ie, 1e-200 * 1e-200 = 1e-400, so here for instance you have underflowed if using double.  so then either the test is no good or you compensate by using a bigger slower type.


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on September 16, 2016, 09:37:51 PM
i forget if this has been discussed before, but going over these parts of my code again got me to thinking, is there a variation on paul's glitch test that would not involve taking the magnitude?  since this requires you to compute squared values, it seems that this should halve the precision range that the test will actually be good for.  ie, 1e-200 * 1e-200 = 1e-400, so here for instance you have underflowed if using double.  so then either the test is no good or you compensate by using a bigger slower type.

Good point, this underflow might cause things to be detected as glitches when they are fine (if using <=) or not detected as glitches at all (if using <).

There are ways of avoiding under/overflow when computing magnitude, but they don't apply if you use magnitude-squared to avoid the square root computation.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on September 16, 2016, 09:49:07 PM
yeah, i think we've all been using magnitude squared to avoid the square root.  the precision issue though seems to invalidate this approach, whereas some kind of shortcut to a non-squared magnitude sounds like just the ticket.

noting also that for those who put a hard boundary on where they increase precision, ie if you jump from double to your next biggest type at a hard boundary of a magnification of 1e308 or whatever, you are likely shielding yourself from the precision issue given the fact that the series approximation will usually be initializing \Delta{z} to values substantially larger than \Delta{c}.  but this does not mean the precision issue doesnt exist.

looking at the perturbation code again, it looks like not only the |z| computations have this issue, but also the \Delta{z} iteration itself.

Quote from: botond kosa
* There is one more thing to check: the previously mentioned bigger downward jumps in \log|\delta_i| are caused by sudden drops in the magnitude of the reference orbit (|Z_m|). So we have to be sure that minValue = |\delta_N| \cdot \min_{m>N} |Z_m| > 10^{-308}

here botond kosa was describing his method of determining when it is safe to scale down the precision of the perturbation iterations.  i think perhaps this sort of forward-looking approach is really what is needed to guarantee a proper outcome?  i think this still leaves the question regarding the computation of the magnitudes for the glitch detection however, or would this cover that too?

thinking more about this, i guess the z values wont usually be too small unless you are zooming in very far on the real or imaginary axis, so maybe the magnitudes could only ever underflow in locations like a deep zoom on the needle.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on October 26, 2016, 01:52:47 AM
In mandelbrot-perturbator I use a tree structure for recursively solving glitches.  At the iteration P when a glitch occurs, I find the pixel with the minimum |Z+deltaZ| value, whose C is near the minibrot of period P whose influence is causing the glitch.  At the minibrot's nucleus the |Z+deltaZ| would be 0, because of the periodicity, so using the derivative calculated anyway for DE, you can do one step of Newton's method very cheaply (no need to do any more iterations) to get a better new reference (said minibrot of period P).  Then rebase all the pixels that glitched (at the same iteration number with the same parent reference) to the new reference, no need to restart from the beginning, because of periodicity:  the new deltaZ is just Z+deltaZ, the new deltaC is a translation by the difference of the old and new reference (be sure to use the correct sign).

i was going to see about implementing this, though im not sure if i totally get it (im not really familiar yet with how the period stuff works and such):

do we know for sure that one iteration of newtons method will always land you in the center of the desired minibrot ?
do we know for sure that the period of that minibrot is P, or could P be some multiple of the actual period or vice versa ?
if P is definitely the period, can we then say that our new reference can simply consist of P iterations, which we could then index as iter%P ?
what is the new Z value to start the new reference ?  or are we starting it at the beginning ?


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on October 26, 2016, 06:27:34 PM
i was going to see about implementing this, though im not sure if i totally get it (im not really familiar yet with how the period stuff works and such):

do we know for sure that one iteration of newtons method will always land you in the center of the desired minibrot ?

No, but it should give a few more bits of accuracy compared to the estimate of "nearest pixel".

Quote
do we know for sure that the period of that minibrot is P, or could P be some multiple of the actual period or vice versa ?

It could be a multiple of the true period I suppose.  It's worth investigating, to see if it occurs or even matters in practice.

Note that if you create a new reference at its true period, it won't be created again (glitch detection test will fail as refZ will be 0 at multiples of the period).

Quote
if P is definitely the period, can we then say that our new reference can simply consist of P iterations, which we could then index as iter%P ?

I suppose so!  Rounding errors might cause problems, though - chances are that you don't use enough bits of precision to stop the secondary reference escaping eventually...

Quote
what is the new Z value to start the new reference ?  or are we starting it at the beginning ?

The new delta-Z value is the old reference-Z+delta-Z value (which is near 0 by the glitch detection test), and iteration continues from where it is.  No starting from scratch.



Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on October 26, 2016, 07:28:11 PM
Quote from: claude
Quote
if P is definitely the period, can we then say that our new reference can simply consist of P iterations, which we could then index as iter%P ?

I suppose so!  Rounding errors might cause problems, though. - chances are that you don't use enough bits of precision to stop the secondary reference escaping eventually..

i see, we would have to do more iterations of newtons method to actually land inside the minibrot.  i was envisioning a scenario where we determine the reference to enough precision that it is actually inside the minibrot, definitively determine its period P, and then only have to do P reference iterations.  it seems this might be particularly useful if we could make this happen for the initial reference, which i assume everyone is still iterating from 0 up to a maxIter or such.  could this be feasible at all ?

i guess you would also have to see if the benefit of doing only P reference iterations outweighed the cost of iterating at a higher precision.


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on October 26, 2016, 09:20:26 PM
could this be feasible at all ?

definitely. but it is probably only worth doing high precision Newton's method for the primary reference which can be shared between multiple frames when exploring or rendering videos...

Quote
i guess you would also have to see if the benefit of doing only P reference iterations outweighed the cost of iterating at a higher precision.

True. Shame that the series approximation stuff isn't periodic...

It remains to know how much extra precision is required.  I guess a safe bound would be 4x the precision at the depth where the reference first becomes viable (findable with boxperiod method), or 2x the precision at the 2-fold embedded Julia set.  I'm more confident about the second of those guesses...  It makes it easier if the size estimate algorithm (*) gives reasonable values for not-that-accurate input, then you could use the size estimate as precision estimate - another thing to investigate...

(*)
https://code.mathr.co.uk/mandelbrot-numerics/blob/HEAD:/c/lib/m_d_size.c
https://code.mathr.co.uk/mandelbrot-numerics/blob/HEAD:/c/lib/m_r_size.c


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on October 26, 2016, 10:07:41 PM
Shame that the series approximation stuff isn't periodic...

i hadnt thought how the series stuff would work..  i guess the coefficients themselves wouldnt need to start over; you could just keep picking up where you left off as you zoom in.  knighty's truncation error stuff i guess you would have to calculate fresh each time, which i guess also means you would need to store each iteration of the coefficients too..  :-\


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on October 26, 2016, 10:40:06 PM
you could just keep picking up where you left off as you zoom in.

yes, in my mandelbrot-perturbator I do just that, works really well for interactive use.

Quote
knighty's truncation error stuff i guess you would have to calculate fresh each time

yes, which is why I haven't ported that to my main renderer yet, still using a (probably broken) "size of terms decreases sufficiently fast" heuristic..


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on October 27, 2016, 12:02:02 AM
Quote from: claude
Quote
what is the new Z value to start the new reference ?  or are we starting it at the beginning ?

The new delta-Z value is the old reference-Z+delta-Z value (which is near 0 by the glitch detection test)

i guess this statement must imply what the new refZ value becomes, but my eyes are glazing over and my brain isnt making the connection.. is it zero ?

i just built your mandelbrot-perturbator, i guess it must be the continuation of mightymandel ?  i wondered why that wasnt updated in a while   :)


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on October 27, 2016, 12:31:05 PM
Quote
is it zero ?

yes! the new reference Z reaches zero at multiples of the period, so you don't need to restart iterations from scratch, just carry on from there

Quote
i just built your mandelbrot-perturbator, i guess it must be the continuation of mightymandel ?  i wondered why that wasn't updated in a while   :)

GPU-using mightymandel is sleeping until I get around to porting some of the ideas from CPU-based mandelbrot-perturbator - it may take some time...


Title: Re: Pertubation Theory Glitches Improvement
Post by: Adam Majewski on October 27, 2016, 05:21:41 PM
Quote
GPU-using mightymandel is sleeping until I get around to porting some of the ideas from CPU-based mandelbrot-perturbator - it may take some time...

What about mixed computations:
* gpu code for double precision
* cpu code for arbitrary precision


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on October 27, 2016, 06:19:34 PM
Quote
What about mixed computations:

yes, that's how mightymandel works.  (but it has many inefficiencies, so the code needs a lot of work, and I don't have enough time for coding...)


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on October 27, 2016, 06:27:41 PM
Quote from: claude
it may take some time

the first and last thing you ever need to know about programming..

Quote from: claude
the new reference Z reaches zero at multiples of the period

does a periodic point's limit cycle (is that the right jargon?) always begin and end with zero ?


Title: Re: Pertubation Theory Glitches Improvement
Post by: claude on October 30, 2016, 09:21:55 PM
Quote
does a periodic point's limit cycle (is that the right jargon?) always begin and end with zero ?

Yes, the nucleus of a hyperbolic component has 0 in its limit cycle, so the cycle is reached immediately; other points in the hyperbolic component have a limit cycle that doesn't contain 0, and that cycle is only reached asymptotically, after an infinite number of iterations from 0 (though Newton's method can accelerate convergence).


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on December 18, 2016, 08:28:55 AM
after being lazy for too long i got around to finishing an implementation of claude's glitch algorithm.  oh it is wonderful and glorious  :music:

i forget if this was discussed on here anywhere recently, but i was also wondering how much people have experimented with what value to use for glitch triggering.  the original value as dictated by paul was 1e-3 (or 1e-6 for squared magnitude), but i get the feeling from his original description that he just kind of arbitrarily chose that value because it made glitch blobs bigger and more uniform for the purposes of visual inspection, or perhaps for his visual-ish algorithm for calculating the centroids of blobs?  in any case, i don't think it is any sort of proper value, and i wonder what something more proper would be?  i guess knighty came up with one of his interval arithmetic things for glitch triggering, but that sure sounds very costly to do per point.

i've been experimenting a bit with using different values for paul's glitch triggering recently.  using more lax values cuts way down on the amount of glitch triggering and the number of reference points used, but gradually starts to introduce some artifacts, depending on location.  it makes me wonder whether even the 1e-3 value (and hence this glitch triggering mechanism itself) necessarily always gives perfect results?


Title: Re: Pertubation Theory Glitches Improvement
Post by: knighty on December 18, 2016, 04:24:23 PM
I implemented my version of glitch detection at the time but haven't tested it thoroughly, just because it doesn't give much better results than Pauldelbrot's.  It is not that expensive though.
(modify line 188 to switch between the two formulas)


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on December 20, 2016, 06:07:43 AM
ok knighty i will have to implement it sometime just for kicks.

a few initial observations from implementing claude's algorithm:  

this spawns a ton of reference points!  at least several times as many as if you simply plod along picking the best new reference point, redoing all remaining points, and repeating until done.  but since they continue in place they are nice and fast.

doing the newton step versus not doing the newton step seems to give the same results, with the same number of reference points used etc.  which is good if you care to maintain a non-DE code path.  as claude said though, you may as well do the newton step if you have the derivative.  though i wonder if doing the newton step could ever give a worse reference instead of a better one?  i'm not really sure how this stuff is supposed to work, but i tried using several newton steps leading up to when a glitch happens, and i get botched renders.  just out of curiosity i also tried using a single z/zp from a few iterations prior to the glitch iteration, and that also screws up the render.  for some reason it seems to only work (or at least not screw it up) to use the glitch iteration.  does z/zp approach zero when a glitch occurs?  maybe in that case it is just not screwing it up, as opposed to actually contributing anything.

i'm also not certain my implementation of claude's overall algorithm is perfect yet either, in fact it seems perhaps not: it works great most of the time, but when i test it on large renders where i intentionally use a crap initial reference point, it often results in screwed up renders.  hopefully this is just revealing an error on my part, and not a limitation of the algorithm..

edit:  the botched large render with crap initial reference is fixed if i use 32 terms instead of 64, so probably just the limitations of SA again.  though i wonder if using a crap initial reference helps to make the SA stuff crappier and more prone to failure...

also i cleaned up my implementation of claude's thing and it should be good now.


Title: Re: Pertubation Theory Glitches Improvement
Post by: hapf on December 20, 2016, 07:31:34 PM
Quote
i forget if this was discussed on here anywhere recently, but i was also wondering how much people have experimented with what value to use for glitch triggering.  the original value as dictated by paul was 1e-3 (or 1e-6 for squared magnitude), but i get the feeling from his original description that he just kind of arbitrarily chose that value because it made glitch blobs bigger and more uniform for the purposes of visual inspection, or perhaps for his visual-ish algorithm for calculating the centroids of blobs?  in any case, i don't think it is any sort of proper value, and i wonder what something more proper would be?
If that value is passed, it does not mean that the pixel will go corrupt.  And if it is not passed, it does not mean the pixel is fine.  What one can say is that the more times this or higher values are passed, the more likely the pixel eventually goes bad.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on December 20, 2016, 07:36:45 PM
making knighty's interval arithmetic thing the first truly proper test, assuming he devised it correctly.

though his interval formulas still include the mysterious precision parameter, the value of which varies depending on location and probably on a multitude of other things.  i have yet to see anyone propose a way to predict what this value should be.  as proper as the interval formulas may be, this keeps them in a similar grey area as paul's approach: depending on a magic number which cannot be predicted..


Title: Re: Pertubation Theory Glitches Improvement
Post by: knighty on December 20, 2016, 08:54:34 PM
Hi,
The formula for glitch detection (http://www.fractalforums.com/announcements-and-news/*continued*-superfractalthing-arbitrary-precision-mandelbrot-set-rendering-in-ja/msg91505/#msg91505), unlike the one for series approximation error estimation, doesn't use interval arithmetic, only the derivative.  The mysterious precision parameter is not that mysterious :).  It depends only on the size of the pixels in the rendered window: it should be a fraction of that pixel size/radius.  I have left the value of the fraction (relatively) undefined in order to let the user choose the level of accuracy he/she wants.  For example, without antialiasing one can take fraction=1, but when using antialiasing (say N x N) the fraction should be < 1/N.

That said, it doesn't take into account the (possible) accumulation of rounding errors.  Therefore it is safer to use a small value for the fraction; 1e-3 for example seems a reasonable choice.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on December 20, 2016, 09:24:28 PM
i guess by "precision parameter" i meant your "pmax," which is found in both your glitch formula and your SA error formula.  it seems to be a mysterious value which can usually be [quite a bit] less than the full mantissa precision, but how to predict the perfect value at any given time seems to be a mystery?

also, does your glitch formula need to be checked on every iteration like paul's glitch formula, or could it be done less frequently?


Title: Re: Pertubation Theory Glitches Improvement
Post by: knighty on December 21, 2016, 02:10:54 PM
Well, IMHO it is not pmax that is mysterious.  The mysterious part comes from the unknown effects of rounding error accumulation (which are not taken into account).  Maybe the way I wrote the code is a little bit misleading?  At some point the glitch inequality looks like this:

2^{-pmax} * LHS < RHS * fraction

if we set: fraction = 2^{-k}

we can rewrite the inequality this way:

2^{-(pmax-k)} * LHS < RHS

in the code I've posted, I set k to too big a value (maybe in order to make the blobs have the same size as with Pauldelbrot's formula  :evil1: ).

I did some experiments with k=10 which, so far, give good results.

For both formulas, I guess it is not, strictly speaking, necessary to do the check at every iteration, if one can predict when the iterated point comes close to the "cancellation area".  This is because the (catastrophic) cancellation happens only at certain iterations, if at all.
I believe it is even possible to predict where the cancellations occur while computing the reference point orbit and/or series approximation, by using... err... interval arithmetic  ;D and root finding.  But this is another story.


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on December 21, 2016, 02:57:27 PM
Quote from: knighty
I believe it is even possible to predict where the cancellations occur while computing reference point orbit and/or series approximation and so, by using... err... interval arithmetic   and root finding. But this is another story.

that sounds like an interesting story indeed  :D


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on January 02, 2017, 09:20:26 PM
ok i was going to try your glitch formula, though i am a bit confused about \delta'.  you gave the formula:

\delta'_{n+1} = 2\delta'_n(z_n + \delta_n) + 1

\delta is initialized from the SA, but i am not sure how \delta' is supposed to be initialized?

also, conceptually i'm not sure how this \delta' relates to glitch detection.  it seems to imply that glitching can happen under more circumstances than simple precision loss?  i was also trying to remember how your SA truncation error formulas progressed.  didn't you start out using SA' and then decide that something else should be used instead?  are you sure that \delta' is the right thing to use here?

edit:  is \delta' the same as the SA stuff claude came up with for initializing z' for doing DE?


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on January 06, 2017, 07:32:05 AM
ok, assuming \delta' is what i think it is, i implemented your glitch detection.  it seems like maybe it will work, though currently i am not sure what \delta' becomes when switching to a new secondary reference under claude's glitch correction algorithm.

also, i recall claude suggesting that calculating z' directly made more sense than using the perturbation formula, so that is how i implemented it.  since this glitch detection requires \delta' anyway, it would be nice to be able to use it to calculate the value of the derivative at the end of iteration, unless that would be problematic for some reason.  if it were too problematic to use, that would make this glitch detection formula that much more costly..


Title: Re: Pertubation Theory Glitches Improvement
Post by: knighty on January 06, 2017, 09:27:22 PM
Hi,

 delta' is exactly the same as z'. :)


Title: Re: Pertubation Theory Glitches Improvement
Post by: quaz0r on January 07, 2017, 07:37:03 AM
so are you saying that by |\delta'_{n+1}| you in fact meant |z'_{n+1}| ?  and if that is the case, did you also mean |\delta_{n+1}| to be |z_{n+1}| ?  though here by z i mean the current point, not the reference z.  i wish we had clearer terminology for this.  i think whenever you guys write z you tend to mean the reference z, but then how are we supposed to refer to non-reference z's ?  z_{point-we-are-currently-calculating-not-the-reference} ?  it is further confused if you talk about non-perturbation formulas at the same time as perturbation formulas.   :angry:

looking at claude's old perturbation document, it appears that he gives this formula for \delta'

\delta'_{n+1} = 2(z'_n\delta_n + z_n\delta'_n + \delta_n\delta'_n)

whereas the formula you gave for \delta' is in fact the standard non-perturbation formula for z'_{as-in-the-derivative-of-the-current-point-we-are-calculating-not-the-derivative-of-the-reference-point}, though alongside it you gave the actual perturbation formula for \delta.  it seems perhaps something got confused here ?

also, i noticed in the comments in the code you attached you say your glitch detection doesn't always work right.  shouldn't we expect it to work if you got the formulas right ?  so maybe they are not right..

i'll keep playing around with it but so far nothing i try seems to work..

actually, one last thought about all of this:  when previously playing around with claude's glitch correction algorithm, i noticed it blows up spectacularly if any value more lax than the standard 10^{-3} is used with paul's glitch detection.  it seems claude's thing has requirements/conditions/pitfalls/whatever that are yet unknown or yet to be fully explained, and perhaps won't play nice with your glitch detection, or will need pmax tweaked just so..