Pauldelbrot
« Reply #30 on: May 12, 2014, 06:49:30 PM »
First I just want to mention that this glitch detecting method also works on 3rd degree Mandelbrot. Super!!
Secondly, I have found a location (in the 2nd degree Mandelbrot) where pixels are detected as glitches and don't get solved even with an additional reference in the very same pixel!
In the latest version of Kalles Fraktaler all glitches larger than 1 pixel are solved and up to 30 references are added; on the last reference no glitch detection is made, in the hope that by then all remaining glitches are so small that they won't be noticed by the viewer.
Also, the location from hapf earlier in this thread has 2-pixel-sized glitches that are always detected as glitches. They remain visible while KF adds references; once the last reference is added they disappear and look good, because detection was not applied.
This is why Nanoscope checks a high iter point inside a possible-glitch for whether there's truly a significant discrepancy between the perturbatively-calculated value and the full precision calculated value. If there's none it sticks with that reference point for that class of "glitches", but if there is one, it uses the orbit computed at full precision at that high iter point as a new reference point for that class of "glitches".
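A rough sketch of the verification step described above (not Nanoscope's actual code; the precision, tolerance, and all names here are illustrative assumptions): compute one high-iteration probe point inside a suspected glitch both perturbatively in doubles and directly at high precision, and only treat the class as genuinely glitched if the two disagree.

```python
# Illustrative sketch: verify a suspected glitch by comparing the
# double-precision perturbative value against a high-precision direct
# recomputation at a single probe point.
from decimal import Decimal, getcontext

getcontext().prec = 50  # stand-in for "full precision"

def hp_orbit(cr, ci, n):
    """Iterate z -> z^2 + c for n steps at high precision (c, z as Decimal pairs)."""
    zr = zi = Decimal(0)
    for _ in range(n):
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
    return zr, zi

def perturbed_orbit(ref_orbit, dc, n):
    """Double-precision perturbation: dz_{k+1} = 2*Z_k*dz_k + dz_k^2 + dc."""
    dz = 0j
    for k in range(n):
        dz = 2 * ref_orbit[k] * dz + dz * dz + dc
    return ref_orbit[n] + dz

def is_real_glitch(ref_c, ref_orbit, probe_dc, n, tol=1e-6):
    """True if the perturbative value disagrees significantly with full precision."""
    approx = perturbed_orbit(ref_orbit, probe_dc, n)
    cr = Decimal(repr(ref_c.real)) + Decimal(repr(probe_dc.real))
    ci = Decimal(repr(ref_c.imag)) + Decimal(repr(probe_dc.imag))
    zr, zi = hp_orbit(cr, ci, n)
    err = abs(approx - complex(float(zr), float(zi)))
    return err > tol * max(abs(approx), 1e-300)
```

If `is_real_glitch` returns False, the current reference is kept for that class of "glitches"; if True, the high-precision orbit just computed at the probe point can serve as the new reference.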
laser blaster
Iterator
Posts: 178
« Reply #31 on: May 13, 2014, 12:04:00 AM »
Couldn't you just assume a pixel is glitched if its delta orbit (distance from the reference orbit) gets sufficiently large? 10^14 times the width of an image pixel seems like a good value. I think this method should be able to detect all glitches, not just noisy ones. I actually have some well-thought-out reasons for this suggestion, but I won't bore you with the details if you've already tried this method (and you probably have).
I'm probably just not understanding the complexity of the situation.
Pauldelbrot
« Reply #32 on: May 13, 2014, 04:46:36 AM »
Couldn't you just assume a pixel is glitched if its delta orbit (distance from the reference orbit) gets sufficiently large? 10^14 times the width of an image pixel seems like a good value. I think this method should be able to detect all glitches, not just noisy ones. I actually have some well-thought-out reasons for this suggestion, but I won't bore you with the details if you've already tried this method (and you probably have).
I'm probably just not understanding the complexity of the situation.
It will naturally get that large on the way to escaping, for most pixels, whether they glitch or not. Problems occur if too many of the significant figures of delta for adjacent pixels end up the same before the first digits that differ; i.e., if the pixels get much closer to each other than they are to the reference orbit; i.e., if they "bunch together". That requires that they enter an area of dynamics that locally re-contracts amid the general expansion among escaping points, but where this re-contracting area does not include the reference orbit. Such locally-contracting areas are found in mini Julias and related structures, and the test I described at the start of this thread essentially checks a proxy for local contraction like this.
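A small numerical illustration of the first sentence above (the reference point and offset are arbitrary demo choices, not from the thread): for a perfectly ordinary pixel, |delta| blows past 10^14 times the pixel offset long before the orbit escapes, so the size of delta alone cannot flag glitches.

```python
# Demo: track the perturbation delta dz for a pixel offset dc from a
# reference orbit Z, and report when |dz| first exceeds threshold*|dc|,
# together with the orbit magnitude |Z + dz| at that moment.
def delta_growth(ref_c=1j, dc=1e-20 + 0j, threshold=1e14, max_iter=200):
    Z, dz = 0j, 0j
    for k in range(1, max_iter + 1):
        dz = 2 * Z * dz + dz * dz + dc   # perturbation iteration (uses old Z)
        Z = Z * Z + ref_c                # reference orbit iteration
        if abs(dz) > threshold * abs(dc):
            return k, abs(Z + dz)
    return None, abs(Z + dz)
```

With the bounded (pre-periodic) reference c = i, the delta crosses 10^14 times the pixel offset after only a few dozen iterations, while the pixel's orbit magnitude |Z + dz| is still far below the escape radius — no glitch anywhere in sight.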
laser blaster
Iterator
Posts: 178
« Reply #33 on: May 13, 2014, 06:12:04 PM »
Okay, that makes sense to me. Your method seems pretty foolproof.
claude
Fractal Bachius
Posts: 563
« Reply #34 on: May 28, 2015, 04:05:42 PM »
...
But that's just the actual (unperturbed) orbit of the current pixel - whose size we need to check anyway, to detect bailout. It's when this gets very small that glitches can occur. This fits the observation that glitches hit at a) deep minibrots and b) deep "peanuts", embedded Julias, etc. (peanuts of course are just especially sparse embedded Julias) that are associated with deep minibrots. So this suggests checking whether the current orbit point gets sufficiently closer to 0 than the corresponding iterate of the reference orbit.
The breakthrough: checking for
<Quoted Image Removed>
...
I've been doing some experiments. The current orbit gets close to 0 at glitches. I observed that at the iteration number when this happens, the glitch contains a (possibly symmetrically equivalent) minibrot of that period. The minibrot's orbit will in fact hit 0 exactly at that iteration (by definition of periodicity). So when that happens, find the smallest (deltaZ + refZ) iterate; its (deltaC + refC) value should be close to (one of the symmetrically equivalent) minibrots. A few iterations of Newton's method refine it to the exact periodic nucleus refCNew. Compute the difference of refCNew and the original refC at high precision, round it to double, then set each glitched deltaZ to (deltaZ + refZ), and the corresponding deltaC to (deltaC + diffC). Now you have two sets of pixels to iterate, each with its own reference orbit - no need to restart the iterations (either pixels or references) from scratch; they're already in the right place to continue. I found that it's important to have the reference orbits at high precision (I've been using twice the number of digits required to resolve individual pixels), because the minibrots are so much smaller and the deviations from perfectly periodic reference orbits get magnified. I was also getting ugly "seam" artifacts between different regions, but that was mostly down to a sign error when computing diffC (oops). Need to figure out how to eliminate them completely....
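The Newton's-method refinement step mentioned above can be sketched like this (illustrative only, at plain double precision rather than the high precision the post says is needed): to land on the nucleus of a period-p minibrot, solve z_p(c) = 0, where z_0 = 0 and z_{k+1} = z_k^2 + c, using the standard derivative recurrence d_{k+1} = 2*z_k*d_k + 1.

```python
# Illustrative Newton's method for a periodic nucleus: refine a guessed
# parameter c so that the critical orbit is exactly periodic with the
# given period, i.e. z_period(c) = 0.
def newton_nucleus(c, period, steps=20):
    for _ in range(steps):
        z, dz = 0j, 0j                 # orbit value and its derivative dz_k/dc
        for _ in range(period):
            dz = 2 * z * dz + 1        # d z_{k+1}/dc, using the old z
            z = z * z + c
        if dz == 0:
            break
        c -= z / dz                    # Newton step toward z_period(c) = 0
    return c
```

For example, starting near -0.9 with period 2 converges to the period-2 nucleus c = -1; starting near -1.8 with period 3 converges to the real period-3 nucleus at about -1.7548776662.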
hapf
Fractal Lover
Posts: 219
« Reply #35 on: January 02, 2016, 10:08:40 PM »
What is the current state of the art of automatic glitch removal? Let's say how much overhead is needed for a really large image (16K) with (many) thousands of glitches even when using an initial reference that is well chosen? Typically how many additional references? And is it really clean afterwards?
Pauldelbrot
« Reply #36 on: January 03, 2016, 01:22:47 AM »
What is the current state of the art of automatic glitch removal? Let's say how much overhead is needed for a really large image (16K) with (many) thousands of glitches even when using an initial reference that is well chosen? Typically how many additional references? And is it really clean afterwards?
So far as I am aware there have been no new developments here in a year or so. In the case of Nanoscope on a >10k image it is likely to use a couple of hundred additional references and produce a flawless output image.
hapf
Fractal Lover
Posts: 219
« Reply #37 on: January 03, 2016, 10:57:35 AM »
Thanks. After a long break I have started experimenting again with automatic procedures and a couple of hundred references sounds right for that size.
Pauldelbrot
« Reply #38 on: January 03, 2016, 11:37:30 AM »
Thanks. After a long break I have started experimenting again with automatic procedures and a couple of hundred references sounds right for that size.
According to my email notifications, hapf, you posted two replies to this thread, the second within a minute after the first. For some reason I can only seem to view one of them, no matter how much I reload, shift-reload, etc. page 3 of this thread (and as of this writing it's not got a page 4). The reply of yours that I can see is the one I just quoted. Please repost the other one, or pm me a copy of it. Actually copy and paste -- not just a permalink to the article! I doubt the latter will work, due to whatever it is that is hiding the article in question from me, but whatever that is might not work versus a separate copy.
quaz0r
Fractal Molossus
Posts: 652
« Reply #39 on: January 03, 2016, 01:59:39 PM »
Typically how many additional references? And is it really clean afterwards?
It should be perfect if you let it run its full course. I'll set it to bail early on the glitch correction for casual exploration; for saving nice images I just go ahead and let it run down every last pixel.
hapf
Fractal Lover
Posts: 219
« Reply #40 on: January 03, 2016, 04:32:54 PM »
I only posted one reply.
Pauldelbrot
« Reply #41 on: January 04, 2016, 05:58:12 AM »
I only posted one reply.
No, I definitely got two emails, about 40 seconds apart. I got one saying hapf posted something to this thread, and loaded the thread in the browser. While I was waiting for the browser to render the page I got a new mail notification, checked my inbox, and found a second email saying hapf posted something to this thread. The only sequence of events that should generate that outcome is:
1. I check this thread, thus reactivating notifications if anything is subsequently added.
2. You post once, triggering a notification email and deactivating notifications.
3. I reload this thread in the browser, thus reactivating notifications.
4. Within seconds, you make another post, just after notifications were reactivated, triggering another email.
It can't happen if you only post once (or if I don't start reloading the page in between your first post and the second).
hapf
Fractal Lover
Posts: 219
« Reply #42 on: January 04, 2016, 09:05:03 AM »
Counting this one, I have posted 4 times yesterday/today. That's all there is; there simply is no more. I deleted no postings.
claude
Fractal Bachius
Posts: 563
« Reply #43 on: April 09, 2016, 06:21:00 PM »
I didn't quite understand the first post about applying perturbation to perturbation, so I've come up with an alternative derivation of the key result, which is (recap):

|Z + d| < 0.001 |Z| means there is likely to be a glitchy problem

(here Z is the reference orbit value and d is the pixel's perturbation delta, so Z + d is the pixel's own orbit value). I started by looking at catastrophic cancellation, https://en.wikipedia.org/wiki/Loss_of_significance#Loss_of_significant_bits - this can be rewritten as: N in

|a - b| <= 2^-N |a|

is the number of bits lost in the subtraction a - b. But we have an addition, Z + d, so set a = Z and b = -d. Then the above simplifies to:

|Z + d| <= 2^-N |Z|

The smaller the right hand side, the larger N is, so the correct inequality for checking if more than N bits of accuracy are lost is:

|Z + d| < 2^-N |Z|

Pauldelbrot's threshold is pretty close to 2^-10, meaning more than 10 bits are lost. For comparison a double has 53 bits of precision.
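As a quick sanity check on the arithmetic (the helper below is my own illustration, not code from the thread): counting the bits cancelled in the sum Z + d, a relative threshold of 10^-3 corresponds to roughly 10 bits lost.

```python
# Count the significant bits cancelled when computing Z + dz:
# if |Z + dz| <= 2^-N * max(|Z|, |dz|), about N bits are lost.
import math

def bits_lost(Z, dz):
    """Approximate number of significant bits cancelled in the sum Z + dz."""
    s = abs(Z + dz)
    if s == 0:
        return float("inf")          # total cancellation
    return max(0.0, math.log2(max(abs(Z), abs(dz)) / s))
```

For instance, adding 1 and -0.999 cancels about log2(1000) ≈ 9.97 bits, which is why a 10^-3 ratio threshold corresponds to roughly 10 bits lost.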
hapf
Fractal Lover
Posts: 219
« Reply #44 on: April 13, 2016, 10:35:10 AM »
Which explains why this test with 1/10000 cannot tell what will really happen. 10 bits lost may be irrelevant in the context, or even 5 bits may be relevant. It all depends on how often what loss occurs and where in the computation; the bigger and the earlier, the worse. I had a case that went bad and never went below 1/500 or so, and many that hit 1/10000 and are fine in the end. The only thing that is very safe and reasonably efficient is to use something like 1/100 and then explicitly test for corruption each region that hits the threshold at the same iteration, plus the one region that never hits it. It requires one full-precision iteration to the end per region, and which pixel to test is crucial, of course. But fortunately all pixels in the same region behave more or less the same.
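The bookkeeping for this region-testing scheme could be organized as follows (a sketch with made-up names; `verify` stands in for the expensive one-pixel full-precision recomputation, and `flags` maps each pixel to the iteration at which it first tripped the loose 1/100 threshold, or None if it never did):

```python
# Group flagged pixels by the iteration at which they first tripped the
# threshold; pixels tripping at the same iteration form one "region".
def regions_to_verify(flags):
    groups = {}
    for pixel, it in flags.items():
        groups.setdefault(it, []).append(pixel)   # None = "never tripped"
    return groups

# Run one full-precision verification per region, on a single
# representative pixel, and collect the regions that turn out corrupted.
def corrupted_regions(flags, verify):
    bad = []
    for it, pixels in regions_to_verify(flags).items():
        if not verify(pixels[0]):                 # one representative pixel
            bad.append((it, pixels))
    return bad
```

Only regions whose representative fails verification need re-rendering with a new reference; choosing a good representative pixel per region is, as noted above, the crucial part.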