brainiac94
Forums Freshman
Posts: 12
« Reply #180 on: October 03, 2013, 04:28:49 AM »
get rid of the "blobs"
I downloaded SFT's code and started rendering a zoom without reading more than maybe one page of this thread. I have since run into the blobs, and I dread them. I have taken the time to identify all my blobbed frames and, being so close to completing the video, I am willing to spend lots of computing time eliminating them. I even resorted to pasting parts of good frames over the blobs in Paint.NET by hand. Have you guys found any "working" way to get rid of them, regardless of efficiency?

Edit: After actually reading a good part of the thread, I tried splitting the blobbed frames vertically and rendering each half individually. My idea was that this would result in different reference points, and it does indeed work like a charm. Implementing this and merging the PNGs after rendering is trivial, and picking out the broken frames by hand is okay for me, so I will just tell my software which frames I want re-rendered in this fashion and the blobs will be defeated. Once again, great work! I am so happy to finally have this free and incredibly efficient software.
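The split-and-merge fix described above is essentially region arithmetic: each half of a frame gets its own center, so the renderer ends up choosing a different reference point for it. A minimal sketch of that idea, in Python rather than SFT's actual code (the function name and tuple layout are hypothetical):

```python
def split_region(cx, cy, width, height, mode="vertical"):
    """Split a render region (center cx, cy; size width x height in
    complex-plane units) into sub-regions, each with its own center,
    so each sub-render picks its own reference point.
    Returns a list of (cx, cy, width, height) tuples."""
    if mode == "vertical":      # left and right halves
        return [(cx - width / 4, cy, width / 2, height),
                (cx + width / 4, cy, width / 2, height)]
    if mode == "horizontal":    # top and bottom halves
        return [(cx, cy - height / 4, width, height / 2),
                (cx, cy + height / 4, width, height / 2)]
    # "both": four quadrants
    quads = []
    for hx in (-1, 1):
        for hy in (-1, 1):
            quads.append((cx + hx * width / 4, cy + hy * height / 4,
                          width / 2, height / 2))
    return quads
```

After rendering each sub-region separately, pasting the resulting PNGs back together side by side reconstructs the full frame.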
« Last Edit: October 03, 2013, 04:57:27 AM by brainiac94, Reason: Question became obsolete »
hapf
Fractal Lover
Posts: 219
« Reply #181 on: October 03, 2013, 06:37:31 PM »
Edit: After actually reading a good part of the thread, I tried splitting the blobbed frames vertically and rendering each half individually. My idea was that this would result in different reference points, and it does indeed work like a charm. Implementing this and merging the PNGs after rendering is trivial, and picking out the broken frames by hand is okay for me, so I will just tell my software which frames I want re-rendered in this fashion and the blobs will be defeated.
I doubt that works in general. I don't know how SFT selects reference points, though. Does it work for this region simply by splitting vertically in the middle?

Re: -1.4286834450720908323536745652315210519302441679686062023167081505113436391520073310684621209340498784840833268742809294590859491122325334517211060328215717496653589864847131768204309481202118089862775783597674485966256739928525464654718845489972747832732448165677988906514523111972E+00
Im: -1.6249714167279050622903119235098671074408472668686211688386758706636769058023399093938140040045751978736837152053493586322683075141206689859218984124859502172196525489346454846749412611777998816497558114213964228961216131628268228308266344518028972043035404782111712652663530081419E-01
Horizontal size: 5.491739575E-268
brainiac94
Forums Freshman
Posts: 12
« Reply #182 on: October 03, 2013, 10:25:36 PM »
I doubt that works in general.
It's not 100% certain, but I implemented three different ways to split the image (vertically, horizontally and both) and that seems to be helping. My computer is still re-rendering the blobbed frames, but I'll give it a try once that's done.
hapf
Fractal Lover
Posts: 219
« Reply #183 on: October 04, 2013, 10:17:15 AM »
It's not 100% certain, but I implemented three different ways to split the image (vertically, horizontally and both) and that seems to be helping. My computer is still re-rendering the blobbed frames, but I'll give it a try once that's done.
Divide and conquer can help, depending on how references are chosen. But in the worst case many subdivisions are needed until each sub-image is so small that any issues with its chosen reference have subpixel size and are no longer visible.
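The recursive version of this divide-and-conquer idea can be sketched as follows. The `render` and `looks_glitched` callbacks are hypothetical stand-ins for the actual renderer and for whatever corruption detector is available; this is an illustration of the control flow, not anyone's real implementation:

```python
def render_with_subdivision(region, render, looks_glitched, min_size):
    """Render a region with its own reference; if the result still
    looks glitched, split it into four quadrants and recurse, until
    tiles are so small that reference errors are subpixel.
    region is (cx, cy, width, height); returns a list of tiles."""
    cx, cy, w, h = region
    tile = render(region)   # rendered with a reference chosen for this tile
    if not looks_glitched(tile) or max(w, h) <= min_size:
        return [tile]
    tiles = []
    for dx in (-1, 1):
        for dy in (-1, 1):
            tiles += render_with_subdivision(
                (cx + dx * w / 4, cy + dy * h / 4, w / 2, h / 2),
                render, looks_glitched, min_size)
    return tiles
```

With a reliable glitch detector the recursion stops early where the image is clean, so the worst case (subdividing down to near-pixel size) is only paid in the corrupted areas.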
Kalles Fraktaler
« Reply #184 on: October 04, 2013, 10:35:20 AM »
I doubt that works in general. I don't know how SFT selects reference points, though. Does it work for this region simply by splitting vertically in the middle? ...
I cannot see any blobs at this location. But I guess you mean the 64 nodes in the circle? You would need to render this image at a very high resolution to notice at all that they are blobs...
hapf
Fractal Lover
Posts: 219
« Reply #185 on: October 04, 2013, 12:50:38 PM »
I cannot see any blobs at this location. But I guess you mean the 64 nodes in the circle? You would need to render this image at a very high resolution to notice at all that they are blobs...
I mean the 32 nodes in the middle third of the picture at my given horizontal size, if rendered with the central minibrot as reference. If you see no corruption at, say, 1000*750 pixels, could you upload your image somewhere? Maybe the way you colour the image hides the corruption. Or there is a bug in my code.
Kalles Fraktaler
« Reply #186 on: October 04, 2013, 12:55:46 PM »
I mean the 32 nodes in the middle third of the picture at my given horizontal size, if rendered with the central minibrot as reference. If you see no corruption at, say, 1000*750 pixels, could you upload your image somewhere? Maybe the way you colour the image hides the corruption. Or there is a bug in my code.
It may also be a difference in the way we use the zoom magnification level: I consider this image to be zoomed to 5.49E+268, and you are saying 5.49E-268.
hapf
Fractal Lover
Posts: 219
« Reply #187 on: October 04, 2013, 02:38:09 PM »
My zoom level is the horizontal size: 5.491739575E-268 = horizontal size (real max - real min, no rotation assumed). So do you get blobs in the 32 bulbs at that size? Your pic is too small to see it well. You have 64 bulbs at that position.
Kalles Fraktaler
« Reply #188 on: October 04, 2013, 04:43:29 PM »
Yes, if I zoom out to 1e266 there are 32 blob pairs. My automatic function finds and solves them for some resolutions; for others it doesn't...
Let's call the blobs in each pair A (the bigger one) and B (the smaller one). Even though all 32 A blobs can be solved with the same new reference point inside one of them, this reference does not solve the B blobs, and vice versa. Fortunately all pixels in the bigger A blobs have the same iteration count, 39987, and the smaller B blobs have 40025, so as long as a blob can be identified and a new reference point can be selected, it is easy to re-render the pixels with these iteration counts.
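Since every pixel in a blob carries the same (wrong) iteration count, selecting the pixels to re-render reduces to a value match over the iteration buffer. A minimal sketch of that selection step (the function name and data layout are hypothetical, not Kalles Fraktaler's actual code):

```python
def blob_pixels(iterations, glitched_values):
    """Collect (x, y) coordinates of pixels whose iteration count
    matches one of the known glitched values (e.g. {39987, 40025}
    at this location), so they can be re-rendered from a better
    reference point. iterations is a 2D list of iteration counts."""
    bad = set(glitched_values)
    return [(x, y)
            for y, row in enumerate(iterations)
            for x, v in enumerate(row)
            if v in bad]
```

The re-render pass then iterates only these pixels against the new reference, leaving the rest of the image untouched.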
Kalles Fraktaler
« Reply #190 on: October 14, 2013, 01:57:37 PM »
Interesting, claude - have you tried your function on the location from hapf? I think the blobs are unfortunately caused not only by "loss of significance" but also by the limited precision of the hardware data types, so it could be that the "loss of significance" is not visible but there are still blobs...?
hapf
Fractal Lover
Posts: 219
« Reply #191 on: October 14, 2013, 05:13:33 PM »
Interesting, claude - have you tried your function on the location from hapf? I think the blobs are unfortunately caused not only by "loss of significance" but also by the limited precision of the hardware data types, so it could be that the "loss of significance" is not visible but there are still blobs...?
My location needs at least 2 references because double does not have enough precision. It's not a case where a better reference does it for the whole image (up to some size in pixels). How many references are needed depends on the complexity of the image and its size in pixels, even when the best references are used: the more pixels, the more one can potentially see local detail where the double runs out of precision. How quickly it runs out of precision also depends on the reference used, in addition to image complexity and size in pixels.
claude
Fractal Bachius
Posts: 563
« Reply #192 on: October 14, 2013, 08:05:04 PM »
double has not enough precision
Right, that's true - I'm surprised it even works as well as it does. But it's not so much precision as what happens when going out of range - when values underflow the normal range they lose precision (or even become zero). Here's a table of numeric limits:

type        | precision | range | epsilon   | normal      | sqrt normal | nonzero     | sqrt nonzero | finite      | sqrt finite
float       | 23+1      | 8     | 1.192e-07 | 1.175e-38   | 1.084e-19   | 1.401e-45   | 3.743e-23    | 1.701e+38   | 1.304e+19
double      | 52+1      | 11    | 2.220e-16 | 2.225e-308  | 1.492e-154  | 4.941e-324  | 2.223e-162   | 8.988e+307  | 9.481e+153
long double | 63+1      | 15    | 1.084e-19 | 3.362e-4932 | 1.834e-2466 | 3.645e-4951 | 6.038e-2476  | 5.949e+4931 | 7.713e+2465
__float128  | 112+1     | 15    | 1.926e-34 | 3.362e-4932 | 1.834e-2466 | 6.475e-4966 | 2.545e-2483  | 5.949e+4931 | 7.713e+2465
precision: number of mantissa bits
range: number of exponent bits
epsilon: smallest positive value such that 1 + epsilon != 1
normal: smallest positive normal number
nonzero: smallest positive number
finite: largest finite power of two

I also included the square root of some of these values in the table. The "sqrt normal" column is what I would use as a guideline for choosing which floating point type to use, comparing it with the pixel spacing in the image. I'd use double up to about 1e-150 (though I noticed that distance estimation can exceed the range and cause problems from around 1e-140 or so), then switch to long double until about 1e-2460, then either use software floating point with a wide range (MPFR etc.) or use a (double, int) pair, using the int to extend the exponent range and keeping the double scaled near 1. If long double isn't available (eg: GPU) then I'd use the (double, int) thing earlier. long double is not necessarily slower than double - in fact it was 10% faster for hapf's location (presumably subnormal/denormal doubles take longer to compute than normal doubles).
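The "sqrt normal" guideline and the (double, int) pair can both be sketched briefly. Below, `choose_float_type` applies the thresholds from the table above to the decimal zoom exponent, and `fe_make`/`fe_mul` show the extended-exponent idea: keep a double mantissa in [0.5, 1) via `frexp` and track the extra range in an integer exponent. This is a hypothetical illustration, not any renderer's actual implementation:

```python
import math

def choose_float_type(exp10):
    """Pick a number type from the decimal zoom exponent
    (pixel spacing is roughly 10**-exp10), following the
    sqrt-normal guideline from the table."""
    if exp10 < 150:
        return "double"
    if exp10 < 2460:
        return "long double"
    return "double+int (extended exponent) or MPFR"

def fe_make(m, e=0):
    """Build a (mantissa, exponent) pair representing m * 2**e,
    renormalized so the mantissa stays in [0.5, 1)."""
    if m == 0.0:
        return (0.0, 0)
    mm, me = math.frexp(m)
    return (mm, me + e)

def fe_mul(a, b):
    """Multiply two (mantissa, exponent) pairs without ever
    underflowing the double range."""
    return fe_make(a[0] * b[0], a[1] + b[1])
```

The payoff is that products like (1e-200)^2, which underflow to zero as plain doubles, stay exactly representable as a normal mantissa plus a wide integer exponent.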
Kalles Fraktaler
« Reply #193 on: October 16, 2013, 04:27:33 PM »
claude, maybe you are on to something though!!! Maybe this can be used to locate where to put additional references, and also to indicate which pixels need to be re-rendered with the additional reference. But I don't like your triple square root for every iteration - can't it be done without them?
I have discovered that my way of solving glitches, by replacing the pixels with the same iteration count in a blob, is not efficient on some locations, especially for stretched dense Julia patterns that arise after close passage of one of a minibrot's dense tentacles. Here is an example:
Re: -1.9855484133529534182456788035170260405619874319858542762764067467
Im: -0.0000000000002743067126729694556175287646032154237263455024187597
Zoom: 4.72E21
Max-iter: 2000
Without any additional reference points the glitches contain patterns and are not just big blobs with the same iteration count. I would need to add many additional reference points in order to replace all the different iteration counts that arise. But your error encoding shows clearly where the glitches are, so when I have time I will examine whether it is possible to use it for glitch correction.
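claude's exact error measure isn't spelled out in this part of the thread, but the general idea of carrying a running error estimate alongside the iterate can be sketched with a textbook forward bound for z -> z*z + c: if z is off by e, the next value is off by about 2|z|e + e^2, plus a fresh rounding error of roughly epsilon * |z*z + c|. The function below flags a pixel as suspect when that bound stops being negligible. The name and the 1e-6 tolerance are hypothetical choices for illustration:

```python
EPS = 2.220446049250313e-16  # IEEE double machine epsilon

def iterate_with_error(c, max_iter):
    """Mandelbrot iteration carrying a forward rounding-error bound.
    Returns (iteration count, suspect flag); the flag is a crude
    stand-in for a real glitch detector."""
    z = 0j
    err = 0.0
    for n in range(max_iter):
        az = abs(z)
        # propagate the bound for z -> z*z + c, plus fresh rounding error
        err = 2.0 * az * err + err * err + EPS * (az * az + abs(c))
        z = z * z + c
        if abs(z) > 2.0:
            # flag when the accumulated bound is no longer tiny vs |z|
            return n, err > 1e-6 * abs(z)
    return max_iter, err > 1e-6
```

In a perturbation renderer the same bookkeeping would run on the delta iteration rather than on z itself, marking pixels that need re-rendering from an additional reference.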
hapf
Fractal Lover
Posts: 219
« Reply #194 on: October 17, 2013, 04:38:50 PM »
But I don't like your triple square root for every iteration, can't it be done without them?
The error will likely overflow if you don't use the root. And recursive error propagation is important: when I tried something like this I did not consider proper propagation, and the resulting measure was not good enough for detecting corruption reliably.
But your error encoding shows clearly where the glitches are, so when I have time I will examine whether it is possible to use it for glitch correction.
Corruption can start well before constant blobs show up, so this error measure can help you find the potentially corrupted places.
|
|
|
Logged
|
|
|
|
|