quaz0r
Fractal Molossus
Posts: 651


« Reply #315 on: September 21, 2016, 09:52:45 PM » 

yeah, i use a floatexp for all this stuff. for me it has not overskipped yet. what are you guys using for tmax? i am using the longest side of my viewport, i.e. 1280 for 1280x720, times the size of a point. it is a simple value to calculate, and it seems like it would be maybe a little more than the minimum you could get away with, just for good measure. so i guess you are finding my stop condition starts overskipping with a somewhat smaller tmax than this? does this mean the stop condition formula is not exactly what it should be, or does it simply mean that in practice tmax needs to be a little larger than the absolute minimum it seems like it should be?


« Last Edit: September 21, 2016, 10:00:03 PM by quaz0r »





claude


« Reply #316 on: September 21, 2016, 09:57:03 PM » 

> ah, i guess you are using fp exception stuff. i had compiled with -Ofast; it works now with -O3, and says 43113 at that location.

Don't know about fp exceptions, but there are possibly some parts that depend on associativity being in the right order (i.e., (a + b) - b != a + (b - b) or similar). Generally I only use -ffast-math or similar for small parts of a codebase, when I've tested and it gives acceptable output.

> 43113 is still way too low for the light years away location. maybe i'm not entering the location right. is it reporting the skip amount as an amount past a multiple of the period or something? it reports a period of 126210, and (126210*2)+43113 is 295533, which would be more in the expected range for this location.

Yes, it is low, thanks to knighty for the probable explanation; will see about adding something floatexp to my example code. No multiple of periods is reported, just the plain value. I count iteration 0 as z=0, iteration 1 as z=c, so maybe this is where some confusion is coming from... (perturbation starts from iteration 1 in my terminology, as at iteration 0 everything is 0 so there is nothing to perturb)

> the period detection scheme you linked to sounds very intriguing! can it be said to be perfectly reliable?

It can fail in a few situations: all initial corners are outside the Mandelbrot set with no points of the Mandelbrot set in the interior; also, if all corners are interior to the same component and not surrounding that component's nucleus, it could iterate forever without finding a period or escaping. The most common failure case is as I described earlier.

For tmax I use the largest distance from the reference to the corners of the viewport.







knighty
Fractal Iambus
Posts: 815


« Reply #317 on: September 21, 2016, 10:01:00 PM » 

> yeah, i use a floatexp for all this stuff. for me it has not overskipped yet. what are you guys using for tmax? i am using the longest side of my viewport, ie 1280 for 1280x720, times the size of a point.

Yes, you use a bigger Tmax, so R grows (much) faster.

> does this mean the stop condition formula is not exactly what it should be, or does this simply mean that in practice we need to say tmax should be a little larger than the absolute minimum it seems like it should be?

Guess: not exactly what it should be, but it seems to be very close.







knighty
Fractal Iambus
Posts: 815


« Reply #318 on: September 21, 2016, 10:04:00 PM » 

For one, I use -ffast-math. No difference so far... except that it is faster.







quaz0r
Fractal Molossus
Posts: 651


« Reply #319 on: September 21, 2016, 10:34:37 PM » 

actually i misspoke; looking at it again, i set tmax to the diagonal distance between opposite corners of the viewport, thinking this could be a simple catch-all value for a reference point at any location within the image. though right now i am simply using the center as my initial reference, and thus that is what gets used to determine the skip amount. so i guess i've been using twice what the minimum value could be. i will try setting tmax to the minimum and play around with that some.
results at these two locations again with min tmax:

claude's location (min iter: 3554)
   8 terms:  old test 3216 (incorrect) | new test 3215 (correct) | new test, min tmax 3215 (correct)
  16 terms:  old test 3298 (incorrect) | new test 3297 (correct) | new test, min tmax 3297 (correct)
  32 terms:  old test 3368 (correct)   | new test 3375 (correct) | new test, min tmax 3379 (correct)
  64 terms:  old test 3380 (correct)   | new test 3379 (correct) | new test, min tmax 3461 (correct)

redshifter's light years away location (min iter: 313767)
   8 terms:  old test 282752 (incorrect) | new test 282751 (correct) | new test, min tmax 282751 (correct)
  16 terms:  old test 282752 (correct)   | new test 300358 (correct) | new test, min tmax 300358 (correct)
  32 terms:  old test 282752 (correct)   | new test 313030 (correct) | new test, min tmax 313083 (correct)
  64 terms:  old test 282752 (correct)   | new test 313083 (correct) | new test, min tmax 313192 (incorrect)


« Last Edit: September 21, 2016, 11:01:45 PM by quaz0r »





knighty
Fractal Iambus
Posts: 815


« Reply #320 on: September 22, 2016, 05:46:57 PM » 

It seems that quaz0r's test gives the maximum possible skip while neglecting the roots of SA'(t). In most situations it gives the right number of iterations to skip. It is not even necessary to multiply by <Image Removed>.







quaz0r
Fractal Molossus
Posts: 651


« Reply #321 on: September 22, 2016, 09:42:13 PM » 

multiplying that extra part was my suggested modification to your original test condition. it results in skipping a little less. also, your original version differed in that you were trying to subtract some of the terms in some way. testing things again with the right tmax this time, simply <Image Removed> does appear to give the maximum skippable amount most of the time (plus one) ... whereas your original idea of subtracting some of the terms sometimes results in the right skip amount (plus one) and other times results in (dramatic) underskipping.

<Image Removed> does still appear to sometimes break down and overskip by more than 1 for me however; for example, with 64 terms at the light years away location it skips 313249. in my last post, i wrote how <Image Removed> skips 313192, which itself was already resulting in an incorrect render. so simply SA' does seem very close, but it seems it does still need modification by something?

i wonder too at how many terms, at how many iterations, or under what other conditions error due to limited precision might start accumulating and affecting the outcome. it makes it extra difficult to know whether the test condition is theoretically correct when it starts to fail seemingly only at a higher number of terms and/or iterations. still, the fact that this test condition always overskips by at least 1 does make it seem like it is indeed an incomplete test condition? albeit very close. maybe it does need some scheme of subtracting or otherwise modifying some terms after all. testing the light years away location with 128 terms overskips (by more than 1) also.


« Last Edit: September 22, 2016, 10:31:10 PM by quaz0r »





knighty
Fractal Iambus
Posts: 815


« Reply #322 on: September 22, 2016, 11:10:34 PM » 

> multiplying that extra part was my suggested modification to your original test condition. it results in skipping a little less. also except that you were originally trying to subtract some of the terms in some way. testing things again with the right tmax this time simply <Quoted Image Removed> does appear to give the maximum skippable amount most of the time (plus one) ... whereas your original idea of subtracting some of the terms sometimes results in the right skip amount (plus one) and other times results in (dramatic) underskipping. <Quoted Image Removed> does still appear to sometimes break down and overskip by more than 1 for me however, for example with 64 terms at the light years away location, it skips 313249. in my last post, i wrote how <Quoted Image Removed> skips 313192, which itself was already resulting in an incorrect render. so simply SA' does seem very close, but it seems it does still need modification by something?

Well, one has to do the reasoning beginning with the per-pixel test. The error is greater when R*t^(m+1) is big and/or the derivative of the SA is small. Now, assuming the reference point is inside the dominant minibrot: when the number of iterations goes beyond one period, there is necessarily one point where the derivative is zero. That zero is inside the minibrot. Between one period and two, there are points where the derivative of the SA vanishes; they are located in the area around the minibrot that is between one and two map doublings. After we reach two periods, the new points where the derivative of the SA becomes 0 are inside the area between two and three map doublings (plus two more points inside the minibrot)... etc. That means that after the first period, we get more and more points where the derivative becomes zero. All these zeros are located inside minibrots which are much smaller than a pixel. Their areas of influence, that is, the areas where we get a lot of errors, are very small; therefore, they have to be discarded somehow.

Now, the problem is that with the SA, which is a low-order polynomial (order m), we are trying to approximate a polynomial (let's call it the Mandelbrot polynomial) which has a very high order (2^Iterations roots). So at some point it is impossible for the derivative of the SA to have the same number of roots as the Mandelbrot polynomial inside the rendered area. Then we get a deformed glitch, which is not necessarily because R is too big but because the SA's derivative can't reproduce all the points where the Mandelbrot polynomial's derivative vanishes. (See for example the 1st animated gif in this post.)

The test that I have proposed is finally no more than an apparatus that counts the number of roots of SA'(t) that we can be sure are inside the radius-tmax disk around the reference. The number of roots is given by the index of the "winning" coefficient (1 of course). See "Bounds on (complex) polynomial roots", based on the Rouché theorem. There is a lot of stuff to learn... The good news now is that we have a lower AND an upper bound on the number of iterations to skip.

> i wonder too at how many terms, at how many iterations, or under what other conditions error due to limited precision might start accumulating and affecting the outcome. it makes it extra difficult to know whether the test condition is theoretically correct when it starts to fail seemingly only at a higher number of terms and/or iterations.

Yes, it's quite difficult, but not impossible. One can use a class that computes the rounding errors and their propagation along with the calculations... but that would be too slow.

BTW! In the "light years away" location, do you get the same glitches as Kalles Fraktaler when skipping more than 313083 iterations? The circular one is probably due to rounding errors (I can detect it by using a crude estimation of the rounding error), but the random one is very difficult to explain.







aleph0


« Reply #323 on: September 22, 2016, 11:36:30 PM » 

> How are you guys doing the distance estimation screenshots? They are quite stunning and beautiful!

This question seems to have been overlooked... Do you mean how the distance estimate is translated to a greyscale value in the rendered images? If so, take a look at the function "plot" in Claude's example C++ code v6 in reply #308. It uses the hyperbolic tangent (tanh) as the translation function.

See these posts by Pauldelbrot for other translation functions (with inverted colouring, where white is close):
http://www.fractalforums.com/images-showcase-(rate-my-fractal)/troubled-tree-ii/
http://www.fractalforums.com/images-showcase-(rate-my-fractal)/doily/
Where Paul talks of a grey curve and distance estimator colouring, he is essentially referring to the translation function used; there can be scaling and inversion factors applied too. DE renderings can be stunningly beautiful, no doubt, especially for deeply zoomed and dense images... symmetries revealed in all their glory. Greyscale DE can also be combined with full-colour rendering methods to great effect:
http://mrob.com/pub/muency/representationfunction.html

Or did you mean from which program? If so, Claude's example program would be suitable if you can get it compiled; see the source code comments (that's going to need a Linux distribution, typically). Then run it from the command line with something like this:

./a.out --width 1920 --height 1080 --maxiters 65536 --re -1.7490930547842157115735991351312021280767302886600874275176041515873173828 --im 0.000175618967275210134352664990246036215136914278796260509941399986977798 --radius 1e-47 --precision 100 --order 32 --stopping quaz0r --output out.ppm

Then use something to convert the ppm output file to one of the more common image formats. I use Gimp.







quaz0r
Fractal Molossus
Posts: 651


« Reply #324 on: September 22, 2016, 11:57:58 PM » 

> at some point it is impossible for the derivative of the SA to have the same number of roots as the Mandelbrot polynomial inside the rendered area. Then we get a deformed glitch which is not necessarily because R is too big but because the SA's derivative can't reproduce all the points where the Mandelbrot polynomial's derivative vanishes.

if i am understanding what you are saying, could we then say that <Image Removed> is the right test condition, with the caveat that by its nature of having a limited number of terms it is inevitably going to fail? this failure due to limited terms, can we say what exactly it entails, such that we can add a check for when it happens? and with that added check, we should then have everything we need?

also, if i understand what you are saying, i think this explains why i have observed the test starting to fail with a greater number of terms... even though there are more terms, it means we are going farther into more iterations, which is compounding this problem of doubling the roots? or whatever you are talking about that i dont really understand

> BTW! In "light years away" location, do you have the same glitches as Kalles Fraktaler when skipping more than 313083 iterations.

with 32 terms, i get a correct render when skipping 313083 iterations. at 313084 it falls apart, even at 64 terms:


« Last Edit: September 23, 2016, 02:18:55 AM by quaz0r »





quaz0r
Fractal Molossus
Posts: 651


« Reply #325 on: September 23, 2016, 02:52:43 AM » 

skipping 313084 iterations with both 128 and 256 terms results in the same render, with the fuzzy-looking sort of glitchiness at the ends:


« Last Edit: September 23, 2016, 04:48:00 AM by quaz0r »





quaz0r
Fractal Molossus
Posts: 651


« Reply #326 on: September 24, 2016, 01:54:20 AM » 

i screwed around a bit with reordering the summation of the terms and it does affect how the fuzzy bits turn out, so it does look to be a loss-of-significance sort of issue, i guess. i was just reading about a few different things like Kahan summation; as usual, it looks like the solution is not necessarily a simple one.

ok, running a test using full precision just to add up the deltas for each point took forever, and it still didn't fix the glitchy areas.


« Last Edit: September 24, 2016, 04:06:33 AM by quaz0r »





knighty
Fractal Iambus
Posts: 815


« Reply #327 on: September 24, 2016, 01:42:55 PM » 

After thinking a little bit about it I finally realized that I am really stupid! Well... better late than never. The test should be: <Image Removed>

Where:
p_max is the number of bits in the mantissa (64 for long double)
s is the (mysterious) number of bits of accuracy that could be lost without causing too many errors. This depends a lot on the location; it can go from 0 to something like 20 or 30. Better to set it to 0. This is because (in principle) the area of influence of the zeros of the derivative of the SA is about the size of the minibrot where the zero occurs, which is usually very small. Even when the minibrot is visible, that area won't show up because it is inside.

It works perfectly so far for m <= 64. The "light years away" location is problematic. When m > 90 or so, the rounding errors in the coefficients (+ sometimes cancellation) seem to dominate the truncation error.


« Last Edit: September 25, 2016, 05:11:34 PM by knighty, Reason: another error! »





knighty
Fractal Iambus
Posts: 815


« Reply #328 on: September 24, 2016, 01:49:36 PM » 

> i screwed around a bit with reordering the summation of the terms and it does affect how the fuzzy bits turn out, so it does look to be a loss of significance sort of issue i guess. i was just reading about a few different things like kahan summation; as usual it looks like the solution is not necessarily a simple one.

Here, the fuzzy part changes even when nothing is modified in the code or the parameters.

> ok, running a test using full precision just to add up the deltas for each point took forever, and it still didn't fix the glitchy areas.

I have also tried that (among other things). It seems to show that the problem indeed comes from losing accuracy in the coefficients.







quaz0r
Fractal Molossus
Posts: 651


« Reply #329 on: September 24, 2016, 03:11:36 PM » 

did you write the formula right? if we subtract an amount from p_max, the test bails sooner rather than later. also, the test seems to bail rather early in general?

> It works perfectly so far for m<=64. The "Light years away" location is problematic. When m>90 or so, the rounding errors in the coefficients (+ sometimes cancellation) seem to dominate the truncation error.

is it a coincidence that the number of terms it appears we can safely use seems to roughly correspond to the number of bits of mantissa precision? or could we use that as a general rule of thumb? i guess you are saying the truncation error computation can fall apart with more terms, but it seems the calculation of the coefficients themselves can also fall apart, i.e. the light years away location being fuzzy.


« Last Edit: September 24, 2016, 04:29:53 PM by quaz0r »





