Roquen
Iterator
Posts: 180
« Reply #135 on: August 06, 2013, 05:57:16 AM »
If you're working on one of these it would probably be a good idea to review "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
hapf
Fractal Lover
Posts: 219
« Reply #136 on: August 06, 2013, 02:54:55 PM »
> If you're working on one of these it would probably be a good idea to review "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
The relevant formulas for delta i and delta r do not leave much to optimize. There is one term, (a²-b²). Replacing it with (a-b)*(a+b) actually delivered worse accuracy in the example I tested.
knighty
Fractal Iambus
Posts: 819
« Reply #137 on: August 06, 2013, 06:48:39 PM »
> If you're working on one of these it would probably be a good idea to review "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
That was it IIRC!
hobold
Fractal Bachius
Posts: 573
« Reply #138 on: August 06, 2013, 10:58:27 PM »
> The relevant formulas for delta i and delta r have not much to optimize. There is one term (a²-b²). Replacing it with (a-b)*(a+b) actually delivered worse accuracy in the example I tested.

Numerics is tricky and can be quite frustrating. The proper way to do this kind of thing isn't really trial and error ("let's see if this makes things more accurate!"). In a perfect world, you would have the opportunity and the means to analyze the programmed formulas, and find out whether there are any (at least statistical) guarantees about relative magnitudes, about same/opposite signs, and so on. Then you would pick the most accurate way of computation and write that into the program.

In practice, there are quite a few cases where no really accurate way of computation is known; you can only minimize the losses somewhat. And often the analysis does not yield any guarantees. Then the most accurate way requires a check at runtime, and a conditional branch to the appropriate form of computation. Testing can be very expensive, and even the conditional branch can be very expensive when it behaves unpredictably.

Numerics just isn't pretty. I see it more as a necessary evil to make floating-point math robust and reliable, on top of its virtues of being fast and convenient.
Roquen
Iterator
Posts: 180
« Reply #139 on: August 07, 2013, 01:28:22 PM »
@hapf: The entire formulation must be considered to find the desired balance between speed & accuracy.
Regarding the terms mentioned: from a quick mental review of the two forms, it seems impossible for a²-b² to be more accurate than (a+b)(a-b), but my numerical analysis is really rusty. Can you give an example of values that demonstrates the contrary?
hapf
Fractal Lover
Posts: 219
« Reply #140 on: August 07, 2013, 04:11:28 PM »
> @hapf: The entire formulation must be considered to find the desired balance between speed & accuracy.
> Regarding the terms mentioned: from a quick mental review of the two forms, it seems impossible for a²-b² to be more accurate than (a+b)(a-b), but my numerical analysis is really rusty. Can you give an example of values that demonstrates the contrary?
I used http://www.fractalforums.com/index.php?topic=16457.msg63060#msg63060 and looked at the iterations in a blob area. I compared both formulas' results against precise results. The average deviation was bigger for (a-b)*(a+b). I compared some pixels, not all pixels in the blobs.
Kalles Fraktaler
« Reply #141 on: August 07, 2013, 04:31:54 PM »
Cancellation that induces accuracy loss usually happens when subtracting two numbers whose difference is very small.
Wouldn't that be equal to checking that is not near 2?
knighty
Fractal Iambus
Posts: 819
« Reply #142 on: August 07, 2013, 11:15:41 PM »
I would say it is equivalent to: is near -2, or is near 0. One thing worth verifying: check if . If true, set instead of . If the blob effect is due to cancellation, then it would be replaced by some (maybe deformed) details. That said, I'm not sure what exactly is happening. For now this superfractalthing looks to me just like numerical black magic. I'm just trying to understand.
« Last Edit: August 14, 2013, 03:10:58 PM by knighty, Reason: typo »
Roquen
Iterator
Posts: 180
« Reply #143 on: August 08, 2013, 10:07:20 AM »
Sterbenz theorem: given two floating-point values x and y where y/2 <= x <= 2y, then x - y is exact.
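The theorem can be checked directly in code: if x - y is exact, then the true sum y + (x - y) equals x, which is representable, so adding y back must reproduce x bit for bit. A small sketch (the helper names are mine):

```c
#include <stdbool.h>

/* Sterbenz condition for positive finite doubles: y/2 <= x <= 2y. */
bool sterbenz_applies(double x, double y) {
    return x >= 0.5 * y && x <= 2.0 * y;
}

/* If x - y is computed exactly, adding y back recovers x exactly,
   because the true sum y + (x - y) = x is itself representable. */
bool subtraction_is_exact(double x, double y) {
    double d = x - y;
    return y + d == x;
}
```

This is exactly why close-together subtractions like the ones in the perturbation deltas are themselves error-free; the trouble is the digits that were already lost before the subtraction.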
Roquen
Iterator
Posts: 180
« Reply #144 on: August 08, 2013, 10:28:02 PM »
Follow-up on x²-y² vs. (x+y)(x-y). With a little more thought I've realized that the second can have more error than the first, but it should never be more than one ULP away from the properly rounded result. (Considering I screwed up the first time, take this with a grain of salt.) The first, however, can have much larger errors.
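One concrete pair where the two forms visibly differ (my example, not from the thread): with x = 1 + 2^-29 and y = 1, the factored form (x+y)(x-y) = 2^-28 + 2^-58 is computed exactly, while x*x rounds to 1 + 2^-28 before the subtraction, so the naive form silently drops the 2^-58 term.

```c
/* Naive difference of squares: x*x rounds away low-order bits before
   the subtraction can see them. */
double diff_sq_naive(double x, double y) {
    return x * x - y * y;
}

/* Factored form: for this pair x - y is exact (Sterbenz) and x + y
   happens to be representable, so only the final multiply can round;
   here its product is representable too, so the result is exact. */
double diff_sq_factored(double x, double y) {
    return (x + y) * (x - y);
}
```

The relative error of the naive form here is about 2^-30, roughly four million ULPs, whereas the factored form is exact.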
hapf
Fractal Lover
Posts: 219
« Reply #145 on: August 14, 2013, 01:31:45 PM »
I have tested this new rendering method for quite some time now. The challenge is clearly to find good reference points. Without them, blob detection and blob fixing become more and more of an issue depending on where you zoom and how much you zoom.

If you let the computer pick a point at random in the region/picture, it will usually not be in a minibrot, and blob detection is unavoidable, as many points run out of available iterations from the reference point. If you let the computer automatically find a minibrot, which one should it use? It can't be too far outside the region, or accuracy breaks down and blobs are created. Finding one in the region is not (always) trivial, as far as I can see. And once it's found, it might still create blobs somewhere in the picture.

So blob detection seems basically unavoidable, and lacking a method to identify blobs reliably and to find good reference points among the blobbed pixels, there is no telling how long the blob fixing goes on, using full precision at every step and endangering the speed of the method.

Nonetheless, the results that can be obtained for your 'average' locations are spectacular: in minutes you see complexity you previously had to wait days for without the method, or that was simply out of reach in the 80s, when it all began.
« Last Edit: August 14, 2013, 01:34:27 PM by hapf »
Kalles Fraktaler
« Reply #146 on: August 14, 2013, 01:50:36 PM »
I have tried to validate that is not near 0; however, that didn't help finding glitches, unfortunately. It seems that some glitches are simply due to insufficient precision, no matter what, and there is no common behavior of the numbers in the series.

Just to be sure, here is how I calculated the validation above, which includes complex division: is stored as (dr + di) and as (xr + xi). I then get the following code for the division and the validation (0.001 is an arbitrary small number; I have tried smaller and bigger):

cr = (dr*xr + di*xi) / (xr*xr + xi*xi);
ci = (xr*di - dr*xi) / (xr*xr + xi*xi);
if (abs(cr+2) < 0.001 && abs(ci) < 0.001) ...
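For reference, the check above can be packaged as a single function. This is my restatement of the posted snippet, not Kalles Fraktaler's actual source; the function name, the `tol` parameter, and the hoisted shared denominator are mine:

```c
#include <math.h>
#include <stdbool.h>

/* Divides d = dr + i*di by x = xr + i*xi with the textbook formula
   (d * conj(x)) / |x|^2, then tests whether the quotient lies within
   `tol` of -2 + 0i. */
bool quotient_near_minus_two(double dr, double di,
                             double xr, double xi, double tol) {
    double den = xr * xr + xi * xi;   /* shared denominator, computed once */
    double cr = (dr * xr + di * xi) / den;
    double ci = (xr * di - dr * xi) / den;
    return fabs(cr + 2.0) < tol && fabs(ci) < tol;
}
```

Hoisting the denominator doesn't change the numerics; it only avoids computing xr*xr + xi*xi twice.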
knighty
Fractal Iambus
Posts: 819
« Reply #147 on: August 14, 2013, 03:07:56 PM »
There is a typo in my previous post: I've written instead of .

I did some experiments with a basic implementation that doesn't use multiprecision arithmetic. The attached pictures are zooms around the point (-1.7490030131, 0.000247672). The colors represent the values of : blue --> small values, and red --> values >= 1 (actually they are always < 2). The first uses the reference point (-1.7490030131, 0.000247672) and the second the reference point (-1.7490030131 + 9.09e-7, 0.000247672 + 11.39e-7). Both are inside minibrots. In particular, notice the "shape" of the blue dots. There are two types: cone-shaped and flat ones. I think that blobs would appear in the flat blue dots but not the cone-shaped ones. This, of course, when zooming much more, not in this particular example. The values of are quite different.

mandel2(tx,ty,mi){
  hx=tx; hy=ty;
  x=cc[0][0]+hx; y=cc[0][1]+hy;  //cc[] contains the reference point orbit
  r2=x*x+y*y;
  can=100; acan=1; ican=0; cant=1; bl=0;
  for(i=0;i<mi && r2<4;i++){
    hxx=2*cc[i][0]+hx;
    hxx2=cc[i][0];  //2*cc[i][0]-hx;
    hyy=2*cc[i][1]+hy;
    hyy2=cc[i][1];  //2*cc[i][1]-hy;
    can=min(can,(hxx^2+hyy^2)/(hxx2^2+hyy2^2));
    cant=min(cant,(tx^2+ty^2)/can);
    //if(can<1) {ican++; cant*=can; bl=max(bl,floor(-0.5*log(can)*1/log(2)));}
    //acan=can;
    //if(i==90) return can;
    xx=hxx*hx-hyy*hy+tx;
    hy=hxx*hy+hyy*hx+ty;
    hx=xx;
    x=cc[i+1][0]+hx;
    y=cc[i+1][1]+hy;
    r2=x*x+y*y;
  }
  //i/mi*10
  return sqrt(can);
  //bl*0.1
}
hapf
Fractal Lover
Posts: 219
« Reply #148 on: August 19, 2013, 07:23:26 PM »
> AIUI, Superfractalthing also has some method for determining when a second reference point is needed; I don't know about Kallesfractaler, though, or the methods either use. I've described mine in the hope that it may be helpful to either of their authors, and in the hope that someone might have a useful suggestion for speeding my own up.
How does Superfractalthing decide that a new reference point is needed, and how is it determined? So far I have tried to find suitable minibrots in or around the image region, which seems to be difficult in some cases. No good method yet that is fast and works everywhere.
hapf
Fractal Lover
Posts: 219
« Reply #149 on: August 24, 2013, 08:26:07 PM »
Here's a nice place to use for checking your anti-'corruption' measures. Render it at 2K plus to see where the issues likely are. Not (well) visible at PAL/NTSC resolution.

3.491243948744602226845164903446013604574954671252471332414684967283103122394453166881753622839962804126553654590343280405480576210221670E-01
7.044912918643927975113013191745694904413230679707124075989897868290741298182991608687727632191553399184489116483374676984467558071736600E-01
3.362630349E-126

(Sorry, the old parameters were wrong.)
« Last Edit: August 25, 2013, 02:15:23 PM by hapf »