quaz0r
Fractal Molossus
Posts: 652


« Reply #60 on: October 26, 2016, 07:28:11 PM » 

>> if P is definitely the period, can we then say that our new reference can simply consist of P iterations, which we could then index as iter % P ?
> I suppose so! Rounding errors might cause problems, though. Chances are that you don't use enough bits of precision to stop the secondary reference escaping eventually..

I see, we would have to do more iterations of Newton's method to actually land inside the minibrot. I was envisioning a scenario where we determine the reference to enough precision that it is actually inside the minibrot, definitively determine its period P, and then only have to do P reference iterations. It seems this might be particularly useful if we could make this happen for the initial reference, which I assume everyone is still iterating from 0 up to a maxIter or such. Could this be feasible at all? I guess you would also have to see whether the benefit of doing only P reference iterations outweighed the cost of iterating at a higher precision.
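A minimal sketch of the iter % P idea (my own toy illustration, not anyone's actual implementation), using the period-2 nucleus c = -1 as the example; a real renderer would use a high-precision nucleus found by Newton's method:

```python
def reference_orbit(c, P):
    """Compute the P reference iterates of a nucleus c of period P.
    Because z returns exactly to 0 after P steps, iteration n of the
    reference can be read back as orbit[n % P]."""
    orbit = []
    z = 0j
    for _ in range(P):
        orbit.append(z)
        z = z * z + c
    return orbit

orbit = reference_orbit(-1 + 0j, 2)   # period-2 nucleus: orbit is [0, -1]
z5 = orbit[5 % 2]                     # iterate 5 of the reference, with only P values stored
```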


« Last Edit: October 26, 2016, 07:38:08 PM by quaz0r »

Logged




claude
Fractal Bachius
Posts: 510


« Reply #61 on: October 26, 2016, 09:20:26 PM » 

> could this be feasible at all ?

Definitely. But it is probably only worth doing high-precision Newton's method for the primary reference, which can be shared between multiple frames when exploring or rendering videos...

> i guess you would also have to see if the benefit of doing only P reference iterations outweighed the cost of iterating at a higher precision.

True. Shame that the series approximation stuff isn't periodic... It remains to know how much extra precision is required. I guess a safe bound would be 4x the precision at the depth where the reference first becomes viable (findable with the box-period method), or 2x the precision at the 2-fold embedded Julia set. I'm more confident about the second of those guesses... It makes it easier if the size estimate algorithm (*) gives reasonable values for not-that-accurate input; then you could use the size estimate as a precision estimate. Another thing to investigate...

(*) https://code.mathr.co.uk/mandelbrot-numerics/blob/HEAD:/c/lib/m_d_size.c
https://code.mathr.co.uk/mandelbrot-numerics/blob/HEAD:/c/lib/m_r_size.c
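For orientation, the linked size estimate is short; here is a rough Python transliteration of the structure of m_d_size.c (from memory, so treat it as a sketch rather than the canonical code). The magnitude of the result estimates the minibrot's size, which could double as the precision estimate mentioned above:

```python
def minibrot_size(nucleus, period):
    """Estimate the size of the minibrot whose nucleus has the given period.
    Rough transliteration of the m_d_size.c structure, from memory."""
    l = 1 + 0j   # product of orbit derivatives along the cycle
    b = 1 + 0j   # accumulated correction term
    z = 0j
    for _ in range(1, period):
        z = z * z + nucleus
        l = 2 * z * l
        b = b + 1 / l
    return 1 / (b * l * l)

# the period-2 disc centred at c = -1 has diameter 1/2, and the estimate agrees
size = minibrot_size(-1 + 0j, 2)
```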


« Last Edit: October 26, 2016, 09:51:05 PM by claude, Reason: clarification »

Logged




quaz0r
Fractal Molossus
Posts: 652


« Reply #62 on: October 26, 2016, 10:07:41 PM » 

> Shame that the series approximation stuff isn't periodic...

I hadn't thought about how the series stuff would work.. I guess the coefficients themselves wouldn't need to start over; you could just keep picking up where you left off as you zoom in. Knighty's truncation error stuff I guess you would have to calculate fresh each time, which I guess also means you would need to store each iteration of the coefficients too..
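To make "pick up where you left off" concrete: with the usual cubic series approximation delta ≈ A·d + B·d² + C·d³ (K. I. Martin style), each coefficient update needs only the current reference value and the previous coefficients, so you can keep advancing them as you zoom. A toy sketch (my own, not anyone's renderer), checked against direct perturbation:

```python
def advance(Z, A, B, C):
    """One step of the cubic series-approximation recurrence for z^2 + c,
    obtained by substituting delta = A*d + B*d^2 + C*d^3 into
    delta' = 2*Z*delta + delta^2 + d and collecting powers of d."""
    return 2*Z*A + 1, 2*Z*B + A*A, 2*Z*C + 2*A*B

# compare the series delta with the true delta for a small pixel offset d
c, d, N = -0.5 + 0j, 1e-6 + 0j, 10
Z, (A, B, C) = 0j, (0j, 0j, 0j)
w = 0j                         # direct orbit of c + d
for _ in range(N):
    A, B, C = advance(Z, A, B, C)   # uses Z before it is updated
    Z = Z*Z + c
    w = w*w + (c + d)
delta_series = A*d + B*d*d + C*d*d*d
delta_true = w - Z
```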


« Last Edit: October 26, 2016, 10:16:48 PM by quaz0r »

Logged




claude
Fractal Bachius
Posts: 510


« Reply #63 on: October 26, 2016, 10:40:06 PM » 

> you could just keep picking up where you left off as you zoom in.

Yes, in my mandelbrot-perturbator I do just that; it works really well for interactive use.

> knighty's truncation error stuff i guess you would have to calculate fresh each time

Yes, which is why I haven't ported that to my main renderer yet; it's still using a (probably broken) "size of terms decreases sufficiently fast" heuristic..



Logged




quaz0r
Fractal Molossus
Posts: 652


« Reply #64 on: October 27, 2016, 12:02:02 AM » 

>> what is the new Z value to start the new reference ? or are we starting it at the beginning ?
> The new deltaZ value is the old referenceZ + deltaZ value (which is near 0 by the glitch detection test)

I guess this statement must imply what the new refZ value becomes, but my eyes are glazing over and my brain isn't making the connection.. is it zero? I just built your mandelbrot-perturbator; I guess it must be the continuation of mightymandel? I wondered why that wasn't updated in a while.



Logged




claude
Fractal Bachius
Posts: 510


« Reply #65 on: October 27, 2016, 12:31:05 PM » 

> is it zero ?

Yes! The new reference Z reaches zero at multiples of the period, so you don't need to restart iterations from scratch; just carry on from there.

> i just built your mandelbrot-perturbator, i guess it must be the continuation of mightymandel ? i wondered why that wasn't updated in a while

GPU-using mightymandel is sleeping until I get around to porting some of the ideas from CPU-based mandelbrot-perturbator. It may take some time...
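A toy sketch of the mechanism being described (names and the exact glitch test are my own; the threshold is the Pauldelbrot-style squared-magnitude form). On a detected glitch, the full value Z + dz becomes the new delta and iteration continues from index 0 of the same periodic reference orbit, where Z_0 = 0:

```python
def perturbed_step(orbit, k, dz, d, tol=1e-6):
    """One perturbation step of z^2 + c against a periodic reference
    orbit (orbit[0] == 0). k indexes the orbit, dz is the pixel's
    delta, d its offset from the reference c."""
    Z = orbit[k % len(orbit)]
    if k % len(orbit) != 0 and abs(Z + dz)**2 < tol * abs(Z)**2:
        # glitch: z = Z + dz has nearly cancelled against Z, so carry
        # the full value as the new delta and restart at orbit index 0,
        # where the reference value is exactly 0
        k, Z, dz = 0, orbit[0], Z + dz
    dz = 2*Z*dz + dz*dz + d        # standard perturbation iteration
    return k + 1, dz

# sanity check: perturbed iteration tracks direct iteration of c + d
orbit = [0j, -1 + 0j]              # reference orbit of the period-2 nucleus c = -1
d = 0.001
k, dz, w = 0, 0j, 0j
for _ in range(20):
    k, dz = perturbed_step(orbit, k, dz, d)
    w = w*w + (-1 + d)
```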



Logged




Adam Majewski


« Reply #66 on: October 27, 2016, 05:21:41 PM » 

> GPU-using mightymandel is sleeping until I get around to porting some of the ideas from CPU-based mandelbrot-perturbator. It may take some time...

What about mixed computations:
* GPU code for double precision
* CPU code for arbitrary precision



Logged




claude
Fractal Bachius
Posts: 510


« Reply #67 on: October 27, 2016, 06:19:34 PM » 

> What about mixed computations :

Yes, that's how mightymandel works. (But it has many inefficiencies, which means a lot of work needs to be done on the code, and I don't have enough time for coding...)



Logged




quaz0r
Fractal Molossus
Posts: 652


« Reply #68 on: October 27, 2016, 06:27:41 PM » 

> it may take some time

The first and last thing you ever need to know about programming..

> the new reference Z reaches zero at multiples of the period

Does a periodic point's limit cycle (is "limit cycle" the right jargon?) always begin and end with zero?



Logged




claude
Fractal Bachius
Posts: 510


« Reply #69 on: October 30, 2016, 09:21:55 PM » 

> does a periodic point's limit cycle always begin and end with zero ?

Yes: the nucleus of a hyperbolic component has 0 in its limit cycle, so the cycle is reached immediately. Other points in the hyperbolic component have a limit cycle that doesn't contain 0, and that cycle is only reached asymptotically, after an infinite number of iterations from 0 (though Newton's method can accelerate convergence).
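This is easy to check numerically. A toy illustration (my own, in plain double precision) using the well-known real period-3 nucleus c ≈ -1.7548776662466927:

```python
def zero_hits(c, steps, eps=1e-6):
    """Iterate z -> z^2 + c from the critical point 0 and record the
    iteration numbers where the orbit returns (numerically) to 0."""
    z, hits = 0j, []
    for n in range(1, steps + 1):
        z = z*z + c
        if abs(z) < eps:
            hits.append(n)
    return hits

# for a nucleus of period 3, the orbit hits 0 at every multiple of 3
hits = zero_hits(-1.7548776662466927, 30)
```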



Logged




quaz0r
Fractal Molossus
Posts: 652


« Reply #70 on: December 18, 2016, 08:28:55 AM » 

After being lazy for too long I got around to finishing an implementation of claude's glitch algorithm. Oh, it is wonderful and glorious.

I forget if this was discussed on here anywhere recently, but I was also wondering how much people have experimented with what value to use for glitch triggering. The original value as dictated by Pauldelbrot was 1e-3 (or 1e-6 for squared magnitude), but I get the feeling from his original description that he just kind of arbitrarily chose that value because it made glitch blobs bigger and more uniform for the purposes of visual inspection, or perhaps for the purposes of his visual-ish algorithm for calculating the centroids of blobs. In any case, I don't think it is any sort of proper value, and I wonder what something more proper would be. I guess knighty came up with one of his interval arithmetic things for glitch triggering, but that sure sounds very costly to do per point.

I've been experimenting a bit with using different values for Pauldelbrot's glitch triggering recently. Using more lax values starts to cut way down on the amount of glitch triggering and the number of reference points used, but gradually starts to introduce some artifacts, depending on location. It makes me wonder whether even the 1e-3 value (and hence really this glitch triggering mechanism itself) necessarily always gives perfect results.
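For reference, the test under discussion as I understand Pauldelbrot's criterion (a sketch; the 1e-6 is the squared-magnitude form of the 1e-3 threshold):

```python
def is_glitched(Z, dz, tol_sq=1e-6):
    """Flag a pixel as glitched when its value z = Z + dz nearly cancels
    against the reference value Z, i.e. |z|^2 < tol_sq * |Z|^2, meaning
    the perturbed iteration has lost most of its significant digits."""
    z = Z + dz
    return abs(z)**2 < tol_sq * abs(Z)**2

# shrinking tol_sq (a laxer test) flags fewer pixels, at the risk of artifacts
assert is_glitched(1 + 0j, -0.9999995 + 0j)       # z = 5e-7: flagged
assert not is_glitched(1 + 0j, -0.5 + 0j)         # z = 0.5: fine
```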



Logged




knighty
Fractal Iambus
Posts: 818


« Reply #71 on: December 18, 2016, 04:24:23 PM » 

I implemented my version of glitch detection at the time but haven't tested it thoroughly, just because it doesn't give much better results than Pauldelbrot's. It is not that expensive though. (Modify line 188 to switch between the two formulas.)



Logged




quaz0r
Fractal Molossus
Posts: 652


« Reply #72 on: December 20, 2016, 06:07:43 AM » 

OK knighty, I will have to implement it sometime just for kicks.

A few initial observations from implementing claude's algorithm:

This spawns a ton of reference points! At least several times as many as if you simply plod along picking a best new reference point, redoing all remaining points, and repeating until done. But since they continue in place they are nice and fast.

Doing the Newton step versus not doing it seems to give the same results, with the same number of reference points used etc., which is good if you care to maintain a non-DE code path. As claude said, though, you may as well do the Newton step if you have the derivative. Though I wonder if doing the Newton step could ever give a worse reference instead of a better one? I'm not really sure how this stuff is supposed to work, but I tried using several Newton steps leading up to when a glitch happens, and I get botched renders. Just out of curiosity I also tried using a single z/zp from a few iterations prior to the glitch iteration, and that also screws up the render. For some reason it seems to only work (or at least not screw things up) to use the glitch iteration. Does z/zp approach zero when a glitch occurs? Maybe in that case it is just not screwing it up, as opposed to actually contributing anything.

I'm also not certain my implementation of claude's overall algorithm is perfect yet; in fact it seems perhaps not, as it works great most of the time, but when I test it on large renders where I intentionally use a crap initial reference point, it often results in screwed-up renders. Hopefully this is just revealing an error on my part, and not a limitation of the algorithm..

Edit: the botched large render with the crap initial reference is fixed if I use 32 terms instead of 64, so probably just the limitations of SA again. Though I wonder if using a crap initial reference helps make the SA stuff crappier and more prone to failure... Also, I cleaned up my implementation of claude's thing and it should be good now.
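On the Newton step: if it is the standard nucleus-refinement step, it uses exactly the z/zp quantity mentioned, where zp = dz/dc along the orbit. A sketch of one such step (my own illustration of the textbook formula, not claude's code):

```python
def newton_step(c, period):
    """One Newton step toward the nucleus of the given period:
    c <- c - f^p(0) / (d/dc) f^p(0), both computed along the orbit."""
    z, zp = 0j, 0j
    for _ in range(period):
        zp = 2*z*zp + 1     # recurrence for the derivative d/dc
        z = z*z + c
    return c - z / zp

# starting near the period-2 nucleus at c = -1, a few steps converge to it
c = -1.1 + 0j
for _ in range(8):
    c = newton_step(c, 2)
```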


« Last Edit: December 21, 2016, 01:59:57 AM by quaz0r »

Logged




hapf
Fractal Lover
Posts: 218


« Reply #73 on: December 20, 2016, 07:31:34 PM » 

> i forget if this was discussed on here anywhere recently, but i was also wondering how much people have experimented with what value to use for glitch triggering. the original value as dictated by Pauldelbrot was 1e-3 (or 1e-6 for squared magnitude), but i get the feeling from his original description that he just kind of arbitrarily chose that value because it made glitch blobs bigger and more uniform for the purposes of visual inspection, or perhaps for the purposes of his visual-ish algorithm for calculating the centroids of blobs? in any case, i don't think it is any sort of proper value, and i wonder what something more proper would be?

If that value is passed, it does not mean the pixel will go corrupt. And if it is not passed, it does not mean the pixel is fine. What one can say is that the more times this or higher values are passed, the more likely the pixel eventually goes bad.



Logged




quaz0r
Fractal Molossus
Posts: 652


« Reply #74 on: December 20, 2016, 07:36:45 PM » 

That makes knighty's interval arithmetic thing the first truly proper test, assuming he devised it correctly.
Though his interval formulas still include the mysterious precision parameter, whose value varies depending on location and probably a multitude of other things. I have yet to see anyone propose a way to predict what this value should be. As proper as the interval formulas may be, this keeps them in a similar grey area to Pauldelbrot's approach: dependent on a magic number which cannot be predicted..


« Last Edit: December 20, 2016, 07:43:31 PM by quaz0r »

Logged




