Title: Understanding Perturbation
Post by: nitroxis on September 22, 2014, 08:56:18 PM

Hello.

I've been working on a simple Mandelbrot zoomer using OpenGL Compute Shaders (a relatively new kind of GPU program) and multi-precision fixed-point numbers. The program itself works pretty well and is faster than most CPU implementations. But as with all implementations of the standard Z=Z*Z+C technique, the performance depends heavily on the magnification (i.e. the precision of the fixed-point numbers) and the number of iterations used. I recently came across the perturbation technique implemented in SuperFractalThing and others, and I need some help understanding and implementing it. I tried to read the source code of SuperFractalThing, but it's rather hard for me to get an idea of what's going on.

So the basic idea of perturbation is that the image (except for the reference points) can be calculated using hardware floating-point numbers, which sounds great, considering that's what GPUs are best at. The first thing to do is to choose a reference point and iterate it using arbitrary-precision numbers. Then the rest of the image can be calculated with delta iteration based on that reference orbit. That's the part I don't really get.

First, the reference orbit is in arbitrary precision. Do I just convert it to floating-point numbers in order to use it in the delta iteration?

Second, depending on how far you've zoomed in, the delta (i.e. the offset of a pixel from the reference point) gets extremely small, so small that it can't be reasonably represented using floating-point numbers. How does that work? Can I somehow scale up the deltas in order to work with larger numbers?

Third, how do I find out when the iterated value's magnitude exceeds the bailout value? I only have the deltas, so I suppose I have to add them to the reference orbit and check whether |reference[n] + delta[n]| > bailout. But then again, the delta is supposed to be a floating-point number, whereas the reference orbit is arbitrary precision and not necessarily small, so a lot of precision would be lost when adding the two.

I'd be glad if someone familiar with the technique could explain a little more how exactly the "low precision" part is supposed to work.
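For reference, here is a quick sketch of the delta iteration being asked about. Z_n is the high-precision reference orbit with parameter C, and z_n = Z_n + \delta_n is a nearby pixel's orbit with parameter c = C + \delta_0 (starting both orbits at z_0 = c and Z_0 = C, so \delta_0 = c - C):

\[
\delta_{n+1} = z_{n+1} - Z_{n+1}
             = (Z_n + \delta_n)^2 + (C + \delta_0) - (Z_n^2 + C)
             = 2 Z_n \delta_n + \delta_n^2 + \delta_0 .
\]

Only the reference orbit Z_n has to be iterated at arbitrary precision; the \delta_n recurrence never subtracts two nearly equal large numbers, which is why it can run in ordinary hardware floating point. The replies below address the practical details.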
Title: Re: Understanding Perturbation
Post by: youhn on September 22, 2014, 11:01:39 PM

Use normal (double) floating point down to about 1×10^-308, and extended (long double) precision down to about 3.65×10^-4951. The bailout condition need not be so high: Kalles Fraktaler uses "only" 100 for the smooth method, 2 for normal. I cannot help you further, since I do not understand the computational steps of the perturbation thing in detail.

Title: Re: Understanding Perturbation
Post by: nitroxis on September 22, 2014, 11:10:49 PM

I'm using the "standard" bailout of |z| > 2. I was talking about adding the reference point (usually a number between -2 and 2) to the delta (a number that is extremely small at high magnification), and the loss of precision in that step.
Title: Re: Understanding Perturbation
Post by: Kalles Fraktaler on September 22, 2014, 11:30:54 PM

> First, the reference orbit is in arbitrary precision. Do I just convert it to floating-point numbers in order to use it in the delta iteration?
Yes!

> Second, depending on how far you've zoomed in, the delta gets extremely small, so small that it can't be reasonably represented using floating-point numbers. How does that work?
You can represent values as small as 1e-308 with the double datatype.

> Can I somehow scale up the deltas in order to work with larger numbers?
You can multiply the high-precision values by up to 1e308 before converting them to double. You then need to divide the results by the same scaling value. That way you can, in theory, use the ordinary double datatype for zooms up to 1e616. In practice, Kalles Fraktaler uses scaling from 1e301 to 1e600, and then the long double datatype from 1e600 to 1e4900. Beyond that I use a custom datatype with a double as mantissa and an integer as exponent; see floatexp in the source of Kalles Fraktaler. (But you could of course use scaled long double to 1e9800.)

> Third, how do I find out when the iterated value's magnitude exceeds the bailout value? I only have the deltas, so I suppose I have to add them to the reference orbit and check whether |reference[n] + delta[n]| > bailout, but the delta is a floating-point number whereas the reference orbit is arbitrary precision and not necessarily small, so a lot of precision would be lost when adding the two.
You have already converted the arbitrary-precision values to doubles, so the check of reference[n] + delta[n] against the bailout value is done entirely in doubles. The delta value also grows from extremely small to a magnitude comparable to the bailout value.

And then you will find out that the low precision prevents some areas of a view from being rendered correctly, and that glitches will occur in your images. Some of these glitches are one-colored blobs, some have incorrect structures. Then you are ready for the next step: use Pauldelbrot's glitch detection method to find these pixels, which need additional high-precision references to be rendered correctly. But you will have a lot of fun! :)

And don't forget series approximation, which allows you to skip a lot of iterations and is often even more time-saving than using hardware datatypes instead of high precision.
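A minimal C++ sketch of the per-pixel loop described above, assuming the reference orbit has already been iterated at high precision and rounded to double; the function and variable names are made up for illustration, and the scaled-delta and floatexp variants are left out:

Code:
#include <complex>
#include <vector>

// X[n] is the reference orbit Z_n, already rounded to double (Z_0 = C).
// d0 is the (tiny) offset of this pixel's c from the reference C.
// Returns the iteration count, or maxIter if the pixel did not escape.
int perturbationIterations(const std::vector<std::complex<double>>& X,
                           std::complex<double> d0, int maxIter)
{
    const double bailout = 4.0;            // |z|^2 > 4  <=>  |z| > 2
    std::complex<double> d = d0;           // delta_0 = c - C (taking z_0 = c)
    for (int n = 0; n + 1 < maxIter && n + 1 < (int)X.size(); ++n)
    {
        // delta_{n+1} = 2*Z_n*delta_n + delta_n^2 + delta_0
        d = 2.0 * X[n] * d + d * d + d0;

        // The pixel's full orbit value is reference + delta, both plain
        // doubles, so the bailout test needs no high precision at all.
        const std::complex<double> z = X[n + 1] + d;
        if (std::norm(z) > bailout)
            return n + 1;

        // Pauldelbrot's glitch test would go here: if |z| drops far below
        // |X[n+1]|, mark the pixel for re-rendering with another reference.
    }
    return maxIter;
}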
Title: Re: Understanding Perturbation
Post by: nitroxis on September 23, 2014, 12:17:52 AM

Thank you, I've got a better idea of how it works now. Yes, I already know about glitches and the like; it was more the fundamental ideas of the algorithm that I didn't fully understand. I have a somewhat working implementation now (which just uses the center of the screen as the reference point).

Title: Re: Understanding Perturbation
Post by: cKleinhuis on September 23, 2014, 09:18:10 AM

Kalles' pure CPU implementation is already awesomely fast, so I am looking forward to your attempt at a GPU based implementation; it's kinda interesting how it develops. Some people who implemented it said they use more than one gigabyte to store such a high-precision reference variable, so it comes down to pure memory to get deeper. I have 16 gigs available here, lol, and I would love using 8 or more gigs to get a view of regions of the Mandelbrot where no one has gone before ;)
Title: Re: Understanding Perturbation
Post by: nitroxis on September 23, 2014, 01:11:47 PM

Well, it would require the RAM of the graphics card, not the system RAM. But gigabytes sound like a bit much, considering you only need 16 bytes for one complex number (2 doubles). It would mainly depend on the number of iterations used: one GB of VRAM is already enough to hold 67,108,864 complex numbers. That'd be enough for millions of iterations, depending on whether you use series approximation, the derivatives and so on. The memory required for the output only depends on the image size used to render the fractal. So I guess it would scale reasonably well, but I'll have to see about that.
One way to reduce the memory footprint of the application would be to calculate only, say, 1000 iterations of the reference point, then calculate the pixels around it with delta iteration, calculate the next 1000 iterations of the reference point, continue with the delta iterations, and so on. This way one could also get good CPU and GPU parallelism, as the GPU can compute the pixels around the reference point while the point itself is still being iterated by the CPU.
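A rough sketch of that chunked scheme, written around two hypothetical hooks; nextReferenceValue and uploadChunkAndIterate are made-up names standing in for the high-precision CPU iteration and the GPU dispatch:

Code:
#include <complex>
#include <vector>

// Hypothetical hooks, named only for illustration:
//  - nextReferenceValue(): one high-precision reference step (MPFR/GMP or
//    fixed point behind the scenes), returned rounded to double.
//  - uploadChunkAndIterate(): uploads one slice of the reference orbit and
//    dispatches the compute shader that advances every pixel's delta
//    iteration through that slice.
std::complex<double> nextReferenceValue();
void uploadChunkAndIterate(const std::vector<std::complex<double>>& chunk);

void renderInChunks(int maxIter, int chunkSize = 1000)
{
    // Only one chunk (1000 * 16 bytes = ~16 KB) needs to sit in VRAM at a
    // time, instead of the whole orbit (16 bytes per iteration).
    std::vector<std::complex<double>> chunk;
    chunk.reserve(chunkSize);

    for (int n = 0; n < maxIter; ++n)
    {
        chunk.push_back(nextReferenceValue());   // CPU: high-precision step
        if ((int)chunk.size() == chunkSize || n + 1 == maxIter)
        {
            // GPU catches up on this slice; with an asynchronous dispatch
            // the CPU can already start iterating the next chunk.
            uploadChunkAndIterate(chunk);
            chunk.clear();
        }
    }
}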
Title: Re: Understanding Perturbation
Post by: cKleinhuis on September 23, 2014, 02:01:33 PM

Right, but we are talking about depths of a billion or even more iterations!

Title: Re: Understanding Perturbation
Post by: 3dickulus on September 23, 2014, 04:07:30 PM

Re: storage. This image, "Sircle" by dinkydau (http://www.fractalforums.com/index.php?topic=18254.msg76549#msg76549), took over 300 MB to store the reference data at almost 13,000,000 iterations. All of the stored data is in double (64-bit) format; by this one could infer that 1 GB will hold about 40,000,000+ iterations worth of data. Well, at least in the way SFTC stores it: mX[], mXi[], mDistance_to_edge_sqr[]. With series approximation I think this could be cut down considerably by not storing the "skipped" iterations and just maintaining the current working range required for the image.

edit: "Sircle" location from dinkydau, image rendered by SFTC.
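As a quick sanity check on those numbers, using the three array names quoted from SFT above and 8 bytes per double:

Code:
#include <cstdio>

int main()
{
    // SFT keeps three doubles per reference iteration:
    // mX[], mXi[], mDistance_to_edge_sqr[]  ->  3 * 8 = 24 bytes.
    const double bytesPerIteration = 3.0 * sizeof(double);

    // "Sircle": almost 13,000,000 iterations of reference data.
    std::printf("13M iterations: ~%.0f MB\n",
                13e6 * bytesPerIteration / 1e6);        // ~312 MB

    // Iterations' worth of reference data that fits in 1 GB.
    std::printf("1 GB holds: ~%.1f million iterations\n",
                1e9 / bytesPerIteration / 1e6);         // ~41.7 million
    return 0;
}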
Title: Re: Understanding Perturbation
Post by: Alef on September 25, 2014, 05:30:51 PM

What is the deepest image so far? I haven't followed this stuff for a while. That example is kind of nice, though it isn't 800x600 at least.
Title: Re: Understanding Perturbation
Post by: Kalles Fraktaler on September 25, 2014, 11:44:40 PM

Here is the deepest image(s):
http://www.youtube.com/watch?v=WgzW_jr0xSw

Title: Re: Understanding Perturbation
Post by: cKleinhuis on September 26, 2014, 06:35:46 AM

> Here is the deepest image(s): http://www.youtube.com/watch?v=WgzW_jr0xSw
lol, it's just a video ;)
I know you have been deeper, you just did not save and publish :D

Title: Re: Understanding Perturbation
Post by: Kalles Fraktaler on September 26, 2014, 12:15:17 PM

> lol, it's just a video ;)
No, I have never been deeper!
> I know you have been deeper, you just did not save and publish :D
The "Find Minibrot" function was fortunately successful on this location, and was working for about a month! If you use KF with the standard resolution of 640x360, a zoom size of 32 is about the largest that still gives some control of where to zoom. Zooming manually to 1e10000 would require 6644 clicks (10000 / log10(32) ≈ 6644)! I'll probably never go deeper than this...

Title: Re: Understanding Perturbation
Post by: nitroxis on September 26, 2014, 08:21:28 PM

Alright, I got my program working with plain delta iteration. It's able to recognize glitches, but it's not correcting them yet. It currently uses the center of the screen as the reference point. This is not optimal, but I don't really know what to do here. How do I find a relatively good reference point to start with (given the position in the complex plane and a zoom factor)? After the first pass is rendered, do I just use the center of the glitch blobs as the new reference point, or is there a better way to do this? I don't really understand how SFT is doing it.
Here's what I got so far (red means glitch): (http://s.nxs.re/hrc.png)

Title: Re: Understanding Perturbation
Post by: Kalles Fraktaler on September 27, 2014, 10:52:25 AM

Is K.I. Martin's SFT solving glitches at all?
Title: Re: Understanding Perturbation
Post by: nitroxis on September 27, 2014, 02:19:57 PM

Well, don't ask me :P
I for sure want to solve them.

Title: Re: Understanding Perturbation
Post by: Kalles Fraktaler on September 27, 2014, 11:38:17 PM

Well, you can read the posts about glitch corrections, or even examine the source of Kalles Fraktaler. But then you'll just do what we have done.
Maybe you can find a better solution by solving it yourself from scratch, with a fresh mind :)

Title: Re: Understanding Perturbation
Post by: 3dickulus on September 29, 2014, 11:03:48 PM

> How do I find a relatively good reference point to start with (given the position in the complex plane and a zoom factor)? After the first pass is rendered, do I just use the center of the glitch blobs as the new reference point, or is there a better way to do this? I don't really understand how SFT is doing it.
> Here's what I got so far (red means glitch).

The center of a blob is not the best point; you need the locus where the highest iterations are. In the simplest terms, one might proceed as follows:
Do a very loose scan, say two or three horizontal and vertical lines through the data rendered from the first pass, looking for large blobs. If found, estimate the locus, use it as the reference point, render the pixels marked as glitch, and repeat.
If no more big blobs: do a denser scan looking for smaller blobs. If found, estimate the locus, use it as the reference point, render the pixels marked as glitch, and repeat.
If no more small blobs: do a scan of the remaining pixels and use the highest-iteration location as a reference, then render the pixels marked as glitch.
By this time you should be done :)

> Is K.I. Martin's SFT solving glitches at all?
When I ported SFT (Java) to C++ I found that the reference point selection wasn't working as well as I thought it should, and I couldn't quite figure it out, so I scrapped the original Java version in favor of my own "hunt and peck" method that finally seems to work as of v0.75.

@nitroxis: if you want to look at the source (http://www.digilanti.org/cudabrot/) :dink:
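A rough C++ sketch of one way to automate that hunt, assuming the first pass left per-pixel iteration counts and glitch flags in host memory. This is a simplified flood-fill variant of the idea (find the biggest glitch blob and re-reference from its highest-iteration pixel) rather than 3dickulus' exact scan-line procedure, and all names are made up for illustration:

Code:
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

struct Pixel { int iterations; bool glitched; };

// Pick a new reference for the largest glitch blob: flood-fill glitched
// pixels into blobs, then return the highest-iteration pixel of the
// biggest blob (the "locus", rather than the geometric center).
// Returns false when no glitched pixels remain.
bool pickNewReference(const std::vector<Pixel>& img, int width, int height,
                      int& refX, int& refY)
{
    std::vector<std::uint8_t> visited(img.size(), 0);
    std::size_t bestBlobSize = 0;

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            const std::size_t idx = (std::size_t)y * width + x;
            if (!img[idx].glitched || visited[idx]) continue;

            // Flood-fill one glitch blob, tracking its size and its
            // highest-iteration ("hottest") pixel.
            std::size_t blobSize = 0;
            int hotX = x, hotY = y, hotIter = -1;
            std::queue<std::pair<int, int>> q;
            q.push({x, y});
            visited[idx] = 1;
            while (!q.empty())
            {
                const auto [px, py] = q.front();
                q.pop();
                ++blobSize;
                const Pixel& p = img[(std::size_t)py * width + px];
                if (p.iterations > hotIter) { hotIter = p.iterations; hotX = px; hotY = py; }

                const int dx[4] = { 1, -1, 0, 0 };
                const int dy[4] = { 0, 0, 1, -1 };
                for (int k = 0; k < 4; ++k)
                {
                    const int nx = px + dx[k], ny = py + dy[k];
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    const std::size_t nidx = (std::size_t)ny * width + nx;
                    if (img[nidx].glitched && !visited[nidx]) { visited[nidx] = 1; q.push({nx, ny}); }
                }
            }

            // Keep the hottest pixel of the largest blob as the next reference.
            if (blobSize > bestBlobSize) { bestBlobSize = blobSize; refX = hotX; refY = hotY; }
        }
    return bestBlobSize > 0;   // false once no glitched pixels remain
}

One would then re-run the delta iteration for the glitched pixels with the new reference and repeat until pickNewReference() returns false.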
Title: Re: Understanding Perturbation
Post by: kjknohw on September 29, 2014, 11:45:20 PM

The center of a blob works pretty well, because it will be close. After a few reference points there is usually no glitch left.

Title: Re: Understanding Perturbation
Post by: youhn on September 30, 2014, 02:57:37 PM

Not all glitches have roundish shapes, so it does not work well for all glitches. What test locations do you use?
Title: Re: Understanding Perturbation
Post by: 3dickulus on October 02, 2014, 01:18:52 AM

@nitroxis: you got me thinkin' 'bout perturb on GPU.
I managed to get par4all (http://www.par4all.org/download.html) working on my Linux dev box, and here is the result: (http://www.fractalforums.com/index.php?topic=20016.msg77360#msg77360)

p4a did a great job of converting a nifty piece of code, knighty's "Pertubation and 3rd degree Mandelbrot" Evaldraw script (http://www.fractalforums.com/new-theories-and-research/pertubation-and-3rd-degree-mandelbrot/), into working CUDA kernels :) The really cool bit is that it will parallelize loops and such, and output code for CUDA, OpenCL, OpenMP, SCMP and a few others. If you can express an idea in linear C99 code, this thing can translate it :) Just a heads up, you can never have too many tools :dink:
Title: Re: Understanding Perturbation
Post by: nitroxis on October 03, 2014, 03:41:24 AM

Just a small update... My program is now able to fix all glitches that occur. It preserves reference points between renders, until better points have been found while fixing glitches or the points go out of bounds (i.e. are no longer in the view rectangle). I am using yet another compute shader to estimate the center of a glitch blob, then recompute all glitched pixels using that point, while also adding that point to a list of possible reference points for the next image (useful when you just zoom in and don't pan around too much). The next thing to work on is series approximation, which I expect will improve performance even more.

Here's a picture of the WIP; the circles show the reference points used to render the image. (http://s.nxs.re/Heg.png)

Title: Re: Understanding Perturbation
Post by: 3dickulus on October 03, 2014, 09:19:06 AM

Nicely done :beer:
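Since series approximation is the stated next step, here is the basic idea as a sketch, in the same notation as the delta iteration near the top of the thread; this is just the standard truncated cubic series from K.I. Martin's SFT note, not a description of any particular program's implementation. Write \delta_n as a polynomial in the pixel offset \delta_0:

\[
\delta_n \approx A_n \delta_0 + B_n \delta_0^2 + C_n \delta_0^3 .
\]

Substituting into \(\delta_{n+1} = 2 Z_n \delta_n + \delta_n^2 + \delta_0\) and collecting powers of \delta_0 gives coefficient recurrences that depend only on the reference orbit:

\[
A_{n+1} = 2 Z_n A_n + 1, \qquad
B_{n+1} = 2 Z_n B_n + A_n^2, \qquad
C_{n+1} = 2 Z_n C_n + 2 A_n B_n ,
\]

with A_0 = 1 and B_0 = C_0 = 0. Because A_n, B_n, C_n are shared by every pixel, they are computed once alongside the reference orbit; each pixel can then start its per-pixel delta iteration at some iteration n, with \delta_n evaluated directly from its own \delta_0, as long as the dropped O(\delta_0^4) terms are still negligible at that n.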