Welcome to Fractal Forums

Fractal Software => Kalles Fraktaler => Topic started by: recursiveidentity on August 04, 2016, 09:26:17 PM




Title: Bug in Kalles on fast machine?
Post by: recursiveidentity on August 04, 2016, 09:26:17 PM
I'm wondering if I found a bug. I'm using 2.11 (or the newest version, whatever that is). I got curious and tried running it on a virtual machine (an Amazon EC2 instance with 40 CPU threads!). As I expected, the first frame of my zoom-out rendered in about an hour (the same frame would take over 30 hours on my home machine). However, it then appeared to get stuck in some kind of loop: the image seemed done, but the progress kept jumping back to 0% over and over and never moved on to the next frame. This happened with three different fractals I tried, at three different resolutions. Any ideas?

Also, just curious: I'm a coder. What environment can I use to build and compile it on my own? Will Visual Studio 2010 work? I'd love to poke around the source code sometime.

And I guess, since I'm at it (not complaining, though), some other bugs I've noticed:

In the movie maker, if the old movie file is still in the folder, the preview won't work; you have to delete the old file first.
Also in the movie maker, the preview doesn't always pick up the correct values if you change the starting frame to one after a key frame (rotation, etc.).

Thanks!


Title: Re: Bug in Kalles on fast machine?
Post by: Kalles Fraktaler on August 04, 2016, 10:46:20 PM
Hi

Really cool that it got 30 times faster on that 40-core machine.
I'm sorry, I have no idea why it didn't proceed. I've only tried it on 4 cores; however, Yann has 8 cores and Dinkydau had 32, and it worked for them. And I have never tried it on a virtual machine either...
(I assume you have successfully made movies before?)
Yes, the code will compile with Visual Studio 2010 or 2013; at least those are the two compilers I've tried.
The operators of FixedFloat will unfortunately not compile with GCC compilers.

Thanks a lot


Title: Re: Bug in Kalles on fast machine?
Post by: valera_rozuvan on August 05, 2016, 12:31:19 AM
The operators of FixedFloat will unfortunately not compile with GCC compilers.

What do you mean by this? GCC has support for fixed-point types (https://gcc.gnu.org/onlinedocs/gcc-6.1.0/gcc/Fixed-Point.html). However, the implementation is not yet finalized and is still a work in progress:

Quote
As an extension, GNU C supports fixed-point types as defined in the N1169 draft of ISO/IEC DTR 18037. Support for fixed-point types in GCC will evolve as the draft technical report changes. Calling conventions for any target might also change. Not all targets support fixed-point types.


Title: Re: Bug in Kalles on fast machine?
Post by: claude on August 05, 2016, 01:01:15 AM
I think FixedFloat is a C++ class holding a pair of a double and a long, so you get a deeper exponent range and avoid overflow/underflow when zooming past 1e-308 (or thereabouts).

Some time ago I tried cross-compiling KF2 with MinGW on Linux, but I failed. I recall some errors with FixedFloat and other classes, possibly due to g++ being stricter about the C++ standard, missing const variants of methods, etc.
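
As an illustration of the double-plus-separate-exponent idea described above, here is a minimal sketch (hypothetical names and layout, not KF's actual floatexp code): the value is stored as a normalized double mantissa together with a 64-bit power-of-two exponent, so products stay representable far past double's roughly 1e±308 range.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical sketch: value = mantissa * 2^exp, with the mantissa kept
// normalized in [0.5, 1) (or 0), so the int64_t exponent extends the
// range far beyond what a plain double can represent.
struct ExtFloat {
    double mantissa;   // normalized significand in [0.5, 1), or 0
    int64_t exp;       // extra power-of-two exponent

    static ExtFloat make(double d) {
        ExtFloat r{0.0, 0};
        if (d != 0.0) {
            int e;
            r.mantissa = std::frexp(d, &e);  // d = mantissa * 2^e
            r.exp = e;
        }
        return r;
    }

    // Multiply mantissas, add exponents, renormalize.
    ExtFloat operator*(const ExtFloat& o) const {
        ExtFloat r{0.0, 0};
        double m = mantissa * o.mantissa;  // product is in [0.25, 1)
        if (m != 0.0) {
            int e;
            r.mantissa = std::frexp(m, &e);
            r.exp = exp + o.exp + e;
        }
        return r;
    }

    // Convert back (only valid while exp fits a plain double's range).
    double to_double() const { return std::ldexp(mantissa, (int)exp); }
};
```

For example, squaring 1e-200 underflows a plain double to zero, while this representation keeps the value (as a mantissa plus an exponent near log2(1e-400) ≈ -1329).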


Title: Re: Bug in Kalles on fast machine?
Post by: valera_rozuvan on August 05, 2016, 01:12:23 AM
I think FixedFloat is a C++ class holding a pair of a double and a long, so you get a deeper exponent range and avoid overflow/underflow when zooming past 1e-308 (or thereabouts).

What about using a library such as GMP (https://gmplib.org/) for arbitrary-precision arithmetic? That would give cross-platform support...


Title: Re: Bug in Kalles on fast machine?
Post by: claude on August 05, 2016, 02:37:20 AM
What about using a library such as GMP (https://gmplib.org/) for arbitrary-precision arithmetic? That would give cross-platform support...

Yes, I see I was wrong: CFixedFloat is the arbitrary-precision floating-point implementation in Kalles Fraktaler 2. I was thinking of the floatexp class.

I use GMP and MPFR (and recommend them, too) for high-precision, high-range values (such as reference orbits), but for low-precision, high-range values (such as series-approximation coefficients and deep orbit deltas) a hardware double plus a separate exponent is likely to be faster and take less storage than GMP or MPFR. I must admit I haven't actually run benchmarks yet, though!
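
To make the reference-orbit/delta split concrete, here is a rough sketch of the perturbation scheme being described (simplified, hypothetical code: plain double stands in for the GMP/MPFR-precision reference computation, and in practice the delta would use the double-plus-exponent type for deep zooms). The reference orbit Z_n is computed once at high precision; each pixel then iterates only its small delta.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using C = std::complex<double>;

// Reference orbit Z_0..Z_n for c_ref. In KF this would be computed at
// arbitrary precision; double is just a stand-in here.
std::vector<C> reference_orbit(C c_ref, int n) {
    std::vector<C> Z(n + 1);
    C z(0.0, 0.0);
    for (int i = 0; i <= n; ++i) {
        Z[i] = z;
        z = z * z + c_ref;
    }
    return Z;
}

// Perturbation recurrence: writing z_n = Z_n + delta_n and c = c_ref + d0,
// expanding z_{n+1} = z_n^2 + c gives
//     delta_{n+1} = 2*Z_n*delta_n + delta_n^2 + d0,   delta_0 = 0.
// The delta stays tiny, so low precision suffices -- but at deep zooms its
// *exponent* range exceeds double's, hence the double+exponent type.
C perturbed_z(const std::vector<C>& Z, C d0, int n) {
    C d(0.0, 0.0);
    for (int i = 0; i < n; ++i)
        d = 2.0 * Z[i] * d + d * d + d0;
    return Z[n] + d;
}
```

Since the recurrence is algebraically exact, iterating a nearby point directly and via the delta gives the same result up to rounding.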


Title: Re: Bug in Kalles on fast machine?
Post by: Kalles Fraktaler on August 05, 2016, 06:19:58 PM
Yes, I see I was wrong: CFixedFloat is the arbitrary-precision floating-point implementation in Kalles Fraktaler 2. I was thinking of the floatexp class.

I use GMP and MPFR (and recommend them, too) for high-precision, high-range values (such as reference orbits), but for low-precision, high-range values (such as series-approximation coefficients and deep orbit deltas) a hardware double plus a separate exponent is likely to be faster and take less storage than GMP or MPFR. I must admit I haven't actually run benchmarks yet, though!

I've commented on this before without getting any feedback, so I hope I get some now :)
When you multiply, for example, two 100-digit values, you need to put the result in a 200-digit mantissa to guarantee a correct result, because if every digit is 9, a carry from the least significant digit can ripple through all the digits and affect the result.
However, that small contribution is meaningless when rendering the Mandelbrot set, so you don't need more than 100 digits. This is even how Fractal eXtreme does it, so even FX is approximate! That is why it is so fast compared to using standard arbitrary-precision libraries.
Further, when squaring you actually don't need to use more than 50 digits.
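
As a toy sketch of this truncation idea (illustrative only, not FX's or KF's actual code): store a fraction as N base-10^9 limbs and, in the multiply, skip every partial product that cannot reach the N limbs being kept. The dropped tail can only disturb the last kept limb by a few units, which is irrelevant for a Mandelbrot iteration, and it roughly halves the number of limb products.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// A fraction in [0,1) stored as N base-1e9 limbs, most significant
// first: x = sum_i limb[i] * B^-(i+1), with B = 1e9.
using Limbs = std::vector<uint32_t>;
constexpr uint64_t B = 1000000000ull;

// Truncated multiply: the full product of two N-limb fractions has 2N
// limbs, but we compute only partial products a[i]*b[j] with i+j < N,
// i.e. those that can reach the N limbs we keep (plus one guard column).
// Assumes a.size() == b.size().
Limbs mul_truncated(const Limbs& a, const Limbs& b) {
    size_t n = a.size();
    // acc[k] accumulates the column with weight B^-(k+1), k = 0..n.
    std::vector<uint64_t> acc(n + 1, 0);
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = 0; i + j < n; ++j) {   // skip low-order terms
            uint64_t p = (uint64_t)a[i] * b[j]; // weight B^-(i+j+2)
            acc[i + j] += p / B;                // high limb
            acc[i + j + 1] += p % B;            // low limb
        }
    }
    // Propagate carries from least to most significant column.
    for (size_t k = n; k > 0; --k) {
        acc[k - 1] += acc[k] / B;
        acc[k] %= B;
    }
    Limbs r(n);
    for (size_t k = 0; k < n; ++k) r[k] = (uint32_t)(acc[k] % B);
    return r;
}

// Helper for checking: approximate the limb value as a double.
double val(const Limbs& x) {
    double v = 0.0, w = 1.0;
    for (uint32_t limb : x) { w *= 1e-9; v += limb * w; }
    return v;
}
```

The result agrees with the exact product in all leading digits; only the last kept limb can be off by a few units.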

Does a general library such as GMP not make accurate, correct calculations? I doubt that...

I have tried different options, including the same type of method described by the author of FX. It requires more memory per digit, and I was able to make it faster than the current implementation up until some 4000 digits; then it got significantly slower! I recall Botond mentioned this as an explanation of why Mandel Machine doesn't go deeper: some optimization in the compilers is lost.

I've also tried SIMD and was able to get more than 3 times faster. However, that is only applicable to double, up to about e600. Mandel Machine analyses the values of the "delta" and uses double, or even float, when the values get appropriately high, but that is difficult to determine in a chaotic system, and MM still has some problems with glitches.

So, my conclusion is that I am not able to do anything at the moment :)


Title: Re: Bug in Kalles on fast machine?
Post by: recursiveidentity on August 05, 2016, 07:48:45 PM
Cool, thanks for the reply! Yeah, I was sad that it didn't work, because having access to a 40-core machine is pretty cool! Of course it's not free; it works out to about $5 an hour...

Otherwise interesting discussion, but I don't know enough to contribute further! :)


Title: Re: Bug in Kalles on fast machine?
Post by: TheRedshiftRider on August 06, 2016, 03:13:23 PM
I've commented on this before without getting any feedback, so I hope I get some now :)
When you multiply, for example, two 100-digit values, you need to put the result in a 200-digit mantissa to guarantee a correct result, because if every digit is 9, a carry from the least significant digit can ripple through all the digits and affect the result.
However, that small contribution is meaningless when rendering the Mandelbrot set, so you don't need more than 100 digits. This is even how Fractal eXtreme does it, so even FX is approximate! That is why it is so fast compared to using standard arbitrary-precision libraries.
Further, when squaring you actually don't need to use more than 50 digits.

Does a general library such as GMP not make accurate, correct calculations? I doubt that...

I have tried different options, including the same type of method described by the author of FX. It requires more memory per digit, and I was able to make it faster than the current implementation up until some 4000 digits; then it got significantly slower! I recall Botond mentioned this as an explanation of why Mandel Machine doesn't go deeper: some optimization in the compilers is lost.

I've also tried SIMD and was able to get more than 3 times faster. However, that is only applicable to double, up to about e600. Mandel Machine analyses the values of the "delta" and uses double, or even float, when the values get appropriately high, but that is difficult to determine in a chaotic system, and MM still has some problems with glitches.

So, my conclusion is that I am not able to do anything at the moment :)
I have been using KF for a while now, mostly on a 32-bit machine, and I think there is nothing wrong with the program in terms of speed. For me, on 32-bit it takes two days to make a movie and about twenty minutes to an hour on average to render a 6400×3600 image; on 64-bit, about 20 minutes to an hour to make a 16k-by-9k image. The zooming speed is nothing I can complain about; I would rather take the time to make a good zoom. And even the formulas that have recently been added are many times faster than I expected.

I would not want a faster but unstable program; it would make things frustrating instead of relaxing. I would suggest keeping the program as it is.