
 Pages: [1] 2 3 ... 17   Go Down
 Author Topic: SuperFractalThing: Arbitrary precision Mandelbrot set rendering in Java.  (Read 48260 times)
mrflay
Alien

Posts: 36

 « on: April 07, 2013, 05:02:21 PM »

I've converted my arbitrary precision Mandelbrot set hobby project to Java:

http://www.superfractalthing.co.nf/

It uses perturbation theory to dramatically reduce the calculation times. Here is a mathematical description of how it works:

http://www.superfractalthing.co.nf/sft_maths.pdf
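The core recurrence, for anyone who doesn't want to read the PDF: writing each pixel's orbit as $z_n = X_n + \Delta_n$, where $X_n$ is a single high-precision reference orbit, gives $\Delta_{n+1} = 2X_n\Delta_n + \Delta_n^2 + d$, with $d$ the pixel's offset in $c$. Since the $\Delta_n$ stay tiny, that recurrence can run in hardware doubles. A minimal sketch (illustrative only and much simplified; here the reference orbit is also computed in doubles, whereas SFT computes it with arbitrary precision):

```java
// Illustrative perturbation sketch (not SFT's actual code).
// Reference orbit X_n is iterated once; each pixel iterates only its
// small offset D_n via D_{n+1} = 2*X_n*D_n + D_n^2 + d, all in doubles.
public class PerturbationSketch {

    // Ordinary escape-time iteration z <- z^2 + c, bailout |z| > 2.
    static int directEscape(double cr, double ci, int maxIter) {
        double zr = 0, zi = 0;
        for (int n = 0; n < maxIter; n++) {
            if (zr * zr + zi * zi > 4.0) return n;
            double t = zr * zr - zi * zi + cr;
            zi = 2 * zr * zi + ci;
            zr = t;
        }
        return maxIter;
    }

    // Perturbed iteration: reference orbit at c, pixel at c + d.
    static int perturbedEscape(double cr, double ci,
                               double dr0, double di0, int maxIter) {
        // Reference orbit (in doubles for brevity; the real method stores
        // a rounded copy of an arbitrary-precision orbit).
        double[] xr = new double[maxIter + 1];
        double[] xi = new double[maxIter + 1];
        double zr = 0, zi = 0;
        for (int n = 0; n <= maxIter; n++) {
            xr[n] = zr; xi[n] = zi;
            double t = zr * zr - zi * zi + cr;
            zi = 2 * zr * zi + ci;
            zr = t;
        }
        double dr = 0, di = 0;               // D_0 = 0
        for (int n = 0; n < maxIter; n++) {
            double fr = xr[n] + dr, fi = xi[n] + di;
            if (fr * fr + fi * fi > 4.0) return n;
            // D_{n+1} = 2*X_n*D_n + D_n^2 + d  (complex arithmetic)
            double t = 2 * (xr[n] * dr - xi[n] * di)
                     + (dr * dr - di * di) + dr0;
            di = 2 * (xr[n] * di + xi[n] * dr) + 2 * dr * di + di0;
            dr = t;
        }
        return maxIter;
    }

    public static void main(String[] args) {
        // The perturbed count matches a direct iteration of c + d.
        System.out.println(perturbedEscape(1.0, 0.0, 1e-9, 0.0, 100));
        System.out.println(directEscape(1.0 + 1e-9, 0.0, 100));
    }
}
```

At shallow zooms this buys nothing, because doubles can represent $c$ directly; the payoff comes when $c$ needs hundreds of digits but the pixel-to-reference offsets still fit in a double.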

I recommend the 9x super sampling. It takes a bit longer, but it makes complicated images look a lot better.

Unfortunately the Java JNLP UI seems a bit buggy on Apple OS X: you can't edit text in dialog boxes or enter a save file name. Hopefully Oracle will fix this sometime. It should work fine on a PC though.

Edit: Latest version of Java seems to have fixed the Apple OS X problems

Edit2: SuperFractalThing is now open source. The source, and a runnable jar, can be obtained from here:
http://sourceforge.net/projects/suprfractalthng/

Edit3: For an explanation of why this thread is so long, try reading reply 12.
 « Last Edit: June 01, 2013, 12:19:02 PM by mrflay » Logged
Tabasco Raremaster
Iterator

Posts: 171

 « Reply #1 on: April 13, 2013, 06:19:03 AM »

Only gives an error on a 32-bit Windows 7 laptop.

Java Plug-in 10.17.2.02
Using JRE version 1.7.0_17-b02 Java HotSpot(TM) Client VM
User home directory = C:\Users\Basje
----------------------------------------------------
c:   clear console window
f:   finalize objects on finalization queue
g:   garbage collect
h:   display this help message
l:   dump classloader list
m:   print memory usage
o:   trigger logging
q:   hide console
r:   reload policy configuration
s:   dump system and deployment properties
t:   dump thread list
v:   dump thread stack
x:   clear classloader cache
0-5: set trace level to <n>
----------------------------------------------------
Match: beginTraversal
Match: digest selected JREDesc: JREDesc[version 1.6+, heap=268435456-4294967296, args=null, href=http://java.sun.com/products/autodl/j2se, sel=false, null, null], JREInfo: JREInfo for index 0:
platform is: 1.7
product is: 1.7.0_17
location is: http://java.sun.com/products/autodl/j2se
path is: C:\Program Files\Java\jre7\bin\javaw.exe
args is:
native platform is: Windows, x86 [ x86, 32bit ]
JavaFX runtime is: JavaFX 2.2.7 found at C:\Program Files\Java\jre7\
enabled is: true
registered is: false
system is: true

Match: selecting maxHeap: 4294967296
Match: selecting InitHeap: 268435456
Match: digesting vmargs: null
Match: digested vmargs: [JVMParameters: isSecure: true, args: ]
Match: JVM args after accumulation: [JVMParameters: isSecure: true, args: ]
Match: digest LaunchDesc: null
Match: digest properties: []
Match: JVM args: [JVMParameters: isSecure: true, args: ]
Match: endTraversal ..
Match: JVM args final: -Xmx4g
Match: Running JREInfo Version    match: 1.7.0.17 == 1.7.0.17
Match: Running JVM args mismatch: have:<> !satisfy want:<-Xmx4g>
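The interesting line is the last one: the launcher wants a 4 GB max heap (`-Xmx4g`), which a 32-bit JVM cannot provide. A hypothetical JNLP `resources` element of the kind that would request that heap (attribute values taken from the `heap=268435456-4294967296` line above; this is not SFT's actual file):

```xml
<!-- Hypothetical JNLP fragment: 256 MB initial heap, 4 GB max heap.
     A 32-bit JVM cannot satisfy max-heap-size="4g", so the launch
     fails on 32-bit Windows with the "JVM args mismatch" above. -->
<resources>
  <j2se version="1.6+" initial-heap-size="256m" max-heap-size="4g"
        href="http://java.sun.com/products/autodl/j2se"/>
</resources>
```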
 « Last Edit: April 13, 2013, 06:23:42 AM by Tabasco Raremaster » Logged

http://tabasco-raremaster.deviantart.com/

If you dislike it press; Alt+F4
mrflay
Alien

Posts: 36

 « Reply #2 on: April 15, 2013, 11:35:16 PM »

Sorry about that, I haven't got a 32-bit OS to test it on. I've added a link to a low-memory version on the launch page, which I think should help. The low-memory version is the same except it can't export poster-size images.
 Logged
Tabasco Raremaster
Iterator

Posts: 171

 « Reply #3 on: April 20, 2013, 01:32:05 AM »

Quote
Sorry about that, I haven't got a 32bit OS to test it on. I've added a link to a low memory version on the launch page, which I think should help. The low memory version is the same except it can't export poster size images.

The system has 3 GB, but I guess it's the on-board graphics card that ruins the party.
Just pasted the info to inform you.
Checking on a 64-bit machine in a few.
 Logged

Dinkydau
Fractal Senior

Posts: 1469

 « Reply #4 on: May 06, 2013, 11:27:12 PM »

I don't think I fully understand what you're doing but if this "means that the time taken rendering Mandelbrot images is largely independent of depth and iteration count, and mainly depends on the complexity of the image being created", this could be something big.
 Logged

Syntopia
Fractal Molossus

Posts: 681

 « Reply #5 on: May 08, 2013, 09:47:30 PM »

I'm not sure I get this. As I understand it, you calculate a series, $X_n$, using the standard approach with arbitrary-precision floating point. Then you use an approximation with lower-precision floats in a neighborhood around $X_n$.

First, you say that "Equation (1) is important, as all the numbers are 'small', allowing it to be calculated with hardware floating point numbers." Why does the magnitude of the numbers matter, given that the series always stays below 4.0 anyway? I cannot see why the smaller numbers should need less precision.

Secondly, you say "Using equations (1) and (2) means that the time taken rendering Mandelbrot images is largely independent of depth and iteration count". But you still need to calculate a number of series $X_n$ using arbitrary precision to base your interpolations on, right? Doesn't that mean you get exactly the same dependence on depth and iteration count, and only a linear speedup, because you can estimate the neighborhood faster?

Btw, I tried running your app, but Firefox crashed, and in Chrome I got a NumberFormatException for the number "1.024" (probably a localization issue: dot versus comma).
 Logged
simon.snake
Fractal Bachius

Posts: 615

Experienced Fractal eXtreme plugin crasher!

 « Reply #6 on: May 08, 2013, 11:17:49 PM »

I'm on the Windows 7 x64 version running Chrome, but all I got was a blank page; then after (quite) a while Chrome crashed all my windows and I had to restart it.

Don't think I'll try again as the same thing happened to me last night when I clicked the link.
 Logged

To anyone viewing my posts and finding missing/broken links to a website called www.needanother.co.uk, I still own the domain but recently cancelled my server (saving £30/month) so even though the domain address exists, it points nowhere.  I hope to one day sort something out but for now - sorry!
elphinstone
Alien

Posts: 38

 « Reply #7 on: May 08, 2013, 11:59:49 PM »

Quote
I cannot see why the smaller numbers should need less precision.

I didn't read the whole theory, so maybe I'm completely wrong, but floats really are "more precise" near 0. Because of their structure, floating-point numbers are not distributed uniformly; they are denser near zero.

32-bit floats are structured this way:
• One bit sign
• 8 bit exponent
• 23 bit mantissa

The fixed size of the mantissa only allows a limited number of significant digits, but using the exponent we can "move" those digits to a different place (not quite as simple as moving the point between decimal digits, since here the point moves between base-2 digits, but the idea is the same).

So with floats we cannot represent a number like $10^{10} + 10^{-10}$, but it is possible to represent, for example, $1.23456 \times 10^{-30}$, which we can call precise since it has more than 30 decimal digits.

Here you can find a good introduction to float representation: http://www.systems.ethz.ch/sites/default/files/file/CASP_Fall2012/chapter3-floatingpoint-1up.pdf. Have a look especially at slide 20.

(Note: I'm not sure these exact numbers can really be represented in base 2... I used base 10 because it's easier to understand. It's just an example.)
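You can see the sign/exponent/mantissa split directly in Java (a quick illustrative snippet, nothing to do with SFT):

```java
// Decompose a 32-bit float into sign, exponent, and mantissa bits.
// The 23-bit mantissa gives the same *relative* precision at every
// exponent, which is why floats are denser (in absolute terms) near 0.
public class FloatBits {
    public static void main(String[] args) {
        float f = 1.0e-30f;
        int bits = Float.floatToIntBits(f);
        int sign = bits >>> 31;                 // 1 bit
        int exponent = (bits >>> 23) & 0xFF;    // 8 bits, biased by 127
        int mantissa = bits & 0x7FFFFF;         // 23 bits
        System.out.printf("sign=%d exponent=%d (unbiased %d) mantissa=0x%06X%n",
                sign, exponent, exponent - 127, mantissa);
        // 1e10f + 1e-10f is indistinguishable from 1e10f: the small term
        // falls below the mantissa's resolution at that exponent.
        System.out.println((1.0e10f + 1.0e-10f) == 1.0e10f);  // true
    }
}
```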
 Logged
Syntopia
Fractal Molossus

Posts: 681

 « Reply #8 on: May 09, 2013, 09:05:18 AM »

Well, I get that if you have two high-precision numbers close to each other, e.g.:

X_n = 0.1230980948029380492
Y_n = 0.1230980948029380493

their difference could be expressed with a lower precision number, such as D_n = Y_n-X_n = 1E-19.
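To make that concrete (a throwaway snippet, nothing to do with SFT's internals): the two values agree to ~19 digits, so the difference carries only a few significant digits and fits comfortably in a double, even though neither value does.

```java
import java.math.BigDecimal;

// Two high-precision numbers that agree in all but the last digit:
// their exact difference has one significant digit and converts to a
// double with no meaningful loss.
public class DeltaPrecision {
    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("0.1230980948029380492");
        BigDecimal y = new BigDecimal("0.1230980948029380493");
        BigDecimal delta = y.subtract(x);      // exact: 1E-19
        double asDouble = delta.doubleValue(); // fits in a double
        System.out.println(delta + " -> " + asDouble);
    }
}
```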

Quote
Equation (1) is important, as all the numbers are 'small'

But in (1)-(5), the $X_n$ that enter the equations are not small (they are points on the orbit, so their components satisfy $0 < |X_n| < 2$). Only the deltas are.

I would like to see a comparison of a deep zoom image against another fractal program, comparing rendering speed and image quality (and just to check the correct image is produced!), before I'm convinced. Sorry for being sceptical; it just sounds rather wild to me :-)
 Logged
elphinstone
Alien

Posts: 38

 « Reply #9 on: May 09, 2013, 09:36:33 AM »

Finally read the document. It sounds like an interesting trick, but now I'm confused too.

The multiplication of $X_n$ and $\Delta_n$ (or in general of a small number by a number of order 1) still gives a result with the same precision as $\Delta_n$ (the mantissas are multiplied first at full precision, then the result is rounded to float precision and the exponents are added). But even if we can say we have high precision on the result, we still cut away detail from $X_n$, so we can only work on points that can be represented with floats.

Are these multiplications performed with arbitrary precision? They could be faster than a full $X_n^2$ because one of the two factors has relatively few significant digits, so the advantage of this method would show only at deep zooms. But I'm only guessing; I'm also interested in some more explanation.
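For what it's worth, my reading is that only the reference orbit is computed with arbitrary precision; each $X_n$ is then rounded to a double, and everything per-pixel is plain hardware arithmetic. A sketch of that split (purely illustrative, the names are mine, not SFT's):

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Illustrative mixed-precision split: the reference orbit is iterated
// once in BigDecimal, each X_n is rounded to a double, and all later
// per-pixel delta arithmetic is ordinary hardware floating point.
public class MixedPrecision {
    // Real-axis reference orbit z <- z^2 + c, computed at high
    // precision but stored rounded to doubles.
    static double[] referenceOrbit(String cRe, int maxIter) {
        MathContext mc = new MathContext(100); // ~100 decimal digits
        BigDecimal c = new BigDecimal(cRe, mc);
        BigDecimal z = BigDecimal.ZERO;
        double[] ref = new double[maxIter];
        for (int n = 0; n < maxIter; n++) {
            ref[n] = z.doubleValue();          // high precision -> double
            z = z.multiply(z, mc).add(c, mc);
        }
        return ref;
    }

    public static void main(String[] args) {
        double[] ref = referenceOrbit("-0.75", 50);
        // Per-pixel work only needs cheap double multiplies, e.g. 2*X_n*delta:
        double delta = 1e-30;
        System.out.println(2.0 * ref[10] * delta);
    }
}
```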
 « Last Edit: May 09, 2013, 05:52:06 PM by elphinstone » Logged
Dinkydau
Fractal Senior

Posts: 1469

 « Reply #10 on: May 09, 2013, 05:32:40 PM »

Quote
I would like to see a comparison between a deep zoom image with another fractal program, comparing rendering speed and image quality (and just to check the correct image is produced!), before I'm convinced. Sorry about being sceptical, it just sounds rather wild to me :-)
I did something like that. Fractal eXtreme is supposed to be the fastest Windows software for rendering fractals, so I used that as a reference against SuperFractalThing. With both programs using 32 threads, SuperFractalThing is faster. I can't get SFT to work at the moment, so I can't provide any numbers.
 Logged

Dinkydau
Fractal Senior

Posts: 1469

 « Reply #11 on: May 09, 2013, 05:49:55 PM »

Okay, I got something out of it now. I used the location "deep" in the library in SFT.

Benchmark parameters:
Code:
Re = -0.8635169079336723787909852673550036317112290092662339023565685902909598581747910666789341701733177524889420466740410475917304931409405515021801432520061688309437600296551693365761424365795272805469550118785509705439232403959541588349498522971590667888487052154368155355344563441

Im = 0.24770085085542684897920154941114532978571652912585207591199032605489162434475579901621342900504326332001572471388836875257693078071821918832702805395251556576917743455093070180103998083138219966104076957094394557391349705788109482159372116384541942314989586824711640815455948160

zoom level = 5.054594E-264, equivalent to 2^879.33 in Fractal eXtreme

iteration_limit = 13824

resolution = 1024×768

The results:
Software            Calculation time (m:ss)
Fractal eXtreme     2:52.400
SuperFractalThing   0:11.686

That is a really significant decrease in render time!
 Logged

Dinkydau
Fractal Senior

Posts: 1469

 « Reply #12 on: May 09, 2013, 06:22:11 PM »

The program magically works again, so I inserted some parameters of my own.

Coordinates:
Code:
Re = -1.479,946,223,325,078,880,202,580,653,442,563,833,590,828,874,828,533,272,328,919,467,504,501,428,041,551,458,102,123,157,715,213,651,035,545,943,542,078,167,348,953,885,787,341,902,612,509,986,72
Im = 0.000,901,397,329,020,353,980,197,791,866,197,173,566,251,566,818,045,102,411,067,630,386,488,692,287,189,049,145,621,584,436,026,934,218,763,527,577,290,631,809,454,796,618,110,767,424,580,322,79

Magnification:
2^471
6.0971651373359223269171820894398 E141

A 4000×3000 render in SFT took 105 seconds.
A 24000×12000 render in Fractal eXtreme took 1 day, 6 hours, 0 minutes, 15 seconds.

Even though my Fractal eXtreme render was 24 times larger, SFT rendered 1029 times faster, which works out to 42.86 times faster at equal resolution. The reason I rendered at only 4000×3000 in SFT is that it gave me an out-of-memory error saying I had to use a 64-bit browser.

Downsampled images:

Fractal eXtreme: [image]

SFT with the default gradient: [image]
 « Last Edit: May 09, 2013, 06:32:09 PM by Dinkydau » Logged

cKleinhuis
Fractal Senior

Posts: 6980

formerly known as 'Trifox'

 « Reply #13 on: May 09, 2013, 06:29:32 PM »

Incredible results! To make the differences clearer, can you render exactly the same resolutions with the same colormap (e.g. 256 repeating grayscales) and then subtract the images from each other to show the errors?
 Logged

---

divide and conquer - iterate and rule - chaos is No random!
cKleinhuis
Fractal Senior

Posts: 6980

formerly known as 'Trifox'

 « Reply #14 on: May 09, 2013, 06:31:14 PM »

hehe, with this, incredible new zoom depths will be possible, muahahar! Expecting new deep zoom records soon!
 Logged
