END OF AN ERA, FRACTALFORUMS.COM IS CONTINUED ON FRACTALFORUMS.ORG

It was a great time, but the site is no longer maintainable by c.Kleinhuis; contact him for any data retrieval. Thanks, and see you perhaps in 10 years.

This forum will stay online for reference.
Author Topic: *Continued* SuperFractalThing: Arbitrary precision mandelbrot set rendering in Java.  (Read 46986 times)
hapf
Fractal Lover
**
Posts: 219


« on: February 22, 2016, 03:53:39 PM »

So there are certain locations where series approximation works exceptionally well. How do we recognize or find those locations? What do they have in common? That may be something to consider when making deep zoom videos.
I'm currently working on a better understanding of skipping limits in general. In time I might have more to share.
Logged
hapf
Fractal Lover
**
Posts: 219


« Reply #1 on: February 26, 2016, 06:27:27 PM »

By the way, how much slower are extended long doubles with a separately stored exponent compared to regular long doubles, when performing standard iterations and no assembly language coding is used?
Logged
stardust4ever
Fractal Bachius
*
Posts: 513



« Reply #2 on: February 26, 2016, 10:17:03 PM »

Quote from: hapf on February 26, 2016, 06:27:27 PM
By the way, how much slower are extended long doubles with a separately stored exponent compared to regular long doubles, when performing standard iterations and no assembly language coding is used?

Any programming language, and especially ASM, is light years faster than Java-based code.
Logged
hapf
Fractal Lover
**
Posts: 219


« Reply #3 on: February 27, 2016, 12:51:48 AM »

Quote from: stardust4ever on February 26, 2016, 10:17:03 PM
Any programming language, and especially ASM, is light years faster than Java-based code.

I'm not talking about Java. Rather something like C or C++.
Logged
quaz0r
Fractal Molossus
**
Posts: 652



« Reply #4 on: February 27, 2016, 03:56:57 AM »

Long double does not really seem relevant regardless. GPU-type devices don't have long double, and even CPUs have larger and larger vector units nowadays which also don't do long double. So for all intents and purposes, long double is a bazillion times slower compared to properly utilizing the resources available on both CPU and GPU.
Logged
stardust4ever
Fractal Bachius
*
Posts: 513



« Reply #5 on: February 27, 2016, 05:21:29 AM »

Quote from: quaz0r on February 27, 2016, 03:56:57 AM
Long double does not really seem relevant regardless. GPU-type devices don't have long double, and even CPUs have larger and larger vector units nowadays which also don't do long double. So for all intents and purposes, long double is a bazillion times slower compared to properly utilizing the resources available on both CPU and GPU.

Really, no current CPU or GPU tech is ideally suited to fractal rendering. I'm a pretty hardcore retrogamer, and there is much talk of using FPGAs to simulate retro video game systems in real-time hardware where current emulation technology falls short. Kevtris is one of the pioneers of gaming on FPGA cores, and he allegedly produced a realtime Mandelbrot zoomer, although I've never seen it in use and I doubt it goes extremely deep.

Quote
The following systems have been fully emulated, 100% as best as I can tell.
All games available and ROMs were tested and fully work as far as I know.

Sega Master System
Game Gear
Colecovision
NES/Famicom
Atari 2600
Atari 7800
Intellivision
Odyssey^2
Adventure Vision
Supervision
RCA Studio 2
Fairchild Channel F
Videobrain
Arcadia 2001
Creativision
Gameboy
Gameboy Colour (not 100% yet, still debugging. runs 99% of games so far)


(nonvideogame things)
SPC player (SNES music)
Mandelbrot realtime zoom/pan/julia
http://blog.kevtris.org/blogfiles/systems_V105.txt

Maybe someone could use an FPGA to design a fractal calculation core using, say, 1024-bit (or larger: 2048, 4096, etc.) integer maths, instead of relying on fixed-precision 64-bit CPUs. I think Intel will begin including FPGA coprocessors in its CPUs beginning in 2018. Even if the FPGA core were only one tenth the speed of the main CPU, the 1024-bit (or whatever) native ALU would be insanely faster than modern 64-bit CPU tech. The FPGA would only be used for raw number crunching, with no fancy-pants instruction sets, so transistor count could be relatively low. The CPU would handle rendering, iteration depth, and everything else. Once a usable bignum FPGA core is established, it could be duplicated to add additional multiprocessor "cores" until the FPGA space is full. Because it would only need to handle add and multiply, the cores could be made insanely lean.
« Last Edit: February 27, 2016, 05:32:36 AM by stardust4ever » Logged
quaz0r
Fractal Molossus
**
Posts: 652



« Reply #6 on: February 27, 2016, 06:22:14 AM »

Yeah, I saw that stuff too about FPGAs, emulation, and the guy doing a Mandelbrot renderer on one. It is of course always a silly and ridiculous thing any time someone says something like "realtime Mandelbrot". And yeah, an FPGA or whatever designed with fractal number crunching in mind would be awesome, but it is basically just a fantasy, as nobody cares that much about fractals except a handful of us on this forum.
Logged
stardust4ever
Fractal Bachius
*
Posts: 513



« Reply #7 on: February 27, 2016, 07:28:54 AM »

Quote from: quaz0r on February 27, 2016, 06:22:14 AM
Yeah, I saw that stuff too about FPGAs, emulation, and the guy doing a Mandelbrot renderer on one. It is of course always a silly and ridiculous thing any time someone says something like "realtime Mandelbrot". And yeah, an FPGA or whatever designed with fractal number crunching in mind would be awesome, but it is basically just a fantasy, as nobody cares that much about fractals except a handful of us on this forum.

If FPGA coprocessors became standard equipment in addition to integrated GPU/CPU dies, then cost would drop and people could start to seriously think about designing special-interest processors that would never otherwise be practical to manufacture. An FPGA can basically clone any piece of silicon logic, provided it contains enough gates to do the job.

Yes, it is true nobody cares about fractal rendering, but highly specialized applications like cryptography or protein synthesis that require massive amounts of calculation on a general-purpose CPU could use a "clean slate", i.e. a custom logic core tailored to their use. It generally requires gigahertz-level CPUs to emulate, with perfect cycle accuracy, old game machines which barely clocked a few megahertz. Bruce Dawson of Fractal Extreme wrote some really good essays on his blog about the benefits of 64-bit CPUs and why arbitrary precision math becomes so slow. Fractal Extreme was light years faster than anything else out there before the perturbation method (i.e. render one pixel in arbitrary precision and deduce the rest of the image in float) was discovered.

A custom stripped-down 1024-bit CPU core could do in a single multiplication what a 64-bit CPU takes hundreds of operations to do. Even if the hypothetical 1024-bit FPGA core were arbitrarily limited to 100 MHz top speed, it would beat the pants off a 4 GHz Intel i7 core in terms of render efficiency. Assuming the cores were open source and desktop CPUs with FPGAs in their dies became common, users could improve on each other's designs. Everything in the software package would be x86-64 code except the arbitrary-precision integer math via FPGA.
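To put a rough number on the "hundreds of operations" claim: a 1024-bit operand is 16 limbs of 64 bits, and plain schoolbook multiplication costs 16 × 16 = 256 multiply-accumulate steps plus carry propagation. The sketch below (illustrative only, not any real FPGA design or bignum library) just counts those steps; a native 1024-bit multiplier would collapse them into one operation.

```python
def limb_mul(a, b, base=2**64):
    """Schoolbook multiply of two multi-limb integers, the way a 64-bit
    CPU has to do it: n x m limbs costs n*m multiply-accumulate steps
    plus carry handling. Returns (product limbs, step count)."""
    out = [0] * (len(a) + len(b))
    steps = 0
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = out[i + j] + ai * bj + carry
            out[i + j] = t % base       # low limb of the partial sum
            carry = t // base           # high limb carries onward
            steps += 1
        out[i + len(b)] += carry
    return out, steps

# 1024-bit operands = 16 limbs of 64 bits -> 256 partial products,
# where a native 1024-bit ALU would need a single multiply
_, steps = limb_mul([1] * 16, [1] * 16)
```

Real libraries like GMP use hand-tuned assembly and switch to Karatsuba-style algorithms at larger sizes, but the order of magnitude of the gap stands.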
Logged
hapf
Fractal Lover
**
Posts: 219


« Reply #8 on: February 27, 2016, 09:31:55 AM »

Quote from: quaz0r on February 27, 2016, 03:56:57 AM
Long double does not really seem relevant regardless. GPU-type devices don't have long double, and even CPUs have larger and larger vector units nowadays which also don't do long double. So for all intents and purposes, long double is a bazillion times slower compared to properly utilizing the resources available on both CPU and GPU.

I'm not sure what you are talking about. Long doubles are much slower compared to what, doing what exactly? Basic Mandelbrot iterations?
Logged
quaz0r
Fractal Molossus
**
Posts: 652



« Reply #9 on: February 27, 2016, 10:46:14 AM »

compared to the performance of using double with CPU vector units / GPUs
Logged
hapf
Fractal Lover
**
Posts: 219


« Reply #10 on: February 27, 2016, 11:38:43 AM »

Quote from: quaz0r on February 27, 2016, 10:46:14 AM
Compared to the performance of using double with CPU vector units / GPUs.

double cannot be used for perturbation beyond 10^-308, which is quite limiting. So either extended double with a separate exponent or extended long double with a separate exponent is used. Hence my question: how much slower is the extended version compared to the non-extended one (for example in Kalle's program or Mandel Machine)?
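For readers unfamiliar with the idea: "extended double with a separate exponent" can be sketched in a few lines. Python floats are IEEE doubles, so the same ~1e-308 underflow limit applies, which makes the point visible; this is a toy illustration with made-up names, not the actual type used in Kalle's program or Mandel Machine. The extra cost per multiply is an integer add plus one renormalization.

```python
import math

class FloatExp:
    """A double mantissa plus a separately stored (unbounded) base-2
    exponent, so magnitudes far below ~1e-308 survive multiplication."""
    def __init__(self, val=0.0, exp=0):
        self.m, self.e = (0.0, 0) if val == 0 else math.frexp(val)
        self.e += exp

    def mul(self, other):
        r = FloatExp()
        if self.m == 0 or other.m == 0:
            return r
        m, e = math.frexp(self.m * other.m)   # renormalize the mantissa
        r.m, r.e = m, self.e + other.e + e
        return r

    def to_float(self):
        return math.ldexp(self.m, self.e)     # may underflow to 0.0

a = FloatExp(1.0, -1329)   # about 8.6e-401, unrepresentable as a double
b = a.mul(a)               # about 7e-801, still fine in this format
```

Addition additionally needs the two exponents aligned before the mantissas are combined, which is part of why the extended type costs more than a plain double per iteration.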
Logged
hapf
Fractal Lover
**
Posts: 219


« Reply #11 on: February 27, 2016, 01:50:22 PM »

Quote from: stardust4ever on February 27, 2016, 07:28:54 AM
A custom stripped-down 1024-bit CPU core could do in a single multiplication what a 64-bit CPU takes hundreds of operations to do. Even if the hypothetical 1024-bit FPGA core were arbitrarily limited to 100 MHz top speed, it would beat the pants off a 4 GHz Intel i7 core in terms of render efficiency. Assuming the cores were open source and desktop CPUs with FPGAs in their dies became common, users could improve on each other's designs. Everything in the software package would be x86-64 code except the arbitrary-precision integer math via FPGA.

Yes, but that only moves the point where perturbation is needed to higher magnifications. Perturbation and skipping remain very important tools, independent of the CPU word length.
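The reason word length matters mostly for the reference is that the per-pixel side of perturbation runs entirely in low precision: each pixel iterates only its offset d from the stored reference orbit via d' = 2·Z·d + d² + d0. A self-contained toy in plain complex doubles (shallow zoom, made-up names, for illustration only):

```python
def perturbed_pixel(orbit, d0, max_iter=1000):
    """Iterate one pixel as an offset d from a stored reference orbit
    Z_0, Z_1, ... using d_{n+1} = 2*Z_n*d_n + d_n**2 + d0; the full
    iterate is Z_n + d_n, so no high-precision work is done per pixel."""
    d = 0j
    for n in range(min(max_iter, len(orbit))):
        if abs(orbit[n] + d) > 2:
            return n                   # escaped at iteration n
        d = 2 * orbit[n] * d + d * d + d0
    return -1                          # still inside after max_iter

# toy reference orbit at c = -0.5, itself in plain doubles here
c, z, orbit = -0.5 + 0j, 0j, []
for _ in range(100):
    orbit.append(z)
    z = z * z + c

inside = perturbed_pixel(orbit, 0j)         # the reference point itself
outside = perturbed_pixel(orbit, 3.0 + 0j)  # a pixel far outside escapes
```

At real depths d0 and d underflow plain doubles, which is exactly where the extended-exponent type discussed above comes in.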
Logged
stardust4ever
Fractal Bachius
*
Posts: 513



« Reply #12 on: February 27, 2016, 08:56:49 PM »

Quote from: hapf on February 27, 2016, 01:50:22 PM
Yes, but that only moves the point where perturbation is needed to higher magnifications. Perturbation and skipping remain very important tools, independent of the CPU word length.

Even with perturbation, it often takes 30+ seconds on my 4.2 GHz AMD rig to calculate the first pixel at arbitrary precision. Sometimes I must wait for a second or third pass because the calculated pixel was off center. When I'm 10^7000 or 10^8000 deep within the set, that first pixel involves a considerable wait, even if the rest of the image renders instantly. Going multiple thousands of zooms deep into the rabbit hole involves manually advancing three to four zoom levels at a time (8-16x), waiting for the image to render, and advancing again. Rendering itself, especially at high resolution, doesn't take hours, days, or weeks like it used to, but getting to the target formation requires long hours of methodical zooming, mostly just clicking into the centroid.

So while the overall process is vastly faster with perturbation, the lead time on calculating that painfully slow first orbit could still be improved significantly by writing custom cores to handle the workload. This would greatly help with exploration, and possibly video frame rendering as well. But this will not be possible until FPGA cores become commonplace in desktop processors, and we have programmers knowledgeable enough to utilize them.
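That slow first pixel is just the reference orbit iterated at full precision. A sketch of what that computation looks like, using Python's decimal module in place of a real bignum library (function name and parameters are made up for illustration; a deep location would use thousands of digits):

```python
from decimal import Decimal, getcontext

def reference_orbit(re_str, im_str, max_iter, digits):
    """Iterate z -> z^2 + c at `digits` decimal digits of precision and
    return the orbit rounded to plain doubles, which is all the cheap
    per-pixel perturbation iterations need afterwards."""
    getcontext().prec = digits
    cr, ci = Decimal(re_str), Decimal(im_str)
    zr = zi = Decimal(0)
    orbit = []
    for _ in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        orbit.append((float(zr), float(zi)))
        if zr * zr + zi * zi > 4:      # escaped; reference orbit ends
            break
    return orbit

# shallow demo coordinates; the cost is one full-precision complex
# square per iteration, which is what dominates that 30+ second wait
orbit = reference_orbit("-0.75", "0.05", 100, 50)
```

Each iteration is one full-precision complex multiply, so the wait grows with both iteration count and the number of digits required by the zoom depth.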
Logged
claude
Fractal Bachius
*
Posts: 563



WWW
« Reply #13 on: February 27, 2016, 09:28:20 PM »

Quote from: stardust4ever on February 27, 2016, 08:56:49 PM
Even with perturbation, it often takes 30+ seconds on my 4.2 GHz AMD rig to calculate the first pixel at arbitrary precision. Sometimes I must wait for a second or third pass because the calculated pixel was off center. When I'm 10^7000 or 10^8000 deep within the set, that first pixel involves a considerable wait, even if the rest of the image renders instantly. Going multiple thousands of zooms deep into the rabbit hole involves manually advancing three to four zoom levels at a time (8-16x), waiting for the image to render, and advancing again. Rendering itself, especially at high resolution, doesn't take hours, days, or weeks like it used to, but getting to the target formation requires long hours of methodical zooming, mostly just clicking into the centroid.

So while the overall process is vastly faster with perturbation, the lead time on calculating that painfully slow first orbit could still be improved significantly by writing custom cores to handle the workload. This would greatly help with exploration, and possibly video frame rendering as well. But this will not be possible until FPGA cores become commonplace in desktop processors, and we have programmers knowledgeable enough to utilize them.

In my experimental mandelbrot-perturbator I cache the series approximation iterations of the centroid, so zooming further in only needs a few more iterations. Calculating it fresh for a brand new view (e.g. coordinates from a file) or when zooming off-center does take a long time. It's also sometimes possible to zoom directly to the next off-center departure lounge - if you know approximately how deep it will be (e.g. zoom from 1e-128 to 1e-192 to 1e-288 when Julia morphing; the exponent is multiplied by 1.5 each time), and know that the central reference is high enough precision in a minibrot (meaning it really is central - you can use Newton's method to find it once you know the period, and finding the period of the lowest-period minibrot in a region is possible too). An example: http://mathr.co.uk/mandelbrot/2015-06-12_perturbator_deep_zoom_stress_test/  I definitely didn't do the zooming only 16x each frame - that would take insanely heroic amounts of boring manual time and effort!
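The Newton's method step mentioned above fits in a few lines: given the period p, solve f^p(0, c) = 0 for c, where f(z, c) = z² + c, and the solution is the minibrot's nucleus. Plain complex doubles below for illustration (a real deep zoom runs the same loop in arbitrary precision; names are made up, not mandelbrot-perturbator's API):

```python
def nucleus(c_guess, period, steps=20):
    """Newton's method for the center (nucleus) of a minibrot of known
    period: iterate c <- c - f^p(0, c) / (d/dc f^p(0, c))."""
    c = c_guess
    for _ in range(steps):
        z, dz = 0j, 0j
        for _ in range(period):
            dz = 2 * z * dz + 1        # derivative w.r.t. c, updated first
            z = z * z + c              # the iterate itself
        c -= z / dz                    # one Newton step
    return c

# the largest real minibrot has period 3; start nearby and converge
c3 = nucleus(-1.8 + 0j, 3)
```

Convergence is quadratic once the guess is inside the nucleus's basin, which is why finding a minibrot this way is fast even when each iteration is a full-precision multiply.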
Logged
hapf
Fractal Lover
**
Posts: 219


« Reply #14 on: February 27, 2016, 09:56:30 PM »

Quote from: stardust4ever on February 27, 2016, 08:56:49 PM
So while the overall process is vastly faster with perturbation, the lead time on calculating that painfully slow first orbit could still be improved significantly by writing custom cores to handle the workload. This would greatly help with exploration, and possibly video frame rendering as well. But this will not be possible until FPGA cores become commonplace in desktop processors, and we have programmers knowledgeable enough to utilize them.

No doubt FPUs with more bits are always better than ones with fewer bits. We can use all they throw at us. But as Claude explained, finding minibrots can basically be done automatically. It would be faster with more bits in the FPU, sure, but it doesn't take very long even at 10^-5000 and beyond with today's CPUs.
Logged