
Author Topic: Mandelbrot "Adaptive" Escape Limit?  (Read 1411 times)
Description: Looking for a "best practice" or something for escape limits
0 Members and 1 Guest are viewing this topic.
nessalc
Forums Newbie
*
Posts: 3


« on: September 16, 2013, 08:38:03 PM »

Having written probably dozens of Mandelbrot renderers over the years (boredom, new programming languages to try, etc.), one thing has bugged me: the persistent use of a constant escape limit (e.g. 100-10000+ iterations). I'm familiar with the tests for the main cardioid and 0/1 Bulb, but is there some better way to determine, say, based on an image size or zoom factor, how many iterations I ought to use? Is there some "well-known" formula for testing chunks of other bulbs (since they're not strictly circular, I know a complete formula is unrealistic) that I have somehow missed? Basically, I don't want my program wasting precious cycles iterating thousands, or possibly millions of times when there's some better way--my favorite images are typically right along the border, and often feature some decent chunk of the image devoted to points within the Mandelbrot set.
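The cardioid and period-2 bulb tests mentioned above can be written as a quick pre-check before the escape loop (a minimal Python sketch; the function name is illustrative):

```python
def skip_iteration(x, y):
    """True if c = x + iy lies in the main cardioid or the period-2
    bulb, so the escape loop can be skipped entirely for this pixel."""
    q = (x - 0.25) ** 2 + y * y
    if q * (q + (x - 0.25)) < 0.25 * y * y:   # exact main-cardioid test
        return True
    return (x + 1.0) ** 2 + y * y < 0.0625    # period-2 bulb: circle of radius 1/4 at -1
```

Both tests are exact, so a point they accept never needs to be iterated at all.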

I'm familiar with Mariani/Silver and successive refinement. I've tried a boundary-scanning method, but always run into memory issues. Is there some website that has pseudocode describing a non-recursive implementation that I might be able to utilize (if, that is, this is an appropriate method for me)?
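For what it's worth, the non-recursive variant can be sketched with an explicit work stack in place of recursion, which keeps memory bounded by the stack of pending rectangles rather than the call depth (a Python sketch; names, grid size, and the uniform-border fill rule follow the usual Mariani/Silver convention):

```python
from collections import deque

def escape_count(cx, cy, max_iter=256):
    """Standard escape-time iteration for a single point."""
    zx = zy = 0.0
    for n in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return n
    return max_iter

def mariani_silver(w, h, x0, y0, x1, y1, max_iter=256):
    """Non-recursive Mariani/Silver: rectangles wait on an explicit
    stack instead of the call stack."""
    img = [[None] * w for _ in range(h)]

    def point(px, py):                         # compute-and-cache one pixel
        if img[py][px] is None:
            img[py][px] = escape_count(x0 + px * (x1 - x0) / w,
                                       y0 + py * (y1 - y0) / h, max_iter)
        return img[py][px]

    stack = deque([(0, 0, w - 1, h - 1)])
    while stack:
        l, t, r, b = stack.pop()
        border = ([point(x, t) for x in range(l, r + 1)] +
                  [point(x, b) for x in range(l, r + 1)] +
                  [point(l, y) for y in range(t, b + 1)] +
                  [point(r, y) for y in range(t, b + 1)])
        if len(set(border)) == 1:
            for y in range(t, b + 1):          # whole border agrees:
                for x in range(l, r + 1):      # fill the interior with it
                    img[y][x] = border[0]
        elif r - l > 2 and b - t > 2:
            mx, my = (l + r) // 2, (t + b) // 2
            stack.extend([(l, t, mx, my), (mx, t, r, my),
                          (l, my, mx, b), (mx, my, r, b)])
        else:
            for y in range(t, b + 1):          # box too small to split:
                for x in range(l, r + 1):      # just iterate everything
                    point(x, y)
    return img
```

The pending-rectangle stack never grows beyond a handful of entries per subdivision level, which sidesteps the memory issues recursion can cause.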

Thanks!

J
Logged
lkmitch
Fractal Lover
**
Posts: 238



« Reply #1 on: September 16, 2013, 09:54:16 PM »

In this work,

http://www.kerrymitchellart.com/articles/area/mandelbrot-area.html

I used a series of masks to greatly reduce the amount of area that actually had to be computed. The concept is fairly straightforward, but the implementation takes a bit of effort--the centers of the circles are given by roots of complex polynomials, as are tangency points (and thus, the radii of the masks). So your problem turns from doing simple iterations to doing much more complicated root-solving. Or, you could use a lookup table, but the number of components grows so fast that this would not be practical for a general renderer. Fairly quickly, the overhead in finding the components which can easily be approximated by circles overwhelms the actual iteration.
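A toy version of the lookup-table idea might look like this (a Python sketch; only the cardioid and period-2 tests are exact, and the period-3 circle radii are assumed, deliberately undersized approximations around the known component centers):

```python
# Exact main cardioid + period-2 bulb, plus conservatively undersized
# circles around the two period-3 component centers (radii are assumed).
BULB_CIRCLES = [
    (-1.0,       0.0,      0.25),  # period-2 bulb (exactly circular)
    (-0.122561,  0.744862, 0.09),  # period-3 "rabbit" bulb (approx.)
    (-0.122561, -0.744862, 0.09),  # its mirror image (approx.)
]

def masked(x, y):
    """True if c = x + iy is certainly interior, so iteration is skipped."""
    q = (x - 0.25) ** 2 + y * y
    if q * (q + (x - 0.25)) < 0.25 * y * y:    # exact main-cardioid test
        return True
    return any((x - cx) ** 2 + (y - cy) ** 2 < r * r
               for cx, cy, r in BULB_CIRCLES)
```

As the reply notes, extending the table to smaller components quickly costs more than it saves, since each new circle requires solving for a component center and radius.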
Logged
nessalc
Forums Newbie
*
Posts: 3


« Reply #2 on: September 16, 2013, 10:14:01 PM »

Aha! Thanks! At a glance, this seems to be exactly what I was looking for!
Logged
DustyMonkey
Forums Newbie
*
Posts: 6


« Reply #3 on: September 21, 2013, 04:46:48 PM »

As far as deciding on a maxiter value goes, I've always felt, from a strictly purist point of view, that having a maxiter at all is simply the wrong way to go about it. "maxiter" is a completely invented notion that serves no purpose but convenience.

The standard implementation is:

for each pixel of image {
    do { pixel.z = pixel.z*pixel.z + pixel.c } until pixel.z escapes or arbitrary maxiter reached
}

...but in actuality there is no reason that it isn't more correctly:

do {
    for each unescaped pixel of image { pixel.z = pixel.z*pixel.z + pixel.c }
} until the effort exceeds the subjective gains

When we gauge the cost of the effort as infinitesimal and the benefit of the gains as infinite (the "perfectionist" view), the perfect heuristic is "until the number of escaped pixels will no longer change". In practice, of course, the costs are not infinitesimal and the gains are not infinite. But now that it's framed as a cost/benefit analysis, we can decide on a simple universal heuristic where we won't waste any more time trying to correct just a small number of deviations from perfection.

Let's call that ideal "perfectionist" number of escaped pixels N, and the number of escaped pixels the algorithm has seen on any given pass E(T).

We of course don't know N for any given render, but we can observe E(T) over time while rendering and thus collect statistics. We can make projections about where E(T + DT) will likely be, compute confidence intervals, and so on. It's suddenly all very formal, and the only parameters are the actual subjective quantities themselves rather than a seat-of-the-pants maxiter: specifically, the number of deviations from N that we will tolerate, and the confidence level that we are within that margin.
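Framed as code, the pass-based loop above might look like this (a Python sketch; steps_per_pass and tol stand in for the subjective cost/benefit parameters, and the stopping rule is a simple plateau test on E(T) rather than a full confidence interval):

```python
def render_adaptive(points, steps_per_pass=32, tol=1e-4):
    """Pass-based escape-time iteration: instead of a fixed maxiter per
    pixel, run passes over all unescaped points and stop once a pass
    frees fewer than tol of the total points, i.e. E(T) has flattened.
    `points` is a list of complex c values; returns the escape
    iteration for each, or None for points unescaped at the stop."""
    z = [0j] * len(points)
    result = [None] * len(points)
    active = list(range(len(points)))
    total_iters = 0
    while active:
        still = []
        for i in active:
            zi, c = z[i], points[i]
            for k in range(steps_per_pass):
                zi = zi * zi + c
                if abs(zi) > 2.0:
                    result[i] = total_iters + k   # escaped this pass
                    break
            else:
                z[i] = zi                         # still bounded: carry on
                still.append(i)
        escaped_this_pass = len(active) - len(still)
        total_iters += steps_per_pass
        # stop when the pass changed almost nothing (never on pass one)
        if escaped_this_pass < tol * len(points) and total_iters > steps_per_pass:
            break
        active = still
    return result
```

Note there is no per-pixel iteration cap anywhere: the only knobs are the pass granularity and the tolerated deviation from the ideal N, exactly in the spirit of the cost/benefit framing above.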
Logged