Title: Bigger Newton's fractals Post by: gamma on November 29, 2008, 12:52:25 AM Hey everybody,
I took an educated look at the topic of Newton sets and came to realize that I can construct a complex polynomial AFTER I choose which numbers/coordinates (called roots) will be the solutions. All those fractals are a bit similar to each other. Different kinds of equations, other than polynomials, could have solutions all over the plane, and we could be constructive with those points, in theory. Presumably, one fractal set would stretch far out and be more entertaining. I would like to know how to extend fractal sets by adding more equations, so to speak. Title: Re: Bigger Newton's fractals Post by: David Makin on November 29, 2008, 02:13:43 AM My formula for Ultra Fractal - mmf3.ufm:Newton by Roots - allows you to control the Newton produced by specifying the degree and roots of the polynomial to be solved - for the traditional Newton form switch to the Julia version.
Title: Re: Bigger Newton's fractals Post by: matsoljare on November 29, 2008, 03:46:16 PM On the subject, has anyone tried rendering a Newton formula with the real and imaginary components of the power varying with the x and y-axis?
Title: Re: Bigger Newton's fractals Post by: gamma on November 29, 2008, 06:22:33 PM I'm looking at it right now; it's a very advanced matter. COMPELLING :-)
I used the simple NovaMandel and NovaJulia framework to insert new equations. I repeated the framework many times to test the properties of new equations. On the internet there is a large Newton's fractal from the movie Biocursion. http://www.youtube.com/watch?v=I0wGQaxGg_c It's so big that it looks like a city. What I see different there is that it's not limited to a small neighborhood of the coordinate ZERO. Instead it's wide and differs from part to part. I don't expect that it is necessary to use a polynomial as the function. Sometimes I make an error when calculating the first derivative, and the final result cannot represent an accurate equation for the Newton-Raphson method, but the fractal can be drawn easily. I had some GOOD ones in that series of experiments! Now you see, I constantly wonder what is true, pure and beautiful in a fractal. Or I wonder what the science is behind it, or what is valuable in the background information. I can categorize shapes that way. For example - don't panic - I'd categorize your default display of fractals in the above-mentioned formula file as fertile, but ugly - not very ugly, a little. When I zoom in there is an impression of overlapping shapes and pile-ups of dense colored stripes - that's one characteristic I'd call slightly unpleasant. Another category - it makes me almost angry - is the diabolical, perverted exhibitions of yet another Mandelbrot set shape, mutated and mean, lurking below the prominent outer features. In contrast, the NovaMandel type for (z^2-1)^2 contains the Mandelbrot set with all shapes smoothly contoured around the clean Mandelbrot shape. Dense areas (the extreme upper and lower edges of a bell shape) are nicer. They suggest almost a 3-dimensional curvature, a bell shape pointing to the left (or right, depending on -/+#pixel). Seeing more order - the way the Mandelbrot set is nice - fits the category beautiful, but unfortunately it also fits the category bring-me-something-new. Rarely does one complete fractal expand beyond a certain limit. 
Some trigonometric functions can bring that. The Mandelbrot and Julia sets fit into a circle of radius 2. It reminds me of limited mathematics, because when all shapes stick close to coordinate zero I feel bound to basic principles instead of having fractal bricks for construction. I mean, the fractal would appear in relation to a few basic reasons over and over again. What if it were possible to add another sub-shape with another function in one spot, add a third one someplace else, or combine properties of one into another? Just a small problem there: NO ONE sees any mechanical parts of a fractal. Not a computer, not a human (maybe a few crazy professors). Title: Re: Bigger Newton's fractals Post by: gamma on November 29, 2008, 10:16:49 PM I discovered among endless equations one that produces an infinite Newton's fractal. The equation violates the rules for Newton's method, so formally it's not a Newton's fractal.
You can embed this directly into the NovaMandel formula file... z = z - @relax * (z^@power - (@power-2)^z - 1) / (@power * z^(@power-1)-(@power-2)*z^(@power-3)) + #pixel Parameters are 5+0 for the start value and exponent. Title: Re: Bigger Newton's fractals Post by: David Makin on November 30, 2008, 03:27:20 PM On the subject, has anyone tried rendering a Newton formula with the real and imaginary components of the power varying with the x and y-axis? I don't think there's a formula for UF to do that for a Newton yet but you can for a Nova. Select mmf5.ufm:Generic Switch Formula as the main formula, then plug in mmf.ulb:Switch Nova instead of "Switch Standard", then enable "Enable Advanced Settings" and set "Switch Value" to "Exponent". Title: Re: Bigger Newton's fractals Post by: HPDZ on January 18, 2009, 04:08:27 AM Does anyone know the actual equation that this Biocursion animation is using? It certainly looks like a Newton's method fractal ... Since Newton's method is based on finding roots of some equation (e.g. z^3-1=0), I am wondering what equation this one is finding roots for. Looks like something transcendental, because there seem to be lots of roots - maybe sin(x)=0 or something like that?
Title: Re: Bigger Newton's fractals Post by: HPDZ on January 18, 2009, 08:27:09 AM After further consideration, I realize that you don't need a transcendental function in Newton's method to get an image with infinite extent, contrary to gamma's impression. Even applying Newton's method to the humble z^3-1 equation will generate an image with infinite extent, although a rather boring one.
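The infinite extent is easy to check numerically: under the standard Newton iteration z → z - (z^3-1)/(3z^2), even starting points very far from the origin contract back toward one of the three cube roots of 1, so every basin of attraction reaches out to infinity. A minimal Python sketch (my own illustration, not any poster's code):

```python
import cmath

def newton_z3(z, max_iter=100, tol=1e-9):
    """Newton's method for f(z) = z^3 - 1: z -> z - (z^3 - 1)/(3 z^2)."""
    for _ in range(max_iter):
        if abs(z) < tol:          # derivative vanishes at the origin
            return None
        step = (z**3 - 1) / (3 * z**2)
        z -= step
        if abs(step) < tol:       # converged
            return z
    return None

# the three roots of z^3 = 1
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

# a starting point far from the origin still lands on one of the roots,
# because for large |z| the step z -> (2/3) z contracts toward the origin
z = newton_z3(complex(1000, 750))
print(min(abs(z - r) for r in roots))  # effectively zero
```

For large |z| the iteration behaves like z → (2z)/3, so the far field is just a geometric shrink toward the interesting region near the roots, which is why the far-out detail is "rather boring".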
What's cool about the Biocursion Muhtoombah video is that it has variation in the detail at high magnifications, which makes me think it's more like a Nova-type fractal, rather than a straight Newton's method. Title: Re: Bigger Newton's fractals Post by: gamma on January 26, 2009, 06:56:27 PM I believe that it would be impossible to CREATE brand new information with just a small equation. In other words, to get a bigger fractal they must add more addends, for example. I'm guessing that they combined sin(z), exp(z) and polynomials for Muhtoombah.
My experiments trying to get more complexity are going well... but I am stuck in the mathematics. It's not that I don't know math. These programs for solving equations don't work! Formulas have to be simplified, and it would be good to have mathematical meaning, like a type or something. Title: Re: Bigger Newton's fractals Post by: HPDZ on January 26, 2009, 09:51:38 PM I agree you can't really create information here. I also agree they probably used a complicated polynomial.
But I think it's possible to get a lot of complexity from a very simple equation ... like z=z^2+c. The necessary ingredient is feeding the result back into a nonlinear equation. That's why I suspect Muhtoombah was done with something more like the Nova equation, which is basically the relaxed Newton's method modified to include +c: x = x - k f(x)/f'(x) + c. A huge amount of complex, "interesting" (non-scale-invariant) detail can emerge from this form of iteration even with f(x)=x^3-1. A more complex polynomial may add more detail since it has more roots, but not necessarily the kind of detail that's interesting. For example, I recently posted a picture of Newton's method applied to sin(x). It's pretty, but totally scale-invariant, so it looks exactly the same at all magnifications as far as I can tell. Conversely, to get a fractal with infinite extent, you don't necessarily need a very complicated function: the basic x^3-1 function will give an infinite-extent fractal under Newton's method. But again, as with sin(x), it's totally scale-invariant and kind of boring. I'm not sure, but I think that in order to get details that change with magnification, like in the classic Mandelbrot set for z^2+c, you have to have something analogous to the "+c" part. It has at least been that way in my experience so far. I'm looking forward to the results of your experiments.... Title: Re: Bigger Newton's fractals Post by: David Makin on January 27, 2009, 02:25:57 AM If you guys have Ultra Fractal then you might like to play with mmf3.ufm:Multivariable Newton.
It's basically the Newton formula (Mandelbrot and Julia modes) extended to a full generalised 2D system (x,y) rather than being restricted to "plain" complex numbers (using the Jacobians). You can use the parameters to set the f(x,y)/g(x,y) system that is to be solved by the formula including adding in higher functions of x,y,xy etc. From the default the formula is actually a divergent one because the "Iterate the Product" option is enabled which multiplies the (x,y) value (treated as complex) to give a running product on each iteration prior to evaluating the Newton formula. Adding in trig functions in both f() and g() while using the "Iterate the Product" option can produce nice infinite tilings - examples using the formula (or a very similar one): http://website.lineone.net/~dave_makin/fractal_art_4.html http://website.lineone.net/~dave_makin/fractal_art_10.html http://website.lineone.net/~dave_makin/fractal_art_13.html http://website.lineone.net/~dave_makin/fractal_art_37.html http://website.lineone.net/~dave_makin/fractal_art_42.html http://website.lineone.net/~dave_makin/fractal_art_49.html http://website.lineone.net/~dave_makin/fractal_art_56.html Title: Re: Bigger Newton's fractals Post by: cKleinhuis on January 27, 2009, 04:41:26 AM :music: :w00t: :w00t: :w00t:
excellent images, especially 10,42 and 56, i have to play with that formula :D Title: Re: Bigger Newton's fractals Post by: David Makin on January 27, 2009, 06:47:25 PM :music: :w00t: :w00t: :w00t: excellent images, especially 10,42 and 56, i have to play with that formula :D Thanks very much. Note that once you change from the default parameters so that the formula is no longer conforming to true complex calculations it can take a while to find something that doesn't just look ugly :) Title: Re: Bigger Newton's fractals Post by: HPDZ on January 27, 2009, 06:56:11 PM I've only had a chance to take a very quick look but these images are awesome. I'll have to check out exactly what you've done mathematically and see if I can incorporate it into my software. I'll bet it is really s ... l ..... o .........w to render these due to the complexity of the calculations.
Title: Re: Bigger Newton's fractals Post by: David Makin on January 28, 2009, 12:18:22 AM Thanks.
As to render speed, it all depends on what you do with the parameters :) For instance "Cabbaged" http://website.lineone.net/~dave_makin/fractal_art_10.html renders in under 23 seconds at 640*480 on this old P4HT even though it is 6 layers. Title: Re: Bigger Newton's fractals Post by: gamma on January 28, 2009, 07:48:33 PM Take a look at this amazing work:
http://home.hccnet.nl/titia.1/fractal/white.html Title: Re: Bigger Newton's fractals Post by: cKleinhuis on January 29, 2009, 12:43:44 AM yes, this is titia, an amazing fractal artist, and a long-standing and appreciated member of this forum,
in fact, i have used this image for the banner on this site above, and exactly this image is also contained in our gallery :D, do not know if it actually competed in last year's spring competition ... :police: no, it was just an entry, look here: http://www.fractalforums.com/gallery/?sa=view;id=127 ... but the one you posted is somehow bigger ... :( Title: Re: Bigger Newton's fractals Post by: HPDZ on March 16, 2009, 12:40:49 AM On the subject, has anyone tried rendering a Newton formula with the real and imaginary components of the power varying with the x and y-axis? I finally got around to trying this today. I've posted five representative examples with various values for the starting location. To be precise, this shows the result of iterating the Newton's method formula applied to z^p-1=0, which is z = z*(p-1+z^-p)/p, where p is varied across the image: p=x+iy. The standard type of exponential smoothing is applied both to the convergent and divergent cases (I don't check which points do what). The branch cuts caused by the complex exponents have a sort of Picasso-like effect. It reminds me of how I feel in the morning sometimes when I haven't had enough coffee.... I obviously didn't spend a lot of time on the coloring; these are just meant to give a rough feel for what it looks like. The images have either 3x3 anti-aliasing or 5x5. The color mapping is done with the count histogram. This shouldn't be too hard to write in UF; maybe I'll do this as an exercise in learning how to write UF formulas. 
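The simplified step quoted above, z*(p-1+z^-p)/p, is algebraically the same as the raw Newton step z - (z^p-1)/(p*z^(p-1)), since z^(p-1) = z^p/z under the principal branch. A quick numerical sanity check in Python (my own, with arbitrary sample values):

```python
# raw Newton step for f(z) = z^p - 1
def newton_step_raw(z, p):
    return z - (z**p - 1) / (p * z**(p - 1))

# the simplified form: factor z out of the step
def newton_step_simplified(z, p):
    return z * (p - 1 + z**-p) / p

# arbitrary complex test values (not from the images)
z, p = complex(0.7, -1.3), complex(2.5, 0.4)
print(abs(newton_step_raw(z, p) - newton_step_simplified(z, p)))  # ~0
```

The simplified form needs one complex power (z^-p) instead of two (z^p and z^(p-1)), which matters because each complex power costs a log, an exp, and trig calls.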
(http://www.fractalforums.com/gallery/0/359_16_03_09_12_24_58_4.jpg) (http://www.fractalforums.com/gallery/0/359_16_03_09_12_24_57_3.jpg) (http://www.fractalforums.com/gallery/0/359_16_03_09_12_24_57_2.jpg) (http://www.fractalforums.com/gallery/0/359_16_03_09_12_24_57_1.jpg) (http://www.fractalforums.com/gallery/0/359_16_03_09_12_24_57_0.jpg) Title: Re: Bigger Newton's fractals Post by: cKleinhuis on March 16, 2009, 01:19:14 AM the first one shows a nice grid 3D structure :D
Title: Re: Bigger Newton's fractals Post by: gamma on March 19, 2009, 03:57:25 PM On the subject, has anyone tried rendering a Newton formula with the real and imaginary components of the power varying with the x and y-axis? I finally got around to trying this today. I've posted five representative examples with various values for the starting location. To be precise, this shows the result of iterating the Newton's method formula applied to z^p-1=0, which is z = z*(p-1+z^-p)/p, where p is varied across the image: p=x+iy. The standard type of exponential smoothing is applied both to the convergent and divergent cases (I don't check which points do what). Hi. It seems that changing the imaginary part often leads to a rotation/twisting or rather a "spiraling" effect, observable in the 4th and 5th images. Curves/contours are not closed then. Try real power. (phew "try real power"?) Title: Re: Bigger Newton's fractals Post by: HPDZ on March 19, 2009, 11:40:58 PM "Try real power" ... can you elaborate/clarify?
The images vary the exponent over the complex number plane, so using only a real power would mean the image would just be a single line... Perhaps my initial explanation wasn't clear, or perhaps you mean to try something else? Title: Re: Bigger Newton's fractals Post by: gamma on March 20, 2009, 03:35:51 PM "Try real power" ... can you elaborate/clarify? The images vary the exponent over the complex number plane, so using only a real power would mean the image would just be a single line... Perhaps my initial explanation wasn't clear, or perhaps you mean to try something else? Yes! Raising to a power with an imaginary part causes a kind of twisting effect, and we have these sharp lines interrupting the smooth, round, continuous flow of contours. You probably know all that. Then, p would slide only across the X axis. Title: Re: Bigger Newton's fractals Post by: HPDZ on March 21, 2009, 12:06:24 AM Right. The sharp interruptions are due to what are called "branch cuts" in the logarithm function that's used to calculate the complex exponentiation.
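The branch-cut effect is easy to demonstrate directly: with the principal logarithm, z^p = exp(p·log z) jumps discontinuously as z crosses the negative real axis, while it varies smoothly everywhere else. A small Python illustration (values chosen arbitrarily by me):

```python
import cmath

# complex power via the principal branch of the logarithm -
# this is where the branch cut along the negative real axis comes from
def cpow(z, p):
    return cmath.exp(p * cmath.log(z))

p = 0.5 + 0.3j

# two points straddling the negative real axis: arg jumps from +pi to -pi
above = cpow(complex(-1, 1e-9), p)
below = cpow(complex(-1, -1e-9), p)

# two points straddling the positive real axis: no cut, no jump
near_a = cpow(complex(1, 1e-9), p)
near_b = cpow(complex(1, -1e-9), p)

print(abs(above - below))    # a large, finite jump: the branch cut
print(abs(near_a - near_b))  # essentially zero: smooth away from the cut
```

Those jumps are exactly the sharp lines interrupting the otherwise smooth contours in the posted images.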
But if I were to only scan along the real axis, wouldn't the image just be a single line? Or perhaps you have another way of scanning that I'm not understanding? Title: Re: Bigger Newton's fractals Post by: gamma on March 21, 2009, 05:28:41 PM I would not know, I can't see from your specification. I tried some formula in Ultrafractal and added
p=p+1 in each iteration (loop section). The result is still very Newtonian looking. (at Harvard they take it as proof :-) Title: Re: Bigger Newton's fractals Post by: HPDZ on March 21, 2009, 06:27:56 PM Quote I would not know, I can't see from your specification. Well, what I did for the five images above is pretty straightforward: I iterated the Newton's method function corresponding to z^p-1=0 with some fixed initial guess z0, letting the power p equal the coordinates of the pixel in the image. In other words, p=x+iy, where x is the horizontal coordinate in the image and y is the vertical coordinate in the image. As the image point moves, x and y change, so p changes across the image. But p stays fixed for a given pixel on the image as the loop iterates. Quote I tried some formula in Ultrafractal and added p=p+1 in each iteration (loop section). What you describe here is quite different: It looks like you're changing p at each iteration, presumably starting from p=3 at the beginning of each pixel's iteration sequence. Interesting idea. I'll have to try it. I suppose a general increment dp, not necessarily 1, could be used as well.... Title: Re: Bigger Newton's fractals Post by: HPDZ on March 21, 2009, 10:15:09 PM I posted some examples of what happens when you increment the exponent to the "Rate my Fractal" forum under the topic "Exponent increment".
http://www.fractalforums.com/images-showcase-(rate-my-fractal)/exponent-increment/ (http://www.fractalforums.com/images-showcase-(rate-my-fractal)/exponent-increment/) Does this look like what you got? Title: Re: Bigger Newton's fractals Post by: gamma on March 22, 2009, 04:52:25 PM Quote I would not know, I can't see from your specification. Well, what I did for the five images above is pretty straightforward: I iterated the Newton's method function corresponding to z^p-1=0 with some fixed initial guess z0, letting the power p equal the coordinates of the pixel in the image. In other words, p=x+iy, where x is the horizontal coordinate in the image and y is the vertical coordinate in the image. As the image point moves, x and y change, so p changes across the image. But p stays fixed for a given pixel on the image as the loop iterates. Quote I tried some formula in Ultrafractal and added p=p+1 in each iteration (loop section). What you describe here is quite different: It looks like you're changing p at each iteration, presumably starting from p=3 at the beginning of each pixel's iteration sequence. Interesting idea. I'll have to try it. I suppose a general increment dp, not necessarily 1, could be used as well.... I understand. In Ultrafractal that would be, mystuff.ufm:

init:
  ; per pixel initialization
  complex zold
  complex z = #pixel  ; test every point of plane as starting guess; use (1,0) for "mandelbrot" mode
loop:
  zold = z
  z = z - (z^#pixel - 1) / (#pixel * z^(#pixel-1))
bailout:
  |z - zold| > 0.0000001

BTW, in outside coloring choose the file mt.ucl, Newton Basin, for coloring according to roots. Title: Re: Bigger Newton's fractals Post by: HPDZ on March 22, 2009, 05:31:11 PM Essentially, yes -- this is the case of the exponent varying with pixel location but staying constant as the iterations progress.
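For readers without Ultra Fractal, the per-pixel-power iteration described above can be sketched in plain Python (a rough translation of the idea, not anyone's actual code; the fixed starting guess and iteration-count colouring are my assumptions):

```python
def pixel_power_newton(p, z0=complex(2, 1), max_iter=60, tol=1e-7):
    """Newton's method for z^p - 1 = 0, with the exponent p fixed per pixel.

    Returns (iterations to settle, final value); the iteration count and
    the root reached would drive the colouring of that pixel.
    """
    z = z0
    for n in range(max_iter):
        zold = z
        z = z - (z**p - 1) / (p * z**(p - 1))
        if abs(z - zold) < tol:   # the |z - zold| bailout from the UF formula
            return n, z
    return max_iter, z

# at p = 3 this reduces to the classic cubic Newton fractal; the renderer
# would call this once per pixel with p = x + iy
n, z = pixel_power_newton(3)
print(n, z)
```

In a full renderer the outer loops simply walk the pixel grid, set p from the pixel coordinates, and colour by the returned count and root, exactly as gamma's formula plus the mt.ucl Newton Basin colouring does.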
I generated the images from my own program, which is in C++, but I have also tried using UltraFractal to replicate it. This is my first attempt to use UF's equation editor. :) My loop equation is the mathematically equivalent, simplified version of what you wrote. This form eliminates the redundant exponentiation (these are very expensive with complex exponents). I also added a couple of parameters. Code: init: Title: Re: Bigger Newton's fractals Post by: David Makin on March 23, 2009, 11:18:59 AM Essentially, yes -- this is the case of the exponent varying with pixel location but staying constant as the iterations progress. I generated the images from my own program, which is in C++, but I have also tried using UltraFractal to replicate it. This is my first attempt to use UF's equation editor. :) My loop equation is the mathematically equivalent, simplified version of what you wrote. This form eliminates the redundant exponentiation (these are very expensive with complex exponents). I also added a couple of parameters. Code: init: Note that unless you really need "p" to be a variable you should not assign #pixel to "p", you should use #pixel directly - tthe runtime version will be faster as it saves a level of indirection. Title: Re: Bigger Newton's fractals Post by: HPDZ on March 23, 2009, 05:26:41 PM Quote Note that unless you really need "p" to be a variable you should not assign #pixel to "p", you should use #pixel directly - tthe runtime version will be faster as it saves a level of indirection. Thanks for the advice, David. I just did it that way for ease of reading. I had no idea it would affect the execution speed. I suppose assumptions from my experience in C/C++ do not necessarily translate to UF coding. 
Title: Re: Bigger Newton's fractals Post by: David Makin on March 23, 2009, 09:45:14 PM Actually I may not be correct on second thoughts - it probably doesn't make that much difference in this case - p would probably end up a stack variable and #pixel would be at a fixed address, either would be cached and I can't remember if there's an overhead for indexing off the stack compared with reading from a fixed address :)
I'm now a bit over-eager to avoid using variables after finding out that UF does absolutely the most it can at compile time - for instance if you have a user parameter @power then you should use (@power-1) rather than assigning @power-1 to a variable - that will most definitely make a difference since using (@power-1) directly would mean no indirection or calculation at all in the run-time code. Title: Re: Bigger Newton's fractals Post by: Nahee_Enterprises on March 24, 2009, 11:19:07 AM Actually I may not be correct on second thoughts - .... ....I can't remember if there's an overhead for indexing off the stack compared with reading from a fixed address. I'm now a bit over-eager to avoid using variables after finding out that UF does absolutely the most it can at compile time - .... There should be a document made available to all formula writers on how the fractal application handles various situations, which includes what goes on within compilers and interpreters. It sure would make better formula writers, not to mention better formulae, if such detailed knowledge was required reading before everybody decided they wanted to try their hand at such things. :evil1: Title: Re: Bigger Newton's fractals Post by: David Makin on March 24, 2009, 12:05:24 PM There should be a document made available to all formula writers on how the fractal application handles various situations, which includes what goes on within compilers and interpreters. It sure would make better formula writers, not to mention better formulae, if such detailed knowledge was required reading before everybody decided they wanted to try their hand at such things. :evil1: Well the material is there - it's the Intel documentation on their assembly language - with respect to UF code just think what's the best way of converting the UF source to assembly code and that's basically what Frederik's compiler does :) Of course not everyone can be bothered learning assembly code.... 
Title: Re: Bigger Newton's fractals Post by: Nahee_Enterprises on March 24, 2009, 01:48:03 PM ....with respect to UF code just think what's the best way of converting the UF source to assembly code and that's basically what Frederik's compiler does :) Of course not everyone can be bothered learning assembly code.... Which is why there should be a document explaining exactly what the compiler is doing. Something that the many ordinary formula writers would be capable of understanding. A document that gives details and examples of the correct way to write formulas, and what happens when one does something otherwise. A document with various Tips, Tricks, and "I Gotchas" to assist the formula writer. It has been many years since I wrote code directly in Assembler Language. But I do know that this is something the average and/or new formula writer would never wish to learn. So, a document about the compiler and what happens should be essential to a fractal application. Title: Re: Bigger Newton's fractals Post by: HPDZ on March 24, 2009, 05:51:49 PM My two cents on some of these comments:
1. There is essentially no difference in the Intel processors between retrieving a variable from the stack versus some non-stack memory location. There's a slight overhead if you dereference a pointer versus accessing a direct address, maybe, depending on what the cache is doing and how well the look-ahead logic is working and which processor you have, etc. Generally, all the modern CPUs have filled the cache with the data your instruction will need long before the instruction executes, unless you're accessing a whole bunch of widely separated memory locations in a row.
2. Does the UF compiler actually compile down to machine code?
3. If UF does indeed produce "good" machine code, then by far the dominant time-consuming part of evaluating an expression such as z^p where p and z are both complex numbers will be all the trigonometric, exponential, and logarithmic functions involved. It will probably be on the order of 1000-10000 times as much time as accessing any number in memory, or any other housekeeping-type of operation in a fractal formula's loop (i.e. looking up a user parameter, incrementing and testing a loop control variable, etc.).
4. Documentation...is good, of course. One problem with documenting nitty-gritty details is that if you ever change anything, you have to change the documentation. But as far as documenting the UF language for purposes of speed optimization, I would speculate that it won't make much difference in most circumstances. If the code is getting down to the point where details like the CPU cache and branch prediction are important considerations, then documenting the compiler won't help much because the different flavors of CPU all do these things slightly differently (e.g. the Core2 works differently than the Pentium IV). 
When this kind of optimization is important, you generally need special performance-monitoring tools that can report on things like cache misses, branch prediction failures, and concurrency failures due to dependencies in the code. Often, these optimizations are tailored to a specific processor based on empirical testing. Both AMD and Intel are not good about documenting the details of the more sophisticated stuff in their processors (i.e. the cache algorithms, branch prediction algorithms, scheduling across multiple execution units, etc), I suspect because it's proprietary information they don't want to give away. 5. That said, of course, good programming practices should be used, I'm sure. I suspect that's what Nahee_Enterprises was referring to.... Things like saving results of complicated calculations that are constant across a loop rather than calculating them over and over in each loop iteration, etc. I'd love to help with this, but I don't know if my experience with C and assembly language programming has much applicability to UF programming, which I am still quite new at. Title: Re: Bigger Newton's fractals Post by: David Makin on March 24, 2009, 06:08:27 PM 2. Does the UF compiler actually compile down to machine code? 3. If UF does indeed produce "good" machine code, then by far the dominant time-consuming part of evaluating an expression such as z^p where p and z are both complex numbers will be all the trigonometric, exponential, and logarithmic functions involved. It will probably be on the order of 1000-10000 times as much time as accessing any number in memory, or any other housekeeping-type of operation in a fractal forumla's loop (i.e. looking up a user parameter, incrementing and testing a loop conrol variable, etc.). 2. Yes I think so (it would be a whole lot slower otherwise :) 3. Point taken on the times. 
Actually I'd guess evaluating z^p is probably only around 500* slower than looking up a variable on a modern FPU - also if that's what your formula needs to do you can't optimise that, you can only optimise what it's possible to optimise. Even shaving 0.1% off the render time can make a big difference when doing large disk renders or long animations (that may take days or weeks in total). Title: Re: Bigger Newton's fractals Post by: David Makin on March 24, 2009, 07:44:26 PM <snip> Actually I'd guess evaluating z^p is probably only around 500* slower than looking up a variable on a modern FPU I meant if computing the full complex calculation. I just checked on the following to verify in UF: If you have a value z^p where p is a run-time variable then UF has to compile allowing for computation of the full complex calculation. If however p is a known value at compile time (such as using z^(@power-1)) then UF checks the value and compiles the code accordingly - so raising to small integer powers 2,3,4 etc. is orders of magnitude faster; in fact it's quite a bit faster if the compile-time value is just entirely real or entirely imaginary. Title: Re: Bigger Newton's fractals Post by: Nahee_Enterprises on March 24, 2009, 07:52:01 PM 4. Documentation...is good, of course. One problem with documenting nitty-gritty details is that if you ever change anything, you have to change the documentation. ..... 5. That said, of course, good programming practices should be used, I'm sure. I suspect that's what Nahee_Enterprises was referring to.... Things like saving results of complicated calculations that are constant across a loop rather than calculating them over and over in each loop iteration, etc. Thank you, that was pretty much what I was getting at. And without sufficient documentation, it is difficult to have good programming practices (something I have always tried to follow for many years, and something I attempt to teach my students). 
Yes, change usually does require updating the documentation. Just something that needs to be done with every version/release level, whether it be an Installation Guide, a User Manual or a Formula Writer's Guide. :D Title: Re: Bigger Newton's fractals Post by: David Makin on March 24, 2009, 08:57:50 PM 4. Documentation...is good, of course. One problem with documenting nitty-gritty details is that if you ever change anything, you have to change the documentation. ..... 5. That said, of course, good programming practices should be used, I'm sure. I suspect that's what Nahee_Enterprises was referring to.... Things like saving results of complicated calculations that are constant across a loop rather than calculating them over and over in each loop iteration, etc. Thank you, that was pretty much what I was getting at. And without sufficient documentation, it is difficult to have good programming practices (something I have always tried to follow for many years, and something I attempt to teach my students). Yes, change usually does require updating the documentation. Just something that needs to be done with every version/release level, whether it be an Installation Guide, a User Manual or a Formula Writer's Guide. :D Hi Paul - looks like you're after the sort of documentation that isn't even included in the Visual Studio manuals, or the Xcode manuals...... :D Title: Re: Bigger Newton's fractals Post by: gamma on March 25, 2009, 12:40:49 AM Matlab is probably the slowest environment based on some continuing standard from 1915. Yet, it is so nice to get back to an interpreter,
just coding, having fun... but it can link and has a C compiler. Title: Re: Bigger Newton's fractals Post by: Nahee_Enterprises on March 26, 2009, 01:52:43 AM Hi Paul - looks like you're after the sort of documentation that isn't even included in the Visual Studio manuals, or the Xcode manuals...... :D In my own opinion, Microsoft has never been able to produce good documentation. And that includes their books, CDs, PDFs, Help files (plain, compiled, or HTML), or anything they have Online. I was thinking of something more like what O'Reilly publishes with their technical books, in particular their "In A Nutshell" series. Or maybe something like the old IBM programming manuals from the mainframe days. :D :evil1: Title: Re: Bigger Newton's fractals Post by: lycium on March 26, 2009, 08:32:22 AM In my own opinion, Microsoft has never been able to produce good documentation. While that's generally the case, "never" is perhaps too strong - I have fond childhood memories of QBASIC's help system, and these days find MSDN to be quite helpful, especially the DirectX documentation and examples. Still, the unimaginable worldwide frustration caused by Clippy (http://en.wikipedia.org/wiki/Office_Assistant) will not soon be forgotten! Title: Re: Bigger Newton's fractals Post by: HPDZ on March 27, 2009, 01:00:35 AM Quote Actually I'd guess evaluating z^p is probably only around 500* slower than looking up a variable on a modern FPU The "1000-10000" figure I put out was pretty much a SWAG, partially influenced by my outdated memory of the old 8087. A quick check of the relevant documentation for modern CPUs shows your estimate of 500 is closer to the truth. We need one log, one exp, two trig functions, and a few multiplications. Depending on the exact CPU type, this can all be done with a throughput of about 200-300 clock cycles, ignoring load/save time for intermediate calculations, and neglecting any cache or latency issues. Loads have a throughput of 1 clock cycle usually. 
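The "one log, one exp, two trig functions" count above can be made concrete by writing the complex power out in the real operations it decomposes into (a sketch of the textbook polar-form identity, not UF's or anyone's actual implementation; the polar conversion adds a hypot and an atan2 on top of the count):

```python
import math

def cpow_by_hand(z, p):
    """z^p for complex z, p via z^p = exp(p * (ln r + i*theta)), z = r*e^(i*theta)."""
    r = math.hypot(z.real, z.imag)      # |z|
    theta = math.atan2(z.imag, z.real)  # arg z (principal branch)
    a, b = p.real, p.imag
    # real and imaginary parts of p * log z, then one real exp and two trig calls
    mag = math.exp(a * math.log(r) - b * theta)
    ang = a * theta + b * math.log(r)
    return complex(mag * math.cos(ang), mag * math.sin(ang))

# arbitrary sample values; agrees with Python's built-in complex power
z, p = complex(0.8, 0.7), complex(2.0, -0.5)
print(abs(cpow_by_hand(z, p) - z**p))  # ~0
```

Every one of those transcendental calls costs hundreds of cycles compared to roughly one cycle for a cached load, which is why per-pixel complex powers dominate the render time.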
Title: Re: Bigger Newton's fractals Post by: lycium on March 27, 2009, 08:39:29 AM Quote Actually I'd guess evaluating z^p is probably only around 500* slower than looking up a variable on a modern FPU The "1000-10000" figure I put out was pretty much a SWAG, partially influenced by my outdated memory of the old 8087. A quick check of the relevant documentation for modern CPUs shows your estimate of 500 is closer to the truth. We need one log, one exp, two trig functions, and a few multiplications. Depending on the exact CPU type, this can all be done with a throughput of about 200-300 clock cycles, ignoring load/save time for intermediate calculations, and neglecting any cache or latency issues. Loads have a throughput of 1 clock cycle usually. memory accesses run into the thousands of cpu cycles these days, partly due to these things being massively speculative and superscalar; x86 performance programming has been all about caching and careful memory access since the pentium1. so while memory access has gotten much much faster (in both latency and throughput -- ~50ns and 25gb/s these days), computational throughput has grown exponentially faster still to the point where you have arrays of x86 threads, all memory-starved! it is generally faster to recompute a lot of things (even a 3-vector normalisation, the evil 1/sqrt pair) than to do an uncached load. anyway, no more speculation: these things can easily be measured. on something like a core 2 duo (and newer) you'll be really amazed at what you find O0 Title: Re: Bigger Newton's fractals Post by: lycium on March 27, 2009, 09:06:53 AM here is a really good article: http://people.redhat.com/drepper/cpumemory.pdf
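The recompute-versus-load trade-off (including the 3-vector normalisation example above) can be shown as a hedged sketch. The dictionary "cache" here is only a stand-in for stored intermediate results, and real timings depend entirely on the CPU's actual cache behaviour, so only correctness is demonstrated:

```python
import math

def normalise(v):
    # The recompute path: one sqrt, one divide and three multiplies per call.
    inv = 1.0 / math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
    return (v[0] * inv, v[1] * inv, v[2] * inv)

vectors = [(1.0, 2.0, 2.0), (3.0, 0.0, 4.0), (0.0, 5.0, 12.0)]

# The store-and-load path: precompute once, look results up later.
cache = {v: normalise(v) for v in vectors}

# Both paths agree exactly; whether the lookup is actually faster
# depends on whether the stored table stays resident in cache.
print(all(cache[v] == normalise(v) for v in vectors))  # True
```

Measuring which path wins on a given machine is exactly the kind of experiment the Drepper article linked above equips you to interpret.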
Title: Re: Bigger Newton's fractals Post by: HPDZ on March 28, 2009, 12:48:05 AM Quote memory accesses run into the thousands of cpu cycles these days, partly due to these things being massively speculative and superscalar; x86 performance programming has been all about caching and careful memory access since the pentium1. I agree -- dealing with a cache miss can indeed take thousands of cycles, or more. If something's in the cache, the P-IV and Core2 CPUs can get at it with a throughput of 1 cycle usually. And many register-based instructions can achieve throughputs of more than 1 instruction per cycle. Quote anyway, no more speculation: these things can easily be measured. on something like a core 2 duo (and newer) you'll be really amazed at what you find O0 Yes. I wish I had the time to do this! I'm sure it would be educational. I can say that my high-precision multiplication code runs significantly faster on my Core2 system than it does on my P-IV system, even without adjusting for the clock speed difference (2.4 GHz Core2 vs 3.2 GHz P-IV) so the more advanced architecture is definitely a major factor. I guess the main point to reiterate is that most efforts for optimizing code for fractal generation should focus on the parts of the computation that involve transcendental operations. The number of such instructions should be minimized. Formula writers should be aware that multiplying, dividing, and especially performing exponentiation or transcendental operations on complex numbers is much more computationally intensive than doing the same operations on real numbers. Formula code should be carefully structured to minimize such operations. That said, due to the superscalar and highly cached architecture of modern CPUs (everything beyond 486-era units) many counterintuitive effects can arise in timings. A cache miss caused by an unfortunate, but well-intentioned, attempt to save an intermediate result can indeed cost more than re-doing the calculation.
This was something I didn't think of in my earlier comments. Interesting discussion. Title: Re: Bigger Newton's fractals Post by: David Makin on March 29, 2009, 05:53:53 PM My take on this is that if your code is going to be around for a while then you should optimise based on likely improvements in processor technology.
The obvious thing is to support multi-threading. But on a single thread if there's a choice between optimising using more memory (at least for data) and using more FPU power (transcendentals) then I'd normally go for the more memory option on the grounds that improvements in cache sizes and cache handling are more likely than improvements in the raw cycles required for transcendentals or indeed for improved FPU pipelining. Title: Re: Bigger Newton's fractals Post by: HPDZ on March 29, 2009, 07:52:36 PM Quote My take on this is that if your code is going to be around for a while then you should optimise based on likely improvements in processor technology. Predictions are tricky, especially when they're about the future.... :) Quote The obvious thing is to support multi-threading. Yes. Fortunately, most fractals are "embarrassingly parallel"-type problems that benefit greatly from multi-threading. Quote But on a single thread if there's a choice between optimising using more memory (at least for data) and using more FPU power (transcendentals) then I'd normally go for the more memory option on the grounds that improvements in cache sizes and cache handling are more likely than improvements in the raw cycles required for transcendentals or indeed for improved FPU pipelining. Sadly, that last part about likely emphasis in future CPU engineering is probably true. Along the same lines, I have watched the MMX/SSE instruction set grow over the years and continue to remain frustrated that a few simple instructions that would be very useful for arbitrary-precision arithmetic have not been added. Similarly for the CUDA instructions in the GPUs (at least the last time I checked, about 6-8 months ago). Title: Re: Bigger Newton's fractals Post by: lycium on March 30, 2009, 07:28:00 AM Quote Predictions are tricky, especially when they're about the future. niels bohr O0 Quote Yes. Fortunately, most fractals are "embarrassingly parallel"-type problems that benefit greatly from multi-threading.
if only the "most" part were true: for IFS fractal rendering, all bets are off since the computation is very "irregular". Quote Along the same lines, I have watched the MMX/SSE instruction set grow over the years and continue to remain frustrated that a few simple instructions that would be very useful for [...] have not been added. yeah, intel royally messed up sse, and continue to do so in later revisions (with decreasing severity). then again, they aren't really known for their intelligent ISA designs ;)
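To close out the "embarrassingly parallel" point: each pixel of a Newton fractal depends only on its own starting point, so rows can be handed to separate workers independently. A minimal Python sketch follows; the grid size, iteration cap and the z^3 - 1 polynomial are all illustrative choices, and CPython threads only demonstrate the independence (the GIL means real speedups need processes or native threads):

```python
from concurrent.futures import ThreadPoolExecutor

def newton_row(y, width=32, height=32, max_iter=30):
    """Iteration counts for Newton's method on z^3 - 1 along one pixel row.

    Each pixel depends only on its own starting point, which is what
    makes the whole image embarrassingly parallel.
    """
    row = []
    for x in range(width):
        z = complex(-2.0 + 4.0 * x / width, -2.0 + 4.0 * y / height)
        steps = 0
        for steps in range(max_iter):
            if z == 0:                            # Newton step undefined at 0
                break
            z = z - (z ** 3 - 1) / (3 * z ** 2)   # one Newton-Raphson step
            if abs(z ** 3 - 1) < 1e-9:            # converged to a root
                break
        row.append(steps)
    return row

# Render the same grid serially and with a worker pool: rows are
# independent, so the two results must match exactly.
serial = [newton_row(y) for y in range(32)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(newton_row, range(32)))
print(serial == parallel)  # True
```

Swapping the thread pool for a process pool (e.g. `multiprocessing.Pool` under a `__main__` guard) keeps the same `map` structure while actually using multiple cores, which is the IFS caveat aside -- why most escape-time and root-finding fractals scale so well.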