Author Topic: Charles says "Hi"  (Read 9284 times)
lycium
« Reply #30 on: November 03, 2006, 11:14:38 AM »

well, i've had my fun: http://www.fractographer.com/propaganda/fastjulia.zip

parameters: c = -0.74543 + 0.11301i, rendered on [-1.25,1.25]x[-1,1] at 1536x1024 with 12*12 supersamples per pixel, up to 128 iterations.

with 1 32bit thread it completes in 1:15m, with 2 64bit threads it's a bit less than half as long. i'd be interested to know what kind of times people get! it's still not completely optimised (in particular i'd like to add streaming non-temporal memory writes), but already i think one would be hard pressed to match this kind of performance and accuracy in straight assembly language without quite a significant time investment; the c++ code in its entirety is 190 lines, written in a little over two hours - if anyone wants to see it just shout.
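[Editor's note: for readers who want to reproduce these numbers, here is a minimal scalar sketch of the escape-time Julia iteration with the parameters above. This is an illustration, not lycium's actual 190-line renderer — it omits the supersampling, SIMD and threading he describes.]

```cpp
// Minimal escape-time Julia iteration for c = -0.74543 + 0.11301i.
// Returns the number of iterations before |z| exceeds 2 (the escape
// radius), or max_iter if the orbit never escapes.
#include <complex>

int julia_iters(double x, double y, int max_iter) {
    const std::complex<double> c(-0.74543, 0.11301);  // parameters from this post
    std::complex<double> z(x, y);
    for (int i = 0; i < max_iter; ++i) {
        if (std::norm(z) > 4.0) return i;  // norm = |z|^2, so this tests |z| > 2
        z = z * z + c;
    }
    return max_iter;  // treated as inside the filled Julia set
}
```

Rendering the full image is then just a double loop over the [-1.25,1.25]x[-1,1] rectangle, mapping each pixel's iteration count to a colour.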

last thing: i do hope this will be seen as my taking an opportunity to have some fun while showing that c++ isn't necessarily slower than even well-written asm - i too used to really enjoy my "down to the metal" programming before it all got too complicated, and having found that simple higher-level code in fact ends up faster (floating point code is also more accurate across large scales, which matters a lot for anti-aliasing), well, i just wanted to share that.

Charleswehner
Guest
« Reply #31 on: November 03, 2006, 01:42:28 PM »

I do not accept challenges, nor give out challenges. Life is not war.

However, I have been there and done that. I have been in the world of floating-point, and way beyond 64 bit. I wrote mathematics to a precision of 65 places of DECIMALS. That was 28 BYTES (mantissa) and two bytes exponent.

A wicked government made problems for me - I lost most of my life's possessions. However, I rebuilt some of my earlier work.

The floating-point core is at http://www.wehner.org/fpoint.

When a language like C or C++ is created, somebody first writes the inner core in assembler. When Inmos introduced the Transputer for supercomputers, they said that nobody ever programs in assembler any more. That is a joke. At an exhibition I met Sir Clive Sinclair, and repeated the joke. He said "That is what they want you to believe". Later, Inmos published the op-codes of the Transputer (which they had been keeping secret).

Here is another joke. Forth is written in Forth. I created my own version of "super-forth", with many added features. Forth is the only officially annotated list of subroutines. It can barely be called a language. However, it is the nuts-and-bolts of computer programs, and I think it is essential knowledge.

So Forth is "written in Forth"?  Suppose that on a RISC processor there is no subtract instruction. We write : - neg + ;, where the colon starts the definition and the semicolon ends it. It simply negates the subtrahend and then adds, the other operand staying the same. Now we write : + neg - ;. This negates the second operand and calls the subtraction. So the addition calls the subtraction, which calls the addition, which calls the subtraction, until the return stack overflows and the machine crashes. Forth is not written in Forth. It violates Lord Russell's rule of set theory that a set of subroutines cannot be written in that set of subroutines - sets cannot contain themselves. A subset of Forth - the colon definitions - is written in Forth.
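[Editor's note: the circular pair of definitions Charles describes can be sketched in C++. This is a hypothetical simulation — an explicit depth counter stands in for the return stack — showing that with no primitive definition underneath, the two words can only call each other until the stack is exhausted.]

```cpp
// Simulate the circular Forth pair ": - neg + ;" and ": + neg - ;".
// Each call consumes one slot of a simulated return stack; returns false
// when the stack "overflows" (the computation never bottoms out).
bool forth_add(int a, int b, int depth, int& result);

bool forth_sub(int a, int b, int depth, int& result) {
    if (depth == 0) return false;                // return stack overflow
    return forth_add(a, -b, depth - 1, result);  // : - neg + ;
}

bool forth_add(int a, int b, int depth, int& result) {
    if (depth == 0) return false;                // return stack overflow
    return forth_sub(a, -b, depth - 1, result);  // : + neg - ;
}
```

However large the simulated return stack, neither word ever produces a result — exactly the crash Charles describes.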

I was telling an expert that I had built a Forth with lots of machine-code, and was converting the subroutines ("words") one by one into colon definitions. "When the last one is converted", I said, "there will be no native machine-code left, and it will run on any processor". He said "That's nice", without any further comment.

Explain to me how a machine can run without a processor. If there were no machine code, one could remove the CPU and the system would still run - in one's dreams!

So there is always machine code somewhere - even if it is hidden from the programmer.

Charles
heneganj
Guest
« Reply #32 on: November 03, 2006, 06:15:57 PM »

Quote
set of subroutines cannot be written in that set of subroutines

If you call DNA genetic code a series of subroutines clearly it can, or have I spectacularly misunderstood?  (again!)
lycium
« Reply #33 on: November 03, 2006, 06:43:45 PM »

Quote
I do not accept challenges, nor give out challenges. Life is not war.

i was very explicit in pointing out that i don't see this as any kind of war :/ that two programmers interested in low-level efficiency might compare performance ideals is natural, no?

frankly i'm very interested to see how fast your integer-asm iteration goes, and if i could egg you on to produce a strong result while having a bit of fun myself (i mentioned also that my forthcoming contract work also involves similar optimisation), where's the harm? i really must say i'm disappointed with this negative interpretation of what i thought might be a bit of fun. producing a 24576x16384 image (1.18gb) with 12*12 supersamples per pixel of up to 256 iterations in under 5 1/2 hours is fun for me, at least.

Quote
However, I have been there and done that. I have been in the world of floating-point, and way beyond 64 bit. I wrote mathematics to a precision of 65 places of DECIMALS. That was 28 BYTES (mantissa) and two bytes exponent.

holy smokes... what kind of application needs such precision?! i know pi off by heart to sufficiently many decimals (far less than 65 i can assure) to describe a circle from the sun to pluto accurate to a metre, and that's widely considered sufficiently insane ;)

Quote
The floating-point core is at http://www.wehner.org/fpoint.

i read that, but must admit it's a bit secondary to my interests since i need accuracy across many scales. modern architectures also shun the horribly inefficient stack-based x87 architecture in favour of a direct-access scalar computation model offered by cpus starting with the pentium4 (which itself has been around for quite some time).

furthermore, i must admit i "cheated" and used 128bit simd instructions - they admit a remarkably efficient mapping to escape time fractal computation. on my amd k8 architecture box such 128bit operations have a latency of 2 cycles, and on the latest intels (so-called "core" architecture) they take only 1 cycle - a true 128bit machine! furthermore, there are now quad-core versions of that architecture, scaling to 3.4ghz. i'm going to look into getting a benchmark result from one of those machines soon; i would like to point out that this incredible performance scaling happens with exactly the same source code and binary, flexibility which i believe is not easily achieved without using an intermediate language (ignoring the fact that it took zero effort on my part - as it should be, since it's a very mechanical job to replicate jobs across cores and shouldn't be a human burden if possible).

the relevance of all of the above is that the straightforward c/c++ code would need just a recompile to make very good use of these new technologies. what's more, the way i wrote those 128bit operations is via so-called "intrinsics", which are essentially a bunch of macros and wrapper functions that map (often directly) to the intended cpu instructions. the difference is this: the operands are just variables, and the compiler handles the arduous and combinatorial task of instruction scheduling and register allocation. you're still coding the algorithm very close to the metal, but thanks to this elegant method of specifying it the compiler, which can try so very very many different combinations of code and measure which is fastest in the blink of an eye, is free to produce whatever is most efficient given the architectural resources available. i find it very difficult to believe that this isn't a huge step up from directly writing assembly in terms of resulting code speed, programmer effort and sanity, ease of debugging, future proofing, ...
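[Editor's note: the lane-wise mapping lycium describes can be sketched without intrinsics. The loop below (an editorial illustration, not his code) processes four pixels in lockstep — the scalar shape that 128-bit SSE intrinsics such as _mm_mul_ps turn into single vector instructions, with the per-lane "done" flags playing the role of a SIMD mask.]

```cpp
// Iterate z <- z*z + c for four pixels at once, in lockstep.
// Each inner loop over l corresponds to one 4-wide vector operation.
void julia4(const float zr0[4], const float zi0[4], float cr, float ci,
            int max_iter, int out_iters[4]) {
    float zr[4], zi[4];
    bool done[4];
    for (int l = 0; l < 4; ++l) {
        zr[l] = zr0[l];
        zi[l] = zi0[l];
        done[l] = false;
        out_iters[l] = max_iter;  // default: never escaped
    }
    for (int i = 0; i < max_iter; ++i) {
        for (int l = 0; l < 4; ++l) {  // the "mask": finished lanes idle
            if (done[l]) continue;
            float r2 = zr[l] * zr[l], i2 = zi[l] * zi[l];
            if (r2 + i2 > 4.0f) {      // |z| > 2: this lane escapes
                done[l] = true;
                out_iters[l] = i;
                continue;
            }
            float nr = r2 - i2 + cr;   // real part of z*z + c
            zi[l] = 2.0f * zr[l] * zi[l] + ci;
            zr[l] = nr;
        }
    }
}
```

The point of the intrinsics version is that the compiler collapses each lane loop into one instruction and handles register allocation and scheduling itself.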

Quote
When a language like C, or C++ is created, somebody first writes the inner core in assembler. When Inmos introduced the Transputer for supercomputers, they said that nobody ever programs in assembler any more. That is a joke. At an exhibition I met Sir Clive Sinclair, and repeated the joke. He said "That is what they want you to believe". Later, Inmos published the op-codes of the Transputer (which they had been keeping secret).

everyone who's programmed in asm has encountered such ignorant attitudes towards their beloved "dark art". i'm 23, and already feel rather old telling younger programmers war stories about optimising inner loops for the pentium, trying to best take advantage of both the u and the famously-crippled v pipeline via careful scheduling. such an attitude is unfortunately, however, rather justified both in terms of business sense (development is costly, in most cases execution power is dirt cheap, so these days code is more for human consumption than machine consumption) and in terms of common sense: if cpus are so very capable of re-arranging in-flight instructions to perform well, coupled with incredibly good optimising compilers... well, it just makes sense to focus more on what really matters - getting the algorithm very clean and minimal, avoiding poor cache behavior. that's the #1 performance killer these days, since cpus run at gigahertz and memory at megahertz, and a cache miss costs hundreds of cycles (the equivalent of a great many alu operations).

- i hope you don't mind if i don't comment on your forth war stories; while they are certainly appreciable i am far too ignorant of the language and its history to provide meaningful commentary in that regard -

Quote
Explain to me how a machine can run without a processor. If there is no machine code, one can removed the CPU and the system will still run - in one's dreams!

So there is always machine code somewhere - even if it is hidden from the programmer.

this is the crux of my little discussion, and the main thrust of why i wrote the program i linked above: i am not suggesting that assembly is dead or useless, only that there are much easier ways to achieve much better results (given the metrics referenced previously) for the same time investment as would be spent on writing it in pure asm. that is the whole of it - there is no war - i saw that you were frustrated by things with which one should not be frustrated in 2006/2007, thought about how much you stand to gain by a slight change of gear (the way i code is still very close to the machine), and sought to try and show it to you. please don't read animosity into this :/
« Last Edit: November 03, 2006, 06:48:49 PM by lycium »

Charleswehner
Guest
« Reply #34 on: November 04, 2006, 03:06:14 PM »

Heneganj wrote about a set of parts containing itself:

"If you call DNA genetic code a series of subroutines clearly it can, or have I spectacularly misunderstood?  (again!)"

You have hit the nail right on the head. Biologists are not mathematicians, so they got it wrong - and teach others wrong.

Here is the truth.

Something is always missing from the set of parts that came from the parents. This "missingness" causes childhood. From the environment, in subtle ways, the child proceeds to update itself. That is adolescence. When adult, the new being can reproduce.

So no child is a perfect clone of an adult. Not even a virus is a perfect clone of its parent.

This is the motor that drives evolution. It puts variety into the world.

Charles
Charleswehner
Guest
« Reply #35 on: November 05, 2006, 02:18:41 PM »

I am still looking for the logical error in my program. It is not a bug. I have also been searching for my old Almondbread Qbasic program - to check the algorithm. No luck yet.

However, I changed the constant in the above program from -1 to minus one-and-a-quarter. Here is the result:


The full image is at http://wehner.org/tools/fractals/first/neg5by4.gif

There seems to be some fire inside it. I magnified it 256 times linear (65536 times area):


Larger image at http://wehner.org/tools/fractals/first/ravine.gif

A change of palette makes it look very fractal-like:


Larger image at http://wehner.org/tools/fractals/first/ravine4.gif

It is just sunset in the ravine where the turnips grow.

Charles
Jules Ruis
« Reply #36 on: November 05, 2006, 02:33:17 PM »

Please Charles, repeat your question.
Which mathematical formula do you use? Is this a Julia set of z = z^2 + c (with c = -1.5 or thereabouts)?

Jules.

Jules J.C.M. Ruis
www.fractal.org
Zoom
Guest
« Reply #37 on: November 06, 2006, 02:20:45 AM »

Quote
However, I have been there and done that. I have been in the world of floating-point, and way beyond 64 bit. I wrote mathematics to a precision of 65 places of DECIMALS. That was 28 BYTES (mantissa) and two bytes exponent.

Quote
holy smokes... what kind of application needs such precision?!

Fractal programs. Also you may want to ask about the number of digits calculated in GIMPS.
« Last Edit: November 06, 2006, 02:22:30 AM by Zoom »
Charleswehner
Guest
« Reply #38 on: November 06, 2006, 01:59:50 PM »

I remember that when I wrote "Almondbread", I had received mis/dis-information, and had to correct that. Indeed, wherever I looked on the Web, I came across Z <- Z*Z + C. As I studied this, by the method of al-Khwarizmi (looking at all the "parts" of the numbers), it became clear that this is a symmetrical formula - and must be WRONG for Mandelbrot.

I considered that perhaps it was really Z <- Z*(Z+C). Applied twice, that expands to Z^4 + 2CZ^3 + C^2*Z^2 + C*Z^2 + C^2*Z. Well, I tried it:


Other images are at http://wehner.org/tools/fractals/grass/grass.gif and http://wehner.org/tools/fractals/grass/grass3.gif .
The source code is at http://wehner.org/tools/fractals/grass/grass.asm .
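[Editor's note: the expansion of two applications of Z <- Z*(Z+C) can be sanity-checked numerically. The sketch below is an editorial C++ illustration, unrelated to the assembler programs linked above: it applies the map twice and compares against the multiplied-out polynomial (Z^2+CZ)^2 + C(Z^2+CZ).]

```cpp
// Check that iterating Z <- Z*(Z+C) twice equals the expanded
// degree-4 polynomial Z^4 + 2CZ^3 + C^2 Z^2 + C Z^2 + C^2 Z.
#include <complex>

using Cx = std::complex<double>;

Cx step(Cx z, Cx c) { return z * (z + c); }  // one application of the map

Cx expanded(Cx z, Cx c) {                    // (Z^2+CZ)^2 + C(Z^2+CZ), multiplied out
    return z*z*z*z + 2.0*c*z*z*z + c*c*z*z + c*z*z + c*c*z;
}
```

For any sample Z and C the two agree to rounding error, which confirms the algebra.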

You can go to http://wehner.org/tools/fractals/NASM.EXE to download the assembler. It comes from http://www.gnu.org .
The package available from Gnu has the disassembler, and many useful files. To assemble grass.asm, put it together with nasm.exe in a directory.

From the DOS prompt, type nasm grass.asm followed by rename grass grass.com followed by grass. It will make a file called Owl.bmp which you can edit with picture-manipulating software like Photoshop, Photosuite, Paintshop-Pro or IrfanView and then save as a GIF.

I realised that this was not the answer, and moved on to try Z_Julia(new) <- Z_Julia(old)^2 + Z_Mandelbrot. Here the Julia co-ordinates are the working co-ordinates whilst the Mandelbrot ones are the starting conditions.
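[Editor's note: this recipe is the standard Mandelbrot iteration. A minimal C++ sketch follows — an editorial illustration, not a translation of man.asm; it starts the working value at zero, which differs from starting at the pixel coordinate only by one iteration.]

```cpp
// Z_Julia <- Z_Julia^2 + Z_Mandelbrot: the pixel coordinate is held
// fixed as the starting condition, Z_Julia is the working coordinate.
#include <complex>

int mandel_iters(double x, double y, int max_iter) {
    const std::complex<double> zm(x, y);  // starting condition: the pixel
    std::complex<double> z(0.0, 0.0);     // working coordinate
    for (int i = 0; i < max_iter; ++i) {
        if (std::norm(z) > 4.0) return i; // |Z| > 2: the orbit escapes
        z = z * z + zm;
    }
    return max_iter;                      // never escaped: inside the set
}
```

Colouring each pixel by the returned count gives exactly the kind of iteration-band images shown in this thread.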

What I got was this:


The file that makes it is http://wehner.org/tools/fractals/man/man.asm

Another image is at http://wehner.org/tools/fractals/man/man.gif

Enlarged, it becomes this:


The larger image is http://wehner.org/tools/fractals/man/man2.gif

The source is at http://wehner.org/tools/fractals/man/man2.asm

At last I have the tools for my research.

Charles
lycium
« Reply #39 on: November 07, 2006, 07:19:59 AM »

Quote
Fractal programs. Also you may want to ask about the number of digits calculated in GIMPS.

fractal apps don't need anywhere near that kind of precision, since the number of pixels on the screen is limited to at most 5 orders of magnitude; what's more, since floating point numbers have the same relative accuracy across many scales, even with that resolution you can supersample to your heart's content - even with normal 32bit ieee754 i can readily produce really good looking 24k x 16k images with tons of supersampling. even if fractal computations were carried out with "infinite precision", no one would be able to tell the difference between that and a normal 32bit float render.

about gimps: that's totally different, and inherently integer-based; ie, you don't need to represent stuff like 1/3, only ever-expanding integers. when multiplying huge numbers they actually use the fast fourier transform, which brings the cost down to roughly n log n from the schoolbook n^2, and afaik such computation doesn't extend to computing square roots, exponents, and stuff like that. the problem is definitely constrained compared to fully generic real arithmetic.

Charleswehner
Guest
« Reply #40 on: November 08, 2006, 04:31:29 PM »

Yes, I have indeed got the algorithm sorted out. Here is the rosette from the image I showed earlier, but much enlarged:


It is available larger at http://wehner.org/tools/fractals/man/man8.gif . The source code is at http://wehner.org/tools/fractals/man/man8.asm

These programs differ only in the parameters BEGINX, BEGINY, STEPX and STEPY. When carefully and methodically altered, these parameters can bring in images from any part of the Mandelbrot set. Readers are free to take my programs and use them to make their own images.

Here is another view, from below that rosette:


Working from an internet cafe, where the disk drive barely works, I was unable to upload man6.gif - the larger version.

The axes can be seen clearly here:


Another image to examine is http://wehner.org/tools/fractals/man/man5.gif

I have said "Hi", by re-establishing myself as a Fractalist. You have seen my method of working. I shall now proceed with the research, and hope to show you some of the more obscure corners of the art, as soon as I have something worth showing.

Charles

GFWorld
« Reply #41 on: November 08, 2006, 06:47:23 PM »

Charles wrote >I have said "Hi", by re-establishing myself as a Fractalist. You have seen my method of working. I shall now proceed with the research, and hope to show you some of the more obscure corners of the art, as soon as I have something worth showing ...

To be honest / fair ("ehrlich" in German) - I don't understand any of it, because I lack the mathematical background.

What I feel is a lot of enthusiasm, and I am sure I will have a look when you "show some of the more obscure corners of the art, as soon as [you] have something worth showing"!

Margit
Charleswehner
Guest
« Reply #42 on: November 13, 2006, 08:44:07 PM »

Quote
Do you know there are two kinds of Julia sets: with orbits and without orbits. Is this the same as what you call bounded and unbounded?

I show two examples with the same coordinates.

1. Without orbits: www.fractal.org/Beelden/Julia-unbounded.jpg

2. With orbits: www.fractal.org/Beelden/Julia-bounded.jpg

Unfortunately, this is not the standard mathematical sense of the word "bounded". The two sets shown are IDENTICAL, but with different colour-schemes.

My own Spin-orbit duality conundrum shows the problem of bounded and unbounded systems. A child spins a top. Looking INWARD towards the top, we see that the child has more than enough energy to do so. Looking OUTWARD from the top, we see that the child has put himself into orbit, also the playroom, also the house, also the planet, also the solar system also..... also..... There are no bounds.

Concealed in the relativity problems of Einstein was the "unbounded conundrum", that mathematics breaks down in unbounded systems. He never solved the problem, and never got a Nobel prize for it. Relativity was a project under development. He got his prize for photoelectrics.

The fractal equivalent is when we scan a page (say, from left to right and downwards). Each pixel has its own reserved space - one pixel wide and one pixel deep. If you colour it in, it is possible that the mathematical properties of the algorithm will enable it to blend smoothly with the next pixel. So a smooth flow of mathematics leads to a smooth flow of colour. This happens in the de Moivre "spin drier", where a pixel is allowed to orbit and to be stretched or squashed away from/towards the origin. That defines Z <- Z*Z.

You add the "tumble" to the spin drier, where the pixel tumbles parallel to its original position relative to the origin. That adds Z0, giving Z <- Z*Z + Z0 - which is the standard Julia/Mandelbrot algorithm. The Mandelbrot pattern looks like a bundle of wet clothes. The drops of coloured water coalesce into a continuum. However, as the "tumble-spin-drier" proceeds, droplets of coloured water are thrown outwards into unbounded space.
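[Editor's note: the coalesced-versus-thrown-out picture corresponds to bounded versus escaping orbits. A small C++ sketch (an editorial illustration of the metaphor, using the pure "spin drier" case c = 0, where Z <- Z*Z just squares the modulus):]

```cpp
// Follow one orbit of Z <- Z*Z + c and report whether it stays within
// |Z| <= 2 for n steps (coalesces) or is thrown outward (escapes).
#include <complex>

bool stays_bounded(std::complex<double> z, std::complex<double> c, int n) {
    for (int i = 0; i < n; ++i) {
        if (std::abs(z) > 2.0) return false;  // thrown out of the "spin drier"
        z = z * z + c;
    }
    return true;
}
```

With c = 0, a point with |Z| < 1 spirals inward and stays bounded, while a point with |Z| > 1 is flung outward — the droplets thrown into unbounded space in the tumble-drier picture.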

If a droplet of water lands twice as far out as its starting place, there is an area two units wide and two units deep to fill. It cannot do this, so it has free space around it.

I have called it water droplets, but Fractalists prefer the term DUST, because it has no surface tension and is therefore infinitely divisible.

Charles
Jules Ruis
« Reply #43 on: November 13, 2006, 11:13:35 PM »

Charles,

Do you call this Julia fractal an unbounded Julia?

www.fractal.org/Beelden/Julia-unbounded-2.jpg

Jules.

« Last Edit: November 13, 2006, 11:55:41 PM by heneganj »

Jules J.C.M. Ruis
www.fractal.org
Charleswehner
Guest
« Reply #44 on: November 14, 2006, 02:28:39 PM »

Quote
Charles,

Do you call this Julia fractal an unbounded Julia?

www.fractal.org/Beelden/Julia-unbounded-2.jpg

Jules.


This I cannot say without knowing how it was made, because of what I call "congestion". Pixels have their own allocated space in a bounded system; they have no specific bounds in an unbounded one.

There is no spatial pattern in a Mandelbrot image - it is all colour pattern derived from the number of iterations. Julia plots have both spatial and colour pattern. Because the dots are scattered about, seemingly haphazardly, it can happen that several dots (of "dust") land in the same area, and seem to coalesce. This I call "congestion". However, when such clumps are expanded in a Julia set, they always reveal dust within dust.

Charles