Wait, you need people to help render this and create the interface?
In that case, I can help render.
Got an 8-core processor ready.
I have 2+ teraflops of GPU compute power; I can serve them in realtime once I get my ray tracer sorted out.
Just make the web front end and a Squid cache for the images.
I am writing this a little in jest; DE isn't all that easy on GPU. But seriously, I am surprised that people here are still using CPUs. It shouldn't be too hard to have programs such as Ultra Fractal generate the C-like kernel code and compile it with OpenCL, which seems to be a sure bet as an open cross-platform standard for GPU computing. Personally I am using Brook+, mostly because the ATI OpenCL drivers for Linux are still very much in beta.
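To make the "C-like kernel code" point concrete, here is a minimal sketch of the per-pixel escape-time loop such a tool would emit. The names and the bailout of 4.0 are my own illustrative choices, not Ultra Fractal output; wrapped in a `__kernel` function with a `get_global_id` pixel index, essentially the same body compiles as OpenCL C.

```c
/* Escape-time iteration for one pixel of the Mandelbrot set.
   Returns the iteration at which the orbit escapes, or max_iter
   if it never does (point treated as inside the set). */
int mandel_iters(float cx, float cy, int max_iter)
{
    float zx = 0.0f, zy = 0.0f;
    for (int i = 0; i < max_iter; ++i) {
        float zx2 = zx * zx, zy2 = zy * zy;
        if (zx2 + zy2 > 4.0f)        /* |z|^2 > 4: the orbit escapes */
            return i;
        zy = 2.0f * zx * zy + cy;    /* z = z^2 + c, imaginary part */
        zx = zx2 - zy2 + cx;         /* real part (uses old squares)  */
    }
    return max_iter;
}
```

On the GPU, each stream processor runs this loop for one pixel, which is why the embarrassingly parallel escape-time fractals map onto the hardware so well.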
How exactly would I use my ATI X600?
Short answer: You can't use that particular card.
Long answer:
For over a decade I worked in the games industry, making leading-edge 3D games, but even there the whole GPGPU thing managed to sneak up on me unnoticed. Sure, there was a constant stream of buzzwords in the air: DX10, unified shaders, full floating-point precision pipelines, etc. And this was all very fine if you wanted pixel shaders, bump mapping, and to keep up with the competition in the graphical looks department.
At some point it dawned on Nvidia that the high-end gaming graphics card had many of the desired characteristics of a high-performance computing system, and along came the ridiculously priced Tesla cards, aimed at the scientific community. With CUDA it also became possible to divert some of the computing power away from graphics and towards numerical work, such as physics.
Today, with DX11 and Microsoft's take on CUDA / OpenCL (DirectCompute), almost any regular graphics card from Nvidia is CUDA / PhysX capable. Personally I only took notice when ATI announced the 5800 series cards with 1440 / 1600 individual stream cores on chip. I reasoned that whatever the capability of the individual core, the combined power of such massive parallelism would be mind-boggling. Then I read the specification and instruction set for the R700 cores
http://developer.amd.com/gpu_assets/R700-Family_Instruction_Set_Architecture.pdf and discovered that, unlike Nvidia's, ATI's cores had a complete integer instruction set as well as the regular diet of single-cycle trigonometric functions. I was sold, and simply had to get my hands on the first batch of 5850 cards, which I thought were the most reasonably priced. And pricing is an important point to note:
http://en.wikipedia.org/wiki/FLOPS These ATI cards are simply the cheapest FLOPS money can buy at the moment.
The fastest CPU at the moment (the i7 965) retails for ~1000 USD and delivers 70 GFLOPS (double precision), but only when you can get your compiler to produce efficient SSE code. Even if you are on a budget, the i7 can be handsomely beaten performance-wise: ATI 4600 series cards can be found for 60 USD on Amazon, offering 320 stream processors and roughly 220 GFLOPS (real, not the doubled figure obtained by counting the multiply-and-add instruction as two operations) - and you don't need a new motherboard and DDR3 RAM to trounce the i7, which starts at 250 USD (CPU only).
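Figures like these come from simple peak-throughput arithmetic. A sketch, using illustrative round-number clocks rather than quoted specs:

```c
/* Back-of-envelope peak throughput: stream processors times clock
   times FLOPs retired per clock per processor. Clock figures below
   are illustrative, not manufacturer specs. */
double peak_gflops(int stream_processors, double clock_ghz,
                   int flops_per_clock)
{
    return stream_processors * clock_ghz * flops_per_clock;
}

/* e.g. a 320-SP part at 0.7 GHz:
   counting a multiply-add as one op:   peak_gflops(320, 0.7, 1) -> 224
   counting it as two (marketing math): peak_gflops(320, 0.7, 2) -> 448 */
```

The gap between the two numbers is exactly the "doubled figure" caveat above: a fused multiply-add can honestly be counted as either one instruction or two floating-point operations.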
I think we are at a watershed right now when it comes to computing performance. The next big leap on the desktop won't come from Intel's research into 48-core CPUs, but more likely from moves such as AMD's plans to put 480 stream processors on the silicon die together with the CPU.
But more importantly, a gazillion FLOPS won't do you any good if you can't use them. I really do admire your fractal art, and should you decide to go the GPU way, I'll gladly give you the bits and pieces of code that I have written for the ATI GPUs. I am, after all (like most people on this forum), a fellow explorer, and not out to make a quick buck on the fractal fad of the month.
Now if I can only get your DE working properly...