CudaBrot (source code only). Resurrecting an old favorite from my collection of antiques...
This program opens a window and just pans and zooms around the Mandelbrot set using the mouse or arrow keys. That's all it does, no fancy stuff (yet): it calculates every pixel every frame, with no optimization other than computing the x,y values in one pass and then feeding them into the Mandelbrot routine in a second pass.
I kept it as simple as possible to make it easy to tinker with Qt, CUDA and fractals: about 25K of code, and the really interesting bits are only a couple of hundred lines.
For the novice, this is an easy bit of code to get your head around. It demonstrates how to use Qt's QGLWidget, mouse and keyboard events, timers, and Qt Designer forms (add some menus if you like), how to set variables on the GPU for CUDA kernels from C++, how to set up and access buffers/textures for writing on the GPU and rendering by OpenGL, and how to compile CUDA code in your C++ project.
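To give a flavour of the CUDA side of that list: setting kernel variables from C++ usually means a __constant__ symbol plus cudaMemcpyToSymbol, and sharing a pixel buffer with OpenGL goes through the cudaGraphicsGLRegisterBuffer / map / unmap cycle. A minimal sketch of the idea (the View struct and the function names are mine, not from the actual source):

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Hypothetical view parameters kept in GPU constant memory.
struct View { double centerX, centerY, scale; int maxiter; };
__constant__ View d_view;

// Called from the Qt side whenever the user pans or zooms.
void uploadView(const View &v)
{
    cudaMemcpyToSymbol(d_view, &v, sizeof(View)); // kernels then read d_view
}

// One-time setup: allow CUDA to write into an OpenGL pixel buffer object.
cudaGraphicsResource *registerPixelBuffer(unsigned int pbo)
{
    cudaGraphicsResource *res = nullptr;
    cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsWriteDiscard);
    return res;
}

// Per frame: map the buffer and fetch a device pointer for the kernels to
// write into; unmap before OpenGL draws the buffer as a texture.
uchar4 *mapPixels(cudaGraphicsResource *res)
{
    uchar4 *pixels = nullptr;
    size_t bytes = 0;
    cudaGraphicsMapResources(1, &res, 0);
    cudaGraphicsResourceGetMappedPointer((void **)&pixels, &bytes, res);
    return pixels; // when done: cudaGraphicsUnmapResources(1, &res, 0)
}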
For the expert, it's a simple bit of code that can be used to test crunching routines on CUDA GPUs.
Two kernels: one fills an array with x,y data, the other reads that data, makes a calculation and stores the result in an RGBA pixel buffer.
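Roughly, that pair looks like this (a sketch with my own names and a crude grayscale coloring, the real kernels will differ):

// Pass 1: one thread per pixel stores that pixel's point in the complex plane.
__global__ void fillCoords(double2 *coords, int w, int h,
                           double x0, double y0, double step)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    coords[y * w + x] = make_double2(x0 + x * step, y0 + y * step);
}

// The per-point iteration count, e.g. the "doodler" shown further down.
__device__ unsigned int iterate(double r, double i, unsigned int maxiter);

// Pass 2: read a point, iterate it, store the count as an RGBA pixel.
__global__ void mandelPixels(const double2 *coords, uchar4 *pixels,
                             int n, unsigned int maxiter)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    unsigned int c = iterate(coords[idx].x, coords[idx].y, maxiter);
    unsigned char v = (unsigned char)((c * 255u) / maxiter);
    pixels[idx] = make_uchar4(v, v, v, 255);
}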
Currently it's double precision only, but some preliminary tests with double-double and quad-double are promising; this is intended as a test bed leading to arbitrary precision on the GPU for calculating fractals.
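For a taste of what double-double involves: the building block is the classic TwoSum trick (Knuth), which recovers the rounding error of a double addition exactly. A sketch of the idea, not code from this project:

// A double-double: hi carries the leading ~53 bits, lo the rounding error.
struct dd { double hi, lo; };

// Knuth's TwoSum: s = fl(a + b), e is the exact error of that addition.
__device__ dd twoSum(double a, double b)
{
    double s = a + b;
    double v = s - a;
    double e = (a - (s - v)) + (b - v);
    return dd{ s, e };
}

// Add two double-doubles for roughly 106 bits of precision (simplified).
__device__ dd ddAdd(dd x, dd y)
{
    dd s = twoSum(x.hi, y.hi);
    s.lo += x.lo + y.lo;
    return twoSum(s.hi, s.lo); // renormalize
}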
The default start coordinates are "Tick-Tock" from dinkydau. If you get this compiled, just hit the "+" key on the keypad to zoom in; on a good GPU, at about 35-40 ms/frame, it only takes a couple of seconds to exhaust the limits of 64-bit double precision. For a benchmark, execute from the command line with "benchmark" as the only option and you should see something like...
> Device 0: < GeForce GTX 760 >, Compute SM 3.0 detected
Benchmark:
Max iterations: 1024
Number of frames: 1000
Avg msec per frame: 8.979000
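For what it's worth, per-frame GPU timing like that is usually done with CUDA events; something along these lines (a sketch, not the program's actual benchmark code):

#include <cstdio>
#include <cuda_runtime.h>

void benchmark(int frames) // e.g. frames = 1000
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float totalMs = 0.0f;
    for (int f = 0; f < frames; ++f) {
        cudaEventRecord(start);
        // ... launch the coordinate and Mandelbrot kernels for one frame ...
        cudaEventRecord(stop);
        cudaEventSynchronize(stop); // wait for this frame's kernels to finish
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        totalMs += ms;
    }
    printf("Avg msec per frame: %f\n", totalMs / frames);
}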
It's an interesting experience seeing my crusty old routines crunch out in milliseconds images that used to take minutes...
If you can write a doodler like this...
uint count = 0;
double p = r;                                 // z, real part
double q = i;                                 // z, imaginary part
double a;
do {
    if ( p * p + q * q > divergence ) break;  // bailout test
    a = p;                                    // save old p before overwriting it
    p += q;
    p *= (a - q);                             // p = (p + q)(p - q) = p^2 - q^2
    q *= (a + a);                             // q = 2pq
    p += r;                                   // add the constant c = (r, i)
    q += i;
} while (count++ < maxiter);
return (count >= maxiter) ? 0 : (count % maxcol + 2);  // 0 = inside the set
...then you can test it on NVIDIA GPUs in this little proggie.
My goal is to learn how to implement some of the optimizations to be found here (on FractalForums) in high-precision GPU code.