This is my first attempt at writing my own Fragmentarium raytracer from scratch. I've done raytracing stuff before but always modifying the existing raytracers.
I had always wanted a non-DE tracer to test formulas, so some time ago I was thinking of writing one myself. But I still didn't know how to do some things, and then Syntopia made the nice Bruteforce Raytracer, so I abandoned the idea of making my own, as Mike's bruteforce was good enough for me.
- But yesterday I was thinking about it again and some ideas came to mind. So now that I understand how to use the buffers and other stuff, I decided to start my own project.
Obviously I took Syntopia's approach of using the buffer to store the depth map, as it's the fastest way; the only downside is having to work in screen space for the normals. But I used a different method for finding the distance to the object, which I'll try to explain below (keep in mind I haven't added comments to the code yet, and I have the bad habit of not using very descriptive or friendly variable names).
1) Advance at a fixed interval until the ray is inside the object.
2) Since the true boundary lies between "current position" and "current position - raystep", go 1/2 step back... if still inside, go 1/4 step back... if outside, go 1/4 step forward... and so on, halving the raystep each time. The number of times this search is performed is set by the "SearchSteps" parameter (and yes, this is indeed the so-called binary search, or bisection). This allows the use of not-so-small steps while surfaces still render well and pretty fast, but...
3) In the following subframes, I search for pixels that have lower-depth neighbors and perform a "fine step" raytrace on them (controlled by the FineStepScale parameter), because some details can be lost in the main fixed-step pass (e.g. borders and small/thin structures). The fine raytrace starts at the depth of the neighbor pixel with the nearest depth, so this recovery of lost pixels is also quite fast. However, if a disconnected part of the fractal is smaller/thinner than the raystep, it can be missed entirely, and the only solution is to lower the fixed step, slowing down the render.
The rest of the raytracer is still very rudimentary: it only features basic diffuse lighting with two sources, a spotlight and a camlight (please note the spotlight direction is relative to the camera, not the object; I'll fix this later), plus distance fog... nothing more. No coloring, no AO (I still don't know how I'll do it), no shadows, etc., so I still have a lot of work to do
- but even with the basic features of this early development stage, it's a nice tool for trying formulas without a DE, and it's all included in a single .frag file. You only have to add the "#include" line to your .frag, plus a "bool inside" function that returns true if the point is inside the fractal/object, then render in continuous mode, as with Syntopia's bruteforce (almost all of the render is done in the first 2 frames; the rest is the missing-parts recovery, which takes a few more frames).
Use the MaxDistance and FixedStep parameters wisely, based on the zoom level and the depth of the scene you want to render. Choosing them well gives excellent rendering performance most of the time.
I attached the raytracer, and another .frag testing it with the rotjulia formula with these default parameters:
I'll be uploading new versions to this thread as I make some progress.
Any comments or suggestions are always welcome.