I'm sorry — after thinking about it, I noticed that it's only smoothing...
I don't know anything about the distance estimator, but I'm happy to know what DE stands for, because I didn't understand half of the "calculating normals" thread. Is there a page where I can read about it?
I don't fully understand the text by David Makin (
http://www.fractalforums.com/mandelbulb-implementation/calculating-normals/msg8794/#msg8794)
Maybe you can help me with some pseudocode, or explain what deleft, decentre, etc. look like.
OK, just to repeat it:
*****************
If our found point is vp+alpha*vcentre, with distance estimate decentre from inside (where vp is the viewpoint, alpha is the distance stepped and vcentre is the direction vector), then I get the 4 adjacent distance estimates at vp+alpha*vleft, vp+alpha*vright, vp+alpha*vtop and vp+alpha*vbottom. This gives me 5 points at vp+(alpha+decentre)*vcentre, vp+(alpha+deleft)*vleft etc., and I then use the 4 triangles from the 5 points to compute the normal by summing the 4 unnormalised normals and normalising the result. Note that I do some limited extra ray stepping (backwards) if the adjacent points are found to be truly "inside", in order to get a valid DE value, obviously adjusting the calculations accordingly. I do all 4 adjacent points to allow restriction of the extra ray-stepping, which means that occasionally one or more adjacent point values are invalid. The 4 adjacent rays I use by default are at 1/2 pixel offsets.
If not using UF, or if using a global buffer in UF, then it would perhaps be slightly more optimal to use whole-pixel offsets for the adjacent rays.
This method is vastly superior to using the actual surface found method.
*****************
Now I'll try to explain.
1. We have a viewpoint, vp - this is a 3D coordinate and all the viewing rays start here.
2. We have a unit direction vector, vcentre, for the ray through the current pixel.
3. We have the distance along the ray from the viewpoint at which "solid" has been found, this is alpha (scalar).
4. For the point on the ray for the current pixel (i.e. the point at vp+alpha*vcentre), the distance estimate is decentre (scalar).
5. Now we take 4 rays adjacent to the ray through the current pixel. How adjacent they are could be chosen freely, but I use the rays at 1/2 pixel offsets - i.e. one ray above, one left, one right and one below, with unit direction vectors vabove, vleft, vright and vbelow.
6. For the 4 adjacent rays we evaluate the iteration at vp+alpha*vabove, vp+alpha*vleft, vp+alpha*vright and vp+alpha*vbelow, giving us distance estimates at each of these points: deabove, deleft, deright and debelow.
7. We now assume that all the distance estimates are 100% accurate *and along the rays*, so we have 5 surface points: the central point at vp+(alpha+decentre)*vcentre and the four adjacent points at vp+(alpha+deabove)*vabove, vp+(alpha+deleft)*vleft, vp+(alpha+deright)*vright and vp+(alpha+debelow)*vbelow.
8. These 5 points give us 4 triangles (in a diamond shape), yielding four normals in the usual way, which we average to give the normal at the central point.
Edit: just to stress, for the 4 adjacent rays we do *not* do any ray-tracing; the alpha value for those is the alpha at which we found the central pixel's surface point.
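The steps above could be sketched in Python roughly like this — note this is my own illustration, not Makin's actual UF code: the function name `distance_estimate` and the unit-sphere placeholder DE are assumptions, and you'd swap in your fractal's real distance estimator.

```python
import numpy as np

def distance_estimate(p):
    # Placeholder DE: distance to a unit sphere at the origin.
    # Replace with your fractal's distance estimator.
    return np.linalg.norm(p) - 1.0

def surface_normal(vp, vcentre, vleft, vright, vabove, vbelow, alpha):
    """Estimate the normal at the hit point vp + alpha*vcentre
    using the 4 adjacent rays (steps 1-8 above).
    No extra ray-stepping: every DE is evaluated at the same alpha."""
    # Step 7: five assumed surface points, each pushed along its own ray
    # by alpha plus that ray's distance estimate.
    centre = vp + (alpha + distance_estimate(vp + alpha * vcentre)) * vcentre
    left   = vp + (alpha + distance_estimate(vp + alpha * vleft))   * vleft
    right  = vp + (alpha + distance_estimate(vp + alpha * vright))  * vright
    above  = vp + (alpha + distance_estimate(vp + alpha * vabove))  * vabove
    below  = vp + (alpha + distance_estimate(vp + alpha * vbelow))  * vbelow
    # Step 8: the four triangles of the diamond, winding consistently;
    # sum the unnormalised cross products, normalise once at the end.
    n  = np.cross(above - centre, right - centre)
    n += np.cross(right - centre, below - centre)
    n += np.cross(below - centre, left  - centre)
    n += np.cross(left  - centre, above - centre)
    return n / np.linalg.norm(n)

# Usage: camera at z=-3 looking down +z at the unit sphere, with the
# four adjacent rays tilted by a small half-pixel-style offset.
def unit(v):
    return v / np.linalg.norm(v)

vp      = np.array([0.0, 0.0, -3.0])
vcentre = np.array([0.0, 0.0, 1.0])
eps = 0.001  # stand-in for the 1/2 pixel angular offset
vleft  = unit(np.array([-eps, 0.0, 1.0]))
vright = unit(np.array([ eps, 0.0, 1.0]))
vabove = unit(np.array([0.0,  eps, 1.0]))
vbelow = unit(np.array([0.0, -eps, 1.0]))

n = surface_normal(vp, vcentre, vleft, vright, vabove, vbelow, alpha=1.9)
```

For the sphere this should return a normal pointing back toward the viewer, roughly (0, 0, -1), since the central ray hits the sphere head-on.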