ZsquaredplusC
Guest


« on: November 20, 2009, 03:30:22 AM » 

Was there any progress on a formula to determine a normal once a point in 3D space is found? I.e. you churn through the iterations and get an XYZ point in space; how do you calculate a normal at that point for lighting it?
Options I can see are:
1. Use the north, west and current pixels as a polygon for normal calculations, since the north and west points have already been calculated if you generate the image top to bottom and left to right.
2. Send out another set of rays for each point, very slightly askew from the calculated point, and use those to get a normal (but this means another 3 rays per pixel).
Any ideas/advice?
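A minimal sketch of option 1, assuming the three surface points have already been found during top-to-bottom, left-to-right rendering (function names are illustrative, not from any particular renderer):

```python
import math

def cross(a, b):
    # Cross product of two 3-vectors (tuples)
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal_from_neighbours(p, p_north, p_west):
    # Treat the current point and its already-computed north and
    # west neighbours as a tiny triangle and use its normal.
    e1 = tuple(n - c for n, c in zip(p_north, p))
    e2 = tuple(w - c for w, c in zip(p_west, p))
    n = cross(e1, e2)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

This costs no extra rays, but the neighbours lie on the surface at whole-pixel spacing, so sharp features and depth discontinuities will produce wrong normals along edges.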







David Makin


« Reply #1 on: November 20, 2009, 03:58:08 AM » 

If our found point is vp + alpha*vcentre with distance estimate decentre (where vp is the viewpoint, alpha is the distance stepped and vcentre is the direction vector), then I get the 4 adjacent distance estimates at vp + alpha*vleft, vp + alpha*vright, vp + alpha*vtop and vp + alpha*vbottom. That gives me 5 surface points at vp + (alpha + decentre)*vcentre, vp + (alpha + deleft)*vleft etc., and I then use the 4 triangles formed by the 5 points to compute the normal, by summing the 4 unnormalised triangle normals and normalising the result.

Note that I do some limited extra ray-stepping (backwards) if an adjacent point is found to be truly "inside", in order to get a valid DE value, obviously adjusting the calculations accordingly. I use all 4 adjacent points to allow restriction of the extra ray-stepping, which means that occasionally one or more adjacent point values are invalid.

The 4 adjacent rays I use by default are at 1/2-pixel offsets. If not using UF, or if using a global buffer in UF, it would perhaps be slightly more optimal to use whole-pixel offsets for the adjacent rays. This method is vastly superior to using the "actual surface found" method.
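The triangle-summing step above might look something like the following sketch, with the five surface points passed in as 3-tuples (all names are illustrative):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def makin_normal(centre, left, right, top, bottom):
    # Four triangles share the centre surface point; sum their
    # unnormalised normals (consistent winding), then normalise.
    pairs = [(top, right), (right, bottom), (bottom, left), (left, top)]
    n = [0.0, 0.0, 0.0]
    for a, b in pairs:
        c = cross(sub(a, centre), sub(b, centre))
        for i in range(3):
            n[i] += c[i]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

Summing before normalising weights each triangle by its area, so degenerate slivers contribute little; the sign convention depends on the winding order chosen.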


« Last Edit: November 20, 2009, 04:04:38 AM by David Makin »






fractalrebel


« Reply #3 on: November 20, 2009, 05:00:36 AM » 

My first step is to calculate the array that forms the fractal surface. The normals are then calculated from the x, y and z vectors to the immediately neighboring points.







ZsquaredplusC
Guest


« Reply #4 on: November 20, 2009, 05:27:07 AM » 

Thanks guys.
If iq can get a formula running, that will be interesting. Even if it turns out somewhat complex, it may be less processor-intensive than shooting an extra 4 rays per pixel.
I will have a go with no extra rays, using the immediate neighbours, and see if it loses any detail.







lycium


« Reply #5 on: November 20, 2009, 06:33:00 AM » 

using extra rays is basically incorrect (and outrageously inefficient besides); the gradient of the potential function is the (unnormalised) normal, or at least the finite difference approximation to it.







JosLeys


« Reply #6 on: November 20, 2009, 07:21:24 PM » 

I've been trying to do this without success. How do you derive the x, y and z components of the gradient to form a normal vector?







lycium


« Reply #7 on: November 21, 2009, 04:50:37 AM » 

i use a finite difference approach as mentioned, and it's the worst-behaved bit of numerical code i've ever written. to approximate the normal at a point p, you get the potential at p, then three potentials at a "small" (herein lies the problem) offset along each dimension. then your grad vector is <px - p0, py - p0, pz - p0>, where px, py, pz are the potentials at the offset locations. this is a first-order finite difference approximation; pick up any numerical methods book to find better approximations (bearing in mind that they assume the existence of higher derivatives, i.e. greater smoothness).







ZsquaredplusC
Guest


« Reply #8 on: November 21, 2009, 05:17:14 AM » 

Can you share a snippet of code, i.e. going from the 3D point calculated on the surface up to the point where the normal is found? Why does the "small amount" cause issues? I would assume that, as when shooting more rays to get a normal, you step half the distance to the next pixel?
I have gotten the basic rendering working, but the 5 rays per pixel are killing rendering time.







lycium


« Reply #9 on: November 21, 2009, 05:29:26 AM » 

if your potential function is p(x,y,z), then

p0 = p(x, y, z)
px = p(x + d, y, z)
py = p(x, y + d, z)
pz = p(x, y, z + d)

and your normal vector is normalise(px - p0, py - p0, pz - p0). the trouble with the delta d is that you want it to be as small as possible, but if you make it too small then you destroy all the floating point precision in the estimate. see for example http://en.wikipedia.org/wiki/Numerical_differentiation#Practical_considerations

this is exacerbated by the fact that our function p isn't some nice x^2 + 2x or whatever, it's this nasty fractal function.
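As a runnable illustration of the finite-difference recipe above, with a unit-sphere signed distance function standing in for the fractal potential (the real potential would replace it):

```python
import math

def potential(x, y, z):
    # Stand-in for the fractal potential: signed distance to a
    # unit sphere, zero on the surface (illustrative only).
    return math.sqrt(x * x + y * y + z * z) - 1.0

def fd_normal(x, y, z, d=1e-5):
    # First-order forward differences along each axis, as described
    # in the post above.
    p0 = potential(x, y, z)
    nx = potential(x + d, y, z) - p0
    ny = potential(x, y + d, z) - p0
    nz = potential(x, y, z + d) - p0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

Four potential evaluations per shaded point, versus the five full rays per pixel discussed earlier in the thread.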


« Last Edit: November 21, 2009, 05:37:39 AM by lycium, Reason: fixed copypaste error »





lycium


« Reply #10 on: November 21, 2009, 05:38:10 AM » 

whoops, just fixed a copy-paste error in my post above.







ZsquaredplusC
Guest


« Reply #11 on: November 21, 2009, 06:15:01 AM » 

OK, but what is the potential function? How do we get from the XYZ point in space to the potential?
These posts have stirred a lot of interest, so hopefully someone reading has the smarts to work out how to automatically find the optimal d value.
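There is no known closed-form optimal d for an arbitrary fractal potential, but a common rule of thumb for a first-order forward difference is d ≈ sqrt(machine epsilon), scaled by the magnitude of the evaluation point. A small demonstration on a smooth test function (illustrative only, and fractal potentials behave far worse):

```python
import math
import sys

def forward_diff(f, x, d):
    # First-order forward-difference derivative estimate
    return (f(x + d) - f(x)) / d

def suggested_step(x):
    # Balance truncation error (~d) against rounding error (~eps/d):
    # the crossover is at d ~ sqrt(eps), scaled by |x|.
    eps = sys.float_info.epsilon  # ~2.2e-16 for IEEE doubles
    return math.sqrt(eps) * max(1.0, abs(x))

# Demonstrate on sin, whose derivative (cos) is known exactly
x = 1.0
err_good = abs(forward_diff(math.sin, x, suggested_step(x)) - math.cos(x))
err_tiny = abs(forward_diff(math.sin, x, 1e-15) - math.cos(x))  # d too small
```

With the suggested step the error is tiny; with d = 1e-15 the subtraction cancels almost all significant digits and the estimate is dominated by rounding noise.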







reesej2
Guest


« Reply #12 on: November 21, 2009, 07:10:40 AM » 

I've been sidestepping the problem of calculating normals by just assuming that each "pixel" is a separate, very small sphere. Unfortunately, while this does give a somewhat 3D effect, it's nowhere near the quality of the images I've seen. I'd be interested to see an efficient way of calculating the normal vector. Alternatively, any ideas on how to improve the sphere strategy would be welcome.







JosLeys


« Reply #13 on: November 21, 2009, 11:40:15 AM » 

OK Lycium, I think I know what you mean. See the sketch below for the 2D analogy. Is my assumption correct?
So in 3D you need the analytical distance estimate at three extra points. I will try this.
What I have been doing is shooting 4 extra rays. If the point on the surface is VP + t.V (VP = viewpoint, V = direction vector), I take VP + (t - epsilon).Vi (Vi = vector with a slight offset, epsilon = something small). So I go back just a small distance on the ray, so that finding the intersection points for the vectors Vi goes very fast (but I have to calculate the normalised vectors Vi, of course).
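The back-stepped restart described above might be sketched like this, assuming a generic distance-estimate function de (all names are illustrative):

```python
import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def march_from(vp, direction, t_start, de, hit_eps=1e-4, max_steps=32):
    # Resume sphere-tracing at parameter t_start along `direction`,
    # i.e. the "back up by epsilon, then re-march" idea above.
    t = t_start
    for _ in range(max_steps):
        p = tuple(vp[i] + t * direction[i] for i in range(3))
        dist = de(p)
        if dist < hit_eps:
            break
        t += dist
    return p
```

Since the restart point is already near the surface, the re-march for each offset ray typically converges in a handful of steps rather than a full trace from the viewpoint.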







lycium


« Reply #14 on: November 21, 2009, 12:01:04 PM » 

um, it is with some embarrassment that i must admit, i have no idea what's going on in that picture. think of it this way: your potential function is zero when you're on the surface. the grad vector tells you in which direction the function is growing most rapidly; this is exactly what the normal vector is: it points "away" from the surface (i.e. the direction in which the function most quickly goes from zero to a positive number).







