Welcome to Fractal Forums

Fractal Software => Programming => Topic started by: Syntopia on January 03, 2014, 10:39:38 PM




Title: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: Syntopia on January 03, 2014, 10:39:38 PM
I know several people on this forum have already done this (Eiffie, Marius, 3dickulus), but since I recently worked out the math and couldn't find any good references on the internet, I thought I might share my notes about this: http://blog.hvidtfeldts.net/index.php/2014/01/combining-ray-tracing-and-polygons/

Not terribly exciting, I'm afraid, but might be useful to others.


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: eiffie on January 04, 2014, 06:26:55 PM
Great article as usual. When I started doing this I just used Marius' code but then I changed to something that seemed simpler to me (because math makes my head hurt). Make sure a polygon is always hit by adding a skybox and then just color those polygons with your raymarching fragment shader. You can get the ray direction and distance easily from the polygon hit point and camera position.
Your article is a must read though if anyone is doing multiple passes and needs to align polys and marching/tracing.
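A minimal fragment-shader sketch of this idea, as I understand it (names like worldPos, cameraPosition and rayMarch are my own illustrative choices, not actual code from eiffie or Fragmentarium):

```glsl
#version 330 core

// Applied to the skybox (and any other polygons), so every pixel hits
// a polygon. worldPos is assumed to be the interpolated world-space
// position of the fragment, passed in from the vertex shader.
uniform vec3 cameraPosition;
in vec3 worldPos;
out vec4 fragColor;

// Your existing raymarcher, assumed to return the polygon's own color
// if nothing is hit within maxDist.
vec4 rayMarch(vec3 origin, vec3 dir, float maxDist);

void main() {
    vec3 toHit = worldPos - cameraPosition;
    float polyDist = length(toHit);   // distance to the polygon surface
    vec3 rayDir = toHit / polyDist;   // ray direction, essentially for free
    // March at most as far as the polygon, so traced geometry
    // correctly appears in front of it.
    fragColor = rayMarch(cameraPosition, rayDir, polyDist);
}
```

Since all distances here are in world coordinates, the depth comparison against the polygon is a single float compare, with no projection math involved.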


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: 3dickulus on January 07, 2014, 06:04:14 AM
Hi. Lately I've had my head in CUDA code, resurrecting an old screensaver, so I haven't played much with Fragmentarium, but I did make time to read the article (http://blog.hvidtfeldts.net/index.php/2014/01/combining-ray-tracing-and-polygons/) on your blog. Well written, and it helped me understand the inner workings a bit better (maybe now I can get the geometry right) :) I'm still off a bit: good for what's in the center of the screen, but things warp as they get closer to the outside borders. I think this is caused by a difference between the "thin lens" calculation on the GLSL side and the perspective matrix on the GL side, which is set up minimally just to use polygons as a rendering surface. I'm using the QMatrix4x4 class, which has all the operations one needs for transforms and projections, rather than libGLU, because it exposes things a bit better for easier tweaking (imho) and because it's Qt-centric :) Also, some of my assumptions about "mindist" and "maxdist" translating to zNear and zFar need to be adjusted.

Thanks for the article, recommended reading, and thanks again for making Fragmentarium open and free :beer:


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: Syntopia on January 07, 2014, 09:00:08 PM
Great article as usual. When I started doing this I just used Marius' code but then I changed to something that seemed simpler to me (because math makes my head hurt). Make sure a polygon is always hit by adding a skybox and then just color those polygons with your raymarching fragment shader. You can get the ray direction and distance easily from the polygon hit point and camera position.
Your article is a must read though if anyone is doing multiple passes and needs to align polys and marching/tracing.

Thanks Eiffie - you are right that if you don't need ray-traced geometry in front of the polygon objects, it is much easier to draw a background object, such as a skybox (or just a quad). The difficult part is writing to gl_FragDepth.

...I'm still off a bit, good for what's in the center of the screen but warps as things get closer to the outside borders.

Thanks 3dickulus,

I think that if your projection works best near the center and degrades away from it, it is probably because, when calculating gl_FragDepth, you should use only the orthogonal part of the ray length - the component along the camera's forward axis, i.e. the eye-space Z. This is why I multiply by the dot product in my transformation:

Code:
float eyeHitZ = -distance * dot(cameraForward, rayDirection); // eye-space Z of the hit point
float ndcDepth = ((zFar + zNear) + (2.0*zFar*zNear)/eyeHitZ) / (zFar - zNear);
gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

Notice that for the default gl_DepthRange (0 and 1), the formulas reduce to

Code:
float eyeHitZ = -distance * dot(cameraForward, rayDirection);
gl_FragDepth = (zFar + (zFar*zNear)/eyeHitZ) / (zFar - zNear);

This is the same as in Marius' fragment.glsl:
http://code.google.com/p/boxplorer2/source/browse/trunk/fragment.glsl
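The reduction can be checked algebraically. Writing n = zNear, f = zFar and z_e = eyeHitZ, the default depth range (gl_DepthRange.near = 0, far = 1, so diff = 1) turns the window-space mapping into (ndcDepth + 1)/2:

```latex
% ndcDepth = ((f+n) + 2fn/z_e) / (f-n),  gl_FragDepth = (ndcDepth + 1)/2
\begin{align*}
\frac{\mathrm{ndcDepth} + 1}{2}
  &= \frac{1}{2}\left(\frac{(f+n) + \frac{2fn}{z_e}}{f-n} + 1\right)
   = \frac{(f+n) + \frac{2fn}{z_e} + (f-n)}{2(f-n)}\\
  &= \frac{2f + \frac{2fn}{z_e}}{2(f-n)}
   = \frac{f + \frac{fn}{z_e}}{f-n}
\end{align*}
```

which is exactly the simplified expression above.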


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: eiffie on January 07, 2014, 09:39:44 PM
I just want to clarify that it works for ray-traced objects in front of the polygons, but you are right that the depth is not updated, so it is a one-pass technique. You simply draw the traced items in front of a poly onto the poly. The depth check is simple because all depths are in world coords.


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: Syntopia on January 07, 2014, 09:58:58 PM
I just want to clarify that it works for ray-traced objects in front of the polygons, but you are right that the depth is not updated, so it is a one-pass technique. You simply draw the traced items in front of a poly onto the poly. The depth check is simple because all depths are in world coords.

Then I misunderstood you? Do you mean that you use the ray tracer fragment shader for drawing all your polygons, and then switch between the polygon base color and the ray tracer fragment shader based on the camera distance? That sounds quite clever.

I guess one possible drawback is that you might end up running the expensive ray tracer multiple times on the same pixel (overdraw), right?


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: eiffie on January 07, 2014, 11:36:17 PM
Yes, that is it. Easy setup, but you pay a price in performance. It didn't seem to make much difference in speed, but I wasn't using millions of polys. You can use alpha too and erase bits of poly, like a billboard treeline.


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: Syntopia on January 08, 2014, 05:09:20 PM
I like this approach. It is simpler, and for the scenarios I'm considering (drawing bounding boxes, spline paths, etc.) the speed difference will most likely not be relevant. Also, there is no writing to gl_FragDepth, which is not available in OpenGL ES and WebGL (unless the EXT_frag_depth extension is used).


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: eiffie on January 08, 2014, 06:47:29 PM
Looking back at my code, I see I used an inverted sphere rather than a skybox for the background. I think the sharp angles of the box caused some problems with the interpolated hit point. Other than that, it was a very straightforward setup.


Title: Re: Combining ray tracing and polygon graphics in OpenGL shaders
Post by: 3dickulus on January 16, 2014, 04:54:23 PM
I'm considering (drawing bounding boxes, and spline paths, etc...)

Yay! That's good to hear. I know spline paths and skyboxes don't really have anything to do with calculating fractals (bounding boxes do, though) and everything to do with artistic creativity, but I think if you implement these things, both artists and coders will be able to appreciate the craftsmanship and will thank you for it.

 :beer: