Author Topic: Major Raymarching Optimization  (Read 4874 times)
keldor314
Guest
« on: April 11, 2010, 10:37:46 AM »

Here's a simple optimization that should make Mandelbulb rendering a whole lot faster. A picture is worth a thousand words:



The idea is to use the distance estimation from one step to update multiple rays. This doesn't help much far from the fractal, but as you get closer and iteration counts go up, it can be a big win. That makes this method perfect for supersampling and deep zooming.

Another thing to do is trace a cone rather than a single ray. You would then spawn new cones at each raymarching step as you get close to the fractal and the cone ends shrink (the base of each cone would be the distance-estimation radius). The idea is to traverse only a single ray for a group of pixels until the fractal is close enough that the distance estimation no longer covers all the pixels.

This method, properly implemented, should cause no artifacts, since we never advance a ray beyond space the distance estimation has proven empty.
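A minimal sketch of the idea in Python, with a unit sphere standing in for the fractal's distance estimator (all names here are illustrative, not from any particular renderer):

```python
import math

def de_unit_sphere(p):
    # Stand-in distance estimator: a unit sphere at the origin.
    # A Mandelbulb DE would plug into the same slot.
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def sphere_exit(q, d, c, r):
    # Distance along unit direction d from q to where the ray leaves
    # the proven-empty sphere |x - c| = r (assumes q is inside it).
    m = tuple(q[i] - c[i] for i in range(3))
    b = sum(m[i] * d[i] for i in range(3))
    cc = sum(mi * mi for mi in m) - r * r
    return -b + math.sqrt(max(b * b - cc, 0.0))

def grouped_march(origin, offsets, direction, de, eps=1e-6, max_steps=128):
    # March a lead ray normally; every DE sample proves a sphere of
    # empty space, and each neighbour ray currently inside that sphere
    # advances to its boundary for free, with no extra DE evaluation.
    t_lead, t_nb = 0.0, [0.0] * len(offsets)
    for _ in range(max_steps):
        p = tuple(origin[i] + direction[i] * t_lead for i in range(3))
        dist = de(p)
        if dist < eps:
            break
        for k, off in enumerate(offsets):
            q = tuple(origin[i] + off[i] + direction[i] * t_nb[k]
                      for i in range(3))
            if math.dist(q, p) < dist:        # inside the empty sphere
                t_nb[k] += sphere_exit(q, direction, p, dist)
        t_lead += dist
    # Neighbours end up just short of the surface; each would finish
    # with a few individual DE steps of its own.
    return t_lead, t_nb
```

Because neighbour rays only ever advance through spheres the DE has proven empty, they can never overshoot the surface.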

The biggest downside to this method is that it doesn't map to GPUs all that well. You'd have to use a compute shader and break the image into chunks that are evaluated independently; a simple pixel shader would not be able to extract the full potential of this method. This isn't to say it can't be done, mind!

Does anyone want to try to implement this and see how well it performs?
reesej2
Guest
« Reply #1 on: April 11, 2010, 11:25:34 AM »

I'll give it a try; sounds promising. But I'll be doing it from scratch in C, so someone might want to try it out in something faster and easier to code first.
cbuchner1
« Reply #2 on: June 05, 2010, 11:53:10 PM »

Has anyone attempted this optimization in their renderer yet? Was there a notable speed benefit?

I am thinking about a slightly refined approach. I would shoot probe rays from the camera towards the fractal. For every coordinate reached in a DE iteration, I would fill an octree data structure with voxel data representing the space that is definitely not part of the fractal (i.e. the volume of the DE sphere). Of course it will be impossible to exactly represent the "DE sphere" using cubes, but it could be approximated using 3 or 4 levels of the octree (imagine a sphere approximated with Lego bricks).

All further rays can then traverse this volume by using a lookup in the octree instead of iterating the fractal formula. My hope is that this will be significantly faster than applying the iteration formula itself. Only for volumes where the octree holds no data yet does the DE based on the iterative fractal formula have to be used (further completing the octree in memory).

Ah, one more interesting detail: slight changes in camera perspective can reuse the previous frame's octree structure, greatly speeding up rendering. Only where new details become visible does the octree have to be amended with new data.

I have no idea how much memory this will take, but let's find out! I also have no idea whether trading the CPU's arithmetic throughput for memory throughput will pay off.
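The empty-space cache described here could be sketched roughly like this: a toy set-based octree over the unit cube, with DE spheres rasterized into cells at several depths, Lego-brick style (all names and parameters are made up for illustration):

```python
import math

class EmptySpaceOctree:
    # Sparse octree over the unit cube; cells proven empty by DE
    # spheres are stored as (depth, ix, iy, iz) keys.
    def __init__(self, max_depth=6):
        self.max_depth = max_depth
        self.empty = set()

    def mark_empty(self, c, r):
        # Record every cell, at every depth, that lies wholly inside
        # the sphere of radius r around c: the Lego-brick
        # approximation of the DE sphere.
        for depth in range(1, self.max_depth + 1):
            n = 1 << depth
            size = 1.0 / n
            lo = [max(0, int((c[i] - r) * n)) for i in range(3)]
            hi = [min(n - 1, int((c[i] + r) * n)) for i in range(3)]
            for ix in range(lo[0], hi[0] + 1):
                for iy in range(lo[1], hi[1] + 1):
                    for iz in range(lo[2], hi[2] + 1):
                        # A cube lies inside the sphere iff all 8 of
                        # its corners do (the sphere is convex).
                        if all(math.dist(c, ((ix + dx) * size,
                                             (iy + dy) * size,
                                             (iz + dz) * size)) <= r
                               for dx in (0, 1)
                               for dy in (0, 1)
                               for dz in (0, 1)):
                            self.empty.add((depth, ix, iy, iz))

    def free_distance(self, p):
        # Conservative free radius at p: the distance to the boundary
        # of the largest cached empty cell containing p. Returns 0.0
        # where the octree holds no data, i.e. the real DE must run.
        best = 0.0
        for depth in range(1, self.max_depth + 1):
            n = 1 << depth
            size = 1.0 / n
            cell = tuple(int(p[i] * n) for i in range(3))
            if (depth,) + cell in self.empty:
                d = min(min(p[i] - cell[i] * size,
                            (cell[i] + 1) * size - p[i])
                        for i in range(3))
                best = max(best, d)
        return best
```

A marching loop would call `free_distance` first and fall back to the iterative DE (then `mark_empty`) only when it returns zero.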
« Last Edit: June 06, 2010, 03:05:15 AM by cbuchner1 »
cbuchner1
« Reply #3 on: June 20, 2010, 10:30:11 PM »

I gave up on the octree approach above. It turns out that efficiently traversing octree volume structures isn't trivial, and I don't have much time on my hands.

I just tried your original idea, rendering a 512x512 pixel Mandelbulb, but I only got a moderate speed-up. I did this in a multi-resolution renderer, where I initially render every Nth pixel horizontally and vertically, and raymarch the neighbouring pixel rays by evaluating the bounding sphere as you showed in your diagram. When this grouped marching fails, I divide N by 2 until we march every pixel individually.

With this technique I am able to cut down the number of DE steps (and evaluations of the iterative fractal formula) by a factor of 2-3; however, the extra overhead of evaluating the intersection with the bounding spheres for the neighbouring pixel rays costs a lot of performance, too. So overall this technique doesn't bring a revolution.

The smaller the epsilon cutoff value (and the higher the final detail), the less gain this method achieves.
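The divide-N-by-2 control loop described here might look roughly like this, in a toy 1-D form where, for simplicity, the DE is given as a function of depth along the representative ray (all names and parameters are this sketch's assumptions):

```python
def stride_schedule(de_at_depth, pixel_angle, n0=8, eps=1e-3, t_max=10.0):
    # March one representative ray for a group of N pixels; halve N
    # whenever the DE sphere no longer spans the group, whose rays
    # are roughly N * pixel_angle * t apart at depth t.
    n, t, steps = n0, 0.0, 0
    while t < t_max:
        d = de_at_depth(t)
        if d < eps:
            break
        while n > 1 and d < n * pixel_angle * t:
            n //= 2            # group too wide for this sphere: refine
        t += d
        steps += 1
    return t, n, steps
```

With a toy DE that halves the remaining distance to a surface at depth 4 each step, the group refines all the way down to individual pixels near the surface, mirroring the behaviour described above: as epsilon shrinks, more of the march happens at N = 1 and the grouping gains less.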

Christian
« Last Edit: June 20, 2010, 10:36:38 PM by cbuchner1 »
DaveH
« Reply #4 on: December 23, 2013, 05:17:24 PM »

I'm resurrecting this post because I'd like to tie two similar ideas together. Here's someone who raymarches at increasing resolutions, thereby grouping the rays for the majority of the tracing:

http://www.youtube.com/v/AtsGeuO_ed8&rel=1&fs=1&hd=1
Note the comments and my questions at the time.

So he's rendering at progressive resolutions, using the depth data from the previous resolution to start the rays at the current one.
Quite neat, and very fast.
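A rough coarse-to-fine sketch of that scheme, for a single scanline of rays against a stand-in unit-sphere DE (the back-off margin and all names are this sketch's assumptions, not taken from the video):

```python
import math

def de_sphere(p):
    # Stand-in DE: a unit sphere at the origin, in place of a fractal.
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def march(o, d, t0, eps=1e-4, max_steps=200):
    # Plain sphere tracing, starting from a given depth t0.
    t = t0
    for _ in range(max_steps):
        p = tuple(o[j] + d[j] * t for j in range(3))
        dist = de_sphere(p)
        if dist < eps:
            break
        t += dist
    return t

def progressive_depths(width0=4, levels=3, fov=0.4):
    # Coarse-to-fine scanline: each level doubles the pixel count and
    # starts every ray at its parent pixel's depth, backed off by the
    # rays' angular spread so the surface cannot be skipped.
    o = (0.0, 0.0, -5.0)
    depths, w = None, width0
    for _ in range(levels):
        step = fov / w
        new = []
        for i in range(w):
            a = -fov / 2 + (i + 0.5) * step
            d = (math.sin(a), 0.0, math.cos(a))
            t0 = 0.0 if depths is None else max(0.0, depths[i // 2] * (1.0 - step))
            new.append(march(o, d, t0))
        depths, w = new, w * 2
    return depths
```

Only the coarsest level marches from the camera; every finer level resumes close to the surface, which is where most of the DE iterations are spent.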

Laters,
Dave H.

eiffie
Guest
« Reply #5 on: December 23, 2013, 07:58:05 PM »

I'm wondering what the optimum resolution to start at is. It would seem silly to start at 1x1; the overhead would be too much.
Levi
« Reply #6 on: December 26, 2013, 01:11:25 AM »

On the note of gradually increasing resolutions, perhaps a good approach would be adjusting epsilon based on the "size" of a pixel at the current resolution and the potential function at the current location. It's hard to explain it the way it looks in my head, but if you use the derivative with respect to c, and you know the distance between this ray and its neighboring ray, then you can estimate how much deviation there is between the two rays at a certain distance from the camera. Then, when the expected deviation starts to become large compared to the distance between the neighboring rays, you increase the resolution (and adjust epsilon accordingly).

This would work best if you could adjust the resolution of various areas of the screen separately from the others, because then a single ray would only split into more rays once it has encountered a region of high detail. Unfortunately, this doesn't seem very practical given current GPU infrastructure.

On a slightly different note, an idea I've been mulling over for a while is increasing epsilon as distance from the camera increases (I apologize if this has already been done, but on the chance that it hasn't, I will elaborate). It doesn't really make sense to have an epsilon smaller than the separation between your rays, since any details seen at that scale are inherently an effect of aliasing. And since the rays become linearly more separated as they get farther from the camera, adjusting epsilon linearly with distance would work nicely. And if the normal vectors are sampled at a distance dependent on epsilon, I think this would also reduce aliasing as the ray gets farther, making detail fade smoothly away as objects recede.
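A minimal sketch of this linear-epsilon idea in Python (the unit-sphere DE is a stand-in and all names are illustrative):

```python
import math

def de_sphere(p):
    # Stand-in DE: a unit sphere at the origin; any fractal DE fits
    # the same slot.
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def march_adaptive_eps(o, d, de, pixel_angle, eps0=1e-5,
                       t_max=50.0, max_steps=256):
    # The cutoff grows linearly with depth: a pixel's footprint is
    # about t * pixel_angle, and detail below that size can only
    # alias, so there is no point resolving it.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o[j] + d[j] * t for j in range(3))
        dist = de(p)
        eps = max(eps0, t * pixel_angle)
        if dist < eps:
            return t, eps   # eps doubles as the normal-sampling offset
        t += dist
        if t > t_max:
            break
    return None, None
```

Returning the final eps lets the shader sample normals at that same scale, which is what smooths detail away with distance.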

Math isn't the solution, math is the question.
eiffie
Guest
« Reply #7 on: December 27, 2013, 05:56:39 PM »

Yes, this has been done, Levi, but it is always good to have fresh eyes take another look at it. The linear and even exponential epsilon increase with distance goes back to Boxplorer at least. Lately in this thread we are using the idea to simulate multiple rays (determining the distance between rays at a given depth): http://www.fractalforums.com/programming/fast-fake-montecarlo-for-raymarching-(not-sure-what-to-call-it-yet)
Like you, I see how grouping adjacent pixels as you march is a great advancement on the CPU, but I'm still not sure how best to try it on a GPU.
Syntopia
« Reply #8 on: December 27, 2013, 08:48:22 PM »

Quote from eiffie: "Yes this has been done Levi but it is always good to have fresh eyes take another look at it. The linear and even exponential epsilon increase with distance goes back to Boxplorer at least."

Much older, actually. In Hart's 1989 paper on ray tracing deterministic 3-D fractals, he talks a bit about constant, linear, and quadratic epsilon (section 4.2):
https://www.cs.drexel.edu/~david/Classes/Papers/rtqjs.pdf

In fact, he also talks about anti-aliasing using DE:
http://mathinfo.univ-reims.fr/IMG/pdf/hart94sphere.pdf