Title: Help with errors.. Post by: eq on January 18, 2010, 09:27:06 AM
I'm still trying to improve my render quality and add more global-illumination-like lighting.
Here's one of the newer renders (11826x8280 pixels with 600 samples per pixel, 21 hours rendering time):
http://www.solidicon.se/eq/mb/Bugs_Small.jpg (larger version: http://www.solidicon.se/eq/mb/Bugs_Large.jpg)
I like the shading and all that, but I get some nasty errors that I think are caused by the DE-based ray marching:
http://www.solidicon.se/eq/mb/Bugs_Zoom.jpg
Has anyone had similar problems? I *think* the formula I shamelessly borrowed from somewhere in the original thread is the analytical sine version (too many variants for me to keep track of!). Anyway, here's a "clarified" (power 8) version of what I'm doing (in HLSL):

    void mandelbulbPower(inout float3 z, float zr0, inout float dr)
    {
        float3 t;
        t.x = asin(z.z / zr0);      // latitude angle (the "sine" variant)
        t.y = atan2(z.y, z.x);      // longitude angle
        t.z = pow(zr0, 7.0f);       // r^7
        float q = t.z * zr0;        // r^8
        t *= 8.0;                   // multiply the angles by the power (and r^7 by 8)
        dr *= t.z;                  // running derivative: dr = 8 * r^7 * dr ...
        dr += 1.0f;                 // ... + 1
        float2 sco;
        float2 sci;
        sincos(t.x, sco.x, sco.y);
        sincos(t.y, sci.x, sci.y);
        z = float3(sco.y * sci.y, sco.y * sci.x, sco.x);
        z *= q;
    }

    float estimateDistance(float3 z0)
    {
        float3 z = z0;
        float dr = 1.0f;
        float r = length(z);
        int i;
        for (i = 0; (r < 2.0f) && (i < 40); ++i)
        {
            mandelbulbPower(z, r, dr);
            z += z0;
            r = length(z);
        }
        return r * log(r) * 0.5f / dr;
    }

I've observed that the problems seem to occur when the ray just misses the surface (and later hits the surface). Things get better (though I've never got rid of it totally) if I use a smaller cut-off value (i.e. epsilon). I *think* this has to do with the fact that the DE is derived for the actual surface (i.e. epsilon = 0), so you get some errors when using other epsilons? Please go ahead and correct me; next to you gurus I feel really stupid... but I just had to write a renderer for myself. I'd love to delve into the maths of fractals but I simply can't afford that. I can easily see these things eating up all my precious time (my boss would fire me, my wife would leave me and my kid would hate me ;)).
/eq
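PS. For context, the marching loop itself is basically the usual DE stepping, roughly like this (a stripped-down sketch with simplified names; the real shader also does the shading and refines the hit with a binary search):

    float rayDist = 0.0f;
    while (rayDist <= failDistance)             // failDistance = "gave up, counts as a miss"
    {
        float3 p = rayOrigin + rayDir * rayDist;
        float de = estimateDistance(p);
        if (de <= epsilon)                      // the cut-off value mentioned above
            break;                              // close enough: treat as a hit
        rayDist += de;                          // step forward by the full estimate
    }
    bool hit = rayDist < failDistance;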
Title: Re: Help with errors.. Post by: David Makin on January 18, 2010, 01:47:35 PM
I'm guessing, but I think the problems you outlined are simply a matter of your bailout being too small. Remember that the distance estimate is an approximation based on the orbit approaching the infinite attractor, so the larger you make the bailout the more accurate the results. As a minimum I test the square of the magnitude against 128, and my default is 1024.
Edit: Of course the other possibility is that you're using floats only and the errors are simply round-off problems.
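Edit 2: To show what I mean by the first point, applied to the loop you posted the relevant part would become something like this (untested, just to show where the bailout test changes):

    float r = length(z);
    for (i = 0; (r * r < 1024.0f) && (i < 40); ++i)   // bail out on |z|^2 >= 1024 instead of r >= 2
    {
        mandelbulbPower(z, r, dr);
        z += z0;
        r = length(z);
    }
    return r * log(r) * 0.5f / dr;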
Title: Re: Help with errors.. Post by: cKleinhuis on January 18, 2010, 02:14:37 PM
Hi there. As far as I know there is NO need for a bailout condition when rendering on the GPU, because every source path is executed anyway, bailed out or not. And as far as I know, higher bailout values make a better DE possible. So just leave the bailout condition out of your GPU code: it only costs extra GPU time with no real performance benefit, and it limits the DE. By getting rid of the bailout condition you get a performance increase and a better DE!
cheers
Title: Re: Help with errors.. Post by: Buddhi on January 18, 2010, 06:22:14 PM
I also wrote a renderer myself and I had similar problems. I have two questions which may help you. Do you have a limit on the maximum number of steps (while finding the fractal surface using the DE)? And did you look at the step distances in these strange areas? Maybe the steps go to zero because you are using float-precision variables (not double). At such high image resolutions the steps can be very short.
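For example, with 32-bit floats (which have only about 7 significant digits) something like this can happen - just an illustration, not code from my renderer:

    float dist = 1.0f;                 // distance marched so far
    float stepDist = 5.0e-8f;          // a step far below float precision at this magnitude
    float newDist = dist + stepDist;   // newDist == dist: the tiny step is rounded away
                                       // and the ray stops making progress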
When I saw problems on my images I noted the pixel coordinates where something was wrong. Then I rendered only that one pixel, printing the step and distance values to the console. It was very helpful for me. Good luck in solving the problem!
Title: Re: Help with errors.. Post by: eq on January 19, 2010, 09:36:34 AM
Thanks for the tips guys!
Quote: "I'm guessing, but I think the problems you outlined are simply a matter of your bailout being too small."
I did some quick tests and setting the bailout to 1000 (that's 1000000 as the squared magnitude, right?) didn't seem to make any difference; the images were identical (at least in the problematic areas).

Quote: "As far as I know there is NO need for a bailout condition when rendering on the GPU, because every source path is executed anyway, bailed out or not."
I don't believe this is the case; early outs help speed. Anyway, I did try to remove the bailout, but then the graphics driver was reset because the shader took too long to complete. I did however manage to render some tiles of the test image, and those didn't correct the problem. So I think it's fair to conclude that the bailout value isn't the problem in this case.

Quote: "Do you have a limit on the maximum number of steps (while finding the fractal surface using the DE)?"
No, I don't terminate based on the number of steps; I terminate if the distance gets too big (10.0 in my case). I can't see the distance termination being the problem, since I've observed the errors when rendering the complete Mandelbulb as well as in the extreme close-ups.

Quote: "Maybe the steps go to zero because you are using float-precision variables (not double)."
I'm currently trying to test this. I want to make sure that my new position is different from the last; if it isn't, I need to step a bit further (I still refine the intersection point with a binary search at the end, so this overstepping might be OK?).

Quote: "Then I rendered only that one pixel, printing the step and distance values to the console. It was very helpful for me."
I'd love to, maybe even convert the HLSL code to C++ and compare a float and a double version side by side at the offending pixels. However, I will not allow myself to get sucked into this too much ;) so I'll probably just try to debug the pixels by other means.

Quote: "Good luck in solving the problem!"
Thanks!
Title: Re: Help with errors.. Post by: bib on January 19, 2010, 07:14:35 PM
Your images look so real! Congratulations :)
Title: Re: Help with errors.. Post by: Buddhi on January 19, 2010, 09:38:15 PM
Hi
Today I found exactly the same problem on one of my images. I rendered a 3D Julia in high resolution with a very low step factor (step = step_factor * DE). It was only 0.1, because when rendering Julia fractals there are lots of problems with overstepping; normally I render with higher step factors (from 0.5 to 1.0). I used the analytic version of the DE. On the attached image you can see these "filled" regions. I rendered one of these pixels and recorded the step distances and estimated distances (please find attached .txt file).
In my case the reason was very simple. I have a limit on the maximum number of steps, to avoid endless loops when the program calculates a wrong DE value. I had forgotten about this limit; it was needed when I first started experimenting with distance estimators. It was 1000, and I thought that would be enough for every image. When a ray goes nearly parallel to the fractal surface there are lots of very small steps. When the ray reaches this limit the program stops looking for the fractal surface and starts computing shading. These places are bright because the stepping algorithm stops very far from the fractal surface, and at those points there is nothing to produce shadows.
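In other words, the failure mode is roughly this (only a sketch with made-up names, not my actual code):

    float dist = 0.0f;
    for (int steps = 0; steps < 1000; ++steps)          // the forgotten safety limit
    {
        float de = estimateDistance(rayOrigin + rayDir * dist);
        if (de < threshold)                             // reached the surface
            break;
        dist += de * 0.1f;                              // step_factor = 0.1 -> lots of tiny steps
    }
    // when the loop exits because steps reached 1000, 'dist' is still far from the
    // surface, but the pixel gets shaded (and lit, with no shadow) anyway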
Title: Re: Help with errors.. Post by: David Makin on January 20, 2010, 12:06:08 AM
Just to say I have no limit on the number of steps used in my code. I do however have a limit on the minimum step distance to be used - when close to the solid this is often actually larger than the step distance that's calculated by scaling down the DE value.
Title: Re: Help with errors.. Post by: eq on January 20, 2010, 01:02:50 PM
Thanks for the responses.
I tried some things yesterday. I started out by skipping the shading and just shading based on distance, i.e. (dist * 10.0f) % 1.0f. The problematic parts showed a nice zebra pattern (no image here at work). This indicated that I got distances over 100.0 units! I thought about precision etc. and tried various things. I've always kept the step distance at DE (no scaling), so I tried both scaling down and scaling up; to my surprise things looked MUCH better using a step distance of 2xDE. It still doesn't work 100% as expected, but from messing up 25% of the image it went down to less than 1% I would say, so it's an improvement. Doing this however caused other artifacts (dust) to become worse.
Old image:
http://www.solidicon.se/eq/mb/LessProblems.png
New image (different size and lighting, sorry about that):
http://www.solidicon.se/eq/mb/MoreProblems.png
I think I also need to clarify how my ray marching code works. I imagine that some people do:

    d = distanceEstimate(pos);
    pos = pos + normalizedRayDir * d * scale;

I do it somewhat differently, because I think it will improve precision:

    pos = rayOrigin + normalizedRayDir * dist * scale;
    d = distanceEstimate(pos);
    dist = dist + d;

The case where I think it helps is when the ray direction has a component with a very small value, e.g. rayDir.x = 0.000000001. Using the first method, the new pos.x is something like: large_number (previous position) + small_number (rayDir component) * small_number (DE). The effect of the addition is no change in the component at all, due to internal precision (LargeNumber + SmallNumber = LargeNumber). In order to avoid getting stuck in an endless loop you then either need a maximum iteration count (which is bad, as we've seen) or to make sure that you get a new position at each step. I ensure it by comparing the new position with the old; if they are equal I do:

    while (all(oldPos == newPos))
    {
        d = d * 2;
        dist = oldDist + d;
        newPos = rayOrigin + rayDir * dist * scale;
    }

Anyway, I have a few more things I'd like to test. One good thing is that I switched back to an older HLSL compiler and now my shader compiles in less than a minute instead of over ten, so I can test things more easily.
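PS. To put numbers on the precision point above (illustration only, not shader code):

    float posX  = 2.0f;                    // current position component
    float moveX = 1.0e-9f * 0.001f;        // rayDir.x * DE, both tiny
    posX += moveX;                         // posX is still exactly 2.0 in 32-bit float,
                                           // so the incremental form makes no progress at all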
Title: Re: Help with errors.. Post by: David Makin on January 20, 2010, 02:11:14 PM
Hi,
Now you've described how you're doing the stepping, I've got a suggestion. Instead of this:

    Pos = Pos + vector * step

try this:

    alpha = alpha + step
    Pos = startPos + vector * alpha

Theoretically I think your method *should* actually be more accurate, but mine may be worth a try (I want the value of alpha anyway). Also, did you limit the size of the step to a definable minimum (dependent on what your solid threshold is)? (So if the scale factor is fairly small, the step distances approaching the surface are not as small as the scale would actually make them.)
Title: Re: Help with errors.. Post by: eq on January 20, 2010, 03:28:51 PM
That's exactly what I'm doing :)
I guess I wasn't clear enough, but I even tried to explain why I think that method has more accuracy...
Title: Re: Help with errors.. Post by: David Makin on January 20, 2010, 04:02:41 PM
Quote: "That's exactly what I'm doing :) I guess I wasn't clear enough, but I even tried to explain why I think that method has more accuracy..."
Apologies - I have a tendency to respond before fully digesting a post ;) In that case I'd recommend the "minimum step" method, since then you'll never get the zero-step problem. (Note however that using the minimum step method does necessitate a binary search to get the exact solid boundary, but the overhead for that is much less than using very small steps - at least on a CPU it is.)
Title: Re: Help with errors.. Post by: eq on January 20, 2010, 04:16:17 PM
I already do a binary search (currently a fixed 20 iterations), i.e.:
    maxDist = dist when DE < epsilon
    minDist = maxDist - 2 * epsilon
    for (i = 0; i < 20; ++i)
    {
        testDist = (minDist + maxDist) * 0.5f;
        if (DE(testDist) < epsilon)
        {
            maxDist = testDist;
        }
        else
        {
            minDist = testDist;
        }
    }
    return minDist;

What value do you use as a minimum stepping distance? A tweakable constant (i.e. whatever works best)? A value based on epsilon (e.g. 4 * epsilon)? Some other magic? BTW, thanks for all the pointers so far...
Title: Re: Help with errors.. Post by: David Makin on January 20, 2010, 04:45:22 PM
Quote: "What value do you use as a minimum stepping distance? A tweakable constant (i.e. whatever works best)? A value based on epsilon (e.g. 4 * epsilon)? Some other magic?"
I have a user-specifiable parameter as the minimum stepping distance, but it can also be set to "auto", in which case it gets set to "solid threshold / 2" irrespective of the DE scale value that gives the calculated step, i.e. the auto version never steps less than half the solid threshold distance - apart from when performing the binary search :)
Edit: Note that in your binary search my initial maxDist would be alpha where DE < threshold, and my minDist would be alpha minus the last step. In my binary search I simply do step = 0.5 * step and step backwards first (alpha = alpha - step), then either forwards or backwards, scaling the step by a further 0.5 each time - I stop when step <= the user's search minimum (you can also stop when abs(DE - threshold) < the user's search minimum).
Title: Re: Help with errors.. Post by: David Makin on January 21, 2010, 12:31:44 AM
Quote: "This indicated that I got distances over 100.0 units."
In the case of the DE producing very large estimates when it shouldn't (as happens if you hit the "dead points" on Julia sets), a fix I use is an array containing the current minimum distances stepped for the current ray at each iteration depth. The array is initialised for each ray to a maximum step distance (e.g. 1.0), and when I've got my new step distance (calculated, and set to the minimum if too small) I then use the following.
Where:
    miter    is the maximum number of iterations used on the ray at any point
    j        is the iteration count for the "found" solid point
    step     is the newly calculated step distance
    dists[]  is the array of minimum distances calculated (indexed by iteration count)
    mdist    is the absolute minimum step distance used on the ray so far
             (initialised for each ray to the same value as in dists[])

    if j <= miter
        if step < dists[j]
            repeat
                dists[j] = step
                j = j + 1
            until j > miter || step >= dists[j]
            if j > miter
                mdist = step
            endif
        else
            step = dists[j]
        endif
    else    ; i.e. j > miter
        miter = miter + 1
        if step > mdist
            step = mdist
        endif
        while miter < j
            dists[(miter = miter + 1)] = mdist
        endwhile
        dists[j] = step
    endif

What this does is ensure we never step too far due to the dead points in the gradient. It does slow down rendering somewhat, on "holey" and disconnected fractals in particular, but it's generally not as great an overhead (on the CPU) as you might imagine.
The reason for storing the minimum distances per iteration depth is to ensure that once a ray has "missed", the step distance will still increase as we accelerate away from the fractal; obviously, if you just stored and tested the steps against the current minimum distance found previously on the ray, rendering would be incredibly slow.
Edit: Have you also considered what happens in your code if the derivative value is very small - so small, in fact, that when you calculate the DE you get overflow? (This could happen in some places, especially on "holey" and disconnected Julias, but I think in some spots on Mandelbrots as well.) This may well explain your errors, since things improved when you increased your step sizes - the bad spots would simply be hit less often when using larger steps.
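Edit 2: Whether it's the dead points or overflow, a much cruder guard in the same spirit as the array method - if the array itself is awkward to implement - would be simply to cap the estimate before stepping. Just a sketch, and the 1.0 here is an arbitrary cap (the same sort of value I initialise the dists[] array to):

    float de = estimateDistance(p);
    de = min(de, 1.0f);     // never trust a very large estimate: cap the step
    dist += de;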
Title: Re: Help with errors.. Post by: eq on February 04, 2010, 02:09:24 PM
Sorry for not replying earlier, but as I said previously, the time I allow myself to work on this is limited ;)
Thanks for your solution David; unfortunately I can't use arrays in my HLSL code, which makes your solution very hard to implement.
The problem I have right now is so weird I'm going crazy. In essence: better quality (i.e. fewer "misses") equals smaller step distances, right? I.e. stepping DE * 0.1 should always give at least as good results as stepping DE? This is what I get:
http://www.solidicon.se/eq/mb/Error.jpg
The ray marching code I use is:

    bool hitTest(out float rayDist, float3 rayOrigin, float3 rayDir, float solidThreshold, float intersectionFailDistance)
    {
        rayDist = 0.0f;
        [loop]
        while (rayDist <= intersectionFailDistance)
        {
            float3 testPoint = rayOrigin + rayDir * rayDist;
            float solidDist = estimateMandelbulbDistance(testPoint);
            if (solidDist <= solidThreshold)
                break;
            // solidDist *= 0.1f;
            rayDist += solidDist;
        }
        return rayDist < intersectionFailDistance;
    }

Scaling the solidDist as shown above (the commented-out line) gives me the problem displayed. How could that be? Am I chasing some sort of precision problem here?
Something like: ... return r * log(r) * 0.5f / (dr + maxThreshold) + solidThreshold * 0.2; and: maxThreshold = bailoutR * solidThreshold * 0.01 Or try out better values, but thresholding is essentially, otherwise you are stepping in flat regions, for example beside the star, direct to infinity! Correction: dr is never below 1.0, so maxThreshold should not be necessary. Your method differs from the ones i knew: Accumulating dr should result in accumulating inaccuracies in each iteration? Wondering if this is still working in different areas and with higher iteration counts. Anyway: Bailout R of 2.0 is too low, i have artefacts just between the minibulbs in the 2D view, i would go up to 3.0 or more. Title: Re: Help with errors.. Post by: David Makin on February 04, 2010, 09:23:14 PM How could that be? Am I chasing some sort of precision problem here? I would think that is the problem - compared to my implimentation your code is missing an important step, in English: If new step distance<minimum allowed step distance then new step distance = minimum allowed step distance The minimum allowed step distance to use depends on the resolution/magnification/threshold but generally if using 0.1*DE as the step distance then using 0.1*threshold for the minimum step should work - provided of course that 0.1*threshold is not out of accuracy and actually zero :) Title: Re: Help with errors.. Post by: hobold on February 05, 2010, 03:55:13 PM Clipping the step distance to a lower bound is not the only alternative. For GPUs and other SIMD machines, you can do:
position += DE + minimum_step; and save the conditional branch. Or if your renderer is voxel based, you know that there is a lower size limit on the visible details, hence some minimal step size is guaranteed not to introduce arbitrarily large errors (those were introduced already when you sampled the volume). Title: Re: Help with errors.. Post by: David Makin on February 06, 2010, 02:20:28 AM Thanks for you solution David, unfortunately for me I can't use array's in my HLSL code which makes your solution very hard to implement. Is it not possible to use a texture as a modifiable array ? Or are textures read only ? My knowledge of GPU programming is currently limited - so far all I've done is a 2D Mandelbrot renderer in shader 2 (just as a test on the iPhone). |