Welcome to Fractal Forums

Fractal Software => 3D Fractal Generation => Topic started by: Syntopia on September 28, 2012, 05:52:36 PM




Title: Rendering 3D fractals without distance estimators
Post by: Syntopia on September 28, 2012, 05:52:36 PM
It is not always easy to come up with a distance estimator for a 3D fractal. I've seen quite a few requests for drawing fractals where the defining function only tells you whether a given point is inside or not, e.g. functions of the form:

bool inside(vec3 point);

I decided to try out some simple brute-force methods to see how they would compare to the DE methods. Contrary to my expectations, it turned out that you can actually get reasonable results without a DE.

The method I used was simply to sample random points along the camera ray for each pixel. Whenever a hit is found on the camera ray, the sampling will proceed only on the interval between the camera and the hit point (since we are only interested in finding the closest hit). This way you end up with a depth map, like:

(http://hvidtfeldts.net/brute2.jpg)
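For reference, here is a minimal sketch of the sampling loop described above (not the actual Fragmentarium script; inside(), rand() and the camera ray are assumed to exist, and the names Near, Far and Samples are only illustrative):

Code:
// Returns the depth of the closest hit found for this pixel's camera ray.
// The search interval shrinks to [Near, best] as hits are found.
float traceBrute(vec3 from, vec3 dir) {
    float best = Far;                        // Far means "no hit yet"
    for (int i = 0; i < Samples; i++) {
        float t = Near + rand(viewCoord*float(i+1))*(best - Near);
        if (inside(from + t*dir)) best = t;  // closer hit: narrow the interval
    }
    return best;
}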

In order to improve the quality, I then calculated surface normals from the depth buffer and pixel positions, and used them to do standard Phong shading. I also applied a simple screen space ambient occlusion (and glow) pass to improve depth cueing. This results in:

(http://hvidtfeldts.net/brute1.jpg)
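The screen-space shading step is conceptually simple; as a rough sketch (assuming the eye-space position of each pixel has already been reconstructed from the depth buffer), normals can be obtained with hardware derivatives:

Code:
// Screen-space normal from reconstructed pixel positions (sketch only;
// the actual DepthBufferShader may differ).
vec3 screenSpaceNormal(vec3 pos) {
    return normalize(cross(dFdx(pos), dFdy(pos)));
}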

You can do this at interactive frame rates in Fragmentarium, and I further improved responsiveness by using progressive rendering: take a number of samples, store the best solution found so far (the closest hit) in a depth buffer (I use the alpha channel), render the frame, and repeat.
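In pseudo-GLSL, the progressive step looks roughly like this (illustrative names only; the backbuffer texture is the previous frame, with the best depth carried in its alpha channel):

Code:
vec4 prev = texture2D(backbuffer, coord);   // previous frame for this pixel
float best = prev.a;                        // closest hit found so far
// ...take a few new random samples in [Near, best], possibly lowering 'best'...
// ...shade using the depth buffer...
gl_FragColor = vec4(shadedColor, best);     // pass the improved depth on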

I'll add the brute-force tracer to the Fragmentarium repository, as soon as the script is cleaned up a bit.



Title: Re: Rendering 3D fractals without distance estimators
Post by: eiffie on September 28, 2012, 06:27:22 PM
Thanks for all your work! This will be great for trying new functions. I remember my first renderer was brute force but not nearly with these results. Glad you gave it another look.


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on September 29, 2012, 01:23:20 AM
Wow, this works much better than I would have expected. Great find! Hmm ... I wonder if some statistical techniques could be used here? I.e. controlling the randomness a bit, to speed up convergence towards the "exact" solution ...?


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on September 29, 2012, 05:08:07 AM
Great!

Currently I'm using accumulative fixed-step raymarching for trying out weird 3D formulas without using a DE, like in this "3D-rotated Julia" :

(https://dl.dropbox.com/s/iuttdkotcfb0lbu/plasmajuliaexample.jpg?dl=1)

But I always wanted a solid shader for formulas without a DE, and I was thinking of modifying your Fragmentarium default raytracer to do this (or at least trying to).

I'm glad you already did it, and that it works so well! Can't wait to try it.




Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on September 29, 2012, 05:38:31 AM
The method I used was simply to sample random points along the camera ray for each pixel. Whenever a hit is found on the camera ray, the sampling will proceed only on the interval between the camera and the hit point (since we are only interested in finding the closest hit).

  This sounds similar to what ChaosPro does.  There are a couple of cool ideas in the documentation (click for documentation) (http://www.chaospro.de/documentation/html/fractaltypes/quaternions/parmparm.htm).  Read the "Resolution", "Start Scan", and "Precision" parts.

  It makes ChaosPro an excellent piece of software for quickly whipping up a new 3D formula without having to do the extra math to derive a DE, especially with some of these crazy formulas.... :)


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on September 29, 2012, 09:06:19 AM
Wow, this works much better than I would have expected. Great find! Hmm ... I wonder if some statistical techniques could be used here? I.e. controlling the randomness a bit, to speed up convergence towards the "exact" solution ...?

I think it could.  I already found that a stratified approach, where the camera ray segment is divided into equal pieces and a sample is chosen from each piece, reduces noise. You could also probably gain speed by sampling closer to the solution found so far. Since I use a double-buffered progressive rendering approach, at the start of each frame I could actually read the adjacent pixels' depths and sample closer to these. On the other hand, if you bias the search you are probably more likely to miss subtle details closer to the camera.
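A sketch of the stratified variant (using the same illustrative names as the earlier sketch): the interval in front of the best known hit is split into equal strata, and one jittered sample is drawn from each:

Code:
float lo = Near, hi = best;                 // freeze the interval for this pass
float dt = (hi - lo)/float(Samples);
for (int i = 0; i < Samples; i++) {
    float t = lo + (float(i) + rand(viewCoord*float(backbufferCounter*Samples + i + 1)))*dt;
    if (t < best && inside(from + t*dir)) best = t;
}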

Also the function I use for random numbers:
Code:
float rand(vec2 co){
    // implementation found at: lumina.sourceforge.net/Tutorials/Noise.html
    return fract(sin(dot(co.xy, vec2(12.9898,78.233))) * 43758.5453);
}

is quite primitive, and will show visible artifacts. There is no easy solution in GLSL here: I probably should use a precalculated texture with random values (I do have a floating point texture loader), but I'm too lazy.

Currently I'm using accumulative fixed-step raymarching for trying out weird 3D formulas without using a DE, like in this "3D-rotated Julia" :

I know :-) Actually, I decided to try this because I saw that you were able to successfully use brute-force fixed-step rendering on your great volumetric renders. The worst part about these brute-force solids is that you need to rely on screen space lighting - this introduces artifacts: e.g. when you do tile rendering you get visible transitions near the borders of the tiles.

  This sounds similar to what ChaosPro does.

Yes, it seems to be the same approach - I just don't choose a fixed step size, so there are no bounds on the resolution: the image will converge towards the true set.

Just to make it clear: brute-force ray tracing is not a novel idea - it was indeed the simplest method I could think of. What is perhaps surprising is that it is fast enough to achieve interactive frame rates on a GPU.


Title: Re: Rendering 3D fractals without distance estimators
Post by: subblue on September 29, 2012, 11:40:30 AM
Very interesting. I'm curious how the number of steps for the brute force approach differs from the DE method?


Title: Re: Rendering 3D fractals without distance estimators
Post by: KRAFTWERK on September 29, 2012, 01:31:27 PM
Wow! I've always wanted to test brute-force renders of the Mandelbulb "to see what it REALLY looks like".
The Mandelbulb you rendered has such beautiful details, Syntopia.
Got to try out Fragmentarium again!


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on September 30, 2012, 09:21:44 AM
Very interesting. I'm curious how the number of steps for the brute force approach differs from the DE method?

There are no steps as such - you just choose a number of samples for each camera ray. A fractal such as the Mandelbulb becomes recognizable/navigable at about 50 samples, and has good resolution at ~500-1000 samples.

Wow! I've always wanted to test brute-force renders of the Mandelbulb "to see what it REALLY looks like".
The Mandelbulb you rendered has such beautiful details, Syntopia.
Got to try out Fragmentarium again!

Thanks Kraftwerk. There is still no guarantee that you won't miss detail, though - subtle and thin structures may be overlooked in the sampling process. So it is still just an approximation.

Btw, here is another test with Aexion's MandelDodecahedron:
(http://blog.hvidtfeldts.net/media/Dodeca.jpg)


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on September 30, 2012, 02:04:59 PM
And shouldn't this randomized brute force method be effective for robust computation of inside views? It wouldn't be as fast, of course, since most samples iterate to the limit, but perhaps still fast enough.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on September 30, 2012, 10:03:52 PM
I've written a blog post with some more information and links to code:
http://blog.hvidtfeldts.net/index.php/2012/09/rendering-3d-fractals-without-a-distance-estimator/

And shouldn't this randomized brute force method be effective for robust computation of inside views? It wouldn't be as fast, of course, since most samples iterate to the limit, but perhaps still fast enough.

Yes, I just tried it, and it works nicely. Just invert the 'inside' function. As you say, it is a lot slower (~3-4x), since all the points tested are inside.


Title: Re: Rendering 3D fractals without distance estimators
Post by: cKleinhuis on October 01, 2012, 12:38:02 AM
nice, i like it very much, this way it is far easier to try out hilarious new functions, thx for keeping the program in steady development!


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on October 01, 2012, 02:08:56 AM
About quick and dirty random number generation ... can't we use a chaotic iterated function for that? Like Feigenbaum's logistic map at parameter value 4, or something like that. Quick to compute, and guaranteed to be chaotic. That might be good enough.
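For illustration, the map itself is a one-liner (a sketch only; the values are chaotic but not uniformly distributed, so some remapping would probably be needed):

Code:
// Logistic map at parameter 4: x -> 4x(1-x), chaotic for almost all seeds in (0,1).
float logisticNext(float x) {
    return 4.0*x*(1.0 - x);
}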


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on October 01, 2012, 04:03:29 AM
About quick and dirty random number generation ... can't we use a chaotic iterated function for that? Like Feigenbaum's logistic map at parameter value 4, or something like that. Quick to compute, and guaranteed to be chaotic. That might be good enough.

Back in my early programming days, I developed a random number generator based on irrational numbers... I'll try to remember how it worked, but I think the single-precision limitation could be a problem.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on October 01, 2012, 07:04:16 AM
Ok, I just tested the raytracer with my "rotjulia" formula, and I'm very excited with the result:

(https://dl.dropbox.com/s/xdikkc2zx16qr53/rotjuliatestff.jpg)


This is the "inside" function for using with the tracer:   (requires complex.frag and mathutils.frag)

Code:
bool inside(vec3 z) {
    mat3 rot=rotationMatrix3(normalize(RotVector),RotAngle);
    int i=0;
    float r;
    while (i<Iterations) {
        z.xy=cMul(z.xy,z.xy);
        z*=rot;
        z+=JuliaC;
        r=length(z);
        i++;
        if (r>Bailout) {
            return false;
        }
    }
    return true;
}

This algorithm takes x and y coordinates as a complex number, squares it, does a 3D rotation and adds julia constants.



Too bad the lighting doesn't work well with tiled large renders, but it's an awesome tool for exploring new formulas and ideas...

Thanks Mikael, good job!!





Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on October 01, 2012, 12:11:51 PM
There is fairly little noise even on the smooth parts of Kali's posted image. I think that last bit could be cured with the usual bisection method.

That is, during the random iterations, we already keep track of the closest point on the ray that is beyond the surface; additionally one would keep track of the farthest point that is just in front of the surface. After the limit of random samples has been reached, we end up with a small interval where we "know" the surface must be.

That interval's length can recursively be halved by evaluating its midpoint, and then adaptively moving the near or the far border.
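A sketch of that refinement (tIn, tOut and BisectionSteps are illustrative names: tOut is the farthest known outside point in front of the hit, tIn the closest known inside point):

Code:
for (int i = 0; i < BisectionSteps; i++) {
    float tm = 0.5*(tOut + tIn);            // midpoint of the bracketing interval
    if (inside(from + tm*dir)) tIn = tm;    // hit: move the inside bound closer
    else tOut = tm;                         // miss: move the outside bound in
}
// the surface now lies somewhere in the much smaller interval [tOut, tIn]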


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on October 01, 2012, 08:33:11 PM
wrt random number generators:

The problem is not to find something that generates a nice sequence - you could use linear congruential generators or something similar for that.

The problem is that you cannot easily store the state (the previous output) between frames (which is needed for progressive rendering) - there is no room for that in the four components per pixel that are passed between frames. It might be possible to render to multiple targets, or to an FBO attachment, but it would complicate the rendering code.

So I need something which returns a unique number as a function of position and frame count. This is why I call my random function with something like rand(viewCoord*float(backbufferCounter*i)). As you can guess, this creates a lot of visual correlation. It would be possible to create a precalculated texture with high-quality random numbers, but I would still need a nice way of looking them up based on position and frame count.
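One possible (purely hypothetical) lookup scheme would be to offset into such a texture with a per-frame shift, so each frame sees a different slice of the noise:

Code:
uniform sampler2D randomTexture;  // assumed: pre-filled with uniform random values
float texRand(vec2 pos, int frame) {
    // irrational offsets so consecutive frames don't repeat the same pattern
    vec2 offset = fract(float(frame)*vec2(0.61803399, 0.75487767));
    return texture2D(randomTexture, fract(pos + offset)).r;
}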

wrt bisection

Yes, it could be used as a post-processing pass to improve quality, but then you would have to put a limit on the number of random samples. There is also a practical problem, since there is no room for storing the farthest outside point (as of now I use xyz for color, and w for the closest inside point). This could be circumvented, though.

But I would rather pursue the idea of looking at adjacent pixels, since this means you might reuse some of the calculations and accelerate the rendering.


 



Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on October 01, 2012, 10:06:55 PM
I tried sampling the neighbors at the start of each frame - this did speed up the rendering and removed a lot of noise:

(http://hvidtfeldts.net/blog/media/Sample.jpg)
(Low iteration Mandelbulb for smooth surface)

The image is rendered using 6 frames with 20 samples each: the lower one has SampleNeighbors enabled, the upper one hasn't.
The updated code is in the repository.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on October 02, 2012, 12:27:57 AM
Excellent, it works even faster now.

Btw, I tried to implement some coloring using the orbitTrap variable and it worked, but there is a flickering noise in realtime, and the render is also noisier than the monochrome version:

(https://dl.dropbox.com/s/xfrqwj9ib27g90f/rojuliamonocolor.jpg)



Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on October 02, 2012, 02:39:42 PM
I found a paper on an unusual type of pseudorandom number generator that might be interesting in this context:

http://www.thesalmons.org/john/random123/papers/random123sc11.pdf

The generators are stateless, conforming to an interface like

result = PRNG(seed, N)

yielding sequences of pseudorandom numbers as N is incremented. Statistical qualities and speed are said to be competitive with ordinary stateful generators.
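A minimal sketch of what such a stateless, counter-based interface could look like in GLSL - using a trivial hash as a stand-in, with none of the statistical guarantees of the paper's Philox/Threefry generators:

Code:
// result = prng(seed, n): the same seed and counter always give the same value,
// so no state has to be stored between frames.
float prng(vec2 seed, float n) {
    return fract(sin(dot(seed, vec2(12.9898, 78.233)) + n*91.3458) * 43758.5453);
}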

Then again, this is probably overkill ... :)


Title: Re: Rendering 3D fractals without distance estimators
Post by: David Makin on October 03, 2012, 03:33:07 AM
Personally I still like the Mersenne Twister ;)


Title: Re: Rendering 3D fractals without distance estimators
Post by: David Makin on October 03, 2012, 04:09:16 PM
Testing this rendering method would be more stressful if done using Mandelboxes or similar rather than Mandelbulbs, as its worst case will be dust-like fractals rather than connected ones.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on October 03, 2012, 08:36:09 PM
Btw, I tried to implement some coloring using the orbitTrap variable and it worked, but there is a flickering noise in realtime, and the render is also noisier than the monochrome version:

Yes, I didn't implement the orbitTrap thing - I'll reimplement it, but consider using the 'providesColor' approach:
Code:
#define  providesColor
#include "Brute-Raytracer.frag"

vec3 color(vec3 point, vec3 normal) {
  ....
}
This will be faster anyway, since you don't need to evaluate color at each sample.

I found a paper on an unusual type of pseudorandom number generator that might be interesting in this context:
http://www.thesalmons.org/john/random123/papers/random123sc11.pdf

This is the kind of thing I was looking for - I looked at their code (http://www.deshawresearch.com/resources_random123.html), but it looks very difficult to port to GLSL.

Btw, I noticed it is a D. E. Shaw paper - I have been following his work for some time. He is an interesting person - he became a multi-billionaire running a Wall Street investment company, then returned to science, assembled a team of very skilled scientists, and is now building his own supercomputer and software for running molecular dynamics simulations.

Personally I still like the Mersenne Twister ;)
But it really needs a lot of state data :-)

Testing this rendering method would be more stressful if done using Mandelboxes or similar rather than Mandelbulbs, as its worst case will be dust-like fractals rather than connected ones.

The Mandelbox renders just fine, but I also thought about this - you are sampling a volume, so something like a thin sphere shell (which is trivial with a DE) wouldn't render at all.
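For example (illustrative only), a thin spherical shell has a trivial DE but an 'inside' test that random volume samples will essentially never satisfy:

Code:
float shellDE(vec3 p)     { return abs(length(p) - 1.0) - 0.001; } // easy to ray-march
bool  shellInside(vec3 p) { return abs(length(p) - 1.0) < 0.001; } // ~never sampled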


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on October 03, 2012, 09:12:58 PM
Yes, I didn't implement the orbitTrap thing - I'll reimplement it, but consider using the 'providesColor' approach:
Code:
#define  providesColor
#include "Brute-Raytracer.frag"

vec3 color(vec3 point, vec3 normal) {
  ....
}


Tried, but I got this error:

Code:
Could not create fragment shader: Fragment shader failed to compile with the following errors:
ERROR: 1:315: error(#143) Undeclared identifier hit
ERROR: 1:315: error(#143) Undeclared identifier hitNormal
ERROR: 1:315: error(#202) No matching overloaded function found color
ERROR: 1:315: error(#202) No matching overloaded function found mix
WARNING: 1:315: warning(#402) Implicit truncation of vector from size 1 to size 3.
ERROR: error(#273) 4 compilation errors.  No code generated

The problem seems to be that you call the color function with the hit & hitNormal variables but they are not defined.


EDIT: I fixed it by calling the color function with the result of the "point" variable used in the calculations.


Title: Re: Rendering 3D fractals without distance estimators
Post by: cbuchner1 on October 03, 2012, 10:21:26 PM
Btw, I tried to implement some coloring using the orbitTrap variable and it worked, but there is a flickering noise in realtime, and the render is also noisier than the monochrome version:

Oh, this is cool. I shall try to grab your code and put my "plancton" formula in there. Essentially a modification of the triplex multiplication that somehow tilts the rotation axis. Unfortunately the links to the software on the CUDA forums are down (they had a security breach). If I remember correctly, my CUDA accelerated implementation used quaternions.

Here's a link to the fractalforums thread where I show some of the plancton-like bulbs...
http://www.fractalforums.com/mandelbulb-renderings/a-new-class-of-bulb/


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on October 03, 2012, 11:03:01 PM
EDIT: I fixed it by calling the color function with the result of the "point" variable used in the calculations.

Exactly - the normal is not even calculated in that shader anymore. Perhaps I should try running my own code sometimes, instead of assuming it works :-)

I checked in a bug fix.


Title: Re: Rendering 3D fractals without distance estimators
Post by: David Makin on October 04, 2012, 11:43:34 PM

The Mandelbox renders just fine, but I also thought about this - you are sampling a volume, so something like a thin sphere shell (which is trivial with a DE) wouldn't render at all.

Correct - of course a swiss-cheese in this case gives the same issues as a dust ;)

To be honest, I think the method outlined in this thread, but partially directed using, say, iteration count (and possibly iteration density), is likely to be optimal - plus it could overcome the problem of missed thin structures.


Title: Re: Rendering 3D fractals without distance estimators
Post by: tomaya on October 21, 2012, 10:33:08 AM
Ok, I just tested the raytracer with my "rotjulia" formula, and I'm very excited with the result:

(https://dl.dropbox.com/s/xdikkc2zx16qr53/rotjuliatestff.jpg)







   Hi Kali,
I'm completely new here and to the fractal world. This picture is really amazing, and I wonder if it would be possible
to get the same result with Mandelbulb 3D... and if you would share this formula (or not  ;D)...

20 years of 3D VFX, but it's the first time I've tried the fractal tools. You can call me "junior"  :D

Tom



Title: Re: Rendering 3D fractals without distance estimators
Post by: cbuchner1 on October 22, 2012, 06:23:12 PM
Got the latest Fragmentarium source code via GIT, compiled the Qt libraries in 64bit, created a Visual Studio 2010 solution that also builds Fragmentarium in 64bit and behold - it runs!

When I select Examples/Tutorials/26 - 3D Fractal without DE.frag I get the following yellow log messages (warnings?)

Could not find: AlternateVersion
Could not find: AntiAliasScale
Could not find: ColorIterations
Could not find: GaussianWeight

Is this something to worry about?


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on October 22, 2012, 09:05:36 PM
Got the latest Fragmentarium source code via GIT, compiled the Qt libraries in 64bit, created a Visual Studio 2010 solution that also builds Fragmentarium in 64bit and behold - it runs!

When I select Examples/Tutorials/26 - 3D Fractal without DE.frag I get the following yellow log messages (warnings?)

Could not find: AlternateVersion
Could not find: AntiAliasScale
Could not find: ColorIterations
Could not find: GaussianWeight

Is this something to worry about?


No - these warnings occur when a preset value tries to change a uniform which is no longer available - for instance if some raytracer parameters have been renamed, or if you change the raytracer to another one with different parameters. There are loads of these warnings, and I guess at some point I should try to clean them up :-)


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on October 23, 2012, 12:22:27 AM
   Hi Kali,
I'm completely new here and to the fractal world. This picture is really amazing, and I wonder if it would be possible
to get the same result with Mandelbulb 3D... and if you would share this formula (or not  ;D)...

20 years of 3D VFX, but it's the first time I've tried the fractal tools. You can call me "junior"  :D

Tom

Thanks tomaya, the formula is the standard 2D Julia (z=z*z+c), but with 3D rotations added at each iteration. I tried to render this with distance estimation and couldn't find a way to make it work, so it was the first thing I wanted to try with Syntopia's brute-force renderer. I don't know how Mandelbulb3D could handle it, because it's a distance estimation renderer, so I don't want to bother anyone for an M3D implementation that most likely isn't going to work properly.
Anyway, thanks for your interest and welcome to the forum  :)



Title: Re: Rendering 3D fractals without distance estimators
Post by: Alef on October 28, 2012, 05:05:14 PM
If anyone is interested, here is the ChaosPro help file explaining how it optimises the no-DE brute-force method.
 
Quaternion Parameters
 
Here you can define all parameters which describe the 3D space to calculate:
You can exactly specify the 3D space to use. Additionally you can specify how this 3D space gets scanned by ChaosPro: simply scanning the whole 3D space would be possible, but awfully slow. So I had to apply some logic in order to reduce calculation times.

Let's describe the parameters:

Normal vector N and base vector V: These two 4D vectors describe a 3D space in normal form:
The 3D space consists of all 4D points P, which satisfy N·(P-V)=0

(where · means dot product: [a1,a2,a3,a4]·[b1,b2,b3,b4] = a1*b1+a2*b2+a3*b3+a4*b4 )

If you do not want to bother with mathematical meaning: Simply play around with the values: They affect the shape of the fractal.
The following parameters are somewhat difficult to explain: They are the result of a strange algorithm to scan the 3D space. The algorithm is "strange" because I had to speed up the calculation time.
Resolution: As the name implies, this specifies some kind of resolution. To be more exact: it specifies the resolution in the z-direction as a multiple of the fractal width (so on resizing the fractal you won't have to change this value: it will be adjusted according to the size of the fractal). For each pixel in the window ChaosPro calculates the ray which starts at the observer, goes through this pixel and ends at the transformation plane.
This ray gets examined using the specified resolution. In short terms: Increasing this value enhances the image quality and slows down calculation speed: If there are too many "holes" in the fractal, or if the fractal looks more like a collection of points rather than a solid object, then it is a good idea to increase the resolution.
Start Scan:
This parameter speeds up the calculation:

Most of the time a quaternion fractal is a smooth object. So if we want to examine a ray in order to find the border of a quaternion object, and we already know the distance of the neighbouring pixels, we can assume that the pixel in question has nearly the same distance.

So there is no need for ChaosPro to start searching at the beginning of the ray, i.e. at the observer. Instead, it can start at a distance near the neighbouring pixels. And that's what this parameter is meant for:

Assume the neighbouring pixels are at distance d and Start Scan has been set to 10 (parameter is specified in percent, thus this means 10%).
Then ChaosPro starts scanning at d-d*Start Scan/100, or in other words, ChaosPro goes back 10% (=Start Scan) of the distance and starts scanning from there.

This greatly speeds up calculation times: Just watch the fractal calculation: The very first line of a quaternion, where *no* information about the fractal is available, needs much longer than all the succeeding lines, where this neighbouring information is available.

But the drawback is: Sudden big jumps in a quaternion object won't be detected. So play around with this parameter, set it to 100 in order to "disable" the feature (this means: Go back 100%, i.e. to the observer, i.e. start from scratch...).

If you have a very strange object with many holes and cliffs and whatever, I would suggest to increase this parameter, although the calculation time then increases, too.

Precision: The resolution basically should be at least about 20 times the fractal width.
If the resolution is set to 2, then the fractal object would seem to have "steps" in it, which disappear when the resolution is increased. If you want to see what I mean, choose Precision=1 and resolution=2, then increase the resolution. Do you see the steps? And do you see how they disappear when you increase the resolution?
Explanation: ChaosPro must determine the distance between a quaternion pixel and the observer. Let's say that ChaosPro scans a ray of length 4 at 200 points. That means every 0.02 (=4/200) units a point gets tested for whether it belongs to the quaternion or not. But now imagine a pixel is 1.009283 units away!
ChaosPro would test whether the pixel 1 unit away belongs to the quaternion:
Result: no
Now ChaosPro continues and tests the pixel 1.02.
Result: yes

And due to this effect steps appear in the fractal when the resolution is too low.
So basically we need to increase the resolution by some magnitude, but we do not want to increase the calculation times.
So a trick gets applied:
After having found a pixel, ChaosPro uses a second pass to determine the exact distance of that pixel. In short: when ChaosPro finds a pixel in step i, it knows the object is hit somewhere between the previous step and the current step. It then tests the middle of that interval, and then knows whether the object is hit in the lower or upper half. It continues doing so Precision times. In more technical terms: it uses a bisection algorithm of depth "Precision" to find out where the object is actually hit by the current ray.
Let's take the previous example:

ChaosPro scans a ray of length 4 at 200 points.
That means, each 0.02 (=4/200) unit a pixel gets tested whether it belongs to the quaternion or not.
Assume Precision has been set to 8.
Assume there is a pixel 1.009283 units away.

ChaosPro would test whether the pixel which is 1 unit away belongs to the quaternion:
Result: no
Now ChaosPro continues and tests the pixel 1.02 (next step).
Result: yes
Now ChaosPro starts with the second pass:
Iter   Min        Max        Middle     Test at middle
1      1.000000   1.020000   1.010000   Y
2      1.000000   1.010000   1.005000   N
3      1.005000   1.010000   1.007500   N
4      1.007500   1.010000   1.008750   N
5      1.008750   1.010000   1.009375   Y
6      1.008750   1.009375   1.009063   N
7      1.009063   1.009375   1.009219   N
8      1.009219   1.009375   1.009297   Y
And after only 8 further iterations ChaosPro knows that the border of our quaternion is at about 1.009297, which is quite near the real distance of 1.009283.

Conclusion: Precision artificially increases the resolution. The scan resolution basically is Resolution * 2^Precision * Width_of_fractal
Setting precision to 0 disables the bisection algorithm.
Roughness - lets you specify an artificial roughness of the surface by modifying the angle at which the light falls onto the surface. If you set it to 0.1 then the surface seems to be a little bit rough. 0 turns roughness off; the pure fractal surface will appear without any special modification.
Gamma - the gamma value: If the object seems to be too dark, it's a good idea to adjust the gamma value and make it smaller.
Contrast - lets you adjust the contrast of the image

Inside
If checked ChaosPro will render the inside of a quaternion. This of course only works if the camera is "inside" the quaternion. The difference between the "inside" and "outside" view is when ChaosPro thinks it has found the border of the object:

Outside View: ChaosPro scans a ray starting from the camera position until it finds a point which is "inside".
Inside View: ChaosPro scans a ray starting from the camera position until it finds a point which is "outside".

The inside view might not work for all formulas: For example most distance estimation formulas only work for points outside an object. The standard ray scan algorithm should work in any case, as it only performs a stupid, slow, complete ray scan.



Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 01, 2012, 05:59:29 AM
(https://dl.dropbox.com/s/xdikkc2zx16qr53/rotjuliatestff.jpg)


Quote
This algorithm takes x and y coordinates as a complex number, squares it, does a 3D rotation and adds julia constants.

Too bad the lighting doesn't work well with tiled large renders, but it's an awesome tool for exploring new formulas and ideas...

Thanks Mikael, good job!!


Hi Kali;

I just came across this image and it's stunning! I downloaded Fragmentarium and was wondering if you could outline the steps involved in getting what you created? It's a good way for me to learn. ;)

Thanks,
-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on November 01, 2012, 06:35:49 AM
Hi Rich, here is the .frag file, just play with the sliders ;D

If you are a programmer, I can explain later what the code does...

Thanks for your interest!


Hi Kali;

I just came across this image and it's stunning! I downloaded Fragmentarium and was wondering if you could outline the steps involved in getting what you created? It's a good way for me to learn. ;)

Thanks,
-Rich



Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 01, 2012, 06:59:02 AM
Hi Kali;

Thank you so much!

I tried to run it just now but it reported a bunch of errors:

Could not find: Z
Could not find: FloorHeight
Could not find: FloorNormal
Could not find: FocalPlane
Could not find: Fog
Could not find: FudgeFactor
Could not find: Gamma
Could not find: GaussianWeight
Could not find: Glow
Could not find: GlowMax
Could not find: GradientBackground
Could not find: HardShadow
Could not find: Iterations
Could not find: Julia
Could not find: JuliaC
Could not find: MaxRaySteps
Could not find: NormalBackStep
Could not find: OrbitStrength
Could not find: Power
Could not find: R
Could not find: Reflection
Could not find: RotAngle
Could not find: RotVector
Could not find: Saturation
Could not find: ShadowSoft
Could not find: Specular
Could not find: SpecularExp
Could not find: SpecularMax
Could not find: SpotLight
Could not find: SpotLightDir
Could not find: Target
Could not find: ToneMapping
Could not find: Up
Could not find: X
Could not find: Y

Do I need to set something up beforehand?

Also, if I try to move or spin the view it crashes...

I am using Fragmentarium v0.9.12

Perhaps some interpretations were changed in this latest version?

Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on November 01, 2012, 07:43:39 AM
Mmmm, sorry, I don't know, it should work... 

Hi Kali;

Thank you so much!

I tried to run it just now but it reported a bunch of errors:




Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 01, 2012, 07:33:18 PM
Hi richardrosenman,

The warnings are not important - these are just parameter settings which are ignored.

If Fragmentarium crashes the reason is nearly always that the GPU times out - if a single frame takes more than 2 seconds to render this will happen (on Windows). There are ways around this:
http://blog.hvidtfeldts.net/index.php/2011/12/fragmentarium-faq/

But the simplest test is to set the Preview slider to 4x - this will speed up rendering by a factor of 16 (do this before loading the fragment). If Fragmentarium still crashes, it is because of another bug, and I would need to hear more details about your OS and GPU.

Cheers, Mikael.



Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 01, 2012, 08:13:42 PM
Hi Mikael;

Thank you for your reply. I have to say it has been a long time since I've been as impressed with fractal imagery as I am with the results from Fragmentarium. I am used to doing brute-force Monte Carlo global illumination sampling for my 3D rendering and love this tactile look when applied to fractals. Congratulations on what appears (on the surface) to be a fantastic piece of software.

I'll look into your suggestions tonight when I'm home from work.

In the meantime (and I did do a search already), do you have recommendations for high-performance video cards that would make Fragmentarium fly? I read you use Nvidia (I do too), but I know they come in many flavors, boasting different amounts of RAM and processing cores. If there's a list or guideline for getting one that will be optimal for Fragmentarium, I'd love to know. I would consider a GPU upgrade well worth it for the output from Fragmentarium.

Any info appreciated! ;)

Cheers,
Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 02, 2012, 05:04:47 AM
Mmmm, sorry, I don't know, it should work... 



Hmmm... it seems I am missing "Brute-Raytracer.frag" Should this file be included with the latest Fragmentarium build or is this a custom, 3rd party file?

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 02, 2012, 04:48:44 PM
Hmmm... it seems I am missing "Brute-Raytracer.frag" Should this file be included with the latest Fragmentarium build or is this a custom, 3rd party file?

-Rich

Hi Richard,

As of now, the brute-force raytracer is only available in the GitHub repository - if I remember correctly I haven't changed any binary files, so you can just download the required fragments from: https://github.com/Syntopia/Fragmentarium/tree/master/Fragmentarium-Source/Examples/Include

It will be part of the next Fragmentarium release.


In the meantime (and I did do a search already), do you have recommendations for high-performance video cards that would make Fragmentarium fly? I read you use Nvidia (I do too), but I know they come in many flavors, boasting different amounts of RAM and processing cores. If there's a list or guideline for getting one that will be optimal for Fragmentarium, I'd love to know. I would consider a GPU upgrade well worth it for the output from Fragmentarium.

The most important thing for GLSL pixel shaders is the raw GFLOPS rating - memory size and RAM bandwidth are not important.
If you divide the GFLOPS rating by the retail price, you get a nice measure of how to get the most value for your money.

A list such as:
http://en.wikipedia.org/wiki/GeForce_600_Series

is useful. Better deals can probably be found by buying a 500-series GPU.

Also notice that newer AMD GPUs seem to do very well for general-purpose computation (probably better than Nvidia), but I am still reluctant to buy them - mostly because of previous experiences and the lack of CUDA support.



Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 09, 2012, 07:22:28 AM
Mmmm, sorry, I don't know, it should work... 



Kali; Do you have control over lights with your brute-raytracer? I figured out how to get your sample to work but lights, specularity, etc, are non-functional. Is it the same for you?

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on November 09, 2012, 10:10:22 AM
Kali; Do you have control over lights with your brute-raytracer? I figured out how to get your sample to work but lights, specularity, etc, are non-functional. Is it the same for you?

-Rich

uncheck "showdepth" option


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 10, 2012, 06:02:56 AM
Is anyone else having this issue? Any examples ("26 - 3D fractals without a DE.frag", "rotjuliatestff.frag") using the Brute-Raytracer method compile to a black (unilluminated) shape. Adjusting light direction, intensity, specularity, etc. results in no change - it's as if they do not work. The only light features that work are CamLight & CamLightMin, which for some reason are set to 0 by default. Increasing the CamLight will show the fractal, but of course with no specularity or directional lighting.

Kali's gorgeous image appears to have directional light (not just relying on the CamLight) and increased specularity. So my question is: does anyone else have control over those features? Is it just me? Because I could swear virtually none of the standard lighting features mentioned above work with the Brute-Raytracer. And of course, I therefore can't produce anything near Kali's stunning images. ;)

Any info would be appreciated.

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: knighty on November 10, 2012, 04:29:21 PM
Wow! Excellent shader, Syntopia. I never expected it would work so well. Thank you!

@richardrosenman & syntopia: I have the same issue. I'm also using v0.9.12 precompiled version. It looks like the host programm doesn't send the correct value of "pixelSize" when calling DepthBufferShader's vertex shader or just does not initialize it. To (partially) solve this issue, just comment the line:
Code:
viewCoord.x*= pixelSize.y/pixelSize.x;
in DepthBufferShader.frag, then save it.
I'm not sure if the normals will be correct this way. They should be if the aspect ratio of the rendering window is 1. (Is pixelSize actually the viewport size in pixels?)

Also, it's easier to begin with a value of 1 for ToneMapping in post tab.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 10, 2012, 05:32:58 PM
You are completely right, Knighty - there was a bug with the pixelSize, which I corrected in the Git repository and then forgot all about. The newest version of my script relied on this bug being fixed. Kali probably used the first version (or compiled the code from the repository).

I uploaded a raw build here, which should work:
http://hvidtfeldts.net/Fragmentarium-Windows-Build-10-11-2012.zip

Btw, pixelSize (as used here) is (1.0/viewport_width_in_pixels,1.0/viewport_height_in_pixels), i.e. the size of a pixel in a 'unit' coordinate system.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on November 10, 2012, 09:59:23 PM
Wow! Excellent shader, Syntopia. I never expected it would work so well. Thank you!

I agree, an excellent tool indeed. Despite its limitations, it works very well. Using this tracer I managed to make the best renders so far of the 3D Kaliset formula:

(https://dl.dropbox.com/s/hopbdh7tgn5gsvp/3dkaliset.jpg)

I used an exponential smoothing calculation and a bailout value of 1 (the formula is all-inside).


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 11, 2012, 08:07:46 AM
Thanks, Knighty, Kali & Syntopia.

I downloaded the latest build and it works great! Lighting and spec are working as they should. Color cycling doesn't, but I have a feeling it never did. I also went out today and picked up an EVGA GeForce 650 Ti, and words can't describe how blazingly fast the rendering is compared to before. Of course, the first thing I did was render a high-resolution tiled image, and you can imagine my surprise when I realized the lighting doesn't work for tile rendering when using the Brute-Raytracer! Argh! ;)

Syntopia, are there any plans to fix this (if it's even fixable)? I think Kali mentioned something about this before, and now I understand what he was talking about.

I hope it's fixable - the Brute-Raytracer produces such realistic renders it would be a shame not to be able to take advantage of it.

In the meantime, I have plenty more to play with my new video card but I figured it doesn't hurt to mention it in case it's fixable. ;)

Amazing, the speeds I'm getting... wow.

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 11, 2012, 12:14:06 PM
Great image, Kali!

Of course, the first thing I did was render a high-resolution tiled image, and you can imagine my surprise when I realized the lighting doesn't work for tile rendering when using the Brute-Raytracer! Argh! ;)

Syntopia, are there any plans to fix this (if it's even fixable)? I think Kali mentioned something about this before, and now I understand what he was talking about.

Yes, the tile-rendering will introduce artifacts near the tile borders, because of the screen space ambient occlusion. There may be some ways to work around this, and I plan to do something about it, but I can't promise when. I'm considering the following approaches:

- Simply use a larger GPU buffer than the visible buffer. This is the simplest approach, but rendering output will be limited by the memory on the GPU. As of now, the standard Fragmentarium "buffer swap" setup uses two HDR buffers and one output buffer, which means each pixel uses 36 bytes, so it should be possible to do quite large renders. There may be other (OpenGL) limits, though. One annoyance is that GPU operations are likely to time out on such large buffers, and you need to disable the GPU watchdog timer.

- When doing tile renders, I could "pad" the tiles, and only use the center portion. The disadvantage of this is that some systems (Game of Life, Reaction-Diffusion, Video Feedback) rely on having access to the full buffer, and won't work.

Quote
I hope it's fixable - the Brute-Raytracer produces such realistic renders it would be a shame not to be able to take advantage of it.

The Brute-Raytracer is by far the worst of the raytracers in Fragmentarium :-) It is only meant for the case when a proper Distance Estimator cannot be found. For instance, things like soft shadows and depth-of-field won't work.

If possible, go for one of the other raytracers - they will provide much better quality and be faster as well.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Alef on November 11, 2012, 05:43:17 PM
p.s.
Maybe this formula should find its way into Fragmentarium?
http://www.fractalforums.com/theory/an-old-formula-revised/msg53970/ (http://www.fractalforums.com/theory/an-old-formula-revised/msg53970/)


I'm also a little bit egoistic. I could code it myself, but for that I would first need a newer PC:
https://sites.google.com/site/3dfractals/baguabox (https://sites.google.com/site/3dfractals/baguabox)
positive version:
https://sites.google.com/site/3dfractals/baguabox/baguaboxvariations (https://sites.google.com/site/3dfractals/baguabox/baguaboxvariations)
good pics (just not secure and secure):
https://sites.google.com/site/3dfractals/baguabox/baguaboxvariations/baguabox-animation (https://sites.google.com/site/3dfractals/baguabox/baguaboxvariations/baguabox-animation)


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 12, 2012, 02:01:56 AM
Thanks for the info, Syntopia.

I have to say that for something described as the worst raytracer in Fragmentarium, it produces much more realistic and detailed renders than those of the DE one - at least for me. Edges are beautifully beveled with specular highlights, whereas DE seems to produce 3D structures that don't make sense when you follow them with your eye. I understand about the lighting limitations of the screen-space ambient occlusion. It seems the specular also suffers: every tile seems to render the lighting in its own relative space, so rendering 3x3 tiles results in 9 of them with the light placed in the same position for each. I know this is just for discussion's sake, but seeing as this is a brute-force renderer, why not try a brute-force true global illumination system? I know... easier said than done.

I will play around more with DE now that I see the Brute-Raytracer's limitations. I have one last question for Kali and Syntopia: in playing with the Brute-Raytracer and Kali's example, I still can't reach the refinement in quality that Kali has in his image, no matter how long I let mine progressively render (it could be overnight). Did anything change in this version vs. the one Kali is using? My renders result in aliased speculars and other pixelated artifacts, no matter how far I push the settings or how long I let the image converge. I could post an example, but is there anything I could be missing? I tried almost everything, but nothing comes close to the detail and smoothness of his renders.

Thanks again!
-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 12, 2012, 04:32:03 AM
  I'm not sure of the syntax - I'm having problems getting the following to work with the non-DE method (I just wrote over a copy of tutorial #26).

  Help with the syntax please (maybe a link??).  This is a simple version of a power 2 bulb that works nicely.

Code:
void powN2(inout vec3 z, float zr0) {
float sr23=sqrt(2/3);
float sr13=sqrt(1/3);
float nx=z.x*sr23-z.z*sr13;
float sz=z.x*sr13 + z.z*sr23;
float sx=nx;
float sr12=sqrt(.5);
nx=sx*sr12-z.y*sr12;                 // tried with declaring float again, and not...
float sy=sx*sr12+z.y*sr12;
sx=nx*nx;
sy=sy*sy;
sz=sz*sz;
float r=sx+sy+sz;
while (r>0 || r<0) {                                              //not sure of != syntax ???
nx=(sx+r)*(9*sx-sy-sz)/(9*sx+sy+sz)-.5;       
float ny=(sy+r)*(9*sy-sx-sz)/(9*sy+sx+sz)-.5;
sz=(sz+r)*(9*sz-sx-sy)/(9*sz+sx+sy)-.5;
}
sx=nx;
sy=ny;
nx=sx*sr12+sy*sr12;
sy=-sx*sr12+sy*sr12;     // can I start out with a - sign??
sx=nx;
nx=sx*sr23+sz*sr13;
sz=-sx*sr13+sz*sr23;
float sx2=sx*sx;
float sy2=sy*sy;         //  does variable^2 work instead??
float sz2=sz*sz;
nx=sx2-sy2-sz2;
float r3=2*|sx|/sqrt(sy2+sz2);     
nz=r3*(sy2-sz2);
ny=r3*2*sy*sz;
z= vec3(nx,ny,nz);


}


bool inside(vec3 pos) {
vec3 z=pos;
float r;
int i=0;
r=length(z);
while(r<Bailout && (i<Iterations)) {
powN2(z,r);
if (pos.x>0) {                          // can I do this???
z+=vec3(pos.x*.5,0,0);
} else {
z+=vec3(pos.x,0,0);
}

r=length(z);
i++;
}
return (r<Bailout);

}


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 12, 2012, 05:15:14 PM
Hi M Benesi,

There are some issues with your formula:

1) First, all floating point literals should be marked with a period, i.e. 'sqrt(1./3.)' instead of 'sqrt(1/3)'. Otherwise you will get integer division, and other errors. (sqrt(1/3) is zero)
2) != syntax is fine.
3) ^ for power exponentiation can not be used.
4) Use abs(sx) instead of |sx|
5) It is okay to start out with a minus sign (it is an unary minus).

After doing this, I ended up with:
Code:
#define providesInside
#define providesColor

#include "Brute-Raytracer.frag"
#group Mandelbulb
//#define IterationsBetweenRedraws 5

// Number of fractal iterations.
uniform int Iterations;  slider[0,9,100]
// Bailout radius
uniform float Bailout; slider[0,5,30]


vec3 color(vec3 p) {
    return abs(vec3(p));
}

void powN2(inout vec3 z, float zr0) {
    float sr23=sqrt(2./3.);
    float sr13=sqrt(1./3.);
    float nx=z.x*sr23-z.z*sr13;
    float sz=z.x*sr13 + z.z*sr23;
    float sx=nx;
    float sr12=sqrt(.5);
    nx=sx*sr12-z.y*sr12;
    float sy=sx*sr12+z.y*sr12;
    sx=nx*nx;
    sy=sy*sy;
    sz=sz*sz;
    float r=sx+sy+sz;
    while (r>0. || r<0.) {
        nx=(sx+r)*(9.*sx-sy-sz)/(9.*sx+sy+sz)-.5;
        float ny=(sy+r)*(9.*sy-sx-sz)/(9.*sy+sx+sz)-.5;
        sz=(sz+r)*(9.*sz-sx-sy)/(9.*sz+sx+sy)-.5;
    }
    sx=nx;
    sy=ny;
    nx=sx*sr12+sy*sr12;
    sy=-sx*sr12+sy*sr12;
    sx=nx;
    nx=sx*sr23+sz*sr13;
    sz=-sx*sr13+sz*sr23;
    float sx2=sx*sx;
    float sy2=sy*sy;
    float sz2=sz*sz;
    nx=sx2-sy2-sz2;
    float r3=2.*abs(sx)/sqrt(sy2+sz2);
    float nz=r3*(sy2-sz2);
    float ny=r3*2*sy*sz;
    z= vec3(nx,ny,nz);
}


bool inside(vec3 pos) {
    vec3 z=pos;
    float r;
    int i=0;
    r=length(z);
    while(r<Bailout && (i<Iterations)) {
        powN2(z,r);
        if (pos.x>0.) {                          // can I do this???
            z+=vec3(pos.x*.5,0.,0.);
        } else {
            z+=vec3(pos.x,0.,0.);
        }
        r=length(z);
        i++;
    }
    return (r<Bailout);
}

But there are still errors: you use the value of 'ny' before declaring and assigning to it.

Be careful of the warnings in the console when you compile the fragments. They should guide you towards the answer.
And if you want the syntax, click the 'GLSL Specification' link in the Help menu.

Feel free to post your code again - but please post all the code - this makes it easier for me to try out.


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 12, 2012, 08:56:14 PM
  Thanks!   I've got a few things to do this afternoon (EST), then I'll get around to editing and reposting the full code sometime tonight- hopefully it'll be working. 


  About the incomplete code - I cheated more than a little bit and used your tutorial #26 (downloaded from your GitHub page).  I simply changed the two portions shown in the above post (the powN2 and inside sections), and left out the majority of the unchanged code.


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 13, 2012, 01:15:24 AM
Ok, this is killing me. Surely, I must be doing something wrong here.  :fiery:

This is Kali's rotjuliatestff fragment after 24 hours of progressive rendering (over 420,000 subframes!). You can see the aliased, oversized, jagged pixels that never refine. Compare this to his image posted earlier in this thread and there is a world of difference. I wonder if this has anything to do with the pixelSize bug that was there before? After all, it appears as if many of the pixels here are oversized/undersampled:

http://www.richardrosenman.com/storage/poor_sampling.jpg

I'd love to try this script on an earlier Fragmentarium compile to compare but I can't find any.

Syntopia, does this look like correct output to you after 24 hours of rendering? Especially when compared to Kali's: https://dl.dropbox.com/s/xdikkc2zx16qr53/rotjuliatestff.jpg

Thanks,
-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 13, 2012, 04:38:21 AM
But there are still errors: you use the value of 'ny' before declaring and assigning to it.

Be careful of the warnings in the console when you compile the fragments. They should guide you towards the answer.
And if you want the syntax, click the 'GLSL Specification' link in the Help menu.  

  You helped out a lot.

  The current problem may lie in my archaic 8600M (yeah.. the infamous one) GPU, or maybe I'm missing something in the rewritten code: the pesky while statement.  It compiles fine - but even set to "manual", I've ended up with a BSOD telling me that my video card driver is stuck in an infinite loop.  I haven't seen a BSOD in years....   The one time I did get a partial image, it was white noise (like "snow" on an old-fashioned analog CRT TV) before being kicked out to a BSOD.


Quote
Feel free to post your code again - but please post all the code - this makes it easier for me to try out.

  Here is the "corrected", full code.   IT WORKS!!!  Having problem with orbit trap coloring though.  Would like to implement some type of palette coloring scheme, to cycle through defined palettes.  That's for the future.    Also need to implement the full version of this fractal, rather than simply the double z^2 version.

  NOTE:  Just fixed this at ~2:16am EST.  Old version had a bug.  In addition, I messed with the coloring in the brute 3d without DE file, so don't know how this will fly with the regular.  Make sure to increase lighting (I suppose I could change the presets... but it's late).
Code:

#info Mandelbulb without Distance Estimator

#define providesInside

#define providesColor

#include "Brute-Raytracer-color.frag"
#group Mandelbulb
//#define IterationsBetweenRedraws 5      // I really need to clean this code up... LATER though...

// Number of fractal iterations.
uniform int Iterations;  slider[0,4,100]

// Number of color iterations.
uniform int ColorIterations;  slider[0,3,100]

// Mandelbulb exponent (8 is standard)
//uniform float Power; slider[-10,8,10]

// Bailout radius
uniform float Bailout; slider[0,12,30]

//uniform vec3 RotVector; slider[(0,0,0),(1,1,1),(1,1,1)]

//uniform float RotAngle; slider[0.00,0,180]

uniform bool Julia; checkbox[false]
uniform vec3 JuliaC; slider[(-2,-2,-2),(0,0,0),(2,2,2)]

vec3 color(vec3 p) {
return abs(vec3(p)); // should this point to z??
}
void powN2(inout vec3 z, float zr0) {
float sr23=sqrt(2./3.);
float sr13=sqrt(1./3.);
float nx=z.x*sr23-z.z*sr13;
float sz=z.x*sr13 + z.z*sr23;
float sx=nx;
float sr12=sqrt(.5);
nx=sx*sr12-z.y*sr12;            
float sy=sx*sr12+z.y*sr12;
sx=nx*nx;
sy=sy*sy;
float ny=sy;
sz=sz*sz;
float r2=sx+sy+sz;
if (r2!=0.) {                                      
nx=(sx+r2)*(9.*sx-sy-sz)/(9.*sx+sy+sz)-.5;
ny=(sy+r2)*(9.*sy-sx-sz)/(9.*sy+sx+sz)-.5;
sz=(sz+r2)*(9.*sz-sx-sy)/(9.*sz+sx+sy)-.5;

}
sx=nx;
sy=ny;
nx=sx*sr12+sy*sr12;
sy=-sx*sr12+sy*sr12;
sx=nx;
nx=sx*sr23+sz*sr13;
sz=-sx*sr13+sz*sr23; //some things can be cleaned up
sx=nx;
float sx2=sx*sx;
float sy2=sy*sy; // will be switching code around later      
float sz2=sz*sz;
nx=sx2-sy2-sz2;
float r3=2.*abs(sx)/sqrt(sy2+sz2);
float nz=r3*(sy2-sz2);
  ny=r3*2.*sy*sz;
z= vec3(nx,ny,nz);
}


bool inside(vec3 pos) {
vec3 z=pos;
float r;
vec4 colortime=vec4(0.,0.,0.,0.);
int i=0;
r=length(z);
orbitTrap=colortime;
while(r<Bailout && (i<Iterations)) {
powN2(z,r);
if (i<ColorIterations) {
orbitTrap=orbitTrap+vec4(z.x*(z.x),z.y*(z.y),z.z*(z.z),length(z));
}
if (pos.x>0.) {                          // can I do this???
z+=vec3(pos.x*.5,0.,0.);
} else {
z+=vec3(pos.x,0.,0.);
}
r=length(z);

i++;
}
return (r<Bailout);
}


#preset Default
FOV = 0.62536
Eye = 0.633014,-0.13205,-1.83663
Target = -3.07766,0.866233,6.14854
Up = -0.87195,0.22693,-0.433562
EquiRectangular = false
Gamma = 2.5
ToneMapping = 3
Exposure = 1.34694
Brightness = 1
Contrast = 0.9901
Saturation = 1
GaussianWeight = 1
AntiAliasScale = 0
CamLight = 1,1,1,0.38462
CamLightMin = 0
Fog = 0
BaseColor = 1,1,1
OrbitStrength = 0
X = 1,1,1,1
Y = 0.345098,0.666667,0,0.02912
Z = 1,0.666667,0,1
R = 0.0784314,1,0.941176,-0.0194
BackgroundColor = 0.607843,0.866667,0.560784
GradientBackground = 0.86955
CycleColors = false
Cycles = 1.1
NormalScale = 0.00024
AOScale = 0.00631
Glow = 0.34167
AOStrength = 0.86047
Specular = 0
SpecularExp = 5.455
SpotLight = 1,0.678431,0.494118,0.78431
SpotLightDir = 1,0.78126
Iterations = 4
ColorIterations = 3
Power = 8
Bailout = 12
AlternateVersion = false
RotVector = 1,1,1
RotAngle = 0
Julia = false
JuliaC = 0,0,0
ShowDepth = true
Samples = 100
Near = 0.7368
Far = 2.45904
#endpreset


  Here is an image.. click to biggify:
(https://lh3.googleusercontent.com/-k_nAAgQ3F3A/UKHz1Y0I0-I/AAAAAAAAB0k/N4YXI7QvO3U/s400/out-2.png) (https://lh3.googleusercontent.com/-k_nAAgQ3F3A/UKHz1Y0I0-I/AAAAAAAAB0k/N4YXI7QvO3U/s0/out-2.png)


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 13, 2012, 09:50:00 PM
Good to see you got it working!

It works on my machine too - after changing to the old 'Brute-raytracer.frag'. Here is an example:

(http://www.hvidtfeldts.net/blog/media/unk.jpg)

If possible, you should implement the 'color' function, instead of changing the 'Brute-raytracer'. This will make the code more reusable.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 13, 2012, 10:03:28 PM
Ok, this is killing me. Surely, I must be doing something wrong here.  :fiery:

This is kali's rotjuliatestff fragment after 24 hours of progressive rendering (over 420,000 subframes!). You can see the aliased and oversized jagged pixels that never refine. Compare this to his image posted earlier in this thread and there is a world of difference. I wonder if this has anything to do with the pixelsize bug that was there before? After all, it appears as if many of the pixels here are oversized / undersampled:

http://www.richardrosenman.com/storage/poor_sampling.jpg

I'd love to try this script on an earlier Fragmentarium compile to compare but I can't find any.

Syntopia, does this look like correct output to you after 24 hours of rendering? Especially when compared to Kali's: https://dl.dropbox.com/s/xdikkc2zx16qr53/rotjuliatestff.jpg

Thanks,
-Rich

Hi Rich,

Did you see my post on the Brute-raytracer: http://blog.hvidtfeldts.net/index.php/2012/09/rendering-3d-fractals-without-a-distance-estimator/

The Brute-raytracer uses screen-space methods to calculate normals and AO. For this particular raytracer, there is no subpixel jittering and no anti-aliasing.
So no matter how long you run your calculation, you will have jaggies. The only way to avoid this is to render as large as your screen permits and downsize the image (this is probably what Kali did).

The best way to deal with this would be to use the High-Resolution Render and manually downsize the image, but as mentioned earlier in this thread, it has some issues with tiling artifacts because of the SSAO (and another error in the lighting, which I can fix).

Finally, it should be mentioned that I changed the 'DepthBufferShader.frag' implementation to use hardware derivatives (dFdx, dFdy) for the screen-space normals in my later version. This may have lowered the resolution of the normals, since I think most GPUs calculate these in 2x2 blocks. I will check this when I get a chance.
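
The idea behind the derivative-based normals is roughly the following (a minimal sketch only, not the actual DepthBufferShader.frag code; 'eyePos' is an assumed name for the reconstructed eye-space position of the fragment):
Code:

// Sketch: screen-space normal from hardware derivatives (not the shipped shader).
// 'eyePos' is assumed to be the eye-space position reconstructed from the depth buffer.
vec3 screenSpaceNormal(vec3 eyePos) {
	// dFdx/dFdy give the change of a value between neighbouring fragments.
	// Most GPUs evaluate them per 2x2 pixel block, which is why the normals
	// can lose resolution compared to explicit texture lookups.
	vec3 dx = dFdx(eyePos);
	vec3 dy = dFdy(eyePos);
	return normalize(cross(dx, dy));
}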



Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 13, 2012, 10:19:29 PM
Quote
Finally, it should be mentioned that I changed the 'DepthBufferShader.frag' implementation to use hardware derivatives (dFdx, dFdy) for the screen-space normals in my later version. This may have lowered the resolution of the normals, since I think most GPUs calculate these in 2x2 blocks. I will check this when I get a chance.

Hi Syntopia;

I actually got Kali's help with this and he too confirmed that the newest release renders incorrectly with the Brute-Raytracer (he tried it as well and compared it to his current one). He sent me most of the older .frag include files and behold - it renders correctly. So you might want to check that - we can both confirm it does not render in brute force at the quality it previously did. Perhaps the modification you mentioned did in fact change something. I understand about downsampling and such, but this is something different. Perhaps the best way for you to see what I mean is to render Kali's image with the newest and then the older raytracer.

Anyway, I'm a happy camper with the old files from Kali - the results are spectacular. I can post a before and after if you want to see the difference. Just let me know.

Regarding this last image - can you explain how to use the color function for brute rendering?

Thanks Syntopia (and Kali for the help!)

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 13, 2012, 10:36:53 PM
Hmm, guess I have to do some debugging...

Wrt colors, I provide them manually, i.e. I placed a
Code:
#define providesColor

at the top of the script, and then reused the color-cycling code:
Code:

vec3 cycle(vec3 c, float s) {
return vec3(0.5)+0.5*vec3(cos(s*Cycles+c.x),cos(s*Cycles+c.y),cos(s*Cycles+c.z));
}

vec3 color(vec3 p) {
        // We need to call 'inside' again.
orbitTrap = vec4(10000.0);
inside(p);
orbitTrap.w = sqrt(orbitTrap.w);

vec3 orbitColor;
if (CycleColors) {
orbitColor = cycle(X.xyz,orbitTrap.x)*X.w*orbitTrap.x +
cycle(Y.xyz,orbitTrap.y)*Y.w*orbitTrap.y +
cycle(Z.xyz,orbitTrap.z)*Z.w*orbitTrap.z +
cycle(R.xyz,orbitTrap.w)*R.w*orbitTrap.w;
} else {
orbitColor = X.xyz*X.w*orbitTrap.x +
Y.xyz*Y.w*orbitTrap.y +
Z.xyz*Z.w*orbitTrap.z +
R.xyz*R.w*orbitTrap.w;
}
return orbitColor;
}

Be sure to set the OrbitStrength to 1, to see the effect of the custom coloring!

In the 'inside' function I keep track of the closest points (the orbit traps):

Code:
/// inside loop
if (i>ColorIterationsMin && i<ColorIterations) {
orbitTrap=min(orbitTrap,vec4(z.x*z.x,z.y*z.y,z.z*z.z,dot(z,z)));
}

I'm not particularly fond of this orbit trap/color cycling scheme, so I'm sure you can come up with something better :-)


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 13, 2012, 11:15:11 PM

If possible, you should implement the 'color' function, instead of changing the 'Brute-raytracer'. This will make the code more reusable.


  Nice render.  For whatever reason, I'm having a few problems with the coloring function- you got it to do what I wanted it to do.  I'll mess around with it and take into consideration your post above this one. 

  I had a few problems with atan as well, trying to get atan2 type outputs.

  I think I might have that figured out - I had to use some math trick, but I'm not satisfied with it, as the GLSL documentation says the atan function should give atan2-type output (I'll go back and retry atan(y,x) and atan(y/x) to see if they are different, as the documentation seems to imply). 

  I'm assuming that GPU rendering with trigonometric functions is pretty fast :), even on my old beast. 


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 13, 2012, 11:41:51 PM
  Nice render.  For whatever reason, I'm having a few problems with the coloring function- you got it to do what I wanted it to do.  I'll mess around with it and take into consideration your post above this one. 

  I had a few problems with atan as well, trying to get atan2 type outputs.

  I think I might have that figured out - I had to use some math trick, but I'm not satisfied with it, as the GLSL documentation says the atan function should give atan2-type output (I'll go back and retry atan(y,x) and atan(y/x) to see if they are different, as the documentation seems to imply). 

  I'm assuming that GPU rendering with trigonometric functions is pretty fast :), even on my old beast. 

In GLSL, the two-argument atan(y,x) is an atan2 type function. For x>0 it is equal to the one-argument atan(y/x).
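
As a quick illustration (these lines are not from the attached script, just a hypothetical pair of helpers):
Code:

// Illustration only.
float angleFull(vec2 z) { return atan(z.y, z.x); }  // atan2-style: full range, handles z.x <= 0
float angleHalf(vec2 z) { return atan(z.y / z.x); } // only (-pi/2, pi/2); wrong for z.x <= 0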

I've attached my whole code, in case you want to see it.
Code:

// Output generated from file: Unnamed
// Created: ti 13. nov 21:32:37 2012

#info Mandelbulb without Distance Estimator

#define providesInside

#define providesColor

#include "Brute-Raytracer.frag"
#group Mandelbulb
//#define IterationsBetweenRedraws 5      // I really need to clean this code up... LATER though...

// Number of fractal iterations.
uniform int Iterations;  slider[0,4,100]

// Number of color iterations.
uniform int ColorIterations;  slider[0,3,100]
uniform int ColorIterationsMin;  slider[0,1,100]

// Mandelbulb exponent (8 is standard)
//uniform float Power; slider[-10,8,10]

// Bailout radius
uniform float Bailout; slider[0,12,30]

//uniform vec3 RotVector; slider[(0,0,0),(1,1,1),(1,1,1)]

//uniform float RotAngle; slider[0.00,0,180]

uniform bool Julia; checkbox[false]
uniform vec3 JuliaC; slider[(-2,-2,-2),(0,0,0),(2,2,2)]

vec3 cycle(vec3 c, float s) {
return vec3(0.5)+0.5*vec3(cos(s*Cycles+c.x),cos(s*Cycles+c.y),cos(s*Cycles+c.z));
}

vec3 color(vec3 p) {
orbitTrap = vec4(10000.0);
inside(p);

orbitTrap.w = sqrt(orbitTrap.w);
vec3 orbitColor;
if (CycleColors) {
orbitColor = cycle(X.xyz,orbitTrap.x)*X.w*orbitTrap.x +
cycle(Y.xyz,orbitTrap.y)*Y.w*orbitTrap.y +
cycle(Z.xyz,orbitTrap.z)*Z.w*orbitTrap.z +
cycle(R.xyz,orbitTrap.w)*R.w*orbitTrap.w;
} else {
orbitColor = X.xyz*X.w*orbitTrap.x +
Y.xyz*Y.w*orbitTrap.y +
Z.xyz*Z.w*orbitTrap.z +
R.xyz*R.w*orbitTrap.w;
}

//vec3 color = mix(BaseColor, 3.0*orbitColor,  OrbitStrength);
return orbitColor;

}
void powN2(inout vec3 z, float zr0) {
float sr23=sqrt(2./3.);
float sr13=sqrt(1./3.);
float nx=z.x*sr23-z.z*sr13;
float sz=z.x*sr13 + z.z*sr23;
float sx=nx;
float sr12=sqrt(.5);
nx=sx*sr12-z.y*sr12;             
float sy=sx*sr12+z.y*sr12;
sx=nx*nx;
sy=sy*sy;
float ny=sy;
sz=sz*sz;
float r2=sx+sy+sz;
if (r2!=0.) {                                       
nx=(sx+r2)*(9.*sx-sy-sz)/(9.*sx+sy+sz)-.5;
ny=(sy+r2)*(9.*sy-sx-sz)/(9.*sy+sx+sz)-.5;
sz=(sz+r2)*(9.*sz-sx-sy)/(9.*sz+sx+sy)-.5;

}
sx=nx;
sy=ny;
nx=sx*sr12+sy*sr12;
sy=-sx*sr12+sy*sr12;
sx=nx;
nx=sx*sr23+sz*sr13;
sz=-sx*sr13+sz*sr23; //some things can be cleaned up
sx=nx;
float sx2=sx*sx;
float sy2=sy*sy; // will be switching code around later       
float sz2=sz*sz;
nx=sx2-sy2-sz2;
float r3=2.*abs(sx)/sqrt(sy2+sz2);
float nz=r3*(sy2-sz2);
  ny=r3*2.*sy*sz;
z= vec3(nx,ny,nz);
}


bool inside(vec3 pos) {
vec3 z=pos;
float r;
int i=0;
r=length(z);
while(r<Bailout && (i<Iterations)) {
powN2(z,r);
if (i>ColorIterationsMin && i<ColorIterations) {
orbitTrap=min(orbitTrap,vec4(z.x*z.x,z.y*z.y,z.z*z.z,dot(z,z)));

}
if (pos.x>0.) {                          // can I do this???
z+=vec3(pos.x*.5,0.,0.);
} else {
z+=vec3(pos.x,0.,0.);
}
r=length(z);

i++;
}
return (r<Bailout);
}




#preset default
FOV = 0.62536
Eye = -2.06349,0.0288363,0.0261386
Target = 6.7978,-0.0399766,-0.0109628
Up = -0.0063931,-0.829538,0.0116395
EquiRectangular = false
Gamma = 2.5
Exposure = 1.34694
Brightness = 1
Contrast = 0.9901
Saturation = 1
SpecularExp = 5.455
SpotLightDir = 0.87654,0.78126
CamLight = 1,1,1,1.13044
CamLightMin = 0
BackgroundColor = 0.666667,0.666667,0.498039
ToneMapping = 3
Near = 0.12372
Far = 2.45904
NormalScale = 0.00024
AOScale = 0.00631
Glow = 0.34167
AOStrength = 0.86047
Samples = 100
Stratify = true
DebugInside = false
SampleNeighbors = true
Specular = 10
SpotLight = 1,0.678431,0.494118,0.78431
Fog = 1.84
ShowDepth = false
DebugNormals = false
BaseColor = 1,1,1
OrbitStrength = 1
X = 1,1,1,0.86408
Y = 0.345098,0.666667,0,1
Z = 1,0.666667,0,1
R = 0.0784314,1,0.941176,0.54902
GradientBackground = 0.76085
CycleColors = false
Cycles = 6.47914
Iterations = 10
ColorIterations = 3
Bailout = 12
Julia = false
JuliaC = 0,0,0
ColorIterationsMin = 1
#endpreset




Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 14, 2012, 12:04:20 AM
Hi Syntopia;

Thank you for that explanation regarding the color mapping to orbit. I'm excited to try it out.

I decided to whip up a comparison for your debugging needs - in case it helps:

(http://www.richardrosenman.com/storage/brute-raytracer_comparison.jpg)

These are both rendered with 1000 subframes (you should consider upping the limit of 2000 for the next release) with identical settings and  they have not been downsampled. However, I think the forum may be doing some of that (downsampling) so here's the original image:

http://www.richardrosenman.com/storage/brute-raytracer_comparison.jpg

Hope this helps!

-Rich

P.S. Darn. Of course your code above doesn't work with my new 'old' fragment files... ;)


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 14, 2012, 02:48:37 AM
In GLSL, the two-argument atan(y,x) is an atan2 type function. For x>0 it is equal to the one-argument atan(y/x).
  Thanks.  Will check that out- must have been using atan(y/x) instead of atan(y,x).
I've attached my whole code, in case you want to see it.

  Cool.  Definitely will try it out.  Glad you saw fit to grab the orbit prior to adding in the x pixel component. 


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 14, 2012, 04:59:25 AM
  Ok.  Your program is awesome.  Seems like it's more than an order of magnitude faster than CPU rendering.  Wow...   Also, if you want this split from this thread, I can delete it, a mod could move it, or whatever.  

  Anyways, here is some slightly improved code for <everyone> to play with, after a few bits of info.  I think the trig version is a bit slower than the algebraic one, but I'm not sure.  If it is, I've noticed that using complex numbers in ChaosPro takes half the time for integer "n", although I don't know how much faster that would be on the GPU.   ALSO - I don't know if I should assign more variables in the following code to break it up into more streams - whoever is familiar with this stuff could say something (or maybe I could simply do a quick re-write and test it, ehh??)

  For the "Spokes" option set it to an integer value (otherwise you get breaks).  4 if you're going for a fake holy grail type fractal, 3 is normal.      Side note: couldn't get intBitsToFloat to work, so had to use a float instead of an integer for Spokes.

  There are also manPower and magPower sliders- these are for the 2 inter-related formulas, each with its own "n" (z^n).

  Note that the Mandy portion of the formula can be replaced with ANY of the various Mandy type formulas (like the 3D Burning Ship, the "flat" Mandy, etc.).  Also, you can do lots of tricks with these formulas, like adding in the old portions from the beginning of a section to the end of the section, etc.  

  Lastly (well, not the ultimate lastly...) you can switch the Mandy portion to the beginning of the formula if you so desire.  In fact, you can separate the Mag portion with its rotations into a whole separate function call, then call various Mandy functions...


Code:

// Copied and modified from Syntopia's code  
// Created: ~ 10:30 pm 11-13-2012

#info Mandelbulb without Distance Estimator

#define providesInside

#define providesColor

#include "Brute-Raytracer.frag"
#group Mandelbulb
//#define IterationsBetweenRedraws 5      // I really need to clean this code up... LATER though...

// Number of fractal iterations.
uniform int Iterations;  slider[0,4,100]

// Number of color iterations.
uniform int ColorIterations;  slider[0,3,100]
uniform int ColorIterationsMin;  slider[0,1,100]

// Mandelbulb exponent (2 is standard)
uniform float manPower; slider[-10,2,10]

//mag exponent  (2 is standard)
uniform float magPower; slider[-10,2,10]

//  Spokes number
uniform float Spokes; slider[-4,3,20]

// Bailout radius
uniform float Bailout; slider[0,12,30]

//uniform vec3 RotVector; slider[(0,0,0),(1,1,1),(1,1,1)]

//uniform float RotAngle; slider[0.00,0,180]

uniform bool Julia; checkbox[false]
uniform vec3 JuliaC; slider[(-2,-2,-2),(0,0,0),(2,2,2)]

vec3 cycle(vec3 c, float s) {
return vec3(0.5)+0.5*vec3(cos(s*Cycles+c.x),cos(s*Cycles+c.y),cos(s*Cycles+c.z));
}

vec3 color(vec3 p) {
orbitTrap = vec4(1.0);
inside(p);

orbitTrap.w = sqrt(orbitTrap.w);
vec3 orbitColor;
if (CycleColors) {
orbitColor = cycle(X.xyz,orbitTrap.x)*X.w*orbitTrap.x +
cycle(Y.xyz,orbitTrap.y)*Y.w*orbitTrap.y +
cycle(Z.xyz,orbitTrap.z)*Z.w*orbitTrap.z +
cycle(R.xyz,orbitTrap.w)*R.w*orbitTrap.w;
} else {
orbitColor = X.xyz*X.w*orbitTrap.x +
Y.xyz*Y.w*orbitTrap.y +
Z.xyz*Z.w*orbitTrap.z +
R.xyz*R.w*orbitTrap.w;
}

//vec3 color = mix(BaseColor, 3.0*orbitColor,  OrbitStrength);
return orbitColor;

}
void powN2(inout vec3 z, in float d) {
float sr23=sqrt(2./3.);
float sr13=sqrt(1./3.);
float nx=z.x*sr23-z.z*sr13;
float sz=z.x*sr13 + z.z*sr23;   // sz rotated
float sx=nx;
float sr12=sqrt(.5);
nx=sx*sr12-z.y*sr12;               //nx
float sy=sx*sr12+z.y*sr12;  //sy rotated
sx=nx;

float rxyz=pow((sx*sx+sy*sy+sz*sz),magPower*.5);

float r1=sqrt(sy*sy+sz*sz);
float victor=atan(r1,abs(sx*d))*magPower; // Is it faster to use
nx=(sx*sx+rxyz)*cos(victor)-.5; // multiple variables

r1=sqrt(sx*sx+sz*sz); // for these to split
victor=atan(r1,abs(sy*d))*magPower; // processes???
float ny=(sy*sy+rxyz)*cos(victor)-.5;

r1=sqrt(sx*sx+sy*sy);
victor=atan(r1,abs(sz*d))*magPower;
sz=(sz*sz+rxyz)*cos(victor)-.5;

sx=nx;
sy=ny;

nx=sx*sr12+sy*sr12;
sy=-sx*sr12+sy*sr12;
sx=nx;
nx=sx*sr23+sz*sr13;
sz=-sx*sr13+sz*sr23; //some things can be cleaned up
sx=nx;

rxyz=pow((sx*sx+sy*sy+sz*sz),manPower/2.);
r1=sqrt(sy*sy+sz*sz);
victor=atan(r1,sx)*manPower;
float phi=atan(sz,sy)*manPower;
sx=rxyz*cos(victor);
r1=rxyz*sin(victor);
sz=r1*cos(phi);
sy=r1*sin(phi);
z= vec3(sx,sy,sz);

}


bool inside(vec3 pos) {

float ryz=sqrt(pos.y*pos.y+pos.z*pos.z);
float theta=(Spokes)*atan(pos.z,pos.y)/3.;
vec3 z=vec3(pos.x,ryz*cos(theta),ryz*sin(theta));

float r;
int i=0;
r=length(z);
float d=magPower*1.5; // sometimes I make d= magPower +1
while(r<Bailout && (i<Iterations)) { //  for a sharper fractal.. we can add a switch
powN2(z,d);
if (i>ColorIterationsMin && i<ColorIterations) {
orbitTrap=min(orbitTrap,vec4(z.x*z.x,z.y*z.y,z.z*z.z,dot(z,z)));

}
if (pos.x>0.) {                          // can I do this???
z+=vec3(pos.x*.5,0.,0.);
} else {
z+=vec3(pos.x,0.,0.);
}
r=length(z);

i++;
}
return (r<Bailout);
}




#preset default
FOV = 0.62536
Eye = 2.02493,-0.000258894,0.00365772
Target = -0.498369,0.00627817,-0.0886997
Up = -0.0024414,-0.999997,0
EquiRectangular = false
Gamma = 2.5
ToneMapping = 3
Exposure = 1.34694
Brightness = 1
Contrast = 0.9901
Saturation = 1
Near = 0.48984
Far = 2.71704
NormalScale = 0.00024
AOScale = 0.00291
Glow = 0.34167
AOStrength = 0.86047
Samples = 254
Stratify = true
DebugInside = false
SampleNeighbors = true
Specular = 1
SpecularExp = 5.455
SpotLight = 0.027451,0.196078,0.658824,0.89706
SpotLightDir = 0.80246,0.78126
CamLight = 1,1,1,1.44928
CamLightMin = 0.54217
Fog = 1.84
ShowDepth = false
DebugNormals = false
BaseColor = 0.388235,0.388235,0.388235
OrbitStrength = 0.78
X = 0.329412,0.321569,0.364706,0.08738
Y = 0.262745,0.333333,0.282353,0.02912
Z = 0.054902,0.0431373,0.439216,0.1068
R = 0.317647,0.670588,0.690196,0.64706
BackgroundColor = 0.0196078,0.0117647,0.192157
GradientBackground = 1.73915
CycleColors = false
Cycles = 23.1868
Iterations = 6
ColorIterations = 4
Bailout = 12
Julia = false
JuliaC = 0,0,0
ColorIterationsMin = 2
#endpreset

  Here are 2 images.  Click to enbiggen:
(https://lh3.googleusercontent.com/-NsSO8nmUFfE/UKMTJvY1BbI/AAAAAAAAB1A/JEjpbRuO5zY/s400/4%2520spokes%2520example.png) (https://lh3.googleusercontent.com/-NsSO8nmUFfE/UKMTJvY1BbI/AAAAAAAAB1A/JEjpbRuO5zY/s0/4%2520spokes%2520example.png)(https://lh6.googleusercontent.com/-Onkgj6G1b8Y/UKMi7sLw5FI/AAAAAAAAB1Y/N6xKmiIZobI/s400/4%2520spokes%2520pointy%2520end%2520higher%2520res.png) (https://lh6.googleusercontent.com/-Onkgj6G1b8Y/UKMi7sLw5FI/AAAAAAAAB1Y/N6xKmiIZobI/s0/4%2520spokes%2520pointy%2520end%2520higher%2520res.png)


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 14, 2012, 09:46:57 PM
Hi Syntopia;

Thank you for that explanation regarding the color mapping to orbit. I'm excited to try it out.

I decided to whip up a comparison for your debugging needs - in case it helps:

Hi Richard,

I've changed from the hardware derivative version back to the original texture-lookup version. This removed a lot of jaggies. So sad - the hardware derivatives were so much more elegant :-)

I've attached the modified DepthBufferShader.frag - please try it and see if it improves the rendering. This also removes some of the artifacts when doing tile rendering - now it is only the SSAO that makes visible tile artifacts.

(http://www.richardrosenman.com/storage/brute-raytracer_comparison.jpg)
These are both rendered with 1000 subframes (you should consider upping the limit of 2000 for the next release) with identical settings and  they have not been downsampled. However, I think the forum may be doing some of that (downsampling) so here's the original image:

Notice that you could also increase the number of samples per subframe from the default value of 1000.
Also notice that if you go to continuous mode and put a zero in the Max (subframe) field in the main window, it will run indefinitely.



Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 14, 2012, 09:59:49 PM
 Ok.  Your program is awesome.  Seems like it's more than an order of magnitude faster than CPU rendering.  Wow...   Also, if you want this split from this thread, I can delete it, a mod could move it, or whatever.  

Thanks, Benesi!

Quote
ALSO-  Don't know if I should assign more variables in the following code to break it up into more streams- whoever is familiar with this stuff could say something (or maybe I could simply do a quick re-write and test it, ehh??)

Not sure what you are asking, but use as few variables as possible, and try to use vec3 wherever possible (the latter because of readability).
There is no need to reorder instructions for the GPU, or to try to make independent code paths yourself. Each core of the GPU will execute all of your program (and they will run in lockstep in groups of 16 'threads' - executing the same command at the same time).
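
To make the vec3 point concrete, the fixed pre-rotation that powN2() spells out with individual floats could be written with a mat3 (a sketch only - the entries below are derived from the sr23/sr13/sr12 constants in your code, and GLSL mat3 constructors are column-major):
Code:

// Sketch: the two pre-rotations from powN2() expressed as matrices.
vec3 preRotate(vec3 z) {
	float sr23 = sqrt(2./3.), sr13 = sqrt(1./3.), sr12 = sqrt(.5);
	mat3 rotXZ = mat3( sr23, 0., sr13,   // column 0
	                   0.,   1., 0.,     // column 1
	                  -sr13, 0., sr23 ); // column 2
	mat3 rotXY = mat3( sr12,  sr12, 0.,
	                  -sr12,  sr12, 0.,
	                   0.,    0.,   1. );
	return rotXY * (rotXZ * z);
}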

Quote
 For the "Spokes" option set it to an integer value (otherwise you get breaks).  4 if you're going for a fake holy grail type fractal, 3 is normal.      Side note: couldn't get intBitsToFloat to work, so had to use a float instead of an integer for Spokes.

To convert from integer to float, you just cast the numbers:
Code:
uniform int Spokes;  slider[0,9,100]

// Use Spokes as float
float f = float(Spokes)*2.0;


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 14, 2012, 11:54:30 PM
Hi Syntopia;

Yes, the new fix seems to work correctly again -  nice!

I did some more rendering tests and I've noticed a couple of other things (should it be important).

(http://www.richardrosenman.com/storage/tile_lighting_comparison.jpg)

I've been trying to get high resolution brute renderings despite the limitations. I've noticed rendering with tiles yields different lighting results than the same without tile rendering. Is this a typical limitation of tile rendering? In the example above, the first image was rendered at 4k, then downsampled to 1k. The second image was rendered at 1k without tile rendering. These do not have specular or occlusion enabled.

I also noticed that there are still tiling artefacts at the edges, even without spec or AO. They're not really visible in this example because it's been downsampled, but I can post the original if required. You said there should be no more artefacts if not using AO? Here's the 4k original if you want to see it closely: Click here for 4k boring render. (http://www.richardrosenman.com/storage/tile_render4k.jpg)

Have a look and tell me what you think. I know you've mentioned the brute-renderer has many limitations when rendering in high-res, but I thought I'd point these out simply for discussion's sake.

In full-screen rendering your new script works perfectly. ;)

Link: http://www.richardrosenman.com/storage/tile_lighting_comparison.jpg

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 15, 2012, 12:17:01 AM
I've been trying to get high resolution brute renderings despite the limitations. I've noticed rendering with tiles yields different lighting results than the same without tile rendering. Is this a typical limitation of tile rendering? In the example above, the first image was rendered at 4k, then downsampled to 1k. The second image was rendered at 1k without tile rendering. These do not have specular or occlusion enabled.

No, this is not typical - it must be an error. Looking at the pictures, I think the normals are somehow scaled wrongly in the 'into screen' direction. The left image looks flattened.

Quote
I also noticed that there are still tiling artefacts at the edges, even without spec or AO. They're not really visible in this example because it's been downsampled, but I can post the original if required. You said there should be no more artefacts if not using AO? Here's the 4k original if you want to see it closely: Click here for 4k boring render. (http://www.richardrosenman.com/storage/tile_render4k.jpg)

Yep, I said it, but I was wrong: the diffuse and specular parts both depend on the normal, and the normal is also calculated in screen space, so near the edges these will be wrong. You need to render only with ambient light, I guess :-(

I can fix the first part, but the second problem is a bit tougher. Still, it would be nice to have large scale renderings.


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 15, 2012, 12:31:44 AM
Quote
I can fix the first part, but the second problem is a bit tougher. Still, it would be nice to have large scale renderings.

Yes, it would be but you're pretty damn close I'd say. The current tiling artefacts are minimal at best and easy Photoshop cleanup. The AO issue - well, no way around that I suppose if it uses screen-space AO unless a different approach to the AO is used. But I intend to do a high resolution final when this is all said and done so I will know how much Photoshop cleanup the AO requires.

I'd say the current big thing is the different lighting in tile rendering as you pointed out. But you sound confident you can fix that.

So really, it's pretty awesome as it is. ;)

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: cKleinhuis on November 15, 2012, 02:30:20 AM
if i should split off the arbitrary/double floating point topic, tell me from which post to which

awesome work dudes, gotta play with it as well, and hack and slay in some formulas, lets get a nice hybrid renderer inside, cant wait to try it out, this will be featured in the last news of the year :D keep it coming

good nite, sleep well oO


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 15, 2012, 05:32:38 AM
Not sure what you are asking, but use as few variables as possible, and try to use vec3 wherever possible (the latter because of readability).
There is no need to reorder instructions for the GPU, or to try to make independent code paths yourself. Each core of the GPU will execute all of your program (and they will run in lockstep in groups of 16 'threads' - executing the same command at the same time).
  Italicized portion is the answer I was looking for.  Probably missed it in your blog.. or read it and forgot. 

To convert from integer to float, you just cast the numbers:
Code:
uniform int Spokes;  slider[0,9,100]

// Use Spokes as float
float f = float(Spokes)*2.0;
  Thanks- will do.  I'm forcing myself to take a break from math and code tonight, so it will probably be tomorrow (or later if something comes up).


Title: Re: Rendering 3D fractals without distance estimators
Post by: visual.bermarte on November 15, 2012, 10:58:59 AM
@Benesi: great job!   O0 - no distance estimation? :spork:


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 15, 2012, 09:43:51 PM
Thanks.  No DE type, as of yet.  Mikael's new brute force non-DE patch (from this thread) allowed this implementation.  I should look into DE for this type, simply because it would be fun to learn and useful for rendering. 

  I'm going to split off the one section of code so that people can see which section can be mixed with other codes- it might be useful as an intermediary function in KIFS- and I've some ideas to make it symmetric over all axes that I need to try out. 


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 16, 2012, 03:13:44 AM
To convert from integer to float, you just cast the numbers:
Code:
uniform int Spokes;  slider[0,9,100]

// Use Spokes as float
float f = float(Spokes)*2.0;

  For whatever reason (perhaps keyboard actuator failure on my part), the above code didn't work.  No problem however- not necessary for the fractal to work.

  Below this is code for a version with several rotations built in.  Note that using rotation 1 or 3 creates "breaks" in the fractal if you set spokes to anything other than the default value of 3.  Rotation 2 is AWESOME and works with various spokes values.  You should try it out... turns the whole fractal into a machine of fractal gears.  For Rotation 1 or 3, if you set the vector portion to 0,1,0 or any singular value like that, you get cool results as well.

 If I can figure out how to do animations with this awesome software, I'll throw together a quick demo- but my GPU is soooooo old....

  Here is the code- remember, Spokes setting other than 3 has discontinuities if you play with rotations 1 or 3.  Rotation 2 is awesome.

UPDATE Nov. 17th 2012:  If you set the magPower to 0, a special mode kicks in that allows a pure Mandy type.  Also added in a "seed" parameter - note that I haven't tested the code below (just adding these ideas off the top of my head - they work in ChaosPro).  Will update with modified TESTED code later.
Code:

// Output generated from file: Unnamed
// Created: ti 13. nov 21:32:37 2012

#info Mandelbulb without Distance Estimator

#define providesInside

#define providesColor
#include "MathUtils.frag"
#include "Brute-Raytracer.frag"
#group Fractal
//#define IterationsBetweenRedraws 5      // I really need to clean this code up... LATER though...

// Number of fractal iterations.
uniform int Iterations;  slider[0,4,100]

// Number of color iterations.
uniform int ColorIterations;  slider[0,3,100]
uniform int ColorIterationsMin;  slider[0,1,100]

// Mandelbulb exponent (2 is standard)
uniform float manPower; slider[-10,2,10]

//mag exponent  (2 is standard)
uniform float magPower; slider[-10,2,10]

//mag seed  (-.5 is standard)
uniform float magSeed; slider[-6,-.5,6]
//mag Softness
uniform float magSoft; slider[-10,1,100]

// Bailout radius
uniform float Bailout; slider[0,12,30]

uniform bool Julia; checkbox[false]
uniform vec3 JuliaC; slider[(-5,-5,-5),(0,0,0),(5,5,5)]

#group Rotation
//  Spokes number
uniform float Spokes; slider[-4,3,20]

uniform vec3 RotVector1; slider[(0,0,0),(1,1,1),(1,1,1)]
uniform float RotAngle1; slider[-180.00,0,180]

mat3 rot1= rotationMatrix3(normalize(RotVector1), RotAngle1);

uniform vec3 RotVector2; slider[(0,0,0),(1,1,1),(1,1,1)]
uniform float RotAngle2; slider[-180.00,0,180]

mat3 rot2= rotationMatrix3(normalize(RotVector2), RotAngle2);

uniform vec3 RotVector3; slider[(0,0,0),(1,1,1),(1,1,1)]
uniform float RotAngle3; slider[-180.00,0,180]

mat3 rot3= rotationMatrix3(normalize(RotVector3), RotAngle3);


vec3 cycle(vec3 c, float s) {
return vec3(0.5)+0.5*vec3(cos(s*Cycles+c.x),cos(s*Cycles+c.y),cos(s*Cycles+c.z));
}

vec3 color(vec3 p) {
orbitTrap = vec4(1.0);
inside(p);

orbitTrap.w = sqrt(orbitTrap.w);
vec3 orbitColor;
if (CycleColors) {
orbitColor = cycle(X.xyz,orbitTrap.x)*X.w*orbitTrap.x +
cycle(Y.xyz,orbitTrap.y)*Y.w*orbitTrap.y +
cycle(Z.xyz,orbitTrap.z)*Z.w*orbitTrap.z +
cycle(R.xyz,orbitTrap.w)*R.w*orbitTrap.w;
} else {
orbitColor = X.xyz*X.w*orbitTrap.x +
Y.xyz*Y.w*orbitTrap.y +
Z.xyz*Z.w*orbitTrap.z +
R.xyz*R.w*orbitTrap.w;
}

//vec3 color = mix(BaseColor, 3.0*orbitColor,  OrbitStrength);
return orbitColor;

}
void magpart(inout vec3 z, in float d) {
// from here until the next section break is the section that can
// be blended with other rotation based fractals- and maybe KIFS!

float sr23=sqrt(2./3.);
float sr13=sqrt(1./3.);
float nx=z.x*sr23-z.z*sr13;
float sz=z.x*sr13 + z.z*sr23;   // sz rotated
float sx=nx;
float sr12=sqrt(.5);
nx=sx*sr12-z.y*sr12;               //nx
float sy=sx*sr12+z.y*sr12;  //sy rotated
sx=nx;
float ny=sy;
float r1=0.;
float rxyz=0.;
float victor=0.;
if ((magPower==2.)) {
sx=nx*nx;
sy=sy*sy;

sz=sz*sz;
r1=sx+sy+sz;
if (r1!=0.) {                                      
nx=(sx+r1)*(9.*sx-sy-sz)/(9.*sx+sy+sz)+magSeed;
ny=(sy+r1)*(9.*sy-sx-sz)/(9.*sy+sx+sz)+magSeed;
sz=(sz+r1)*(9.*sz-sx-sy)/(9.*sz+sx+sy)+magSeed;
}
} else if ((magPower==0.)) {
nx=(abs(sx)+magSeed)*2.;
ny=(abs(sy)+magSeed)*2.;
sz=(abs(sz)+magSeed)*2.;
} else {
rxyz=pow((sx*sx+sy*sy+sz*sz),(magPower)*.5);

r1=sqrt(sy*sy+sz*sz);
victor=atan(r1,abs(sx*d))*magPower;
nx=(sx*sx+rxyz)*cos(victor)+magSeed; //  The -.5 value is a seed value

r1=sqrt(sx*sx+sz*sz); //  you can use different ones
victor=atan(r1,abs(sy*d))*magPower; //  but ~ -.5 is the best, and keeping it the same for
ny=(sy*sy+rxyz)*cos(victor)+magSeed; //  all 3 variables works the best- but you can vary it between them

r1=sqrt(sx*sx+sy*sy);
victor=atan(r1,abs(sz*d))*magPower;
sz=(sz*sz+rxyz)*cos(victor)+magSeed;
}
vec3 z2=vec3(nx,ny,sz);
z2*=rot2;

sx=z2.x;
sy=z2.y;
sz=z2.z;
nx=sx*sr12+sy*sr12;
sy=-sx*sr12+sy*sr12;
sx=nx;
nx=sx*sr23+sz*sr13;
sz=-sx*sr13+sz*sr23; //some things can be cleaned up
sx=nx;

//  Here is the end of the section
//  the above code can be used with other fractal types

if ((manPower==2.)) {
float sx2=sx*sx;
float sy2=sy*sy; // will be switching code around later      
float sz2=sz*sz;
nx=sx2-sy2-sz2;   //
r1=2.*(sx)/sqrt(sy2+sz2);  //  you can switch ny and nz... I'll add a boolean later
float nz=r1*(sy2-sz2);
  ny=r1*2.*sy*sz;   //
z= vec3(nx,ny,nz);  
} else {
rxyz=pow((sx*sx+sy*sy+sz*sz),abs(manPower)/2.);
r1=sqrt(sy*sy+sz*sz);
victor=atan(r1,sx)*manPower;
float phi=atan(sz,sy)*manPower;
sx=rxyz*cos(victor);
r1=rxyz*sin(victor);
sz=r1*cos(phi);
sy=r1*sin(phi);
z= vec3(sx,sy,sz);
}
}


bool inside(vec3 pos) {

//vec3 z=pos;
float ryz=sqrt(pos.y*pos.y+pos.z*pos.z);
float theta=(Spokes)*atan(pos.z,pos.y)/3.;
vec3 z=vec3(pos.x,ryz*cos(theta),ryz*sin(theta));

float r;
int i=0;
r=length(z);
float d=magPower+magSoft;
while(r<Bailout && (i<Iterations)) {

magpart(z,d);
z*=rot1;

if (i>ColorIterationsMin && i<ColorIterations) {
orbitTrap=min(orbitTrap,vec4(z.x*z.x,z.y*z.y,z.z*z.z,dot(z,z)));

}
if (Julia) {
z+=JuliaC;
} else {
if (pos.x>0.) {                          // can I do this???
z+=vec3(pos.x*.5,0.,0.);
} else {
z+=vec3(pos.x,0.,0.);
}
}
r=length(z);
z*=rot3;
i++;
}
return (r<Bailout);
}




#preset default
FOV = 0.62536
Eye = 2.02493,-0.000258894,0.00365772
Target = -0.498369,0.00627817,-0.0886997
Up = -0.0024414,-0.999997,0
EquiRectangular = false
Gamma = 2.5
ToneMapping = 3
Exposure = 1.34694
Brightness = 1
Contrast = 0.9901
Saturation = 1
Near = 0.48984
Far = 2.71704
NormalScale = 0.00024
AOScale = 0.00291
Glow = 0.34167
AOStrength = 0.86047
Samples = 254
Stratify = true
DebugInside = false
SampleNeighbors = true
Specular = 1
SpecularExp = 5.455
SpotLight = 0.027451,0.196078,0.658824,0.89706
SpotLightDir = 0.80246,0.78126
CamLight = 1,1,1,1.44928
CamLightMin = 0.54217
Fog = 1.84
ShowDepth = false
DebugNormals = false
BaseColor = 0.388235,0.388235,0.388235
OrbitStrength = 0.78
X = 0.329412,0.321569,0.364706,0.08738
Y = 0.262745,0.333333,0.282353,0.02912
Z = 0.054902,0.0431373,0.439216,0.1068
R = 0.317647,0.670588,0.690196,0.64706
BackgroundColor = 0.0196078,0.0117647,0.192157
GradientBackground = 1.73915
CycleColors = false
Cycles = 23.1868
Iterations = 6
ColorIterations = 4
Bailout = 12
Julia = false
JuliaC = 0,0,0
ColorIterationsMin = 2
#endpreset


  Should clean up that code.   Anyways - I couldn't get animations to work, for whatever reason, so I had to use ChaosPro for the following (also, I am going to post a "pseudopalette" type idea I have for Fragmentarium sometime soon - basically use a jpeg of a palette and Texture2D calls... maybe I can do the palette type stuff that I love - I already messed with textures earlier - nothing too great as of yet).

  Here are the gears turning in my brain, for those of you with a sense of humor, you might notice the slippage from my youtube splice (without deleting the extra frames).  It's Rotation2, vector (1,1,1) in the above formula:
http://www.youtube.com/watch?edit=vd&v=Ih4k5KZAVxg (http://www.youtube.com/watch?edit=vd&v=Ih4k5KZAVxg)


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 18, 2012, 07:13:53 PM
Your animation looks awesome, Benesi - nice job!

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 18, 2012, 11:19:13 PM
  Thanks Rich,  there are a few more over in the movies section HERE. (http://www.fractalforums.com/movies-showcase-%28rate-my-movie%29/intricate-rotations)

   When Fragmentarium is fixed, it'll be much easier to make these.  Now it's about 30 seconds a frame using CPU in ChaosPro. 


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 18, 2012, 11:26:05 PM
I've uploaded a new build of Fragmentarium, which should make the brute-raytracer more usable.

I have added a 'padding' option to the high-resolution render dialog: this allows you to set a border around each tile that will be discarded. This makes it possible to avoid the screen-space artifacts. I have also fixed the scale dependency in the lighting (hi-res renders are no longer less contrasty). Notice that NormalScale and AOScale must now be set to something close to 1.0 - not a small number.

I have also made a quick fix for the animation bar. Now it should be possible to create an animation where each animation frame will be rendered using the subframes specified in the UI (that is, animation for progressive systems is now possible). You still need to create the animation based on a 'uniform float time' parameter.
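
In case it isn't obvious how that hooks up, a fragment just declares the uniform and uses it somewhere (a sketch only - 'RotSpeed' is a made-up example parameter; the 'time' uniform is the part the animation bar drives):
Code:

// Sketch: animating a rotation angle from the animation bar.
uniform float time;                       // the 'uniform float time' mentioned above
uniform float RotSpeed; slider[0,10,100]  // made-up example parameter

// e.g. instead of a fixed RotAngle1:
//   mat3 rot1 = rotationMatrix3(normalize(RotVector1), time*RotSpeed);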

The (experimental) build is here: http://www.hvidtfeldts.net/Fragmentarium%20-%20Windows%20-%20Build%2018-11-2012.zip




Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 19, 2012, 02:32:41 AM
Awesome!

Ok, I tried it out and came across a new issue. The first thing I noticed was that, of course, the render looked completely buggy, so I adjusted the NormalScale and AOScale to 1.0 as you suggested. Then I did a tile render.

I immediately noticed the tile rendering looked like 2 images overlaid upon one another with a diagonal offset of a few pixels. I cancelled the tile render and zoomed in using the tile zoom option in Fragmentarium, but in that case it looked ok. Not as good as the previous version's renders, but ok. I then did another bunch of tests to figure out what was wrong. It was only happening during high-resolution tile rendering. At one point, I did a high-res tile render and, during the render, adjusted the NormalScale back down to a low value (say 0.01) and voila! The two overlaid images sharpened up and moved together to form a nicely-rendered one like the previous Fragmentarium produced.

It's difficult to explain and you should try the above procedure to see what I mean. But basically, a NormalScale value of 1.0, as you suggested, results in inferior high-resolution tile renders that look like they've been overlaid upon one another and offset.

I have an image that shows this, but you will only really see it if you follow my procedure above and adjust the NormalScale value while it's rendering to see the real difference. And it IS a big difference. The rendering now looks almost undersampled, whereas the previous version looked great. Likewise, if you adjust the NormalScale back to a low value, you can see it sharpening up and going towards what it should look like.

(http://www.richardrosenman.com/storage/normal_scalebug.jpg)

Original Image: http://www.richardrosenman.com/storage/normal_scalebug.jpg

Please let me know if you can see what I mean?

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 19, 2012, 09:37:02 PM
Hi, I've updated the raytracer. I think my results are more reasonable now - the NormalScale should be okay at 1.0, but the AOScale may need to be adjusted within roughly 0.2 - 10.0 to choose the area you want to sample for ambient occlusion. There is also a new option, 'CentralDifferences', which determines whether normals use forward or central differences for derivatives. I've put the 'rotjuliaff.frag' in Kali's folder - try using it when comparing quality.

The animation should also work now for progressive rendering. Here is a very simple demo (lousy image quality):
http://www.youtube.com/watch?v=dw7UOjS2dlQ&feature=youtu.be

The latest (experimental) release is here:
http://www.hvidtfeldts.net/blog/media/Fragmentarium%20-%20Windows%20-%20Build%2019-11-2012.zip



Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on November 20, 2012, 01:58:30 AM
@Christian: We need a smiley for saying "thank you". A big one.

@Syntopia: THANK YOU!


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 20, 2012, 03:29:08 AM
Awesome, and thanks!


Title: Re: Rendering 3D fractals without distance estimators
Post by: cKleinhuis on November 20, 2012, 05:55:45 AM
@Christian: We need a smiley for saying "thank you". A big one.

@Syntopia: THANK YOU!


so, any particular one in mind ?! a fractal one ?!

will see what i can come up with, please make suggestions of your own as well ... perhaps in the community part of the board, to not hijack this thread...


dont talk do ....

here we go:

 :thanks2: :thanks1: :peacock: :thanks1: :thanks2:


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 20, 2012, 06:46:28 AM
Very nice, Syntopia!

I tried your latest build and the output is just perfect. Thanks! ;)

I also did some comparisons to your latest stable build (the one we were using before all the recent mods, with the great output - 0.9.12) and it matches it as well. The only difference was that I had to crank the AOScale to the maximum value of 5.0 to match the output of 0.9.12. Otherwise, the default AOScale seems to be too low.

Thank you for doing these improvements. I also did a huge tile render with yesterday's release, testing out the padding, and it worked beautifully. I'm assuming that the amount of padding is directly related to the size of the AOScale, right?

Oh, and could you briefly explain (in layman's terms) what the CentralDifferences does?

The results are awesome. :)

On another topic altogether, tonight I also tried to add the coloring you described for Benesi to Kali's example. I followed your instructions but for some reason, I don't get any color variations. This is what I've done:

Code:
// Output generated from file: C:/Fractales/Fragmentarium Windows Binary v0.9.12b/Examples/rotajulia3d.frag
// Created: lun 1. oct 01:36:29 2012
#info Mandelbulb without Distance Estimator

#define providesColor
#define providesInside
#include "Brute-Raytracer.frag"

#include "MathUtils.frag"
#include "Complex.frag"

#group Mandelbulb

// Number of fractal iterations.
uniform int Iterations;  slider[0,9,100]

// Number of color iterations.
uniform int ColorIterations;  slider[0,3,100]
uniform int ColorIterationsMin;  slider[0,1,100]

// Mandelbulb exponent (8 is standard)
uniform float Power; slider[0,8,16]

// Bailout radius
uniform float Bailout; slider[0,5,3000]
uniform float Scale; slider[0,1,2]


// Alternate is slightly different, but looks more like a Mandelbrot for Power=2
uniform bool AlternateVersion; checkbox[false]

uniform vec3 RotVector; slider[(0,0,0),(0,1,0),(1,1,1)]

uniform float RotAngle; slider[0.00,0,180]
uniform bool Julia; checkbox[false]
uniform vec3 JuliaC; slider[(-2,-2,-2),(0,0,0),(2,2,2)]

// Compute the distance from `pos` to the Mandelbox.

mat3 rot;

vec3 cycle(vec3 c, float s) {
return vec3(0.5)+0.5*vec3(cos(s*Cycles+c.x),cos(s*Cycles+c.y),cos(s*Cycles+c.z));
}

vec3 color(vec3 p) {
orbitTrap = vec4(10000.0);
inside(p);

orbitTrap.w = sqrt(orbitTrap.w);
vec3 orbitColor;
if (CycleColors) {
orbitColor = cycle(X.xyz,orbitTrap.x)*X.w*orbitTrap.x +
cycle(Y.xyz,orbitTrap.y)*Y.w*orbitTrap.y +
cycle(Z.xyz,orbitTrap.z)*Z.w*orbitTrap.z +
cycle(R.xyz,orbitTrap.w)*R.w*orbitTrap.w;
} else {
orbitColor = X.xyz*X.w*orbitTrap.x +
Y.xyz*Y.w*orbitTrap.y +
Z.xyz*Z.w*orbitTrap.z +
R.xyz*R.w*orbitTrap.w;
}

//vec3 color = mix(BaseColor, 3.0*orbitColor,  OrbitStrength);
return orbitColor;

}

bool inside(vec3 pos) {
vec3 f;
vec3 z=pos, p,m=vec3(1,1,1);
vec2 z1,z2;
float r;
float dr=1.0;
int i=0;
r=length(z);
//z*=rot;
rot=rotationMatrix3(normalize(RotVector),RotAngle);
vec3 J=JuliaC;
while (i<Iterations) {
if (i>ColorIterationsMin && i<ColorIterations) {
orbitTrap=min(orbitTrap,vec4(z.x*z.x,z.y*z.y,z.z*z.z,dot(z,z)));

}
z=vec3(cMul(z.xy,z.xy),z.z);
z*=rot;
z*=Scale;
z+=J;
r=length(z);
i++;
             if ( r>Bailout) {
return false;
}
}
return true;
}


I have also noticed that when you increase the OrbitStrength even a smidge past 0.0, everything blows out and you need to set the light intensity and specularity to extremely small values such as 0.00010. Is this normal?

Thanks,
Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: Kali on November 20, 2012, 08:29:09 AM
@Chris: :thanks1:

dont talk do ....

:hit: :hurt:  but I do fractals, formulas, and scripts  :D

Tested the updated toy with latest rotjulia version (this one has folding and sphere inversion):

(https://dl.dropbox.com/s/0kan9bn4bewdce9/foldedrotjulia.jpg)

It really seems to work great on large renders  :beer: :beer: :beer:
The only remaining issue is the fog not working, would be nice to have it for a better depth effect.

Attached .frag with this and some more presets.



Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 20, 2012, 05:29:51 PM
Wow, Kali! Amazing!

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 20, 2012, 07:48:23 PM
On another topic altogether, tonight I also tried to add the coloring you described for Benesi to Kali's example. I followed your instructions but for some reason, I don't get any color variations. This is what I've done:
....

I have also noticed that when you increase the OrbitStrength even a smidge past 0.0, everything blows out and you need to set the light intensity and specularity to extremely small values such as 0.00010. Is this normal?

  It is (for this fractal type) if:     ColorIterationsMin>=ColorIterations-1  (in that case the i>ColorIterationsMin && i<ColorIterations test never passes, so orbitTrap keeps its huge initial value and the orbit coloring blows out).


  If you want, this helps a bit in color adjustment (if the orbits aren't quite strong enough):

  Declare:
uniform float orbitMult; slider[0,1,2]    //you can make the range vary more if you like... of course  :)

  Add (after the orbitTrap.w=... in your color function):
orbitTrap*= orbitMult;



Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on November 20, 2012, 09:13:41 PM
I'm assuming that the amount of padding is directly related to the size of the AOScale, right?
Exactly!

Quote
Oh, and could you briefly explain (in layman's terms) what the CentralDifferences does?
It is just the way you calculate the gradient: either central differences, or forward differences:
http://en.wikipedia.org/wiki/Finite_difference
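
For a depth value sampled per pixel, the two variants differ only in which neighbours you read (a sketch with assumed names - 'frontbuffer', 'pixelSize' and the depth-in-alpha convention are assumptions, not necessarily what the DepthBufferShader actually uses):
Code:

// Sketch: depth-buffer gradient with forward vs. central differences.
uniform sampler2D frontbuffer;  // assumed: accumulation buffer with depth stored in .w
uniform vec2 pixelSize;         // assumed: 1.0 / resolution

float depthAt(vec2 uv) { return texture2D(frontbuffer, uv).w; }

vec2 depthGradient(vec2 uv, bool central) {
	if (central) {
		// central differences: symmetric and a bit smoother, but needs four extra samples
		float dx = depthAt(uv + vec2(pixelSize.x, 0.)) - depthAt(uv - vec2(pixelSize.x, 0.));
		float dy = depthAt(uv + vec2(0., pixelSize.y)) - depthAt(uv - vec2(0., pixelSize.y));
		return vec2(dx, dy) / (2.0 * pixelSize);
	} else {
		// forward differences: cheaper (two extra samples), slightly one-sided
		float d = depthAt(uv);
		float dx = depthAt(uv + vec2(pixelSize.x, 0.)) - d;
		float dy = depthAt(uv + vec2(0., pixelSize.y)) - d;
		return vec2(dx, dy) / pixelSize;
	}
}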

Quote
I have also noticed that when you increase the OrbitStrength even a smidge past 0.0, everything blows out and you need to set the light intensity and specularity to extremely small values such as 0.00010. Is this normal?

Thanks,
Rich

I just tried your example, and it works perfectly here?
I tried OrbitStrength=1, ColorIterationsMin=0, ColorIterations=5.

Are you sure you have ColorIterationsMin<ColorIterations?


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 20, 2012, 10:22:55 PM
:grin:  Maybe 1 less for the Min. 


Title: Re: Rendering 3D fractals without distance estimators
Post by: cKleinhuis on November 20, 2012, 11:52:07 PM
@Chris: :thanks1:

:hit: :hurt:  but I do fractals, formulas, and scripts  :D
 


<joke>
:hit:  :hit:  :hit:    :evil1: :fiery: is not enough, do transparent gif smiles as well !!! O0
</joke>

just joking, stick to the fractals, formulas, and scripts, i wanna see global illuminated movies next years compo!!!!!!!!!!!!!

but i love this smiley, so let me post it another time:
:hit:

although i hate violence ... but it is just a clap on the arse ... 4 times :D


Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on November 21, 2012, 12:19:05 AM
Syntopia, Benesi & Kali;

The coloring is SUPERB!

I am rendering away large versions which I'll eventually post. Again, the quality of Fragmentarium is just incredible.

The only thing I can contribute in this post is my wishlist for 16/32bit HDR output and a future alternative to the screen-space AO (http://www.ozone3d.net/tutorials/ambient_occlusion.php?) so that 20-30% of padding doesn't go to waste.

Cheers,
-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: cbuchner1 on November 21, 2012, 12:55:28 AM
I did it, converted the CUDA code for my "voxel plancton" fractals to the GLSL based non-DE raytracer. Left side is the voxel renderer, and right hand side is Fragmentarium. The CUDA version has a limited spatial resolution (here 256x256x256 voxels), whereas Fragmentarium doesn't have that restriction any more.

I will post code soon, with some presets.

I haven't quite understood how to add coloring with orbit traps yet - I need to inspect the code from the RotJulia.frag program that was posted earlier.

Christian


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 21, 2012, 04:40:21 AM
@cbuchner-  It might work if you remove the "define providesColor" statement (in your code), and cut and paste the orbitTrap statement into your iteration.  If you decide to play with the one below, you'll need the various parts (cycle, color, and the orbitTrap statement, in addition to the "define providesColor" statement, the whole Palette group, and the color iterations and "cycle" button....... it's a mess).  


update: removed code.  Updating it with a better palette management and animation management system. 


Title: Re: Rendering 3D fractals without distance estimators
Post by: cbuchner1 on November 21, 2012, 09:46:13 PM
Here's my weird plancton formula with three colorful presets. Let it render in continuous mode with at least 20 subframes.

This formula produces lots of whipped cream if you're not careful. High iteration counts generally lead to a lot of pixel noise that does not seem to disappear even when letting it render many subframes. A nice combination is low power with higher iteration counts (see Parrot preset).

Generally vary theta and phi in the RotaBulb parameters. For theta = 0 and theta = 180 degrees, this formula converges towards a mandelbulb - anything else is uncharted territory.

Here's my code, save it as RotaBulb.frag

Code:
#info Mandelbulb without Distance Estimator

#define providesInside

#include "Brute-Raytracer.frag"
#include "MathUtils.frag"
#include "Complex.frag"

#group RotaBulb

// Number of fractal iterations.
uniform int Iterations;  slider[0,10,100]

// Number of color iterations.
uniform int ColorIterations;  slider[0,5,100]

// Mandelbulb exponent (8 is standard)
uniform float Power; slider[-10,3,10]

// Bailout radius
uniform float Bailout; slider[0,2,30]

uniform float theta; slider[0.0,0,180]
uniform float phi; slider[0.0,0,360]

/*
vec3 color(vec3 p) {
return abs(vec3(p));
}
*/

#define D2R 0.01745329251994329576923690768489

// return what I call the "pole vector"
vec3 PVec()
{
    return vec3(cos(D2R*phi)*sin(D2R*theta), sin(D2R*phi)*sin(D2R*theta), cos(D2R*theta));
}

vec3 QVec()
{
    return vec3(1.0, 0.0, 0.0);
}

vec3 RVec()
{
    return vec3(0.0, 0.0, 1.0);
}

// create quaternion from rotation axis v and the given angle
vec4 describe_rotation(in vec3 v, in float angle)
{
    float sina_2 = sin(angle/2.0);
    float cosa_2 = cos(angle/2.0);
    return vec4(cosa_2, sina_2*v.x, sina_2*v.y, sina_2*v.z);
}

// perform a rotation of vector v using the given quaternion q
vec3 rotate(in vec3 v, in vec4 q)
{
    float t2 =   q.x*q.y;
    float t3 =   q.x*q.z;
    float t4 =   q.x*q.w;
    float t5 =  -q.y*q.y;
    float t6 =   q.y*q.z;
    float t7 =   q.y*q.w;
    float t8 =  -q.z*q.z;
    float t9 =   q.z*q.w;
    float t10 = -q.w*q.w;
    return vec3(2.0*( (t8 + t10)*v.x + (t6 -  t4)*v.y + (t3 + t7)*v.z ) + v.x,
                           2.0*( (t4 +  t6)*v.x + (t5 + t10)*v.y + (t9 - t2)*v.z ) + v.y,
                           2.0*( (t7 -  t3)*v.x + (t2 +  t9)*v.y + (t5 + t8)*v.z ) + v.z);
}

vec3 powN(in vec3 z, float magnitude, vec3 P) {

    orbitTrap = vec4(10000.0); // vec4 constructor needed (a bare scalar may not compile); note this resets the trap on every call from inside()
    vec3 z_power = z;

    float costheta = dot(z, P) / magnitude;
    float costheta2 = costheta * costheta;
    float sintheta2 = 1.0 - costheta2;
    float sintheta = sqrt(sintheta2);

    vec3 N = cross(P, z) / (magnitude * sintheta);

    // Step 1
    // generate a vector that is situated in the plane spanned by
    // origin, the pole and z and has an resulting angle of power*theta
    // with respect to the pole.
    float theta = acos(costheta);
    vec4 rot = describe_rotation( N, (Power-1.0)*theta );
    z_power = rotate(z_power, rot) * pow(magnitude, Power-1.0);

    // Step 2: Perform rotation around pole axis by an angle of (power-1)*phi
    vec3 z_normal = z_power -  RVec() * dot(z_power, RVec());
    float cosphi = dot(z_normal, QVec()) / length(z_normal);
    float phi = acos(cosphi);
    if ( dot(cross(QVec(), z_normal), RVec()) < 0.0 )
        phi = 6.2831853071 - phi;  // 2*PI - phi

    vec4 rot2 = describe_rotation( RVec(), Power*phi - phi );
    z_power = rotate(z_power, rot2);

    return z_power;
}

bool inside(vec3 z) {

    vec3 P = PVec();
    vec3 c=z;
    float r=length(z);

    int i=0;
    while(r<Bailout && (i<Iterations)) {
        z = powN(z,r,P) + c;
        i++;
        r=length(z);
        if (i<ColorIterations) orbitTrap = min(orbitTrap, abs(vec4(z.x,z.y,z.z,r*r)));
    }

    return (r<Bailout);
}


#preset Default
FOV = 0.77
Eye = 0,0,2
Target = 0,0,0
Up = 0,1,0
EquiRectangular = false
Gamma = 1
ToneMapping = 1
Exposure = 1
Brightness = 1
Contrast = 1
Saturation = 1
NormalScale = 1
AOScale = 1
Glow = 0
AOStrength = 1
Samples = 50
Stratify = true
DebugInside = false
CentralDifferences = true
SampleNeighbors = true
Near = 0
Far = 5
ShowDepth = false
DebugNormals = false
Specular = 0.5208
SpecularExp = 20
SpotLight = 1,1,1,0.23529
SpotLightDir = -0.18518,-0.18518
CamLight = 1,1,1,0
CamLightMin = 0
Fog = 0
BaseColor = 1,1,1
OrbitStrength = 1
X = 0.333333,0,0,1
Y = 0.333333,0.333333,0,1
Z = 0.666667,0.333333,0,1
R = 1,1,1,0
BackgroundColor = 1,1,1
GradientBackground = 0.97825
CycleColors = false
Cycles = 14.0735
Iterations = 4
ColorIterations = 100
Power = 3
Bailout = 2
theta = 80
phi = 90
#endpreset

#preset Parrot
FOV = 0.5
Eye = 0.153051,0.144426,0.599367
Target = -0.758841,-1.3586,-2.21299
Up = 0.274098,0.953879,0.12241
EquiRectangular = false
Gamma = 1
ToneMapping = 1
Exposure = 1
Brightness = 1
Contrast = 1
Saturation = 1
NormalScale = 1
AOScale = 1
Glow = 0
AOStrength = 1
Samples = 50
Stratify = true
DebugInside = false
CentralDifferences = true
SampleNeighbors = true
Near = 0
Far = 5
ShowDepth = false
DebugNormals = false
Specular = 1
SpecularExp = 18
SpotLight = 1,1,1,0.25
SpotLightDir = 0.35802,0.50618
CamLight = 1,1,1,0
CamLightMin = 0
Fog = 0
BaseColor = 1,1,1
OrbitStrength = 0.8
X = 0.333333,0.666667,0.498039,1
Y = 1,0.666667,0,1
Z = 1,0,0,1
R = 1,1,1,0
BackgroundColor = 1,1,1
GradientBackground = 1.08695
CycleColors = false
Cycles = 1
Iterations = 60
ColorIterations = 99
Power = 1.1
Bailout = 2
theta = 40
phi = 250
#endpreset

#preset Fossil
FOV = 0.5
Eye = 0.907837,0.501134,1.85441
Target = -0.35304,-1.11791,-0.751134
Up = 0.76982,0.637907,0.0212406
EquiRectangular = false
Gamma = 1
ToneMapping = 1
Exposure = 1
Brightness = 1
Contrast = 1
Saturation = 1
NormalScale = 1
AOScale = 1
Glow = 0
AOStrength = 1
Samples = 50
Stratify = true
DebugInside = false
CentralDifferences = true
SampleNeighbors = true
Near = 0
Far = 5
ShowDepth = false
DebugNormals = false
Specular = 1
SpecularExp = 18
SpotLight = 1,1,1,0.25
SpotLightDir = 0.35802,0.50618
CamLight = 1,1,1,0
CamLightMin = 0
Fog = 0
BaseColor = 1,1,1
OrbitStrength = 0.8
X = 0.666667,0,0,1
Y = 0,0.666667,1,1
Z = 1,1,0.498039,1
R = 1,1,1,0
BackgroundColor = 1,1,1
GradientBackground = 1.08695
CycleColors = false
Cycles = 1
Iterations = 11
ColorIterations = 99
Power = 1.35
Bailout = 2
theta = 51.4278
phi = 129.229
#endpreset


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 22, 2012, 06:46:34 AM
  Here is some slightly improved Palette mode code.  It has 3 presets (Lava is a bit iffy... the preset is nice and bright as it is, but it should get a dark part added to it).

  The code is long.  :D  3 presets, etc.  The frag is attached; I'll rename it when I'm done experimenting.


  I removed this code.  I'll change the palette and animation settings tomorrow or the next day.  It'll be MUCH better.

  Fixed the code.  The palette and animation settings are as described below.  I also fixed the palette rotation bug (it was in the formula in the post below this one).

  All of the presets have the zeropercent setting maxed out at 100% - you need to lower zeropercentage to get variation when you change magPower!


Title: Re: Rendering 3D fractals without distance estimators
Post by: M Benesi on November 23, 2012, 04:53:20 AM
  Here is the "Mandy Mag" type, Updated.  Greatly improved the palette tab, made 2 good presets to start you out.

  The palette tab now lets you pick colors, like you can in the color tab.  I only put in 6 colors to switch between, although they still can be varied by having different "Distance" settings for the red, green, and blue values (between each selected color- look at the math, I'm tired, you figure it out).  

  The animation tab currently has 5 "time" variables.  I will rename them later, for now, they are commented at the top of the code, and here:

time1:  angle of rotation one: spins the old gears.
time2:  changes the magPower setting
time3:  changes the zeropercent setting
time4:  changes colorSpeedNumerator
time5:  rotates the palette (didn't even use this one for the following animation - used the colorSpeedNumerator one instead)

UPDATE:  CORRECTED THE PALETTE ROTATION CODE.  IT NOW WORKS CORRECTLY.  A palette rotation of 1, with paletterotationspeed = 1, is one full rotation of the palette; it rotates once more by palette rotation 2.  If you set the rotation speed lower, the palette rotates more slowly.  You can set it to negative values as well.  Have fun...

The 20 second 640x480 video is taking quite some time to upload because I didn't bother compressing it - about 0.5 GB.  I find that if I compress it first and then upload it to youtube, the second compression pass from youtube really blurs the animation.

It'll be here once youtube processes it.  VLC could play it, so I assume youtube will be able to handle it as well.  I'm attaching the greatly improved Mandy Mag code to this message.

  All right, as far as I can tell, youtube made it pretty blurry.  It looks decent on my computer.

http://www.youtube.com/watch?v=ycFRqb1BUkU&feature=youtu.be (http://www.youtube.com/watch?v=ycFRqb1BUkU&feature=youtu.be)

  A couple more code changes are in the works, and there are a couple of ideas to try (blending more than one mag z^n with a z^0, or simply with each other).  And I still have to get around to adding more movement options (rotate around the viewpoint and rotate around the eye).


  UPDATE Nov. 24 2012 12:45 AM EST: Fixed the palette rotation bug.  Keep in mind that all presets have "zeropercentage" maxed out (the first slider on the mutations tab); set it lower to play with magPower.  You might also want to move OUTSIDE of the fractal (move the eye away from 0,0,0 to something like 0,0,-2.6).

  I've got stuff to do over the weekend, so I'll work on the other things I mentioned a bit later.  Maybe I'll make a decent exterior preset or two now.  The presets can be used in either Mandy Mags or Mag Mandys (Mandy Mag is this post, Mag Mandy is the one above).



Title: Re: Rendering 3D fractals without distance estimators
Post by: richardrosenman on December 17, 2012, 07:39:30 AM
Hey gang;

Just wanted to let you know I finished and posted my image set created with Fragmentarium:

http://www.fractalforums.com/images-showcase-(rate-my-fractal)/fragmentarium-fractal-image-set/

It was a pleasure working with the software. ;)

-Rich


Title: Re: Rendering 3D fractals without distance estimators
Post by: folded on February 11, 2013, 11:29:06 AM
Hi,

I have a question about the brute-force renderer. I lifted the .frag files from the github repo and added them to 0.9.12b (the current mac binary). It works beautifully with the depth buffer shader, but despite all my best efforts I get nothing but black (or the background) when the depth buffer shader is turned off, even though there's clearly geometry in frame.

Did supporting the brute-force renderer require some code tweaks as well as the shader? I've tried to build the current head on OSX, but it doesn't like the macports-installed Qt4 at all.

Thanks a lot, though. It's great to be able to see the shape of things I want to construct meshes for in almost realtime, but staring at depth buffers is leaving me feeling like a miner whose torch is failing!

Tobias.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on February 11, 2013, 09:30:05 PM
Did supporting the brute-force renderer require some code tweaks as well as the shader? I've tried to build the current head on OSX, but it doesn't like the macports-installed Qt4 at all.

Yes it does require some code tweaks. If I recall correctly, the 'pixelSize' var was not set in the final shader.

However, if you modify 'DepthBufferShader.frag' and remove the 'viewCoord.x*= pixelSize.y/pixelSize.x;' line, I think it may work. Of course, your image will then only render correctly for a square viewport :-)
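
To show where that fits, here is a minimal sketch of that kind of aspect correction - my own simplified framing, not the actual Fragmentarium source; only 'pixelSize', 'viewCoord' and the removed line come from the workaround above:

// Simplified sketch of the aspect correction the workaround removes.
uniform vec2 pixelSize;                        // size of one pixel; not filled in by the 0.9.12b binary

vec2 screenToViewCoord(vec2 coord) {
    vec2 viewCoord = coord * 2.0 - vec2(1.0);  // map [0,1]^2 to [-1,1]^2
    // viewCoord.x *= pixelSize.y / pixelSize.x;  // the line to remove; without it the image
    //                                            // is only correct for a square viewport
    return viewCoord;
}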



Title: Re: Rendering 3D fractals without distance estimators
Post by: Buddhi on May 06, 2013, 10:16:04 PM
This topic reminds me of the beginning of my story of rendering 3D fractals. In 2009 I had no idea how to implement a DE, so all images were rendered with a brute-force algorithm on the CPU. Of course it was terribly slow (10 hours to render one image), but the results were very nice because they gave a good impression of the fractal detail. Here are some of my historical renders:
http://fav.me/d2ezw09
http://fav.me/d2dwoiq
http://fav.me/d2a3a6q
http://fav.me/d2a3at0
http://fav.me/d2ekinu
http://fav.me/d2ev3h2


Title: Re: Rendering 3D fractals without distance estimators
Post by: cbuchner1 on May 07, 2013, 01:12:46 PM
This topic reminds me of the beginning of my story of rendering 3D fractals. In 2009 I had no idea how to implement a DE, so http://fav.me/d2ezw09

2009, really? I thought the Mandelbulb is less than 3 years old. Hmm. Or maybe I was just getting older faster than I remember ;)


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on December 11, 2013, 09:02:28 PM
Sorry for necro'ing this thread, but I wanted to follow up with perhaps a small improvement to the brute force raytracing method. In order to reduce the noise on smooth surfaces, I biased the random number distribution towards the closest hit found so far. The following image is a comparison between two brute force renders with 300 samples per ray each. Left side is with sampling bias, right side is uniform sampling along the ray:

(http://vectorizer.org/samplingbias.jpg)

As you can see, the difference is quite striking. The sampling bias is implemented very cheaply as a second degree polynomial function applied to the uniform random numbers between 0.0 and 1.0, like this:

float where = randomGenerator();   // uniform random number in [0.0 .. 1.0]
where = (2.0f - where) * where;    // parabola p with p(0) = 0, p(1) = 1, p'(1) = 0; so samples are biased towards the hit
where = closestHit * where;        // scale to the distance of the closest hit found so far

This mapping will necessarily mean that fewer samples will be taken near the camera. The first derivative of the parabola at 0.0 is 2.0. So the samples will always be at least half as dense as without the bias. Thus far the reduced sampling density at the near end does not seem to cause additional artifacts.
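
Written as a small self-contained GLSL helper (just a sketch; the function name is mine), the same mapping reads:

// Maps a uniform random number u in [0.0 .. 1.0] to a biased sample position
// along the ray, concentrating samples near the closest hit found so far.
float biasedSample(float u, float closestHit) {
    // p(u) = (2 - u) * u has p(0) = 0, p(1) = 1, p'(1) = 0 and p'(0) = 2,
    // so the sample density near the camera never drops below half of uniform.
    return closestHit * (2.0 - u) * u;
}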


Title: Re: Rendering 3D fractals without distance estimators
Post by: Buddhi on December 11, 2013, 10:22:28 PM
This mapping will necessarily mean that fewer samples will be taken near the camera. The first derivative of the parabola at 0.0 is 2.0. So the samples will always be at least half as dense as without the bias. Thus far the reduced sampling density at the near end does not seem to cause additional artifacts.

The idea is very interesting, but I think you made a wrong assumption. This will only work when the object is far from the camera. If you have objects in the foreground (e.g. at distance 1.0e-5) and also in the background (e.g. at distance 1.0) in the same scene, then the objects which are close will be very noisy or even not visible


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on December 11, 2013, 10:41:23 PM
If you have objects in the foreground (e.g. at distance 1.0e-5) and also in the background (e.g. at distance 1.0) in the same scene, then the objects which are close will be very noisy or even not visible

This should not be a problem. Each ray is sampled independently. A ray that hits a nearby object will get a small value for the "closestHit" variable; then it will continue to refine the hit distance of this near surface.

Another ray in the same image which hits a distant object will have a large value for the "closestHit" variable, and will then concentrate sample points near that far surface.

If at any time the long ray finds a closer object (remember that there will still be samples along the whole ray, albeit with perhaps half the sample density as without the bias), the situation will adapt as described in the first case.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Buddhi on December 13, 2013, 09:52:47 PM
Today I tested this algorithm. You are partially right about the parabolic function: once it hits an object, the probability of hitting it again with the next samples is higher. However, it is as I thought: when the scene contains objects that are very far away as well as some small objects that are near, the probability of hitting the small, near object even once is very low. An example:
- two spheres in the scene: the first at distance 2.0 with diameter 0.5, the second at distance 0.001 with diameter 0.0005 (small, but at this distance it should cover about half of the image)
- the initial scan range is 10.0
- the first sphere is probably hit within about 20 samples
- now closestHit will be about 2.0; after a few more trials it will drop to about 1.75 (the parabolic function increases the probability of hitting the object just in front of closestHit)
- then only the small object remains in front, and the probability of finding it is very low. Even with a uniform distribution the probability per sample is only about 0.00029 (0.0005 / 1.75), so 300 trials will almost certainly miss it (1 - (1 - 0.00029)^300 ≈ 8% chance of at least one hit). With the parabolic function it is much worse.


Title: Re: Rendering 3D fractals without distance estimators
Post by: knighty on December 13, 2013, 11:06:55 PM
Isn't this a quite extreme example?  :D


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on December 14, 2013, 12:40:32 PM
Isn't this a quite extreme example?  :D
Not extreme in the sense that Buddhi's test scene is unrealistic for fractals. The scale invariance of fractal attractors means that chunks of the set are often surrounded by smaller pebbles, which are in turn surrounded by grains of sand, which are surrounded by motes of dust ...

So it is quite possible that there are many small objects everywhere; in particular near the camera, too.

However, Buddhi's example is extreme in another sense: the probability of the small object being hit by a randomly selected point on the ray is tiny. Regardless of the sample distribution along the ray. A rendering algorithm based on random samples is just not suitable for that kind of object (unless you have prior knowledge of the small object and sample specifically where it is located).

So in my opinion, his test case is really an argument against the brute force raytracer, not an argument against biasing the samples.

There is also a heuristic argument in favour of sample bias: "interesting" objects, i.e. those that the human mind can grasp and decode into "useful" information, usually have a property named "object coherence". That means, looking at just a restricted portion of the object gives enough information that we can make reasonably accurate guesses about the unseen parts that are immediately adjacent. In other words, we expect there to be some surface around some structure, and we expect that the small parts together form a sensible whole.

(Just for completeness: "interesting" also includes some amount of surprise as well. Simple shapes that we can fully grasp immediately don't capture our attention for long.)

Anyway, the biasing of samples was based on the expectation that the object to be rendered has some sort of coherent surface. Likewise, Syntopia's technique of using pixels of previously rendered frames and adjacent pixels to find the currently computed pixel more quickly is equally based on the assumption of object coherence. This is why brute force raytracing works at all: object coherence exists for many of the fractal bodies we like to look at.


Title: Re: Rendering 3D fractals without distance estimators
Post by: kram1032 on December 14, 2013, 12:59:58 PM
hmm...

so bidirectional path tracers work by tracing paths not just from the camera, bouncing around until they hit light sources, but also from the light sources, which in turn helps to locate really tiny light sources, right?

Now I wonder: would it be possible to do a two-pass ray tracer, where the first pass treats all geometry as light sources and is bidirectional, so it essentially gives an idea of where the geometry is located and perhaps what its normals look like?
And then you take that knowledge into the second pass, a path tracer biased in such a way that all the geometry gets sampled uniformly, but wherever there "is nothing" no paths are wasted at all?


Title: Re: Rendering 3D fractals without distance estimators
Post by: knighty on December 14, 2013, 06:01:42 PM
I just remembered an article about random search that I saw some years ago (sorry, I don't remember where exactly). IIRC it roughly says that the "best" strategy is to do a "multiscale" random walk. Maybe this would help to find small or thin features.

@Kram1032: Maybe you are looking for something like this (http://graphics.pixar.com/library/Adjoints/paper.pdf). (There are other great articles at http://graphics.pixar.com.)


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on December 18, 2013, 04:02:19 PM
Syntopia mentioned that he reduces noise significantly with another technique: stratification. That means randomness is being applied not as raw chaos, but tamed to have a desirable characteristic: sample points should not cluster near each other too much, nor should they leave too large gaps.

Thinking about this some more, I realized that the brute force raytracer does not need to be based on randomness at all. We need randomness only to decouple adjacent pixels from one another, so that we don't get Moiré patterns or other objectionable artifacts.

So what we need is a way to place sample points in an interval that has the following qualities:

- cheap to compute
- sample density is reasonably uniform across the interval (i.e. stratified)
- works for any number of points N (because we keep shortening the sampling interval and then start over)
- the method is incremental, i.e. we can add another point to an existing set of samples, and maintain the other qualities

As it turns out, something like that actually exists. It is related to the Golden Ratio, or more specifically to the Golden Angle. If you look at the stylized flower on the wikipedia page

http://en.wikipedia.org/wiki/Golden_angle

you can probably see how and why this works. New petals are always being added into the largest existing gap. All we need to do is crack the circle open, and roll out its circumference in our [0.0 .. 1.0] interval.


So the algorithm for sampling along the ray becomes something like this:

- prepare a constant for the (normalized) Golden Angle:
const float GoldenAngle = 2.0f - 0.5f*(1.0f + sqrtf(5.0f));

- once per ray, a random seed value from [0.0 .. 1.0] is passed in as a parameter RandSeed to decouple the rays from each other

- then the core loop for one ray looks like this

for (int i = 0; i < numSamples; ++i) {
  RandSeed += GoldenAngle;
  if ((RandSeed - 1.0f) >= 0.0f) {   // keep wrapping back into [0.0 .. 1.0]
      RandSeed -= 1.0f;
  }
  float where = RandSeed;            // current sample location in the original parameter interval
  where = (2.0f - where) * where;    // bias sample locations as described above in the thread
  where = closest * where;           // shorten to the current ray length

  if (iterate(rayBase + where*rayDirection) == hit) {   // inside/outside test for this sample point
    closest = where;                 // remember the closest hit
  }
}
return closest;


With this method, I could halve the number of samples to 150 per ray, and get visually identical results to the example above with 300 biased random samples.


So, to recap: we started with 500 to 1000 samples for pure random sampling. This was reduced to 300 samples by biasing sample locations towards the closest hit. Another reduction to 150 samples per ray was then enabled by the stratification gained from GoldenAngle sampling. All in all, the number of samples could be reduced to somewhere between a third and a seventh of the original count.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on December 18, 2013, 10:56:51 PM
Very interesting! I actually used golden ratio sequences a couple of months ago in order to produce sets of diverse colors, based on this approach (http://martin.ankerl.com/2009/12/09/how-to-create-random-colors-programmatically/)

There is also a chapter on generating low-discrepancy sequences in Physically-Based Rendering (the chapter is freely available here: http://graphics.stanford.edu/~mmp/chapters/pbrt_chapter7.pdf) - I googled a bit to see how their methods compared to golden ratio sets, and it seems golden ratio sets do fine (and they are easier to implement than Hammersley / Halton): http://www.graphics.rwth-aachen.de/media/papers/jgt.pdf

One minor optimization idea: the conditional can be removed by using something like: RandSeed = fract(RandSeed + GoldenAngle)

One final idea: for a given ray, once a point inside the fractal has been found, I think binary search (which converges very quickly) should be used to find the boundary closest to the viewer (this will be faster than biasing the samples). After the binary search, the normal (stratified) sampling strategy could continue on the new interval.
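
A rough GLSL sketch of that refinement step (the helper name and signature are just for illustration; it assumes the usual bool inside(vec3 p) predicate and that the camera itself is outside the set):

// Refine a hit by bisection: 'hi' is a ray parameter known to be inside the set,
// 'lo' one known to be outside (e.g. 0.0 for the camera position).
float refineHit(vec3 rayBase, vec3 rayDirection, float lo, float hi) {
    for (int i = 0; i < 20; i++) {                 // ~20 halvings exhaust single precision
        float mid = 0.5 * (lo + hi);
        if (inside(rayBase + mid * rayDirection)) {
            hi = mid;                              // boundary lies closer to the camera
        } else {
            lo = mid;                              // boundary lies farther away
        }
    }
    return hi;                                     // conservative: still inside the set
}

The stratified sampling would then simply continue on the interval [0, refineHit(...)].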


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on December 18, 2013, 11:37:41 PM
One minor optimization idea: the conditional can be removed by using something like: RandSeed = fract(RandSeed + GoldenAngle)

I formulated it in this peculiar way because I was thinking of SIMD machines. The term "RandSeed - 1.0f" is a common subexpression in both the condition and the assignment. A SIMD machine would do a subtraction, a comparison to zero, and then a conditional select.
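
Spelled out, that intended compilation looks roughly like this (a sketch in GLSL; 't' is just an illustrative temporary, not code from the thread):

float t = RandSeed - 1.0;               // subtraction (the common subexpression)
RandSeed = (t >= 0.0) ? t : RandSeed;   // comparison to zero, then a conditional select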

Special floating point operations working on the internals, like fract() or floor(), used to be slow. Not sure how fast those would run on a GPU, or in a vector instruction set like SSE/AVX.

Quote
One final idea: for a given ray, once a point inside the fractal has been found, I think binary search (which converges very quickly) should be used to find the boundary closest to the viewer (this will be faster than biasing the samples). After the binary search, the normal (stratified) sampling strategy could continue on the new interval.

I guess this would be very dependent on the object being rendered. Non-fractal surfaces should respond well to binary search. I am not so sure about fractals, because of self-similarity and scale invariance. But that is just an excuse, really, and could simply be tested in practice.

For the moment I want to keep the innermost loop simple. Hopefully that will make it easier for the OpenCL compiler (after I port the code at some point in the future ...) to fully utilize whatever hardware is there. If rays can alternate between the two "modes", bisection and stratified sampling, vector utilization could drop to 50% (i.e. the dreaded "warp divergence").


Title: Re: Rendering 3D fractals without distance estimators
Post by: eiffie on December 19, 2013, 07:40:34 PM
This seems to work quite well :) Here is a sample I tried of a landscape using only 100 samples that runs real-time.
http://www.youtube.com/watch?v=HZwwS2h3n6k (http://www.youtube.com/watch?v=HZwwS2h3n6k)

I tried it at shadertoy too but it looks like crap. Texture filtering must be set up differently.
Here is the code:
https://www.shadertoy.com/view/ls2GW1 (https://www.shadertoy.com/view/ls2GW1)


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on December 19, 2013, 09:24:16 PM
Surprising that 100 samples look okay-ish, given that the horizon is fairly far away. I would have expected the samples to be spaced too sparsely there. Nice. :)


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on December 19, 2013, 10:12:15 PM
I formulated it in this peculiar way because I was thinking of SIMD machines. The term "RandSeed - 1.0f" is a common subexpression in both the condition and the assignment. A SIMD machine would do a subtraction, a comparison to zero, and then a conditional select.

Special floating point operations working on the internals, like fract() or floor(), used to be slow. Not sure how fast those would run on a GPU, or in a vector instruction set like SSE/AVX.

GLSL functions like fract and floor can be expected to be fast. On my machine (Nvidia 310M) floor() takes the same amount of time as an add, while fract() takes twice as long (thus probably implemented as fract(x)=x-floor(x)). Using the conditional takes four times as long.

I tried it at shadertoy too but it looks like crap. Texture filtering must be set up differently.

Yep, looks weird in shadertoy, even with ANGLE disabled. If I initialize RandSeed to zero (instead of the random value), it looks much better, though.


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on December 20, 2013, 12:22:43 PM
GLSL functions like fract and floor can be expected to be fast. On my machine (Nvidia 310M) floor() takes the same amount of time as an add, while fract() takes twice as long (thus probably implemented as fract(x)=x-floor(x)). Using the conditional takes four times as long.
Okay, I really need to find out what GPU hardware is capable of at the instruction level! Do you happen to be aware of any good documentation?


Title: Re: Rendering 3D fractals without distance estimators
Post by: Syntopia on December 20, 2013, 03:15:45 PM
Okay, I really need to find out what GPU hardware is capable of at the instruction level! Do you happen to be aware of any good documentation?

Some of the CUDA documentation is good: http://docs.nvidia.com/cuda/cuda-c-programming-guide/#arithmetic-instructions. But still, it is difficult to know exactly what the GLSL commands are mapped to.

I have also read that some operations are actually free when combined with other arithmetic operations (this should be the case for abs, negate, saturation), but I cannot find any clear documentation. I imagine this kind of information must be very important to game programmers, so it is weird that it is so hard to find.


Title: Re: Rendering 3D fractals without distance estimators
Post by: Buddhi on December 20, 2013, 03:29:48 PM
To see how a program is translated into GPU code I use the AMD APP Kernel Analyzer (it's part of the AMD SDK). It shows the asm output for the analyzed kernel. An example from the net: http://www.jarredcapellman.com/wp-content/uploads/2012/06/AMDKernelAnalyzer.png


Title: Re: Rendering 3D fractals without distance estimators
Post by: hobold on January 16, 2014, 05:11:18 PM
One final idea: for a given ray, once a point inside the fractal has been found, I think binary search (which converges very quickly) should be used to find the boundary closest to the viewer (this will be faster than biasing the samples). After the binary search, the normal (stratified) sampling strategy could continue on the new interval.
I had first rejected this because of thread divergence on GPUs (or other SIMD hardware). Nevertheless I tried the idea in scalar code. When used in isolation, it completely removes noise from smooth surfaces (I used 20 halving steps to pretty much hit the limit of single precision floating point).

So biasing the samples towards the hit is no longer necessary, which improves sample density near the camera. Additionally, as we have now located the surface exactly, in all "simple" cases (non-dusty surface, voluminous object) the remaining samples on the current ray will all hit empty space.


That means we can afford another optimization which was actually detrimental when I tried it without bisection refinement. Specifically: to reset the sample counter (on the current ray) back to zero after a hit has been found and refined. This change is meant to benefit rendering quality in dusty areas, or for grazing rays (i.e. the silhouette of the object). In such hard cases, closer hits may be found very late in the sampling process, and then there are too few samples left to properly check the area in front of the hit.

The "simple" cases used to get very expensive with that change, because they would reset the sample counter way too often. But with bisection refinement, they reset the counter exactly once, and very early in the sampling process. So the quality improvement now invests processor cycles where they matter: in the hard cases of grazing rays or dusty objects.


Both these modifications will cost substantial performance on GPUs, but for scalar (multithreaded, of course) implementations they are a win, because the default number of samples per ray can be reduced while the accuracy of the computed depth image is notably improved.
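
For concreteness, here is a rough GLSL sketch of how the two modifications could fit into the per-ray loop. This is my reconstruction of the scheme described above, not the actual implementation; it assumes the usual bool inside(vec3 p) predicate, a per-ray random seed in [0,1), and a camera that starts outside the set:

float traceRay(vec3 rayBase, vec3 rayDirection, float far, float seed, int maxSamples) {
    const float GoldenAngle = 0.3819660113;    // normalized golden angle: 2 - (1 + sqrt(5))/2
    float closest = far;                       // closest hit found so far (initially the far plane)
    int i = 0;
    while (i < maxSamples) {
        seed = fract(seed + GoldenAngle);      // next stratified sample location in [0,1)
        float where = seed * closest;          // no parabola bias needed once bisection is used
        if (inside(rayBase + where * rayDirection)) {
            // Bisection refinement: 'where' is inside, the camera (0.0) is outside.
            float lo = 0.0, hi = where;
            for (int k = 0; k < 20; k++) {     // ~20 halvings exhaust single precision
                float mid = 0.5 * (lo + hi);
                if (inside(rayBase + mid * rayDirection)) hi = mid; else lo = mid;
            }
            closest = hi;
            i = 0;                             // reset the sample counter after a refined hit
        } else {
            i++;
        }
    }
    return closest;                            // depth of the closest surface, or 'far' if nothing was hit
}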