
 Author Topic: (bi directional) path tracing a mandelbulb  (Read 8188 times)
Syntopia
Fractal Molossus

Posts: 681

 « Reply #15 on: August 12, 2011, 08:07:54 PM »

Hmm I don't think the definitions you've given are correct, Syntopia.

In global illumination, the (surface) rendering equation describes light transport (in a vacuum). Roughly speaking it's L_out = L_emit + integral over all angles { R(L_in) cos(theta) }, and in this setting a diffuse BRDF R is just a constant k / pi, with k in [0, 1).

I'm not talking about global illumination here, but just about the Lambertian reflection that ker2x mentioned (http://en.wikipedia.org/wiki/Lambertian_reflectance). As in Phong shading (http://en.wikipedia.org/wiki/Phong_reflection_model), the diffuse component depends on the angle: "The reflection is calculated by taking the dot product of the surface's normal vector, \mathbf{N}, and a normalized light-direction vector, \mathbf{L}, pointing from the surface to the light source".
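The N·L factor from the Phong article is just a clamped dot product. A minimal sketch in plain Python (the function name `diffuse_term` is mine, not from any code in this thread):

```python
def diffuse_term(n, l):
    """Clamped cosine between unit surface normal n and unit light direction l."""
    ndotl = sum(a * b for a, b in zip(n, l))
    return max(0.0, ndotl)

# light directly along the normal -> full contribution
print(diffuse_term((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # -> 1.0
```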

Quote
Luminance is a completely different quantity, which is perceptual in nature rather than a basic physical quantity.

Yes, that was my point: By the definition (http://en.wikipedia.org/wiki/Luminance) the luminance is proportional to the luminous power divided by cos(angle). Hence the two terms cancel, and diffuse reflection becomes isotropic, right?
 Logged
ker2x
Fractal Molossus

Posts: 795

 « Reply #16 on: August 12, 2011, 08:12:46 PM »

Hmm I don't think the definitions you've given are correct, Syntopia.

Ha... if my two references on fractals and raytracing disagree, I'm in big trouble
 Logged

often times... there are other approaches which are kinda crappy until you put them in the context of parallel machines
(en) http://www.blog-gpgpu.com/ , (fr) http://www.keru.org/ ,
lycium
Fractal Supremo

Posts: 1158

 « Reply #17 on: August 12, 2011, 08:29:20 PM »

I'm not talking about global illumination here,

but just about the Lambertian reflection that ker2x mentioned (http://en.wikipedia.org/wiki/Lambertian_reflectance).
Yes, and from that article you will see, as ker2x said, that it is constant, no cosines.

The cosine comes from the rendering equation, which is what you are approximating when you do some kind of lighting; all rendering algorithms can be considered in the common framework of the rendering equation, which fully expresses all possible light interactions (under the assumptions of classical optics, no quantum phenomena).
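Spelled out, the rough formula from Reply #15 is the standard surface rendering equation:

```latex
L_o(x,\omega_o) \;=\; L_e(x,\omega_o) \;+\; \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\, \cos\theta_i \,\mathrm{d}\omega_i
```

For a Lambertian surface the BRDF is the constant f_r = k/pi with k in [0, 1): the cosine lives in the transport integral, not in the material term.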

As in Phong shading (http://en.wikipedia.org/wiki/Phong_reflection_model), the diffuse component depends on the angle: "The reflection is calculated by taking the dot product of the surface's normal vector, \mathbf{N}, and a normalized light-direction vector, \mathbf{L}, pointing from the surface to the light source".
The Phong reflection model they are discussing assumes purely directional lights, and it doesn't take global illumination into account. For a single directional light source, without considering indirect illumination, the rendering equation reduces to what you see there.

But we ARE discussing global illumination, and in this context that simple shading model doesn't apply.

Quote
Luminance is a completely different quantity, which is perceptual in nature rather than a basic physical quantity.

Yes, that was my point: By the definition (http://en.wikipedia.org/wiki/Luminance) the luminance is proportional to the luminous power divided by cos(angle). Hence the two terms cancel, and diffuse reflection becomes isotropic, right?
Nope, you're confusing a few things here:

1. The diffuse BRDF is isotropic, because it is a constant and doesn't have a cosine term.
2. By the definition of luminance, which you linked, it is a perceptual measure, i.e. it has to do with the human visual system, not physics.

"Luminance is a photometric measure" -> "Photometry is the science of the measurement of light, in terms of its perceived brightness to the human eye.[1] It is distinct from radiometry, which is the science of measurement of radiant energy (including light) in terms of absolute power; rather, in photometry, the radiant power at each wavelength is weighted by a luminosity function (a.k.a. visual sensitivity function) that models human brightness sensitivity."

The perception of light is a separate issue to doing the physical computations.
 « Last Edit: August 12, 2011, 08:32:09 PM by lycium, Reason: changed "since it doesn't" to "and it doesn't" » Logged

Syntopia
Fractal Molossus

Posts: 681

 « Reply #18 on: August 12, 2011, 09:12:31 PM »

Wow, this discussion is getting heated :-)

I'm not talking about global illumination here,

but just about the Lambertian reflection that ker2x mentioned (http://en.wikipedia.org/wiki/Lambertian_reflectance).
Yes, and from that article you will see, as ker2x said, that it is constant, no cosines.

As I read ker2x, he talked about tracing a path from the camera to the fractal surface hit point, and from the fractal surface hit point to a point light source. This is simple ray tracing, with no global illumination. If he wants to estimate the diffuse reflection from the point light, he must multiply the light intensity by the cos(angle)-factor between the surface normal and point light direction (as the only formula from the link states). The same goes if he samples multiple lights from the hemisphere - the normal angle / light direction must be taken into account (otherwise you could rotate the hemisphere without changing lighting!)
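The direct-lighting estimate described above (camera ray to the hit point, shadow ray to a point light, scaled by the cosine) can be sketched like this; the function and variable names are illustrative, not from any code in the thread:

```python
import math

def direct_light(p, n, light_pos, light_intensity, albedo):
    """One-bounce diffuse estimate at surface point p with unit normal n,
    lit by a point light (inverse-square falloff, Lambertian BRDF albedo/pi)."""
    to_light = [light_pos[i] - p[i] for i in range(3)]
    r2 = sum(c * c for c in to_light)        # squared distance to the light
    r = math.sqrt(r2)
    l = [c / r for c in to_light]            # unit direction toward the light
    ndotl = max(0.0, sum(n[i] * l[i] for i in range(3)))
    return (albedo / math.pi) * (light_intensity / r2) * ndotl
```

Dropping the `ndotl` factor here would indeed make the result invariant under rotating the light around the hemisphere, which is exactly the inconsistency pointed out above.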

I'm not questioning your expertise on raytracing - personally I know nothing about global illumination - but I really think a simple shading model like Blinn-Phong will be the best first step for ker2x :-)

Have a nice evening!
 Logged
ker2x
Fractal Molossus

Posts: 795

 « Reply #19 on: August 12, 2011, 09:43:59 PM »

As I read ker2x, he talked about tracing a path from the camera to the fractal surface hit point, and from the fractal surface hit point to a point light source. This is simple ray tracing, with no global illumination. If he wants to estimate the diffuse reflection from the point light, he must multiply the light intensity by the cos(angle)-factor between the surface normal and point light direction (as the only formula from the link states). The same goes if he samples multiple lights from the hemisphere - the normal angle / light direction must be taken into account (otherwise you could rotate the hemisphere without changing lighting!)

I'm not questioning your expertise on raytracing - personally I know nothing about global illumination - but I really think a simple shading model like Blinn-Phong will be the best first step for ker2x :-)

Have a nice evening!

Ha! I understand now.

That is indeed what I described in one of my posts (yes, I reinvented raytracing :p ), but not what I was trying to do.
Sorry about the confusion and the discussion it created. (I'm confused myself by all the different techniques.)

What I want to do is described here: http://en.wikipedia.org/wiki/Path_tracing and here: http://en.wikipedia.org/wiki/Metropolis_light_transport which is the technique used by lycium, explained here: http://www.fractalforums.com/3d-fractal-generation/true-3d-mandlebrot-type-fractal/15/
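The path-tracing loop from that first link reduces, for a diffuse scene, to: accumulate throughput, bounce with cosine-weighted directions, and add light when the path escapes. A toy sketch (one diffuse ground plane under a uniform sky; every name here is illustrative, and this is naive path tracing, not Metropolis light transport):

```python
import math, random

def cosine_sample_up(rng):
    """Cosine-weighted direction on the hemisphere around the +y axis."""
    r1, r2 = rng.random(), rng.random()
    phi = 2.0 * math.pi * r1
    s = math.sqrt(r2)
    return (math.cos(phi) * s, math.sqrt(1.0 - r2), math.sin(phi) * s)

def radiance(o, d, rng, albedo=0.5, sky=1.0, max_bounces=8):
    """Path-trace one ray against a diffuse plane y = 0 under a uniform sky."""
    throughput = 1.0
    for _ in range(max_bounces):
        if d[1] >= 0.0:                  # ray escapes upward: it hits the sky
            return throughput * sky
        t = -o[1] / d[1]                 # intersect the ground plane y = 0
        o = (o[0] + t * d[0], 0.0, o[2] + t * d[2])
        # Lambertian bounce: the BRDF (albedo/pi) times the cosine, divided by
        # the cosine-weighted pdf (cos/pi), leaves just the albedo.
        throughput *= albedo
        d = cosine_sample_up(rng)
    return 0.0

rng = random.Random(1)
est = sum(radiance((0.0, 1.0, 0.0), (0.0, -1.0, 0.0), rng) for _ in range(1000)) / 1000
```

With only one plane and a sky, every path terminates after exactly one bounce, so this estimator happens to be exact (albedo * sky = 0.5); in a real scene the bounce loop and the Monte Carlo averaging do the actual work.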

Sorry again. (in some way, you're both correct)
 Logged

Syntopia
Fractal Molossus

Posts: 681

 « Reply #20 on: August 12, 2011, 10:17:48 PM »

Sorry again. (in some way, you're both correct)

Well, no need to apologize :-) Nothing wrong with your questions - we just misunderstood each other.

Btw, you said you did manage to get an OpenCL prototype running. Out of curiosity, how fast is it to render a frame?
 Logged
ker2x
Fractal Molossus

Posts: 795

 « Reply #21 on: August 12, 2011, 10:26:17 PM »

Sorry again. (in some way, you're both correct)

Well, no need to apologize :-) Nothing wrong with your questions - we just misunderstood each other.

Btw, you said you did manage to get an OpenCL prototype running. Out of curiosity, how fast is it to render a frame?

No no, not at all.
I wrote that with OpenCL: http://www.fractalforums.com/programming/the-simpliest-naive-bruteforce-code-for-mandelbulb/15/ (see the result at the bottom of the page), but it's just an implementation of the dumbest possible raymarching technique.
 Logged

A Noniem
Alien

Posts: 38

 « Reply #22 on: August 12, 2011, 11:14:10 PM »

Sorry again. (in some way, you're both correct)

Well, no need to apologize :-) Nothing wrong with your questions - we just misunderstood each other.

Btw, you said you did manage to get an OpenCL prototype running. Out of curiosity, how fast is it to render a frame?

Since I also have an OpenCL renderer (well, as you said, it's more of a prototype), I can answer your question as well. Rendering a mandelbox (using single precision) takes about 2-3 seconds at 1280x1024 on an ATI 4350.
 Logged
Syntopia
Fractal Molossus

Posts: 681

 « Reply #23 on: August 12, 2011, 11:57:09 PM »

Have you also tried running the OpenCL code on the CPU using Intel's OpenCL SDK?

And is your ATI card capable of running double precision in OpenCL? (emulated or otherwise)
 Logged
lycium
Fractal Supremo

Posts: 1158

 « Reply #24 on: August 12, 2011, 11:58:49 PM »

Interestingly, AMD's CPU OpenCL implementation seems to be faster than Intel's.
 Logged

A Noniem
Alien

Posts: 38

 « Reply #25 on: August 13, 2011, 12:31:10 AM »

Have you also tried running the OpenCL code on the CPU using Intel's OpenCL SDK?

And is your ATI card capable of running double precision in OpenCL? (emulated or otherwise)

I have an AMD processor and the AMD SDK supports the CPU, so why use Intel's SDK? Rendering on the CPU is ~4 times slower (Athlon 64 X2 @ 2.2 GHz). I hope to use the GPU and CPU simultaneously someday to speed up rendering even further. My setup isn't that fast or new; I wasn't really into computers yet when I bought this thing.

And no, my card does not support double precision. It was €25,-, supports only OpenCL 1.0 (the Radeon HD 4xxx series was never designed with OpenCL in mind), so I'm lucky to even run OpenCL. I've done a bit of research on which cards support double precision. Almost all recent Nvidia cards support it; ATI is a different story, however. The top models of the 4xxx and 5xxx series support double precision, but the 4xxx only supports OpenCL 1.0 and lacks some useful extensions, leaving you with only the 5xxx series for the serious stuff. I don't know what AMD did for the 6xxx series, but somehow only the 6950 and 6970 support double precision. The cheapest OpenCL 1.1/double-precision ATI GPU costs €100,- (Radeon HD 5830). It's a huge surprise to me that the cheapest double-precision-capable 6xxx card is the 6950, which costs a whopping €190,-.

If you want to see which ATI cards support double precision, go to:
http://developer.amd.com/sdks/AMDAPPSDK/assets/AMD_APP_SDK_Getting_Started_Guide_v2.5.pdf

I'm not sure whether it is emulated or native, I think that in the 4xxx series it is emulated, but I'm not sure. I read somewhere that nVidia used to do this as well.
 « Last Edit: August 13, 2011, 12:58:10 AM by A Noniem » Logged
Jesse
Fractal Schemer

Posts: 1013

 « Reply #26 on: August 13, 2011, 02:43:56 AM »

Quote
Even if you're using single precision, writing code for a GPU just as you would for a CPU is a bad idea. The architecture is different, and heavy branching alone can effectively turn your 800 MHz GPU into a weak 800 MHz CPU with just a few threads. If you don't know why, then you don't understand GPU architecture, and I'd suggest taking a look at how GPUs work before claiming 100x speedups etc. There's definitely massive potential with GPUs for compute-limited workloads (fractals are pretty ideal since they use very little memory), but realising this potential in practice requires quite some care.

Not sure if you are referring to me, but I agree that the way you code is very important; the same goes for the CPU, but with the GPU it is more critical, as you can see from the number of different memory types in OpenCL... so keep the memory usage (like all the code) as local as possible.
But I have only had a quick look at OpenCL; I have not used it at all.

Btw, I would never claim a 100x speed increase for the GPU, just ~100x slower rendering for Monte Carlo compared with Phong lighting.
(In case you mixed this up, but it doesn't matter at all.)
 Logged
lycium
Fractal Supremo

Posts: 1158

 « Reply #27 on: August 13, 2011, 04:28:15 AM »

Oh, I did indeed misread that. Thanks / sorry!

Please excuse my reflexive distrust of numbers like GPU speedups, too. On a related note, an excellent tongue-in-cheek guide to making GPU performance claims: http://www.walkingrandomly.com/?p=3736
 Logged

Jesse
Fractal Schemer

Posts: 1013

 « Reply #28 on: August 14, 2011, 11:54:59 PM »

Quote
The GPU thing: OpenCL can also be used for double precision even if the card does not support it; this should still be faster than a common C compiler, from what I read.

Double precision support is optional in OpenCL 1.1 (chapter 9.3 in the spec). I'm guessing it is only implemented on architectures with decent hardware support.

Hmm, it would be cool to know whether double precision is supported anyway or only with a decent card.
I thought OpenCL (1.1) would use the CPU, maybe SSE2, if the GPU doesn't support it... does somebody know the answer?
 Logged
Syntopia
Fractal Molossus

Posts: 681

 « Reply #29 on: August 15, 2011, 08:34:25 AM »

Hmm, it would be cool to know whether double precision is supported anyway or only with a decent card.
I thought OpenCL (1.1) would use the CPU, maybe SSE2, if the GPU doesn't support it... does somebody know the answer?

Geeks3D GPU Caps Viewer allows you to view OpenCL information (OpenCL / More OpenCL Information...).
In order to support double precision, the CL_DEVICE_EXTENSIONS entry 'cl_khr_fp64' must be present; otherwise the code will refuse to run.

For my Geforce 8800 GTX (with drivers dating from 1-7-2011) there is no double precision support :-(
My Intel OpenCL implementation does support it though (no big surprise here).

I don't think there will be an automatic fallback to the CPU if the GPU does not support doubles - however, an application could check the available OpenCL implementations on a system and choose one with double support.

Notice that GPU Caps Viewer also comes with a 4D Quaternion Julia demo in OpenCL which is useful for speed comparisons!

Update: I tried changing some OpenCL code to double precision, and my Nvidia driver does not seem to check the extension - it just fails compilation. The Nvidia compiler complained with 'warning: Double is not supported. Demoting to float', suggesting that doubles would be converted to floats. However, the compilation failed anyway with the message: 'Instruction 'cvt' requires SM 1.3 or higher, or map_f64_tof32 directive'
 « Last Edit: August 15, 2011, 09:00:27 AM by Syntopia » Logged