Author Topic: Inigo Quilez's brute force global illumination  (Read 8950 times)
eiffie
Guest
« Reply #15 on: November 20, 2012, 04:42:35 PM »

First I'll answer a few questions:
Softology - You got it (thanks marius!)
cKleinhuis - Yeah, it's like render-to-texture, except it's called a frame buffer (the more generic term). Outside of Fragmentarium you could also run the script as multiple passes with alpha blending (disable the depth field), then there's no need to set up a buffer. You have to fake random numbers in GLSL - basically fract(sin(lastRandom*ridiculouslyLargeNumber)).
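For anyone new to that trick, here's a minimal Python sketch of the GLSL idiom (the 43758.5453 constant is just the commonly seen magic number, not something taken from eiffie's fragment):

```python
import math

def fract(x):
    # GLSL fract(): fractional part, always in [0, 1), even for negative x
    return x - math.floor(x)

def fake_rand(last):
    # The classic GLSL pseudo-RNG: fract(sin(seed) * largeNumber).
    # Feed the previous value back in as the next seed.
    return fract(math.sin(last) * 43758.5453)

# Chain a few values from an arbitrary starting seed
r = 0.123
seq = []
for _ in range(5):
    r = fake_rand(r)
    seq.append(r)
print(seq)
```

It's not a good RNG by any statistical measure, but it's cheap and good enough for jittering sample directions in a shader.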

Now I have a question. Inigo skimmed over a few details so I had to guess the following:
Code:
vec3 cosineDirection(in vec3 nor)
{ // return a random direction on the hemisphere
    vec2 r = rand2() * 6.283;
    vec3 dr = vec3(sin(r.x) * vec2(sin(r.y), cos(r.y)), cos(r.x));
    return (dot(dr, nor) < 0.0) ? -dr : dr;
}
vec3 coneDirection(in vec3 nor, float ratio)
{ // return a random direction within a cone, where ratio is width/length
    vec3 up = (dot(vec3(0.0, 1.0, 0.0), nor) > 0.9) ? vec3(1.0, 0.0, 0.0) : vec3(0.0, 1.0, 0.0);
    vec3 rt = normalize(cross(up, nor)); up = cross(nor, rt);
    vec2 r = rand2(); r = sqrt(r.y) * vec2(cos(r.x * 6.283), sin(r.x * 6.283));
    return normalize(nor + (rt * r.x + up * r.y) * ratio);
    //return normalize(nor + cosineDirection(nor) * ratio); // faster but even less correct
}
cosineDirection I get. Just a random direction in the hemisphere with the pole "nor".
coneDirection should return a random direction within a cone. I faked this by using either a flat disk or a hemisphere pushed in the direction of the normal, but there must be a better way. If I get the polar coordinates and "jitter" them randomly, will that be uniform??
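For what it's worth, here's a quick numerical look at exactly that worry (my own Python sketch, not code from the thread): picking the polar angle itself uniformly is not uniform over directions - a uniform direction needs cos(theta) uniform, not theta. Comparing the mean |z| of the two schemes shows the bias:

```python
import math
import random

random.seed(1)
N = 200_000

# Scheme A: polar angle theta chosen uniformly (as r.x is used in
# cosineDirection above). This clusters samples near the poles.
mean_theta_uniform = sum(abs(math.cos(random.uniform(0.0, 2.0 * math.pi)))
                         for _ in range(N)) / N

# Scheme B: cos(theta) chosen uniformly in [-1, 1], which IS uniform
# on the sphere (Archimedes' hat-box theorem).
mean_cos_uniform = sum(abs(random.uniform(-1.0, 1.0)) for _ in range(N)) / N

# Uniform-on-sphere gives E[|z|] = 1/2; uniform theta gives E[|z|] = 2/pi ~ 0.637.
print(mean_theta_uniform, mean_cos_uniform)
```

So jittering polar coordinates directly over-weights the pole; Syntopia's replies below give the properly distributed constructions.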

Finally, since you read this far, you get a treat: emissive material! I have been using this for a while and really like it. It lets you fractally place lights all over the scene without adding tons of lighting calculations. It's a fake, but a nice one :)

http://www.youtube.com/v/gP6nqhBmF0k&rel=1&fs=1&hd=1

* iqPathWithRefrEm.frag (9.13 KB - downloaded 170 times.)
« Last Edit: November 20, 2012, 04:56:06 PM by eiffie » Logged
marius
Fractal Lover
**
Posts: 206


« Reply #16 on: November 20, 2012, 06:30:57 PM »

Quote
BTW, would anyone know if SLI can accelerate OpenGL applications? Does it need specific SLI profiles for the application?

I finally assembled a machine with two 7970s at 1 GHz. It took some liquid cooling to keep it from melting down.

SLI/Crossfire has its issues, as you might imagine.
It tends to only kick in on full-screen, and only if the system thinks it has a profile for your application.
At the moment I rename boxplorer.exe to SeriousSam.exe ;)

But then it works for boxplorer, an SDL/OpenGL app, and scales near-linearly, with all GPUs at 100%.

You still have to adjust code for it: split the workload into multiple parts, etc.
At the moment it still drops back to single-GPU performance if I enable a post-render fake-DoF pass; I need to look into that.

Most of the fragments with a decent DE run at 30 fps or more at 1080p - single precision, that is.
Logged
eiffie
Guest
« Reply #17 on: November 20, 2012, 06:49:32 PM »

I get jealous of these awesome machines. I wrote the last script on my second machine, with a motherboard GPU and 32 temporary registers. It worked, but took 30 seconds per frame! :)
Logged
cbuchner1
Fractal Phenom
******
Posts: 443


« Reply #18 on: November 20, 2012, 07:24:49 PM »


30 seconds per frame or not - you coded a Mandel-Lightbulb. *faints*
Logged
knighty
Fractal Iambus
***
Posts: 819


« Reply #19 on: November 20, 2012, 07:30:20 PM »

Quote
If I get the polar coordinates and "jitter" them randomly will that be uniform??
I can't answer your question, but I think you may find these interesting and useful:
http://www.cs.virginia.edu/~jdl/importance.doc
http://madebyevan.com/webgl-path-tracing/
(and the source code: http://madebyevan.com/webgl-path-tracing/webgl-path-tracing.js )
Logged
Syntopia
Fractal Molossus
**
Posts: 681



syntopiadk
WWW
« Reply #20 on: November 20, 2012, 09:02:52 PM »

Quote
cosineDirection I get. Just a random direction in the hemisphere with the pole "nor".
coneDirection should return a random direction within a cone. I faked this by using either a flat disk or hemisphere pushed in the direction of the normal but there must be a better way. If I get the polar coordinates and "jitter" them randomly will that be uniform??

For a cone direction on a hemisphere, I use the following in Fragmentarium to sample e.g. a sun-like light source:
Code:
vec3 getSample(vec3 dir, float extent) {
    // Create orthogonal vector (fails for z,y = 0)
    vec3 o1 = normalize(vec3(0., -dir.z, dir.y));
    vec3 o2 = normalize(cross(dir, o1));

    // Convert to spherical coords aligned to dir
    vec2 r = getUniformRandomVec2();
    r.x = r.x * 2. * PI;
    r.y = 1.0 - r.y * extent;

    float oneminus = sqrt(1.0 - r.y * r.y);
    return cos(r.x) * oneminus * o1 + sin(r.x) * oneminus * o2 + r.y * dir;
}
Here 'extent' is the size of the light source we sample. It is given as 1-cos(angle), so 0 means a point-like light source (sharp shadows) and 1 means a full-hemisphere light source (no shadows).

It is formula 34 in http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf, just adapted to a coordinate system aligned with the 'dir' direction.
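If it helps, here is a rough Python port of getSample that I used to convince myself what 'extent' does (Python's random stands in for getUniformRandomVec2, and the helper names are my own): every returned direction is unit length, and its cosine with dir stays in [1 - extent, 1]:

```python
import math
import random

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def get_sample(axis, extent, rng):
    # Orthonormal frame around axis (like the GLSL version, this
    # construction fails when axis.y and axis.z are both zero).
    o1 = _normalize((0.0, -axis[2], axis[1]))
    o2 = _normalize(_cross(axis, o1))
    rx = rng.random() * 2.0 * math.pi
    ry = 1.0 - rng.random() * extent      # cos(angle), in [1 - extent, 1]
    oneminus = math.sqrt(1.0 - ry * ry)
    return tuple(math.cos(rx) * oneminus * a + math.sin(rx) * oneminus * b + ry * c
                 for a, b, c in zip(o1, o2, axis))

rng = random.Random(7)
axis = _normalize((0.3, 0.5, 0.8))
samples = [get_sample(axis, 0.2, rng) for _ in range(1000)]
```

Since the frame (o1, o2, axis) is orthonormal, the squared length is oneminus^2 + ry^2 = 1, and dot(sample, axis) equals ry by construction - which is exactly the 1-cos(angle) reading of 'extent'.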

But you shouldn't use the above formula for glossy specular reflectance. I can see that IQ suggests it, but it is really a hack - you are effectively sampling with a box distribution function, whereas the Phong reflection uses a cosine-power distribution. You need to sample the full hemisphere and weight the samples according to dot(reflectedVector, sampleDirection)^Power. Since this is slow, you will want to use importance sampling, and sample according to the cosine-power distribution. Note that you should not multiply the samples by the dot-product-power term if you do this.

Here is a cosine-power distribution sampling function:
Code:
vec3 getSampleBiased(vec3 dir, float power) {
    // Create orthogonal vector (fails for z,y = 0)
    vec3 o1 = normalize(vec3(0., -dir.z, dir.y));
    vec3 o2 = normalize(cross(dir, o1));

    // Convert to spherical coords aligned to dir
    vec2 r = rand(viewCoord * (float(backbufferCounter) + 1.0));
    if (Stratify) { r *= 0.1; r += cx; }
    r.x = r.x * 2. * PI;
    r.y = 1.0 - r.y;

    // This should be cosine^n weighted.
    // See, e.g., http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf
    // Item 36
    r.y = pow(r.y, 1.0 / (power + 1.0));

    float oneminus = sqrt(1.0 - r.y * r.y);
    vec3 sdir = cos(r.x) * oneminus * o1 +
                sin(r.x) * oneminus * o2 +
                r.y * dir;

    return sdir;
}
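A small sanity check on that distribution (my own Python sketch of just the r.y line, skipping the frame construction and the Stratify branch): since r.y = u^(1/(power+1)) with u uniform, the cosine between the sample and dir has expected value (power+1)/(power+2), so higher powers concentrate the samples around dir:

```python
import random

def sample_cosine(power, rng):
    # r.y from getSampleBiased: the cosine between the sample and dir.
    # (1 - rng.random()) is uniform on (0, 1], like the GLSL 1.0 - r.y.
    return (1.0 - rng.random()) ** (1.0 / (power + 1.0))

rng = random.Random(42)
N = 100_000
means = {}
for power in (1.0, 9.0, 99.0):
    means[power] = sum(sample_cosine(power, rng) for _ in range(N)) / N

# Analytically E[cos] = (power+1)/(power+2): 2/3, 10/11, 100/101.
print(means)
```

Power = 0 recovers the uniform hemisphere (E[cos] = 1/2), which matches the remark below about cosineDirection being the power-1 case of the same family.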

Btw, I think that IQ's cosineDirection means a direction chosen according to the power-1 cosine distribution (because the diffuse light has a cos(normal, sampleDirection) weight), and not a uniformly random direction.

The code above is part of the 'Theory/Convolution.frag' example in Fragmentarium. This fragment can be used to derive precalculated specular and diffuse light maps for IBL lighting. I plan to write a blog entry with more details soon.
« Last Edit: November 20, 2012, 09:29:15 PM by Syntopia » Logged
eiffie
Guest
« Reply #21 on: November 20, 2012, 09:19:44 PM »

Ah thanks a bunch now I'm learning something!
Logged
eiffie
Guest
« Reply #22 on: November 21, 2012, 06:01:44 PM »

OK now I have specular light working but just as a sanity check maybe someone can give feedback on this light model.

Using Syntopia's functions getSample and getSampleBiased, I am getting light from...

For scattered light:
searchDirection = getSample(surfaceNormal, 1.0) // this is the cosine-weighted sample
For direct lighting:
searchDirection = getSample(lightDirection, extent) // where extent is approx 0.001 for soft shadows from a sun-like source
For specular lighting:
searchDirection = getSampleBiased(reflect(rayDirection, surfaceNormal), specularExponent) // checking for near-perfect reflections

searchDirection is then used for a shadow check.

This picture compares the cheap version of specular (left) with the way Syntopia suggested.
Thanks for the help on this, guys. Now I'm going back to building a fast raymarcher with the same fake caustics in it :)


* specularEx.jpg (48.54 KB, 430x382 - viewed 872 times.)
* iqPathWithRefrEmSpec.zip (4.02 KB - downloaded 149 times.)
« Last Edit: November 21, 2012, 09:17:00 PM by eiffie, Reason: added attachments » Logged
Syntopia
Fractal Molossus
**
Posts: 681



syntopiadk
WWW
« Reply #23 on: November 21, 2012, 11:06:29 PM »

Quote
For scattered light:
searchDirection = getSample(surfaceNormal, 1.0) // this is the cosine-weighted sample
For direct lighting:
searchDirection = getSample(lightDirection, extent) // where extent is approx 0.001 for soft shadows from a sun-like source
For specular lighting:
searchDirection = getSampleBiased(reflect(rayDirection, surfaceNormal), specularExponent) // checking for near-perfect reflections

The directions are probably right, but there is more to importance sampling than just biasing the samples towards the most important regions - you have to take the distribution you're sampling with into account. So the samples must be weighted according to the reciprocal of their chance of being picked, I think. I still have some trouble with my code, so I can't share it yet.
Logged
eiffie
Guest
« Reply #24 on: November 24, 2012, 06:10:55 PM »

I admit that as soon as you release your code I will steal most of it :) but I am having fun screwing around until then.

I actually thought it went the other way. If you just chose uniformly random rays, you would have to weight them based on their likelihood of actually arriving at the camera (remember, we are doing this all backwards - the rays really come uniformly from the lights, and only a few hit the camera).

But I admit my brain has reached its limit here.
Logged
richardrosenman
Conqueror
*******
Posts: 104



WWW
« Reply #25 on: November 24, 2012, 11:55:55 PM »

Amazing stuff here guys!

-Rich
Logged

Syntopia
Fractal Molossus
**
Posts: 681



syntopiadk
WWW
« Reply #26 on: November 25, 2012, 08:13:12 PM »

Quote
I admit that as soon as you release your code I will steal most of it :) but I am having fun screwing around until then.

I actually thought it went the other way. If you just chose uniformly random rays, you would have to weight them based on their likelihood of actually arriving at the camera (remember, we are doing this all backwards - the rays really come uniformly from the lights, and only a few hit the camera).

But I admit my brain has reached its limit here.

I'm not an expert here, but as I see it, we have to integrate over all light directions (and paths) reaching the camera, e.g.:

I = \int_{\Omega}f(\bar{\mathbf{x}}) \, d\bar{\mathbf{x}}

We can do this by taking a finite number of uniform samples and estimating the integral as the average sample value times the volume we are integrating over:

I \approx V \frac{1}{N} \sum_{i=1}^N f(\bar{\mathbf{x}}_i)

Now, this only holds if we are choosing samples uniformly. If we biased our samples to regions with high values, we would get too high a value for the integral. However, sometimes we know that the contributions follow a very narrow distribution (e.g. specular lights).

Therefore we have to weight the samples by the reciprocal of the sampling distribution:

 Q_N \equiv \frac{1}{Z_N} \sum_{i=1}^N \frac{f(\bar{\mathbf{x}}_i)}{p(\bar{\mathbf{x}}_i)}

Z_N \equiv \sum_{i=1}^N \frac{1}{p(\bar{\mathbf{x}}_i)}

In Fragmentarium, I multiply the weights onto the color values before accumulating them, and keep track of the sum of the weights in the alpha channel. But there are some issues - in particular, when the distribution goes towards zero I get very large terms, and noise pixels.

The formulas above were pasted from http://en.wikipedia.org/wiki/Monte_Carlo_integration, where there is more discussion.
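A one-dimensional toy version of those formulas (my own Python sketch, unrelated to the Fragmentarium code): estimate the integral of 3x^2 over [0,1], which is exactly 1, by drawing x from the non-uniform density p(x) = 2x and weighting every sample by 1/p(x):

```python
import math
import random

def f(x):
    # Integrand: integral of 3x^2 over [0,1] is exactly 1.
    return 3.0 * x * x

def estimate_importance(N, rng):
    total = 0.0
    for _ in range(N):
        # Draw x with density p(x) = 2x on (0,1): invert the CDF
        # P(x) = x^2, so x = sqrt(u) for uniform u.
        x = math.sqrt(rng.random())
        p = 2.0 * x
        total += f(x) / p          # weight each sample by 1/p, as above
    return total / N

rng = random.Random(3)
est = estimate_importance(200_000, rng)
print(est)   # should be close to 1.0
```

Because p(x) roughly follows the shape of f(x), the weighted ratio f/p varies much less than f itself, which is exactly why biasing samples towards the bright regions (and dividing by p) reduces the noise.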
Logged
Syntopia
Fractal Molossus
**
Posts: 681



syntopiadk
WWW
« Reply #27 on: November 25, 2012, 10:53:32 PM »



After a bit of experimentation, I've found that I shouldn't try to sum the weights in the alpha channel - instead I should calculate the integral of the PDF (probability density function) and normalize by that. It is explained here: http://www.rorydriscoll.com/2009/01/07/better-sampling/

He doesn't derive the normalization for the cosine-powered distribution, but if you do the integral, you end up with:

L_o \approx \frac{2 c}{N * (Power+1)} \sum_{i=1}^{N}{L_i}

I've checked it, and it works - it converges much faster when using the biased (importance-sampled) form. Unfortunately, the gain is largest for high powers, so not much is gained for the diffuse term.
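That normalization can be checked against an analytically known case (again a Python sketch in my own notation, not the Fragmentarium code): drawing cos(theta) = u^(1/(power+1)) as in getSampleBiased and applying the estimator above to L = cos(theta) should converge to the exact hemisphere integral of cos^(power+1), which is 2*pi/(power+2):

```python
import math
import random

def estimate(power, N, rng):
    # Directions drawn with pdf p = (power+1) * cos(theta)^power / (2*pi),
    # i.e. cos(theta) = u^(1/(power+1)) for uniform u (as in getSampleBiased).
    # Estimator from the post, with radiance L_i = cos(theta_i):
    #   L_o ~= (2*pi / (N * (power + 1))) * sum(L_i)
    total = 0.0
    for _ in range(N):
        cos_t = rng.random() ** (1.0 / (power + 1.0))
        total += cos_t
    return 2.0 * math.pi * total / (N * (power + 1.0))

rng = random.Random(11)
results = {power: estimate(power, 200_000, rng) for power in (1.0, 10.0)}

# Exact value of the hemisphere integral of cos(theta)^(power+1)
# is 2*pi / (power + 2).
print(results)
```

The 1/(power+1) factor is the integral of the unnormalized cos^power lobe over the hemisphere (divided by 2*pi), which is exactly the normalization the blog post derives.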
Logged
cKleinhuis
Administrator
Fractal Senior
*******
Posts: 7044


formerly known as 'Trifox'


WWW
« Reply #28 on: November 25, 2012, 10:59:38 PM »

i need to break in here:

people, why are you all relying on a random integral-approximation method?
as a starter, the directions from which a light source can actually come should be somehow fractally approximated, with the bonus of not having to search the whole hemisphere, obtaining more realistic results ;)

i mean, dudes, we are in the fractalforums here, and i wonder why the solution of the beloved rendering equation that lies behind the global illumination renderings couldn't be modified similar to what i suggested before
Logged

---

divide and conquer - iterate and rule - chaos is No random!
eiffie
Guest
« Reply #29 on: November 27, 2012, 04:43:27 PM »

Since I know Syntopia will arrive at the best physical model given time, I now feel free to just "wing it". I rewrote the engine from scratch once I realized all the shadow checks were redundant. The code is much faster now and the results are better. It still takes time to converge, but it's fast enough to create small videos...
http://www.youtube.com/v/iMBKRdGI6Q4&rel=1&fs=1&hd=1

...attached script is up to date as of: June 13, 2013



* simpleGI3.jpg (30.41 KB, 400x397 - viewed 823 times.)
* eiffieGI2.zip (10.76 KB - downloaded 170 times.)
« Last Edit: June 13, 2013, 06:08:56 PM by eiffie, Reason: updated attachment » Logged