eiffie
Guest
« Reply #15 on: November 20, 2012, 04:42:35 PM »
First I'll answer a few questions: Softology - You got it (thanks marius!) cKleinhuis - Yeah, it's like render-to-texture except it's called a frame buffer (the more generic term). Outside of Fragmentarium you could also run the script as multiple passes with alpha blending (disable the depth field); then there is no need to set up a buffer. You have to fake random numbers in GLSL - basically fract(sin(lastRandom*ridiculouslyLargeNumber)). Now I have a question. Inigo skimmed over a few details so I had to guess the following:

    vec3 cosineDirection(in vec3 nor) { // return a random direction on the hemisphere
        vec2 r = rand2()*6.283;
        vec3 dr = vec3(sin(r.x)*vec2(sin(r.y), cos(r.y)), cos(r.x));
        return (dot(dr, nor) < 0.0) ? -dr : dr;
    }

    vec3 coneDirection(in vec3 nor, float ratio) { // return a random direction within a cone, where ratio is width/length
        vec3 up = (dot(vec3(0.0,1.0,0.0), nor) > 0.9) ? vec3(1.0,0.0,0.0) : vec3(0.0,1.0,0.0);
        vec3 rt = normalize(cross(up, nor)); up = cross(nor, rt);
        vec2 r = rand2();
        r = sqrt(r.y)*vec2(cos(r.x*6.283), sin(r.x*6.283));
        return normalize(nor + (rt*r.x + up*r.y)*ratio);
        //return normalize(nor + cosineDirection(nor)*ratio); // faster but even less correct
    }
cosineDirection I get - just a random direction in the hemisphere with the pole "nor". coneDirection should return a random direction within a cone. I faked this by using either a flat disk or a hemisphere pushed in the direction of the normal, but there must be a better way. If I get the polar coordinates and "jitter" them randomly, will that be uniform? Finally, since you read this far, you get a treat: an emissive material! I have been using this for a while and really like it. It lets you fractally place lights all over the scene without adding tons of lighting calculations. It's a fake, but a nice one :)
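On the jitter question above: uniformly jittering the polar angle theta does not give a uniform direction inside the cone (it over-samples directions near the axis), because solid angle is d(cos theta) d(phi) - so it is cos(theta) that must be jittered uniformly. A small Python sketch of that idea (not GLSL; the cone is taken around the +z axis, and the function name is just for illustration):

```python
import math
import random

def uniform_cone_sample(max_angle):
    """Uniformly sample a direction inside a cone of half-angle max_angle around +z.

    Since solid angle is d(cos theta) d(phi), drawing cos(theta) uniformly
    in [cos(max_angle), 1] gives a uniform direction over the spherical cap;
    drawing theta itself uniformly would not.
    """
    cos_theta = 1.0 - random.random() * (1.0 - math.cos(max_angle))
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    phi = 2.0 * math.pi * random.random()
    return (math.cos(phi) * sin_theta, math.sin(phi) * sin_theta, cos_theta)
```

For a uniform cap with half-angle a, the mean z component is (1 + cos(a))/2, which is an easy sanity check on the distribution.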
http://www.youtube.com/v/gP6nqhBmF0k&rel=1&fs=1&hd=1
« Last Edit: November 20, 2012, 04:56:06 PM by eiffie »
marius
Fractal Lover
Posts: 206
« Reply #16 on: November 20, 2012, 06:30:57 PM »
BTW, would anyone know if SLI can accelerate OpenGL applications? Does it need specific SLI profiles for the application?
I finally assembled a machine with two 7970s at 1 GHz. It took some liquid cooling to keep it from melting down. SLI/Crossfire has its issues, as you might imagine. It tends to only kick in on full-screen, and only if the system thinks it has a profile for your application. At the moment I rename boxplorer.exe to SeriousSam.exe, but then it works for boxplorer, an SDL/OpenGL app, and scales near linearly, all GPUs at 100%. You still have to adjust code for it: split the workload into multiple parts, etc. At the moment it still drops back to single-GPU performance if I enable a post-render fake-DoF pass; I need to look into that. Most of the fragments with a decent DE run at 30 fps or more at 1080p - single precision, that is.
eiffie
Guest
« Reply #17 on: November 20, 2012, 06:49:32 PM »
I get jealous of these awesome machines. I wrote the last script on my second machine with a motherboard GPU and 32 temporary registers. It worked but took 30 seconds per frame!
Syntopia
« Reply #20 on: November 20, 2012, 09:02:52 PM »
cosineDirection I get - just a random direction in the hemisphere with the pole "nor". coneDirection should return a random direction within a cone. I faked this by using either a flat disk or a hemisphere pushed in the direction of the normal, but there must be a better way. If I get the polar coordinates and "jitter" them randomly, will that be uniform?
For a cone direction on a hemisphere, I use the following in Fragmentarium to sample e.g. a sun-like light source:

    vec3 getSample(vec3 dir, float extent) {
        // Create orthogonal vector (fails for z,y = 0)
        vec3 o1 = normalize(vec3(0., -dir.z, dir.y));
        vec3 o2 = normalize(cross(dir, o1));
        // Convert to spherical coords aligned to dir
        vec2 r = getUniformRandomVec2();
        r.x = r.x*2.*PI;
        r.y = 1.0 - r.y*extent;
        float oneminus = sqrt(1.0 - r.y*r.y);
        return cos(r.x)*oneminus*o1 + sin(r.x)*oneminus*o2 + r.y*dir;
    }

Here 'extent' is the size of the light source we sample. It is given as '1-cos(angle)', so 0 means a point-like light source (sharp shadows) and 1 means a full-hemisphere light source (no shadows). It is formula 34 in http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf, just adapted to a coordinate system aligned with the 'dir' direction.

But you shouldn't use the above formula for glossy specular reflectance. I can see that IQ suggests it, but it is really a hack - you are effectively sampling using a box distribution function, whereas the Phong reflection uses a cosine-power distribution. You need to sample the full hemisphere and weight the samples according to dot(reflectedVector, sampleDirection)^power. Since this is slow, you will want to use importance sampling and sample according to the cosine-power distribution. Notice that you should not multiply the samples by the dot-product-power term if you do this. Here is a cosine-power distribution sampling function:

    vec3 getSampleBiased(vec3 dir, float power) {
        // Create orthogonal vector (fails for z,y = 0)
        vec3 o1 = normalize(vec3(0., -dir.z, dir.y));
        vec3 o2 = normalize(cross(dir, o1));
        // Convert to spherical coords aligned to dir
        vec2 r = rand(viewCoord*(float(backbufferCounter)+1.0));
        if (Stratify) { r *= 0.1; r += cx; }
        r.x = r.x*2.*PI;
        r.y = 1.0 - r.y;
        // This should be cosine^n weighted.
        // See e.g. http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf, item 36.
        r.y = pow(r.y, 1.0/(power+1.0));
        float oneminus = sqrt(1.0 - r.y*r.y);
        vec3 sdir = cos(r.x)*oneminus*o1 + sin(r.x)*oneminus*o2 + r.y*dir;
        return sdir;
    }

Btw, I think that IQ's cosineDirection means a direction chosen according to the power-1 cosine distribution (because the diffuse light has a cos(normal, sampleDirection) weight), and not a uniformly random direction. The code above is part of the 'Theory/Convolution.frag' example in Fragmentarium. This fragment can be used to derive precalculated specular and diffuse light maps for IBL lighting. I plan to write a blog entry with more details on this soon.
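A quick sanity check on the sampler above: drawing cos(theta) as u^(1/(power+1)) gives it the density (power+1)*cos(theta)^power, so the expected value of dot(sample, dir) works out to (power+1)/(power+2). A Python port of getSampleBiased (plain tuples instead of vec3, a stdlib random source instead of the Fragmentarium rand/Stratify machinery; helper names are mine) makes that easy to verify:

```python
import math
import random

def normalize(v):
    l = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / l, v[1] / l, v[2] / l)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def get_sample_biased(d, power):
    """Direction around unit vector d with cos(theta) density (power+1)*cos^power."""
    # Orthogonal frame around d (fails for d.y = d.z = 0, like the GLSL version)
    o1 = normalize((0.0, -d[2], d[1]))
    o2 = normalize(cross(d, o1))
    phi = 2.0 * math.pi * random.random()
    # Inverse-CDF step: cos(theta) = u^(1/(power+1))
    ct = random.random() ** (1.0 / (power + 1.0))
    st = math.sqrt(1.0 - ct * ct)
    return tuple(math.cos(phi) * st * o1[i] +
                 math.sin(phi) * st * o2[i] +
                 ct * d[i] for i in range(3))
```

For power = 10 the mean of dot(sample, dir) should converge to 11/12, and the samples should stay unit length since (o1, o2, d) is an orthonormal frame.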
« Last Edit: November 20, 2012, 09:29:15 PM by Syntopia »
eiffie
Guest
« Reply #21 on: November 20, 2012, 09:19:44 PM »
Ah thanks a bunch now I'm learning something!
eiffie
Guest
« Reply #22 on: November 21, 2012, 06:01:44 PM »
OK, now I have specular light working, but just as a sanity check maybe someone can give feedback on this light model.
Using Syntopia's functions getSample and getSampleBiased, I am getting light from...

For scattered light: searchDirection = getSample(surfaceNormal, 1.0) // this is the cosine-weighted sample
For direct lighting: searchDirection = getSample(lightDirection, extent) // where extent is approx 0.001 for soft shadows from a sun-like source
For specular lighting: searchDirection = getSampleBiased(reflect(rayDirection, surfaceNormal), specularExponent) // checking for near-perfect reflections

searchDirection is then used for a shadow check.
This picture is a comparison of the cheap version of specular (left) and then the way Syntopia suggested. Thanks for the help on this, guys. Now I'm going back to build a fast raymarcher with the same fake caustics in it :)
« Last Edit: November 21, 2012, 09:17:00 PM by eiffie, Reason: added attachments »
|
Syntopia
« Reply #23 on: November 21, 2012, 11:06:29 PM »
For scattered light: searchDirection = getSample(surfaceNormal, 1.0) // this is the cosine-weighted sample
For direct lighting: searchDirection = getSample(lightDirection, extent) // where extent is approx 0.001 for soft shadows from a sun-like source
For specular lighting: searchDirection = getSampleBiased(reflect(rayDirection, surfaceNormal), specularExponent) // checking for near-perfect reflections
The directions are probably right, but there is more to importance sampling than just biasing the samples towards the most important regions - you have to take the distribution you are sampling with into account. The samples must be weighted according to the reciprocal of their chance of being picked, I think. I still have some trouble with my code, so I can't share it yet.
eiffie
Guest
« Reply #24 on: November 24, 2012, 06:10:55 PM »
I admit as soon as you do release your code I will steal most of it, but I am having fun screwing around until then. I actually thought it went the other way: if you just chose uniformly random rays, you would have to weight them based on their likelihood of actually arriving at the camera (remember we are doing this all backwards - the rays really come uniformly from the lights and only a few hit the camera). But I admit my brain has reached its limit here.
richardrosenman
« Reply #25 on: November 24, 2012, 11:55:55 PM »
Amazing stuff here guys!
-Rich
Syntopia
« Reply #26 on: November 25, 2012, 08:13:12 PM »
I admit as soon as you do release your code I will steal most of it, but I am having fun screwing around until then. I actually thought it went the other way: if you just chose uniformly random rays, you would have to weight them based on their likelihood of actually arriving at the camera (remember we are doing this all backwards - the rays really come uniformly from the lights and only a few hit the camera). But I admit my brain has reached its limit here.

I'm not an expert here, but as I see it, we have to integrate over all light directions (and paths) reaching the camera:

    I = ∫ f(x) dx

We can do this by taking a finite number of samples and estimating the integral as the average sample value times the volume V we are integrating over:

    I ≈ (V/N) · Σ f(x_i)

Now, this only holds if we are choosing the samples uniformly. If we biased our samples towards regions with high values, we would get too high a value for the integral. However, sometimes we know that the contributions follow a very narrow distribution (e.g. specular highlights), so we do want to bias the samples. Then we have to weight each sample by the reciprocal of the distribution p it was drawn from:

    I ≈ (1/N) · Σ f(x_i)/p(x_i)

In Fragmentarium, I multiply the weights into the color values before accumulating them, and keep track of the sum of the weights in the alpha channel. But there are some issues - in particular, when the distribution goes towards zero, I get very large terms and noisy pixels. The formulas above were pasted from http://en.wikipedia.org/wiki/Monte_Carlo_integration, where there is more discussion.
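The uniform-vs-importance-sampled estimators described above can be checked on a 1D toy integral. This Python sketch (function and variable names are mine, just for illustration) integrates f(x) = x^4 on [0,1] (exact value 1/5) both ways; drawing from the density p(x) = 2x and dividing each sample by p keeps the biased estimator unbiased:

```python
import math
import random

def mc_uniform(f, n):
    # Plain Monte Carlo on [0,1]: average sample value times the volume (here 1)
    return sum(f(random.random()) for _ in range(n)) / n

def mc_importance(f, pdf, draw, n):
    # Importance sampling: draw from a non-uniform pdf and weight each
    # sample by 1/pdf(x), the reciprocal of its chance of being picked
    total = 0.0
    for _ in range(n):
        x = draw()
        total += f(x) / pdf(x)
    return total / n
```

To draw from p(x) = 2x on [0,1], invert its CDF x^2: draw u uniformly and take x = sqrt(u). Both estimators then converge to 0.2, but the importance-sampled one with lower variance, since x^4/(2x) varies less over [0,1] than x^4 does.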
Syntopia
« Reply #27 on: November 25, 2012, 10:53:32 PM »
After a bit of experimentation, I've found out that I shouldn't try to sum the weights in the alpha channel - instead I should calculate the integral of the PDF (Probability Density Function) and normalize by that. It is explained here: http://www.rorydriscoll.com/2009/01/07/better-sampling/ He doesn't derive the normalization for the cosine-powered distribution, but if you do the integral over the hemisphere, you end up with:

    ∫ cos(θ)^power dω = 2π/(power+1)

I've checked it, and it works - it converges much faster when using the biased (importance-sampled) form. Unfortunately the gain is largest for high powers, so not much is gained for the diffuse term.
cKleinhuis
« Reply #28 on: November 25, 2012, 10:59:38 PM »
I need to break in here: people, why are you all relying on a random integral-approximation method? For a start, the directions a light source can actually come from should be somehow fractally approximated, with the bonus of not having to search the whole hemisphere and of obtaining more realistic results. I mean, dudes, we are on fractalforums here, and I wonder why the solution to the beloved rendering equation that lies behind these global illumination renderings couldn't be modified along the lines of what I suggested before.
---
divide and conquer - iterate and rule - chaos is No random!
eiffie
Guest
« Reply #29 on: November 27, 2012, 04:43:27 PM »
Since I know Syntopia will arrive at the best physical model given time, I now feel free to just "wing it". I rewrote the engine from scratch once I realized all the shadow checks were redundant. The code is much faster now and the results are better. It still takes time to converge, but it is fast enough to create small videos...
http://www.youtube.com/v/iMBKRdGI6Q4&rel=1&fs=1&hd=1

The attached script is up to date as of: June 13, 2013.
« Last Edit: June 13, 2013, 06:08:56 PM by eiffie, Reason: updated attachment »