Author Topic: A Very Basic Question  (Read 1556 times)
ForestCat
Alien
***
Posts: 24


« on: February 12, 2016, 01:11:50 AM »

I'm attempting to teach myself enough about Fragmentarium & GLSL to begin to be able to write my own basic shaders.  My standard approach to learning any new environment is to pick a programming task that will be useful to me, in this case adapting a cfg & shader from another program (Boxplorer) to work in Fragmentarium.  The way I generally approach it is to take a working example in the target environment (Fragmentarium), and systematically whittle down the code until I have the bare minimum that will 'build' with no errors. Then I add back code to begin to achieve some simple "hello world" style result.  Then I do the same thing in the source environment (Boxplorer).  The next step is to make that example work in the target environment.  Hope that makes sense.

I've hit a bit of a wall.  Let me explain what I THINK I've learned about Fragmentarium:

1. The code in the "editor" essentially passes function calls, etc. to the #include'd raytracer frag, which does the heavy lifting, the actual 'shading', if that makes sense.

2. The minimum required code/syntax in the "editor" seems heavily dependent upon the structure of the #include'd shader, i.e. frags utilizing the DE or Fast raytracer seem to require at minimum the init() and DE() functions.

3. Some shader frags contain main(); others don't, and instead #include, for example, 3D.frag, which does have main().


I probably sound stupid at this point, but I'm wondering if someone could either explain the rules re: the relationship of the code in the editor to the shader frag itself, or perhaps give me an example of the simplest possible combination of an "editor" frag and a raytracer that will build and display something.  No sliders, etc.; that part I understand.  I've looked through all the posts in this subforum, as well as on Syntopia's blog.  Maybe this is right under my nose, or it's just so basic that for most of you it's 'understood', but I'm stumbling on it.

I've got it down to this in the editor, and it works:


#define providesInit
#include "DE-Raytracer.frag"

void init(){}

float DE(vec3 pos) {
   vec3 z=pos;
   return log(length(z));
}

But the DE-Raytracer is where I'm stuck.  I just don't know what the minimum requirements/syntax in each file are for Fragmentarium's engine to be able to recognize these two components as its own and successfully build.



I'd sure appreciate any insight.  I do learn fast once I get a jumpstart.

Thanks!

Logged
Patryk Kizny
Global Moderator
Fractal Fertilizer
******
Posts: 372



kizny
WWW
« Reply #1 on: February 14, 2016, 01:37:42 PM »

The general layout for 3D DE-based shaders in Fragmentarium is as follows (from the top of the pipeline down):

1) 3D.frag + Buffershader - that's the top of the pipeline, where the layout with 2 quads is set and pixels are shaded by calling the color() function down the pipeline. 3D.frag handles the basic camera too.
2) DE Raytracer (in any form), which implements the color function and does all the actual raytracing. It makes calls (at least) to the DE() function, which has to be implemented in a lower-level frag. There are also a few other functions that may be called if defined down the pipeline - such as init() or color(). Check particular raytracer implementations for either echoed comments or comments in the code. These will tell you a lot about the #ifdef and #define flags that you can use at the bottom of the pipe to enable some features. My approach was to provide non-obligatory additional features in the raytracer which are included on demand (thanks to #ifdef and #define structures), so you compile only what's needed. In DE-Raytracer and DE-Kn2 you get a few features like that too.
3) Your frag. Essentially, the bare minimum needed to work with a range of the included raytracers is that your frag has to implement the distance estimation function:

float DE(vec3 p) {
    float de;

    de = ...; // your distance estimate for point p
    return de;
}

The DE function is called by the raytracer numerous times, and it has to provide a float DE value for a given point in space (p). The DE value says that the closest fractal (or other object) point is no closer than DE. So essentially it's a sort of 3D minimum distance function. It's up to you how you define objects and how you implement DE. You can combine multiple objects using min(de1, de2), max(), etc. If your DE is accurate, you'll get a clean picture. If your DE is messed up, you'll still get a picture, but it'll have artifacts.
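For instance, a minimal sketch of combining two objects this way (the sphere/box distance formulas are the standard ones popularized by Inigo Quilez; all names here are illustrative):

```glsl
// Sphere of radius 1 centred at the origin: exact distance.
float sphereDE(vec3 p) {
    return length(p) - 1.0;
}

// Axis-aligned box with half-extents b: standard distance bound.
float boxDE(vec3 p, vec3 b) {
    vec3 d = abs(p) - b;
    return length(max(d, vec3(0.0))) + min(max(d.x, max(d.y, d.z)), 0.0);
}

// Union = min, intersection = max, subtraction = max(a, -b).
float DE(vec3 p) {
    return min(sphereDE(p - vec3(1.5, 0.0, 0.0)), boxDE(p, vec3(0.75)));
}
```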

Check Inigo Quilez for many DE implementations for implicits and other useful bits of code.
http://www.iquilezles.org/www/index.htm
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
ForestCat
Alien
***
Posts: 24


« Reply #2 on: February 14, 2016, 02:27:23 PM »

Patryk,

        I've been doing a lot of experimentation, and I've gotten a couple of very simple frag files to output something.  I think my question could be more accurately worded as this (two questions, actually..)

1. Must Fragmentarium code that utilizes a DE function and a 3D camera ALWAYS rely on external #includes (i.e. buffershader.frag, 3D.frag, Fast-Raytracer.frag, etc.)?

2.  If not, what would be the minimum 'all-in-one' (in the editor, nowhere else) script that would, for example, render a 3D primitive and allow 'camera navigation' with either mouse/keys or uniform sliders?

It seems that Fragmentarium itself, because of the way the GLSL 'hooks'/setup is coded, has some very specific syntax requirements for eye, up, etc. 

Also, since all of the #include'd files have their own main() and init(), I'm struggling with how to incorporate them all in one file.  As I said, I'm shamefully new to these environments, trying to get a fundamental handle on how it all fits together, using one 'interactive' learning file in the editor.

Thanks again for your reply
Logged
Patryk Kizny
Global Moderator
Fractal Fertilizer
******
Posts: 372



kizny
WWW
« Reply #3 on: February 14, 2016, 03:08:54 PM »

Quote
Must fragmentarium code that utilizes a DE function and a 3-D camera ALWAYS rely on external #includes, i.e. buffershader.frag, 3D.frag, Fast-Raytracer.frag, etc.) ?
No, you can easily put all this in one file. Even if it's split across a few files, it's always treated as one single piece of code just before compiling. If you need to, you can merge the code bits into one file using the menu option 'Output Preprocessed Script'.

Quote
2.  If not, what would be the minimum 'all-in-one' (in the editor, no where else) script that would for example, render a 3-d primitive and allow 'camera navigation' with either mouse/keys or uniform sliders?
Check the examples bundled with Fragmentarium. I think there's an example with global illumination that sits all in one file (Effie GI or something).

Yes, there are a few 'specials' like the Eye and camera pos coordinates that are explicitly set by C++ code bound to the window interaction. But there's no reason to bother with it. You don't need to know that.

As for better learning - I strongly recommend getting a good code editor, because Fragmentarium's built-in editor is very limiting and cumbersome. I've been using Notepad++ on a PC, which handles code nicely, but this can be any programming IDE.

One thing to note is that GLSL code is handled in order - meaning that if your code uses a variable or function that is declared a line later, it won't work. Some environments are clever about this, but here it's all sequential, so you have to pay attention to the order of your code.
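A minimal illustration of that ordering rule (the function names here are made up for the example):

```glsl
// This fails to compile: helper() is called before it is declared.
// float DE(vec3 p) { return helper(p); }
// float helper(vec3 p) { return length(p) - 1.0; }

// Either define helper() first, or add a forward declaration:
float helper(vec3 p);                            // prototype

float DE(vec3 p) { return helper(p); }           // now the call resolves

float helper(vec3 p) { return length(p) - 1.0; } // definition can come later
```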

 
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
ForestCat
Alien
***
Posts: 24


« Reply #4 on: February 14, 2016, 04:03:40 PM »

Patryk, this is the exact kind of info I'm looking for.  There's a certain level of assumption on most forums re: 'basic' knowledge, and I TOTALLY get that, not complaining.  But sometimes the really fundamental stuff (which, for some odd reason, is what I choke on, lol) is widely scattered.  I've been scrounging for GLSL tutorials, C++ stuff, Shadertoy, Syntopia's excellent blog (still a bit over my head at this point; the terms derivative or integral still strike fear in my heart...).  I know the info is 'out there', and I'm slowly finding it, but piecing it together has been very slow going.

re: Code editor, hell yeah. Agreed.  Been using Notepad++ forever, just picked up the GLSL language def, VERY nice.  Also running FileLocator Pro to globally search for strings throughout an entire folder tree, etc.

I did finally discover the preprocessed script when trying to correlate the build errors to the often nonexistent line # refs in the build output, lol.  Nice tool.

The tools mostly are all here, albeit not as slick as QT, Eclipse, VS, et al.

Fragmentarium is such an immediately gratifying programming experience that it's highly addicting.  And highly frustrating until one gets enough "overview" of the environment.
Logged
3dickulus
Global Moderator
Fractal Senior
******
Posts: 1558



WWW
« Reply #5 on: February 14, 2016, 08:52:19 PM »

There is a preprocessor command that tells Fragmentarium to use a second shader program: #buffershader "BufferShader.frag". The first shader does all the heavy lifting (shadows, lighting, color, DE, etc.); the second shader renders the generated data from a texture to a quad for screen display. This preprocessor command is in the 3D.frag file. The BufferShader.frag file contains the Gamma, Exposure, Brightness, Contrast, Saturation, ToneMapping and Bloom functions, all in the "Post" tab.

A good example of an "all in one" raytracer is Fast-Raytracer.frag; this one does not #include any other files and does not use a second shader program. This means it is limited: it can't do subframe accumulation or post-render effects. It's fast because you only render one frame with one shader program. Subframes are the same as the first frame, so when using this raytracer you should set subframes to 1.

BufferShader.frag has a #vertex ... #endvertex part; the rest of the file is the fragment shader part. Both parts have a main() routine, and together they make up the second shader program.

3D.frag also has a #vertex ... #endvertex part; the rest of the file is the fragment shader part. Both parts have a main() routine, and the fragment shader part uses routines from DE-Raytracer.frag, which has no main() or #vertex part. The color() routine is implemented in DE-Raytracer.frag but declared and called in 3D.frag.

I can see how it is a bit confusing when looking at all of the GLSL code together: 4 main() routines, 2 #vertex parts, and no obvious indication of what belongs to each fragment part.

3D.frag + DE-Raytracer.frag + user.frag = primary shader program

BufferShader.frag = secondary shader program (no user code)

Fast-Raytracer.frag + user.frag = singular shader program
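As a rough sketch, the user.frag piece of the first combination might look like this (the #include name and slider syntax are the standard Fragmentarium ones; the sphere DE is just a placeholder for testing):

```glsl
#include "DE-Raytracer.frag"   // pulls in 3D.frag and the raytracer

#group MyObject
uniform float Radius; slider[0.1,1.0,3.0]

// The only thing the raytracer strictly requires from user code:
// a distance estimate for point p.
float DE(vec3 p) {
    return length(p) - Radius;   // plain sphere, exact distance
}
```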

I hope this helps. And yes, definitely, the text editor needs some better functionality; part of the problem is the size. I'm thinking about embedding an open-source full-featured editor rather than trying to reinvent the wheel, and maybe a "file watcher" to detect changes and reload/build the frags when using an external editor.
« Last Edit: February 14, 2016, 09:20:12 PM by 3dickulus » Logged

Resistance is fertile...
You will be illuminated!

                            #B^] https://en.wikibooks.org/wiki/Fractals/fragmentarium
M Benesi
Fractal Schemer
****
Posts: 1075



WWW
« Reply #6 on: February 14, 2016, 09:45:29 PM »

  Here is a bit more information to add to what 3Dickulus and Patryk provided (lol... let's deluge ForestCat with info!!):

1)  Look at the most basic "Fast-Raytracer.frag".  

  The code in between #vertex and #endvertex assigns your ray direction, based on window coordinates and Eye/Target position.
Code:
#vertex

#group Camera

// Field-of-view
uniform float FOV; slider[0,0.4,2.0] NotLockable
uniform vec3 Eye; slider[(-50,-50,-50),(0,0,-7),(50,50,50)] NotLockable
uniform vec3 Target; slider[(-50,-50,-50),(0,0,0),(50,50,50)] NotLockable
uniform vec3 Up; slider[(0,0,0),(0,1,0),(0,0,0)] NotLockable
//uniform float ApplyOnIteration;slider[0,0,30]
//uniform float FormulaType;slider[0,0,30]
//uniform float ApplicationType;slider[0,0,30]

varying vec3 dirDx;
varying vec3 dirDy;
varying vec3 from;
uniform vec2 pixelSize;
varying vec2 coord;
varying float zoom;
varying vec3 dir;
//varying vec3 Dir;
void main(void)
{
    gl_Position = gl_Vertex;
    coord = (gl_ProjectionMatrix*gl_Vertex).xy;
    coord.x *= pixelSize.y/pixelSize.x;
    // we will only use gl_ProjectionMatrix to scale and translate, so the following should be OK.
    vec2 ps = vec2(pixelSize.x*gl_ProjectionMatrix[0][0], pixelSize.y*gl_ProjectionMatrix[1][1]);

    zoom = length(ps);

    from = Eye;
    vec3 Dir = normalize(Target-Eye);
    vec3 up = Up-dot(Dir,Up)*Dir;
    up = normalize(up);
    vec3 Right = normalize(cross(Dir,up));
    dir = (coord.x*Right + coord.y*up)*FOV + Dir;
    dirDy = ps.y*up*FOV;
    dirDx = ps.x*Right*FOV;
}
#endvertex
 http://www.songho.ca/opengl/index.html   <-- good resource to help you visualize the openGL projection process, if you're unfamiliar with it.  Look at the projection matrix part.


2)   Skip to the bottom (like Patryk said, order matters), to main() {..}

Code:
void main() {
    init();
    vec3 hitNormal = vec3(0.0);
    vec3 hit;
    depthFlag = true;  // do depth on the first hit, not on reflections
    vec3 color = trace(from, dir, hit, hitNormal);
    if (ShowDepth) color = vec3(depthMag * depth);
    color = clamp(color, 0.0, 1.0);
    gl_FragColor = vec4(color, 1.0);  // final fragment color
}

  This assigns a color to every fragment (pixel) with the trace function.  We haven't made it to the DE yet....

3)  Look at the trace function (sections between ******s):
   3a)  initialize variables

   3b)  DE loop.  The DE function is generally in the other frag that includes this raytracer, although you could put the DE function in the raytracer like Patryk said

      
  • from = Eye position - look at the openGL tutorial to get an idea of what the Eye position is
  • dir = direction from Eye, through the window  (look at the openGL tutorial, projection matrix part)
  • dist = DE...  
  • totalDist = distance traveled along the ray from the Eye, towards the object.
  • epsmodified = total distance traveled * "minimum distance" modifier
  • check whether we hit (dist<epsmodified) or we are above the maximum allowed distance from object

   3c)  calculate normal (angle surface is oriented towards), color (assign with orbitTrap, or whatever), and light effects using normal for reflections, etc.  Fast-Raytracer doesn't do much in the way of lighting effects, and uses the number of steps to calculate ambient occlusion.  I wrote a couple of coloring modifications for the fast raytracer intended for archaic (in GPU years) hardware (like mine!) that should be around somewhere.  

  3D)ickulus's code to record depth for various things, including the project that I have been slacking on for a little bit.  FeedBack.  Which I've got to get around to, but.. it's 76 degrees, sunny, there is a dog that would enjoy going for a walk and chores to be done. 

Code:
vec3 trace(vec3 from, vec3 dir, inout vec3 hit, inout vec3 hitNormal) {

// 3a) Initialize a couple of variables**************************************************

hit = vec3(0.0);
orbitTrap = vec4(10000.0);
vec3 direction = normalize(dir);

float dist = 0.0;
float totalDist = 0.0;
int steps;
colorBase = vec3(0.0,0.0,0.0);

// We will adjust the minimum distance based on the current zoom
float eps = minDist; // .001  
float epsModified = 0.0;

// 3b)  Check DE function, that is in the other frag!!!!)  **************************************

for (steps=0; steps<MaxRaySteps; steps++) {
orbitTrap = vec4(10000.0);      //initialize every time so we get the last trap
vec3 p = from + totalDist * direction;
dist = DE(p);

dist *= FudgeFactor;

totalDist += dist;
epsModified = pow(totalDist,ClarityPower)*eps;
if (dist < epsModified) break;
if (totalDist > MaxDistance) break;
}


//DE is checked  ***********************************************************
// 3c) calculate color

vec3 hitColor;
float stepFactor = clamp((float(steps))/float(GlowMax),0.0,1.0);
vec3 backColor = BackgroundColor;
if (GradientBackground>0.0) {
float t = length(coord);
backColor = mix(backColor, vec3(0.0,0.0,0.0), t*GradientBackground);
}

if (  steps==MaxRaySteps) orbitTrap = vec4(0.0);

if ( dist < epsModified) {
// We hit something, or reached MaxRaySteps
hit = from + totalDist * direction;
float ao = AO.w*stepFactor ;

hitNormal= normal(hit-NormalBackStep*epsModified*direction, epsModified); // /*normalE*epsModified/eps*/


#ifdef  providesColor
hitColor = mix(BaseColor,  color(hit,hitNormal),  OrbitStrength);
#else
ColorTex=BaseColorTex;   //set colortex for base color texture
MapType=BaseMapType;
texturespeed=TextureSpeed*SpeedMult;
textoff=TextureOffset/100.0;
//orbitTrap = vec4(0.0);
// vec3 p = from + totalDist* direction + hitNormal*ColorDepth;
//float dist2 = DE(p);
hitColor = getColor();
#endif

hitColor = mix(hitColor, AO.xyz ,ao);
float shadowStrength = 0.0;
hitColor = lighting(hitNormal, hitColor,  hit,  direction,epsModified,shadowStrength);
// OpenGL  GL_EXP2 like fog
float f = totalDist;
hitColor = mix(hitColor, backColor, 1.0-exp(-pow(Fog,4.0)*f*f));
}
else {

hitColor = backColor;
  hitColor +=Glow.xyz*stepFactor* Glow.w;

}
//3D  ********************************ickulus***************************
if(depthFlag) {
// do depth on the first hit not on reflections
depthFlag=false;
// for rendering depth to alpha channel in EXR images
// see http://www.fractalforums.com/index.php?topic=21759.msg87160#msg87160
depth = 1.0/totalDist;
//if(DepthToAlpha==true) gl_FragDepth = depth;
if(DepthToAlpha==true) gl_FragDepth = clamp(depth, 0.00001, 1000.0);
else
// sets depth for spline path occlusion
// see http://www.fractalforums.com/index.php?topic=16405.0
gl_FragDepth = ((1000.0 / (1000.0 - 0.00001)) +
(1000.0 * 0.00001 / (0.00001 - 1000.0)) /
clamp(totalDist, 0.00001, 1000.0));
}

return hitColor;
}
« Last Edit: February 14, 2016, 10:04:20 PM by M Benesi » Logged

ForestCat
Alien
***
Posts: 24


« Reply #7 on: February 14, 2016, 10:17:05 PM »

Deluge???  Bring it, guys.   This is to be my 'owner's manual', lol.  Saving it all.

One quick question that's relevant to where I'm at right now.

For a given fractal equation, if the camera coords are normalized to, say:

Eye 0,0,1
Target 0,1,0
Up 0,0,1

For any given FOV, should this view always be 'portable' between different raytracers within Fragmentarium?
How about between programs, i.e Boxplorer and Fragmentarium?

Are there parameters beside the three params above, or the fractal equation itself, that could affect the location of the fractal within cartesian space?

Trying to solve the "Damn, it built without errors, but why is the render all black???" dilemma, lol.

As a study assignment to myself (with about a week of Fragmentarium/glsl experience) I'm trying to get Knighty's most awesome Reflectoids into Fragmentarium, and have it render IDENTICALLY.  You guys are probably laughing, but it's been kicking my ass a bit :-)

The challenge is that I've been trying to get it to run ENTIRELY in the code editor (no #includes), using the built-in (I think..) raytracer, shader, etc. 

I figure if I can suss that out, it will greatly enhance my understanding of what I can get away with re: cut & paste programming between  Fragmentarium, Synclipse, Boxplorer, etc.

I really appreciate you guys taking the time to share your expertise & wisdom.  Nice forum.

Logged
Patryk Kizny
Global Moderator
Fractal Fertilizer
******
Posts: 372



kizny
WWW
« Reply #8 on: February 15, 2016, 02:23:04 PM »

Quote
One quick question that's relevant to where I'm at at this moment.
For a given fractal equation, if the camera coords are normalized to, say:
Eye 0,0,1
Target 0,1,0
Up 0,0,1
Unless you modify it with your raytracer code.

Quote
For any given FOV, should this view always be 'portable' between different raytracers within Fragmentarium?
How about between programs, i.e Boxplorer and Fragmentarium?
No, because it depends on the FOV and the projection method (orthographic, perspective, etc.)

Quote
Are there parameters beside the three params above, or the fractal equation itself, that could affect the location of the fractal within cartesian space?

To move the fractal around, rotate it or scale it, just transform the input p in the DE function. Check my code posted here and there - I added a bunch of features into the raytracer itself for global affine transforms of the fractal.

Essentially, in your DE function (pseudo code):

Code:
float DE(vec3 p) {
    vec3 p1 = p;
    translate(p1);
    rotate(p1);
    scale(p1);

    float de = iterateFractal(p1);
    return de;
}
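To make that concrete, rotate() above could be a plain rotation about the Y axis - a sketch, with the angle, offset, and fractal iteration all stubbed in as placeholders:

```glsl
// Rotate p around the Y axis by 'angle' radians.
vec3 rotateY(vec3 p, float angle) {
    float c = cos(angle);
    float s = sin(angle);
    return vec3(c * p.x + s * p.z, p.y, -s * p.x + c * p.z);
}

float DE(vec3 p) {
    // Translate first, then rotate; the object appears moved and turned.
    vec3 p1 = rotateY(p - vec3(0.0, 1.0, 0.0), 0.5);
    return length(p1) - 1.0;   // placeholder for the fractal iteration
}
```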

Quote
I figure if I can suss that out, it will greatly enhance my understanding of what I can get away with re: cut & paste programming between  Fragmentarium, Synclipse, Boxplorer, etc.

Not totally doable, but to some extent. Synthclipse and Fragmentarium handle a few low-level things differently, the meta code for uniforms is different, etc. So it works to import to Synth, but you need to patch things a bit to get them to work, and there's no easy way back.
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.