Patryk Kizny
« Reply #180 on: May 24, 2016, 10:57:28 PM »
Guys, I don't want to take the fun away from you, but IMHO there's not much sense implementing all these effects in the (already overloaded) fragment shader when you have them, with far better performance, options and control, in the majority of post-production software (for either stills or film). There's no point doing it in the frag unless you're targeting VJing and live applications.
Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
3dickulus
« Reply #181 on: May 25, 2016, 04:58:57 AM »
uhm... @Patryk Kizny, the code we are playing with goes in buffershader.frag, which is compiled into a separate program and executed "post DE", hence the tab title "Post". Using the nv version of Fragmentarium you can see (Ctrl + A) exactly how much assembler code is in each program. DE-Kn2.frag (the one I'm using) currently turns into about 4000 lines of assembler, including the vertex shader, while Buffershader-1.0.1.frag turns into about 256 lines including the vertex shader plus the lensflare code, so there's lots of room to play in the buffershader.

I reiterate: these two things, DE vert+frag and buffershader vert+frag, are 2 separate programs executing independently. There is no reason at all why the whole thing can't be broken down into much smaller progs, each working on a specific part of the overall image: 1 for depth, 1 for color, 1 for ambient light, 1 for shadows, etc. In fact, this is the way the GL pipeline is supposed to work, not all crammed into one prog. It just happens that the way Fragmentarium does this is convenient for us humans to manage; otherwise, a small change in one part might require big changes or a complete rewrite of other parts, and that would be far beyond the patience threshold of most of the coders tinkering with Fragmentarium fragments. Or perhaps it's more fun to just play and experiment instead of dedicating some serious time to recoding the entire approach.

...and if anyone can correct my bold assumptions, please jump in and add your 2 cents' worth, I am always eager to learn. Remember, Fragmentarium is not meant to be a studio-production type of rendering environment; it is experimental, and things learned from it may end up (have ended up) in other progs.

No one has even commented on, or figured out what to do with, the "spray gun" feature that M Benesi invented... yet @ http://www.fractalforums.com/announcements-and-news/fragmentarium-1-0-10/msg91090/#msg91090 and even though I did not quite understand it, I hacked it into Fragmentarium anyway, with lots of help from MB, just for fun (it's actually really quite powerful).

edit: btw, this lensflare code is so fast it has virtually no impact on rendertime
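For readers following along, the two-program separation described here can be caricatured CPU-side. This is a toy Python sketch, not Fragmentarium code, and every name in it is invented; it only shows the shape of the idea: two independent stages that communicate through nothing but a shared buffer.

```python
# Toy sketch of the DE-pass / buffershader split: two independent
# "programs" whose only coupling is the framebuffer passed between them.

def de_pass(width, height):
    """First program: produce a raw image (here, a simple brightness ramp)."""
    return [[(x + y) / (width + height - 2) for x in range(width)]
            for y in range(height)]

def post_pass(buffer, gain=2.0):
    """Second program: reads only the buffer and applies a post effect."""
    return [[min(1.0, v * gain) for v in row] for row in buffer]

frame = de_pass(4, 4)   # heavy pass finishes first
final = post_pass(frame)  # cheap "post" pass runs on its output
```

Either stage can be rewritten without touching the other, which is the point being made about the buffershader.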
« Last Edit: May 25, 2016, 05:25:42 AM by 3dickulus, Reason: posterity »
Crist-JRoger
« Reply #182 on: May 25, 2016, 09:28:28 AM »
I use Fragmentarium only for making fractals. I like fractals very much, and nice visualization too. I am not interested in serious programming; I simply don't have a programming education. But I am interested in improving the existing possibilities of this great program. Master Knighty has been a great help, biggest thanks for all the features. And Eiffie and Patryk and all the programmers! If some things can be done here in free software, I don't need any other application to edit the results. About the "spray gun": I do not like this effect.
Patryk Kizny
« Reply #183 on: May 25, 2016, 12:25:18 PM »
"I reiterate: these two things, DE vert+frag and buffershader vert+frag, are 2 separate programs executing independently"
But they share the same GPU resources, which at least in my case always hit 100%.

"edit: btw, this lensflare code is so fast it has virtually no impact on rendertime"
You're probably right on the performance. My point was that it's all been done already and is available via established software, where it can be used with more control.

"there is no reason at all why the whole thing can't be broken down into much smaller progs, each working on a specific part of the overall image..."
This is interesting. I wonder how breaking that into many shaders would affect (improve) performance. I assume it may not be profitable, since you'd probably need to recalculate a bunch of things many times, and a single frag scales up its resource use anyway, fully saturating the GPU. So the advantage would come when you actually hit the limits of a single frag in terms of the number of uniforms or just the codebase.

Actually, what you suggested is something I am already implementing in 'my' version of the raytracer, and, with Kamil's help, multiple render targets in Synthclipse. It still sits in one frag shader that calculates lots of information (including lighting passes) and writes to multiple output channels. Then Synthclipse saves these buffers to separate files, ultimately a layered EXR. So far so good.
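The one-frag, many-outputs arrangement can be sketched as a toy in Python; the channel names and functions below are purely illustrative, not Synthclipse's API. One "shading" function computes several quantities per pixel and each lands in its own named buffer, which could then be saved as separate files or EXR layers.

```python
# Toy sketch of the multiple-render-target idea: one pass, several
# per-pixel outputs, each collected into its own named target.

def shade(x, y):
    # One "fragment" produces several outputs at once.
    depth = (x * x + y * y) ** 0.5
    color = 1.0 / (1.0 + depth)        # toy shading derived from depth
    mask = 1.0 if depth < 3.0 else 0.0
    return {"color": color, "depth": depth, "mask": mask}

def render(width, height):
    targets = {"color": [], "depth": [], "mask": []}
    for y in range(height):
        rows = {k: [] for k in targets}
        for x in range(width):
            for k, v in shade(x, y).items():
                rows[k].append(v)
        for k in targets:
            targets[k].append(rows[k])
    return targets

buffers = render(4, 4)   # buffers["color"], buffers["depth"], buffers["mask"]
```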
3dickulus
« Reply #184 on: May 25, 2016, 02:18:14 PM »
I'll abandon Fragmentarium and start using Synthclipse with "your" raytracer right away
Patryk Kizny
« Reply #185 on: May 25, 2016, 03:31:15 PM »
"I'll abandon Fragmentarium and start using Synthclipse with "your" raytracer right away"
It would be cool to have you cooperating on the coding. I'm slow with it and lack a good foundation. There's still a good range of todos in the code and things screaming for implementation.
3dickulus
« Reply #186 on: May 25, 2016, 08:20:13 PM »
As a moderator, I don't think you should be discouraging experimentation, exploration and learning by hobbyists and artists:

"Guys, I don't want to be taking fun away from you, but IMHO there's not much sense implementing all these effects in the (already overloaded) fragment shader as you have them with way better performance, options and control in majority of postproduction softwares (either for stills or film). No point doing it in frag unless you're hitting Vjing and live applications."

I also find your exploitation of this venue, and the talent here, for your own self-promotion and benefit somewhat distasteful, to such a degree that I am discouraged from adding or fixing anything else in the code and may not make my version available for you to exploit any longer. You can go back to the original version by Syntopia, or use Synthclipse, and add all the features you want; maybe even better, rewrite the entire thing from scratch to meet the modern GL environment.

Sorry man, just having a bad day and feeling a little grumpy, maybe I will feel better tomorrow...

@CJR I will have something for you after this weekend re: lensflare, but will probably PM you the code rather than post it here.

btw PK, the fragment shader and the buffer shader execute independently and don't interfere with each other. If the raytracer uses 100% GPU that's OK, because the buffer shader uses 0% until the raytracer is finished (interleaved between subframes).
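The "interleaved between subframes" behaviour can be illustrated with a toy progressive accumulator in Python. The "noise" is deterministic here so the demo is reproducible, and all names are made up; it only shows why the cheap post step can sit idle between expensive subframes.

```python
# Sketch of progressive subframe accumulation: the heavy pass emits one
# noisy subframe at a time; a cheap post step could run on the running
# average in between, never competing with the heavy pass.

def subframe(i, n_pixels=4):
    # Stand-in for one expensive raytraced subframe: alternates around 0.5.
    return [0.5 + (0.1 if (i + p) % 2 == 0 else -0.1) for p in range(n_pixels)]

def accumulate(n_subframes):
    acc = [0.0] * 4
    for i in range(n_subframes):
        sf = subframe(i)
        acc = [a + s for a, s in zip(acc, sf)]
        # a cheap post pass (e.g. lensflare) could run here on acc / (i + 1)
    return [a / n_subframes for a in acc]

avg = accumulate(10)  # noise averages out over subframes
```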
Patryk Kizny
« Reply #187 on: May 25, 2016, 11:08:12 PM »
"as a moderator I don't think you should be discouraging experimentation, exploration and learning by hobbyists and artists"
Sure. I didn't want it to sound that way.

"sorry man, just having a bad day and feeling a little grumpy, maybe will feel better tomorrow..."
Hopefully tomorrow will be better for you.
3dickulus
« Reply #188 on: May 27, 2016, 04:58:18 AM »
OK, I'm old and grumpy, but I still think you should give a bit more credit to all the shoulders you are standing on; it would help encourage more positive input. How long did it take you to become familiar enough with the equipment and code to produce a good presentation? Any idea how many collective person-hours of coding and schooling it took to get the progs and frags to their current state? Were you able to do any of this stuff before stumbling onto FF? I'm not looking for fame or fortune, just sayin'... I'm OK with what you do, some very cool stuff, and I do understand what it takes to produce those nice-looking vids. (The questions are rhetorical and only intended to encourage some introspection.)

So, some thoughts that are more on topic:

- 1 pass to generate depths from the camera view
- 1 pass to generate depths from the light position to the same target = two 24-bit depth buffers with two 8-bit mask buffers; testing these for intersects gets used for ambient occlusion and shadows
- 1 pass to generate a normals buffer

When generating depths, no color buffer writing or lighting calculation is done on these first passes, so they should be reasonably quick. Hand this data off to a geometry shader to generate a mesh. Once the depth buffers are available (could be more than 2 for more light sources), the color and lighting would be very fast and could exploit some fancy GL features like global illumination, hardware anti-aliasing, texture application, normal mapping, instancing, etc.

But, in reality, the whole thing (Fragmentarium) would need a complete rewrite/rethink, and it would be better to start from scratch, or from a modern framework, and keep the spirit of Fragmentarium at the heart of it: fractal exploration and freedom to play with the shader code.
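The camera-depth/light-depth intersect test described above is essentially classic shadow mapping. Here is a drastically simplified 1D Python sketch of it, with a directional light assumed straight overhead at height 10; every name and number is invented for illustration.

```python
# Toy 1D shadow-map sketch: render depth from the light, then a shaded
# point is in shadow if the light's depth map saw a nearer surface
# along that point's column.

LIGHT_HEIGHT = 10.0

def light_depth_pass(scene_heights):
    """Depth from the light: distance from the light down to each surface.

    scene_heights[x] is the surface height at column x.
    """
    return [LIGHT_HEIGHT - h for h in scene_heights]

def in_shadow(x, point_height, light_depths):
    """Shadowed if the point is farther from the light than what the map saw."""
    depth_to_point = LIGHT_HEIGHT - point_height
    bias = 1e-4  # small bias to avoid self-shadowing ("shadow acne")
    return depth_to_point > light_depths[x] + bias

heights = [1.0, 5.0, 2.0]           # a tall surface at column 1
shadow_map = light_depth_pass(heights)
```

The same comparison, done per pixel in a shader with real depth buffers, is what the "testing these for intersects" pass would do.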
Crist-JRoger
« Reply #189 on: May 27, 2016, 11:08:28 AM »
New OpenGL fractal software?... Sounds cool! I ask for less metaphysics, philosophy, rhetorical questions, and purely subjective evaluations. New code, improvements and new ideas, even crazy ones, are welcome!
Patryk Kizny
« Reply #190 on: May 27, 2016, 01:29:02 PM »
"hand this data off to a geometry shader to generate a mesh once the depth buffers are available (could be more than 2 for more light sources) the color and lighting would be very fast and could exploit some fancy GL features like global illumination, hardware anti-aliasing, texture application, normal mapping, instancing, etc..."

Thinking aloud: having access to all the hardware-accelerated and built-in stuff sounds like a promised land. How would you generate the geometry? Marching cubes? No meshes generated from fractals that I've seen so far looked any good. To get good results you'd need triangle detail comparable to pixel scale (at least in detail-rich regions), plus it would be good to optimize sparser areas and add normal interpolation. This lets you tap into GL resources, but do we need triangles to do all the stuff you mention? What I mean is that once you have all the passes, you can easily relight everything and render in compositing apps such as Nuke or Houdini (without triangulation). BTW, Nuke is free now for personal use, with an HD resolution cap. That's the direction I am heading now with the multi-target rendering and saving.

If mesh generation makes sense, a tempting approach to me would be to march a volume and create a full mesh, not only a frontal view. And this leads me to the other thing I had in mind: instead of triangulating everything, why not stay with pointcloud data and voxels? There are 3D textures available, though I'm not sure how useful they would be. There is a range of good pointcloud visualization tools capable of rendering massive pointclouds very fast, for example Thinkbox Krakatoa. The only problem is that so far there is no software capable of creating a fractal pointcloud with good point counts. I've played with the latest XenoDream, but it's still only a 32-bit app with huge memory limits, and the biggest cloud I was able to generate was about 50M points, which is about 10 times too small for a good level of detail.
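For what it's worth, the brute-force version of "march a volume" into a pointcloud can be sketched in a few lines of Python. The distance estimator here is just a unit sphere standing in for a real fractal DE, and the names are invented; note how the point count scales with the grid resolution cubed, which is exactly the scaling problem with reaching good point counts.

```python
# Sketch: sample a distance estimator on a regular grid over [-1, 1]^3
# and keep samples that lie near the implicit surface.

def de(x, y, z):
    """Signed distance to a unit sphere (stand-in for a fractal DE)."""
    return (x * x + y * y + z * z) ** 0.5 - 1.0

def pointcloud(n, threshold):
    step = 2.0 / (n - 1)
    points = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -1.0 + i * step
                y = -1.0 + j * step
                z = -1.0 + k * step
                if abs(de(x, y, z)) < threshold:
                    points.append((x, y, z))
    return points

pts = pointcloud(16, 0.05)  # every kept point is within 0.05 of the surface
```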
3dickulus
« Reply #191 on: May 27, 2016, 02:34:09 PM »
I'm just one guy with no formal education in math, programming or OpenGL; this sort of thing, I imagine, would take an organized team of brilliance all dedicated to the same goal. Syntopia's initial idea, and his realization of it in Fragmentarium, is one of those things that is worthy of pursuit. I have learned a great deal and can't say thank you enough to the folks at FF.

All good questions, PK.
Thinking...
phtolo
« Reply #192 on: May 28, 2016, 11:37:55 AM »
"The only problem though is that so far there is no software capable of creating a fractal pointcloud with good point counts."
You could try StyrofoamIFS: http://www.phtolo.se/fractals/ It's clunky and has no responsive interface, but it can create an arbitrary number of points.
Patryk Kizny
« Reply #193 on: May 28, 2016, 02:11:03 PM »
"You could try StyrofoamIFS, http://www.phtolo.se/fractals/ It's clunky and has no responsive interface, but it can create an arbitrary number of points."
Thanks! How can I export points?
phtolo
« Reply #194 on: May 28, 2016, 08:24:45 PM »
"Thanks! How can I export points?"
I uploaded a new version now that will let you export as .xyz files; previously an internal format was used. There is now an export.scene file with the basic settings needed for this. The .xyz files can be found in the cache folder.
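For anyone wiring these exports into their own tools, the .xyz format is about as simple as point formats get: one point per line, whitespace-separated coordinates. A minimal Python round-trip sketch follows; the exact columns StyrofoamIFS writes are not specified in this thread, so treat the three-column layout as an assumption.

```python
# Minimal .xyz round-trip: write one "x y z" line per point, read it back.

def write_xyz(path, points):
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

def read_xyz(path):
    points = []
    with open(path) as f:
        for line in f:
            # keep only the first three columns, in case extras are present
            x, y, z = map(float, line.split()[:3])
            points.append((x, y, z))
    return points

write_xyz("cloud.xyz", [(0.0, 1.0, 2.0), (3.5, -1.0, 0.25)])
cloud = read_xyz("cloud.xyz")
```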